Your next car might need 300GB of RAM, and so will humanoid robots
Why Micron's Memory Forecast Is a Robotics Story
When Micron Technology CEO Sanjay Mehrotra told analysts on Wednesday that humanoid robots will need compute platforms on par with Level 4 autonomous vehicles — including hundreds of gigabytes of DRAM — it landed in the middle of an earnings call notable for other reasons. Micron had just posted quarterly revenue of $23.86 billion, nearly triple what it made in the same quarter a year ago. Net income came in at $13.79 billion on gross margins of 74.4%. Despite the beat, shares fell from $461.73 to $441.28 in after-hours trading — investors had apparently hoped for more.
That disconnect is the more familiar semiconductor story. But for anyone watching robotics, the interesting thing was what Mehrotra said about the machines themselves.
Autonomous vehicles capable of driverless operation in defined areas — what the industry calls L4 autonomy — currently ship with roughly 16 gigabytes of DRAM. Mehrotra told analysts that as L4 deployment scales, bills of materials will require over 300 gigabytes per vehicle. The robotics pitch followed directly: humanoid robots, he said, will need a comparable compute platform. That is roughly twenty times the memory in a typical smartphone and a significant jump from current humanoid deployments, which generally sit well below that threshold.
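As a back-of-envelope check of those ratios (the smartphone figure is an illustrative assumption, not from the call):

```python
# Quick sanity check of the DRAM figures quoted on the earnings call.
l4_dram_today_gb = 16     # current L4 vehicle DRAM, per the call
l4_dram_future_gb = 300   # projected per-vehicle DRAM at scale, per the call
smartphone_dram_gb = 15   # typical flagship phone -- assumed, not sourced

print(l4_dram_future_gb / l4_dram_today_gb)    # ~19x today's L4 vehicles
print(l4_dram_future_gb / smartphone_dram_gb)  # ~20x a typical smartphone
```

The "roughly twenty times a smartphone" claim holds under any reasonable phone assumption in the 12-16 GB range.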
"We are on the cusp of a 20-year growth vector in robotics," Mehrotra told analysts, "and expect robotics to become one of the largest product categories in the technology world."
That is a CEO announcing a bet, not a verified fact. But it is consistent with what the compute requirements imply. If every humanoid robot eventually needs the memory footprint of an L4 vehicle — and L4 vehicles are among the most memory-intensive machines in existence — then the supply chain that feeds both industries is not a supporting actor in the robotics story. It is a limiting reagent.
Micron's automotive and embedded segment hit a record $2.708 billion in revenue last quarter, up 162% year over year. Supply remains so tight that Micron says it can fulfill only half to two-thirds of customer demand in the medium term. The company is investing heavily in new fab construction at several sites — Tongluo, Idaho, New York, and Singapore — with meaningful initial output targeted in 2027-2028, according to Micron's Q2 2026 earnings call transcript. Fiscal 2026 capital expenditures are now expected to exceed $25 billion, up from prior guidance, the company said in its investor materials.
Those constraints are not abstract for robotics companies. Memory is a fixed cost input. If Micron is already capacity-constrained at current robot deployment volumes, the path to mass production of humanoid robots — Figure, Tesla Optimus, Agibot, and everyone else — runs directly through whether the memory supply chain can scale in parallel. HBM allocations for AI chips have already crowded out other memory demand. If humanoid robots become a meaningful memory consumer — and Mehrotra clearly expects they will — the existing capacity picture gets tighter before it gets better.
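To see why the supply chain becomes the limiting reagent, it helps to put rough numbers on fleet-scale demand. The per-robot figure below is the one from the call; the fleet sizes are hypothetical, chosen only to show how quickly the totals grow:

```python
# Hypothetical sizing: aggregate DRAM implied by a humanoid fleet at the
# ~300 GB per-unit figure from the call. Fleet sizes are illustrative.
PER_ROBOT_GB = 300

for fleet in (100_000, 1_000_000, 10_000_000):
    total_pb = fleet * PER_ROBOT_GB / 1_000_000  # GB -> petabytes
    print(f"{fleet:>10,} robots -> {total_pb:,.0f} PB of DRAM")
```

A million robots at that spec would absorb hundreds of petabytes of DRAM — demand on the order of what today's hyperscale AI buildouts consume, landing on fabs that are already allocated.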
The 300-gigabyte figure is from an earnings call, not a peer-reviewed paper or a published deployment report. It is a forward-looking claim from a company that sells memory and benefits from optimism about memory demand. Treat it accordingly. But the underlying dynamic is real: memory-intensive machines are coming, the supply is constrained, and the fabs that would relieve that constraint will not be ready for years.