Samsung is now supplying customer samples of its SOCAMM2 memory module to AI data center customers — more than a month after Micron shipped the first samples and about a week after SK hynix entered mass production. The production hierarchy is the story: Micron shipped first, SK hynix is producing today, and Samsung is working through the final manufacturing ramp after resolving warpage defects that had delayed mass production. All three companies are chasing the same window — NVIDIA's next AI accelerator platform arrives in the second quarter, and whoever ships at volume when it does locks in the reference design position.
SOCAMM2 — Small Outline Compression Attached Memory Module 2 — is a server memory module built from LPDDR5X, the low-power memory architecture used in smartphones and laptops, adapted for AI servers where moving data between processor and memory can consume as much power as the computation itself. All three companies claim substantially higher bandwidth and substantially lower power draw than the RDIMM standard currently in most servers.
SK hynix is producing today on LPDDR5X built on its sixth-generation 1c-nanometer process, with more than twice the bandwidth and over 75 percent better power efficiency than conventional RDIMM, built for NVIDIA's Vera Rubin chip, according to SK hynix's press release and corroborated by TrendForce and Wccftech. Samsung claims up to 153.6 GB/s of bandwidth, 2.6 times higher than DDR-based server memory, and over 70 percent better power efficiency, according to Samsung Semiconductor's product page. Micron shipped the world's first SOCAMM2 customer samples in early March at 256GB of capacity, roughly a third higher than Samsung and SK hynix, but has not announced a production date, according to Micron's investor release. The density comes from its 1-gamma DRAM process.
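The quoted figures imply a few numbers the vendors don't state directly. A back-of-envelope check (the inputs are the public claims above; the derived baseline and premium are implied, not vendor-stated):

```python
# Sanity-check the vendor bandwidth and capacity claims quoted above.
samsung_bw_gbps = 153.6      # Samsung's claimed module bandwidth, GB/s
samsung_multiplier = 2.6     # "2.6 times higher than DDR-based server memory"
implied_ddr_baseline = samsung_bw_gbps / samsung_multiplier
print(f"Implied DDR baseline: {implied_ddr_baseline:.1f} GB/s")  # ~59.1 GB/s

micron_capacity_gb = 256     # Micron's sample capacity
rival_capacity_gb = 192      # Samsung/SK hynix capacity per the JoongAng Ilbo report
premium = micron_capacity_gb / rival_capacity_gb - 1
print(f"Micron capacity premium: {premium:.0%}")  # 33%, i.e. roughly a third higher
```

The 33 percent figure is exact: 256GB is 4/3 of 192GB, which is where "roughly a third higher" comes from.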
Samsung resolved manufacturing delays that had held up the product by lowering soldering temperatures from above 260°C to below 150°C and shifting the die configuration from a dual-tower to a single-tower structure for mechanical rigidity, according to TrendForce citing ETNews. Samsung says it is now working directly with NVIDIA on optimization for accelerated infrastructure, according to Samsung's news-events blog.
There is a tension in the production claims worth noting. JoongAng Ilbo, cited by TrendForce, reported that Samsung is positioning itself as the industry's first to mass-produce 192GB SOCAMM2, a claim that appears to clash with SK hynix's April 20 mass-production announcement. Both can be true: the companies may be referring to different production stages, different product configurations, or different capacity points in their respective ramps. The honest summary is that all three are in or entering mass production on overlapping timelines.
Samsung says the module can be swapped without touching the mainboard, which matters in systems designed to run continuously, and that its horizontal layout aids airflow and heat-sink placement in both air- and liquid-cooled racks, according to Samsung Semiconductor. SK hynix makes the same serviceability pitch.
In Micron's internal testing, a system running 2 terabytes of low-power DRAM per CPU cut time-to-first-token latency on Llama 3 70B to 0.12 seconds from 0.28 seconds for a 1.5TB configuration, according to Micron. Vendor benchmarks are vendor benchmarks.
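For scale, those two latency figures imply roughly a 2.3x speedup, or a 57 percent reduction in time-to-first-token. A quick check of the arithmetic (inputs are Micron's reported numbers; the derived ratios are implied, not Micron-stated):

```python
# Speedup implied by Micron's reported time-to-first-token numbers.
baseline_ttft_s = 0.28   # 1.5TB configuration
lpddr_ttft_s = 0.12      # 2TB-per-CPU low-power DRAM configuration
speedup = baseline_ttft_s / lpddr_ttft_s
reduction = 1 - lpddr_ttft_s / baseline_ttft_s
print(f"Speedup: {speedup:.2f}x, latency reduction: {reduction:.0%}")
# Speedup: 2.33x, latency reduction: 57%
```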
AMD wrote this month that LPDDR5X still needs stronger RAS — the reliability features that keep server memory running without silent data corruption — to match what DDR DIMMs have built for 24/7 operations, according to AMD's blog. If stacked LPDDR packaging carries a significant cost premium, some operators will keep RDIMM as the budget option. If the power numbers hold in production, that calculation changes.
SK hynix has production. Samsung has samples in customers' hands. Micron shipped first. The production hierarchy is not on a roadmap. It is what's happening right now.