
Micron enters high-volume production of HBM4 for Nvidia Vera Rubin - 2.3x bandwidth improvement and 20% boost in power efficiency

Why This Matters

Micron's high-volume production of HBM4 memory for Nvidia's Vera Rubin GPU platform marks a significant advancement in AI and data center technology, offering substantial increases in bandwidth and power efficiency. This development enables more powerful and energy-efficient AI systems, shaping the future of high-performance computing. The simultaneous launch of related storage and memory products underscores Micron’s role in advancing integrated AI ecosystems.

Key Takeaways

Micron has announced that it has entered high-volume production of its HBM4 36GB 12H memory, designed for Nvidia's Vera Rubin GPU platform. Making the announcement at GTC 2026, the memory giant simultaneously confirmed high-volume production of the industry's first PCIe 6.0 data center SSD and a new SOCAMM2 module, making it the first memory supplier to bring all three products to volume shipment for the Vera Rubin ecosystem at the same time.

The HBM4 36GB 12H stack runs at pin speeds above 11 Gb/s, delivering more than 2.8 TB/s of bandwidth. Compared with Micron's HBM3E in the same 36GB 12H configuration, that is a 2.3x increase in bandwidth along with a more than 20% improvement in power efficiency, according to Micron's internal power calculator data.
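The bandwidth figure lines up with a quick back-of-the-envelope calculation, if one assumes the JEDEC HBM4 interface width of 2048 bits per stack (double HBM3E's 1024 bits) — the interface width is an assumption here, as the article states only the pin speed:

```python
# Back-of-the-envelope check of the quoted HBM4 bandwidth.
# Assumption: 2048-bit interface per stack (JEDEC HBM4, double HBM3E's 1024).
pin_speed_gbps = 11          # Gb/s per pin, from the article
interface_bits = 2048        # assumed bits per HBM4 stack

bandwidth_gbs = pin_speed_gbps * interface_bits / 8  # GB/s per stack
print(f"{bandwidth_gbs / 1000:.2f} TB/s")            # ~2.82 TB/s
```

That comes out to roughly 2.82 TB/s, consistent with the "greater than 2.8 TB/s" claim.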

"The next era of AI will be defined by tightly integrated platforms developed through joint engineering innovations across the ecosystem. Our close collaboration with NVIDIA ensures that compute and memory are designed to scale together from day one," said Sumit Sadana, executive vice president and chief business officer at Micron Technology, in a press release. "With HBM4 36GB 12H, alongside the industry's first SOCAMM2 and Gen6 SSD now in high-volume production, Micron's memory and storage form a core foundation that unlocks the full potential of next-generation AI."


Micron has also shipped samples of a 48GB 16H HBM4 stack to customers. The additional four die layers give the 16H configuration a 33% capacity increase per HBM placement over the 36GB 12H product, a milestone that points toward denser configurations in future AI accelerator generations.
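The 33% figure follows directly from the two stated capacities:

```python
# Sanity-check the per-placement capacity gain of the 16H stack over the 12H.
capacity_12h_gb = 36   # 12H stack, from the article
capacity_16h_gb = 48   # 16H stack, from the article

gain_pct = (capacity_16h_gb / capacity_12h_gb - 1) * 100
print(f"{gain_pct:.0f}%")  # 33%
```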

Last month, the company announced that its 9650 SSD had entered mass production, making it the first PCIe 6.0 SSD to reach that stage. The drive supports up to 28 GB/s of sequential read throughput and 5.5 million random read IOPS, doubling PCIe 5.0 read performance while delivering 100% higher performance per watt. Unsurprisingly, it targets AI inference, training, and agentic workloads in liquid-cooled environments and is optimized for Nvidia's BlueField-4 STX reference architecture.
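The 28 GB/s figure is plausible against the link's ceiling, assuming a four-lane connection — the lane count is an assumption, as the article does not specify it. PCIe 6.0 signals at 64 GT/s per lane and packs traffic into 256-byte FLITs, of which 236 bytes carry TLP payload:

```python
# Rough ceiling for a PCIe 6.0 link, assuming x4 (lane count not stated
# in the article). 64 GT/s raw per lane; 236 of every 256 FLIT bytes
# carry TLP payload in PCIe 6.0's FLIT mode.
raw_gts_per_lane = 64
lanes = 4                      # assumed
flit_efficiency = 236 / 256    # TLP payload bytes per 256-byte FLIT

usable_gbs = raw_gts_per_lane / 8 * lanes * flit_efficiency
print(f"~{usable_gbs:.1f} GB/s")   # ~29.5 GB/s, above the 28 GB/s spec
```

Under those assumptions the drive's 28 GB/s sits just under the link's usable ceiling, i.e., it is close to saturating the interface.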

Meanwhile, the 192GB SOCAMM2 module is designed for Nvidia Vera Rubin NVL72 systems and standalone Vera CPU platforms, with Micron's SOCAMM2 portfolio spanning 48GB to 256GB capacities. The Vera Rubin platform supports up to 2TB of memory and 1.2 TB/s of bandwidth per CPU using the module.

Follow Tom's Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.