SemiAnalysis estimates that memory will account for roughly 30% of total hyperscaler capex in calendar year 2026, up from approximately 8% in CY23 and CY24. The firm projects that share will climb further in CY27, representing a near four-fold shift in just four years as DRAM prices surge and HBM remains massively undersupplied.
SemiAnalysis expects DRAM prices to more than double in CY26, with another double-digit ASP increase in CY27. LPDDR5 contract pricing has already more than tripled since Q1 2025, and the firm estimates open-market pricing will likely exceed $10/GB this quarter.
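As a quick back-of-the-envelope check on those two figures (a rough sketch assuming current open-market pricing sits near the $10/GB mark, not SemiAnalysis's own methodology), the ">3x since Q1 2025" multiple implies a Q1 2025 baseline in the low-$3/GB range:

```python
# Back-of-the-envelope check of the LPDDR5 figures quoted above.
# The ">3x" multiple and the ~$10/GB level come from the article;
# the implied baseline is a simple division, not a reported number.
current_price_per_gb = 10.0   # $/GB, assumed to sit near the quoted floor
increase_multiple = 3.0       # "more than tripled" since Q1 2025

implied_q1_2025_price = current_price_per_gb / increase_multiple
print(f"Implied Q1 2025 LPDDR5 price: under ~${implied_q1_2025_price:.2f}/GB")
```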
HBM, the vertically stacked memory at the core of AI accelerators, remains undersupplied through CY27 according to SemiAnalysis’s findings, with memory now constituting a massive share of the approximately $250 billion in incremental hyperscaler spend projected for this calendar year.
"Memory is taking over Hyperscaler CapEx. In CY23 and CY24, memory was ~8% of total Hyperscaler spend. We estimate it hits 30% in CY26 and moves higher in CY27. That's a near-4x shift in just four years," SemiAnalysis wrote in a thread on April 3, 2026. pic.twitter.com/fUxpwUYfcO
This is already reflected in AI server pricing, with SemiAnalysis noting that B200 prices are set to rise by up to 20% by year-end, driven in large part by memory cost inflation. That aligns with the broader industry picture, with manufacturers acknowledging steep component cost increases in recent earnings calls: Dell COO Jeff Clarke described the rate of cost movement as "unprecedented" during the company's third-quarter earnings call back in November.
Counterpoint Research has separately projected that DDR5 64GB RDIMM modules could cost twice as much by the end of 2026 as they did in early 2025. AI servers built on Nvidia's LPDDR-based platforms are seeing some of the steepest increases because of the sheer volume of memory per system.
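To see why per-system memory volume matters, the sketch below uses hypothetical per-GB prices and hypothetical system capacities (none of these configurations or dollar figures come from Counterpoint or SemiAnalysis) to show how the same $/GB move scales linearly with installed capacity:

```python
# Hypothetical illustration: the same $/GB increase hits a memory-heavy
# AI server far harder than a typical general-purpose server, simply
# because total memory cost scales linearly with installed capacity.
def memory_bill(capacity_gb: float, price_per_gb: float) -> float:
    """Total memory cost for a system at a given $/GB."""
    return capacity_gb * price_per_gb

old_price, new_price = 5.0, 10.0              # $/GB, hypothetical before/after
systems = {
    "general-purpose server (1 TB)": 1024,    # GB, hypothetical config
    "LPDDR-heavy AI server (4 TB)": 4096,     # GB, hypothetical config
}

for name, gb in systems.items():
    delta = memory_bill(gb, new_price) - memory_bill(gb, old_price)
    print(f"{name}: memory bill rises by ${delta:,.0f}")
```

With four times the installed memory in this example, the AI server absorbs four times the absolute cost increase from the same per-GB move, which is the mechanism behind the steeper increases noted above.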
An interesting dynamic SemiAnalysis noted is that Nvidia receives what the firm calls "VVP" (Very Very Preferred) DRAM pricing from suppliers, “well below [the rates paid by] both hyperscalers and the broader market.” This, according to SemiAnalysis, compresses Nvidia's own server cost exposure and pushes down overall market pricing benchmarks, masking how severe the supply crunch actually is for everyone else.
AMD sits on the other side of that dynamic: its AI accelerator SKUs generally carry higher memory content per unit, and the company doesn't benefit from the same preferential supplier pricing, making it, in SemiAnalysis's words, "structurally more exposed [to memory cost inflation] at a time when it operates at far lower AI accelerator scale." In other words, Nvidia's purchasing scale across HBM and conventional DRAM gives it leverage that smaller-volume buyers simply can't replicate.