Tech News

A Google AI breakthrough is pressuring memory chip stocks from Samsung to Micron

Why This Matters

Google's recent breakthrough in AI model efficiency with the TurboQuant compression technique is causing concern in the memory chip industry, as it could reduce demand for memory components used in large language models. This development highlights the rapid pace of innovation in AI and its potential to disrupt existing hardware markets, impacting both investors and tech companies reliant on memory chip sales.

Key Takeaways

[Photo: Signage outside the Google headquarters in Mountain View, California, US, on Tuesday, Feb. 3, 2026.]

Google's latest research, which claims to make AI models more efficient, is putting pressure on memory stocks, with investors concerned the breakthrough could lead to a slowdown in chip demand.

On Thursday, shares of the world's two biggest memory chipmakers, SK Hynix and Samsung, fell 6% and nearly 5%, respectively, in South Korea. Japanese flash memory company Kioxia dropped nearly 6%. These moves followed declines in Sandisk and Micron in the U.S. on Wednesday; both companies were lower in U.S. premarket trade on Thursday.

Alphabet's Google on Tuesday unveiled TurboQuant, a new compression method that it says could reduce the amount of memory required to run large language models by a factor of six. The technique focuses on shrinking the key-value cache, which stores an AI model's past calculations so it doesn't have to run them again.
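To make the memory math concrete, here is a minimal sketch of key-value cache quantization, one common way such caches are compressed. This is an illustrative toy, not Google's TurboQuant, whose internals the article does not describe; the tensor shapes and the 4-bit scheme are assumptions chosen purely to show how storing cached values at lower precision shrinks the cache.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KV cache: (layers, sequence length, heads, head dim), stored as
# float16 as many inference stacks do. Shapes here are arbitrary.
kv_cache = rng.standard_normal((4, 128, 8, 64)).astype(np.float16)

def quantize_int4(x):
    """Per-tensor symmetric quantization to the 4-bit range [-8, 7]."""
    scale = float(np.abs(x).max()) / 7.0
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map 4-bit integers back to approximate float values."""
    return q.astype(np.float32) * scale

q, scale = quantize_int4(kv_cache)

# Two int4 values pack into one byte, so storage drops 4x vs. float16
# (2 bytes per value -> 0.5 bytes per value).
fp16_bytes = kv_cache.nbytes
int4_bytes = kv_cache.size // 2
print(f"float16 cache: {fp16_bytes} bytes")
print(f"int4 cache:    {int4_bytes} bytes "
      f"({fp16_bytes / int4_bytes:.0f}x smaller)")
```

Quantization alone gives a fixed ratio (4x in this sketch); reaching a larger reduction such as the six-fold figure cited above would require combining it with other techniques, for example pruning or low-rank approximation of the cached values.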

The technique is aimed at making AI models more efficient, a major goal of the leading labs.

Investors fear that this could reduce demand for AI memory chips, which have been a critical component in training and running huge LLMs from companies like Google, OpenAI and Anthropic.

Matthew Prince, CEO of Cloudflare, called the research "Google's DeepSeek," referencing the efficiency breakthroughs made by Chinese AI firm DeepSeek last year, which caused a massive sell-off in tech stocks.

"So much more room to optimize AI inference for speed, memory usage, power consumption, and multi-tenant utilization," he said in a post on X on Wednesday.