
Nvidia's exposure to Asian supply chains for components hits 90% of its production costs — a marked increase from 65% that could intensify as physical AI adds even more exposure

Why This Matters

Nvidia's increasing reliance on Asian supply chains, now accounting for 90% of its production costs, highlights significant vulnerabilities in its hardware manufacturing process, especially as physical AI products demand more wafer capacity and components. This dependence could lead to supply chain disruptions, impacting product availability and innovation in the AI hardware sector. For consumers and the industry, this underscores the importance of diversifying supply sources and investing in supply chain resilience to sustain AI advancements.

Key Takeaways

Asian suppliers now represent roughly 90% of Nvidia's production costs, up from about 65% a year earlier, according to data compiled by Bloomberg. That figure captures Nvidia's established data center supply chain: TSMC fabrication, SK hynix and Samsung HBM, and server assembly from Foxconn and Quanta. But the company's physical AI hardware is now adding entire new product categories that route through those same suppliers.

Nvidia's Jetson Thor robotics platform, released last August, is built on the Blackwell GPU architecture and fabricated on TSMC's 3nm process. The top-end T5000 module delivers 2,070 FP4 TFLOPS with 128 GB of LPDDR5X memory, while a lower-cost T4000 variant introduced at CES 2026 offers 1,200 FP4 TFLOPS with 64 GB at $1,999 per unit in volume. Both use Arm Neoverse-V3AE CPU cores and LPDDR5X sourced from Samsung or SK hynix.

These modules compete for TSMC 3nm wafer starts alongside Blackwell data center GPUs. Partners including Boston Dynamics and Amazon Robotics are building on the platform, and LG has confirmed it is "exploring a strategic collaboration in physical AI" with Nvidia, including the robotics ecosystem, Bloomberg reported. Nvidia's DRIVE AGX Thor automotive SoC is another Blackwell-based product line competing for the same 3nm wafer capacity.


None of these physical AI products requires TSMC's CoWoS advanced packaging, which remains the primary bottleneck for data center GPU production, but they do consume 3nm wafer capacity and Asian-sourced LPDDR5X, both of which are already constrained.

The same memory market dynamics feeding Nvidia's newer physical AI products are simultaneously killing off its older ones. In late April, reports indicated that Nvidia had accelerated end-of-life timelines for its Jetson TX2 and Xavier modules because LPDDR4 supply has become too constrained to maintain production. Samsung has moved away from LPDDR4 manufacturing, and AI-driven demand has redirected memory capacity toward higher-margin products.

That forces Jetson customers onto Orin or Thor modules, which use LPDDR5X from the same Asian memory suppliers whose capacity is already stretched by HBM and data center DRAM demand. TSMC's CoWoS advanced packaging for data center GPUs is growing at an 80% compound annual growth rate, TSMC's head of North American packaging told CNBC last month, and chips fabricated at TSMC's Arizona Fab 21 still ship back to Taiwan for packaging.

Nvidia committed to $500 billion in U.S. server manufacturing last year with Foxconn and Wistron, while Amkor and SPIL are building advanced packaging facilities in Arizona. But those operations are not yet at production scale, and physical AI product lines are widening the range of components sourced from Asia faster than domestic capacity can absorb them.

