SK hynix is bringing its HBM ambitions to U.S. soil with a $3.9 billion plan to build its first domestic manufacturing facility — a 2.5D advanced packaging plant in West Lafayette, Indiana. The site, developed in partnership with Purdue University, is aimed at producing turnkey HBM modules for AI accelerators by 2028.
In bringing manufacturing to the U.S., SK hynix aims to vertically integrate its HBM supply chain, which currently relies on outside firms to handle the delicate process of mounting HBM stacks onto logic dies via interposers. That assembly work is increasingly a bottleneck for high-end GPUs, especially as demand for AI silicon continues to outstrip available packaging capacity. By owning both the memory and the interconnects, the company is clearly trying to capture the kind of leverage that TSMC holds over the likes of Nvidia.
The Indiana facility, which is partially funded by $458 million in CHIPS Act grants and loans, is SK hynix’s first U.S. production site and part of a broader effort to bring critical semiconductor infrastructure closer to U.S. customers. It comes alongside the company’s new R&D outpost near Seattle, built in part to deepen ties with Nvidia and other domestic hyperscalers. When complete, the plant will package HBM chips with silicon interposers and integrate them with partner dies into a single, thermally optimized 2.5D module ready for use in AI servers or supercomputing clusters.
According to filings, SK hynix plans to operate a full mass-production line at the site, supported by a dedicated talent pipeline from Purdue. That puts it in direct competition with TSMC’s CoWoS platform, which has been the de facto standard for high-end HBM packaging since Nvidia’s Pascal era. And with TSMC’s CoWoS capacity effectively sold out through 2027, customers are already searching for alternatives.
Turnkey HBM
The biggest problem with HBM is that it’s as much a packaging challenge as a memory technology. HBM stacks multiple memory dies vertically using through-silicon vias (TSVs), and each stack must then be mounted on a large silicon interposer next to the host processor. That assembly must account for thermal expansion, routing complexity, and thousands of microbumps, and the result is a tightly coupled chiplet module with massive I/O bandwidth and low power draw — ideal for AI training or HPC workloads.
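To put "massive I/O bandwidth" in perspective, a rough sketch of the arithmetic helps. The figures below (a 1,024-bit interface per stack, nominal per-pin data rates, an assumed eight-stack package) are illustrative numbers drawn from published HBM specifications, not from the article:

```python
# Back-of-envelope peak bandwidth for an HBM package.
# Interface width and per-pin rates are nominal spec-level figures
# (assumptions for illustration, not claims about any specific product).

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of a single HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8  # bits -> bytes

# HBM3: 1,024-bit interface at a nominal 6.4 Gb/s per pin
hbm3_stack = stack_bandwidth_gbs(1024, 6.4)
print(f"HBM3, one stack:   {hbm3_stack:.1f} GB/s")       # ~819 GB/s

# An accelerator with eight stacks on one interposer (assumed count)
stacks = 8
print(f"HBM3, {stacks} stacks: {stacks * hbm3_stack / 1000:.2f} TB/s")
```

The width of that interface — over a thousand signal pins per stack — is exactly why HBM cannot be routed over an ordinary package substrate and needs the interposer-based 2.5D assembly the article describes.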
Until now, HBM suppliers like SK hynix and Samsung have typically sold raw memory stacks, leaving GPU vendors to rely on foundry partners for packaging. Nvidia’s H100 and AMD’s MI300X, for instance, use HBM2e and HBM3 mounted via TSMC’s CoWoS process. But with demand for accelerators reaching historic highs — and HBM4 promising even more aggressive stack designs — the need for in-house packaging has become a priority.
SK hynix’s stated goal is to deliver a “turnkey” solution: HBM stacks already integrated with silicon interposers and, potentially, host dies from customers. That would allow hyperscalers or chip designers to skip TSMC entirely for final assembly, receiving ready-to-mount modules instead. It’s a fundamental shift in how HBM enters the supply chain, positioning SK hynix as a full-stack supplier rather than a component vendor.
There is precedent for this kind of move. TSMC has steadily expanded its role from foundry to integrator over the past decade, using its packaging platforms (CoWoS, InFO, SoIC) to create customer lock-in beyond wafer fabrication. SK hynix appears to be following that playbook from the opposite direction, starting from memory and working outward. The move also puts pressure on Samsung, which is reportedly evaluating its own U.S. packaging line to support future Tesla and AMD accelerator deployments.