Intel Foundry this week released a promotional document detailing its leading-edge front-end and back-end offerings for AI and HPC applications, and showcased an 'AI chip test vehicle' that demonstrates the company's current packaging capabilities. The capabilities are indeed impressive: the company is showing off an eight-reticle-sized system-in-package (SiP) that features four logic tiles, 12 HBM4-class stacks, and two I/O tiles. More importantly, unlike the massive concept with 16 logic tiles and 24 HBM5 stacks that the company presented last month, this one is actually manufacturable today.
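For a sense of scale, a quick back-of-envelope sketch puts a number on "eight reticle-sized." It assumes the standard 26 mm x 33 mm lithography reticle field; Intel has not published the package's actual silicon dimensions, so this is an estimate, not an official figure.

```python
# Back-of-envelope scale estimate for an 8x-reticle SiP.
# Assumption: the standard 26 mm x 33 mm reticle field (858 mm^2);
# Intel has not disclosed the actual area, so this is illustrative only.
RETICLE_MM2 = 26 * 33           # 858 mm^2 per reticle field
sip_area_mm2 = 8 * RETICLE_MM2  # ~6,864 mm^2 of reticle-equivalent silicon
print(sip_area_mm2)             # 6864
```

Roughly 6,900 mm^2 of reticle-equivalent area is an order of magnitude beyond what a single monolithic die could offer, which is the whole point of the multi-chiplet approach described below.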
First, it is worth noting that what Intel Foundry is showcasing is not a working AI accelerator, but rather an 'AI chip test vehicle' that shows how future AI and HPC processors can be physically built (or rather assembled). To a large degree, the company is demonstrating an entire construction method that combines large compute tiles, stacks of high-bandwidth memory, ultra-fast chip-to-chip links, and a new class of power delivery in one manufacturable package. This package differs significantly from what TSMC offers today (more on this later). In short, the concept shows that next-generation heavy-duty AI processors are multi-chiplet designs, and that Intel Foundry can build them.
At the heart of the platform are four large logic tiles, allegedly built on Intel's 18A process technology (hence featuring RibbonFET gate-all-around transistors and PowerVia backside power delivery). They are flanked by HBM4-class memory stacks and I/O tiles, and are presumably stitched together with EMIB-T 2.5D bridges embedded directly in the package substrate. EMIB-T adds through-silicon vias inside the bridges so that power and signals can pass vertically as well as laterally, maximizing interconnect density and power delivery. Logically, the platform is designed around UCIe die-to-die interfaces running at 32 GT/s and beyond, which also appear to be used to attach the presumably C-HBM4E stacks.
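To put the 32 GT/s figure in perspective, a short sketch converts it into per-module bandwidth. It assumes a UCIe advanced-package module with 64 lanes, the width the UCIe specification defines for advanced packaging; Intel has not disclosed how many lanes or modules the test vehicle actually uses.

```python
# Illustrative UCIe bandwidth at the 32 GT/s rate cited above.
# Assumption: one UCIe advanced-package module with 64 lanes (per the
# UCIe spec); Intel's actual lane and module counts are undisclosed.
GT_PER_S = 32                # giga-transfers per second, per lane
LANES = 64                   # lanes in one advanced-package module
raw_gbps = GT_PER_S * LANES  # 2048 Gb/s raw, one direction
raw_gbytes = raw_gbps / 8    # 256 GB/s per module, one direction
print(raw_gbytes)            # 256.0
```

With many such modules along each die edge, aggregate die-to-die bandwidth scales into the terabytes per second, which is what makes treating four logic tiles as one large accelerator plausible.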
The test vehicle also previews Intel's move toward vertical integration. The company's roadmap includes the Intel 18A-PT process technology, developed specifically for chiplets that stack other logic dies or memory on top and that therefore must feature backside power delivery, pass-through TSVs, and hybrid bonding. In the 'AI chip test vehicle', 18A-PT base dies sit under 18A/18A-P compute dies and either act as large caches or perform additional work. To connect chiplets vertically, Intel uses its Foveros family of packaging technologies (Foveros 2.5D, Foveros-R, and Foveros Direct 3D), which enable fine-pitch copper-to-copper bonding between active dies. Together with EMIB bridges, these methods let Intel build a hybrid lateral-and-vertical assembly that the company positions as an alternative to large silicon interposers, with higher wafer utilization and yield.
(Image credit: Intel)
For multi-chiplet AI and HPC accelerators, power delivery is the principal design constraint. To that end, Intel's platform should be able to combine all of the company's latest power-related innovations: PowerVia, on-die Omni MIM capacitors, bridge-level decoupling in EMIB-T, base-die eDTC and eMIM-T capacitors, and embedded CoaxMIL inductors. These support 'semi-integrated' voltage regulators (IVRs) located beneath every stack and below the package itself (as opposed to TSMC's CoWoS-L, where the IVRs are part of the interposer). This layered network is designed to absorb the fast current swings of generative AI workloads without collapsing voltage margins, delivering maximum clean power when needed.
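A first-order estimate shows why this layered decoupling matters. Inductive voltage droop follows V = L * di/dt; all of the numbers below are illustrative assumptions for a modern package, not Intel figures.

```python
# Why layered decoupling matters: first-order droop estimate, V = L * di/dt.
# All values are illustrative assumptions, not Intel-published figures.
L_loop = 10e-12   # assumed 10 pH effective power-delivery loop inductance
delta_i = 100.0   # assumed 100 A load step from a bursty AI workload
delta_t = 10e-9   # assumed to occur over 10 ns
droop_v = L_loop * (delta_i / delta_t)  # inductive droop, ~0.1 V
print(droop_v)
```

A droop of roughly 0.1 V is a large fraction of a sub-1 V core rail, which is why capacitance and regulation have to sit as close to the dies as possible rather than out on the board.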
Showing an 'AI chip test vehicle' is a way for Intel to attract customers. Yet it remains to be seen whether the company's AI accelerator codenamed Jaguar Shores, due in 2027, will use the architecture Intel is showcasing today.
Follow Tom's Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.