Intel was the first company to build an explicitly disaggregated chiplet design with its Ponte Vecchio compute GPU for AI and HPC applications, which comprised 47 chiplets. That product still holds the record for the most populous multi-tile design, but Intel Foundry envisions something considerably more extreme: a multi-chiplet package that integrates at least 16 compute elements across eight base dies and 24 HBM5 memory stacks, and scales to 12x the reticle limit — larger than the biggest AI chips on the market and beating TSMC's announced 9.5x-reticle packages. Of course, we can only wonder about the power consumption and cooling requirements of such a beastly processor.
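To put those reticle multiples in perspective, here is a quick back-of-the-envelope sketch. It assumes a standard full-field lithography reticle of 26 mm x 33 mm (858 mm²) — a common industry figure, not something Intel specified — and simply scales it by the announced multipliers.

```python
# Back-of-the-envelope package-area math.
# Assumption: a standard full-field reticle of 26 mm x 33 mm = 858 mm^2;
# the 12x and 9.5x multipliers come from Intel's and TSMC's announcements.
RETICLE_MM2 = 26 * 33  # 858 mm^2

def package_area(reticle_multiple: float) -> float:
    """Approximate total silicon area for a given reticle multiple."""
    return reticle_multiple * RETICLE_MM2

print(f"Intel 12x concept:  {package_area(12):,.0f} mm^2")   # ~10,296 mm^2
print(f"TSMC 9.5x roadmap:  {package_area(9.5):,.0f} mm^2")  # ~8,151 mm^2
```

Even under these rough assumptions, the concept package works out to more than 100 cm² of silicon — roughly the footprint of a small tablet.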
(Image credit: Intel)
Intel's conceptual 2.5D/3D multi-chiplet package demonstrates 16 large compute elements (AI engines or CPUs) produced on Intel's 14A or even more sophisticated 14A-E process technology (1.4nm-class with enhanced features, second-generation RibbonFET gate-all-around transistors, and improved PowerVia Direct backside power delivery).
These sit on top of eight (presumably reticle-sized) compute base dies made on 18A-PT (1.8nm-class, performance-enhanced, with through-silicon vias (TSVs) and backside power delivery) that can either do additional compute work or pack plenty of SRAM cache for the 'main' compute dies, as Intel shows in its example.
"From cutting-edge interconnects to system-level assembly and test, Intel Foundry delivers the scale and integration needed to power next-generation multichip platforms." https://t.co/smSje92QQh #IntelFoundry #Semiconductors pic.twitter.com/4sAVftVRhg — December 22, 2025
The base dies are connected to the compute tiles using Foveros Direct 3D, which leverages ultra-high-density, sub-10 µm copper-to-copper hybrid bonding to deliver maximum bandwidth and power to the top dies. Foveros Direct 3D is currently the pinnacle of Intel Foundry's packaging portfolio, underscoring just how sophisticated this design is.
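A quick sketch shows why that sub-10 µm pitch matters. Assuming a simple square grid of bond pads, and using a ~36 µm microbump pitch as a comparison baseline (an illustrative assumption, not an Intel figure), the connection density scales with the inverse square of the pitch:

```python
# Rough interconnect-density estimate for hybrid bonding.
# Assumptions: a square grid of pads; "sub-10 um" pitch per Intel's
# disclosures; ~36 um microbump pitch as an illustrative baseline.
def pads_per_mm2(pitch_um: float) -> float:
    """Connections per mm^2 for a square pad grid at the given pitch."""
    return (1000.0 / pitch_um) ** 2

print(f"10 um hybrid bonding: {pads_per_mm2(10):,.0f} pads/mm^2")  # 10,000
print(f"36 um microbumps:     {pads_per_mm2(36):,.0f} pads/mm^2")  # ~772
```

Under these assumptions, hybrid bonding offers on the order of 13x more die-to-die connections per square millimeter than microbumps — which is what enables the bandwidth and power-delivery density the stacked design depends on.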
For lateral (2.5D) interconnections, the base dies use EMIB-T (an enhanced version of the Embedded Multi-Die Interconnect Bridge with TSVs) with UCIe-A on top — both among themselves and to I/O dies made on 18A-P (1.8nm-class, performance-enhanced) and to custom base dies carrying up to 24 HBM5 memory stacks.
It is noteworthy that Intel proposes to use EMIB-T with UCIe-A on top to connect customized HBM5 modules rather than JEDEC-standard HBM5 stacks with an industry-standard interface, possibly to extract more performance and capacity. Given the conceptual nature of the demonstration, though, custom HBM5 stacks are not a design requirement; they are simply a way to show that Intel can integrate such devices as well.