After criticizing leading chipmakers for slow capacity expansion and claiming that his companies need 100–200 billion AI processors annually, Elon Musk last week unveiled TeraFab, a chipmaker that aims to produce logic chips, HBM4 memory, and advanced packaging under one roof. Backed by an initial investment of roughly $20 billion, the project targets an annual output of chips consuming a combined 1 terawatt (1 TW) of power, manufactured on leading-edge process technology, within the next several years.
But an exhaustive analysis by Tom's Hardware Premium reveals so many factors working against TeraFab, an effort designed primarily to produce chips in-house, that the plan appears highly unrealistic: at best, it is a step toward partial vertical integration for Tesla, SpaceX, and xAI.
Barriers to entry in the semiconductor industry are so high that launching a new player capable of manufacturing chips in high volumes on leading-edge process technologies is nearly impossible, in terms of both capital investment and expertise. All new foundries established in recent decades were either spun off from leading integrated device manufacturers (Intel Foundry, GlobalFoundries, Samsung Foundry), backed by governments (Rapidus, Tata Semiconductor, Hua Hong/HLMC, SMIC), or focused on niche markets (SkyWater, Ayar Labs, Lightmatter). And many of these newcomers, including Intel Foundry, Rapidus, and Tata, have yet to prove that they can become competitive, world-class contract chipmakers.
TeraFab does not plan to become a foundry; its sole purpose is to serve the silicon needs of Elon Musk's companies, including Tesla, SpaceX, and xAI. Yet its needs for capital (from $4 trillion to $5 trillion, depending on how you count), equipment, expertise, and a skilled workforce are vast. Meeting them within a realistic timeframe is, quite simply, impossible. Here's why.
A question of capital
Money is the most obvious challenge for Elon Musk's chip venture. To build 1 TW of AI silicon per year, TeraFab would need to process the equivalent of 22.4 million Rubin Ultra GPU wafers, 2.716 million Vera CPU wafers, and 15.824 million HBM4E wafers annually, according to estimates from premier semiconductor analysis firm Bernstein. To do so, TeraFab would need from 142 to 358 fabs, the report claims.
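The sheer scale of that wafer demand is easier to grasp when summed. A quick sketch, using only the per-wafer-type figures cited above (the sum and the monthly rate are simple arithmetic, not Bernstein numbers):

```python
# Bernstein's estimated annual wafer demand for 1 TW of AI silicon,
# as cited in the article (all figures in wafers per year).
gpu_wafers = 22.400e6   # Rubin Ultra GPU wafers
cpu_wafers = 2.716e6    # Vera CPU wafers
hbm_wafers = 15.824e6   # HBM4E wafers

total_per_year = gpu_wafers + cpu_wafers + hbm_wafers
total_per_month = total_per_year / 12

print(f"Total demand: ~{total_per_year / 1e6:.2f}M wafers/year "
      f"(~{total_per_month / 1e6:.2f}M wafer starts per month)")
# → Total demand: ~40.94M wafers/year (~3.41M wafer starts per month)
```

For perspective, several million wafer starts per month is a meaningful fraction of the entire industry's 300-mm capacity, which is why Bernstein's fab-count estimate runs into the hundreds.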
An Intel semiconductor fabrication plant in Oregon. (Image credit: Intel)
Bernstein's calculations are based on a top-down conversion of compute demand into semiconductor manufacturing requirements. They start with Musk's goal of 1 TW of annual AI compute and translate it into the number of AI racks needed, using assumptions about Nvidia's rack power (e.g., 120 kW to 600 kW), GPUs per rack, and system architectures similar to Nvidia’s Blackwell and Rubin platforms.
They then convert those systems into chip, wafer, and fab demand using fixed assumptions for die sizes (e.g., ~825 mm² GPUs, ~800 mm² CPUs), HBM configurations, and yields. This is where Bernstein's analysis gets a bit rough: the firm assumes a modern fab has a capacity of 50,000 wafer starts per month (WSPM), which is too high for a leading-edge logic fab and too low for a DRAM fab, and costs $35 billion to build, which is not enough for a 50K WSPM logic fab but may be too high for a DRAM fab. These assumptions inflate the estimated cost of the whole project; while the ballpark of several trillion dollars seems correct, $5 trillion may be too high.
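Multiplying Bernstein's fab-count range by its assumed per-fab build cost (both figures from the report as cited above) shows where the trillion-dollar totals come from:

```python
# Total build-out cost = number of fabs x assumed cost per fab.
COST_PER_FAB_B = 35  # $ billions; Bernstein's fixed assumption

for fabs in (142, 358):
    total_trillions = fabs * COST_PER_FAB_B / 1000
    print(f"{fabs} fabs x ${COST_PER_FAB_B}B = ~${total_trillions:.1f} trillion")
# → 142 fabs x $35B = ~$5.0 trillion
# → 358 fabs x $35B = ~$12.5 trillion
```

The low end of the range lands almost exactly on the $5 trillion figure cited earlier; the high end illustrates how sensitive the total is to the per-fab capacity assumption, which is precisely the soft spot in Bernstein's model.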