
Elon Musk reveals roadmap with nine-month cadence for new AI processor releases, beating Nvidia and AMD's yearly cadence — Musk plans to have the highest-volume chips in the world


Nvidia tends to release its AI GPUs at an annual cadence, which keeps the company ahead of all rivals. AMD has invested heavily to keep up, so it also launches new AI accelerators on a yearly rhythm. Elon Musk apparently wants Tesla to move even faster and release new AI processors every nine months, perhaps to eventually catch up with AMD and then with market leader Nvidia. There is a caveat to Musk's plans, but he seems to be working toward a solution.

"Our AI5 chip design is almost done and AI6 is in early stages, but there will be AI7, AI8, AI9," Elon Musk posted on X. "Aiming for a 9-month design cycle. Join us to work on what I predict will be the highest volume AI chips in the world by far!"

Elon Musk's Tesla is not as prompt as AMD and Nvidia when it comes to releasing new hardware. There is an explanation for this: the company's processors are primarily meant for cars, which require redundancy and safety certifications. While redundancy is common for large high-performance AI processors that tend to be the maximum size possible (the reticle limit of an EUV lithography system), the safety required for cars is a whole different level.
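To put that reticle limit in perspective, here is a rough sketch (the 26 mm x 33 mm exposure field is the standard scanner figure; the dies-per-wafer formula is a common first-order approximation, not Tesla's actual yield math):

```python
import math

def dies_per_wafer(wafer_mm: float, die_area_mm2: float) -> int:
    """First-order approximation of gross dies per wafer;
    ignores scribe lines and edge exclusion."""
    r = wafer_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_mm / math.sqrt(2 * die_area_mm2))

# A lithography scanner's exposure field is 26 mm x 33 mm, so a
# reticle-limited die tops out around 858 mm^2.
reticle_limit_mm2 = 26 * 33
print(reticle_limit_mm2)                        # 858
print(dies_per_wafer(300, reticle_limit_mm2))   # roughly 59 per 300 mm wafer
```

At that size, only a few dozen candidate dies fit on a wafer, which is part of why reticle-limited data-center chips are expensive and why redundancy is built in.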

Functional safety for automotive chips — particularly those used in advanced driver-assistance systems (ADAS) and autonomous driving — must comply with strict requirements. The ISO 26262 standard serves as one of the governing specifications, but it is by far not the only one.

For advanced ADAS and automated driving (up to full self-driving), regulators increasingly require scenario-based testing (edge cases, failure modes), on-road testing permits for higher automation levels, safety of the intended functionality (SOTIF), and cybersecurity and software-update compliance. Needless to say, developing a processor for a car is not any easier than building one for a data center.

Can the cycle be shortened, assuming Tesla intends its processors to serve both cars and data centers? It seems feasible, but only under very strong constraints, and it will not look like a traditional 'clean-sheet' chip cycle. Let's unpack this a bit.

A 9-month design cycle is realistic only if AI6, AI7, AI8, and AI9 are incremental, platform-based iterations, not clean-sheet designs. That means reusing the same core architecture, programming model, memory hierarchy, safety framework, and most IP, with changes limited to scaling compute, tuning SRAM, modest dataflow tweaks, or a planned node retarget. Any attempt to introduce something that goes beyond compute, such as a new memory type, compiler model, coherency scheme, or safety architecture, would immediately lengthen the schedule. In the data-center market dominated by Nvidia, though, those automotive standards are beside the point: performance and the software stack are what matter.
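As a back-of-the-envelope illustration of what the cadence buys, consider projecting hypothetical completion dates for the generations Musk named (the start date and the fixed-cycle assumption here are ours, not Tesla's actual schedule):

```python
from datetime import date, timedelta

def roadmap(start: date, cycle_months: int, generations: list[str]) -> dict[str, date]:
    """Project hypothetical completion dates for successive chip
    generations, assuming a fixed design-cycle length in months."""
    out = {}
    for i, name in enumerate(generations):
        # Approximate a month as 30.44 days for this rough sketch.
        out[name] = start + timedelta(days=round(30.44 * cycle_months * i))
    return out

gens = ["AI6", "AI7", "AI8", "AI9"]          # generations Musk named on X
nine = roadmap(date(2026, 1, 1), 9, gens)    # hypothetical start date
twelve = roadmap(date(2026, 1, 1), 12, gens)

# Under these assumptions, AI9 lands ~27 months in at a 9-month cadence
# versus ~36 months at an annual one.
print(nine["AI9"].year, twelve["AI9"].year)
```

The point of the sketch is just the compounding: over four generations, a 9-month cycle pulls the last chip in by roughly nine months relative to an annual rhythm — but only if every iteration stays incremental.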


From a carmaker's point of view, automotive requirements make this cadence easier, not harder: long lifecycles, determinism, and ISO 26262 safety force designs towards very conservative evolution and locked interfaces. Given the overlapping development (multiple generations in flight), vertical integration, and a single internal customer, Tesla could sustain this cadence.

Meanwhile, 'highest-volume AI chips' clearly suggests processors meant to be deployed across millions of vehicles, a far higher unit volume than data-center AI accelerators see.
