Nvidia says it is trying to introduce artificial intelligence into every stage of the chip design process, drastically reducing development time. Notably, the company has revealed that porting a standard cell library, a task that previously took eight engineers 10 months to complete, can now be done overnight on a single GPU. However, the company's chief scientist, William Dally, says that AI is still far from being able to design a processor entirely on its own.
"We are trying to use AI wherever we can in our design process," Dally told Google's Jeff Dean. "I would love to have the end-to-end stage where I could simply say, 'design me the new GPU,' but I think we are a long way from that."
Nvidia already uses AI across multiple stages of chip design, from circuit-level optimization to system-level exploration, and achieves order-of-magnitude productivity gains and, in some cases, better-than-human results, according to Dally.
At the lowest level, AI has already transformed standard cell development, one of the most time-consuming steps in transitioning to a new fabrication process. Porting a standard cell library of roughly 2,500–3,000 cells previously required a team of eight engineers working for about 10 months, according to Dally. Nvidia has replaced this work with a reinforcement learning system called NVCell, which can now complete the same task overnight on a single GPU.
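The framing here is essentially layout-as-search: propose a placement, score it, keep what improves. The toy below reduces that loop to swap-based local search over four hypothetical cells with a wirelength reward. Every name and number is invented for illustration; the real system's states, actions, and rewards are far richer than this.

```python
import random

# Toy stand-in for learned cell layout: arrange 4 "cells" in 4 slots to
# minimize total wirelength between connected cells. Purely illustrative;
# this is plain local search, not Nvidia's actual RL formulation.

CONNECTIONS = [(0, 1), (1, 2), (2, 3), (0, 3)]  # hypothetical netlist

def wirelength(order):
    pos = {cell: slot for slot, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in CONNECTIONS)

def search(trials=2000, seed=0):
    rng = random.Random(seed)
    best = [0, 2, 1, 3]                 # deliberately poor starting layout
    for _ in range(trials):
        cand = best[:]
        i, j = rng.sample(range(4), 2)  # perturb: swap two cells
        cand[i], cand[j] = cand[j], cand[i]
        if wirelength(cand) <= wirelength(best):  # keep non-worsening moves
            best = cand
    return best, wirelength(best)
```

Starting from a layout with wirelength 8, the loop settles at the optimum of 6 for this tiny netlist; the point is only the propose-score-keep cycle, which an RL agent replaces with a learned policy.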
At a higher level, Nvidia has developed internal large language models, ChipNeMo and Bug Nemo, trained on proprietary architecture documentation covering every GPU Nvidia has ever developed. These models act as engineering assistants that can explain to junior designers how complex hardware blocks work, sparing senior engineers from fielding questions the models can handle.
"We had a series of LLMs that we called ChipNeMo and Bug Nemo. We took a generic LLM, and then we fine-tuned it by feeding it all of the design documents proprietary to Nvidia," Dally said. "So this is stuff you cannot get outside the company: all of the hardware design docs, all of the RTL for every GPU ever designed at Nvidia, all of the architecture specs for those. Now you have this LLM that is actually very smart about GPU design. […] When you have a junior designer, they can ask ChipNeMo, and ChipNeMo will explain [how GPUs work]. It improves productivity that way; it is a very patient mentor."
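Dally doesn't detail how the assistants are wired up, but a common pattern for document-grounded assistants is retrieval: find the most relevant internal document for a question, then let the model answer from it. The sketch below shows only that retrieval step, with an invented three-snippet corpus and a bag-of-words cosine score; Nvidia's real corpus and pipeline are proprietary and this is not them.

```python
import math
from collections import Counter

# Minimal keyword retriever over a made-up "internal docs" corpus, a
# hypothetical stand-in for the retrieval step behind a design assistant.

DOCS = {
    "sm_arch": "the streaming multiprocessor schedules warps of threads",
    "memctl": "the memory controller arbitrates requests to DRAM banks",
    "nvlink": "nvlink provides high bandwidth links between GPUs",
}

def vectorize(text):
    # bag-of-words term counts
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_doc(query):
    # return the corpus document most similar to the query
    q = vectorize(query)
    return max(DOCS, key=lambda d: cosine(q, vectorize(DOCS[d])))
```

A junior designer's question such as "how are warps scheduled on the streaming multiprocessor" would pull up the `sm_arch` snippet, which the language model then uses as grounding for its explanation.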
Beyond cell libraries and engineering assistance, Nvidia is applying reinforcement learning to classical circuit design problems. An RL-based system explores design options by trial and error, producing circuits that beat human designs on area, power, and performance, and arriving at them faster than human engineers can.
"It comes up with totally bizarre designs that no human would ever come up with, but they are actually 20% or 30% better than the human designs," said Dally.