
Nvidia’s $2B Synopsys stake strengthens its push into AI-accelerated chip design — partnership to bolster GPU-accelerated EDA tools, but a careful balance is required


Nvidia has taken a $2 billion equity stake in Synopsys, and the two companies have announced a long-term collaboration to accelerate electronic design automation (EDA) workloads on GPUs. Under the partnership, announced on December 1, they will co-develop tools that shift computationally heavy EDA tasks from CPUs to Nvidia GPUs.

Synopsys already dominates several segments of the chip-design software market, including generative AI for chip development, and its tools are used across the industry by CPU and GPU vendors.

Nvidia and Synopsys emphasized that the agreement is non-exclusive and that Synopsys will continue working with other hardware makers, but the deal still raises questions about influence and long-term control of the design pipeline. For Nvidia, it accelerates the company's own development cycle while giving it a foothold in the upstream tooling that competitors rely on.

Pushing chip design deeper into accelerated computing

(Image credit: Nvidia)

GPU-accelerated EDA is nothing new, but this partnership formalizes it at a scale that no other chip designer currently matches. In the announcement, Jensen Huang described a workflow in which full-chip simulations and verification steps are executed on GPU clusters rather than CPU farms.

"CUDA GPU-accelerated computing is revolutionizing design — enabling simulation at unprecedented speed and scale, from atoms to transistors, from chips to complete systems, creating fully functional digital twins inside the computer," he said.

The shift may reduce multi-week simulation stages to timescales closer to days or hours, depending on the workload. Synopsys is contributing its existing AI-assisted design stack while adopting Nvidia’s software frameworks, including CUDA and the company’s agent-based automation work from its NeMo platform.

The aim appears to be a generation of tools that treat accelerated compute as the baseline for design work. Synopsys has been integrating AI into its DSO.ai and VSO.ai products for years, but the scale of GPU compute available through Nvidia's platform allows full-chip workloads that were previously constrained by CPU throughput. In practice, that would mean more design variants explored, more exhaustive verification passes performed, and layout optimization loops run to convergence rather than truncated to meet schedules.
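The compute-budget tradeoff described above can be illustrated with a toy sketch. This is plain Python, not Synopsys or Nvidia tooling; the cost function, budgets, and the idea of modeling GPU-class throughput as a larger evaluation budget are all invented for illustration:

```python
import random

def layout_cost(params):
    # Toy stand-in for a layout metric (e.g., wirelength): minimized at params == 0.
    return sum(p * p for p in params)

def optimize(budget, dims=8, seed=0):
    """Hill climb on the toy cost under a fixed evaluation budget."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(dims)]
    best_cost = layout_cost(best)
    for _ in range(budget):
        candidate = [p + rng.gauss(0, 0.1) for p in best]
        cost = layout_cost(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best_cost

# A run truncated for schedule reasons vs. one given a 100x larger budget
# (a crude stand-in for GPU-class throughput on the same workload).
truncated = optimize(budget=50)
converged = optimize(budget=5000)
```

With the same starting point, the larger budget can only match or improve the truncated result, which is the practical argument for throwing accelerated compute at these loops rather than cutting them short.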

Additionally, if EDA becomes faster and more automated on Nvidia hardware, Nvidia is the first beneficiary. It can build, evaluate, and iterate on silicon faster than rivals bound to traditional CPU clusters, unless those rivals make an equivalent transition to accelerated simulation. Cadence has been experimenting with similar GPU-accelerated workflows, but without the scale or direct equity alignment that Synopsys now has with Nvidia.
