
Best Options for Using AI in Chip Design


Experts at the Table: Semiconductor Engineering sat down to discuss how and where AI can be applied to chip design to maximize its value, and how that will impact the design process, with Chuck Alpert, Cadence Fellow; Sathish Balasubramanian, head of product marketing and senior director for custom IC at Siemens EDA; Anand Thiruvengadam, senior director and head of AI product management at Synopsys; Sailesh Kumar, CEO of Baya Systems; Mehir Arora, head of engineering at ChipAgents; Daren McClearnon, product manager at Keysight. What follows are excerpts of that conversation. To view part one, click here.

L-R: Keysight’s McClearnon; Synopsys’ Thiruvengadam; ChipAgents’ Arora; Cadence’s Alpert; Siemens’ Balasubramanian; Baya Systems’ Kumar.

SE: Does AI fundamentally change chip design? In the past, designs were broadly domain-specific. But in the future, can you have domains and sub-domains, as opposed to one large domain?

Balasubramanian: Yes, and that’s a big change. For each of the verticals, like automotive or other mission-critical applications, certain criteria are important. In high-performance computing, no one cares about power anymore. We used to talk about low power, but for the past five years no one has been talking about it. Just look at NVIDIA racks and their power consumption. It’s all speed, speed, speed. Each vertical has its own requirements, and that drives what kind of technology, product, or solution we need to provide to our customers.

Kumar: CAD tools can benefit a lot from AI. As you build these models, you automate a lot of things based on data. That’s where you can build vertical solutions because each vertical’s design patterns are different. So you can have highly focused AI.

Alpert: I look at the technology and ask, ‘What can we build with LLMs today? What do we think we can deliver in three months or six months?’ Obviously you want to pick the low-hanging fruit, like debug analysis. You generate scripts to ask, ‘What is wrong with my design? How can I fix it? Give me suggestions.’ In automotive, we all have lane assist now, and when you back up and something is behind you, it goes ‘beep beep.’ It doesn’t fundamentally change the way you drive. Maybe you don’t twist your neck as much because you’re looking at the camera. But you’re still driving the car, and you can see that self-driving cars are coming. If you have a kid today, you might never need to teach them to drive. So there are these short-term things that will make designers more productive, and there are these longer-term things, like self-driving cars, that will really be disruptive.
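
Alpert’s “low-hanging fruit” example follows a simple pattern: gather a tool log, wrap it in a prompt, and ask a model for ranked suspects and suggested next steps. The sketch below is purely illustrative; call_llm is a hypothetical placeholder for whatever model endpoint a team actually uses, and the log file name is invented for the example.

```python
# Illustrative sketch only: LLM-assisted debug triage of a tool log.
# call_llm() is a hypothetical placeholder, not any vendor's API, and
# the log file name used at the bottom is invented.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model client; returns a canned reply here."""
    return "(model response would appear here)"

def triage_design_log(log_path: str, max_chars: int = 20_000) -> str:
    """Ask a model for likely root causes and next steps from a tool log."""
    with open(log_path, "r", errors="replace") as f:
        log_excerpt = f.read()[-max_chars:]  # keep the tail, where errors usually land
    prompt = (
        "You are helping debug a chip design flow.\n"
        "From the tool log below, list the most likely root causes, ranked,\n"
        "with one suggested next step for each.\n\n"
        f"--- LOG EXCERPT ---\n{log_excerpt}\n"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(triage_design_log("synthesis_run.log"))  # hypothetical log file
```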

Balasubramanian: We have a PAVE360 digital twin for automotive. You can run your RTL on an emulator or on an FPGA for prototyping. You can run your software. You can build in your road and traffic scenarios, and you can use synthetic data to model the way the whole car behaves. And then we can connect that to a real car on a test track. That’s a digital twin. EDA has been doing this for a long time. We’re just getting into a higher and higher abstraction.

Thiruvengadam: The steady state, through autonomous workflows, is going to be truly disruptive. We are quite a way away from that. But our agentic vision involves increasing levels of autonomy. We essentially have rolled out an L1-through-L5 scale, where L5 is the Holy Grail of fully autonomous end-to-end workflows. L1 is where we are today, and maybe heading into L2. L3 involves orchestration, and then planning and decision-making. When we get to L5, we’ll be asking questions like, ‘Are junior-level engineers really needed?’ That’s when those questions become pertinent, but not right now.

Arora: We can turn this whole discussion on its head. Up until now, we’ve been talking about EDA tools for humans, but we can start to talk about EDA tools for agents. These agents are trained on natural language; that’s what the pre-training is. But we can actually build better tooling for agents. The interesting thing about agentic systems is that there are only a couple of ways in which LLMs are superior to humans. One of them is that they have more patience than your average human, and they read a lot faster. From this perspective, you can start to think about developing EDA tooling where you’re more interested in parallelism. You’re interested in trying more things more aggressively, as opposed to a human, who is kind of single-threaded. So in some sense, we can talk about slapping AI on top of existing tooling. But we also have to think about the fact that at some point we’re going to have agentic platforms. What tools are those platforms going to end up using? This is going to become a serious domain, because there are big differences in the way humans use these EDA tools and the way agents use them.
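
Arora’s point about parallelism is the easiest to make concrete: an agent can fan out many tool runs at once, where a human engineer tends to iterate one run at a time. A minimal sketch, assuming a hypothetical run_eda_tool wrapper and an invented parameter sweep:

```python
# Minimal sketch of fanning out many tool runs in parallel, the way an
# agentic system might explore a design space. run_eda_tool() and the
# parameter sweep are hypothetical placeholders, not a real tool interface.
from concurrent.futures import ThreadPoolExecutor

def run_eda_tool(config: dict) -> dict:
    """Hypothetical wrapper that would launch one tool run and parse its report."""
    # In practice this might invoke a vendor flow via subprocess and pull
    # timing/area/power numbers out of the generated reports.
    return {"config": config, "score": sum(config.values())}  # dummy result

# An agent can try many variants at once; a human typically runs one at a time.
sweep = [{"target_freq_mhz": f, "effort": e}
         for f in (800, 1000, 1200)
         for e in (1, 2, 3)]

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_eda_tool, sweep))

best = max(results, key=lambda r: r["score"])
print(f"Best of {len(results)} parallel runs: {best['config']}")
```

Whatever the real interface looks like, the difference Arora describes is the fan-out: the agent explores many variants concurrently and surfaces only the best candidates.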
