Experts at the Table: Semiconductor Engineering sat down to discuss how and where AI can be applied to chip design to maximize its value, and how that will impact the design process, with Chuck Alpert, Cadence Fellow; Sathish Balasubramanian, head of product marketing and senior director for custom IC at Siemens EDA; Anand Thiruvengadam, senior director and head of AI product management at Synopsys; Sailesh Kumar, CEO of Baya Systems; Mehir Arora, head of engineering at ChipAgents; and Daren McClearnon, product manager at Keysight. What follows are excerpts of that conversation. To view part one, click here.

L-R: Keysight’s McClearnon; Synopsys’ Thiruvengadam; ChipAgents’ Arora; Cadence’s Alpert; Siemens’ Balasubramanian; Baya Systems’ Kumar.

SE: Does AI fundamentally change chip design? In the past, designs were broadly domain-specific. But in the future, can you have domains and sub-domains, as opposed to one large domain?

Balasubramanian: Yes, and that’s a big change. For each of the verticals, there are certain criteria that are important, like automotive or mission-critical applications. In high-performance computing, no one used to care about power. We used to talk about low power, but as of five years ago, no one was talking about low power anymore. Just look at NVIDIA racks and their power consumption. It’s all speed, speed, speed. Each vertical has its own requirements, and that drives what kind of technology or product or solution we need to provide to our customers.

Kumar: CAD tools can benefit a lot from AI. As you build these models, you automate a lot of things based on data. That’s where you can build vertical solutions, because each vertical’s design patterns are different. So you can have highly focused AI.

Alpert: I look at the technology and ask, ‘What can we build with LLMs today? What do we think we can deliver in three months or six months?’ Obviously you want to collect the low-hanging fruit, like debug analysis.
You generate scripts to say, ‘What is wrong with my design? How can I fix it? Give me suggestions.’ In automotive, we all have lane assist now, and when you back up and something is behind you, it goes ‘beep beep.’ It doesn’t fundamentally change the way you drive. Maybe you don’t twist your neck as much because you’re looking at the camera. But you’re still driving the car, and you can see that self-driving cars are coming. If you have a kid today, you might not ever need to teach them to drive. So there are these short-term things that will make designers more productive, and there are these longer-term things, like self-driving cars, that will really be disruptive.

Balasubramanian: We have a PAVE360 digital twin for automotive. You can run your RTL on an emulator or on an FPGA for prototyping. You can run your software. You can build in your road and traffic scenarios, and you can use synthetic data to model the way the whole car behaves. And then we can connect that to a real car on a test track. That’s a digital twin. EDA has been doing this for a long time. We’re just getting into a higher and higher abstraction.

Thiruvengadam: The steady state, through autonomous workflows, is going to be truly disruptive. We are quite a way away from that. But our agentic vision involves increasing levels of autonomy. We essentially have rolled out an L1 through L5, where L5 is the Holy Grail with fully autonomous end-to-end workflows. L1 is where we are today, and maybe heading into L2. L3 involves orchestration and then planning and decision-making. When we get to L5, we’ll be asking questions like, ‘Are junior-level engineers really needed?’ That’s when those questions become pertinent, but not right now.

Arora: We can twist this whole discussion on its head. Up until now, we’ve been talking about EDA tools for humans, but we can start to talk about EDA tools for agents. These agents are trained on natural language, and that’s what the pre-training is.
But we can actually build better tooling for agents. The interesting thing about agentic systems is that there are only a couple of ways in which LLMs are superior to humans. One is that they have more patience than your average human, and another is that they read a lot faster. From this perspective, you can start to think about developing EDA tooling where you’re more interested in parallelism. You’re interested in trying more things more aggressively, as opposed to a human, who is kind of single-threaded. So in some sense, we can talk about slapping AI on top of existing tooling. But we also have to think about the fact that at some point we’re going to have agentic platforms. What tools are those platforms going to end up using? This is going to become a serious domain, because there are big differences in the way humans use these EDA tools and the way agents use these tools.

Alpert: We’re already there. We have tools that have been built on top of tools, and these tools have been modified for that purpose. These optimization AI tools are part of a journey, and we’ll connect those into bigger agentic workflows.

Thiruvengadam: We look at this in terms of value tiers. Optimization is one value tier for AI, and that’s already been tested for many years. We have optimization engines across multiple verticals within an EDA flow — for example, test, analog, digital implementation, or verification. Then there’s another value tier in terms of analytics. These are important because they’re the foundations for the next value tier, which is generative AI and agentic AI. Agentic eventually will leverage tools, and also some of the idle engines, to orchestrate a full flow. It’s not just a tool. We think in terms of value hierarchy.

McClearnon: One of the things we’re seeing is that the application people in verticals are pulling this forward to solve specific problems.
One of those problems is radio-oriented propagation of data, other kinds of materials, and things that solve real use cases. These are more than just EDA. There is a synthesis of other kinds of measurements and observations. So the idea of the agents may be in a customer’s organization, driving some of our pieces to solve their tasks. Also, something we’re beginning to see is people owning their own data and machine learning to drive specific applications.

SE: A senior engineer using AI can spot something that is way off base and recognize that a tool has messed up and needs to be reconfigured. Young engineers don’t necessarily have the history to recognize that. How do we ensure we don’t have these kinds of obvious errors?

Balasubramanian: It depends on what stage it’s in. What we’re trying to do is give as much information as possible, where we are very confident in the answer for the engineer. For example, maybe someone is doing a place-and-route block, and there’s a global route with huge congestion, and they’re just pushing through the flow and running multiple thousands of scenarios. So there are a lot of different recipes, but there is intelligence you can build in that can prevent engineers from going in the wrong direction. If you have a centralized EDA AI system that is driving these flows, when you hit certain heuristics in your design, you actually have to give feedback, either good or bad, to an engineer. ‘Hey, you don’t need to proceed further. Or, I know you’re on the wrong track.’ It doesn’t matter if it’s a junior or a senior engineer. It’s a matter of how much we can learn from what we have done before.

Alpert: That’s not a new problem. We’ve been building engineering teams in other geographies with lots of inexperienced people, and they learn. Having young talent is really important. As we all age, companies become stuck doing things the same way. You have to bring in new people to be disruptive, and that’s a good thing.
Just like customers build flows and write scripts to get their own secret sauce, I see them building their own agents on top of our infrastructure. We provide tools, and if we use common protocols like MCP (Model Context Protocol), maybe they won’t be writing scripts anymore. They’ll be building agents specifically for their design tasks that work with our tools and APIs. I can foresee a world where that skill emerges, and young engineers will probably be good at that.

Thiruvengadam: We’re being too quick to jump into an agentic discussion. There are many steps to it. There are generative AI capabilities even before we get to agentic. There’s a whole bunch of assistance we can provide to young engineers.

SE: It’s moving a lot faster than in the past, though.

Thiruvengadam: Yes, it’s evolving really fast. But you still cannot skip these critical steps. For example, there are knowledge assistants that will benefit a junior engineer far more than an expert. The assistance that you can provide a junior or mid-level engineer when it comes to script generation, recipes, next steps when you look at a log file — these are all things that are available today. These are the foundations for agents.

McClearnon: If you look at sports and kids coming out of the minor leagues and college, they’re hitting their peak not at 27 or 28, but at 23. Part of it is the coaching and specific feedback on how they can improve. For engineers, the ability to dream bigger, get feedback, and ‘fail faster’ is accelerated learning — even for humans. When EDA was less mature, there was the same resistance: ‘You’re not going to use a calculator anymore.’ It’s part of how you learn.

Arora: That’s an excellent point. What we’re circling around is the question of legibility. How do you make these tools legible to people? On one hand, we’ve been talking about pushing humans up to the level of AI systems, which are a bit hard to understand because they’re doing so much for you in an autonomous way.
But we also can push down. We can start with the AI systems, and we can try to make them more legible to humans. A fundamental problem that will be attacked at the research level, at the product level, and by multiple businesses is how to make AI’s proof of work legible to people. When an engineer delivers something to you, and you’re also an engineer who understands all the same jargon, they can quickly give you some sort of short, informal proof that the work they’ve done ticks all the boxes you’re interested in. I’m not necessarily talking about a formal proof, but something that is sufficient for another human being to understand. ‘Yes, the work that I did actually checked all the boxes. I dotted all the i’s and crossed all the t’s.’ Can AI systems do the same thing? Can we make the review of AI itself another task that we optimize for? Because as these systems start to do more, their work is going to become even more indecipherable. It’s going to be very hard. So we have to bring this down to a level where humans are in a position to review this work more efficiently and effectively, without having to look at everything in painstaking detail and re-derive everything themselves.

Kumar: Hopefully, AI can bring a higher level of abstraction. The way we scale is by abstraction. We used to design chips at a much lower level of abstraction. Now we design at a higher level of abstraction. We are in the IP paradigm, where we design chips from IPs, and we’ll get into the chiplet paradigm. If AI can help make the design abstractions higher, that will accelerate the learning of junior engineers. At Baya we do on-chip fabrics, and we started with a couple of dozen engineers with 20-plus years of experience. Now we’re bringing in new talent, and I’m always amazed at what they can do.
They come right out of college, and within six months they become very productive. They put in 80 hours a week. They are just absolutely mind-blowing. So we have to find every possible way to bring in new talent and make sure that we can scale our capacity and leverage them in the best possible way. If AI can help with that by creating new abstraction levels, making the tools easier to use, and leveraging the whole knowledge base effectively, that can lower the barrier to entry and bring all these junior engineers into the mainstream.