At a Synopsys Converge panel last month, Thomas Andersen told an audience that AI would soon automate the dullest, most grinding part of chip design: generating test workloads, hunting bugs, and verifying that a chip actually does what the spec says it should. Andersen's two-year forecast was measured compared to what Microsoft is promising. But AMD Fellow Alex Starr had a different message on the same panel: most of this capability is already available today. You don't have to wait.
That gap — "two years" versus "right now" — is the real story in AI-assisted chip design. And it reveals something uncomfortable about how the companies controlling 92 percent of the $4 billion US electronic design automation market talk about their own disruption.
Verification, the process of proving a chip design works before it gets fabbed, consumes 60 to 70 percent of design time, according to Techsplicit. A single missed bug can kill a launch, cost hundreds of millions, and take a year to respin. It's the most expensive part of the job, which is exactly why it's the first target for automation.
The established EDA vendors are not ignoring AI. Synopsys, which along with Cadence and Siemens EDA controls nearly all of the market, reported that a customer using its formal verification AI saw a 35 percent engineering productivity boost and validated 10 design components in 10 days.† Cadence's ChipStack AI agent claims up to 10x productivity gains in certain tasks. And Nvidia poured $2 billion into Synopsys last December specifically to accelerate simulation workloads, including EDA, on GPUs.
Those are real numbers. But they're also the numbers a market leader wants you to see while the ground shifts underneath them.
The six organizations on the Synopsys Converge panel — Thomas Andersen from Synopsys, Sridhar Boinapally from Intel, Alex Starr from AMD, Stuart Oberman from Nvidia, Silvian Goldenberg from Microsoft, and Borivoje Nikolic from UC Berkeley — represent the institutions most exposed to whatever comes next. Semiconductor Engineering covered the discussion.
Andersen said generative AI and reasoning models have "made a huge leap" in automating verification work. Starr's counterpoint was less comfortable: the tools exist now, the question is whether the industry has organized itself to use them. That's a process problem, not a technology problem. And process problems inside fabless companies, foundries, and EDA vendors don't get solved by selling more software licenses.
What makes Starr's point interesting is that the EDA industry has a structural interest in framing automation as a future thing. If verification work shrinks by 50 percent in two years — as Dr. Erik Berg, a principal engineer on Microsoft's silicon verification team, posited at the Accellera DAC luncheon — the revenue model built on charging for compute cycles per verification run faces pressure. EDA software is priced partly by capacity. If you need half as much capacity, the math changes.
UC Berkeley's Nikolic offered the most unambiguous evidence that the technology is further along than the roadmap-speak suggests. His group published work showing AI discovered analog circuit topologies that had been seen in literature but never used — because they were too complex for human engineers to reason about. The AI didn't invent new physics. It found connections that were already there, hiding in papers nobody had fully worked through. The group also produced a paper showing AI found a cache replacement policy for general-purpose processor cores that beats what human architects developed.
That is not incremental improvement. That is the AI doing something human designers couldn't, working from the same problem definition. The caveat is that these are research results, not shipped production tools. But the track record of AI finding inhuman solutions in other engineering domains — protein folding, materials discovery — suggests the pattern will hold.
The limitations are real too. The panel noted that AI struggles with specifications that weren't written for machine parsing, with analog blocks where physics dominates, and with verification scenarios that require understanding intent rather than matching patterns. You cannot verify what you cannot specify. If a human designer didn't know what edge case to check for, neither will the AI — until the reasoning models get better at inferring intent from context.
Nvidia's Oberman framed it practically: the question is not whether AI can help, but whether it can help at a specific company, on a specific node, with a specific team that has specific tooling already in place. The answer varies. And that variance is why the "two years" timeline from Synopsys and the "available today" read from AMD can both be true.
The EDA market's concentration — three companies handling 92 percent of spending — creates a particular dynamic. If AI genuinely halves verification workload, those three companies have to find new ways to justify their pricing, or they have to sell more seats, more compute, more everything to the same customers who suddenly need less of the core product. The startup pressure is real: Techsplicit catalogued a wave of AI-native EDA entrants targeting verification, parasitic extraction, and analog design. None of them have meaningful market share yet. But a 10x productivity claim from a well-funded challenger is a different threat than a 10 percent improvement from an incumbent.
The two-year forecast from Synopsys may be conservative for the technology and optimistic for the business model. The "available today" read from AMD may be accurate for the leading edge and irrelevant for the long tail of chip teams still running legacy flows. Both readings are correct. That's what happens when a market leader and a fabless chip company look at the same data and see different things — because they're protecting different things.
† Source-reported; not independently verified.