Broadcom just gave the most direct signal yet that the AI infrastructure buildout is not slowing down — it is accelerating into custom silicon. The chipmaker reported Q1 2026 AI semiconductor revenue of $8.4 billion, up 106 percent from a year earlier, and guided for $10.7 billion in AI chip revenue for Q2 alone — a 140 percent year-over-year jump that projects to roughly $43 billion on an annualized basis (CNBC). The headline number Sonny flagged: Broadcom now has six major custom silicon customers, and OpenAI is the sixth.
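The arithmetic behind that run-rate figure is worth making explicit. A minimal sketch, using only the figures above; annualizing here just means multiplying one quarter's guidance by four, which assumes a flat trajectory:

```python
# Back-of-envelope check on Broadcom's reported AI revenue figures.
q1_ai_revenue = 8.4    # Q1 2026 AI semiconductor revenue, in $B (reported)
q2_ai_guidance = 10.7  # Q2 2026 AI chip revenue guidance, in $B

# Annualized run rate: one quarter's guidance times four (assumes flat quarters).
annualized = q2_ai_guidance * 4
print(f"Q2 annualized run rate: ${annualized:.1f}B")  # 42.8 -> "roughly $43 billion"

# Implied year-ago quarters, backed out from the stated growth rates.
q1_prior_year = q1_ai_revenue / 2.06   # up 106% YoY
q2_prior_year = q2_ai_guidance / 2.40  # up 140% YoY
print(f"Implied year-ago Q1: ${q1_prior_year:.1f}B, Q2: ${q2_prior_year:.1f}B")
```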
That is the detail worth sitting with. OpenAI spent years insisting it did not need to build its own chips — Sam Altman loudly pursued an external foundry strategy — and now it is committed to a custom XPU program with volume deployment targeted for 2027 at over 1 gigawatt of compute capacity (Motley Fool earnings transcript). The company announced the Broadcom partnership last October for up to 10 gigawatts of custom AI accelerators (Broadcom investor release). The gap between the public posture and the actual hardware roadmap is real, and it tells you something about the economics at scale.
What OpenAI is building — codenamed Titan — is a custom ASIC manufactured by TSMC on its N3 (3nm) process node, paired with an exclusive supply agreement for Samsung HBM4 memory: up to 800 million gigabits of 12-layer HBM4 (Tech Insider). The program has a stated goal of a 90 percent reduction in inference costs compared to running equivalent workloads on general-purpose GPUs. If that number holds at scale, it would be the most significant cost-structure shift in AI deployment since the original transformer efficiency gains. Titan's initial deployment is targeted for December 2026, with a second-generation chip — Titan 2 — planned for TSMC's A16 (1.6nm) node and a 2027 deployment window.
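To get a feel for the scale of that memory agreement, here is a rough unit conversion. The 800 million gigabits figure is from the report; the roughly 36 GB per 12-layer stack (12 dies of 24 Gb each) is my assumption for illustration, not a figure from the source:

```python
# Rough scale of the reported Samsung HBM4 supply agreement.
total_gigabits = 800e6                # up to 800 million gigabits (reported)
total_gigabytes = total_gigabits / 8  # 8 gigabits per gigabyte
print(f"Total capacity: ~{total_gigabytes / 1e6:.0f} PB")  # ~100 petabytes

# ASSUMPTION: one 12-layer HBM4 stack = 12 dies x 24 Gb = 36 GB.
gb_per_stack = 36.0
stacks = total_gigabytes / gb_per_stack
print(f"Equivalent stacks: ~{stacks / 1e6:.1f} million")  # ~2.8 million stacks
```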
The technical details matter here because they separate this from a press release. Broadcom CEO Hock Tan told analysts that its custom XPU programs are 12 to 18 months ahead of any customer-owned tooling programs — meaning even the labs that have tried to go fully custom in-house are behind Broadcom's program (Motley Fool earnings transcript). That is a striking claim, though it comes from someone with every incentive to talk up his own pipeline.
The other five customers tell the story of the custom silicon landscape. Meta is scaling its MTIA accelerator roadmap aggressively, targeting multiple gigawatts of custom capacity by 2027 (Motley Fool earnings transcript). Anthropic placed a $10 billion custom chip order with Broadcom, then followed it with an $11 billion order — and separately is scaling to 1 gigawatt of Google TPU compute in 2026 and over 3 gigawatts in 2027 (Globe and Mail / Broadcom AI revenue analysis). That puts Anthropic in the same infrastructure category as the hyperscalers — right as the Defense Department was naming it a supply chain risk to national security.
That is the geopolitical subplot nobody should gloss over. Defense Secretary Pete Hegseth said the Pentagon would designate Anthropic a supply chain risk, and President Trump directed government agencies to stop using Anthropic systems (CNBC). The same company Washington is trying to push out of federal systems is quietly becoming one of the largest custom silicon buyers on the planet. Tan cited Anthropic reaching 1 gigawatt of TPU compute in 2026 — that is not a startup buying cloud credits. That is infrastructure.
The $100 billion question is what happens next. Broadcom says it has line of sight to over $100 billion in AI chip revenue in 2027 (CNBC). That number is doing a lot of work. It is a projection — dependent on both fabrication capacity at TSMC's leading-edge nodes and continued appetite from customers who are simultaneously building their own silicon. The risk for Broadcom is that its customers are increasingly competitors in the silicon business. Google has TPUs. Meta has MTIA. Amazon has Trainium and Inferentia. They use Broadcom now, but the trajectory points toward vertical integration.
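For a sense of how much that projection is asking, compare the 2027 target with the current run rate. Fiscal-year boundaries make this loose, so treat it as a ratio, not a forecast:

```python
# What Broadcom's 2027 target implies relative to the current run rate.
run_rate = 10.7 * 4    # Q2 2026 guidance annualized, in $B (~42.8)
target_2027 = 100.0    # stated line of sight for 2027, in $B

growth_required = target_2027 / run_rate - 1
print(f"Growth required over the run rate: {growth_required:.0%}")  # ~134%
```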
What OpenAI's bet on custom silicon tells you is that the frontier-lab model — software-first, hardware-agnostic — is functionally over at scale. The inference cost math does not work with general-purpose GPUs when you are running tens of billions of inference calls per week. OpenAI has 800 million weekly active users (Broadcom investor release). That is not a research organization anymore. It is an infrastructure company that happens to ship models.
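Here is a minimal sketch of that inference-cost math. The 800 million weekly users figure is from the piece and the 90 percent reduction is Titan's stated goal; the per-user call rate and the per-call GPU cost are assumptions chosen purely for illustration:

```python
# Illustrative inference-cost math at OpenAI's reported scale.
weekly_users = 800e6        # weekly active users (reported)
calls_per_user = 40         # ASSUMPTION: inference calls per user per week
gpu_cost_per_call = 0.002   # ASSUMPTION: dollars per call on general-purpose GPUs

weekly_calls = weekly_users * calls_per_user
print(f"Weekly inference calls: {weekly_calls / 1e9:.0f}B")  # tens of billions

gpu_weekly_cost = weekly_calls * gpu_cost_per_call
titan_weekly_cost = gpu_weekly_cost * (1 - 0.90)  # stated 90% reduction goal
print(f"GPU baseline: ${gpu_weekly_cost / 1e6:.0f}M/week; "
      f"at -90%: ${titan_weekly_cost / 1e6:.1f}M/week")
```

At these assumed rates the weekly bill drops from roughly $64 million to about $6 million; the absolute numbers are invented, but the tenfold gap is just the stated 90 percent goal restated.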
Titan is the test case. If it delivers even close to the stated 90 percent inference cost reduction, it changes competitive dynamics for every lab that has not made the same bet. If it slips — and December 2026 is aggressive for a first-generation custom ASIC at this performance tier — the cost advantage evaporates, and OpenAI is left with a multi-billion-dollar program and a chip that did not arrive on time. The next twelve months are the verification period.