Fractile says its AI chips can run 100x faster than Nvidia. Its only proof is a simulation.
Anthropic is in early talks to buy AI inference chips from UK startup Fractile, pursuing a fourth supply track at a moment when GPU availability is tight, inference costs are squeezing margins, and the company's run-rate revenue has surged past $30 billion.
Fractile, founded in 2022 by Oxford PhD Walter Goodwin, is developing SRAM-based compute-in-memory chips that co-locate memory and compute on the same die, eliminating the data movement between separate DRAM and processing logic that is the dominant energy cost in conventional AI silicon. The Information first reported the talks; Winbuzzer followed this week. Fractile would become Anthropic's fourth chip supply track, alongside Nvidia, Google, and Amazon.
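The energy argument behind compute-in-memory can be sketched with rough arithmetic. The per-operation figures below are illustrative orders of magnitude drawn from widely cited chip-energy estimates, not Fractile's measurements, and the function and its fetch ratio are hypothetical simplifications:

```python
# Back-of-envelope: why moving weights from DRAM dominates inference energy.
# Per-operation energies in picojoules; order-of-magnitude illustrative
# values, NOT Fractile's numbers or any vendor's datasheet.
ENERGY_PJ = {
    "fp16_multiply_add": 1.0,   # arithmetic, on-die
    "sram_read_64b": 5.0,       # on-die SRAM access
    "dram_read_64b": 640.0,     # off-die DRAM access
}

def inference_energy_pj(n_macs: int, weights_in_dram: bool) -> float:
    """Energy for n_macs multiply-accumulates, assuming one 64-bit
    weight fetch per four fp16 MACs (a hypothetical simplification)."""
    fetches = n_macs / 4
    src = "dram_read_64b" if weights_in_dram else "sram_read_64b"
    return n_macs * ENERGY_PJ["fp16_multiply_add"] + fetches * ENERGY_PJ[src]

macs = 1_000_000
dram = inference_energy_pj(macs, weights_in_dram=True)
sram = inference_energy_pj(macs, weights_in_dram=False)
print(f"DRAM-resident weights: {dram / 1e6:.0f} uJ")
print(f"SRAM-resident weights: {sram / 1e6:.1f} uJ")
print(f"ratio: {dram / sram:.0f}x")
```

Under these assumed constants, keeping weights in on-die SRAM cuts energy per inference by well over an order of magnitude, which is the lever compute-in-memory designs pull. The sketch ignores thermal limits and SRAM density, the same factors that make the bet risky at scale.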
The context for the talks is real. Anthropic's Claude run-rate revenue has surged past $30 billion, up from roughly $9 billion at the end of 2025, according to people familiar with the company's financials. But the company's gross profit margin on AI products fell short of target last year because inference costs ran higher than expected, according to Digital Today Korea, and that pressure compounds as revenue scales. A fourth supply track would give Anthropic more leverage in a market where GPU availability remains constrained and inference margins matter more with every additional billion dollars of run-rate.
Fractile's architectural bet is coherent in principle: Groq has shipped chips using a similar SRAM-as-compute approach, and Cerebras has shipped chips with massive on-die SRAM. Fractile claims its approach can run large language models 100 times faster and 10 times cheaper than Nvidia's GPUs, based on simulation results it has not yet tested on physical silicon; its chips are projected to reach commercial readiness around 2027, with no volume delivery schedule defined. The company employs 14 people, emerged from stealth in July 2024 with $15 million in seed funding from Kindred Capital, the NATO Innovation Fund, and Oxford Science Enterprises, and is actively raising approximately $200 million at a valuation near $1 billion, according to Tech Funding News, a round that has not yet closed.
What happens next depends on which way the bet breaks. If Fractile's architecture delivers — or even approaches — its stated targets, it would reduce AI labs' dependence on the HBM memory that has made GPU supply tight and Nvidia's position hard to dislodge. If it doesn't — because SRAM density caps out at scale, thermal realities bite harder than simulation suggests, or execution slips — the episode illustrates what it always illustrates: inference cost pressure is acute enough that large labs will sit across the table from pre-silicon startups while their gross margins are burning.
The Information first reported Anthropic's interest in Fractile. The publication is paywalled and its direct quotes cannot be reproduced here. The report has not been independently confirmed by type0.
Anthropic, Nvidia, and Fractile declined to comment for this article.