Anthropic dominates AI business spend with 73% share. Mizuho lists stocks to watch - Investing.com
While the Pentagon was cutting off Anthropic's government contracts, enterprise customers were choosing it at a record rate.
Anthropic now captures 73 percent of all spending among companies buying AI tools for the first time, according to customer data from Ramp, a fintech company that tracks corporate spending. That figure was 50 percent in January. The swing reflects a sharp shift in enterprise AI buying patterns, one unfolding precisely as Anthropic fights the Trump administration in court over its refusal to give the Pentagon unrestricted access to its models.
Mizuho analyst Chris Klein noted the dynamic in a client note this week: Anthropic is winning in the market despite — or, in the view of some customers, because of — its confrontation with the government. "This is smart and good for key suppliers," Klein wrote, flagging Microsoft and Oracle as among those positioned to benefit from Anthropic's continued growth.
The revenue trajectory supports the enterprise demand picture. Anthropic's annualized revenue reached $2.5 billion in February, putting the company on track to surpass OpenAI's revenue by the end of 2026, according to estimates from research firms Epoch and Semianalysis. That is a remarkable position for a company that, a year ago, was widely described as the smaller, more cautious competitor to OpenAI and Google.
The TIME profile published this week offers one explanation for the appeal. The magazine spent three days at Anthropic's San Francisco headquarters interviewing executives, engineers, and safety teams. The picture that emerges is of a company that has made a genuine bet on institutional trust as a product feature — and is now discovering both the value and the cost of that bet.
One episode stands out: in February 2025, Anthropic's frontier red team discovered that a soon-to-be-released version of Claude could potentially help bad actors make biological weapons. Five members of the team scrambled from a conference to a hotel room, turned a bed on its side to use as a desk, and spent hours analyzing the results. The company held the release of Claude 3.7 Sonnet for 10 days until they were certain it was safe. Logan Graham, the red team leader, later described it as "a fun and interesting day." Graham told TIME: "Some people's intuition from growing up in a peaceful world is that somewhere there's a room full of adults who know how to fix it. There are no groups of adults. There is no room in the first place. There is no door you're looking for. You are responsible."
The company's safeguards head, Dave Orr, put it differently: "We're driving down a cliff road. A mistake will kill you. Now we're driving at 75 instead of 25."
That posture has a market effect. In enterprise software, where buyers are increasingly anxious about regulatory exposure and reputational risk from AI failures, Anthropic's documented caution — and its willingness to absorb financial costs from that caution — looks like a different kind of reliability. The Pentagon fight, from this angle, is not a liability. It is evidence that Anthropic means what it says about its red lines.
Anthropic signed a $200 million partnership with Snowflake in February to make Claude available through Snowflake's platform. It aired two Super Bowl commercials that month. By every external measure available, the company is growing faster than at any point in its history. The Pentagon designation covers only military contracts; enterprise and commercial customers are unaffected, and they are choosing Anthropic in growing numbers.