Insurance Has Spent Decades Building Data Moats. Verisk Just Opened the Gates to AI Agents.
Insurance only works as a business because two parties want opposite things.
The policyholder wants a low premium. The insurer wants to price accurately and avoid covering too many high-risk customers, a dynamic economists call adverse selection: the people most likely to need insurance are also the ones most likely to buy it at any given price. That adversarial structure, humans on both sides with conflicting incentives, mediated by risk models, is what makes the market function. On May 5, 2026, Verisk, which builds the risk models most U.S. property and casualty insurers depend on, announced it had connected those models to AI agents through MCP (Model Context Protocol), the open standard that lets AI assistants call external data sources. The framing from both companies was that this democratizes access to risk data. The adversarial bargain that insurance runs on assumes asymmetry of information. MCP appeared to remove it.
Except it doesn't — not yet. The Verisk MCP connectors are read-only data retrieval. An insurer's AI can query Verisk's risk model through Claude. The policyholder cannot independently verify, challenge, or replicate that same model query. The asymmetry hasn't been abolished; it has been replicated on the AI side of the market. Verisk still controls the underlying model and its outputs. What Verisk opened is a window into the vault, not the vault itself.
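In protocol terms, a "window into the vault" is a tool call: the client sends a JSON-RPC 2.0 request and receives data back, with no write path. A minimal sketch of that message shape follows; the tool name and argument schema here are hypothetical illustrations, not Verisk's actual connector interface, which the article does not specify.

```python
import json


def build_mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message an MCP
    client sends to invoke a server-side tool. Read-only connectors
    expose only tools like this; there is no mutation endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


# Hypothetical read-only risk query; names are illustrative only.
request = build_mcp_tool_call(
    "property_risk_lookup",
    {"address": "123 Main St, Springfield", "peril": "wind"},
)
print(request)
```

The structural point is in the shape of the exchange: the client can ask, the server answers from the model it controls, and nothing in the request lets the caller inspect or alter the model itself.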
That is still a structural bet, just a narrower one than the marketing implies. Verisk has deployed 40 AI solutions internally, built over more than two decades, per its own announcement. It is now exposing the data those systems run on through a protocol any model provider can implement. The company is converting a data moat into a distribution layer while retaining the underlying data advantage. Whether that is a durable position depends on whether any other data holder decides to do the same thing.
Lee Shavel, Verisk's president and chief executive, framed the launch as giving insurers "conversational access" to analytics that used to require a data engineering team. Mike Ram, Anthropic's head of insurance, described the connectors as letting carriers "operate at a higher level" with their own risk data. Both descriptions are marketing language for the same underlying thing: proprietary data is now composable infrastructure.
The irony is not subtle. Verisk spent decades building its competitive position on exactly this asymmetry — you could not get Verisk's risk models without being a Verisk customer. Now it is publishing the interface spec and hoping other companies build on top of it rather than around it. The window is open. The question is whether Verisk has traded a gate for a distribution channel it can still control.
No major competitor has published a comparable move. LexisNexis and CoreLogic — which operate adjacent data businesses in insurance risk — have not announced MCP integrations. No public earnings call, press release, or regulatory filing from either company signals an analogous strategy. Whether they are watching Verisk's experiment or waiting to see how it performs is not known from public sources.
Neither has any regulatory framework addressed what happens when both sides of a risk transaction query the same model through the same protocol. No insurance-specific AI decision-making rule — from NYDFS, the EU AI Act, or state insurance regulators — defines the obligations or rights that attach to AI-to-AI risk assessment. That gap is not a problem today. It is a question the industry will eventually have to answer.
The efficiency claims Verisk cites come from the company that benefits if insurers adopt the connectors: hundreds of staff-hours per carrier annually for underwriting, and 30 minutes to two hours per restoration estimate, both per Verisk's own projections. No outside party has confirmed production impact for either connector. The read-only architecture means the efficiency case depends on insurers choosing to query Verisk more often, not on any autonomous action the model takes.
Whether other data-rich industries follow — healthcare records, supply chain logistics, trade finance — is the real question the Verisk launch opens. If they do, the agent infrastructure stack changes shape: less about which foundation model you use, more about which proprietary dataset you can wire in. Verisk has made the first move in what may become a broader repricing of who owns the data that AI agents depend on.