The spreadsheet did to financial modeling what AI agents are about to do to compliance — put a critical business function in the hands of people who never learned to program, unlock real productivity gains, and create accountability problems that take a decade to fully understand. That is not the product pitch. That is the story.
A compliance officer can now build an AI agent that approves trades: no developers, no engineering ticket, no waiting for a sprint. The capability arrived this week. The question of who is accountable when something goes wrong has no answer in U.S. law. The tool is Comply's MCP Server, built on the Model Context Protocol, a standardized way for AI assistants to connect directly to business systems. Comply announced April 23 that compliance officers can use it to build trade pre-clearance agents without writing code: a fund manager instructs the AI to submit a trade request and receives an immediate approval or denial, from an agent built by the compliance officer rather than by engineering. Oliver Wyman estimated in February that agentic AI can automate up to 70 percent of manual compliance work and improve risk detection accuracy by as much as four times. Those gains arrive before any U.S. framework defines what compliance teams can and cannot put into production with them.
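Comply has not published what these no-code agents look like internally, but the pre-clearance pattern described above can be sketched in a few lines. Everything here is hypothetical: the ticker names, the limits, and the `preclear_trade` function are illustration under assumed rules, not Comply's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical restricted list and order-size limit; a real agent would
# pull these from the firm's compliance systems over MCP.
RESTRICTED_TICKERS = {"ACME"}
MAX_ORDER_SHARES = 10_000

@dataclass
class Decision:
    approved: bool
    reason: str
    # Timestamped audit record: the kind of artifact an examiner
    # would ask for six months later.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def preclear_trade(ticker: str, shares: int) -> Decision:
    """Approve or deny a trade request against simple, inspectable rules."""
    if ticker in RESTRICTED_TICKERS:
        return Decision(False, f"{ticker} is on the restricted list")
    if shares > MAX_ORDER_SHARES:
        return Decision(False, f"order of {shares} shares exceeds the size limit")
    return Decision(True, "passed restricted-list and size checks")
```

In an actual MCP deployment, a function like this would be registered as a tool the assistant can call on the fund manager's behalf; the point of the sketch is that the decision rules and the audit trail can live in plain code that an examiner can read, which is precisely what a natural-language-built agent may lack.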
Lotus 1-2-3 put financial modeling in the hands of non-programmers in the 1980s. The efficiency gains were genuine. The accountability problems that followed, with regulators arriving and nobody able to reconstruct how a model worked, played out over years. An AI agent running in real time against market data decides faster than any formula, in contexts more complex than any spreadsheet, and is considerably harder to audit after the fact. The accountability gap is structural, not incidental. The question compliance teams will face is not whether the tooling works (it likely will) but whether a decision it enabled can be explained six months later to an examiner who asks why a particular trade was approved by a system no developer ever touched.
Singapore moved first. The city-state's Infocomm Media Development Authority published what it calls the world's first cross-sector governance framework for AI agents in financial services in January 2026, covering decision attribution, audit trails, and accountability chains. MetaComp cited it as the most comprehensive response to date. The U.S. has no equivalent. SEC examination guidance on AI-assisted compliance decisions remains, in the view of most compliance lawyers tracking the issue, a work in progress. The EU AI Act covers general-purpose models; it was not designed for agentic systems running in real time against live market data.
Fewer than one in three financial institutions have adequate controls to oversee AI agents, according to McKinsey research from January 2026 cited by MetaComp. Comply's own data shows 96 percent of compliance leaders are exploring or using AI, but only 49 percent say their firm has a formal AI governance policy. The adoption speed is not hypothetical. The governance infrastructure to account for it is missing, and nobody in the U.S. has built the equivalent of what Singapore published five months ago.
Comply has disclosed neither funding nor revenue; its 5,000-plus-firm figure describes headcount, not commercial scale. No reference customers spoke on record at launch. ComplyAI Policy Guide is in testing and ships via MCP Server in mid-2026.
Singapore's framework does not answer that question for U.S. firms. It shows what an answer might look like — and that no one else has tried.