FIS Built an AI to Find Money Launderers. Who Pays When It Gets It Wrong?
On May 4, FIS announced a partnership with Anthropic to embed an autonomous AI agent inside the compliance infrastructure of some of the world's largest banks. U.S. financial institutions spend $35–40 billion annually on anti-money laundering operations, according to FIS's own press release. The UN estimates that $2 trillion in illicit funds flows through the global financial system every year. FIS says its new Financial Crimes AI Agent will compress AML alert and case investigations from days to minutes.
The agent will not wait for a human to approve every decision. According to FIS, the Anthropic Applied AI team and forward-deployed engineers are embedded with FIS to co-design the Financial Crimes AI Agent. BMO and Amalgamated Bank will be among the first institutions to deploy the agent, with broader availability planned for the second half of 2026.
Exactly four weeks earlier, on April 7, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to an urgent meeting over concerns that the latest artificial intelligence model from Anthropic would usher in an era of greater cyber risk, Bloomberg reported. Multiple outlets covering the session said officials were particularly focused on the Mythos model. Now the same company regulators flagged is embedded inside FIS, which processes transactions for banks handling trillions of dollars in daily volume.
The core question nobody has answered: who is liable when the agent gets it wrong?
FIS describes an agent that acts autonomously. If it flags a legitimate business for investigation, the fines, legal fees, and business disruption fall on the financial institution, not on FIS or Anthropic, under the contract structures these companies typically use; no current federal banking regulator guidance assigns the risk anywhere else. FIS's press release describes the agent as the decision-maker. If the flag is wrong, the bank's options are to accept the automated decision, manually review it, or turn the system off, which defeats the efficiency argument.
No federal banking regulator has issued guidance specifically addressing how banks should handle disputes when an AI agent makes the initial call on suspicious activity. Existing AML frameworks assume a human in the loop. Autonomous AML does not have a mapped liability chain.
Anthropic and FIS did not respond to questions about who bears financial responsibility for erroneous AI-driven flags. The banks using the system have not published their own compliance analyses. FinTech Global reported that the $35–40 billion U.S. AML operations market is one target for the partnership, and noted that agents will be evaluated against criteria including accuracy and explainability — though who sets those criteria and how errors are adjudicated remains unresolved in the public record.
The accountability gap is not hypothetical. The Next Web reported that Anthropic is building financial services agents simultaneously across JPMorgan Chase, Goldman Sachs, Moody's, Xero, Intuit, and FIS — a concentration of autonomous AML capability inside a small number of providers that financial regulators have already flagged as a systemic concern. If the agent makes a systematic error, the fallout does not stay contained at one bank.
Anthropic reported approximately $30 billion in annualized revenue in April, The Next Web reported, citing Reuters. The financial services push is one of the clearest signals yet that the company is moving from model sales into embedded operational roles inside critical financial infrastructure — a different kind of exposure than selling API tokens.
What comes next: the partnership is targeting H2 2026 for initial deployment at BMO and Amalgamated Bank. Banking regulators in the U.S., EU, and UK will need to decide whether existing AML frameworks cover autonomous AI decisions or require new rules. Until they do, the liability chain runs through the bank, and the bank's only real recourse is to turn off the system that was supposed to save money.