APRA says banks are putting AI agents to work before building the controls to govern them
Banks are racing to put AI into loan processing, fraud checks and customer service before they have built the controls to manage software that can act like a worker. Australia’s prudential regulator just said the problem out loud: many financial institutions have not updated identity and access systems for “nonhuman actors such as AI agents,” even as those systems move into live financial workflows.
That makes this more than another warning that AI is risky. In a letter to industry published Wednesday, the Australian Prudential Regulation Authority, Australia’s prudential regulator, said a late-2025 review of selected large banks, insurers and superannuation trustees found weak post-deployment monitoring, weak model-behavior monitoring, weak change management and weak decommissioning for AI systems already being trialed in areas like claims triage, loan application processing and fraud disruption.
The sharpest part of the letter is not the general governance language. It is the operating detail. APRA said some firms are already heavily dependent on a single provider for multiple AI use cases, with few tested ways to exit or substitute if that provider fails. It also listed prompt injection, data leakage, insecure integrations, exploit injection and misuse of autonomous AI agents among the common attack paths it is seeing. That is regulator language, but it is also a decent map of what breaks when companies buy agent software before they build the control plane around it.
The control plane here means the boring but load-bearing layer that decides who or what is allowed to act, what it can touch, how its behavior is monitored after launch, and how it gets shut down when something goes wrong. APRA’s warning matters because it suggests the market has been treating AI rollout as a model-selection problem when the harder problem is operational accountability. If an AI agent can trigger actions inside a bank, someone has to own that agent the way they would own a privileged employee account. APRA said firms should keep an inventory of AI tools and use cases, maintain human involvement for high-risk decisions and assign clear accountability. As Super Review reported, APRA expects boards and accountable executives to treat AI as a prudential risk issue rather than a technology side project.
That pressure is not coming only from one Australian regulator. On Monday, the FIDO Alliance announced a new Agentic Authentication Technical Working Group and a standards effort around agent-initiated commerce, arguing that current authentication and authorization systems were built for direct human interaction, not delegated actions initiated by software agents. FIDO is not evidence that banks have solved the problem. It is evidence that the identity layer is now scrambling to catch up.
There is a limit to how far this APRA letter can take the story. The regulator did not name the institutions it reviewed, did not quantify how many failed each control and did not announce a public enforcement action. This is still a supervisory warning, not a courtroom moment. But it is a specific warning, based on live supervisory work, and that alone makes it more useful than the usual parade of vendor white papers about responsible AI.
The pressure now shifts to boards, chief information security officers and vendors selling agent workflows into finance. If they want AI systems making or shaping decisions in lending, claims, fraud operations and customer support, they need to show not just that the models work, but that the software has an owner, an audit trail and an off switch. The firms that cannot answer those questions are about to learn that the real bottleneck in agentic finance is not model access. It is control.