Singapore has published an AI Risk Management Toolkit that explicitly covers agentic AI — systems that act autonomously without requiring human approval at each step — marking a shift from principles-based guidance to an enforceable operating discipline for the financial sector.
The Monetary Authority of Singapore released the toolkit on March 20, completing Phase 2 of Project MindForge, a multi-year effort developed collaboratively by 24 banks, insurers, and capital market firms. That co-development model is the structural novelty. DBS, Julius Baer, OCBC, Standard Chartered, UOB, Income Insurance, Prudential, BlackRock, Citi, HSBC, MSIG, and Microsoft participated in drafting the toolkit alongside MAS, according to The Business Times. Most governance frameworks are handed down; this one was negotiated. Whether that produces guidance that is actually implementable, or simply distributes compliance ownership across enough firms that nobody owns it, remains an open question.
The toolkit is organized around a four-part Operationalisation Handbook — Scope and oversight; AI risk management; AI lifecycle management; Enablers — alongside an Executive Handbook addressing 17 specific governance considerations, from board accountability and AI inventories to third-party risk, use-case-level controls, and employee training. MAS had launched a public consultation on proposed AI Risk Management Guidelines in November 2025 and is currently reviewing responses. Kenneth Gay, MAS's Chief FinTech Officer, said the toolkit "marks a major step forward in our journey to ensure the responsible adoption of AI in finance" — language that signals enforcement intent, not voluntary aspiration.
The shift from MAS's 2018 FEAT principles to this toolkit represents a move from values-based AI ethics to an enforceable operating discipline. As Kovrr's analysis notes, MAS is explicitly defining AI scope to include AI agents — systems that learn and infer from inputs to generate outputs influencing physical or virtual environments — extending beyond traditional AI and generative AI. That is the forward-looking bet: that agentic AI is not a future governance problem but a current operational reality requiring concrete controls.
The timing creates an interesting transatlantic contrast. The U.S. Treasury and FSOC launched their AI Innovation Series on March 23, three days after MAS's announcement, with Treasury Secretary Scott Bessent characterizing failure to adopt AI as its own risk — a signal that Washington's posture is adoption-enabling rather than constraint-focused. Singapore's approach is the opposite bet: that prescriptive rules, co-developed with industry, produce safer deployment than letting financial institutions figure it out.
What makes this worth watching beyond Singapore is portability. The framework's four-part structure provides a template that other regulated sectors could adapt, and a governance framework co-authored by the firms it will regulate is a more plausible starting point than one drafted by a single regulator working alone.
The test will come when MAS's proposed guidelines move from voluntary toolkit to binding rules. The BuildFin.ai initiative will house an AI risk management workgroup comprising MindForge consortium members and other practitioners to develop implementation resources as the framework evolves. The four-part structure signals that MAS is treating agentic AI as a current operational governance problem — not a hypothetical future risk requiring further study.