US Treasury's New AI Risk Framework Gives Financial Institutions 230 Controls to Manage AI Systems
The US Treasury has published a comprehensive AI risk management framework for financial institutions, offering a structured approach to identifying, evaluating, and governing AI risks across the sector.
The Financial Services AI Risk Management Framework (FS AI RMF) was developed in collaboration with more than 100 financial institutions and industry organisations, with input from regulators and technical bodies. It defines 230 control objectives organised around four core functions adapted from the NIST AI Risk Management Framework: govern, map, measure, and manage.
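The grouping of control objectives under the four NIST-derived functions can be pictured with a minimal sketch. The individual control names below are hypothetical illustrations, not taken from the framework itself:

```python
# Hypothetical sketch: control objectives grouped by the four core functions
# adapted from the NIST AI RMF. Control names are illustrative only.
CONTROLS = {
    "govern": ["Define AI accountability roles", "Approve model use policies"],
    "map": ["Inventory AI systems in use", "Document data dependencies"],
    "measure": ["Test for algorithmic bias", "Monitor output reliability"],
    "manage": ["Remediate identified risks", "Retire underperforming models"],
}

def controls_for(function: str) -> list[str]:
    """Return the control objectives filed under a given core function."""
    return CONTROLS[function.lower()]

print(controls_for("govern"))
```

In the real framework each function would carry many more objectives (230 in total); the point of the structure is that every control is traceable to exactly one of the four functions.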
The framework addresses risks that existing technology governance frameworks do not cover, including algorithmic bias, limited transparency in decision processes, cyber vulnerabilities, and complex dependencies between systems and data. Of particular concern are large language models, whose behaviour can be difficult to interpret or predict.
Unlike traditional software, which behaves deterministically for a given input, AI systems can produce outputs that vary with context, data, and model configuration. The framework requires financial institutions to ensure that AI outputs are reliable, that systems are protected against cyber threats, and that decisions can be explained when they affect customers or carry regulatory relevance.
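The contrast between deterministic software and context-dependent AI output can be sketched as follows; the toy "model" simply samples from a small vocabulary and stands in for a real language model:

```python
import random

def deterministic_fee(balance: float) -> float:
    """Traditional software: the same input always yields the same output."""
    return round(balance * 0.01, 2)

def toy_generative_model(prompt: str) -> str:
    """Stand-in for an LLM: output is sampled, so it varies across calls."""
    vocabulary = ["approve", "deny", "review", "escalate", "defer"]
    return random.choice(vocabulary)

# The deterministic function is trivially repeatable...
assert deterministic_fee(1000.0) == deterministic_fee(1000.0)

# ...while repeated calls to the sampled model typically disagree,
# which is why reliability and explainability need explicit controls.
outputs = {toy_generative_model("loan application #42") for _ in range(200)}
print(sorted(outputs))
```

Testing such a system means characterising a distribution of outputs rather than checking a single expected value, which is the gap the framework's reliability controls are aimed at.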
The framework also introduces a four-stage AI adoption maturity model. Organisations are classified based on their use of AI, from limited deployment in traditional predictive models to core business process integration. Each stage carries different control requirements appropriate to the risk profile.
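A classification along the maturity model might look like the sketch below. Only the first and last stage descriptions come from the framework as reported; the middle stages and the control-depth labels are illustrative assumptions:

```python
# Hypothetical sketch of the four-stage AI adoption maturity model.
# Stages 2-3 and the control-depth labels are assumptions for illustration.
MATURITY_STAGES = {
    1: "Limited deployment in traditional predictive models",
    2: "Expanded use in internal decision support (assumed)",
    3: "AI embedded in customer-facing products (assumed)",
    4: "Core business process integration",
}

def required_control_depth(stage: int) -> str:
    """Map a maturity stage to an illustrative control intensity."""
    if stage not in MATURITY_STAGES:
        raise ValueError(f"unknown maturity stage: {stage}")
    return {1: "baseline", 2: "enhanced", 3: "comprehensive", 4: "full"}[stage]

print(required_control_depth(4))
```

The design point is that control requirements scale with the stage: an institution at stage 1 is not held to the same obligations as one whose core processes depend on AI.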
The accompanying Guidebook recommends maintaining incident response procedures specific to AI systems and creating a central repository for tracking AI incidents, helping organisations detect failures and improve governance over time.
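A central incident repository of the kind recommended could be as simple as the sketch below; the schema (system, description, severity) is an assumption, not the Guidebook's:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    """One record in a central AI incident repository (illustrative schema)."""
    system: str
    description: str
    severity: str
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class IncidentRepository:
    """Minimal central store for tracking AI incidents over time."""

    def __init__(self) -> None:
        self._incidents: list[AIIncident] = []

    def record(self, incident: AIIncident) -> None:
        self._incidents.append(incident)

    def for_system(self, system: str) -> list[AIIncident]:
        """Filter incidents per AI system to support trend analysis."""
        return [i for i in self._incidents if i.system == system]

repo = IncidentRepository()
repo.record(AIIncident("credit-scoring-model", "Unexpected score drift", "high"))
print(len(repo.for_system("credit-scoring-model")))
```

Even a minimal store like this enables the feedback loop the Guidebook describes: incidents accumulate per system, so recurring failure patterns become visible and governance can be adjusted.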
The FS AI RMF positions itself as a sector-specific extension of the broader NIST framework, adding controls and practical implementation guidance that reflect the regulatory expectations placed on financial services.