Regulatory Influencer: United States Advances Artificial Intelligence Governance Architecture for Financial Services
Introduction
Rather than introducing sweeping artificial intelligence (AI) legislation, US policymakers are constructing a sector-based governance architecture for AI in financial services, built around voluntary frameworks, supervisory coordination and existing regulatory structures.
On February 18, 2026, the United States Department of the Treasury announced the results of a public-private initiative to strengthen cybersecurity and risk management for AI in financial services. The initiative produced a shared Artificial Intelligence Lexicon and a Financial Services Artificial Intelligence Risk Management Framework.
The framework is operationalized through a Risk and Control Matrix that maps approximately 230 control objectives. These controls are proactive measures that organizations can take to mitigate risk, designed to deliver specific governance outcomes across areas such as model oversight, cybersecurity, and operational resilience. The accompanying Control Objective Reference Guide outlines how the controls can be implemented in practice using a four-stage process of govern, map, measure, and manage, and explains how firms can embed them into existing compliance, risk management, and internal audit processes.
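As an illustration only (the framework does not prescribe any particular tooling), the four-stage lifecycle might be pictured as a simple mapping from each function to the kinds of activities a firm could attach to it. The activity names below are hypothetical examples, not language from the framework:

```python
# Illustrative sketch: the four lifecycle functions with hypothetical
# example activities a firm might attach to each stage.
LIFECYCLE = {
    "govern":  ["define AI policy", "assign accountability"],
    "map":     ["inventory AI systems", "classify data sensitivity"],
    "measure": ["test model bias", "track performance metrics"],
    "manage":  ["remediate issues", "escalate unresolved risks"],
}

def activities_for(function: str) -> list[str]:
    """Return the example activities attached to a lifecycle function."""
    return LIFECYCLE.get(function.lower(), [])

for fn in ("govern", "map", "measure", "manage"):
    print(f"{fn}: {', '.join(activities_for(fn))}")
```

In practice the control objectives in the Risk and Control Matrix would populate such a structure, with each control tied to one or more lifecycle stages.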
The Artificial Intelligence Executive Oversight Group (AIEOG) initiative represents a coordinated public-private effort between the Financial and Banking Information Infrastructure Committee (FBIIC) and the Financial Services Sector Coordinating Council (FSSCC) to develop governance tools and risk management practices for AI in the financial sector.
The Treasury’s initiative comes as AI becomes increasingly foundational to financial services, reshaping products and markets from credit underwriting and fraud detection to algorithmic trading and digital identity. It is also transforming how firms manage compliance, risk, and operations. As AI systems move from experimentation into core business functions, this shift toward operational deployment is driving increased regulatory attention, and the regulatory landscape for financial services is evolving through federal policy initiatives and sector-specific supervisory guidance.
The bigger picture
AI governance and regulatory divergence
The Treasury initiative reflects broader global efforts to establish governance over AI. The global debate over AI regulation increasingly mirrors the technology’s emergence as a domain of geopolitical and economic competition, with governments seeking to shape regulatory standards that influence how and where AI technologies are developed and deployed.
The European Union has adopted a comprehensive legislative regime through the Artificial Intelligence Act, which establishes binding obligations based on risk classification. China has taken a more centralized approach through algorithm registration and oversight requirements for technology providers. Other jurisdictions have introduced national AI strategies or begun incorporating artificial intelligence into existing consumer protection and data governance regimes.
The United States has so far taken a different path. Rather than adopting comprehensive AI legislation, federal policymakers have generally favored sector-specific guidance and supervisory coordination. Several US states, including California, have adopted AI-related laws, and hundreds of additional bills remain under consideration, raising concerns about a fragmented regulatory environment.
Federal policy: coherence over fragmentation
Recent federal policy initiatives have emphasized coordination rather than prescriptive regulation. The January 2025 Executive Order on Artificial Intelligence and the White House’s July 2025 Artificial Intelligence Action Plan both highlight the importance of maintaining national coherence and avoiding a patchwork of state-level AI regulation that could slow innovation.
Within financial services, the Treasury initiative reflects this approach by adapting existing compliance and risk management structures rather than introducing a new standalone regulatory regime for AI. The resulting framework translates broad artificial intelligence risk management principles into structured governance expectations that financial institutions can integrate into established regulatory processes, including:
- Maintaining a documented regulatory risk register to track evolving AI-related legal and supervisory guidance.
- Implementing formal responsibility (RACI) matrices to assign compliance ownership across legal, risk, and technology teams.
- Establishing an AI system inventory detailing business purpose, data sensitivity, and regulatory impact.
- Instituting third-party risk management procedures, with due diligence and oversight for vendors providing AI models.
The Treasury’s framework adapts the existing NIST AI Risk Management Framework to provide specific, actionable guidance for financial services firms, including banks. The NIST framework, released in January 2023 by the National Institute of Standards and Technology (NIST) within the Department of Commerce, is broad-based and voluntary. The Treasury’s framework builds on this structure by translating its high-level governance principles into sector-specific control expectations that financial institutions can integrate into existing risk and compliance functions. For example, it introduces concrete implementation tools such as a formal regulatory monitoring program and a centralized compliance registry, enabling firms to track AI-related legal obligations, assign accountability, and evidence compliance within existing governance structures.
The framework’s operational controls also emphasize documented accountability. AI risk often spans legal, compliance, technology, and business units. The Control Objective Reference materials outline practical mechanisms for assigning and enforcing governance responsibilities, including RACI-style responsibility matrices, formal escalation procedures for unresolved compliance issues, and centralized registers that track ownership of AI-related obligations.
For example, where a bank deploys an AI-driven credit decisioning model, responsibility for model validation, bias testing, and regulatory compliance can be clearly assigned across risk, compliance, and technology teams, with outstanding issues escalated and tracked to resolution. In financial services, unclear ownership of risk is itself a supervisory concern. By helping firms explicitly assign and periodically validate AI-related responsibilities, the framework helps avoid fragmentation of accountability.
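The RACI pattern described above can be sketched as a mapping from governance tasks to roles, with a simple check that accountability is never fragmented. This is an illustration only; the task names, teams, and the `validate_raci` helper are hypothetical examples, not framework requirements:

```python
# Hypothetical RACI assignments for an AI credit decisioning model.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "model_validation":      {"Risk": "A", "Technology": "R", "Compliance": "C"},
    "bias_testing":          {"Compliance": "A", "Risk": "R", "Technology": "C"},
    "regulatory_compliance": {"Compliance": "A", "Legal": "R", "Risk": "I"},
}

def validate_raci(matrix: dict) -> list[str]:
    """Return tasks that lack exactly one Accountable owner."""
    issues = []
    for task, roles in matrix.items():
        accountable = [team for team, role in roles.items() if role == "A"]
        if len(accountable) != 1:
            issues.append(task)
    return issues

print(validate_raci(raci))  # [] -> every task has a single accountable owner
```

The single-Accountable rule in the check mirrors the supervisory concern the article notes: shared or missing ownership of a risk is itself a governance failure, so it is worth detecting mechanically.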
This approach is also reflected in the activities of other federal agencies. For example, the Securities and Exchange Commission established an agency-wide Artificial Intelligence Task Force in 2025 to coordinate oversight and ensure that AI-related risks are addressed across existing regulatory mandates.
Integrating AI into financial services compliance
For financial institutions, the adoption of artificial intelligence creates a structural governance challenge. AI systems often evolve rapidly, rely on external data sources and operate across multiple business functions, including technology, risk, compliance and business operations. As these systems can directly influence regulated outcomes such as credit decisions, fraud monitoring or trading strategies, regulators expect firms to demonstrate clear oversight and accountability. For example, the Consumer Financial Protection Bureau has published guidance on credit denials involving AI, requiring lenders to be able to explain and document the basis for automated decisions to ensure compliance with fair lending and consumer protection rules. Similarly, in 2024 the Financial Industry Regulatory Authority reminded firms that its supervision rules and communication standards apply equally when firms use AI tools, requiring firms to address specific risks such as data integrity and model governance.
The Treasury framework attempts to address this challenge by providing practical governance tools. These include standardized terminology through the AI Lexicon, a maturity-based adoption questionnaire, and a detailed risk and control matrix containing approximately 230 control objectives.
By aligning definitions across legal, technical and business functions, the Lexicon reduces ambiguity in risk assessment and regulatory reporting, helping compliance teams identify, categorize and monitor AI systems more coherently. Rather than imposing new regulatory obligations immediately, the framework is intended to integrate AI into existing governance processes. In practice, this may include maintaining centralized inventories of AI systems, assigning clear responsibility for model oversight, and conducting ongoing testing for bias, performance reliability and cybersecurity vulnerabilities. The Adoption Stage Questionnaire reinforces this scalable approach by calibrating control expectations to an institution’s level of AI maturity, enabling firms to expand deployment progressively without triggering disproportionate compliance burdens.
In this way, the initiative reflects a wider supervisory expectation that AI governance will increasingly be assessed through established regulatory domains such as model risk management, operational resilience, consumer protection and operational risk oversight.
Why should you care?
The Treasury initiative provides an early signal of how AI governance may increasingly be incorporated into existing supervisory frameworks. Rather than introducing a standalone regulatory regime for artificial intelligence, US policymakers appear to be focusing on integrating AI risk management into established areas of financial regulation such as model risk management, operational resilience, and cybersecurity. As a result, institutions deploying AI systems may face growing expectations to demonstrate that these technologies are subject to the same governance and oversight standards applied to other critical risk domains.
Artificial intelligence also raises important consumer protection considerations. Banks deploying AI underwriting or decisioning tools may face exposure under fair lending, the Fair Credit Reporting Act (FCRA) and Unfair, Deceptive, or Abusive Acts or Practices (UDAAP) statutes where bias testing or explainability controls are insufficient.
The framework also highlights the importance of documentation and accountability in the use of artificial intelligence. Financial institutions may increasingly be expected to maintain inventories of AI systems, assign clear responsibility for their governance and demonstrate structured oversight of model performance, bias testing and risk monitoring. From a supervisory perspective, the ability to show how artificial intelligence systems are documented, monitored and integrated into existing compliance processes may become a significant component of regulatory examinations.
At an operational level, the initiative underscores the need for stronger cross-functional governance. AI systems often span multiple business units, including lending, marketing, operations, call centers, legal, and compliance. Ensuring that these systems are deployed responsibly may therefore necessitate clearer coordination across these functions, as well as defined escalation and oversight processes.
Finally, greater clarity on what effective governance outcomes look like for organizations employing AI at different scales may increase confidence among boards, compliance teams and supervisors, potentially enabling financial institutions to deploy AI systems more quickly, while maintaining appropriate safeguards.
Next steps
Although the framework is non-binding, it provides a useful reference point for institutions seeking to strengthen their AI governance processes.
In anticipation of evolving supervisory expectations, financial institutions should begin taking practical steps to formalize their AI governance frameworks, including:
- Assessing where AI systems operate within the organization and how they intersect with existing governance frameworks, including compliance management systems, regulatory change management, model risk management, third-party risk management, and wider enterprise risk oversight processes.
- Reviewing whether responsibility for AI governance and compliance is clearly documented across relevant business functions.
- Evaluating whether existing model risk management, audit and third-party oversight frameworks adequately capture AI systems.
- Monitoring further regulatory developments as federal agencies continue to refine supervisory expectations around artificial intelligence.