The Treasury Select Committee’s report highlights gaps in accountability, stress testing and third-party controls, signalling that financial institutions should prepare for clearer rules and increased supervisory expectations.
The committee concluded that the UK’s current regulatory approach to the use of AI by financial institutions creates significant risks to consumers and financial stability, potentially undermining the benefits the technology is intended to deliver.
It noted that 75 percent of UK financial services firms already use AI, with adoption highest among insurers and internationally active banks, underscoring the systemic relevance of the issue.
The UK government and much of the financial services industry argue that AI offers significant benefits, including faster customer service and enhanced cyber-defences, the latter particularly in support of financial stability.
The committee does not dispute these benefits, but questions whether the current regulatory framework is sufficient to manage the associated risks.
Its findings were informed by 84 written submissions, correspondence from six major AI and cloud providers and four oral evidence sessions.
To date, UK regulators have focused on monitoring AI adoption, including tracking complaints and social media comments and engaging regularly with market participants.
However, the committee warned that this approach is creating regulatory uncertainty and identified several key areas of risk:
- Lack of transparency in AI-driven credit and insurance decision-making.
- AI-enabled product tailoring and automated decisions risk increasing financial exclusion, particularly for vulnerable consumers.
- Unregulated financial advice generated by AI tools risks misleading or misinforming consumers.
- Wider use of AI may increase fraud risks.
The committee also highlighted the need for greater clarity on accountability when AI-related failures occur, noting that although the Bank of England and the Financial Conduct Authority (FCA) conduct cyber stress testing, neither undertakes AI-specific cyber or market stress testing.
The report also criticised the UK’s slow progress in implementing the critical third parties (CTP) regime, with HM Treasury yet to designate its initial list despite financial institutions’ reliance on a small number of AI and cloud providers.
On publication, the committee warned: “By adopting a wait-and-see approach, the major public financial institutions, which are responsible for protecting consumers and maintaining stability in the UK economy, are not doing enough to manage the risks presented by the increased use of AI in the financial services sector.”
Boosting clarity
For financial institutions, the lack of clarity on AI regulation creates material operational and compliance risk.
Under the current framework, the FCA is responsible for consumer protection, market integrity and competition, while the Bank of England is responsible for maintaining monetary and financial stability, primarily through its Financial Policy Committee (FPC).
The committee was clear in its analysis: the current approach neither adequately counters the risks posed by AI nor ensures that market participants know exactly what their obligations are when using the technology.
It made three specific recommendations:
- By the end of 2026, the FCA must release practical guidance for firms on how existing consumer protection rules apply to their use of AI, including expectations regarding senior management accountability and assurance.
- The Bank of England and the FCA must introduce AI-specific stress testing.
- By the end of 2026, HM Treasury must designate the major AI and cloud providers as CTPs.
These steps would constitute a significant change to the way financial institutions’ use of AI is governed and supervised.
They would give firms full and clear information on what they need to do to be compliant, where their systems and processes are vulnerable under stress and which of their external suppliers, if disrupted, could pose systemic risks to the UK financial services industry.
Protection against major AI-related incidents
Given that the UK does not currently have AI-specific legislation or AI-specific financial regulation, the role of the individual regulators is central to protecting individuals and the system from harm.
In its written statement to the committee, the FCA stated that it “aims to enable the safe and responsible use of AI within financial services, realising the potential benefits of AI for markets and consumers while balancing the risks. Our approach is principles-based and outcomes focused.”
This stance reflects the FCA’s longstanding preference for outcomes-based regulation over prescriptive rules, but the committee’s findings suggest that this approach has not kept pace with the speed or scale of AI adoption across financial services.
On the release of the report, Dame Meg Hillier, chair of the Treasury Select Committee, said:
“Firms are understandably eager to try and gain an edge by embracing new technology, and that’s particularly true in our financial services sector which must compete on the global stage.
“The use of AI in the City has quickly become widespread and it is the responsibility of the Bank of England, the FCA and the Government to ensure the safety mechanisms within the system keep pace.
“Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.”
UK financial institutions already using, or considering adopting, AI tools will need to monitor closely how regulators respond to the committee’s recommendations, particularly given the growing political pressure to demonstrate effective oversight.
Practical guidance from the FCA will increase clarity, but it will also require firms to review their systems and practices and is likely to increase compliance activity, raising costs.
Similarly, AI-specific stress testing will provide insight into areas of weakness, which is a positive for firms, but will bring with it the cost of remedying issues and reinforcing protections.
The authorities have also been clear that the designation and direct oversight of CTPs do not reduce financial institutions’ accountability: firms retain responsibility for their own operational resilience, outsourcing and third-party risk management.
Although regulators may not implement all of the committee’s recommendations in full or on the proposed timeline, the direction of travel is clear: firms should assume greater scrutiny of AI use is coming and prepare accordingly.
Those that treat the report as an early warning and move now to strengthen governance, testing and third-party controls will be better positioned when regulators translate the committee’s criticism into concrete supervisory action.