AI In Spotlight As MPs Press Financial Services Industry On Risks To Customers

May 12, 2025
Lawmakers in the UK have raised concerns over the use of artificial intelligence (AI) in financial services, warning that without robust safeguards it could put vulnerable consumers at risk and undermine trust in the sector.

During a hearing last week, AI emerged as a priority for the Treasury Select Committee in the UK’s House of Commons, with Labour, Conservative and Liberal Democrat members of parliament (MPs) all raising concerns about its use in the banking industry. 

The Treasury Committee’s evidence session, part of its ongoing inquiry into AI in financial services, explored the risks and opportunities posed by the rapid integration of AI technologies. 

Last year, a joint survey by the Bank of England and the Financial Conduct Authority (FCA) found that 75 percent of firms in the sector are already using AI, with another 10 percent planning to adopt it within the next three years, showing that this is a live issue.

Although witnesses appearing at the hearing remained relatively cool-headed about the application of AI in financial services, it was clear that MPs plan to scrutinise the issue closely. 

Cybersecurity and big tech

Some of the most pressing issues were unsurprising — especially considering the risks of AI tools being deployed by criminals, as covered by Vixio in our AI Outlook.

For example, Lola McEvoy, a Labour MP, asked about the implications of generative AI for cybersecurity. 

Industry leaders warned of increasing threats and of the need for proportionate investment in defence mechanisms to match the scale of the risk.

“The technology is available as much to us to benefit from and use as it is to those who want to use it in the wrong way,” said Jana Mackintosh, managing director, payments and innovation at UK Finance. 

“Within cybersecurity and operational risk management, again when it comes to financial services we have great frameworks and regulations to help us think that through.”

She concluded that “the introduction of AI may bring with it speed or slightly new risks to consider but ultimately that is an arms race we have had for a while”.

The potential for big tech to act as a barrier to innovation also emerged as a key issue.

“The likes of Alphabet, Meta, and Apple are all going to want to buy whatever they anticipate is going to be the best technology in this field and I wonder if that is a specific worry of yours or if you think that is just a normal worry that you have always had,” said Liberal Democrat MP Bobby Dean.

Witnesses agreed that this could be a risk for firms to consider. 

Consumer issues 

The topic of consumer protection was also a running theme, with MPs concerned that banks could use AI to de-risk customers, or screen out potential employees, they deem undesirable.

“Is there not a risk that, given the profiling that you can now do both of your customers and of your potential recruits, notwithstanding the obligations of the consumer duty there is going to be a huge temptation for large organisations to say, ‘We do not want people with these characteristics’ and they can be screened out without anyone knowing about it? How do we give confidence that the industry will not be doing that?” asked Conservative MP and former City minister John Glen. 

Mackintosh acknowledged the risk of bias, but emphasised that firms are committed to risk management and that existing regulatory frameworks, including the Consumer Duty, apply regardless of whether decisions are made by humans or algorithms. 

“Applying a technology tool such as AI should not change that risk culture, that kind of behaviour that you would expect within the organisation,” she said. 

“There may be greater temptations, but within the risk frameworks that you employ to recruit and to manage employees the tool itself as a technology should not change the culture and behaviour of an organisation.”

She added that “it is not well understood enough or developed enough for us to feel certain that there are not any risks associated with deploying that confidently and comfortably.” 

“We do not see, certainly in the use cases that we have explored across banking and payments, that those are being used as use cases.”

She also pointed out that, as things stand, the technology and the adoption have been focused predominantly on “either well-understood areas such as risk management and fraud detection, or emerging areas where there are low risk use cases with human monitoring over those use cases”. 

She added that there is the capability to regulate away higher-risk use cases as well, noting work that the EU has done with the AI Act. 

MPs also questioned the opacity of AI models, referencing the Dutch tax authority scandal in which an algorithmic fraud detection tool was found to be disproportionately targeting families with protected characteristics. 

This led to wrongful accusations, and in some instances even suicides, and Labour MP Yuan Yang asked whether regulators should strive for greater transparency from firms using AI, given the “black box” nature of many models.

“Under the current regulation regime, banks have to make many things transparent confidentially to regulators,” she said.

“Is there a need for an increased level of transparency given the increased level of opacity of these models? If so, what kinds of transparency and data sharing do you think would help customers and regulators feel confident in these new models?”

David Otudeko of the Association of British Insurers reiterated that compliance with the Consumer Duty applies to all aspects of a business model, including AI-driven decisions.

“If it is built into your business model you have to comply with it, and that includes the outputs and the decisions you make as a result of using AI.”

“It is not distinct from other elements of the decision making of how an insurer decides on price, value and so on. As we know, the point around the price of insurance products is driven by quite a few other things aside from the use of AI.”

Mackintosh argued that firms are mainly interested in deploying AI for one reason — the benefit that it could bring to consumers. 

“Like a lot of this being deployed across financial services — whether AI, quantum, open banking, some of the new forms of money or programmability — for consumers a lot of this happens behind the scenes,” she said. 

She concluded that these are technologies that help improve operational activities or product offerings. 

“Consumers want to see good outcomes. They want to see a product that serves their needs. They want to see a product that is competitive. They want to see a product that can address whatever need that they have.”

“That is where we sometimes need to get back to when we talk about the interaction of the technologies with consumers,” she said. 

“You deploy these technologies for a reason, and that reason in financial services is making sure that you can provide a better product to your customers.”

Westminster remains sceptical about AI

The UK government and regulators such as the FCA have tended to comment positively on the possibilities of AI in recent months. 

The government certainly sees the technology as an opportunity to harness economic benefits and productivity. 

For example, Prime Minister Keir Starmer said it “will drive incredible change in our country”, while suggesting that “the AI industry needs a government that is on their side, one that won’t sit back and let opportunities slip through its fingers”. 

Finance minister Rachel Reeves, meanwhile, argued that “AI is a powerful tool that will help grow our economy, make our public services more efficient and open up new opportunities to help improve living standards”.

However, the Treasury Select Committee seems less sure, and its MPs have taken a more cautious approach. 

The government is hammering home the notion of economic opportunity, global competitiveness and industrial strategy, yet the MPs on the committee are highlighting issues such as bias, exclusion and possible consumer harm. 

The case for guardrails against the most severe risks may be a key theme in the coming years.

Although the UK government has so far steered away from a regulatory regime like the EU’s AI Act, it may find that such a framework is necessary to ensure that high-risk use cases are not exploited. 
