75 Percent Of UK Firms Use AI But Are Held Back By Regulation, Says FCA

November 26, 2024

A new survey by the Bank of England (BoE) and the Financial Conduct Authority (FCA) has found that UK firms are adopting AI across a range of use cases, but remain cautious about regulatory constraints.

In the third instalment of their Artificial Intelligence and Machine Learning Survey, the BoE and FCA surveyed 118 regulated firms to understand their use of AI in financial services.

The respondents include banks, insurance and payments firms, with UK deposit-taking institutions and building societies making up almost half of the total.

Overall, 75 percent of firms said they are already using AI, and a further 10 percent said they are planning to use the technology over the next three years.

This is a significant increase on the figures from the 2022 survey, when 58 percent of firms said they were using AI and 14 percent said they were planning to do so.

The insurance sector reported the highest percentage of firms currently using AI (95 percent), closely followed by international banks (94 percent).

Financial market infrastructure firms, a category that includes payments firms in the survey methodology, had the lowest percentage of respondents currently using AI (57 percent).

The results signal a high degree of optimism and enthusiasm for the use of AI among financial institutions, but also a clear recognition of its risks.

Around two-thirds of firms rated their use of AI as “low materiality”, while only 16 percent of firms reported using AI for “high materiality” purposes.

According to the survey methodology, “materiality” was defined as the application’s impact on the firm’s performance.

This could be quantitative, such as book or market value exposure or the number of customers impacted, or qualitative, such as importance in informing business decisions and potential impact on solvency or profitability.

High take-up but large knowledge gaps

Despite their enthusiasm for the technology, firms did not shy away from admitting to a lack of experience and confidence in using AI.

Almost half of respondent firms said they have only a “partial understanding” of the AI technology they are using, while around a third said they have a “complete understanding” of it.

The BoE and FCA attribute this lack of knowledge to the widespread use of “third-party implementations” of AI across the financial sector.

As per the survey’s definition, this refers to a use of AI where most of the development or deployment process is implemented by a third party.

A third of all AI use cases were third-party implementations — more than double the percentage (17 percent) from the 2022 survey.

“This supports the view that third-party exposure will continue to increase as the complexity of models increases and outsourcing costs decrease,” the regulators said.

There is also a potential concentration risk in the provision of these AI applications, with the top three third-party providers accounting for 73 percent, 44 percent and 33 percent of all cloud, model and data providers respectively.

Benefits and constraints

The area with the highest percentage of respondents using AI is optimisation of internal processes (41 percent), followed by cybersecurity (37 percent) and fraud detection (33 percent).

Over the next three years, an additional 36 percent of respondents expect to use AI for customer support (including chatbots), 32 percent for regulatory compliance and reporting, and 31 percent for fraud detection.

Among firms currently using or planning to use AI, the most common perceived regulatory constraint is data protection and privacy laws.

One in four firms rated data protection and privacy as a “large constraint”, while almost a third rated it as a “medium constraint”.

The next most common perceived constraint was cybersecurity rules, followed by the FCA’s Consumer Duty rules.

The results suggest that although firms see the potential benefits of AI, they are aware of the regulatory risks and continue to exercise caution when implementing the technology.

For example, the survey found that 84 percent of firms already have a designated person who is accountable for their AI framework.

Firms’ proactiveness in ensuring accountability for AI use also suggests that they have been keeping a close eye on potential options for AI regulation in the UK.

As covered by Vixio, one of the key proposals of an AI bill introduced by Christopher Holmes, a member of the House of Lords, is that every business “developing, deploying or using” AI must have a designated AI officer.

This officer would be required to ensure safe, ethical, unbiased and non-discriminatory use of the technology, and would be made accountable in cases where these standards are breached.

Lord Holmes’ bill came unstuck following the snap general election in July this year, however, and it has not been re-introduced to parliament since.

Holmes subsequently told Vixio that there is still a “significant gap” in legislation with regard to AI, which lawmakers cannot ignore.

"AI is already impacting people's lives, and there are numerous issues that need to be addressed,” he said.

Take a look at Vixio's AI Outlook here.
