AI Governance 'Crucial', Warn UK Financial Regulators

February 22, 2022
A new report published by the Bank of England and the Financial Conduct Authority (FCA) has emphasised the need for oversight and standards in the City's response to artificial intelligence (AI).

After a year-long investigation into financial services’ use of AI, the UK’s banking watchdogs have stressed that firms and the authorities must set appropriate standards in the field.

The Artificial Intelligence Public-Private Forum (AIPPF) was launched in October 2020 and has brought together experts from financial services, alongside the technology sector and academia.

Via quarterly meetings, the forum has sought to share information and aid regulators in understanding the challenges that could arise from using AI in the industry.

Through the guidance of the two regulators, forum participants have scrutinised the merits, challenges and governance of AI within financial services, considering the increasingly important role it is playing in areas of the sector, such as fintech.

“Artificial intelligence (AI) is a rapidly evolving and powerful tool which financial services firms are using in an increasing number of ways,” says the report, noting that its use can bring benefits to consumers, businesses and the wider economy.

However, AI can also amplify risks and create new challenges, the report cautions. “The AI models used in the financial system are becoming increasingly sophisticated. Their speed, scale, and complexity, as well as their capacity for autonomous decision-making, have already sparked considerable debate.”

Recent data from the UK government has suggested that around 15 percent of all businesses have adopted at least one AI technology, which translates to 432,000 companies.

In addition, around 2 percent of businesses are currently piloting AI and 10 percent plan to adopt at least one AI technology in the future, equating to 62,000 and 292,000 UK businesses respectively.

“Increasing adoption of AI is inevitable as firms look to improve the efficiency of processes that have become increasingly manual and costly as a result of snap reactions to regulatory pressure over the last ten years or so,” argued David Brain, a partner at Avyse Partners, a UK-based regulatory consultancy.

Brain told VIXIO that the conclusions that the report comes to regarding data, model risk and governance are all very sensible. “The report reads well from start to finish. Whether you are an AI expert or novice, the logical flow of the report makes it easy to understand and easy to grasp the key considerations firms should be thinking about.”

Governance and oversight

The report concludes that AI governance standards should be set by a centralised body within each firm, that overall responsibility for AI should be held by one or more senior managers, and that individual business areas should be accountable for outputs and for adherence to those governance standards.

Firms should also ensure that there is an appropriate level of understanding and awareness of AI’s benefits and risks throughout the organisation.

The regulators have considered the regulatory framework for AI use. “A key focus of AI regulation should be on how AI affects decision-making,” according to the report.

With this, regulators should provide greater clarity on the types of outcomes they expect for AI governance and controls.

However, enforcing metrics around outcomes may prove to be challenging in practice, the report warns. Establishing an auditing regime for AI practitioners and the professionalisation of data science would help foster wider acceptance of and trust in AI systems.

Tom Whittaker, a senior associate at Burges Salmon, said that firms would likely welcome more regulatory oversight, depending on what this involves.

“It is crucial to financial services as a whole that firms deploy and develop AI safely so that consumers trust AI,” he told VIXIO. “Regulators and regulations play an important role in building that trust.”

Yet, regulators also recognise the need to ensure that any regulatory burden is proportionate to the AI system’s risks and does not stifle innovation, he pointed out. “The report anticipates more regulatory clarifications than specific regulatory oversight.”

The report does not call for specific regulations or legislation for financial services.

According to Whittaker, this suggests that, in the financial services sector at least, there is not the same appetite for AI-specific legislation or regulation of the kind seen in the EU’s proposed AI Act.

“However, the possibility of regulation or legislation for specific sectors or use-cases which impact financial services firms, such as restricting how AI is used in the employment context, cannot be discounted,” he said.

Brain said that financial players may not endorse tighter AI regulation. “This may serve to constrict innovation.”

That said, such players would likely welcome regulators laying out their broad views and examples of what they consider to be good practice, he continued, pointing out that frameworks can be designed with these in mind. “Firms would also likely appreciate regulators focusing on purpose and outcomes, as is being seen in the AML space.”

“If the focus is on purpose, firms can cut through the detail of regulation and look at good outcomes from the beginning, something that should transcend the specifics of regulatory requirements,” he said.

Managing mistakes

The report specifically outlines the risks related to payments. For example, with anti-money laundering and fraud protection, mistakes that result from AI could lead to payments or transactions being wrongly denied or granted.

These outcomes would not affect only the consumer: the firm in question could face liability for denying payments or services to legitimate customers, as well as reputational and potential conduct risk.

This feeds into the regulators’ concern about financial exclusion arising from the increasing use of AI.

AI systems may prevent certain customers from accessing a financial product or service, the report warns. “They may restrict customers’ ability to get credit or insurance cover; their ability to access certain investment products or even their ability to enter into a relationship with the financial institution.”

These systems may also prevent customers from enjoying one or more benefits that they can “reasonably expect” from an existing product or relationship, such as making claims against an insurance policy, or making payments and other transactions.

Consumers are at risk of other failings too. For example, consumers may experience unfavourable commercial outcomes compared with others when applying for or using a product or service.

Customer duty

AI systems could also increase the chance of a breach in financial institutions’ fiduciary duty to consumers, such as the fair treatment principle, affordability, treatment of vulnerable consumers and fair complaints processing.

Such cases could also trigger a negative impact for firms, such as financial losses (via a poor credit algorithm), reputational risk (for example, if automated credit scores resulted in the mistreatment of vulnerable or minority customers), and the inevitable risk of fines that can come with regulatory failings, such as a failure to maintain people’s data protection rights.

AI has already filtered into payments in the UK and worldwide. It is most likely to be found in fraud detection, and in improving the speed and efficiency of payment processing by reducing the extent to which humans need to be involved.

AI can facilitate straight-through processing of payments, by automating workflows, providing decision support and applying image recognition to documents. Developments in speech recognition technology also mean that banks can increasingly process payments initiated via voice, where the initiator has used a smartphone or smart speaker.
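As a purely illustrative sketch, not drawn from the report, a straight-through-processing pipeline of the kind described above might combine hard rules with model-based decision support, escalating only uncertain payments for human review. All field names and thresholds here are hypothetical:

```python
# Illustrative only: automated payment screening with human escalation.
# Field names ("fraud_score", "sanctions_hit") and thresholds are hypothetical.

def screen_payment(payment: dict) -> str:
    """Return 'approve', 'deny', or 'review' for a payment record."""
    if payment["sanctions_hit"]:       # hard rule: sanctioned parties are always denied
        return "deny"
    if payment["fraud_score"] >= 0.9:  # high model score: automatic denial
        return "deny"
    if payment["fraud_score"] >= 0.5:  # uncertain: escalate to a human reviewer
        return "review"
    return "approve"                   # low risk: straight-through processing

payments = [
    {"amount": 120.0, "fraud_score": 0.1, "sanctions_hit": False},
    {"amount": 950.0, "fraud_score": 0.7, "sanctions_hit": False},
    {"amount": 300.0, "fraud_score": 0.2, "sanctions_hit": True},
]
print([screen_payment(p) for p in payments])  # ['approve', 'review', 'deny']
```

The escalation band is the point the report’s governance concerns bear on: where the model alone decides, mistakes can wrongly deny or grant payments, which is why the regulators focus on how AI affects decision-making.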

Chatbots are already able to respond to simple queries from clients and carry out basic tasks, such as creating or cancelling a standing order or direct debit, or giving more information on a payment that a customer does not recognise.

As AI continues to be rolled out to improve business processes and customer experience, increasing oversight will become inevitable for the protection of both users and the firms deploying it.
