A member of the US Federal Reserve Board has argued in favour of a wait-and-see approach to regulating artificial intelligence (AI), given the dangers of imposing rules on the technology too early in its development.
Governor Michelle Bowman, speaking in Washington, DC last week, said it is too soon for the US to regulate AI when the risks and benefits of the technology are still unclear.
Focusing on the use of AI in the financial sector, Bowman acknowledged the arguments of both proponents and sceptics of the technology.
To its proponents, AI is set to unlock “momentous” gains in efficiency and productivity, rivalling those of the industrial revolution, she said.
To its critics, however, the technology is set to introduce new and unpredictable risks into the financial sector, including new threats to cybersecurity and new forms of AI-enabled fraud.
“Over time, it has become clear that AI’s impact could be far-reaching,” said Bowman, “particularly as the technology becomes more efficient, new sources of data become available, and as AI technology becomes more affordable.”
'General principles' for regulating new technology
Comparing AI to earlier forms of innovation, Bowman outlined several “general principles” that regulators should apply when approaching any new technology.
First, regulators must understand the technology before they consider whether and how to devise a regulatory approach.
A “foundational element” of this process is the development of staff expertise in the new technology, a capacity that the Federal Reserve is currently working, with some difficulty, to build.
“As this technology becomes more widely adopted throughout the financial system, it is critical that we have a coherent and rational policy approach,” said Bowman.
“That starts with our ability to understand the technology, including both the algorithms underlying its use and the possible implications — both good and bad — for banks and their customers.
“In suggesting that we grow our understanding and staff expertise as a baseline, I acknowledge that this has been, and is likely to remain, a challenge.”
Bowman noted that the Federal Reserve and other banking regulators have to compete for the same limited pool of AI expertise as the private sector.
Although this is not an easy battle to win, regulators must invest greater resources in acquiring this expertise, she said, lest they fall too far behind the real-world use of the technology by regulated firms.
Second, Bowman said that regulators should adopt a policy of “technology agnosticism” when thinking about AI.
“We should avoid fixating on the technology, and instead focus on the risks presented by different use cases,” she said.
“These risks may be influenced by a number of factors, including the scope and consequences of the use case, the underlying data relied on, and the capability of a firm to appropriately manage these risks.”
This approach would allow regulators to moderate their supervision of lower-risk activities, while increasing their supervision of higher-risk activities.
The definition of “risk”, said Bowman, should not turn on whether AI is used in a given process.
Regulators should instead seek to understand how the use of AI affects the safety and soundness of financial products and financial stability more generally.
“A posture of openness to AI requires caution when adding to the body of regulation,” she said.
“Fundamentally though, the variability in the technology will almost certainly require a degree of flexibility in regulatory approach.”
Do existing regulations already cover AI?
Looking ahead, Bowman called for the Federal Reserve to conduct a gap analysis to determine whether there are “blind spots” requiring additional regulation specifically targeting AI.
This analysis should consider the adequacy of existing regulatory frameworks that already serve to mitigate some of the major risks of AI, she said.
In Bowman’s view, many of these risks are already “well-covered” by existing laws and regulations. New AI-specific rules would therefore risk duplicating or overlapping with those already in place.
For example, the US already has clear regulations on fair lending, cybersecurity, data privacy, third-party risk management and copyright, which can easily be applied to current uses of AI.
Concentration risk
Finally, Bowman noted that current uses of AI often rely on external parties, such as cloud computing providers, licensed generative AI technologies and core service providers.
The concentration of these services among a small number of providers poses cybersecurity risks, which are heightened within the context of financial services.
As covered by Vixio, this risk was also highlighted by the Bank of England (BoE) and the UK Financial Conduct Authority (FCA) in a survey on AI that was published earlier this month.
According to the survey, the top three providers of cloud computing, modelling and data accounted for 73 percent, 44 percent and 33 percent of the total provision of these services respectively.
The BoE and FCA also noted that a third of all AI use cases among respondents are third-party implementations, roughly double the percentage (17 percent) recorded in the 2022 survey.
“This supports the view that third-party exposure will continue to increase as the complexity of models increases and outsourcing costs decrease,” the regulators said.