The UK Gambling Commission is the latest regulator worldwide to warn of the threat that AI poses to know your customer (KYC) checks, with both crime prevention and safer gambling objectives increasingly under siege.
In an update to its list of “emerging risks” published last month, the commission identified the use of artificial intelligence (AI) to bypass customer due diligence as one of the developing threats that UK licence holders are required to monitor.
The regulator’s guidance focuses on documentation and on ensuring that systems are in place to catch AI-produced fakes, but the risks extend beyond the commission’s bulletin. The wider business world is warning of deepfake avatars and fictitious online footprints designed to fool anti-money laundering (AML) algorithms.
The risks to gambling licence holders are also diverse. KYC processes are not only deployed to prevent money laundering, but also to protect consumers and meet safer gambling compliance requirements.
The growing presence of affordability regimes, such as the one recently deployed in the Netherlands, only adds to the importance of obtaining reliable information from consumers.
Data suggests we are in the early days of AI-powered fraud and that businesses face an escalating need to equip themselves to fight back.
As the UK’s Alan Turing Institute explains: “While the use of AI by criminals remains at an early stage, there is widespread evidence emerging of a substantial acceleration in AI-enabled crime, particularly evident in areas such as financial crime.”
The methods of attack
In a digital world increasingly overstuffed with computer-generated content, telling a fake from the genuine article has become a real challenge. And as AI models develop, the task facing gambling businesses will only get harder.
The kinds of bank statements and government IDs sometimes reluctantly shared by gambling consumers are now relatively easy to fabricate and even easier to obtain.
The now-shuttered website OnlyFake made headlines last year when it was found to be offering AI-generated fake IDs to consumers for as little as $15.
Even ChatGPT, the hugely popular AI service, will help users produce convincing fake bank statements, although it politely advises them not to put the results to illicit use.
AI document fraud can obscure or alter details about a genuine gambling customer, or even create completely fake individuals with realistic documentation, a prospect that deeply worries AML authorities.
In a 2024 report by the EU law enforcement agency Europol, experts warned that the use of what are known as Generative Adversarial Networks (GANs) to create entirely fictitious people, or to meaningfully alter pictures of real individuals, is a rising threat.
“This kind of approach to fraud can be applied to any other type of digital identity check that requires visual authentication. It greatly undermines identity verification procedures since there is no reliable way to detect this kind of attack,” said Europol.
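To illustrate the technique Europol describes: a GAN pits two neural networks against each other, a generator that fabricates synthetic images and a discriminator that tries to tell them apart from real ones, with each side improving by competing against the other. The toy PyTorch sketch below shows that adversarial loop in miniature; the architecture, sizes and training data are illustrative assumptions, and production face generators are vastly larger.

```python
# Minimal sketch of a Generative Adversarial Network (GAN) in PyTorch.
# Illustrative only: real face- or document-generation GANs are far larger.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28  # toy image size; real ID/face images are much larger

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (raw logit output).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)

    # 1) Train the discriminator to separate real from generated images.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

The discriminator’s eventual inability to separate real from fake is precisely the property that makes GAN output so difficult for verification systems to detect.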
The risk to businesses is more than theoretical. In a case revealed in May 2024, a staff member at British engineering firm Arup transferred £20m to scammers after being tricked by AI-generated video and audio impersonating the company’s CFO.
Contacting gambling customers via phone also comes with no guarantee of reaching a real person at the other end of the line.
Evidencing this risk, in a rare example of AI technology being used to combat crime, Australia’s Macquarie University has developed software called Apate, which frustrates phone scammers by keeping them on the line with completely fictitious victims who appear gullibly persuaded by requests for bank details or other personal information.
These eager dupes do not exist at all; they are AI agents that speak with realistic human voices and respond to the scammers in real time.
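Apate’s internals are not public, so the Python sketch below is purely hypothetical: it shows the general shape of such a scam-baiting bot, in which a fixed persona and a response generator keep the caller engaged for as long as possible. The helper callables get_scammer_turn (standing in for speech-to-text) and speak (standing in for text-to-speech), and the canned-reply generate_reply function, are all invented placeholders for the real speech and language-model components.

```python
import random
import time

# A fixed persona keeps the fictitious "victim" consistent across turns.
PERSONA = {
    "name": "Margaret",
    "style": "polite, slightly confused, asks the caller to repeat things",
}

STALLING_REPLIES = [
    "Sorry dear, could you say that again? The line is crackly.",
    "Let me just find my glasses, one moment...",
    "Which bank did you say you were from?",
]

def generate_reply(scammer_utterance: str) -> str:
    """Placeholder for a language model producing an in-character,
    plausible-but-useless answer that keeps the scammer talking."""
    # A real system would condition on PERSONA and the full call transcript.
    return random.choice(STALLING_REPLIES)

def handle_call(get_scammer_turn, speak) -> float:
    """Run one conversation: listen, reply, repeat until the caller hangs up.

    get_scammer_turn: callable returning the transcribed scammer utterance,
                      or None when the call ends (speech-to-text assumed).
    speak:            callable sending synthesised audio back (TTS assumed).
    """
    start = time.monotonic()
    while (utterance := get_scammer_turn()) is not None:
        speak(generate_reply(utterance))
    return time.monotonic() - start  # time wasted: the bot's success metric
```

The success metric is simply time wasted: every minute a scammer spends talking to a bot is a minute not spent targeting a real person.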
The means of defence
Combating these evolving and manifold threats is a complex task, and a blend of AI and trained experts will likely be needed to guard against them.
In its recent bulletin, the UK Gambling Commission advised gambling companies to ensure that staff have the necessary skills.
“Operators need to ensure their staff are appropriately trained to assess customer documentation, including how to identify false and AI-generated documents,” the regulator said.
The Alan Turing Institute, meanwhile, advises that AI itself be deployed in defence.
“This threat could accelerate at an even faster rate in the next five years if we do not rapidly adopt countermeasures to mitigate risks — investing in AI to counter AI crime, and building law enforcement capacity to respond to the threat,” the agency said earlier this year.
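What “investing in AI to counter AI crime” could look like inside a KYC pipeline is sketched below: a deliberately tiny PyTorch classifier that scores document images as likely AI-generated and escalates suspicious ones to a trained analyst. The model, threshold and escalation logic are illustrative assumptions rather than any regulator-endorsed design; a production system would combine far larger models with forensic features and human review.

```python
import torch
import torch.nn as nn

class DocumentForgeryDetector(nn.Module):
    """Tiny CNN scoring the probability that a document image is AI-generated."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def flag_for_review(model: DocumentForgeryDetector,
                    image: torch.Tensor,
                    threshold: float = 0.5) -> bool:
    """Return True if the document should be escalated to a human analyst.

    image: a (3, H, W) tensor of the scanned document; the training data
    (labelled genuine vs AI-generated documents) is assumed to exist.
    """
    model.eval()
    with torch.no_grad():
        score = torch.sigmoid(model(image.unsqueeze(0))).item()
    return score >= threshold
```

The design choice worth noting is the output: not an automatic block, but a flag routed to the appropriately trained staff the commission’s bulletin calls for, keeping a human in the loop on every rejection.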