FinCEN Issues Urgent Warning On GenAI Fraud

November 15, 2024
The US Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) has issued an alert on the use of deepfake media created with generative artificial intelligence (GenAI) tools.

The alert aims to help financial institutions recognise and combat fraud schemes built on such synthetic media.

According to FinCEN, criminals are increasingly using synthetic media to bypass traditional identity verification, undermining efforts to protect against financial crime.

“While GenAI holds tremendous potential as a new technology, bad actors are seeking to exploit it to defraud American businesses and consumers, to include financial institutions and their customers,” said FinCEN director Andrea Gacki. 

“Vigilance by financial institutions to the use of deepfakes, and reporting of related suspicious activity, will help safeguard the U.S. financial system and protect innocent Americans from the abuse of these tools.”

FinCEN’s alert details fraud typologies, red flag indicators and best practices for financial institutions. 

A growing threat

The alert comes amid a rise in reports from banks and other financial entities of fraudulent activity involving deepfakes.

Deepfake abuse often involves falsified identity evidence, including photos, videos and even audio files that imitate genuine records.

These synthetic documents allow criminals to circumvent anti-fraud checks, open accounts under fake identities and facilitate money laundering.

“Criminals use new and rapidly evolving technologies, like GenAI, to lower the cost, time, and resources needed to exploit financial institutions’ identity verification processes,” the warning says. 

FinCEN cited recent cases where criminals used deepfakes in scams, including fake identity documents for account openings, phishing schemes targeting employees and synthetic voice calls as part of "family emergency" and "romance" scams. 

In such cases, fraudsters are using GenAI to manipulate voices and images, adding a new layer of deception to previously straightforward scams.

To assist with detection, FinCEN has provided several red flags, such as inconsistent identity photos, mismatched customer data, suspicious device or IP address activity and patterns of account behaviour that could indicate fraud. 

The agency also recommends using measures such as multi-factor authentication (MFA) and live verification checks as potential defences. 

However, criminals may still evade these by using sophisticated plugins or creating synthetic responses on the spot.
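To make the red flags above more concrete, the sketch below shows how an institution might encode a few of them as simple onboarding checks. This is an illustrative toy example only, not part of FinCEN's alert; every field name and threshold is hypothetical.

```python
# Illustrative sketch only: a toy rule-based screen inspired by the red flags
# FinCEN lists (mismatched customer data, suspicious device or IP activity).
# All field names and thresholds here are hypothetical, not taken from the alert.
from dataclasses import dataclass
from typing import List

@dataclass
class OnboardingEvent:
    name_on_id: str            # name extracted from the submitted ID document
    name_on_application: str   # name typed into the application form
    ip_country: str            # geolocated country of the applicant's IP address
    id_issuing_country: str    # country that issued the identity document
    logins_last_hour: int      # recent login attempts from the same device

def red_flags(event: OnboardingEvent) -> List[str]:
    """Return a list of red-flag labels for manual review (illustrative only)."""
    flags = []
    if event.name_on_id.strip().lower() != event.name_on_application.strip().lower():
        flags.append("mismatched customer data")
    if event.ip_country != event.id_issuing_country:
        flags.append("suspicious IP geography")
    if event.logins_last_hour > 10:
        flags.append("unusual device activity")
    return flags

if __name__ == "__main__":
    event = OnboardingEvent("Jane Doe", "Jane Dow", "RO", "US", 14)
    print(red_flags(event))  # all three hypothetical flags fire, prompting escalation
```

In practice, such checks would feed a broader risk-scoring and human-review process rather than block customers outright.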

FinCEN encouraged financial institutions to include the key term “FIN-2024-DEEPFAKEFRAUD” in suspicious activity reports (SARs) linked to deepfake media abuse. This will help authorities track and counteract this growing threat more effectively.
