As technology grows more sophisticated, financial criminals are adapting their methods accordingly. On November 13, 2024, as part of the Treasury Department’s efforts to address technological risks, the US Financial Crimes Enforcement Network (FinCEN) issued an alert to assist financial institutions in identifying and combating fraud schemes involving deepfake media created with generative artificial intelligence (AI) tools. According to the alert, in the previous two years, “FinCEN has seen an increase in suspicious activity reporting by financial institutions describing the suspected use of deepfake media in fraud schemes targeting their institutions and customers.”
The alert outlines several common fraud schemes involving deepfake AI, provides financial institutions with warning signs, and reminds them of their reporting requirements under the Bank Secrecy Act (BSA).
The Bigger Picture
“Deepfake” content refers to synthetic content produced by generative AI that is often difficult to distinguish from unmodified or human-generated output. Deepfakes can include falsified documents, photographs and videos that can pass as genuine without close examination, particularly when presented to busy consumers.
Not all AI-generated content is fraudulent or harmful to financial institutions. Ahead of a hearing on December 2, 2024, the House Financial Services Committee introduced a resolution acknowledging the growing role of AI in the financial services industry and pledging to consider that role when drafting new legislation. The committee also introduced a bill directing federal financial regulators to conduct comprehensive research on the benefits and risks of AI in the financial sector. Given legislators’ openness to incorporating AI into the regulatory landscape, AI is likely to become a cornerstone of future regulations and enforcement actions. FinCEN’s alert is part of the Department of the Treasury’s broader effort to provide financial institutions with information on the opportunities and challenges that may arise from the use of AI.
According to FinCEN, suspicious activity report (SAR) filings by financial institutions regarding deepfake media have increased significantly since early 2023. For example, some financial institutions have reported that criminals have used generative AI to alter or fabricate identification documents such as driver’s licenses or passports, and to combine fake or stolen personally identifiable information (PII) to create synthetic identities.
Why You Should Care
Criminals are increasingly targeting not only financial institutions but also their customers and employees. Even robust cybersecurity measures can be circumvented by sophisticated phishing schemes. Generative AI has enabled the proliferation of schemes such as business email compromise, elder financial exploitation, romance scams and virtual currency investment scams. Without appropriate precautions, financial institutions face significant risks, including fraudulent transactions, regulatory penalties, loss of customer trust and reputational damage.
- Financial Losses and Operational Costs
Fraudulent transactions can lead to immediate monetary losses, along with transaction reversal costs and potential compensation paid to affected customers. Additionally, patching holes in a flimsy cybersecurity framework may prove more costly in the long run than building a sturdy one before fraud first occurs.
- Regulatory Scrutiny and Compliance Risks
As highlighted by FinCEN’s alert, regulators are closely monitoring industry standards for cybersecurity and fraud prevention. With heightened awareness of deepfake fraud, failure to implement adequate digital safeguards can result in non-compliance with regulatory requirements, administrative penalties or sanctions from financial and consumer protection authorities.
- Customer Trust and Reputational Harm
Susceptibility to fraud erodes customer confidence in financial institutions. Customers trust that their personal information and assets will be protected, and a breach of that trust can cause long-lasting reputational damage, especially if the institution’s response is perceived as inadequate. The speed and thoroughness of mitigation efforts play a critical role in determining the extent of the reputational fallout.
Next Steps
By knowing what to look for, financial institutions can mitigate or even prevent such fraud. FinCEN’s analysis indicates that financial institutions often detect AI-generated identity documents by re-reviewing customers’ account-opening documents. Recommended methods include:
- Reverse image searches to verify whether the photo matches the claimed identity.
- Examining image metadata (a minimal sketch of this check appears after this list).
- Using software designed to detect possible deepfakes or specific manipulations.
- Phishing-resistant multifactor authentication.
- Live verification checks wherein a customer is prompted to confirm their identity through audio or video.
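To make the metadata check concrete, the following is a minimal Python sketch using the Pillow imaging library. The file name and the list of generator marker strings are hypothetical illustrations, and metadata can be stripped or forged, so a check like this is a screening aid rather than a reliable detector.

```python
# A minimal sketch of the metadata check above, using the Pillow library
# (pip install Pillow). Marker strings are illustrative examples only.
from PIL import Image
from PIL.ExifTags import TAGS

# Strings that some generative tools have been known to leave in metadata
# (hypothetical, non-exhaustive examples).
SUSPICIOUS_MARKERS = ("stable diffusion", "midjourney", "dall-e", "generated")

def flag_suspicious_metadata(path: str) -> list[str]:
    """Return human-readable reasons this image's metadata looks suspect."""
    reasons = []
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            # Genuine camera photos usually carry EXIF data; its absence
            # is weak evidence on its own, so it is noted, not decisive.
            reasons.append("no EXIF metadata present")
        for tag_id, value in exif.items():
            tag = TAGS.get(tag_id, str(tag_id))
            if any(m in str(value).lower() for m in SUSPICIOUS_MARKERS):
                reasons.append(f"EXIF field {tag} mentions a generator: {value!r}")
        # Some generators store prompts in image text chunks (e.g., in PNGs).
        for key, value in img.info.items():
            if any(m in str(value).lower() for m in SUSPICIOUS_MARKERS):
                reasons.append(f"metadata field {key!r} mentions a generator")
    return reasons

if __name__ == "__main__":
    print(flag_suspicious_metadata("customer_id_photo.jpg"))  # hypothetical file
```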
Financial institutions should apply closer scrutiny to certain customer profiles or transactions if they observe one or more of the following:
- Inconsistencies among multiple identity documents submitted by the customer.
- A customer’s inability to satisfactorily authenticate their identity, source of income or another aspect of their profile.
- Discrepancies between the identity document and other aspects of the customer’s profile.
Enhanced due diligence efforts can also help flag accounts exhibiting indicators of deepfake-related fraudulent activity (a rule-based sketch of such screening follows this list), including:
- Access to an account from an IP address that is inconsistent with the customer’s profile.
- Patterns of apparent coordinated activity among multiple similar accounts.
- High payment volumes to potentially higher-risk payees, such as gambling websites or digital asset exchanges.
- High volumes of chargebacks or rejected payments.
- Rapid transactions from a newly opened account or an account with little prior transaction history.
- Immediate withdrawal of funds after deposit via methods that make payments difficult to reverse, such as international bank transfers or payments to offshore digital exchanges and gambling sites.
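To show how a monitoring system might encode a few of these indicators, here is a minimal rule-based Python sketch. The Account structure, field names and every threshold are hypothetical illustrations rather than values from FinCEN’s alert; real transaction-monitoring rules are tuned to an institution’s risk profile.

```python
# A minimal sketch of rule-based screening for a few of the red flags above.
# All field names and thresholds are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import date

HIGH_RISK_PAYEES = {"gambling", "digital_asset_exchange"}  # illustrative categories

@dataclass
class Account:
    opened: date
    known_countries: set[str]  # geographies consistent with the customer profile
    transactions: list[dict] = field(default_factory=list)

def red_flags(account: Account, today: date) -> list[str]:
    flags = []
    txns = account.transactions
    # Access from an IP geography inconsistent with the customer's profile
    foreign = [t for t in txns if t.get("ip_country") not in account.known_countries]
    if foreign:
        flags.append(f"{len(foreign)} transaction(s) from inconsistent IP geography")
    # High payment volumes to potentially higher-risk payees
    risky_total = sum(t["amount"] for t in txns
                      if t.get("payee_category") in HIGH_RISK_PAYEES)
    if risky_total > 10_000:  # hypothetical threshold
        flags.append(f"${risky_total:,.2f} sent to higher-risk payees")
    # High volumes of chargebacks or rejected payments
    rejected = sum(1 for t in txns if t.get("status") in {"chargeback", "rejected"})
    if txns and rejected / len(txns) > 0.10:  # hypothetical 10% threshold
        flags.append(f"{rejected} of {len(txns)} payments charged back or rejected")
    # Rapid transactions from a newly opened account
    if (today - account.opened).days < 30 and len(txns) > 20:  # hypothetical limits
        flags.append("high transaction velocity on a newly opened account")
    return flags

# Example: a weeks-old account sending large sums to a gambling site.
acct = Account(opened=date(2025, 1, 2), known_countries={"US"})
acct.transactions = [{"amount": 4_000, "payee_category": "gambling",
                      "ip_country": "US", "status": "settled"} for _ in range(4)]
print(red_flags(acct, today=date(2025, 1, 20)))
```

In practice, rules like these would feed an institution’s case-management workflow rather than act as standalone detectors, with each indicator weighed alongside the customer’s overall profile.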
FinCEN requests that financial institutions take proactive steps to reduce deepfake fraud and encourages them to reference this alert by including “FIN-2024-DEEPFAKEFRAUD” in SAR field 2 to indicate a connection between the reported suspicious activity and FinCEN’s alert.