The intergovernmental organisation advocates strengthening legal frameworks and boosting information sharing to address artificial intelligence (AI)-driven fraud, in response to rapid increases in both the volume and sophistication of criminal activity.
In its second Global Financial Fraud Threat Assessment, published in March 2026, Interpol argues that financial fraud is globalising and industrialising at a pace that is exceeding regulatory and law enforcement capacity.
The report estimates that global losses to financial fraud reached $442bn in 2025 and that the scale of offending will “escalate significantly” over the next three to five years.
According to Interpol, the main driver of financial fraud's continued growth is the increasing availability of AI technology and the low barriers to entry it creates for criminals.
The report finds that the proliferation of generative AI and “agentic” systems – autonomous tools capable of making decisions – is enabling criminals to automate entire fraud lifecycles, from identifying and profiling victims to executing scams and laundering proceeds.
Interpol also provides evidence that current deepfake tools can clone voices and faces from just seconds of genuine audio or video, and that dark web marketplaces offer “synthetic identity kits” complete with biometric data.
These capabilities are quickly transforming financial fraud into a scalable, global industry.
The evolving fraud mix
According to the report, business email compromise (BEC) remains the most frequently reported fraud type globally, and is increasingly being enhanced by AI-driven impersonation techniques.
The use of AI is also making BEC harder to detect via traditional red flags such as poor grammar and spelling.
Investment fraud continues to generate some of the “highest losses”, with crypto-asset schemes becoming more sophisticated in their use of social engineering and staged “returns” (small payments to victims) to build trust.
Meanwhile, emerging threats such as AI-enabled fake kidnapping scams and synthetic identity fraud, including the exploitation of children’s identities, underscore how quickly new attack vectors are being operationalised.
In addition, Interpol highlights the growing role of organised criminal syndicates, which are collaborating across borders and specialising their operations, often working with dedicated money laundering networks.
In parallel, the global spread of scam centres illustrates the scale and coordination of today’s fraud ecosystem. Such centres have been identified across multiple regions, with their operations involving trafficked individuals from nearly 80 countries.
The link between AI-driven fraud and human trafficking highlights the increasing convergence of financial crime and human rights risks that firms should monitor under broader environmental, social and governance (ESG) mandates.
Interpol recommendations for regulators, industry
For legal and compliance teams, the most important part of the report lies in its recommendations to regulators, law enforcement and financial institutions.
First, Interpol calls for strengthened legal frameworks to address AI-driven and crypto-enabled fraud, including the criminalisation of malicious uses of generative AI and enhanced oversight of virtual asset service providers (VASPs).
The report advocates enhanced know-your-customer (KYC) and anti-money laundering (AML) expectations, including deepfake detection, as well as real-time transaction monitoring and standardised reporting requirements.
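To make the idea of real-time transaction monitoring concrete, the following is a minimal sketch in Python of the kind of rules-based check the report gestures at. The thresholds, rule names and account-history structure are hypothetical assumptions for illustration only; the report does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field
from collections import deque
from statistics import mean, pstdev
import time

# Hypothetical thresholds for illustration only; real programmes
# calibrate these per customer segment and jurisdiction.
AMOUNT_ZSCORE_LIMIT = 3.0   # flag amounts far outside the account's norm
VELOCITY_WINDOW_S = 60      # look-back window for burst detection
VELOCITY_LIMIT = 5          # max transactions allowed in the window

@dataclass
class AccountMonitor:
    """Tracks one account's recent activity for simple real-time checks."""
    amounts: deque = field(default_factory=lambda: deque(maxlen=100))
    timestamps: deque = field(default_factory=lambda: deque(maxlen=100))

    def check(self, amount: float, ts: float | None = None) -> list[str]:
        ts = ts if ts is not None else time.time()
        alerts = []

        # Rule 1: amount anomaly relative to the account's own history.
        if len(self.amounts) >= 10:
            mu, sigma = mean(self.amounts), pstdev(self.amounts)
            if sigma > 0 and abs(amount - mu) / sigma > AMOUNT_ZSCORE_LIMIT:
                alerts.append("amount_anomaly")

        # Rule 2: transaction velocity (bursts often precede cash-out).
        recent = [t for t in self.timestamps if ts - t <= VELOCITY_WINDOW_S]
        if len(recent) + 1 > VELOCITY_LIMIT:
            alerts.append("velocity_breach")

        self.amounts.append(amount)
        self.timestamps.append(ts)
        return alerts

monitor = AccountMonitor()
base = time.time()
for i, amt in enumerate([50, 45, 60, 55, 48, 52, 49, 51, 47, 53, 5000]):
    if alerts := monitor.check(amt, ts=base + i * 120):
        print(f"transaction {amt}: {alerts}")  # flags the 5000 outlier
```

A production system would combine many such rules with machine-learning models and case-management workflows; the point here is simply that "real-time" means evaluating each transaction against the account's recent history before it settles.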
Second, Interpol emphasises the importance of partnerships and information sharing.
It stresses the need for structured collaboration between financial institutions, telecommunications providers, technology firms and law enforcement.
This includes mechanisms for real-time sharing of data on suspicious transactions and fraudulent platforms, supported by appropriate legal and data-protection safeguards.
The agency also calls for a system of centralised data collection to enable “rapid identification” of fraud patterns, particularly recurring tactics such as impersonation schemes, investment fraud and AI-enabled attacks.
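As a hedged illustration of what centralised data collection for "rapid identification" of fraud patterns could look like, the sketch below pools hypothetical fraud reports from several institutions and flags tactics that recur across them. The report schema, institution names and recurrence threshold are all assumptions made for this example.

```python
from collections import Counter
from datetime import date

# Hypothetical pooled fraud reports; in practice these would arrive from
# many institutions via a shared, legally safeguarded reporting channel.
reports = [
    {"institution": "bank_a", "tactic": "ceo_impersonation", "reported": date(2026, 3, 1)},
    {"institution": "bank_b", "tactic": "ceo_impersonation", "reported": date(2026, 3, 2)},
    {"institution": "bank_c", "tactic": "crypto_investment", "reported": date(2026, 3, 2)},
    {"institution": "bank_a", "tactic": "ceo_impersonation", "reported": date(2026, 3, 3)},
    {"institution": "bank_d", "tactic": "deepfake_voice", "reported": date(2026, 3, 3)},
]

RECURRENCE_THRESHOLD = 3  # assumed cut-off for flagging a recurring tactic

def recurring_tactics(reports, threshold=RECURRENCE_THRESHOLD):
    """Count tactic occurrences across institutions and flag those that recur."""
    counts = Counter(r["tactic"] for r in reports)
    return {tactic: n for tactic, n in counts.items() if n >= threshold}

print(recurring_tactics(reports))  # {'ceo_impersonation': 3}
```

The value of centralisation in this toy example is that no single institution saw "ceo_impersonation" often enough to flag it alone; only the pooled view crosses the threshold.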
Regulatory responses to the rise of AI
Interpol’s recommendations are clear in their focus: targeting the malicious use of AI and strengthening cross-sector defences. However, early regulatory responses to the increasing use of AI tools are prioritising governance of legitimate AI deployment within financial services.
This divergence highlights a growing gap between the operational realities of financial fraud and the current trajectory of regulation.
In the US, the White House’s National Policy Framework for Artificial Intelligence, published in March 2026, comes closest to aligning with Interpol’s recommendations.
The framework explicitly recognises the risks posed by AI-enabled impersonation and synthetic media, and calls for legislative measures to address the unauthorised reproduction of individuals’ faces, voices and likenesses.
However, the framework remains high-level and non-binding, setting out legislative priorities rather than enforceable obligations.
At the same time, it emphasises innovation, economic competitiveness and the need to avoid regulatory fragmentation, including through the proposed pre-emption of state-level AI laws.
As covered by Vixio, the US framework reflects a broader strategic priority: defining who regulates AI before fully determining how it should be regulated in practice.
By contrast, the Monetary Authority of Singapore (MAS) has taken a highly practical, implementation-led approach through its AI Risk Management Toolkit.
Developed in collaboration with 24 industry partners, the toolkit focuses on embedding AI governance within financial institutions, providing detailed guidance on oversight structures, risk identification and lifecycle controls across AI systems, including emerging agentic models.
The emphasis is on operationalisation: ensuring that firms can safely deploy and scale AI technologies through robust internal controls and governance mechanisms.
Although this approach does not directly target the malicious use of AI, it strengthens firms’ internal resilience against increasingly sophisticated fraud threats.
In particular, enhanced visibility over AI systems, improved risk management processes and stronger governance frameworks may support earlier detection of anomalous behaviour and reduce exposure to AI-enabled attacks.
At the same time, the MAS initiative reflects another of Interpol’s priorities: collaboration. By working closely with industry in developing the toolkit, the regulator is fostering the type of public-private partnership that Interpol identifies as critical to tackling AI-enhanced fraud.
As noted in Vixio’s analysis, the toolkit is likely to act as a precursor to more formal, binding requirements, suggesting that operational AI governance may soon become a regulatory expectation rather than a best practice.
Similarly, in the UK, the Financial Conduct Authority (FCA) has identified agentic AI as a live policy issue, acknowledging the complexities of autonomous systems operating without direct human oversight, particularly in relation to accountability and liability.
Although the FCA’s approach is not explicitly framed around fraud prevention, it reflects a broader move towards ensuring that firms are operationally prepared for the risks associated with advanced AI systems.
From governance to enforcement
Taken together, these developments suggest that regulators are beginning to respond to the rapid evolution of AI, but not yet in a way that fully aligns with the threat landscape outlined by Interpol.
Current frameworks are largely focused on governing how financial institutions use AI, rather than on preventing how criminals misuse it.
Interpol’s assessment emphasises that financial fraud has already entered an operational, AI-driven phase. The question now is whether regulation can evolve quickly enough to meet it.
The gap between Interpol’s warnings and regulatory focus suggests that, for now, the burden of defence remains on financial institutions. Firms should look to the MAS toolkit as a blueprint for resilience, even in jurisdictions where formal rules have not yet been introduced.