The increasing scale, speed and accessibility of modern digital payments have seen fraud evolve into a highly industrialised and global threat, and regulators are setting clear prevention and detection expectations for firms.
Many regulators are reframing fraud as a systemic, core consumer protection challenge rather than just an isolated type of financial crime, and shifting from retrospective, post-transaction controls to a model of pre-emptive intervention, shared accountability and systemic friction.
Firms failing to implement real-time, consumer-centric controls will not only face severe regulatory penalties and reimbursement costs, but will also suffer significant commercial impacts due to the erosion of consumer trust.
The APP fraud challenge
Regulators are highly focused on authorised push payment (APP) fraud and social engineering scams. The speed and user-initiated nature of real-time payments via services such as India’s UPI and Brazil’s Pix compress the window for fraud detection to near zero, meaning reduced payment friction has become a liability.
Fraudsters are exploiting the seamlessness of these platforms to execute scams at scale, prompting regulators such as the Reserve Bank of India (RBI) to consider drastic measures, such as introducing time lags, “kill switches” and transaction delays to prevent payments reaching fraudsters.
In Europe, the European Payments Council (EPC) has formalised specifications for Verification of Payee (VOP) to ensure payment service providers (PSPs) can consistently authenticate the recipient before a transaction is finalised.
Key actions firms can take to tackle APP fraud:
- Introduce delays or targeted interventions for payments that diverge from a customer's established pattern, are high-value or go to a first-time payee.
- Prompt users with risk-based questions, such as, “Has someone asked you to make this payment urgently?” or “Were you contacted via phone or social media?”.
- Redesign apps to detect if a customer is conducting a transaction while on a phone call, triggering an automatic alert confirming that the payment provider has not contacted them.
- Integrate confirmation of payee or IBAN name checks directly into the payment flow.
- Sequence detection models to include device intelligence (e.g., new devices or VPN use), behavioural biometrics (e.g., typing speed or hesitation), transaction anomalies and network intelligence (e.g., known mule accounts).
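The sequenced detection approach above can be sketched as a layered risk score. The signal names, weights and thresholds below are purely illustrative assumptions, not a production model; the key design point is the ordering, with network intelligence (known mule accounts) acting as a hard stop before softer signals accumulate.

```python
from dataclasses import dataclass

@dataclass
class PaymentContext:
    amount: float
    is_new_payee: bool        # first-time payment to this recipient
    is_new_device: bool       # device intelligence signal
    on_vpn: bool              # device intelligence signal
    typing_hesitation: float  # behavioural biometrics, 0.0-1.0
    payee_on_mule_list: bool  # network intelligence signal

def score_payment(ctx: PaymentContext) -> tuple[int, list[str]]:
    """Accumulate a risk score from layered signals, keeping the reasons."""
    score, reasons = 0, []
    # Network intelligence first: a known mule account is a hard stop.
    if ctx.payee_on_mule_list:
        return 100, ["payee matches known mule account"]
    if ctx.is_new_device:
        score += 30
        reasons.append("payment from unrecognised device")
    if ctx.on_vpn:
        score += 10
        reasons.append("VPN in use")
    if ctx.typing_hesitation > 0.7:
        score += 20
        reasons.append("unusual hesitation during entry")
    if ctx.is_new_payee and ctx.amount > 1000:
        score += 30
        reasons.append("high-value first-time payee")
    return score, reasons
```

A score above a chosen threshold would then trigger the delays or risk-based prompts described above, with the reasons list driving which question the customer is shown.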
Responding to AI
AI is lowering barriers to entry for criminals and automating the entire fraud lifecycle, from profiling victims to laundering proceeds. To combat AI-driven fraud, the Financial Action Task Force (FATF) and Interpol are calling for the criminalisation of malicious generative AI use and the mandatory integration of deepfake detection into KYC processes.
On a national level, authorities are focusing on robust operational governance. For example, the Monetary Authority of Singapore (MAS) has deployed an AI Risk Management Toolkit that requires financial institutions to implement strict oversight, lifecycle controls and risk identification for AI systems to build internal resilience against attacks.
Key actions firms can take to address AI-driven fraud:
- Define APP fraud as a standalone risk category with its own dedicated control framework and reporting lines, rather than a subset of anti-money laundering (AML).
- Ensure real-time decisioning models are explainable and that the logic can be clearly evidenced.
- Map fraud exposure across all products and channels, ensuring that new innovations are equipped with fraud prevention mechanisms from the beginning, rather than retrofitted post-launch.
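One way to make real-time decisioning explainable, as the second action calls for, is to score against named weights and return the per-signal contributions as an audit trail. This is a minimal sketch: the weights and threshold are illustrative assumptions, and a real model would be trained, validated and governed.

```python
# Illustrative weights only; a governed model would learn and review these.
WEIGHTS = {
    "new_device": 0.9,
    "vpn": 0.3,
    "amount_over_limit": 1.2,
    "first_time_payee": 0.8,
}

def decide(features: dict[str, bool], threshold: float = 1.5):
    """Score a payment with named weights so every decision can be evidenced."""
    contributions = {name: WEIGHTS[name]
                     for name, active in features.items() if active}
    total = sum(contributions.values())
    decision = "hold_for_review" if total >= threshold else "approve"
    # The contributions dict is the explanation: it records exactly which
    # signals fired and how much each contributed to the outcome.
    return decision, total, contributions

decision, total, why = decide({"new_device": True, "amount_over_limit": True})
```

Logging the contributions dictionary alongside the decision gives compliance teams the clearly evidenced logic regulators expect.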
How social media links to fraud
A substantial proportion of APP fraud in particular originates online, and regulators and industry bodies are pushing for shared responsibility frameworks to force digital platforms to detect repeat offenders and verify advertisers.
Singapore’s Shared Responsibility Framework and Australia’s Scams Prevention Framework already impose liability and penalties on a mix of digital platforms, telcos and financial institutions.
In the UK, the Payments Association is lobbying for a phased regulatory model that would mandate tech and social media companies to verify advertiser identities and screen for scam advertisements. Platforms would face “proportionate penalties” for systemic prevention failures.
Key actions firms can take to tackle fraud originating on social media:
- Design systems to ingest and act upon real-time intelligence, such as law enforcement alerts and shared industry data.
- Embed rapid refund and dispute workflows into customer service and inter-institutional coordination processes to freeze and recover funds quickly.
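The first action, ingesting and acting on shared intelligence, could look like the following sketch: a feed that absorbs alerts (here a hypothetical mule-account alert shape) into a blocklist consulted before a payment is released. The alert format and class names are assumptions for illustration.

```python
class IntelFeed:
    """Holds shared industry intelligence, e.g. reported mule accounts."""

    def __init__(self) -> None:
        self._blocked_accounts: set[str] = set()

    def ingest_alert(self, alert: dict) -> None:
        # Hypothetical alert shape: {"type": "mule_account", "account": "..."}
        if alert.get("type") == "mule_account":
            self._blocked_accounts.add(alert["account"])

    def is_blocked(self, account: str) -> bool:
        """Checked in the payment flow before funds are released."""
        return account in self._blocked_accounts

feed = IntelFeed()
feed.ingest_alert({"type": "mule_account", "account": "GB00DEMO0001"})
```

In practice the feed would be populated continuously from law enforcement alerts and industry data-sharing schemes rather than manual calls.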
Demographic distribution
Regulators are increasingly acknowledging that fraud risk is not evenly distributed across the population, with older demographics disproportionately targeted by social engineering and impersonation scams.
Consequently, authorities are moving away from uniform security measures and implementing granular, risk-based rules for populations disproportionately targeted by scams.
For example, the National Bank of Georgia requires enhanced, behaviour-based monitoring for users over the age of 60, and in India the RBI is considering mandatory additional authentication for users over the age of 70.
Key actions firms can take to tackle fraud targeting vulnerable consumers:
- Train customer support agents to identify live scam scenarios as they happen, using indicators such as tone of voice and body language, rather than just processing post-fraud complaints.
- Equip operations teams to push a phone call or SMS to a user within minutes of a high-risk transaction being flagged.
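The second capability, contacting a user within minutes of a flag, implies that every high-risk alert carries an explicit contact deadline. A minimal sketch, assuming a five-minute internal target (an illustrative figure, not a regulatory requirement):

```python
from datetime import datetime, timedelta, timezone

# Assumed internal service-level target for outbound contact.
OUTREACH_SLA = timedelta(minutes=5)

def raise_outreach_task(txn_id: str, channel: str = "sms") -> dict:
    """Create an outreach task with a hard deadline for reaching the customer."""
    now = datetime.now(timezone.utc)
    return {
        "txn_id": txn_id,
        "channel": channel,  # "sms" or "call"
        "created_at": now,
        "contact_by": now + OUTREACH_SLA,
    }

task = raise_outreach_task("txn-123", channel="call")
```

Operations dashboards would then surface tasks approaching their `contact_by` deadline so no flagged transaction waits in a generic queue.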
Organised crime and fraud
Regulators are alarmed by the cross-border collaboration of organised criminal syndicates and the spread of global scam centres that rely on trafficked individuals, highlighting a disturbing convergence between financial fraud, money laundering and severe human rights abuses.
Because fragmented data enables synthetic identities and money laundering, regulators are working to clean up the underlying ecosystems.
For example, Brazil’s Central Bank (BCB) has upgraded the security of its Pix network by mandating real-time fraud identification, formalising a Pix Penalties Manual for non-compliant institutions and introducing the Special Return Mechanism, a “dispute button” that allows users and institutions to quickly freeze and recover fraudulent transfers.
Key actions firms can take to tackle organised crime-based fraud:
- Establish 24/7 instant payment monitoring.
- Align engineering and operations with central bank mechanisms (such as dispute-return buttons and shared national financial registries) to allow for the immediate freezing and recovery of fraudulent transfers across different institutions.
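Aligning with a dispute-return mechanism of the kind Pix has introduced means modelling the transfer lifecycle explicitly, so a disputed payment can only move through valid states. The state names below are illustrative assumptions, not the BCB's actual scheme terminology.

```python
# Hypothetical lifecycle mirroring a dispute-return flow.
VALID_TRANSITIONS = {
    "settled": {"disputed"},
    "disputed": {"frozen", "released"},
    "frozen": {"returned", "released"},
}

def advance(state: str, event: str) -> str:
    """Move a transfer through the dispute-return lifecycle, rejecting invalid jumps."""
    if event not in VALID_TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state} to {event}")
    return event

state = "settled"
state = advance(state, "disputed")  # customer presses the dispute button
state = advance(state, "frozen")    # receiving institution freezes the funds
state = advance(state, "returned")  # funds recovered for the victim
```

Enforcing the transitions in code is what allows freezes and recoveries to be coordinated safely across institutions, rather than handled ad hoc.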
Moving from passive to active
As regulators around the world seek to tackle the rise in fraud, we will see a fundamental shift in how payments compliance must be governed.
Authorities will no longer accept passive, post-event reporting or tick-box audits, and payments firms will need to transform their operations to meet real-time, consumer-centric requirements.
Matching risk prevention with seamless operational recovery is set to become a baseline requirement for commercial survival. Re-engineering systems and implementing new requirements will be neither easy nor cheap, but the investment will pay off for firms that transform their approach successfully.
The financial cost of reimbursement and enforcement actions could be high, and customers will quickly move away from firms seen as weak on fraud.
By shifting from uniform approaches to demographic-specific rules, embracing explainable AI models and embedding strategic transaction friction directly into product design, firms can turn regulatory pressure into a competitive asset.