FIs Urged To Use AI-Powered 'DNA' To Fight APP Fraud

December 2, 2024
With losses to authorised push payment (APP) fraud set to rise, a payments expert at ACI Worldwide is urging financial institutions (FIs) to use AI and ISO 20022 to enable real-time exchange of risk information.

Last week, ACI Worldwide published its 2024 Scamscope report, looking at trends in APP fraud across the US, the UK, Brazil, India, Australia and the UAE.

Based on current trends, the report projects that annual losses to APP fraud in these markets will hit $7.8bn by 2028.

The report also looks at the proportion of APP fraud that takes place using instant payment systems, also known as real-time payments rails, versus other methods.

Between now and 2028, according to the report, APP fraud using real-time payments rails will increase faster than the overall rate of APP fraud.

By 2028, 80 percent of APP fraud is expected to take place using real-time payments, up from 63 percent in 2023.


[Chart: share of APP fraud conducted via real-time payments, 2023 vs 2028. Source: ACI Worldwide]

Cleber Martins, head of payments intelligence and risk solutions at ACI Worldwide, said the immediacy of real-time payments systems makes them an ideal tool for APP fraudsters.

In any APP fraud, as Martins pointed out, the scammer is the only party who knows what is really taking place on both sides of the transaction.

The sending institution lacks risk information about the payee, while the receiving institution lacks risk information about the sender.

If the scammer succeeds in inducing the victim to send a payment, they can then exploit the immediacy of real-time payments to relay the scammed funds to other "mule" accounts at other institutions.

This technique, known as “layering” in money laundering terminology, makes scammed funds difficult to trace and recover.

Building stronger defences against APP fraud

Speaking at Financial Crime 360 in London last month, Martins praised the UK’s new APP fraud reimbursement rules for their focus on the receiving side of the transaction.

Under the new rules, the sending institution and the receiving institution must split the cost of reimbursing APP fraud victims 50:50 (up to £85,000 per claim).
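As a rough illustration of how that cost split works, the sketch below assumes an equal 50:50 apportionment and treats the £85,000 figure as a per-claim reimbursement cap; `split_reimbursement` is a hypothetical helper, not part of the published rules, and edge cases in the actual regime are omitted.

```python
# Minimal sketch of the 50:50 reimbursement split described above.
# Assumes the £85,000 figure is a cap on the total reimbursed amount.

REIMBURSEMENT_CAP_GBP = 85_000

def split_reimbursement(loss_gbp: float) -> tuple[float, float]:
    """Return (sending_share, receiving_share) for an APP fraud claim."""
    reimbursed = min(loss_gbp, REIMBURSEMENT_CAP_GBP)
    half = reimbursed / 2
    return half, half

# A £100,000 loss is reimbursed only up to the cap; each side bears half.
sending, receiving = split_reimbursement(100_000)
```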

Previously, the receiving institution was not considered to be responsible for detecting or preventing cases of APP fraud.

“Nobody had that problem before,” said Martins. “Money coming in was never a problem in terms of fraud.

“But guess what? It is. If you don't get the receiving end to work on preventing these scams, you just cannot control them.”

Under the new rules, the threat of financial penalties for both the sending and receiving institution is expected to put pressure on all firms to improve their controls, creating a virtuous circle of customer protection.

The big question is what those improvements should look like, and how closely firms should work together on developing these improvements.

The false promise of 'data sharing'

For 20 years, Martins said, the financial industry has been discussing various forms of data sharing as a potential way to protect customers from fraud.

If FIs could share personalised data about individual customers with one another, so the theory goes, they would have a clearer picture of whether and when a fraud is about to take place.

In most jurisdictions, data protection rules prevent the sharing of all but anonymised transaction data between FIs.

Singapore, through its Collaborative Sharing of Money Laundering/Terrorism Financing Information and Cases (Cosmic) platform, is currently trialling the sharing of customer data between FIs, but only on a small scale, and not currently as a means to prevent APP fraud.

However, even with more permissive data sharing laws, Martins believes that the scammer would still have the upper hand.

As Martins and other conference speakers noted, scammers are increasingly using AI to bypass know your customer (KYC) controls and to socially engineer their victims.

If a scammer creates a fictitious person and opens an account in that person’s name, it is of little use for FIs to be able to share the fictitious data with one another.

The scammer would still be able to target victims and launder funds as described above, as there would be no reason for FIs to treat the customer as suspicious — not until the fraud has been committed, that is.

“Sharing data about the accounts and documents that were used for crime in the past definitely helps to understand the trends in fraudsters’ behaviour,” said Martins.

“But is anyone expecting that the criminals will be using the same accounts in their next attacks? They won’t be, and that's the biggest challenge.”

Collaboration is key

In Martins’ view, the best way for FIs to protect customers from APP fraud is to collaboratively employ a combination of artificial intelligence (AI), data intelligence and human intelligence.

“Those three things together, when they correlate, generate a strong insight or signal,” he said. “That signal is a predictive expression of risk.”

The data points that sending and receiving FIs could feed into this matrix could include customer age, location, IP address, device and transaction amount.

FIs could build risk profiles of the typical customer based on these inputs, and the signals generated could be shared with other FIs in real time using the richer data capabilities of ISO 20022.

By doing so, FIs could help one another understand the risk profile on both sides of a transaction, without exposing any personal data.
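To make the idea concrete, here is an illustrative sketch (not ACI's implementation) of how an FI might derive a shareable risk signal from the data points mentioned above without exposing personal data: raw identifiers are replaced with one-way hashed tokens, and only a coarse score travels with them. The feature weights and profile fields are invented for illustration.

```python
# Illustrative sketch: build an anonymised risk signal from the data
# points named in the article (amount, location, device, IP), sharing
# hashed tokens and a score rather than raw customer data.
import hashlib

def anonymise(value: str) -> str:
    """One-way hash so counterparties can match tokens, not raw data."""
    return hashlib.sha256(value.encode()).hexdigest()[:16]

def risk_signal(txn: dict, typical: dict) -> dict:
    """Compare a transaction against the customer's typical profile
    and return only anonymised tokens plus a coarse risk score."""
    score = 0.0
    if txn["amount"] > 3 * typical["avg_amount"]:
        score += 0.4                      # unusually large payment
    if txn["country"] != typical["home_country"]:
        score += 0.3                      # unfamiliar location
    if txn["device_id"] not in typical["known_devices"]:
        score += 0.3                      # previously unseen device
    return {
        "device_token": anonymise(txn["device_id"]),
        "ip_token": anonymise(txn["ip"]),
        "risk_score": round(score, 2),    # shareable, contains no PII
    }

signal = risk_signal(
    {"amount": 9_000, "country": "ES", "device_id": "dev-99", "ip": "203.0.113.7"},
    {"avg_amount": 250, "home_country": "GB", "known_devices": {"dev-1"}},
)
```

The receiving institution could compute an equivalent signal for the payee and return it, giving both sides a view of the transaction's full risk profile without any personal data crossing institutional boundaries.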

Martins likened this process to building a strand of DNA, with each FI contributing to its constituent parts.

“The DNA can be expanded as it receives more signals from the receiving end and transmits them back to the initiating end,” he said.

“So the initiating end can then make an informed decision about whether or not to make a transaction.”

With ISO 20022, the messaging standard used in real-time payments systems, this “DNA” can be added to the transaction message itself, Martins pointed out.
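ISO 20022 messages such as the pacs.008 customer credit transfer include an optional supplementary-data element (`SplmtryData`), which is one place such a risk "DNA" could travel with the payment itself. The sketch below shows the general shape; the `RiskSignal` envelope contents are hypothetical, not a published schema.

```python
# Sketch of attaching a risk signal to an ISO 20022-style payment
# message via the optional supplementary-data element (SplmtryData).
# The RiskSignal envelope is a hypothetical extension for illustration.
import xml.etree.ElementTree as ET

def attach_risk_signal(device_token: str, risk_score: float) -> str:
    msg = ET.Element("Document")
    tx = ET.SubElement(msg, "FIToFICstmrCdtTrf")    # pacs.008 credit transfer
    splmtry = ET.SubElement(tx, "SplmtryData")      # optional ISO 20022 element
    envlp = ET.SubElement(splmtry, "Envlp")         # free-format envelope
    sig = ET.SubElement(envlp, "RiskSignal")        # hypothetical extension
    ET.SubElement(sig, "DvcTkn").text = device_token
    ET.SubElement(sig, "Scr").text = f"{risk_score:.2f}"
    return ET.tostring(msg, encoding="unicode")

xml_out = attach_risk_signal("a1b2c3d4", 0.85)
```

Because the signal rides inside the payment message, the receiving institution can act on it in real time, and append its own signals for the return leg, without any separate data-sharing channel.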

“This is not ‘the future’ — this technology is available today,” he said. “This is the next generation of AI.

“If we don't get to a scenario where institutions can collaborate and share intelligence without exposing their data, we will not be able to learn in real time and gain a stronger visibility than the criminal has on his own.

“We have to expand this DNA so that it can become a consistent and reliable way for those signals to flow both ways, and to be used in real time.”
