Under its new Scams Prevention Framework, Australia is hoping to become one of the first jurisdictions in the world to impose statutory anti-scam controls on non-financial businesses.
Unveiled by the Treasury and introduced to parliament in September, the framework aims to stop scams at source by imposing minimum standards for detecting, preventing, disrupting and responding to scams.
Initially, three key sectors are likely to be designated under the framework: banks, telcos and digital platform service providers.
The latter category would include social media and instant messaging platforms, alongside paid search advertising platforms.
How does this change things?
The draft legislation lays out two tiers of rules: “overarching principles” that will apply to all designated sectors, and sector-specific codes tailored to each sector.
Sector-specific obligations include ID verification for advertisers on social media and search platforms, and a requirement to investigate all reported scam advertisements within 48 hours.
Banks will be required to verify payee details before transferring funds, and social media and paid advertising platforms will be required to remove reported scam content within 24 hours.
Users of all regulated services must also be offered a “friendly and accessible” method for making a complaint about a scam, including a phone number or an online mechanism.
Designated entities that fail to uphold their obligations under the framework will be subject to “tough penalties”, including fines of up to A$50m (US$33m).
How does this compare? A look at Singapore…
Australia’s Scams Prevention Framework shares similarities with Singapore’s Online Criminal Harms Act (OCHA).
The OCHA, which fully came into effect in June, imposes new scam-fighting obligations on designated firms, including social media and e-commerce platforms.
Under the new rules, Meta must verify the identities of all “risky sellers” on Facebook Marketplace and in Facebook advertisements by the end of 2024.
If verifying these sellers does not lead to a “significant drop” in reported scam activity, Meta will be forced to verify all sellers by Easter 2025.
Alongside those e-commerce designations, Facebook, WhatsApp, Instagram, Telegram and WeChat have been designated high-risk online communication services.
Under the OCHA’s Online Communication Code, these platforms must be able to demonstrate that they are “proactively detecting” and “taking necessary action(s)” against suspected scams and malicious cyber activities.
They must implement a “fast-track channel” to receive reports of scams from law enforcement agencies, and must act on these reports “expeditiously”.
They must also implement “reasonable” ID verification and “strong” login verification to prevent the use of inauthentic accounts or bot accounts by scammers.
How does this compare? A look at the UK…
In the UK, banks such as TSB and industry bodies such as UK Finance have been urging social media platforms to do more to prevent scams.
In 2023, TSB revealed that scams originating on Meta platforms accounted for 80 percent of refunds across the bank’s three largest categories of fraud (purchase, investment and impersonation).
An earlier 2022 study by UK Finance found that three-quarters of online fraud begins on social media platforms.
However, the UK has yet to introduce legislation that would impose mandatory anti-scam controls on social media platforms.
Instead, regulators have focused on reimbursing victims of authorised push payment (APP) fraud, under new rules that went live this month.
This month, Meta unveiled a new information-sharing programme known as the Fraud Intelligence Reciprocal Exchange (FIRE), an indication that the social media giant is beginning to acknowledge the role its platforms can play in enabling scams.
FIRE is a threat intelligence sharing programme that allows participating financial institutions to share information about scams directly with Meta, helping the platform to stop scammers.
With NatWest and Metro Bank as its first participants, Meta said a pilot of the scheme has already helped to remove 20,000 suspicious accounts.
Why should you care?
The success or failure of regulations such as those in force in Singapore and those proposed in Australia will influence regulatory decision-makers in other jurisdictions.
Should the focus on stopping scams at source lead to a significant drop in scam activity and consumer losses, regulators in other jurisdictions are likely to favour a similar approach, placing less importance on reimbursement by banks and payment firms.
Stephen Jones, Australia’s assistant treasurer and minister for financial services, has been outspoken in his view that the UK is making a policy blunder with its new reimbursement rules.
Should Australia’s approach turn the tide in the country’s fight against scammers, the UK may have no choice but to follow suit.
And, one way or another, social media firms will likely have to take more responsibility for tackling fraud — whether that is on a voluntary basis, as with Meta in the UK, or in response to regulation, as in Singapore and potentially Australia.