EU, Google Rules Will Dictate Use Of AI For Affiliates, Panellists Say

June 13, 2024

Two titanic forces, Google and the European Union, are set to govern the use of artificial intelligence applications to generate gambling affiliate content, but how the rules will develop is far from clear, gambling conference attendees were told last week.

The launch of the ChatGPT AI software in November 2022 has already brought big changes to many industries, and a panel of affiliates, search engine optimisation (SEO) experts and a lawyer grappled with the implications for the affiliate business at the iGaming Germany conference in Munich last week.

One reason the topic is so important is that Google is looking to eliminate middlemen who are not useful or helpful, said Martin Calvert of UK-based ICS Digital.

So if affiliates use AI to produce the audio, video or written content intended to attract potential players and send them to an online gambling website, and Google no longer deems that material “useful”, that is a problem, he said.

Google has already weeded out pretty much every affiliate relying exclusively on AI content, said Izabela Wisniewska of Creatos Media.

But potential problems loom.

What is loosely called “AI” is often better termed a “large language model”: not something “intelligent”, but an algorithm that uses huge amounts of data to generate text, according to Julia Logan, chief executive of Lithuania-based Zangoose Digital.

The problem is, sometimes these models make up facts, she said.

An answer, or a series of statements, could be “plausible but incorrect”, another panellist said.

There is even a word for AI’s made-up facts: “hallucinations”.

Logan said she once asked an algorithm to rate SEO outputs for “fluffiness” and “cuteness”. The software complied, returning SEO rankings for fluffiness and cuteness.

But it is not just Google’s views on AI use that matter: as of last month, the EU has also begun regulating it.

The EU’s AI legislation is meant to bar uses of AI that pose an “unacceptable risk”, set requirements for high-risk applications and apply only “light-touch” rules to systems deemed low risk.

The low-risk regulations revolve around transparency, according to the European Council.

“While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes”, the Commission wrote in a press release.

Violations incur fines that will be a percentage of worldwide revenue, with the new rules beginning to apply two years after enactment.

High-risk applications include CV-sorting software used for employment, and credit scoring that could deny citizens services such as loans, according to the EU.

The act considers AI chatbots to be limited risk, but customers should be “made aware they are interacting with a machine”, and AI-generated content should be labelled as such.
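
Neither the act nor the panellists specify how such a label should be implemented in practice. Purely as an illustration, a publisher’s content pipeline might attach a visible disclosure to any AI-assisted piece along the following lines; the field names, wording and HTML structure here are hypothetical assumptions, not anything prescribed by the legislation.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    body: str
    ai_assisted: bool  # whether an AI tool contributed to this piece

# Hypothetical disclosure wording; the EU AI Act requires labelling but
# does not prescribe any particular text or markup.
DISCLOSURE = "This content was produced with the assistance of an AI tool."

def render_with_disclosure(article: Article) -> str:
    """Return publishable HTML, appending a disclosure notice when AI was used."""
    html = f"<h1>{article.title}</h1>\n<p>{article.body}</p>"
    if article.ai_assisted:
        html += f'\n<p class="ai-disclosure">{DISCLOSURE}</p>'
    return html

if __name__ == "__main__":
    draft = Article(
        title="Top 10 New Casino Sites",
        body="An AI-drafted, human-edited review of new operators.",
        ai_assisted=True,
    )
    print(render_with_disclosure(draft))
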

But a potential difficulty in enforcement is that, so far, there is no reliable method of detecting AI-generated content, Logan said.

Sometimes such content is obviously machine-generated, but some freelancers write in “robotic” fashion because they think that is what search engines want, Calvert said.

Logan wondered what level of AI use needs to be disclosed: what if it is only used for idea generation or correcting grammar and spelling?

The EU will be issuing guidelines over the next two years to help clarify the rules, and court rulings will emerge, in a process similar to the run-up to enforcement of the EU’s General Data Protection Regulation, said Štěpánka Havlíková, an attorney with Dentons’ Prague office.

Such regulation may be very complicated for businesses to comply with, but it is consistent with the EU’s goal of protecting the individual, she said.

The best way to comply with AI rules might be to start and end with human intervention, Wisniewska said.
