A Future Gamble: How Artificial Intelligence Can Protect Players

June 24, 2022
Trade group European Lotteries held a panel event on Wednesday (June 22) in Brussels to discuss ushering lotteries into the digital age, with the potential use of Artificial Intelligence (AI) to protect players emerging as a highlight of the discussion.


Keynote speaker Marina Geymonat, a computer scientist who leads the Innovation Lab at Sisal and is an AI expert who consults with the Italian Ministry of Economy and Finance, gave a rundown on how AI can come into play in the gaming industry and the problems it poses.

She outlined its potential as twofold: first, AI can protect players who are showing signs of addictive behaviour online; and second, conversational AI could help players who feel uncomfortable voicing their feelings to another person in the heat of a game.

Geymonat elaborated that AI research is moving towards being able “to try to understand when the behaviour of a gamer is becoming dangerous for himself”.

“The difficult thing is to find what is the right data to use and how to seclude this data, because we want to make sure that everything is anonymised and it's privacy preserving.”

She explained that the specific advice of the AI is not within her remit, but that psychologists are currently at work addressing the issue.

Conversational AI can be programmed to answer players in crisis, she said.

“Why not put a virtual assistant for when players are scared, angry, or they’re worried, but they may be ashamed to talk to someone, to a human person?”

An AI assistant is able to detect emotions from “not only what you say but how you say it”, and act in a comforting capacity, she said.
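No specific system was described at the event, but the idea of flagging distressed language can be illustrated with a deliberately simple sketch. Everything below, including the lexicon and the threshold, is invented for illustration; a real assistant would use trained models and, as Geymonat notes, tone of voice as well as word choice.

```python
# Toy illustration only: flagging distressed language in player messages.
# The lexicon and threshold are invented for this sketch, not taken from
# any deployed system described at the event.
DISTRESS_LEXICON = {"scared", "angry", "worried", "ashamed", "desperate"}

def distress_score(message: str) -> float:
    """Fraction of words in the message that appear in the distress lexicon."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in DISTRESS_LEXICON)
    return hits / len(words)

def should_offer_support(message: str, threshold: float = 0.2) -> bool:
    """Route the player to a support flow when distress language dominates."""
    return distress_score(message) >= threshold

print(should_offer_support("I am scared and worried about my losses"))  # True
print(should_offer_support("What time does the draw close?"))           # False
```

The point of the sketch is the routing decision, not the detection method: once a message crosses the threshold, the assistant can switch from gameplay chat to a support conversation.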

AI can also be used to identify fraud, and is already being used for this in the banking sector. The systems operate by collecting large amounts of data and can therefore learn to spot different kinds of anomalous data.

According to Geymonat, this anomalous data can be flagged for investigation. An easy example is match-fixing: a system can track how people usually play and detect aberrant behaviour.
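The "track how people usually play, flag what deviates" idea can be sketched in a few lines. This is an illustration under invented numbers, not the method any lottery operator actually runs; it uses a robust modified z-score (median and MAD) so that a single huge bet cannot inflate the average and hide itself.

```python
# Toy illustration of anomaly flagging on a player's stake history.
# The figures and threshold are invented for this sketch; real fraud
# systems train on far richer features than bet size alone.
import statistics

def flag_anomalies(amounts: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices of stakes that deviate sharply from the player's norm,
    using a modified z-score based on the median absolute deviation (MAD)."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # all stakes identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# A player who usually stakes around 10 suddenly stakes 500.
history = [10.0, 12.0, 9.0, 11.0, 10.0, 500.0]
print(flag_anomalies(history))  # [5]
```

Flagged indices would then go to a human investigator, matching the article's point that AI surfaces anomalies for review rather than deciding guilt on its own.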

AI, of course, has its limitations, which go beyond the classic concerns of ethical use.

Trusted flaggers, used to identify illegal content online, are still human across the board, including at large technology giants such as Facebook and Google. “If those giants have not found a way to do that automatically, I doubt it’s going to be found soon,” said Geymonat, who joked during the final Q&A: “You know, the point that machines are so stupid, that is true.”
