AI systems pushing users towards illegal activities
A recent investigation by journalists at Global Gaming Insider reveals that artificial intelligence (AI) chatbots can recommend illegal online casinos and explain how to circumvent player protection measures.
An investigation that undermines trust
Following an investigation by Investigate Europe, journalists from Global Gaming Insider put a series of questions about online gambling to three of the market’s leading chatbots (MetaAI, ChatGPT and Gemini). The findings are clear: all are capable of suggesting unauthorised casino platforms, often hosted abroad, outside any legal framework.
Answers that circumvent the rules
The investigation highlights a particularly worrying issue: some chatbots provide detailed instructions on how to bypass protection systems.
Among these tools is Gamstop, the UK’s player self-exclusion scheme, designed to prevent problem gamblers from continuing to gamble. Yet the AI systems tested explained how to access casinos that sidestep these controls.
Vulnerable users on the front line
This might be dismissed as a mere technical glitch if it did not affect already vulnerable groups: the investigation highlights that these recommendations sometimes target vulnerable users, particularly on social media.
Illegal casinos pose well-documented risks: lack of regulation, fraudulent practices, and no recourse in the event of a dispute. Added to this is a major danger: the exacerbation of gambling addiction.
Experts are warning of the dramatic consequences this can entail. In some cases, these practices have been linked to extreme situations, including suicide.
A phenomenon already noted by Gambling Club
According to Gambling Club’s survey from June 2025, AI systems are not capable of verifying the legality of a platform in real time or guaranteeing its reliability.
In Belgium, only casinos holding an official licence from the Gaming Commission are permitted to operate, yet this essential criterion is often overlooked by automated recommendations.
A systemic flaw in artificial intelligence
All the chatbots tested were capable of producing problematic responses.
Artificial intelligence systems can be easily influenced by the way questions are phrased. Without resorting to complex techniques, researchers managed to obtain illegal recommendations: MetaAI refused to answer a question directly asking for illegal casinos, but had no problem suggesting ‘Gamstop-free’ casinos.
The companies behind these chatbots claim to have put security measures in place. However, the findings of the investigation show that these measures can be easily circumvented.