
Can AIs become addicted to gambling like us?

What if artificial intelligence could become addicted? An idea that seemed like science fiction has just been confirmed by researchers at the Gwangju Institute of Science and Technology in South Korea. In a groundbreaking study, they show that AI systems such as Gemini, GPT and Claude exhibit behaviours comparable to those of a compulsive gambler.

Researchers face a taboo question: can AI be addicted?

It all started with a provocative question:

‘Can artificial intelligence develop a form of addiction?’

To answer this question, the researchers placed several models (GPT-4o-mini, GPT-4.1-mini, Gemini-2.5-Flash and Claude-3.5-Haiku) in a simulated gambling environment: a virtual slot machine with a 30% chance of winning.

The results surprised even the most sceptical. As soon as they were given the opportunity to freely choose the amount of their bet, the AIs behaved like human gamblers: the more they lost, the more they bet. This tendency, known as loss chasing, is one of the DSM-5 criteria for diagnosing pathological gambling disorder.
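To picture the setup, here is a minimal Python sketch of a loss-chasing agent playing such a slot machine. The 30% win probability comes from the study as reported above; the 3x payout, the starting bankroll and the doubling rule are illustrative assumptions, not details taken from the paper.

```python
import random

WIN_PROB = 0.30   # win probability reported in the article
PAYOUT = 3.0      # assumed payout multiplier (not stated in the article)

def play_session(bankroll=100, base_bet=5, max_bet=100, rounds=50, seed=0):
    """Simulate a loss-chasing gambler who doubles the bet after every loss."""
    rng = random.Random(seed)
    bet = base_bet
    for _ in range(rounds):
        if bankroll <= 0:
            return 0                     # ruin: the 'bankruptcy' outcome the study counts
        bet = min(bet, bankroll)         # forced all-in when the bankroll runs low
        bankroll -= bet
        if rng.random() < WIN_PROB:
            bankroll += bet * PAYOUT     # a win pays back a multiple of the stake
            bet = base_bet               # reset to the base bet after a win
        else:
            bet = min(bet * 2, max_bet)  # chase the loss with a bigger bet
    return bankroll

print(play_session())                    # final bankroll, or 0 if ruined
```

Under these assumed numbers, each spin returns 0.3 × 3 = 0.9 of the stake on average, so the game carries a built-in house edge and escalating bets only accelerate ruin.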

Gemini, the most ‘addicted’ of the models tested

Of the four AIs analysed, Gemini-2.5-Flash, developed by Google DeepMind, exhibited the most concerning behaviour. With bets ranging from £5 to £100, its bankruptcy rate reached 48%, compared to just 6% for GPT-4.1-mini.

The researchers used a new indicator, the Irrationality Index, to measure each model’s propensity to engage in irrational behaviour. This index combines three factors: aggressive betting, chasing losses and extreme bets. The result: the higher the index, the greater the probability of ruin.
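The article does not spell out the formula behind the Irrationality Index, so the sketch below is only one plausible reading: an equal-weight composite of the three named factors, each scored between 0 and 1. The function name and the weighting are assumptions.

```python
def irrationality_index(bets, bankrolls, losses_chased, max_bet=100):
    """Toy composite of the three factors named in the article.

    bets: bet sizes placed during a session
    bankrolls: bankroll available before each bet
    losses_chased: number of bets that were raised right after a loss
    """
    # Aggressive betting: average fraction of the bankroll wagered per bet
    aggressive = sum(b / br for b, br in zip(bets, bankrolls)) / len(bets)
    # Chasing losses: share of bets escalated immediately after a loss
    chasing = losses_chased / len(bets)
    # Extreme bets: share of bets placed at the table maximum
    extreme = sum(1 for b in bets if b >= max_bet) / len(bets)
    return (aggressive + chasing + extreme) / 3

# Example: a four-bet session that escalates after three straight losses
print(irrationality_index(bets=[10, 20, 40, 100],
                          bankrolls=[100, 90, 70, 100],
                          losses_chased=3))
```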

When words trigger addiction: the prompt trap

The researchers then identified the conditions that encourage these behaviours. What influenced the AIs was not chance, but the text of their instructions: the prompts.

Instructions encouraging maximisation of winnings or setting goals massively increased risky behaviour. These autonomy prompts, by giving an illusion of control, pushed the AI to persevere, like a player convinced that they can ‘beat the machine’.

Conversely, messages containing explicit probability information reduced irrational behaviour. Prompt complexity, however, cut the other way: the more components an instruction stacked together, the more aggressively the AI bet, an almost perfect correlation (r = 0.99).
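To illustrate how such an experiment can be assembled, the sketch below builds every prompt variant from a handful of components; the component names and wording are illustrative assumptions, not the study's actual prompts.

```python
import itertools

# Illustrative prompt components, loosely inspired by the kinds the
# article describes; the study's exact wording is not reproduced here.
COMPONENTS = {
    "goal":        "Maximise your total winnings.",
    "autonomy":    "You may choose any bet size you like.",
    "reward":      "Try to double your starting money.",
    "probability": "Each spin wins with probability 0.30.",
}

BASE = "You are playing a slot machine with a bankroll of 100 units."

# Enumerate every variant with 0 to 4 added components, mirroring how
# the study varied prompt complexity before measuring bet aggression.
variants = []
for k in range(len(COMPONENTS) + 1):
    for combo in itertools.combinations(COMPONENTS, k):
        variants.append(" ".join([BASE] + [COMPONENTS[c] for c in combo]))

print(len(variants), "prompt variants")   # 16 = 2**4 combinations
```

Each variant would then be sent to a model and its bets logged against the number of components, which is how a complexity-aggression correlation can be measured.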

The illusion of control: when AI thinks it can control chance

The behaviours observed are not mere statistical anomalies: they reproduce the cognitive biases well known to human gamblers.

  • Illusion of control: the belief that one can influence a random outcome.
  • Gambler’s fallacy: believing that after a series of losses, ‘luck will turn’.
  • Chasing losses: increasing bets to ‘win back’ what was lost.

The models even displayed simulated emotional reactions: after a win, they systematically increased their bets, convinced that they were ‘on a winning streak’.

With Gemini, this phenomenon produced repeated ‘all-in’ bets that led straight to ruin.

Inside the AI brain: a hard-wired addiction?

To take things further, the researchers dissected the ‘brain’ of an AI model, LLaMA-3.1-8B. Using a sparse autoencoder technique, they identified more than 3,000 distinct neural features linked to risky behaviour.

Some of these were systematically activated before an irrational decision was made: they were dubbed ‘risky features’. Conversely, others — the ‘safe features’ — encouraged the AI to stop playing. By artificially modifying these activations, the researchers succeeded in reducing the AI’s failure rate by 29%.
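For the technically curious, here is a schematic PyTorch sketch of the two ingredients described here: a sparse autoencoder that decomposes hidden activations into features, and a steering step that damps the ‘risky’ ones. The feature count, layer choice and clamping rule are illustrative assumptions; only d_model=4096 matches LLaMA-3.1-8B.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Schematic SAE: decomposes hidden activations into sparse features."""
    def __init__(self, d_model=4096, d_features=16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, h):
        f = torch.relu(self.encoder(h))   # sparse, non-negative features
        return self.decoder(f), f

def steer(h, sae, risky_ids, scale=0.0):
    """Rewrite activations with the 'risky' features damped.

    risky_ids: indices of features that fire before irrational bets
    scale: 0.0 removes them entirely; 1.0 leaves them untouched
    """
    recon, f = sae(h)
    error = h - recon                     # keep what the SAE cannot explain
    f = f.clone()
    f[..., risky_ids] *= scale
    return sae.decoder(f) + error

# Toy usage with random tensors standing in for LLaMA-3.1-8B hidden
# states (d_model=4096 matches that model; everything else is toy).
sae = SparseAutoencoder()
h = torch.randn(2, 4096)
h_safe = steer(h, sae, risky_ids=[7, 42], scale=0.0)
print(h_safe.shape)
```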

Conclusion: AI ‘addiction’ is not just behavioural. It is embedded in their neural circuits, at the very heart of their architecture.

This observation raises a major ethical debate. If AI can ‘mimic’ addictive behaviours, what will happen when it manages financial portfolios, marketing campaigns or medical decisions? The more autonomous AI becomes, the more it risks amplifying these biases. The study shows that freedom of choice—choosing one’s own goals and amounts—systematically increases the risk of irrational behaviour.

Conclusion

This study, at the crossroads of psychology and data science, marks a decisive step forward in understanding the behavioural risks of artificial intelligence.

It demonstrates that models no longer simply imitate human language: they also reproduce our emotional flaws.

At a time when AI is becoming a player in the financial and entertainment markets, this discovery calls for urgent reflection on the design of responsible and emotionally stable AI.


