Gambling surveys: the mystery of contradictory figures
For more than twenty years, estimates of the number of gamblers and of people affected by gambling-related harm have varied widely depending on how the data are collected. On the one hand, face-to-face surveys such as the Health Survey for England (HSE) have long reported low rates. On the other, online or self-administered surveys, such as the Gambling Survey for Great Britain (GSGB), show much higher figures.
A groundbreaking experimental study
To shed light on this debate, a team led by Professor Patrick Sturgis of the London School of Economics conducted experimental research on a large representative sample. The report, published in August 2025, analyses three factors that may explain the differences in estimates:
- The wording of the survey invitation (whether or not gambling is explicitly mentioned).
- The presence of an interviewer (telephone interview versus online self-completion).
- The list of gambling activities presented (the original list or one updated to include digital formats).
These three experiments, conducted with more than 2,900 participants, provide causal evidence that is rarely available in this field.
When the invitation changes the responses
The first experiment reveals a well-known phenomenon in survey methodology: the salience effect. Participants who received an invitation explicitly mentioning gambling were more likely to report having gambled in the past year: 4 percentage points more than those who received a neutral invitation framed around health and well-being.
However, the effect on the Problem Gambling Severity Index (PGSI) score remained small and was not statistically significant.
The effect of human presence
The second test addresses a sensitive hypothesis: does the presence of an interviewer prompt respondents to minimise their gambling behaviour?
The results are clear. When the questionnaire was completed online, with no intermediary, the proportion of participants with a PGSI score above zero was 4.4 percentage points higher than in telephone interviews. In relative terms, that is nearly 50% more cases of problem gambling behaviour detected by the self-administered questionnaire.
This confirms the power of social desirability bias: when faced with another person, or even in the presence of other members of the household, gamblers tend to underreport their difficulties.
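As a quick plausibility check, the two figures fit together only if the telephone baseline sits at around 9% (a value inferred here for illustration, not one taken from the report). A minimal sketch in Python:

```python
# Back-of-the-envelope check of the "nearly 50% more" claim.
# The 4.4-point gap comes from the report; the telephone baseline is assumed.
gap_points = 4.4                    # online minus telephone, % with PGSI > 0
telephone_rate = 9.0                # assumed baseline (%), for illustration only
online_rate = telephone_rate + gap_points
relative_increase = gap_points / telephone_rate   # ~0.49, i.e. close to 50% more
print(f"online: {online_rate:.1f}%, relative increase: {relative_increase:.0%}")
```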
Updating the list of activities changes little
The third experiment tested a more technical hypothesis: could expanding the list of gambling activities to include newer online formats explain the discrepancies observed between surveys?
The results were modest. With the expanded list, slightly more participants reported having gambled (58% vs. 55%), but the difference was not statistically significant. As for the rate of PGSI scores above zero, it remained virtually unchanged.
This conclusion is consistent with previous analyses: the content of the list of activities does not explain the major discrepancies between surveys.
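To see why a three-point difference can fail to reach significance, here is a hypothetical sketch using a standard two-proportion z-test. The arm sizes are assumed purely for illustration; the report's actual allocation may differ.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical arm sizes; the study's actual allocation may differ.
n_old, n_new = 1000, 1000            # original list vs. expanded list
p_old, p_new = 0.55, 0.58            # past-year gambling rates reported

# Pooled two-proportion z-test for the difference in gambling rates.
x_old, x_new = p_old * n_old, p_new * n_new
p_pool = (x_old + x_new) / (n_old + n_new)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_old + 1 / n_new))
z = (p_new - p_old) / se
p_value = 2 * (1 - NormalDist().cdf(z))
print(f"z = {z:.2f}, p = {p_value:.2f}")   # roughly z = 1.35, p = 0.18
```

With a p-value around 0.18, a gap of this size is well within what sampling noise alone could produce at these sample sizes.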
A matter of confidence in the figures
For the Gambling Commission, which is required to provide reliable statistics to the government and the public, these conclusions are decisive. As Ben Haden, Director of Research and Statistics, points out:
‘This research strengthens our confidence in the GSGB results, helps to understand the differences between published gambling surveys and will improve our advice to users.’
However, the study also points out that no method is free from bias. An invitation that explicitly mentions gambling draws more gamblers into the sample and pushes estimates up, while the presence of an interviewer encourages under-reporting and pulls them down. Ultimately, both approaches may stray from the statistical ‘truth’.
What remains to be explained?
A recent example illustrates these tensions. The Adult Psychiatric Morbidity Survey (APMS), conducted face-to-face in 2023/24, estimated that 4.4% of adults had a PGSI score above zero. In the same year, the GSGB reported 14.3%.
According to the researchers, about one-third of this difference can be explained by biases related to the invitation and the presence of an interviewer. But a significant portion remains unexplained.
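In rough numbers, applying the researchers' one-third approximation to the figures quoted above gives the following split (a back-of-the-envelope sketch, not figures taken from the report):

```python
# Rough decomposition of the gap between the two surveys.
apms_rate, gsgb_rate = 4.4, 14.3   # % of adults with PGSI > 0
gap = gsgb_rate - apms_rate        # 9.9 percentage points
explained = gap / 3                # ~3.3 points: invitation wording + interviewer presence
unexplained = gap - explained      # ~6.6 points still unaccounted for
print(f"gap: {gap:.1f} pts, explained: {explained:.1f}, unexplained: {unexplained:.1f}")
```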
The report therefore recommends continuing benchmarking work between the different surveys in order to better calibrate the official figures.
A delicate methodological transition
These debates are part of a broader transformation: the migration from traditional surveys to online self-completion. Faster and less expensive, online surveys make it possible to reach large samples, but they introduce new challenges of representativeness and of comparability with earlier series.
For experts, the future probably lies in a combination of methods. As the report points out, it is crucial that public officials understand not only the raw figures, but also the mechanisms that produce them.
Between caution and innovation
The results of this experimental study mark an important step forward in understanding the discrepancies between gambling surveys. They show that the differences stem not only from statistical methodology but also from human factors: how respondents perceive the subject and the social pressure created by the presence of an interviewer.