To the Editor: Underplaying the effect of selection and response bias on survey results undermines research and damages public trust in science. Cross‐sectional surveys are a notoriously inaccurate method for determining the prevalence of a condition in the general population.1 This is because such surveys are highly vulnerable to both selection and response bias, which can lead to dramatic overestimation of the true prevalence.
The finding reported by Woldegiorgis and colleagues,2 that “18% of people infected with the Omicron variant reported symptoms consistent with long COVID [post‐coronavirus disease 2019 (COVID‐19) condition] 90 days after infection”, is based on a survey with a 34% consent rate and a 51% response rate. Of 70 876 people with reported severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) infections, only 2130 participants reported symptoms consistent with long COVID. It is likely that individuals with symptoms were more inclined to consent to the survey and to complete it, meaning that the true prevalence could theoretically be as low as 3% (2130/70 876). This figure is more consistent with estimates from studies using more complete, prospectively collected data.
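To make this bound explicit, the figures above can be combined in a short back‐of‐envelope calculation. The sketch below is illustrative only (not a re‐analysis of the study data); it uses the rounded consent and response rates quoted above, so the prevalence it yields among respondents is close to, but not exactly, the 18% headline figure.

```python
# Illustrative worst-case bound using the figures quoted in the letter.
# Small discrepancies arise because the consent and response rates are rounded.

notified_infections = 70_876   # people with reported SARS-CoV-2 infections
consent_rate = 0.34            # proportion who consented to the survey
response_rate = 0.51           # proportion of consenters who responded
reported_long_covid = 2_130    # respondents reporting symptoms consistent with long COVID

# Respondents implied by the quoted rates (~12 300)
respondents = notified_infections * consent_rate * response_rate

# Prevalence among respondents (~17-18%, the headline figure)
prevalence_among_respondents = reported_long_covid / respondents

# Theoretical lower bound if every non-respondent were symptom-free (~3%)
lower_bound = reported_long_covid / notified_infections

print(f"Implied respondents: {respondents:.0f}")
print(f"Prevalence among respondents: {prevalence_among_respondents:.1%}")
print(f"Theoretical lower bound: {lower_bound:.1%}")
```

The gap between the roughly 18% among respondents and the 3% lower bound is entirely a consequence of who consented and responded, which is the crux of the argument above.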
Although the limitations of the survey are acknowledged by the authors in the discussion section of their article, “abstract bias” (favourable interpretation of data without mention of limitations) and “media release bias” (oversimplified or exaggerated findings) often result in only the headline figure being reported by the media, ignoring potential biases.3 This unbalanced media reporting can damage public trust in science, particularly when the headline result might not pass the “pub test”:4 given that practically everyone has had COVID‐19, many people might be surprised by the implication that nearly 1 in 5 of the entire population has had long COVID. Aside from eroding public confidence in science, such media reports can provoke unnecessary fear and undermine recognition of the genuine issues faced by those with long COVID.
- 1. Curtis N. Surveys with a low response rate are unreliable for estimating prevalence. Pediatr Infect Dis J 2025; 44: e66‐e68.
- 2. Woldegiorgis M, Cadby G, Ngeh S, et al. Long COVID in a highly vaccinated but largely unexposed Australian population following the 2022 SARS‐CoV‐2 Omicron wave: a cross‐sectional survey. Med J Aust 2024; 220: 323‐330. https://www.mja.com.au/journal/2024/220/6/long-covid-highly-vaccinated-largely-unexposed-australian-population-following
- 3. Curtis N. Rapid response to Mahase E. Bad data is worse than no data: sensationalised headlines reporting surveys with low response rates risks eroding public trust in science [electronic comment to the Editor]. BMJ 2024; 385: q856.
- 4. Curtis N. Rapid response to Kmietowicz Z. Nearly all doctors have faced a complaint in their career, survey finds [electronic comment to the Editor]. BMJ 2021; 372: n707.
Nigel Curtis is supported by a National Health and Medical Research Council Investigator Grant (GNT1197117).