Influenza surveillance in Australia: we need to do more than count

Stephen B Lambert, Cassandra E Faux, Kristina A Grant, Simon H Williams, Cheryl Bletchly, Michael G Catton, David W Smith and Heath A Kelly
Med J Aust 2010; 193 (1): 43-45. doi: 10.5694/j.1326-5377.2010.tb03741.x
Published online: 5 July 2010

The recent pandemic (H1N1) 2009 influenza outbreak has highlighted the importance of timely surveillance data to monitor epidemiological trends for guiding public health control measures.1 High-quality surveillance data are needed to gauge the timing and peak of the influenza season, and their collection is an important pandemic preparedness activity.

Current influenza surveillance and data interpretation

Laboratory-confirmed influenza became notifiable in most Australian states and territories in 2001,2 and is now nationally notifiable. In theory, it should now be possible to compare influenza activity across the country. States and territories also conduct surveillance for influenza-like illness (ILI) during the influenza season using sentinel sites. Results from one such system have provided important early findings about pandemic (H1N1) 2009 influenza.1 However, the type of data and the way they are collected vary throughout the nation, resulting in a fragmented surveillance system.3 Comprehensive sentinel systems require committed general practitioners and a concerted effort to establish and maintain. Australia should aspire to a uniform, national sentinel surveillance system, although funding and long-term maintenance issues would need to be addressed. Alternative methods of capturing influenza information include the online survey “FluTracking”4 and Google’s “Flu Trends”.5 The former currently lacks national coverage, and neither system incorporates laboratory confirmation, meaning “false alarms” caused by other respiratory viruses may occur.

Given weaknesses in current data and the effort required to develop new, stand-alone surveillance mechanisms, it is important that we maximise the information that can be obtained from existing data collection systems that report laboratory-confirmed influenza cases.

According to national surveillance data on laboratory-confirmed influenza, Queensland has had the most severe influenza seasons of all Australian states and territories in recent years (Box, A).7,8 Australian data available at Google’s Flu Trends do not support differences among states in ILI activity,5 and there is no clear reason why Queensland should consistently suffer disproportionate effects of influenza compared with other states. It is likely, therefore, that this finding is influenced by information bias.

To explore this possibility, we examined test result data for 2004–2008 from large public health laboratories in three states: Queensland (Queensland Health [QH] public laboratories); Victoria (Victorian Infectious Diseases Reference Laboratory [VIDRL]); and Western Australia (PathWest Laboratory Medicine at the Queen Elizabeth II Medical Centre).

Over time, all three laboratories increased influenza testing (Box, B), but with different patterns. Queensland had the highest number of tests each year, with a consistent increase over the 5 years, while Victoria and WA showed slower growth but stepwise increases in 2007. A severe influenza season in 2007 saw deaths reported in healthy children across the country,9 including deaths in Queensland10 and three early-season deaths of young children in WA.11 In that year, all three laboratories reported increased numbers of laboratory-confirmed cases of influenza (Box, A). The consistently higher and increasing test numbers in Queensland may be due to several factors, including active promotion by public health authorities of influenza testing by GPs,12 increased use of point-of-care testing, and widespread availability of highly sensitive molecular testing, with rapid turnaround, at both public and private laboratories.

Each state’s data show a concordance between the amount of testing performed and the number of positive results (Box, A and B). One method to compensate for the impact of testing behaviour is to calculate the proportion of positive results, reducing the influence of the number of tests performed on absolute counts. Regular calculation of this value shows a remarkable correlation between the timing and peak of the season at each of the laboratories (Box, C); the correlation is independent of variations in testing (Box, B). Source of notification — inpatient, outpatient, or sentinel surveillance — was not available for each laboratory, but where it was, removing sentinel specimens made no difference to the conclusions drawn (data not shown).
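As a minimal sketch of this calculation (in Python; the monthly figures below are hypothetical, not the actual laboratory data summarised in the Box), the proportion of positive results is simply the number of positives divided by all tests performed in each period:

```python
# Illustrative sketch only: monthly totals and positives are hypothetical,
# not the actual Queensland, Victorian or WA laboratory data in the Box.
monthly_counts = {
    "2007-06": {"tests": 1200, "positive": 90},
    "2007-07": {"tests": 2100, "positive": 430},
    "2007-08": {"tests": 2600, "positive": 640},
    "2007-09": {"tests": 1500, "positive": 180},
}

for month, counts in sorted(monthly_counts.items()):
    # Proportion positive = positive results / all tests performed;
    # dividing by total tests dampens the effect of testing volume on raw counts.
    proportion = counts["positive"] / counts["tests"]
    print(f"{month}: {proportion:.1%} positive ({counts['positive']}/{counts['tests']})")
```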

There were different patterns in the proportion of annual state notifications that each laboratory provided (Box, D).13 QH laboratories had the highest increase in testing, but their contribution to state notifications was low and flat during the period, consistent with substantial and increasing testing in private laboratories. VIDRL’s contribution was initially high but fell quickly, then stepwise, throughout the 5 years, suggesting an increasing role of other public and private providers. PathWest provided about half of the WA notifications early in the period, but contributed more than half in 2007, probably due to increased testing related to high influenza activity and community concern fuelled by the childhood deaths.

Arguments for and against

The notification of influenza-negative test data will not cure all the ills of influenza surveillance — no surveillance mechanism is perfect — but it would provide improved, nationally consistent data and could be implemented easily and quickly. This may be of particular value given concern about the expected resurgence of pandemic influenza in 2010. Total test numbers could be captured from electronic databases in laboratories, minimising implementation costs and providing an ongoing sustainable data source. Public health units would require only an initial time outlay to modify receipt and handling of notifications, and associated data analysis. The total number of tests and the proportion of positive results would both be published and would provide a more robust system for comparing influenza activity across time and regions.

The proportion of positive test results from sentinel practice surveillance samples has been used for monitoring overseas, including by the United Kingdom’s Health Protection Agency14 and the European Influenza Surveillance Network.15 But such data are only available where a sentinel surveillance system is in place. The proportion of positive test results was used recently in Victoria to describe the influenza season for the pandemic (H1N1) 2009 outbreak,1 and in the United States to examine influenza vaccine effectiveness for preventing death in older people, using a 10% cut-off to define the season.16 Further, a Canadian paper defined periods of peak influenza activity as those months when the percentage of positive test results exceeded 7%,6 giving a mean influenza season of 3 months. Using this measure, the duration of annual influenza seasons in Queensland, Victoria and WA ranged from 2 to 4 months (Box, C).
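Continuing the illustrative sketch, the Canadian-style season definition is a simple threshold rule; the 7% cut-off comes from reference 6, while the monthly proportions below are again hypothetical:

```python
# Season definition using the 7% cut-off from the Canadian study (reference 6).
# Monthly proportions of positive influenza tests are hypothetical.
monthly_proportion = {
    "2007-05": 0.02, "2007-06": 0.08, "2007-07": 0.21,
    "2007-08": 0.26, "2007-09": 0.12, "2007-10": 0.04,
}

THRESHOLD = 0.07  # months above this proportion count as the influenza season

season = [month for month, p in sorted(monthly_proportion.items()) if p > THRESHOLD]
print(f"Season: {season[0]} to {season[-1]} ({len(season)} months)")
```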

Calculating the proportion of all positive test results would reduce bias caused by variation in the number of tests performed, but this value may not be completely free of bias itself. For example, in many laboratories, including our own,17,18 specimens submitted for any respiratory virus polymerase chain reaction (PCR) test are subjected to a panel of assays. This may mean that during an outbreak of another respiratory virus, such as respiratory syncytial virus, influenza testing (and testing for all viruses on the panel) may be increased. If influenza circulation is unchanged, the extra tests would lower the proportion of influenza-positive results. This reduction, caused by a testing artefact, would be misleading, but such a bias, we argue, would need to be persistent, strong and go unrecognised to result in problems as serious as those caused by interpreting notification data that do not include negative test values.
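To make the dilution artefact concrete with hypothetical numbers (none of these figures come from the laboratories studied):

```python
# Hypothetical illustration of the panel-testing dilution artefact.
baseline_tests, baseline_flu_pos = 1000, 100    # 10% of panels influenza-positive

# An RSV outbreak doubles panel requests; influenza circulation is unchanged,
# so the number of influenza-positive results stays roughly the same.
outbreak_tests, outbreak_flu_pos = 2000, 100

print(f"Baseline: {baseline_flu_pos / baseline_tests:.0%} positive")         # 10%
print(f"During RSV outbreak: {outbreak_flu_pos / outbreak_tests:.0%}")       # 5%
# The halved proportion reflects extra testing, not a fall in influenza activity.
```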

Although use of PCR is increasing, different laboratories use different methods to test for influenza, and even the same method in different settings may produce different results. These site and regional variations in sensitivity and specificity could be accounted for by capturing information about the actual test being performed as a notified field.
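One hypothetical shape for such a notification record (field names are ours, for illustration only; no jurisdiction’s actual notification schema is implied):

```python
# Hypothetical notification record that includes negative results and the
# test method as a notified field; all field names are illustrative.
notification = {
    "specimen_collected": "2008-07-14",
    "laboratory": "Example Public Laboratory",
    "test_method": "influenza A/B PCR",  # allows adjustment for assay sensitivity
    "result": "negative",                # negatives notified alongside positives
}
print(notification["test_method"], notification["result"])
```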

Electronic notification is increasingly common but not universal, and some laboratories may find reporting negative test results burdensome. In this case, notifiability of negative results could be delayed and a limited system of reporting negative test results could be implemented, focusing on large laboratories where such a data request would be reasonable.

Conclusions

High-quality epidemiological surveillance is the cornerstone of monitoring seasonal activity of influenza and identifying new trends, including the emergence of potentially pandemic strains. It seems remarkable that, given the importance of these data, we currently have no way to account for regional or temporal variations in the number of tests performed.

Making all laboratory tests for influenza notifiable — with the addition of negative-test result reporting — may not result in a perfect system entirely free from residual bias but, as outlined, we believe it would be simple and rapid to introduce. Such a system would add to, and be better than, our current system of counting only the positive results.

Complex models have been suggested for monitoring influenza surveillance data in real time.14 There are myriad ways in which influenza surveillance could be improved in Australia, such as implementing a GP- and hospital-based, uniform, nationwide, laboratory-supported sentinel surveillance scheme, and reporting influenza-related mortality in key age groups. We strongly support such proposals, but implementing negative-test result reporting should not be deferred while other reforms are being considered.

To better understand influenza and its impact, both in seasonal and pandemic periods, we believe it is time to do more at a national level than count.

  • Stephen B Lambert1,2
  • Cassandra E Faux1,2
  • Kristina A Grant3
  • Simon H Williams4
  • Cheryl Bletchly5
  • Michael G Catton6
  • David W Smith4
  • Heath A Kelly3

  • 1 Queensland Paediatric Infectious Diseases Laboratory, Royal Children’s Hospital, Brisbane, QLD.
  • 2 Clinical Medical Virology Centre, Sir Albert Sakzewski Virus Research Centre, University of Queensland, Brisbane, QLD.
  • 3 Epidemiology Unit, Victorian Infectious Diseases Reference Laboratory, Melbourne, VIC.
  • 4 PathWest Laboratory Medicine WA, Perth, WA.
  • 5 Molecular Diagnostic Unit/Virology, Clinical and Statewide Services Division, Queensland Health, Brisbane, QLD.
  • 6 Victorian Infectious Diseases Reference Laboratory, Melbourne, VIC.


Correspondence: sblambert@uq.edu.au

Competing interests:

None identified.
