
Measurement of the safety and quality of health care

Sarah Scobie, Richard Thomson, John J McNeil and Paddy A Phillips
Med J Aust 2006; 184 (10): S51. doi: 10.5694/j.1326-5377.2006.tb00363.x
Published online: 15 May 2006
Limitations of measurement

Just as in clinical medicine, any measurement of safety and quality is useful only if it actually measures what it is supposed to, and is used and interpreted correctly. The National Health Performance Committee is a committee of the Australian Health Ministers’ Advisory Council, whose role is to develop and maintain a national framework of performance measurement for the health system; to establish and maintain national performance indicators within the national performance measurement framework; to facilitate benchmarking for health system improvement; and to report on these to the annual Australian Health Ministers’ Conference. The Committee has developed criteria for selecting health performance indicators (Box 3).5 To avoid duplication of effort, indicators should use existing datasets whenever possible.

Nevertheless, it is important to remember that many measures of safety and quality of health care are relatively inexact, and so should not be interpreted as a conclusive picture of an individual’s, an agency’s, or even a system’s performance. An indicator is not an absolute measure of quality or safety, but rather can act as a screen to determine or identify areas for further local analysis. While data can be collated, analysed and fed back centrally, it is only at a local level that the underlying reasons for a particular result (eg, rate of surgical-site infections) can be truly explained, and changes made to improve practice.6 Thus, indicators are a tool to encourage performance improvement and to identify areas worthy of further study; they are typically hypothesis-generating rather than hypothesis-proving.

In clinical terms, measurements of health care safety and quality may be useful for screening and ruling out a problem, for diagnosing a problem, and for monitoring progress.7 However, use of a screening measure to diagnose poor quality will produce “false positives”; equally, use of a highly specific diagnostic indicator to rule out problems will produce “false negatives”. Either way, the measure is not useful. False positives with screening tools such as raw mortality cause much anxiety when interpreted to indicate a health system problem, rather than a need to focus more closely to determine whether a problem really exists. For these reasons, a variety of risk adjustments have been developed to make raw data more meaningful to the types of patients seen and to adjust for factors over which clinicians have no control, including sociodemographic and clinical characteristics (eg, age, sex, socioeconomic status, comorbidities, physiological variables, and emergency versus planned status). These adjustments make the raw statistic more specific and meaningful when deciding whether there is a problem. However, as risk adjustment has limitations and can adjust only for known confounders, it seems highly unlikely that it can ever fully compensate for the effects of casemix variables, so that remaining variation reflects quality of care alone.8,9
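To make the idea of risk adjustment concrete, the short Python sketch below illustrates indirect standardisation: a model predicts each patient's risk of the outcome from casemix factors, the predicted risks are summed to give the expected number of events, and the observed count is compared with that expectation. The model coefficients, variable names and patient records are hypothetical assumptions for illustration only; they are not drawn from this article or from any published risk model.

# Minimal, illustrative sketch of indirect standardisation (risk adjustment).
# All coefficients and patient records below are hypothetical assumptions,
# not values from the article or any real model.
import math

# Assumed logistic-model coefficients for the log-odds of in-hospital death.
INTERCEPT = -5.0
COEFFS = {
    "age_per_year": 0.04,      # older patients carry higher assumed baseline risk
    "emergency": 1.2,          # emergency versus planned admission
    "comorbidity_count": 0.35, # simple comorbidity burden score
}

def expected_risk(patient):
    """Predicted probability of death given casemix factors (hypothetical model)."""
    log_odds = (INTERCEPT
                + COEFFS["age_per_year"] * patient["age"]
                + COEFFS["emergency"] * patient["emergency"]
                + COEFFS["comorbidity_count"] * patient["comorbidities"])
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical patient-level data for one hospital: casemix factors plus outcome.
patients = [
    {"age": 82, "emergency": 1, "comorbidities": 3, "died": 1},
    {"age": 45, "emergency": 0, "comorbidities": 0, "died": 0},
    {"age": 70, "emergency": 1, "comorbidities": 2, "died": 0},
    {"age": 63, "emergency": 0, "comorbidities": 1, "died": 0},
]

observed = sum(p["died"] for p in patients)
expected = sum(expected_risk(p) for p in patients)

# The observed-to-expected ratio screens for a possible problem; it does not
# diagnose one, because only known confounders are adjusted for.
print(f"Observed deaths:  {observed}")
print(f"Expected deaths:  {expected:.2f}")
print(f"O/E ratio:        {observed / expected:.2f}")

In keeping with the screening role described above, an observed-to-expected ratio well above 1 would prompt closer local review rather than a judgement that care is poor.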

Other problems arise in understanding when a change in an event rate is a real improvement or deterioration, especially when the event is uncommon. Frequency charts of the raw number of events occurring over time (time series or “saw tooth” charts) are rarely helpful, because of underlying background variation and small numbers of adverse events. Statistical process control methods (eg, exponentially weighted moving averages, process control limits, and cusum analyses) developed in laboratory science for quality control make these fluctuations more interpretable. For example, cusum analysis was used recently to better understand bed occupancy and to plan medical and surgical admissions with the aim of improving access to health care,10 while statistical process control methods have been used to identify outliers more reliably.11,12 Such methods are now routinely used for reporting safety, quality and administrative data at Flinders Medical Centre and the Repatriation General Hospital, Adelaide.
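As a simple illustration of these methods, the Python sketch below applies two of them to a series of monthly adverse-event counts: Shewhart-style control limits (a c-chart) and a one-sided cusum that accumulates counts in excess of a reference value until a decision interval is crossed. The monthly counts and the cusum parameters are hypothetical assumptions, and the charts are deliberately simplified relative to the methods used in the cited studies.

# Minimal sketch of two statistical process control ideas mentioned above:
# a c-chart (3-sigma control limits on event counts) and a one-sided cusum
# for detecting an upward shift. The monthly adverse-event counts and the
# cusum reference and decision values are hypothetical assumptions.
import math

monthly_events = [3, 2, 4, 3, 2, 3, 5, 4, 6, 7, 6, 8]  # assumed monthly counts

# --- c-chart: flag months above mean + 3*sqrt(mean) ---
baseline = monthly_events[:6]              # assume the first 6 months are "in control"
c_bar = sum(baseline) / len(baseline)
upper_limit = c_bar + 3 * math.sqrt(c_bar)

# --- one-sided cusum: accumulate excess over a reference value k ---
k = c_bar + 0.5          # reference value (allowable slack above the baseline mean)
h = 4.0                  # decision interval; crossing it signals a sustained shift
cusum, signals = 0.0, []
for month, count in enumerate(monthly_events, start=1):
    cusum = max(0.0, cusum + (count - k))
    shewhart_flag = count > upper_limit
    cusum_flag = cusum > h
    if shewhart_flag or cusum_flag:
        signals.append(month)
    print(f"month {month:2d}: count={count}  cusum={cusum:4.1f}"
          f"  c-chart flag={shewhart_flag}  cusum flag={cusum_flag}")

print("months signalling a possible change:", signals)

In this invented series, the cusum signals a sustained upward shift several months before any single month breaches the control limit, which is the behaviour that makes cusum methods attractive for monitoring uncommon events.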

The final challenge with measures of health care quality and safety is ensuring that they are timely and repeatable. All measurement and clinical practice change is ultimately individual and local. If results are to support change, they must be reported to those who use them in a way that is relevant to current practice. While system-wide measures might be ideal to ensure equity of safety and quality, and to monitor effects of broader-scale or longer-term initiatives (eg, via national agencies such as the Australian Institute of Health and Welfare), the coordination and standardisation of their collection, submission, analysis and publication make timeliness difficult and decrease their usefulness for local clinicians. Similarly, measures are most useful when they can be repeated after practice change, to determine its effects. The development of such key performance indicators is fundamental to any clinical practice improvement or innovation. This has been highlighted by recent studies which demonstrate evidence-to-practice gaps in virtually every area of health care.13-16 By developing measures that are timely, can be replicated, and inform understanding of the quality of care, local change initiatives can lead to dramatic improvements in care.17-19

Barriers to measurement

Recently, Wilson and Van Der Weyden called for better systems by which we can understand how our health system is performing.20 Ways of measuring processes, outcomes and the culture of health care are well described and freely available.4 However, the most fundamental barrier to better measurement seems to be our failure to invest in these systems as part of the health care structure, in the way we have invested in, for example, financial management systems. Gathering data on health care system safety and quality measures that are structural, valid, reliable, accurate, timely, collectable, meaningful, relevant and important requires resources, which are still lacking. However, the situation is changing rapidly with the introduction of nationally agreed requirements for health care incident reporting systems, sentinel event reporting, and a variety of morbidity and procedure registries, and with development of a national minimum dataset for safety and quality through the ACSQHC and its successor, the Australian Commission on Safety and Quality in Health Care.

Similarly, professional bodies, such as the Royal Australasian College of Surgeons (RACS) Section of Breast Surgery and the Australasian Society of Cardiac and Thoracic Surgeons, have introduced systems of performance reporting, feedback and improvement for their members. Indeed, participation in the process is required for membership of the Section of Breast Surgery. The Cardiac and Thoracic Surgeons’ national reporting system is not mandatory, with eight hospitals participating in 2005, and another six to join in 2006. A major issue is the $15 000–$20 000 recurrent cost per hospital needed for data collection. Clearly, such initiatives are resource intensive and, like financial management systems, require structural investment.

International initiatives: the UK National Patient Safety Agency

The UK National Health Service (NHS) has also realised the need for a systematic approach to improving patient safety and has established the National Patient Safety Agency (NPSA) to bring together information to quantify, characterise and prioritise patient safety issues. A core function of the NPSA is the development of the National Reporting and Learning System to collect reports of patient safety incidents from all service settings across England and Wales, and to learn from these reports, including developing solutions to enhance safety.21,22

However, it is recognised that incident reporting on its own cannot reveal a complete picture of what does, or could, lead to patient harm. Incident reporting systems are not comprehensive, because of under-reporting, biases in what types of incident are reported,23 and the multiplicity of reporting systems. For example, in addition to the National Reporting and Learning System, the UK has separate reporting systems for medical device incidents, adverse drug reactions, health care-associated infections, and maternal and infant deaths. Furthermore, as serious events are rare, and information on them is distributed across the health care system, better use needs to be made of data collections already in existence, even if such collections were designed for different purposes.

Recognition of the need to access a range of data sources led the NPSA in 2004 to set up a Patient Safety Observatory in collaboration with partners from both within and outside the NHS. These include key national organisations, such as the Healthcare Commission (an independent body set up to improve health services in England), the Office for National Statistics, the Medicines and Healthcare products Regulatory Agency, which regulates medicines and medical devices in the UK, patient organisations such as Action against Medical Accidents, the NHS Litigation Authority, and medical defence organisations.24,25 The Observatory enables the NPSA to draw on a wide range of data and intelligence, including clinical negligence claims, complaints, and routine data from a range of sources about complications of clinical care. These form the basis for identifying and monitoring patient safety incident trends, highlighting areas for action, and setting priorities. Examples of NPSA Observatory activities are shown in Box 4.

This type of approach is well established in the UK for public health, with a network of regional public health observatories tasked with providing health intelligence to support the monitoring and assessment of population health.29 Setting up similar networks in Australia, with its smaller population base, would be relatively more costly, and would need to be done efficiently and, where possible, within expanded and strengthened existing organisations.

The biases within incident reporting systems also provide challenges for their use to compare or evaluate safety across institutions. Thus, a hospital may have more reports as a result of a better developed reporting culture: for example, incident reporting rates for acute trusts in England vary by a factor of seven, from 1.8 to 12.4 incidents per 100 admissions,24 but this is likely to reflect differences in completeness of reporting and artefacts of the reporting process, rather than differences in the occurrence of incidents. Using comparative data on reporting rates is thus highly problematic and even counterproductive, if external judgements about safety are crudely made on the basis of reporting rates.

The NPSA Observatory faces other challenges to integrating data from a range of sources. Many of the data sources with potential for assessing patient safety are collected for other purposes, and there may be limitations to their use. For example, a study of the value of clinical negligence data to assess safety encountered issues of confidentiality, data quality and completeness, and the resources needed to extract relevant information.30 The NPSA is working with the relevant organisations in England and Wales to develop a more consistent approach to collecting data about clinical negligence that will support patient safety.

Conclusion

Clinicians in practice today are generally well educated in the basics of clinical data assessment, with many having participated in clinical research activities. Especially in teaching hospitals, there is a sophisticated appreciation of the science of clinical measurement and its strengths and weaknesses. In the past, a major barrier to quality improvement activities has been the poor quality of data presented to clinicians purporting to represent indicators of performance.

In the future, the potential to engage clinicians in quality improvement activities will require information that is respected for its accuracy, relevance and impartiality. Clinicians will need training in the use of measurement to improve health care safety and quality, just as they require training in the use of clinical diagnostic tests. Already many examples exist, ranging from local initiatives7,12,13 to national procedure registries and disease databases, which demonstrate that clinicians are interested in this issue and that they respond positively to trusted performance data that are methodologically sound, risk-adjusted and timely. Clearly, in the short term, we could all do more to understand and improve what we do with the measures, techniques and skills that are already available. However, for the longer term, investment is needed to extend the required measures and skills widely and systematically through our health care system, especially where the financial and human costs and consequences of variable performance are high. It is hoped that we will be able to redress these deficiencies in a much shorter time than has elapsed since Florence Nightingale identified them.

1 Potential measures of health care quality and safety [box content not reproduced; it grouped quantitative measures and semi-quantitative and qualitative assessments. ICD-10-AM = Australian modification of the International statistical classification of diseases and health related problems, 10th revision.]

3 Criteria developed by the National Health Performance Committee (NHPC) for health performance indicators5

Generic indicators

Generic indicators for use at any level, from program to whole-of-system, should have all or some of the following qualities. They should:

Additional selection criteria specific to NHPC reporting

In addition to the above general criteria, NHPC selection criteria should also:

General approach to indicator selection or development

In selecting or developing relevant indicators of health system performance, it is important to keep in mind that indicators are just that — an indication of organisational achievement. They are not an exact measure, and individual indicators should not be taken to provide a conclusive picture of an agency’s or system’s achievements.

A suite of relevant indicators is usually required, followed by an interpretation of their results. Performance information does not exist in isolation and is not an end in itself, but rather provides a tool that allows opinions to be formed and decisions made. Some indicators should be ratios of output/input, and outcome/output.

4 Examples of activities of the UK National Patient Safety Agency (NPSA) Observatory

A rare issue of patient safety — tracheostomy

Concern about the care of patients with tracheostomies transferred from an intensive care unit to a general ward led the Patient Safety Observatory to collate information from a range of sources:

  • The National Reporting and Learning System had received reports of 36 incidents involving tracheostomies, including one death, between November 2003 and March 2005.
  • The National Health Service (NHS) Litigation Authority indicated that there had been 45 litigation claims involving tracheostomy or tracheostomy tubes from February 1996 to April 2005, of which 13, including seven deaths, related to the management of tracheostomy tubes.
  • The Medicines and Healthcare products Regulatory Agency had received reports of 10 similar incidents since 1998.
  • Analysis of hospital episode data showed an increase in both the number of tracheostomies performed in the previous 5 years and the proportion of patients with a tracheostomy being cared for outside surgical and anaesthetic specialties.

Information about this issue was fed back to the NHS via the NPSA’s Patient Safety Bulletin in July 2005.26

Using routine data sources — hospital data

The US Agency for Healthcare Research and Quality has developed patient safety indicators that can be derived from routinely collected hospital administrative data.27 The NPSA is working with the Healthcare Commission to adapt and validate these US patient safety indicators for use with UK Hospital Episode Statistics (HES).28

The HES are derived from routine administrative data provided by all NHS hospitals; they describe episodes of inpatient care, including patient characteristics, diagnoses, procedures, specialty and length of stay. HES records are also linked to mortality data, so that mortality within and after hospital episodes can be included in the analysis. However, HES data use a different coding scheme for diagnoses and procedures than that defined by the US Agency, and differences in clinical practice between the US and UK mean that the indicators need careful validation.
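To show in outline what deriving such an indicator from routine administrative data involves, the Python sketch below computes a crude complication rate from episode-level records, using a small translation table between an assumed source coding scheme and local codes. The field names, codes, indicator definition and episodes are hypothetical assumptions; they are not the AHRQ patient safety indicator specifications or the HES data model, both of which require the careful adaptation and validation described above.

# Illustrative sketch only: deriving a crude patient safety indicator rate from
# episode-level administrative records, with a translation table between an
# assumed source coding scheme and the local one. The field names, codes and
# indicator definition are hypothetical; they are not the AHRQ PSI or HES
# specifications.

# Hypothetical mapping from source-scheme complication codes to local codes.
CODE_MAP = {"998.2": "T81.2"}          # eg, accidental puncture or laceration
FLAGGED_LOCAL_CODES = set(CODE_MAP.values())

# Hypothetical episode records: secondary diagnoses, admission type, outcome.
episodes = [
    {"id": 1, "secondary_dx": ["T81.2"], "elective_surgery": True,  "died": False},
    {"id": 2, "secondary_dx": [],        "elective_surgery": True,  "died": False},
    {"id": 3, "secondary_dx": ["I21.0"], "elective_surgery": False, "died": True},
    {"id": 4, "secondary_dx": [],        "elective_surgery": True,  "died": False},
]

# Denominator: episodes "at risk" under the assumed definition (elective surgery).
at_risk = [e for e in episodes if e["elective_surgery"]]
# Numerator: at-risk episodes coded with a flagged complication.
flagged = [e for e in at_risk if FLAGGED_LOCAL_CODES & set(e["secondary_dx"])]

rate = len(flagged) / len(at_risk) if at_risk else 0.0
print(f"{len(flagged)} flagged of {len(at_risk)} at-risk episodes "
      f"(crude rate {100 * rate:.1f} per 100 episodes)")

Even in this toy form, the example makes clear why translation between coding schemes and agreement on the at-risk denominator dominate the validation effort.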

  • Sarah Scobie1
  • Richard Thomson1,2
  • John J McNeil3
  • Paddy A Phillips4

  • 1 National Patient Safety Agency, London, UK.
  • 2 School of Population and Health Sciences, Newcastle-upon-Tyne Medical School, Newcastle, UK.
  • 3 Department of Epidemiology and Preventive Medicine, Central and Eastern Clinical School, Monash University, Alfred Hospital, Melbourne, VIC.
  • 4 Department of Medicine, Flinders University, Adelaide, SA.



  • 1. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalised patients. N Engl J Med 1991; 324: 370-376.
  • 2. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
  • 3. Hannan EL, Kilburn H, O’Donnell JF, et al. Adult open heart surgery in New York State: an analysis of risk factors and hospital mortality rates. JAMA 1990; 264: 2768-2774.
  • 4. Brand C, Elkadi, Tropea J. Measurement for Improvement Toolkit. Canberra: Australian Council for Safety and Quality in Healthcare, 2005.
  • 5. National Health Performance Committee. National report on health sector performance indicators 2003. A report to the Australian Health Ministers’ Conference, November 2004. Canberra: AIHW, 2004. (AIHW Catalogue No. HWI 786.)
  • 6. Lally J, Thomson RG. Is indicator use for quality improvement and performance measurement compatible? In: Davies HTO, Tavakoli M, Malek M, Neilson AR, editors. Managing quality: strategic issues in health care management. Aldershot, UK: Ashgate Publishing, 1999: 199–214.
  • 7. Elzinga R, Ben-Tovim D, Phillips PA. Charting the safety and quality of health care in Australia: steps towards systematic health care safety and quality measurement and reporting in Australia. A report commissioned by the Australian Council on Safety and Quality in Health Care. Canberra: ACSQHC, 2003.
  • 8. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet 2004; 363: 1147-1154.
  • 9. Iezzoni LI. The risks of risk adjustment. JAMA 1997; 278: 1600-1607.
  • 10. Burns CM, Bennett CJ, Myers CT, Ward M. The use of cusum analysis in the early detection and management of hospital bed occupancy crises. Med J Aust 2005; 183: 291-294.
  • 11. Aylin P, Alves B, Best N, et al. Comparison of UK paediatric cardiac surgical performance by analysis of routinely collected data 1984-96: was Bristol an outlier? Lancet 2001; 358: 181-187.
  • 12. Mohammed MA, Cheng K, Rouse A, Marshall T. Bristol, Shipman and clinical governance: Shewhart’s forgotten lessons. Lancet 2001; 357: 463-467.
  • 13. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348: 2635-2645.
  • 14. Australian Council for Safety and Quality in Health Care. Charting the safety and quality of health care in Australia. Canberra: ACSQHC, 2004.
  • 15. National Institute for Clinical Studies. Evidence-practice gaps report. Vol 1. Melbourne: NICS, 2003.
  • 16. National Institute for Clinical Studies. Evidence-practice gaps report. Vol 2. Melbourne: NICS, 2005.
  • 17. Scott IA, Darwin IC, Harvey KH, et al. Multisite, quality-improvement collaboration to optimise cardiac care in Queensland public hospitals. Med J Aust 2004; 180: 392-397.
  • 18. Ferry CT, Fitzpatrick MA, Long PW, et al. Towards a safer culture: clinical pathways in acute coronary syndromes and stroke. Med J Aust 2004; 180 (10 Suppl): S92-S96.
  • 19. Semmens JB, Aitken RJ, Sanfilippo FM, et al. The Western Australian Audit of Surgical Mortality: advancing surgical accountability. Med J Aust 2005; 183: 504-508.
  • 20. Wilson RM, Van Der Weyden MB. The safety of Australian healthcare: 10 years after QAHCS. We need a patient safety initiative that captures the imagination of politicians, professionals and the public [editorial]. Med J Aust 2005; 182: 260-261.
  • 21. UK Department of Health. An organisation with a memory. London: The Stationery Office, 2000. Available at: www.dh.gov.uk/PublicationsAndStatistics/Publications/PublicationsPolicyAndGuidance/PublicationsPAmpGBrowsableDocument/fs/en?CONTENT_ID=4098184&chk=u1I0ex (accessed Feb 2006).
  • 22. UK Department of Health. Building a safer NHS for patients. London: Department of Health, 2001. Available at: www.dh.gov.uk/assetRoot/04/05/80/94/04058094.pdf (accessed Feb 2006).
  • 23. O’Neil AC, Petersen LA, Cook EF, et al. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med 1993; 119: 370–376.
  • 24. Scobie S, Thomson R. Building a memory: preventing harm, reducing risks and improving patient safety. The first report of the National Reporting and Learning System and the Patient Safety Observatory. London: National Patient Safety Agency, 2005. Available at: http://www.npsa.nhs.uk/site/media/documents/1280_PSO_Report.pdf (accessed Mar 2006).
  • 25. National Patient Safety Agency. Patient Safety Observatory. Available at: http://www.saferhealthcare.org.uk/IHI/ProgrammesAndEvents/Observatory/ (accessed Mar 2006).
  • 26. Russell J. Management of patients with a tracheostomy. Patient Safety Bull 2005; 1: 4. Available at: http://www.npsa.nhs.uk/site/media/documents/1257_PSO_Bulletin.pdf (accessed Feb 2006).
  • 27. Agency for Healthcare Research and Quality. Patient safety indicators download. AHRQ quality indicators. Rockville, Md, USA: AHRQ, 2006. Available at: http://www.qualityindicators.ahrq.gov/psi_download.htm (accessed Feb 2006).
  • 28. National Health Service Health and Social Care Information Centre. HESonline home page. Available at: http://www.hesonline.nhs.uk (accessed Feb 2006).
  • 29. Association of Public Health Observatories [website]. Available at: http://www.apho.org.uk (accessed Feb 2006).
  • 30. Fenn P, Gray A, Rivero-Arias O, et al. The epidemiology of error: an analysis of databases of clinical negligence litigation. Manchester, UK: University of Manchester, 2004.
