Public reporting of hospital outcomes: a challenging road ahead

Martin P Gallagher and Harlan M Krumholz
Med J Aust 2011; 194 (12): 658-660. || doi: 10.5694/j.1326-5377.2011.tb03156.x
Published online: 20 June 2011


The outcomes for patients admitted to Australian hospitals remain largely unknown, even to those working within the sector, notwithstanding the occasional medical misadventure that reaches the mainstream media. The Australian Government, along with all the states, has been moving toward addressing this gap in knowledge with recent policy documents and reports.1,2 This intent has been made concrete with the recent Council of Australian Governments agreement to establish a national performance authority that will be charged with publicly reporting hospital and community health outcomes. In a challenging area, where Australia has lagged behind many other countries, learning from the international experience may smooth the development of an Australian reporting program.

Motivations for publicly reporting hospital outcomes

The drive for greater public reporting in health care stems from three major perceived deficiencies in existing systems: failures of accountability, failures to improve quality, and a lack of information to inform patient choice.8

The need for greater accountability in health systems has been highlighted by numerous government reports, both in Australia and overseas,6,9,10 examining failures of safety and quality. Marshall has perhaps best summarised the public’s desire for such information to be available: “they are dissatisfied with what they perceive as the veil of secrecy and professional protectionism currently seen in health care”.11 Recent comments that “the mice [are] in charge of the cheese” by the father of a young golfer who died unnecessarily in a Sydney hospital12 would tend to support this view. From a public policy standpoint, public reporting of hospital outcomes is an overdue step in enhancing accountability.

Public reporting as a tool for improving the quality of health care has some influential advocates, including the National Committee for Quality Assurance and the Institute for Healthcare Improvement13 in the US, and the NHS14 and King’s Fund15 in the UK. All of these organisations appear to be suggesting that reporting such outcomes is necessary, although not sufficient, for improvement in health quality. However, the medical evidence to support public reporting as a means of improving outcomes is not robust. A systematic review published in 200816 concluded that “Evidence is scant” and “Rigorous evaluation of many major public reporting systems is lacking”. A recently reported trial17 that examined public feedback of cardiac indicators (using measures of both processes and outcomes) showed no difference in the primary end point of a range of process measures. There were, however, statistically significant reductions in some secondary mortality outcomes for myocardial infarction in the publicly reported hospitals. Given the difficulties and expense of such studies, and the diversity of global health systems, it would seem unlikely that compelling scientific literature on public reporting will arise in the near future.

The third driver of public reporting is that of providing information to allow informed choice by health consumers. This has been most prominent as a justification for the early public reporting in the US18 and remains important in a country where so much health care is purchased by employers. However, the evidence suggests that consumers who use this information to guide health purchases are in a very small minority,19 and early public reporting in the US has been focused on acute conditions where choice is likely to be less valuable to consumers. A requirement for a nascent Australian program to primarily guide consumers in their hospital choices would make the task significantly more demanding.

Our key recommendations for public reporting in Australia are set out in the Box.

Challenges in public reporting

Although reporting of hospital outcomes may appear a simple accounting exercise, there are significant issues concerning the data and their processing, their utility to the various stakeholders, and the possible effects of such reports.

All public reporting of hospital outcomes uses the World Health Organization’s International Classification of Diseases (ICD) system. All admissions to Australian public hospitals are coded, including the comorbidities that can be used in the standardisation of risk across different hospital populations. Although precision and consistency in coding will be a prerequisite for any public reporting system, the coding data are the best available data barring a significant investment in manual abstraction of medical records or rapid adoption of electronic health records.

Perhaps the major concern about ICD-based models for reporting outcomes is that the coding data lack the clinical detail of medical records and will give rise to inappropriate classification of outcomes.20 This concern has driven the medical record validation of risk-adjustment models used in the US,21-23 where administrative models have been shown to be good surrogates for those derived from medical records.
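The logic of such an administrative risk-adjustment model can be sketched as follows: a logistic model fitted on coded comorbidities gives each patient an expected mortality risk, and a hospital's observed deaths are then compared with the sum of those expected risks. This is an illustration only, not any published model; the comorbidity names and coefficients below are invented for the sketch.

```python
import math

# Illustrative coefficients for a logistic risk model built on coded
# comorbidities (values invented for this sketch, not from any published model).
INTERCEPT = -3.0
COEFS = {"heart_failure": 0.9, "renal_disease": 0.7, "age_over_80": 1.1}

def expected_risk(comorbidities):
    """Predicted probability of death from the logistic model."""
    logit = INTERCEPT + sum(COEFS[c] for c in comorbidities)
    return 1.0 / (1.0 + math.exp(-logit))

def standardised_mortality_ratio(admissions):
    """Observed deaths divided by the sum of model-expected risks.
    `admissions` is a list of (died, comorbidity_list) tuples."""
    observed = sum(1 for died, _ in admissions if died)
    expected = sum(expected_risk(cs) for _, cs in admissions)
    return observed / expected

# A hospital treating sicker patients accrues a larger expected-death total,
# so it is not penalised for higher raw mortality:
hospital = [
    (True,  ["heart_failure", "age_over_80"]),
    (False, ["renal_disease"]),
    (False, []),
    (False, ["heart_failure"]),
]
print(round(standardised_mortality_ratio(hospital), 2))  # → 1.94
```

A ratio above 1 means more deaths were observed than the casemix-adjusted model expected; the validation work cited above asks whether ratios computed this way from coding data track those computed from full medical records.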

The statistical methods used in public reporting of health outcomes are often inscrutable to non-specialists, making the presentation and interpretation of the data more difficult. This complexity must be balanced against the vulnerability of simpler measures and methods to statistical criticism, as seen in a recent challenge to publicly reported hospital standardised mortality ratio (HSMR) measures in the UK.3,24 The US uses hierarchical logistic regression models, which account for the clustering of outcomes within facilities, whereas some in the UK use general logistic regression models and others recommend funnel plots and variations on statistical process control.5 The greater danger in any new national program would appear to be the inappropriate classification of outcomes as aberrant from the norm, rather than the classification of aberrant outcomes as “normal”. In this respect, hierarchical logistic regression has an important advantage over techniques that label a predetermined proportion of facilities as aberrant.25
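The practical effect of the hierarchical approach can be sketched with a simple empirical-Bayes shrinkage: each hospital's raw mortality rate is pulled toward the national mean in proportion to how little data it contributes. The weighting constant `k` below is arbitrary and chosen only to illustrate the behaviour; a real hierarchical model estimates the between-hospital variance from the data.

```python
def shrunken_rate(deaths, cases, national_rate, k=50):
    """Empirical-Bayes style shrinkage: a weighted average of the hospital's
    raw rate and the national rate. k (arbitrary here) plays the role of the
    between-hospital variance that a real hierarchical model would estimate."""
    weight = cases / (cases + k)          # small hospitals get a small weight
    raw = deaths / cases
    return weight * raw + (1 - weight) * national_rate

national = 0.10
# Both hospitals have a raw mortality rate of 0.20, twice the national mean.
# The 20-case hospital is pulled most of the way back toward the mean;
# the 2000-case hospital barely moves.
print(round(shrunken_rate(4, 20, national), 3))      # → 0.129
print(round(shrunken_rate(400, 2000, national), 3))  # → 0.198
```

This shrinkage is why hierarchical models rarely flag a low-volume hospital as aberrant on sparse evidence, and it underlies the correction toward the national mean for smaller hospitals discussed below.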

One of the challenges in Australia will be reporting outcomes in smaller hospitals, where small caseloads reduce the precision of outcome measures and limit the conclusions that can be drawn. In the US, outcome estimates for smaller hospitals are corrected toward the national mean, which reduces their likelihood of being classified outside national norms, and hospitals with very low caseloads (fewer than 25 cases per year) are not reported. Larger hospitals, because of the greater precision of their outcome estimates, undergo less correction of their raw values toward the national mean.

Many have raised concerns about the risk of “gaming” of any outcomes reporting system15,20 or even outright fraud.26 These issues have been best explored in the NHS, where such activity was not seen in emergency department initiatives to reduce waiting times.27 However, the criticism has mainly been directed at the excessive focus on administrative process measures and targets28 and the subsequent failure to develop a culture of improvement, rather than at public reporting per se.

A further potential adverse consequence of public reporting is the minimisation of risk in the pool of patients treated at a given facility. This has been highlighted principally in cardiology procedures: it was postulated as a factor in the reduction in mortality after coronary artery bypass grafting in New York in the early 1990s,29 as driving differences in percutaneous coronary intervention (PCI) casemix between Michigan and New York,30 and as explaining reductions in the proportion of high-risk patients treated with PCI in Massachusetts.31 Such risk avoidance may reduce patient harm, but it also has the potential to deny care to high-risk patients, who often derive the largest absolute benefit from an intervention.
Among the means of mitigating this risk are sound baseline measures of casemix, inclusion of measures of appropriateness of care, and development of risk-standardisation models that clinicians can trust to account for differences in baseline risk.
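Funnel plots make the small-hospital problem explicit: the control limits around the national rate widen sharply as caseload falls, so only very large departures from the norm are flagged at low volumes. A minimal sketch, using a normal approximation to the binomial for the limits (real implementations often use exact binomial limits):

```python
import math

def funnel_limits(national_rate, cases, z=3.0):
    """Approximate funnel-plot control limits around the national rate for a
    hospital with the given caseload (z=3 corresponds roughly to 99.8% limits).
    Uses the normal approximation to the binomial, clipped to [0, 1]."""
    se = math.sqrt(national_rate * (1 - national_rate) / cases)
    return (max(0.0, national_rate - z * se), min(1.0, national_rate + z * se))

national = 0.10
for n in (25, 100, 1000):
    lower, upper = funnel_limits(national, n)
    print(n, round(lower, 3), round(upper, 3))
```

At 25 cases a year, even a doubling of the national mortality rate sits inside the limits, which is consistent with the US practice of excluding very low-caseload hospitals from reporting altogether.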

A recent UK report15 highlights the differing audiences for public reports of health outcomes, including the public, the media, industry, researchers, clinicians and government. These groups each have differing expectations and abilities to interpret such data. Attempting to reach or influence all these audiences as a first step in public reporting would be overly ambitious, so targeting specific groups (such as those working in the clinical delivery system) who are most able to use the data to drive change is likely to be most fruitful.

One of the risks of public reporting is the possibility of raising fear in the community about the health care system and eroding trust in important public investments. A Royal Statistical Society report on performance monitoring in the public services32 outlines many elements of such performance management programs that enhance their success. Prominent among these is the need for a “wide-ranging educational effort about the role and interpretation” of data.

There is also a risk that the selection of publicly reported outcomes will draw investment and resources away from other service areas that are not being measured. Inevitably, health services will have local priorities to address, and any mandatory reporting requirements may not coincide with these exigencies. Good design of measures to minimise perverse behaviour (such as gaming and risk manipulation), along with a modest data collection burden supported by appropriate resourcing, should minimise this risk.

  • Martin P Gallagher1
  • Harlan M Krumholz2

  • 1 The George Institute for International Health, Sydney, NSW.
  • 2 Yale University, New Haven, Conn, USA.


Competing interests:

Martin Gallagher prepared this article during a Harkness Fellowship supported by the Commonwealth Fund. Harlan Krumholz’s institution has received funding from the US Centers for Medicare and Medicaid Services for outcome measure development. The views expressed here are those of the authors and should not be attributed to the Commonwealth Fund or its directors, officers or staff.

  • 1. Australian Government. A national health and hospitals network for Australia’s future. Canberra: Commonwealth of Australia, 2010.
  • 2. Australian Commission on Safety and Quality in Health Care 2009. Windows into safety and quality in health care. Sydney: ACSQHC, 2009.
  • 3. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009; 338: b780.
  • 4. Kmietowicz Z. Dr Foster patient safety ratings are flawed, confusing, and outdated, trusts say. BMJ 2009; 339: b5181.
  • 5. Duckett SJ, Coory M, Sketcher-Baker K. Identifying variations in quality of care in Queensland hospitals. Med J Aust 2007; 187: 571-575. <MJA full text>
  • 6. Queensland Government. Queensland health systems review. Final report, September 2005. (accessed Apr 2010).
  • 7. Ben-Tovim D, Woodman R, Harrison JE, et al. Measuring and reporting mortality in hospital patients. Canberra: Australian Institute of Health and Welfare, 2009. (AIHW Cat. No. HSE 69.)
  • 8. Morris K, Zelmer J. Public reporting of performance measures in health care. Ottawa: Canadian Policy Research Networks, 2005.
  • 9. Walker B. Interim report of the Special Commission of Inquiry into Campbelltown and Camden Hospitals. Sydney: NSW Government, 2004. (accessed Apr 2010).
  • 10. Learning from Bristol: the report of the public inquiry into children’s heart surgery at the Bristol Royal Infirmary 1984–1995. London: UK Parliament, 2001.
  • 11. Marshall MN. Accountability and quality improvement: the role of report cards. Qual Health Care 2001; 10: 67-68.
  • 12. Australian Associated Press. “The mice in charge of the cheese”: father slams inaction after girl’s hospital death. Sydney Morning Herald 2010; 4 Mar.
  • 13. Berwick DM. Public performance reports and the will for change. JAMA 2002; 288: 1523-1524.
  • 14. UK Department of Health. High quality care for all: NHS Next Stage Review Final Report by Lord Darzi. London: The Stationery Office, 2008.
  • 15. Raleigh V, Foot C. Getting the measure of quality. Opportunities and challenges. London: The King’s Fund, 2010.
  • 16. Fung CH, Lim Y-W, Mattke S, et al. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008; 148: 111-123.
  • 17. Tu JV, Donovan LR, Lee DS, et al. Effectiveness of public report cards for improving the quality of cardiac care. The EFFECT study: a randomized trial. JAMA 2009; 302: 2330-2337.
  • 18. Vladeck BC, Goodwin EJ, Myers LP, Sinisi M. Consumers and hospital use: the HCFA “death list”. Health Aff (Millwood) 1988; 7: 122-125.
  • 19. Kaiser Family Foundation Public Opinion and Survey Research Program. 2008 update on consumers’ views of patient safety and quality information. Menlo Park, Calif: KFF, 2008.
  • 20. Scott IA, Ward M. Public reporting of hospital outcomes based on administrative data: risks and opportunities. Med J Aust 2006; 184: 571-575. <MJA full text>
  • 21. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation 2006; 113: 1683-1692.
  • 22. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation 2006; 113: 1693-1701.
  • 23. Desai MM, Lin Z, Schreiner GC, et al. 2010 measures maintenance technical report: acute myocardial infarction, heart failure, and pneumonia 30-day risk-standardized mortality measures. New Haven: Centers for Medicare and Medicaid Services, 2009. (accessed Apr 2010).
  • 24. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that won’t go away. BMJ 2010; 340: c2016.
  • 25. Normand S-LT, Glickman ME, Gatsonis CA. Statistical methods for profiling providers of medical care: issues and applications. J Am Stat Assoc 1997; 92: 803-814.
  • 26. Nocera A. Performance-based hospital funding: a reform tool or an incentive for fraud? Med J Aust 2010; 192: 222-224. <MJA full text>
  • 27. Kelman S, Friedman JN. Performance improvement and performance dysfunction: an empirical examination of distortionary impacts of the emergency room wait-time target in the English National Health Service. J Public Adm Res Theory 2009; 19: 917-946.
  • 28. Gubb J. Have targets done more harm than good in the English NHS? Yes. BMJ 2009; 338: a3130.
  • 29. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary bypass surgery in an era of public dissemination of clinical outcomes. Circulation 1996; 93: 27-33.
  • 30. Moscucci M, Eagle KA, Share D, et al. Public reporting and case selection for percutaneous coronary interventions: an analysis from two large multicenter percutaneous coronary intervention databases. J Am Coll Cardiol 2005; 45: 1759-1765.
  • 31. Resnic FS, Welt FG. The public health hazards of risk avoidance associated with public reporting of risk-adjusted outcomes in coronary intervention. J Am Coll Cardiol 2009; 53: 825-830.
  • 32. Bird SM, Cox D, Farewell VT, et al. Performance indicators: good, bad and ugly. J Royal Stat Soc (A) 2005; 168: 1-27.

