
Public reporting of hospital outcomes based on administrative data: risks and opportunities

Ian A Scott and Michael Ward
Med J Aust 2006; 184 (11): 571-575. doi: 10.5694/j.1326-5377.2006.tb00383.x
Published online: 5 June 2006

In response to repeated disclosures of poor quality care in hospitals throughout the Western world, calls have been made for regular public reporting of quality and safety indicators from individual hospitals.1 In the United Kingdom, mortality data for selected surgical procedures from each hospital trust are posted on the Internet together with overall quality ratings for each hospital.2 In the United States, mortality and readmission data for cardiovascular diagnoses and surgical procedures for individual hospitals are issued as annual reports freely available to the public.3,4

The recent Bundaberg Hospital inquiry5 unearthed quality reports produced in 2004 for every Queensland public hospital (Box 1),6 but these had been withheld from public release. This prompted the Forster inquiry to recommend that such reports be made public on an annual basis (Recommendation 13.3).7 The 2004 reports were subsequently released by the Queensland Government and summary results were published in Brisbane newspapers in September 2005.

The current Queensland reports (Box 1) use routinely collected administrative data from each of 78 hospitals to report risk-adjusted in-hospital mortality rates, length of stay (LOS) and complication rates for all admissions over a 12-month period for each of 12 common medical and surgical conditions. Ratings of patient satisfaction with care are also provided. By comparing outcome data for each hospital with the average for all hospitals combined, and the average for other hospitals that belong to the same “peer group”, an attempt is made to identify “outlier” hospitals that demonstrate significantly better or worse outcomes. The intention of profiling hospitals in this way is to help patients and the public to select better performing hospitals, while affording an opportunity for managerial and clinical staff of poorly performing hospitals to investigate and optimise care processes using insights gained from better performing hospitals.

While intuitively appealing, such public reporting has been questioned with regard to its scientific rigour, and there remains uncertainty about its use by, and impact on, patients and providers.8,9 In the promotion of public reporting, several assumptions are being made that deserve empirical testing. The aim of our article is to raise awareness of the potential pitfalls of public reporting and to suggest ways in which public reports focusing on outcomes derived from administrative data can be made more beneficial, and less harmful, in improving hospital practice.

Are hospital reports valid?

Several considerations are pertinent.

How accurate are administrative data? The Queensland hospital reports rely on routinely coded hospital discharge data pertaining to specific principal diagnoses, comorbidities and complications, defined by International statistical classification of diseases and related health problems (ICD) codes.10 For quality assessment, such data are often inaccurate, incomplete, or provide insufficient clinical detail. While death, LOS and readmission are reliable descriptors, accuracy of diagnosis coding is variable. For example, Australian studies show that about 20% of cases of acute myocardial infarction (AMI) as primary diagnosis are not detected,11 and coding of comorbidities is associated with miss rates ranging from 11% for diabetes to as high as 100% for dementia.12 In-hospital complications may be difficult to distinguish from presenting diagnoses, may be variably documented according to specialty, or may be coded inappropriately.13

Systematic bias in coding error at the level of individual hospitals can invalidate the process of risk-adjusting outcome data (see below), which is necessary for fair comparisons between hospitals. The extent to which coding error varies between hospitals has not been formally assessed.

Have analyses controlled for differences in casemix and other relevant factors? Comparisons must be controlled for differences between hospitals in age, sex, and risk profiles of patients relevant to the outcomes of interest. Risk-adjusted mortality rates can differ markedly from crude rates. For example, in AMI, crude in-hospital mortality rates in the 2004 Queensland reports were 11.2% and 15.2% for tertiary and small hospitals, respectively, whereas the respective risk-adjusted rates were 8.2% and 20.5%.6 Risk adjustment models applied to administrative data must classify patient risk with a level of accuracy close to that achieved by validated prediction rules based on clinical data. While this has been confirmed for AMI care,14 other disease conditions require further study, with many clinicians15 and statisticians16 claiming such models inadequately account for variation in patient characteristics.
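To make the arithmetic of risk adjustment concrete, the following is a minimal sketch of indirect standardisation, assuming a validated patient-level risk model is already available: expected deaths are summed from the model's predicted probabilities, and the observed-to-expected ratio is rescaled by the state-wide rate. The function, the toy hospitals and the numbers are illustrative assumptions, not the models or figures used in the Queensland reports.

```python
def risk_adjusted_rate(predicted_risks, observed_deaths, statewide_rate):
    """Indirectly standardised mortality rate for one hospital.

    predicted_risks : per-patient death probabilities from some
                      validated risk model (assumed to be given here)
    observed_deaths : number of in-hospital deaths actually recorded
    statewide_rate  : crude mortality rate across all hospitals combined
    """
    expected_deaths = sum(predicted_risks)      # E
    smr = observed_deaths / expected_deaths     # observed / expected
    return smr * statewide_rate                 # risk-adjusted rate


# Toy example: a hospital treating mostly high-risk patients can show a
# high crude rate but a lower risk-adjusted rate, and vice versa.
high_risk_hospital = [0.25] * 100   # predicted risks for 100 admissions
low_risk_hospital = [0.05] * 100

print(risk_adjusted_rate(high_risk_hospital, observed_deaths=20,
                         statewide_rate=0.10))  # crude 0.20, adjusted 0.08
print(risk_adjusted_rate(low_risk_hospital, observed_deaths=8,
                         statewide_rate=0.10))  # crude 0.08, adjusted 0.16
```

The toy example mirrors the reordering seen in the AMI figures above: the hospital with the higher crude rate ends up with the lower adjusted rate once its sicker casemix is taken into account.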

In addition, factors over which hospitals have no control may influence outcomes, as noted by higher readmission rates in geographic areas where appropriate levels of medical, nursing or social care are not available after discharge.17

Have the effects of random error been minimised? The smaller the sample size, or the more times data are analysed, the more likely it is that extreme results will occur simply by the play of chance. Data from small hospitals are more likely to yield spurious “outlier” results that categorise them as high- or low-performing hospitals when process-of-care reviews would, in reality, show the opposite to be true. In one analysis of reports similar to the Queensland reports, fewer than 12% of truly poor quality hospitals emerged as high-mortality “outliers”, while over 60% of seemingly poor quality “outlier” hospitals were in fact good quality institutions.18

The Queensland hospital reports attempt to deal with this problem by using more stringent tests of statistical significance in the form of 99.9% confidence intervals. Other statistical tools, such as “shrinkage” techniques,19 which minimise chance results, have been employed. Another option is to complement outcome reports based on relatively few events with “snapshot” audits of the care of samples of randomly selected or consecutive patients. Processes of care, compared with outcomes, are more frequent, more sensitive in detecting suboptimal quality, more easily controlled by hospitals, and more directly linked to remedial action.20
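By way of illustration only, the sketch below applies both safeguards: an outlier flag based on a 99.9% confidence interval around the peer-group rate, and a simple empirical Bayes “shrinkage” of each hospital’s rate towards the overall mean, which pulls small hospitals most strongly. The z value, the normal approximation and the method-of-moments variance estimate are generic textbook choices, assumed here rather than drawn from the Queensland technical supplement.

```python
import math

Z_999 = 3.2905  # two-sided 99.9% quantile of the standard normal distribution


def is_outlier(deaths, admissions, peer_rate):
    """Flag a hospital whose mortality rate differs from the peer-group
    rate at the 99.9% level (normal approximation to the binomial)."""
    rate = deaths / admissions
    se = math.sqrt(peer_rate * (1 - peer_rate) / admissions)
    return abs(rate - peer_rate) > Z_999 * se


def shrunken_rates(hospitals):
    """Empirical Bayes shrinkage of each hospital's rate towards the
    overall mean.  hospitals = list of (deaths, admissions) tuples."""
    rates = [d / n for d, n in hospitals]
    ns = [n for _, n in hospitals]
    overall = sum(d for d, _ in hospitals) / sum(ns)
    # within-hospital (sampling) variances under the pooled rate
    s2 = [overall * (1 - overall) / n for n in ns]
    # crude method-of-moments estimate of between-hospital variance
    tau2 = max(0.0, sum((r - overall) ** 2 for r in rates) / len(rates)
               - sum(s2) / len(s2))
    # small hospitals (large sampling variance) are pulled towards the mean
    return [overall + (tau2 / (tau2 + s)) * (r - overall)
            for r, s in zip(rates, s2)]
```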

Can true variation in outcome be reliably detected for hospitals that are similar? As hospitals differ in service capacity and ability to care for different types of patients, it can be argued that comparisons should be performed within peer groups: tertiary, community or district hospitals. However, as the number of hospitals and total number of events are reduced, the magnitude of detectable “special-cause” variation between hospitals (ie, variation due to specific causal factors, such as quality of care, as opposed to random variation) decreases. Of 12 diagnosis-specific outcome indicators measured across 180 Queensland hospitals over a 12-month period, significant special-cause variation within peer hospitals was detected for only two indicators (17%).21 One solution is to restrict analysis to patients with lower baseline risk, for whom special-cause variation between hospitals is of greater magnitude.22

Do differences in outcomes based on administrative data reflect real differences in quality of care? It is assumed that worse outcomes in specific hospitals will correlate with less-than-optimal care as determined by the gold standard of reviewing individual patient records and applying explicit quality criteria. Using this method, up to 90% of variance in risk-adjusted mortality rates for AMI in NSW hospitals was attributed to variance in quality of care, and other studies report similar findings for heart failure, pneumonia, stroke and hip fracture.12 However, other investigations demonstrate little or no correlation between specific outcome and process-of-care measures, despite rigorous risk adjustment.23

Applying statistical process control techniques, such as risk-adjusted CUSUM analysis, to routine hospital outcome data may detect trends towards excess adverse outcomes at an earlier stage than periodic cross-sectional comparisons.19
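A minimal sketch of one widely used variant, the resetting risk-adjusted CUSUM, is given below, in the spirit of the sequential monitoring methods of Spiegelhalter and colleagues:19 each admission contributes a log-likelihood-ratio weight comparing death risk as predicted by the model against a doubling of the odds of death, and the chart signals when the cumulative sum crosses a control limit. The doubling of odds, the threshold value and the data structure are illustrative assumptions, not the exact chart used in that work.

```python
import math


def risk_adjusted_cusum(patients, odds_ratio=2.0, threshold=4.5):
    """Sequentially monitor deaths against model-predicted risks.

    patients   : iterable of (died, predicted_risk) pairs in admission order
    odds_ratio : alternative hypothesis -- odds of death multiplied by this
    threshold  : control limit; crossing it signals possible excess mortality
    """
    s = 0.0
    for t, (died, p) in enumerate(patients, start=1):
        denom = 1 - p + odds_ratio * p      # normalising term for the LLR
        if died:
            w = math.log(odds_ratio / denom)
        else:
            w = math.log(1 / denom)
        s = max(0.0, s + w)                 # reset at zero, accumulate excess
        if s >= threshold:
            return t                        # admission at which the chart signals
    return None                             # no signal over the series
```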

Do outcome data for selected diagnoses predict overall quality of care? Hospitals may perform well on some indicators and poorly on others. Overall quality of care within an institution cannot be deduced from results of a selected sample of indicators, despite the inclusion of as many as 22 different clinical conditions.24

Can the lay public access, interpret, and act upon hospital reports?

Expecting the lay public to use public reports when choosing which hospital to attend assumes that such reports exist, and that the public is aware of them and can access, understand, accept and act on them.

In the US, where public report cards for hospitals have existed for almost two decades, surveys reveal that, as of 2004, only a third of the public had sighted them, and only 19% reported making use of such information, with most distrusting it or viewing it as unhelpful.25 Common sources of misunderstanding, more prevalent among patients of lower socioeconomic status, were language and terms used, the meaning of an indicator in relation to quality of care, and whether low or high rates of an indicator implied good performance.26

Interestingly, limited data suggest many doctors are also distrustful of hospital reports when making patient referrals because of concerns about the adequacy of adjustment methods and the vulnerability of ratings to manipulation (“gaming” [see below]).27

Moreover, public reports will only allow patients and clinicians to exercise discretion in their choice of hospital if there is more than one hospital in a given locale, and will only apply to care of elective conditions, as the nearest hospital will usually be preferred for urgent care of acute, life-threatening conditions. The lay public may also see hospital comparisons among peer groups as being unhelpful, given their desire to know which hospital, among several of mixed type (tertiary and community) in their area, provides the best care.

Is there a potential downside of public reporting of hospital performance?

Several dysfunctional consequences can arise from public reporting of performance data.

Gaming. Hospitals can be made to look better than they really are by, for example, up-coding the principal diagnosis (to a more severe condition or procedure) and coding every possible comorbidity on every patient to lower risk-adjusted death rates.

Early discharge. Discharging patients earlier, and possibly prematurely, reduces LOS and shifts into community settings a proportion of deaths and complications that would otherwise have occurred, and been measured, as in-hospital events.32

Avoidance of high-risk patients. Investigations in several US states suggest that lower cardiac surgery mortality did not result from better care induced by reports, but rather from increased refusal by operators and hospitals to operate on sicker patients who could have benefited.33

Outsourcing of high-risk patients. Hospitals may transfer more of their sickest patients to other institutions, as suggested by a 30% increase in transfer of patients, mostly high risk, to other states following release of hospital reports for cardiac surgery in New York state.34

Adoption of defensive medicine. Hospitals may be encouraged to overtreat all patients in pursuit of better outcomes, leading instead to more unnecessary interventions and unintended adverse events.

Withdrawal or disengagement. Unless public reporting is centrally mandated, hospitals reporting worse outcomes may simply cease reporting.35 If they do continue reporting, but the staff within these institutions feel, or are, powerless to influence poorly performing indicators, such measurement may lead to demoralisation and disengagement.36

Tunnel vision. Remedial efforts may concentrate on the services being measured, with neglect of equally important aspects of care that are not being measured.37

Could internal reports do just as well in improving quality of care?

Several studies show that internal feedback of quality indicators, combined with other quality improvement interventions, raises standards of care, particularly within multi-hospital collaborations.38,39 While keeping reporting internal may engender conspiracy theories and lower public trust, professional peer comparisons probably drive most service improvement. This can be achieved just as well with internal as with public reporting, with less propensity to trigger dysfunctional responses.

Proceed with caution

There is limited research into the methods and effects of public disclosure of performance data,9 with claims of favourable effects on cardiac surgery mortality in several US states being the most frequently cited evidence supporting public reporting.30 On balance, we believe that public reporting is worthwhile, but needs to be performed in a circumspect manner based on our summary findings (Box 2). If public reports of hospital outcomes based on administrative data are to become permanent, we recommend mandatory reporting by all hospitals according to the standards listed in Box 3.

In particular, design formats and information presentation must ensure that public reports can be interpreted and used in ways that do not create anxiety or false notions about a hospital’s capacity to provide suitable care.40 Incentives and safeguards must also exist to guarantee appropriate care and counter any tendency for hospitals to discriminate against high-risk patients. The methods for producing public reports should be transparent and scientifically rigorous, the reporting and dissemination process should actively involve lay groups and the media, and the whole activity must be free of political and bureaucratic interference.

A Queensland experiment

In February 2006, the Queensland Minister for Health established an independent Health Public Reporting Advisory Panel comprising clinical, academic, community and media representatives. This panel will advise the Minister and the public on the significance of key performance measurements of the public health care system by attempting to answer three sets of questions relating to such data.

In light of the political and professional sensitivities surrounding public reporting of performance data, it is imperative, if this experiment is to succeed, that comparative data are presented in their proper context and are used to define system problems for which remedial options are clearly articulated.36 In this way, thought and action are directed at the problem, not at an indicator; feedback is given and local targets agreed; action plans are formulated; and support for change is provided, together with additional resources where required. If this course of action is followed, public reports are likely to attract the trust of both health care staff and the public and to demonstrate that performance data collected from administrative sources are credible and transparent and are being used to improve health care outcomes.

Box 1 Format of Queensland hospital measured quality reports6

Reports are produced for each of 78 public hospitals in Queensland. A core set of indicators, derived from administrative data routinely collected in various corporate hospital databases or from targeted questionnaire surveys, is reported. The indicators fall into four groups:

Clinical quality and efficiency of care indicators

Outcome measures that include in-hospital mortality rates, length of stay and complication rates are reported for 12 common medical and surgical conditions listed as principal diagnosis: acute myocardial infarction, heart failure, stroke, asthma, pneumonia, diabetic foot ulcers, fractured neck of femur, elective knee and hip replacements, hysterectomy, colorectal cancer and laparoscopic cholecystectomy. The primary data source is the Queensland Hospitals Admitted Patient Data Collection, which uses the ICD-10-AM (Australian Modification) classification system.10

For each indicator, hospitals are placed into peer groupings of principal referral and specialised hospitals (essentially tertiary institutions), large (> 200 bed non-tertiary), medium (50–200 beds) and district (< 50 beds) hospitals. Each indicator result is risk-adjusted using adjustment models derived from the entire state cohort. A technical supplement details indicator definitions, classification schemes, data sources and statistical methods.

Outcome indicators for each condition (expressed as a percentage [for mortality or complication rate] or mean [for length of stay]) are reported annually and compared with peer group and state averages for the most recent year. Each indicator is flagged if it deviates from the peer group average at the 90%–99% or 99.9% level of statistical significance.

Box 3 Desirable attributes of public performance reports

Each attribute is paired with methods for enhancing it.

Validity and reliability of data

  • Accurate and complete source data: standardised coding practices; regular coding audits; coding rules that flag complications and adverse events; hospital databases linked with death registries; state-wide unique patient identifier
  • Risk-adjusted comparisons: validated risk-adjustment models
  • Minimal random error: appropriate statistical processing
  • Peer group comparisons: accurate peer group categorisation
  • Timeliness: fixed schedule of annual or biannual reporting

Use by the public

  • Awareness raising: procedures for publicising reports in the media
  • Accessibility: posting on websites; newspaper articles; community broadcasts on radio and television
  • Interpretability: clear design format and information presentation; explanatory notes for statistical methods; graphic displays with interpretive comments; incorporation of contextual factors such as bed, workforce and resource availability and access to community services; proposed remedial strategies

Use by clinicians

  • Awareness raising: procedures for publicising reports on health department and hospital websites and in the medical press, clinical journals and CME/QA programs
  • Accessibility: distribution of hospital-specific reports to all levels of staff, to hospital councils and boards, and to local divisions of general practice
  • Interpretability: forums in which, before public release, clinicians can discuss report results with report authors and hospital management, receive full explanation of analytical methods used, and correct factual errors and offer interpretations for inclusion in the reports

Use by managers

  • Measurement for improvement: use of reports to improve quality rather than to judge competence of individual clinicians or clinical units
  • Proactive remedial responses to quality issues identified: increased resources and service capacity, or revised service roles and responsibilities, where deficiencies in these areas are found to be direct causes of poor-quality care
  • Safeguards against dysfunctional responses to public reports: inclusion of appropriateness-of-care measures (process indicators based on clinical eligibility criteria) as performance measures; application of sanctions and penalties for deliberate data manipulation or refusal to collect data; independent review of analytical methods; distribution of reports to external regulatory agencies

CME = continuing medical education. QA = quality assurance.

  • Ian A Scott1
  • Michael Ward2

  • 1 Department of Internal Medicine, Princess Alexandra Hospital, Brisbane, QLD.
  • 2 Clinical Practice Improvement Centre, Royal Brisbane and Women’s Hospital, Brisbane, QLD.


Correspondence: ian_scott@health.qld.gov.au

Competing interests:

The authors are members of the Health Public Reporting Advisory Panel of Queensland Health. The views expressed in this article are those of the authors and should not be viewed as endorsed by, or official policy of, Queensland Health or its Health Public Reporting Advisory Panel.

  • 1. Berwick DM. Public performance reports and the will for change. JAMA 2002; 288: 1523-1524.
  • 2. UK Healthcare Commission. 2005 performance ratings. Available at: http://ratings2005.healthcarecommission.org.uk (accessed Apr 2006).
  • 3. Rainwater JA, Romano PS, Antonius DM. The California Hospital Outcomes Project: how useful is California’s report card for quality improvement? Jt Comm J Qual Improv 1998; 24: 31-39.
  • 4. Rosenthal GE, Hammar PJ, Way LE, et al. Using hospital performance data in quality improvement: the Cleveland health quality choice experience. Jt Comm J Qual Improv 1998; 24: 347-360.
  • 5. Van Der Weyden MB. The Bundaberg Hospital scandal: the need for reform in Queensland and beyond. Med J Aust 2005; 183: 284-285.
  • 6. Queensland Health. Measured quality hospital reports 2004. Available at: http://www.health.qld.gov.au/quality/mq_reports.asp (accessed Nov 2005).
  • 7. Queensland Health. Queensland health systems review. Final report. September 2005. Available at: http://www.health.qld.gov.au/health_sys_review/final/default.asp (accessed May 2006).
  • 8. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA 2000; 283: 1866-1874.
  • 9. Marshall MN, Davies HTO. Public release of information on quality of care: how are the health service and the public expected to respond? J Health Serv Res Policy 2001; 6: 158-162.
  • 10. National Centre for Classification in Health. International statistical classification of diseases and related health problems, 10th revision, Australian modification (ICD-10-AM). Sydney: National Centre for Classification in Health, University of Sydney, 1998.
  • 11. Vu HD, Heller RF, Lim LL, et al. Mortality after acute myocardial infarction is lower in metropolitan regions than in non-metropolitan regions. J Epidemiol Community Health 2000; 54: 590-595.
  • 12. Powell H, Lim LL, Heller RF. Accuracy of administrative data to assess comorbidity in patients with heart disease: an Australian perspective. J Clin Epidemiol 2001; 54: 687-693.
  • 13. McCarthy EP, Iezzoni LI, Davis RB, et al. Does clinical evidence support ICD-9-CM diagnosis coding of complications? Med Care 2000; 38: 868-876.
  • 14. Normand SL, Morris CN, Fung KS, et al. Development and validation of a claims based index for adjusting for risk of mortality: the case of acute myocardial infarction. J Clin Epidemiol 1995; 48: 229-243.
  • 15. Narins CR, Dozier AM, Ling FS, Zareba W. The influence of public reporting of outcome data on medical decision-making by physicians. Arch Intern Med 2005; 165: 83-87.
  • 16. Iezzoni LI. The risks of risk adjustment. JAMA 1997; 278: 1600-1607.
  • 17. Giuffrida A, Gravelle H, Roland M. Measuring quality of care with routine data: avoiding confusion between performance indicators and health outcomes. BMJ 1999; 319: 94-98.
  • 18. Thomas JW, Hofer TP. Accuracy of risk-adjusted mortality rates as a measure of hospital quality of care. Med Care 1999; 37: 83-92.
  • 19. Spiegelhalter D, Grigg O, Kinsman R, Treasure T. Risk-adjusted sequential probability ratio tests: applications to Bristol, Shipman and adult cardiac surgery. Int J Qual Health Care 2003; 15: 7-13.
  • 20. Crombie IK, Davies HT. Beyond health outcomes: the advantages of measuring process. J Eval Clin Pract 1998; 4: 31-38.
  • 21. Scott I, Youlden D, Coory M. Are diagnosis specific outcome indicators based on administrative data useful in assessing quality of hospital care? Qual Saf Health Care 2004; 13: 32-39.
  • 22. Coory MD, Scott IA. Analysing low-risk patient populations allows better discrimination between high and low performing hospitals. A case study using in-hospital mortality from acute myocardial infarction. Brisbane: Queensland Health, 2006.
  • 23. Park RE, Brook RH, Kosecoff J, et al. Explaining variations in hospital death rates. Randomness, severity of illness, quality of care. JAMA 1990; 264: 484-490.
  • 24. Chassin MR, Park RE, Lohr KN, et al. Differences among hospitals in Medicare patient mortality. Health Serv Res 1989; 24: 1-8.
  • 25. Kaiser Family Foundation and Agency for Health Care Research and Quality. National survey on consumers’ expectations with patient safety and quality information. Washington, DC: Kaiser Family Foundation, 2004.
  • 26. Jewett JI, Hibbard JH. Comprehension of quality care indicators: differences among privately insured, publicly insured, and uninsured. Health Care Financ Rev 1996; 18: 75-94.
  • 27. Schneider EC, Epstein AM. Influence of cardiac surgery performance reports on referral practices and access to care: a survey of cardiovascular specialists. N Engl J Med 1996; 335: 251-256.
  • 28. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement? Health Aff (Millwood) 2003; 22: 84-94.
  • 29. Mannion R, Davies HTO. Report cards in health care: learning from the past; prospects for the future. J Eval Clin Pract 2002; 8: 215-228.
  • 30. Green J, Wintfeld N. Report cards on cardiac surgeons: assessing New York State’s approach. N Engl J Med 1995; 332: 1229-1232.
  • 31. Rosenthal GE, Quinn L, Harper DL. Declines in hospital mortality associated with a regional initiative to measure hospital performance. Am J Med Qual 1997; 12: 103-112.
  • 32. Goldacre MJ, Griffith M, Gill L, et al. In-hospital deaths as fraction of all deaths within 30 days of hospital admission for surgery: analysis of routine statistics. BMJ 2002; 324: 1069-1070.
  • 33. Dranove D, Kessler D, McClellan M, Satterthwaite M. Is more information better? The effects of “report cards” on health care providers. J Polit Econ 2003; 111: 555-588.
  • 34. Omoigui NA, Miller DP, Brown KJ, et al. Outmigration for coronary artery bypass surgery in an era of public dissemination of clinical outcomes. Circulation 1996; 93: 27-33.
  • 35. McCormick D, Himmelstein DU, Woolhandler S, et al. Relationship between low quality-of-care scores and HMOs’ subsequent public disclosure of quality-of-care scores. JAMA 2002; 288: 1484-1490.
  • 36. Pringle M, Wilson T, Grol R. Measuring “goodness” in individuals and healthcare systems. BMJ 2002; 325: 704-707.
  • 37. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med 1999; 341: 1147-1150.
  • 38. Marciniak TA, Ellerbeck EF, Radford MJ, et al. Improving the quality of Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. JAMA 1998; 279: 1351-1357.
  • 39. Scott IA, Denaro CP, Bennett CM, et al. Achieving better in-hospital and after-hospital care of patients with acute cardiac disease. Med J Aust 2004; 180 (10 Suppl): S83-S88.
  • 40. Hibbard JH, Slovic P, Peters E, Finucane ML. Strategies for reporting health plan performance information to consumers: evidence from controlled studies. Health Serv Res 2002; 37: 291-313.
