Using hospital standardised mortality ratios to assess quality of care — proceed with extreme caution

Ian A Scott, Caroline A Brand, Grant E Phelps, Anna L Barker and Peter A Cameron
Med J Aust 2011; 194 (12): 645-648. doi: 10.5694/j.1326-5377.2011.tb03150.x
Published online: 20 June 2011

There is growing interest in assessing Australian hospital performance using routinely collected administrative data. The hospital standardised mortality ratio (HSMR) has emerged as a potentially universal system-level indicator for comparing mortality between hospitals both within and across jurisdictions (Box). It is currently reported in the United Kingdom, Sweden, the Netherlands, Canada, the United States and Australia,1,2 and is being used to gauge the success of several large-scale safety campaigns in both the US3 and Canada.4 In November 2009, the Australian Health Ministers endorsed the approach recommended by the Australian Commission on Safety and Quality in Health Care for the implementation and reporting of a core set of national safety and quality indicators that included the HSMR.5 Working groups are now studying its implementation.6 Researchers from Flinders University in Adelaide have recently argued the case for using the HSMR as a screening tool for safety and quality in Australian hospitals.7 In Canada8 and the UK,9 HSMRs for individual hospitals are already publicly reported. In Australia, there is similar political commitment to public reporting of comparative hospital quality and safety performance, through a national reform agenda10 and the MyHospitals website (http://www.myhospitals.gov.au).

In this article, we argue that the methodology underpinning the all-admission, hospital-wide HSMR is not sufficiently robust for it to be used as an external, publicly available, cross-sectional screening tool for identifying hospitals with above-average mortality, which is then attributed (rightly or wrongly) to lower-quality care. Instead, diagnosis-specific HSMRs may serve as more useful tools for monitoring changes over time in the mortality of high-volume, high-risk conditions within single institutions, as a marker of broad secular trends in care improvement and of the effects of local quality improvement initiatives.
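The Box is not reproduced here, but the HSMR is conventionally calculated as observed in-hospital deaths divided by the number of deaths expected from a risk-adjustment model, multiplied by 100. A minimal sketch of that calculation follows; the logistic coefficients and patient records are illustrative assumptions, not values from any published model.

```python
# Minimal sketch of the conventional HSMR calculation: observed deaths divided
# by model-expected deaths, multiplied by 100. The coefficients and admissions
# below are illustrative assumptions, not values from any published model.
import math

def predicted_death_risk(age, comorbidity_score, emergency_admission):
    """Hypothetical logistic model for the risk of in-hospital death."""
    logit = -6.0 + 0.05 * age + 0.4 * comorbidity_score + 0.8 * emergency_admission
    return 1.0 / (1.0 + math.exp(-logit))

def hsmr(admissions):
    """admissions: iterable of (age, comorbidity_score, emergency_admission, died)."""
    observed = sum(died for _, _, _, died in admissions)
    expected = sum(predicted_death_risk(age, score, emergency)
                   for age, score, emergency, _ in admissions)
    return 100.0 * observed / expected

# Three admissions, one death: HSMR slightly above 100 under this toy model.
print(round(hsmr([(82, 3, 1, 1), (67, 1, 0, 0), (74, 2, 1, 0)])))
```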

Problems with using HSMR as a screening tool for detecting poor-quality hospitals
Low signal-to-noise ratio

Death occurs in 5%–10% of all hospitalised patients. Most of these deaths (between 95%11 and 98%12) reflect the natural history of disease, not poor-quality care. Conversely, most quality problems, while associated with injury and prolonged hospital stays, do not cause death. Consequently, the HSMR, as a screening instrument for quality, is limited by low sensitivity (most quality problems do not cause death) and low specificity (most deaths do not reflect poor-quality care). Simulation studies using mortality data from 190 US hospitals have revealed that almost two-thirds of poor-quality hospitals (in which 25% of deaths were classified as preventable) showed no significant increase in HSMR.13 A further limitation is that, in the absence of data linkage systems, the HSMR does not capture deaths that occur after discharge or interhospital transfer, some of which may reflect poor-quality inpatient care.14
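The detection problem can be made concrete with a simple simulation in the spirit of the study cited above.13 All parameters below (condition-level volume, baseline mortality, the fraction of deaths that are preventable, the two-standard-deviation flag) are assumptions chosen for illustration, not figures from that study.

```python
# Illustrative simulation: how often would a hospital in which 25% of deaths are
# preventable exceed a conventional two-standard-deviation mortality flag?
# All parameters are assumptions for illustration, not figures from the cited study.
import random

random.seed(42)
N = 400                    # admissions per year for one condition
BASE = 0.10                # expected (risk-adjusted) in-hospital mortality
POOR = BASE / 0.75         # poor care: 25% of its deaths are preventable
SD = (N * BASE * (1 - BASE)) ** 0.5
FLAG = N * BASE + 2 * SD   # death count needed to trigger a 2-SD alarm

years = 5000
flagged = sum(
    sum(random.random() < POOR for _ in range(N)) > FLAG
    for _ in range(years)
)
print(f"Poor-quality hospital flagged in {100 * flagged / years:.0f}% of simulated years")
```

Under these assumed numbers, the hospital is missed in a sizeable proportion of simulated years despite a 25% burden of preventable deaths, which is the sensitivity problem described above.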

Low criterion validity

The available evidence indicates a weak and inconsistent association between the HSMR and other measures of quality of care when used for between-hospital comparisons. In one review of 378 patients who died from stroke, myocardial infarction or pneumonia in 11 outlier hospitals with substantially higher HSMRs, adherence rates for 31 recommended processes of care did not differ from those in hospitals with lower HSMRs.15 Even differences in risk-adjusted mortality rates for three high-risk clinical conditions — acute myocardial infarction, heart failure and pneumonia — correlated very poorly with variations in 10 condition-specific process measures of quality reported across 3657 US acute-care hospitals.16 A recent systematic review of 31 studies concluded that risk-adjusted mortality is a poor predictor of preventable complications and quality problems among hospitals.17
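To illustrate what this weak criterion validity looks like numerically, the sketch below computes the correlation between hospital-wide HSMRs and a process-of-care adherence score for six hypothetical hospitals; the figures are invented solely to show a near-zero correlation of the kind reported in these studies.

```python
# Hypothetical illustration of low criterion validity: hospital-wide HSMRs show
# almost no correlation with adherence to recommended care processes.
# Requires Python 3.10+ for statistics.correlation.
import statistics

hsmrs     = [88, 95, 102, 110, 118, 125]   # hypothetical hospital-wide HSMRs
adherence = [78, 92, 71, 90, 85, 80]       # hypothetical % adherence to care processes

print(f"Pearson r = {statistics.correlation(hsmrs, adherence):.2f}")   # ~0.06
```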

Adequacy of risk adjustment

To render the HSMR better able to distinguish low-quality from high-quality hospitals, much attention has been given to optimising statistical models that adjust for differences between hospitals in patient characteristics that increase the risk of death but are independent of the quality of care.18 While risk-adjustment models based on administrative data appear to be equivalent to those based on clinical data in predicting risk for selected, well defined primary diagnoses within large datasets,19 neither type of model may sufficiently adjust for differences in casemix between individual hospitals. Certain comorbidities relevant to risk adjustment — such as obesity, dementia and heart failure — are inconsistently recorded in hospital statistics,20 as is overall functional status,21 which means that coded data may not distinguish a patient who has made a full recovery from one left with persistent disability. Accordingly, tertiary hospitals receiving patients with severe, complicated or end-stage disease may be disadvantaged compared with secondary hospitals that retain patients with less severe, uncomplicated disease.22 The HSMR may also be falsely elevated or reduced according to variations in the coding of present-on-admission comorbidities versus new diagnoses arising during the admission, some of which may reflect avoidable complications.23,24 If differences in coding are non-randomly distributed among hospitals, differences in HSMR will reflect these biases rather than quality of care. In a recent US study involving more than 2.5 million discharges from 83 acute care hospitals,25 four common risk-adjustment models — including the method advocated by researchers at Flinders University6 — produced substantially different mortality rates. In 2006, 12 of the 28 hospitals classified by one method as having higher than expected hospital-wide mortality were classified by one or more of the other methods as having lower than expected mortality. Explanations included disparate statistical methods, differences between hospitals in eligibility and exclusion criteria regarding admissions, and flaws in the hypothesised association between HSMR and quality of care.25
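The sensitivity of the HSMR to model specification can be illustrated with two hypothetical risk-adjustment models applied to the same admissions; the models, coefficients and patient data below are assumptions for illustration, not the methods compared in the cited study.25

```python
# Hypothetical illustration: two risk-adjustment models applied to identical
# admissions yield substantially different HSMRs for the same hospital.
import math

def risk(logit):
    return 1.0 / (1.0 + math.exp(-logit))

# (age, heart_failure_coded, died) for a small, illustrative caseload
admissions = [(85, 1, 1), (79, 1, 0), (90, 1, 1), (70, 0, 0), (68, 0, 0), (88, 1, 1)]

def hsmr(model):
    observed = sum(died for _, _, died in admissions)
    expected = sum(model(age, hf) for age, hf, _ in admissions)
    return 100 * observed / expected

# Model A adjusts for coded heart failure; model B ignores it, as can happen
# when present-on-admission comorbidities are inconsistently recorded.
def model_a(age, hf):
    return risk(-7.0 + 0.06 * age + 1.5 * hf)

def model_b(age, hf):
    return risk(-7.0 + 0.06 * age)

print(f"HSMR under model A: {hsmr(model_a):.0f}")
print(f"HSMR under model B: {hsmr(model_b):.0f}")
```

With these invented data, the same hospital's HSMR differs markedly between the two models, mirroring the discordant classifications reported across real risk-adjustment methods.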

Different reference populations

Choosing which diagnoses or patient groups should be included in the reference populations used to generate risk-adjustment models is particularly problematic. Admission practices for unplanned presentations vary across hospitals, with some institutions coding patients who die in emergency departments shortly after presentation as non-admissions. In Canada8 and the UK,9 even among coded admissions, hospitals exclude vague or undetermined diagnoses, which account for 20% of all deaths. While this maximises the accuracy of risk prediction, it further limits HSMR sensitivity for detecting quality problems. Conversely, specificity is reduced if models include patients who are receiving palliative or end-of-life care, or who have severe illness with dismal prognosis despite best care. Many of these patients will not be admitted to designated (and easily coded) inpatient palliative care units. In one UK hospital, 37% of admissions were found on case review to be for end-of-life care, compared with the reported coded figure of 22%, resulting in a fall of the HSMR from 105 to 68.21 In a Canadian study, the HSMR for one hospital over 2 financial years dropped from 148 to 55 after removal of palliative care patients.12 The HSMR may also vary according to whether a palliative care classification is present-on-admission or applied at various times during an admission that may in fact have involved a quality event.18
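The magnitude of the palliative care effect can be illustrated with a worked example; all counts below are hypothetical and chosen only to show the direction and rough scale of the change, not to reproduce the figures from the Canadian study.12

```python
# Hypothetical worked example: excluding coded palliative care admissions from
# both observed and expected deaths can move the HSMR dramatically.
total_deaths = 370          # all in-hospital deaths, one year
expected_deaths = 250.0     # model-expected deaths for all admissions
palliative_deaths = 180     # observed deaths among coded palliative admissions
palliative_expected = 40.0  # model-expected deaths for those same admissions

hsmr_all = 100 * total_deaths / expected_deaths
hsmr_excluding = 100 * (total_deaths - palliative_deaths) / (expected_deaths - palliative_expected)

print(f"HSMR including palliative care admissions: {hsmr_all:.0f}")      # 148
print(f"HSMR excluding palliative care admissions: {hsmr_excluding:.0f}")  # 90
```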

Uncertain stability over time

At the hospital level, HSMRs should be relatively stable over time, and immune to seasonal and annual fluctuations. Although some studies confirm this,6,27 others show substantial short-term changes in HSMR beyond those reasonably attributable to clinical advances or quality-of-care improvements.2,12

Potential to mislead

In the absence of wide awareness of its limitations and careful clinical interpretation, the hospital-wide HSMR can mislead. Unfavourable HSMRs based on incorrect data or analyses can trigger external inquiries that stigmatise individual hospitals, lower morale and public confidence, and encourage “gaming” of data — eg, by upgrading risk assessments28 — or the pursuit of inappropriately aggressive care.29 The HSMR also overlooks factors that may account for seemingly better performance, such as greater access to step-down facilities, hospice or residential care, or community health services that reduce length of stay and risk of inhospital death.30 Compared with HSMRs specific to clinical condition, hospital-wide HSMRs do not enable hospital clinicians or administrators to easily pinpoint correctable processes of care at the level of the individual departments or units that account for most quality problems (eg, surgery at Bundaberg Base Hospital in 2003–2005 and paediatric cardiac surgery at Bristol Royal Infirmary in 1984–1995). Even within a hospital whose overall HSMR is 100 or less, individual diagnoses and procedures may demonstrate higher than expected mortality.

Does the overall HSMR as a screening tool engender quality improvement?

There are relatively few studies (all of which are uncontrolled before–after analyses) assessing whether individual hospitals with initial HSMRs above 100 have responded by implementing quality improvement programs that have then led to subsequent reductions in HSMR.30,31 In these studies, it remains unclear whether the HSMR was chosen as a quality measure by hospital staff wanting to enact quality improvement in response to other indicators of concern, or whether a pre-existing HSMR was itself the primary catalyst for action. It is also uncertain whether subsequent improvement in HSMR was due to chance, regression to the mean, changes in coding and admission policies, removal of palliative care patients, secular trends, or real effects of local practice optimisation.12

In 2009, a UK hospital guide stated that HSMRs had drawn attention to hospitals of concern, most notably Mid Staffordshire NHS Foundation Trust, and that, as a consequence, this hospital subsequently reduced its HSMR from 127 to 93 in just 2 years.9 However, the HSMR had been elevated for at least 3 years before a public inquiry exposed its poor standards of care, which were already well known to hospital staff.32 The inquiry, not the reporting of HSMR, was the likely catalyst for remedial action. Most of the decrease in the HSMR (from 127 to 93) occurred very quickly — within 12 months of the inquiry — suggesting that patient de-selection and coding changes were largely responsible for the observed decrease in mortality, not practice change.32 Moreover, at least 21 other hospitals in the UK have demonstrated HSMRs above 100, of which seven have had HSMRs higher than Mid Staffordshire for at least 5 years.9 These observations challenge the notion that the HSMR, as a screening tool for quality, can, by itself, predictably drive practice improvement and distinguish local improvement effects from broad secular trends.

Alternative strategies for using the HSMR in improving quality

To be a reliable and trustworthy quality-assessment tool, the HSMR requires comprehensive and valid patient-level data, robust risk adjustment, intimate knowledge of its limitations, and an ability to pinpoint potential quality problems. As none of these criteria are currently being met, we believe that the overall HSMR is not yet sufficiently mature to be applied to all hospitals at one point in time and serve as a useful national indicator of hospital quality.

An alternative strategy worthy of more research might be to use the overall HSMR to monitor changes in mortality over time internally within individual hospitals. This could be complemented by diagnosis-specific HSMRs, again calculated at the hospital level, for well defined, high-volume, high-risk diagnoses associated with evidence-based standardised care processes.33 Diagnosis-specific HSMRs would allow correctable variances in care to be easily ascertained. Such within-hospital monitoring circumvents much of the confounding inherent in between-hospital comparisons, as each hospital serves as its own historical control, assuming no substantive change in coding practices and casemix over the short to medium term. Health roundtables could be convened wherein different hospitals share information on how they have identified, responded to, and evaluated improvement in care deficiencies with the aid of diagnosis-specific HSMRs.
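A minimal sketch of this within-hospital approach is shown below: a diagnosis-specific HSMR is tracked year by year against the hospital's own expected deaths rather than against other hospitals. The yearly counts are hypothetical and serve only to show the form such monitoring might take.

```python
# Hypothetical within-hospital monitoring of a diagnosis-specific HSMR over time,
# e.g. for heart failure admissions at a single institution.
yearly_counts = {
    2006: (48, 41.0),   # (observed deaths, model-expected deaths)
    2007: (45, 42.5),
    2008: (39, 43.0),
    2009: (33, 42.0),   # after a hypothetical care-bundle programme
}

for year, (observed, expected) in sorted(yearly_counts.items()):
    print(f"{year}: diagnosis-specific HSMR = {100 * observed / expected:.0f}")
```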

However, within-hospital HSMRs, even if they are calculated annually, must be accompanied by other tools for the early detection of poor-quality care at the individual hospital level. These include:

Conclusion

The overall HSMR based on routinely collected data can falsely label hospitals as poor performers and fail to identify many that harbour quality problems. The overall HSMR is not yet “fit for purpose” as a quality-screening tool for all hospitals. To avoid inappropriate responses, it should not be used in publicly reported interhospital comparisons. Diagnosis-specific HSMRs, calculated at the level of individual hospitals, are a potentially more fruitful method for monitoring mortality over time for specific diagnoses, which may allow earlier identification of care deficiencies that are responsive to hospital quality improvement programs.


Provenance: Not commissioned; externally peer reviewed.

  • Ian A Scott1
  • Caroline A Brand2
  • Grant E Phelps3
  • Anna L Barker2
  • Peter A Cameron2

  • 1 Department of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, QLD.
  • 2 Centre for Research Excellence in Patient Safety, Monash University, Melbourne, VIC.
  • 3 Ballarat Health Services, Ballarat, VIC.


Correspondence: ian_scott@health.qld.gov.au

Competing interests: None identified.

  • 1. Jarman B, Pieter D, van der Veen AA, et al. The hospital standardised mortality ratio: a powerful tool for Dutch hospitals to assess their quality of care? Qual Saf Health Care 2010; 19: 9-13.
  • 2. Canadian Institute for Health Information. HSMR: a new approach for measuring hospital mortality trends in Canada. Ottawa: CIHI, 2007. http://secure.cihi.ca/cihiweb/products/HSMR_hospital_mortality_trends_in_canada.pdf (accessed May 2011).
  • 3. Institute for Healthcare Improvement. Move your dot. Measuring, evaluating and reducing hospital mortality rates (Part 1). Boston: IHI, 2003.
  • 4. Canadian Patient Safety Institute. Targeted Interventions. Safer Healthcare Now! Edmonton: CPSI, 2006. http://www.saferhealthcarenow.ca/EN/Pages/default.aspx (accessed May 2011).
  • 5. Australian Commission on Safety and Quality in Health Care. Update. Issue 10, Mar 2010. Sydney: ACSQHC, 2010. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/C6E96E608654099CCA257753001ECA1A/$File/Issue-10.pdf (accessed May 2011).
  • 6. Ben-Tovim D, Woodman R, Harrison JE, et al. Measuring and reporting mortality in hospital patients. Canberra: Australian Institute of Health and Welfare, 2009.
  • 7. Ben-Tovim DI, Pointer SC, Woodman R, et al. Routine use of administrative data for safety and quality purposes — hospital mortality. Med J Aust 2010; 193: S100-S103. <MJA full text>
  • 8. Canadian Institute for Health Information. 2009 hospital standardized mortality ratio (HSMR) public release. http://www.cihi.ca/CIHI-ext-portal/internet/en/applicationleft/health+system+performance/quality+of+care+and+outcomes/hsmr/cihi022151 (accessed Apr 2010).
  • 9. Dr Foster Unit, Imperial College London. Dr Foster hospital guide 2009. How safe is your hospital? London: Dr Foster Health, 2009. http://www.drfosterhealth.co.uk (accessed Oct 2010).
  • 10. Australian Government Department of Health and Ageing. A national health and hospitals network for Australia’s future: delivering the reforms. Canberra: Commonwealth of Australia, 2010.
  • 11. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
  • 12. Penfold RB, Dean S, Flemons W, Moffatt M. Do hospital standardized mortality ratios measure patient safety? HSMRs in the Winnipeg Regional Health Authority. Healthc Pap 2008; 4: 8-24.
  • 13. Hofer TP, Hayward RA. Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses? Med Care 1996; 34: 737-753.
  • 14. Goldacre MJ, Griffith M, Gill L, Mackintosh A. In-hospital deaths as fraction of all deaths within 30 days of hospital admission for surgery: analysis of routine statistics. BMJ 2002; 324: 1069-1070.
  • 15. Dubois RW, Rogers WH, Moxley JH, et al. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med 1987; 317: 1674-1680.
  • 16. Werner RM, Bradlow ET. Relationship between Medicare’s Hospital Compare performance measures and mortality rates. JAMA 2006; 296: 2694-2702.
  • 17. Pitches DW, Mohammed MA, Lilford RJ. What is the empirical evidence that hospitals with higher risk-adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res 2007; 7: 91.
  • 18. Tu Y-K, Gilthorpe MS. The most dangerous hospital or the most dangerous equation? BMC Health Serv Res 2007; 7: 185-189.
  • 19. Aylin P, Bottle A, Majeed A. Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ 2007; 334: 1044-1051.
  • 20. Powell H, Lim LL, Heller RF. Accuracy of administrative data to assess comorbidity in patients with heart disease: an Australian perspective. J Clin Epidemiol 2001; 54: 687-693.
  • 21. Campbell SE, Seymour DG, Primrose WR; ACMEPLUS Project. A systematic literature review of factors affecting outcome in older medical patients admitted to hospital. Age Ageing 2004; 33: 110-115.
  • 22. Gordon HS, Rosenthal GE. Impact of interhospital transfers on outcomes in an academic medical center. Implications for profiling hospital quality. Med Care 1996; 34: 295-309.
  • 23. Jencks SF, Williams DK, Kay TL. Assessing hospital-associated deaths from discharge data. The role of length of stay and comorbidities. JAMA 1988; 260: 2240-2246.
  • 24. Glance LG, Osler TM, Mukamel DB, Dick AW. Impact of the present-on-admission indicator on hospital quality measurement: experience with the Agency for Healthcare Research and Quality (AHRQ) inpatient quality indicators. Med Care 2008; 46: 112-119.
  • 25. Shahian DM, Wolf RE, Iezzoni LI, et al. Variability in the measurement of hospital-wide mortality rates. N Engl J Med 2010; 363: 2530-2539.
  • 26. Coory M, Gibberd R. New measures for reporting the magnitude of small-area variation in rates. Stat Med 1998; 17: 2625-2634.
  • 27. Heijink R, Koolman X, Pieter D, et al. Measuring and explaining mortality in Dutch hospitals: The hospital standardized mortality rate between 2003 and 2005. BMC Health Serv Res 2008; 8: 73-80.
  • 28. Powell AE, Davies HTO, Thomson RG. Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls. Qual Saf Health Care 2003; 12: 122-128.
  • 29. Pronovost PJ, Colantuoni E. Measuring preventable harm: helping science keep pace with policy. JAMA 2009; 301: 1273-1275.
  • 30. Jarman B, Bottle A, Aylin P, et al. Monitoring changes in hospital standardised mortality ratios. BMJ 2005; 330: 329.
  • 31. Wright J, Dugdale B, Hammond I, et al. Learning from death: a hospital mortality reduction programme. J R Soc Med 2006; 99: 303-308.
  • 32. The Mid Staffordshire NHS Foundation Trust Inquiry. Independent inquiry into care provided by Mid Staffordshire NHS Foundation Trust. January 2005–March 2009. Volume I. London: Department of Health, 2010. http://www.dh.gov.uk/en/Publicationsandstatistics/Publications/PublicationsPolicyAndGuidance/DH_113018 (accessed Oct 2010).
  • 33. Robb E, Jarman B, Suntharalingam G, et al. Using care bundles to reduce in-hospital mortality: quantitative survey. BMJ 2010; 340: c1234.
  • 34. Scott IA, Darwin IC, Harvey KH, et al, for the CHI Cardiac Collaborative. Multisite, quality-improvement collaboration to optimise cardiac care in Queensland public hospitals. Med J Aust 2004; 180: 392-397. <MJA full text>
  • 35. Victorian Department of Health. Understanding clinical practice toolkit. http://www.health.vic.gov.au/clinicalengagement/downloads/pasp/understanding_clinical_practice_toolkit.pdf (accessed Apr 2010).
  • 36. Duckett SJ, Coory M, Sketcher-Baker K. Identifying variations in quality of care in Queensland hospitals. Med J Aust 2007; 187: 571-575. <MJA full text>
  • 37. Pilcher DV, Hoffman T, Thomas C, et al. Risk-adjusted continuous outcome monitoring with an EWMA chart: could it have detected excess mortality among intensive care patients at Bundaberg Base Hospital? Crit Care Resusc 2010; 12: 36-41.
