
A classification of hospital-acquired diagnoses for use with routine hospital data

Terri J Jackson, Jude L Michel, Rosemary F Roberts, Christine M Jorm and John G Wakefield
Med J Aust 2009; 191 (10): 544-548. doi: 10.5694/j.1326-5377.2009.tb03307.x
Published online: 16 November 2009

Abstract

Objective: To develop a tool to allow Australian hospitals to monitor the range of hospital-acquired diagnoses coded in routine data in support of quality improvement efforts.

Design and setting: Secondary analysis of abstracted inpatient records for all episodes in acute care hospitals in Victoria for the financial year 2005–06 (n = 2.032 million) to develop a classification system for hospital-acquired diagnoses; each record contains up to 40 diagnosis fields coded with the ICD-10-AM (International Classification of Diseases, 10th revision, Australian modification).

Main outcome measure: The Classification of Hospital Acquired Diagnoses (CHADx) was developed by: analysing codes with a “complications” flag to identify high-volume code groups; assessing their salience through an iterative review by health information managers, patient safety researchers and clinicians; and developing principles to reduce double counting arising from coding standards.

Results: The dataset included 126 940 inpatient episodes with any hospital-acquired diagnosis (complication rate, 6.25%). Records had a mean of three flagged diagnoses; including unflagged obstetric and neonatal codes, 514 371 diagnoses were available for analysis. Of these, 2.9% (14 898) were removed as comorbidities rather than complications, and another 118 640 were removed as redundant codes, leaving 380 833 diagnoses for grouping into CHADx classes. We used 4345 unique codes to characterise hospital-acquired conditions; in the final CHADx these were grouped into 144 detailed subclasses and 17 “roll-up” groups.

Conclusions: Monitoring quality improvement requires timely hospital-onset data, regardless of causation or “preventability” of each complication. The CHADx uses routinely abstracted hospital diagnosis and condition-onset information about in-hospital complications. Use of this classification will allow hospitals to track monthly performance for any of the CHADx indicators, or to evaluate specific quality improvement projects.

Patient safety advocates have recently posed the question: “Is health care getting safer?”1 They conclude it is impossible to know, but in order to make progress toward the answer, health systems need to change focus “away from unsystematic voluntary reporting towards systematic measurement”. Their prescription is “a broad but manageable spectrum of indicators that are genuinely useful to the clinical teams that monitor quality and safety day to day”, using “local data that are relevant to clinical concerns . . . how a team is doing compared with last month and last year”.1

Most current approaches to systematic measurement of patient outcomes in hospital do not satisfy these criteria, instead relying on voluntary reporting, a relatively narrow range of diagnoses, or detailed, condition-specific profiles of comorbidities for risk adjustment (Box 1). To our knowledge, there has been only one attempt to use routinely recorded data on diagnosis codes to monitor the full range of hospital-acquired illness and injury — the Utah/Missouri Patient Safety Project16,17 (Box 1). This was limited by the need for expert clinical review to distinguish hospital-acquired diagnoses from comorbidities, eliminating many conditions that could also be community acquired.

Here, we describe the development of a tool to allow routinely coded inpatient data to be used to monitor a full range of hospital-acquired diagnoses (“complications”) to support quality improvement efforts by hospital-based clinical teams. This tool — the Classification of Hospital Acquired Diagnoses (CHADx) — was developed under the sponsorship of the Australian Commission on Safety and Quality in Health Care and builds on the Utah/Missouri project. To identify hospital-acquired diagnoses, it uses a “condition onset” flag that is now common in a number of jurisdictions18 and recorded in all Australian states. We termed these diagnoses “complications” in an attempt to find neutral terminology reflecting the lack of either risk adjustment or information on causation.

The classification is designed to provide a comprehensive overview of all complications as the basis for estimating total and relative per case expenditure by complication type.19 It is also intended to provide hospitals with a computerised tool to group the 4000+ valid diagnosis codes typically used with a hospital-acquired diagnosis-onset flag into a smaller set of clinically meaningful classes for routine monitoring of patient safety and safety improvement efforts.

Methods
ICD-10-AM and condition-onset coding

The CHADx (pronounced “chaddix”) uses data coded according to the International Classification of Diseases, 10th revision, Australian modification (ICD-10-AM). As the ICD was not designed specifically to identify hospital-acquired conditions, the CHADx had to accommodate the idiosyncrasies of the source data and coding rules, while seeking to extract as much information as possible from the record abstracts.

As in most previous classifications, we began with the “external causes” chapters of the ICD, which contain codes for causes of injury specific to hospital care (Complications of medical and surgical care, Y40–Y84), and the codes for manifestations or injuries common in hospital care (T and End of Chapter [EOC] codes). The latter include Complications of surgical and medical care not elsewhere classified (T80–T88), Poisoning by drugs, medicaments and biological substances (T36–T50), and the EOC or postprocedural complication codes specific to particular chapters (eg, cardiac, respiratory).

The complications flag (C prefix20) used in Victoria was the model for the recently adopted national system of “condition onset” flagging.21 To be flagged, a diagnosis must have occasioned treatment or active investigation in hospital, or have extended the length of hospital stay. Coders review the patient’s clinical notes to establish whether each diagnosis was recorded as present on admission. If the diagnosis was not present on admission and is plausibly hospital acquired (ie, not a congenital or chronic condition), the C prefix is assigned. In the past, coding standards have not encouraged assignment of the C prefix to diagnoses in obstetric or perinatal patients because of ambiguities in the timing of onset of particular diagnoses. Thus, because of the small proportion of C prefixes in obstetric and neonatal records, we analysed all codes listed in the obstetric and neonatal chapters of ICD-10-AM that were plausibly hospital acquired.
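The flagging decision described above can be expressed as a simple rule. The sketch below is illustrative only: the field names and the example diagnoses are our assumptions, not the Victorian coding standard itself.

```python
# Illustrative sketch of the condition-onset (C prefix) decision.
# Field names are assumptions for illustration, not the coding standard.
from dataclasses import dataclass


@dataclass
class Diagnosis:
    code: str
    present_on_admission: bool
    congenital_or_chronic: bool
    occasioned_treatment_or_extended_stay: bool


def assign_c_prefix(dx: Diagnosis) -> bool:
    """Return True if the diagnosis should carry the C (condition-onset) flag."""
    if dx.present_on_admission:
        return False          # a comorbidity, not hospital acquired
    if dx.congenital_or_chronic:
        return False          # not plausibly hospital acquired
    # must also have occasioned treatment, active investigation,
    # or an extended length of stay
    return dx.occasioned_treatment_or_extended_stay
```

For example, a postprocedural infection arising after admission that occasioned treatment would be flagged, while a chronic condition documented on admission would not.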

Analysis and design principles

To determine the optimal number of end classes in the CHADx, we examined how variations between hospitals in depth of coding and total number of separations per year interacted with classifications of various sizes. This suggested that for hospitals with over 6000 admissions per year, 120–130 end classes with an incidence of over 0.1% of cases would provide sufficient granularity (specificity of classes and avoidance of “catch all” classes), without creating too many “empty” classes because of infrequently occurring diagnoses. For hospitals with fewer admissions, major “roll-up” groups could be used to monitor a smaller number of broad complication types.
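The incidence screen described above amounts to keeping candidate end classes that occur in more than 0.1% of episodes. The sketch below is a minimal illustration under that assumption; the class names and counts are invented.

```python
# Sketch of the incidence screen used to size the classification: keep
# candidate end classes occurring in more than 0.1% of episodes.
# Class names and counts below are invented for illustration.

def select_end_classes(class_counts: dict[str, int],
                       total_episodes: int,
                       min_incidence: float = 0.001) -> list[str]:
    """Return classes whose incidence exceeds min_incidence of episodes."""
    return sorted(name for name, count in class_counts.items()
                  if count / total_episodes > min_incidence)
```

A class seen 120 times in 50 000 episodes (incidence 0.24%) would be retained as an end class, while one seen 3 times would roll up into a broader group.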

Australian coding standards mandate that codes be recorded in specific sequences: for example, an injury or manifestation should be coded before the cause.21 Some combinations of code types represent redundant coding, or only marginally refine the information available from a single code. We developed working principles for prioritising code selection and reducing double counting arising from these sequenced codes. T and EOC codes are given priority as the most specific codes for hospital-acquired conditions. To avoid double counting of manifestations related to the same cause, a “bracket rule” was used: any codes bracketed between a T or EOC code and a following Y (external cause) code were assigned to a postprocedural CHADx, with no further assignment to other CHADx classes. Exceptions were made for three “high saliency” infection-related complications: septicaemia, methicillin-resistant Staphylococcus aureus, and “other drug resistant” infections.
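The bracket rule can be sketched as a single pass over the sequenced codes of an episode. This is a simplified illustration, not the published CHADx grouper: EOC codes are omitted, the high-saliency code list is abbreviated and hypothetical, and the example codes are illustrative.

```python
# Minimal sketch of the "bracket rule": codes bracketed between a T code
# and the following Y (external cause, Y40-Y84) code are folded into the
# postprocedural class, except high-saliency infection codes, which keep
# their own CHADx class. Code prefixes below are illustrative assumptions.

HIGH_SALIENCY_PREFIXES = ("A41",)  # eg, septicaemia; abbreviated for illustration


def is_t_code(code: str) -> bool:
    return code.startswith("T")


def is_external_cause(code: str) -> bool:
    return code.startswith("Y") and 40 <= int(code[1:3]) <= 84


def apply_bracket_rule(codes: list[str]) -> dict[str, str]:
    """Assign each code to 'postprocedural' or its own CHADx class."""
    assignment = {}
    in_bracket = False
    for code in codes:
        if is_t_code(code):
            in_bracket = True
            assignment[code] = "postprocedural"
        elif is_external_cause(code):
            in_bracket = False
            assignment[code] = "postprocedural"
        elif in_bracket and not code.startswith(HIGH_SALIENCY_PREFIXES):
            assignment[code] = "postprocedural"   # bracketed manifestation
        else:
            assignment[code] = "own_chadx"        # unbracketed or high saliency
    return assignment
```

In this sketch, a manifestation coded between T81.4 and Y83.8 is absorbed into the postprocedural class, whereas a bracketed septicaemia code is exempted and retains its own class.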

Some manifestations are not described with T or EOC codes, and thus the bracket rule defining a code sequence cannot be applied. For example, both a rash and headache could be coded as manifestations of a particular drug. This sequence of three codes is meaningful when it is coded, but ambiguous when analysed. Only the paired external cause code and the immediately preceding diagnosis can be linked; no unequivocal links to other manifestations can be made without referring back to the medical record. Given this limitation on inferring relationships between codes, we took a conservative approach to avoid overinterpreting the data. This had the unavoidable consequence that some causes might have been omitted. In the example above, both the rash and the headache were classified in CHADx, although only one was attributed to their shared drug cause.
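The conservative linkage just described pairs an external cause code only with the diagnosis immediately preceding it, leaving other manifestations unlinked. A minimal sketch (the example codes are illustrative assumptions):

```python
# Sketch of the conservative linking rule described above: an external
# cause (Y) code is linked only to the diagnosis immediately before it;
# any earlier manifestations remain unlinked. Example codes are illustrative.

def link_causes(codes: list[str]) -> list[tuple[str, str]]:
    """Pair each Y (external cause) code with the immediately preceding diagnosis."""
    return [(codes[i - 1], codes[i])
            for i in range(1, len(codes))
            if codes[i].startswith("Y")]
```

For the rash/headache example above, coded as rash (R21), headache (R51), then a drug cause (Y40.0, say), only the headache is linked to the cause; the rash is still classified in CHADx, but without an attributed cause.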

High-volume sets of codes were then grouped together into the first draft of the CHADx, using an iterative process principally involving the first two authors (T J J and J L M). Major groupings were compared with similar published grouping systems11,14,16,17,23 to ensure that salient event types were not overlooked. Although grouping was based in part on group size, single codes or low-volume code sets with high saliency for patient safety were given their own groups. Other small-volume and less specific codes were grouped together. The logic used in the construction of the CHADx is shown in Box 2.

The draft classification was reviewed by the remaining three authors (C M J, J G W and R F R) and then by three independent clinical reviewers, two of whom returned full reviews of the classification. Their suggestions and amendments were analysed, and the groups were further refined.

Discussion

The CHADx is intended as a tool to help hospitals monitor rates of complications and the effect of patient safety interventions. In most Australian states, hospitals submit diagnosis abstracts on a monthly basis. Monthly use of the CHADx would allow hospitals to identify any changes associated with local patient safety strategies in near “real time”. Longitudinal measurement would provide information not available from methods that rely on periodic and costly intensive investigations, such as chart review. We also foresee its potential to help include cost in prioritising patient safety programs.

In contrast to performance indicators, the CHADx is not intended for external monitoring or holding hospitals to account. However, at a broad systems level, it might be useful for monitoring changes in rates of particular complication types. It does not employ risk adjustment, a technique that seeks to standardise the risk of adverse events in a patient population using information on severity or comorbidity. It avoids risk adjustment for two closely related reasons:

In reality, severity of illness and comorbidities do affect rates of complications. However, a tool for priority setting and for local efforts to improve patient safety must present the full picture across the patient population, regardless of the spectrum of risk and severity.

The number of end categories in the CHADx is designed according to the level of specificity required. The optimal granularity of the classification (and thus usefulness of the categories) will vary, depending on both hospital size and local depth of coding: fewer classes provide more robust cell sizes for monitoring, but may group unlike complications together. Detail has been highlighted as a critical feature of patient-event classifications.24 The tiered structure of the 17 major roll-up groups and 144 detailed and comprehensive subclasses is designed to suit a range of potential uses.

The development of any classification system using routine hospital data faces a number of challenges. The quality of both medical record entries and their abstraction varies both between institutions and between jurisdictions. This limits comparisons of data, at least until robust data audits are in place.

Current coding conventions may need to be reconsidered to better support use of routine morbidity data for patient safety. However, monitoring these routine data within institutions remains potentially useful for tracking change, even in the absence of the data audits that would support comparability between institutions. With appropriate quality control and external audits, hospitals may be able to identify peers (with a similar casemix) for comparison of rates of particular CHADx classes for particular procedures or patient groups. The specificity of most CHADx classes might also allow more standardised reporting of complications of care in clinical research.

Assignment of the condition-onset flag has not been audited, although the annual Victorian Department of Human Services inpatient data audit covered assignment of these flags for the first time in 2008.25 The usefulness of the source codes could be compromised if financial incentives were applied to hospitals reporting hospital-acquired illnesses and injuries in their data. For this reason, we argue against the use of the CHADx for public reporting or the application of financial incentives.

Research groups around the world are trialling ways to collect information on patient outcomes to inform efforts to reduce rates of hospital-acquired illness and injury. Routine data are underutilised in these efforts but have the advantage of being comprehensive, timely and available at no additional cost. Validity and reliability of these data will vary within and between health care systems, and only conditions specifically identified in the record can be coded. In Australia and elsewhere, diagnosis coding is subject to increasing scrutiny and formal evaluation.

Despite increased reporting of mortality rates and other measures of quality of care, individual hospitals have had few ways of systematically investigating rates and patterns of quality problems, focusing instead on incident investigations. The CHADx is designed to provide clinicians and hospitals with a computerised tool to group hospital-acquired diagnoses into smaller sets of clinically meaningful classes for routine monitoring of patient safety. It is premised on a “just culture” approach to improving patient safety (which recognises that competent professionals make mistakes, but has zero tolerance for reckless behaviour26), and on due attention to the organisational contexts in which clinicians work. It requires well documented medical charts and investment in training and supervision of coding staff, but does not rely on special-purpose collection of data. It supports local monitoring of complication rates over time, to focus efforts to improve patient outcomes by minimising their incidence. These complications may not be preventable in every patient but are amenable to systematic efforts to reduce their rates. The CHADx can also be used as the basis for setting priorities through supporting the estimation of relative per-case and total expenditure attributable to each CHADx class.

1 Approaches to systematic measurement of hospital outcomes

Current approaches are of four general types.

Analysis of causal factors. This approach focuses on the causes of adverse events in patient care.1,3 These systems aim to be “multiaxial”, allowing patient “safety events” to be analysed by cataloguing a range of relevant contributory factors, such as characteristics of the patient and care team, and the circumstances leading up to, or causing, any breach of patient safety. This is an important focus for workers at the “sharp end” of patient safety, but will continue to rely on voluntary reporting for the near future. Because of the historical focus on accountability of individual health care workers (largely ignoring contributory organisational factors), voluntary reporting is vulnerable to underestimation of rates of such events.4-6 These collections may be more useful to characterise events than to count them.

Case-finding for sentinel events. This type of system typically reports serious, “sentinel” or “never” events.7-9 These systems are better understood as case-finding systems that enable in-depth investigation of particular events. They also use voluntary reporting. The assumption (rarely tested) is that such relatively uncommon events function as sentinels for more systemic problems in patient care.

Performance reporting with risk adjustment. This approach uses routine hospital morbidity data and focuses on performance reporting. It places a premium on preventability and risk adjustment to avoid inappropriate blame of hospitals or providers for adverse outcomes beyond their control. The foremost example of this approach is the US Pay for Performance rules,10 where a set of specific events coded in the record abstract leads to denial of Medicare funding. Other examples are the US Agency for Healthcare Research and Quality Patient Safety Indicators,11 which build on similar earlier work;12,13 the 3M proprietary system Potentially Preventable Complications;14 and Queensland Health’s VLAD (variable life-adjusted display) indicators.15 By necessity, these systems focus on a narrower range of diagnoses or procedures and rely on detailed, condition-specific profiles of comorbidities that predict a higher rate of unfavourable patient outcomes. This approach is primarily embraced by regulatory and funding authorities seeking to reward better patient outcomes and to penalise poor performance.

Monitoring of the range of hospital-acquired diagnoses. The final approach also uses routine data but seeks to use the full range of routinely recorded diagnosis codes that characterise hospital-acquired illness and injury. We have identified only one example: the Utah/Missouri Patient Safety Project,16,17 funded by the US Agency for Healthcare Research and Quality, which developed a set of 64 classes. It used expert clinical review to try to distinguish at the code level between comorbidities (conditions present on admission) and complications (hospital-acquired diagnoses). Thus, it could not include conditions such as pneumonia or urinary tract infections, which may be either community- or hospital-acquired. Because US jurisdictions have yet to switch to the 10th revision of the International Classification of Diseases for coding, the project was developed using the previous version of the World Health Organization’s hospital mortality and morbidity coding system (9th revision, clinical modification [ICD-9-CM]).

Received 2 March 2009, accepted 24 August 2009

  • Terri J Jackson1
  • Jude L Michel1
  • Rosemary F Roberts1
  • Christine M Jorm2
  • John G Wakefield3

  • 1 Australian Centre for Economic Research on Health, University of Queensland, Brisbane, QLD.
  • 2 Australian Commission on Safety and Quality in Health Care, Sydney, NSW.
  • 3 Queensland Health, Brisbane, QLD.


Correspondence: t.jackson@uq.edu.au

Acknowledgements: 

We thank the Australian Commission on Safety and Quality in Health Care for financial support, the Victorian Department of Human Services for providing access to patient-level data from the Victorian Admitted Episodes Database, staff of the National Centre for Classification in Health (Brisbane and Sydney nodes) for helpful comments on the first draft of the CHADx, and Dr Hong Son Nghiem, Dan Borovnicar and Peter McNair for help with computer programming.

Competing interests:

None identified.

  • 1. Vincent C, Aylin P, Franklin BD, et al. Is health care getting safer? BMJ 2008; 337: a2426.
  • 2. Runciman WB, Williamson JAH, Deakin A, et al. An integrated framework for safety, quality and risk management: an information and incident management system based on a universal patient safety classification. Qual Saf Health Care 2006; 15 Suppl 1: i82-i90.
  • 3. World Health Organization’s World Alliance for Patient Safety Drafting Group. The World Health Organization World Alliance for Patient Safety project to develop an international patient safety event classification. The conceptual framework of an international patient safety event classification. Geneva: WHO, 2006.
  • 4. Oken A, Rasmussen MD, Slagle JM, et al. A facilitated survey instrument captures significantly more anesthesia events than does traditional voluntary event reporting. Anesthesiology 2007; 107: 909-922.
  • 5. Sari ABA, Sheldon TA, Cracknell A, Turnbull A. Sensitivity of routine system for reporting patient safety incidents in an NHS hospital: retrospective patient case note review. BMJ 2007; 334: 79-82.
  • 6. Zhan C, Smith SR, Keyes MA, et al. How useful are voluntary medication error reports? The case of warfarin-related medication errors. Jt Comm J Qual Patient Saf 2008; 34: 36-45.
  • 7. Australian Institute of Health and Welfare, Australian Commission for Safety and Quality in Health Care. Sentinel events in Australian public hospitals 2004–05. Canberra: AIHW, 2007. (AIHW Cat. No. HSE 51.)
  • 8. National Quality Forum. Serious reportable events in healthcare 2006 update: a consensus report. Washington, DC: National Quality Forum, 2007.
  • 9. The Joint Commission (US). Sentinel event policy and procedures. Oakbrook Terrace, Ill: the Commission, 2007. http://www.jointcommission.org/SentinelEvents/PolicyandProcedures/se_pp.htm (accessed Feb 2009).
  • 10. Centers for Medicare and Medicaid Services (US). Medicare program; changes to the hospital inpatient prospective payment system and fiscal year 2008 rates; final rule. Washington, DC: Office of the Federal Register, National Archives and Records Administration, 2007.
  • 11. McDonald K, Romano P, Geppert J, et al. Measures of patient safety based on hospital administrative data — the patient safety indicators. Rockville, Md: Agency for Healthcare Research and Quality, 2002. (AHRQ Publication 02-0038.)
  • 12. Iezzoni LI, Davis RB, Palmer RH, et al. Does the Complications Screening Program flag cases with process of care problems? Using explicit criteria to judge processes. Int J Qual Health Care 1999; 11: 107-118.
  • 13. Weingart SN, Iezzoni LI, Davis RB, et al. Use of administrative data to find substandard care: validation of the Complications Screening Program. Med Care 2000; 38: 796-806.
  • 14. Hughes JS, Averill RF, Goldfield NI, et al. Identifying potentially preventable complications using a present on admission indicator. Health Care Financ Rev 2006; 27: 63-82.
  • 15. Duckett SJ, Coory M, Sketcher-Baker K. Identifying variations in quality of care in Queensland hospitals. Med J Aust 2007; 187: 571-575.
  • 16. Expert Panel for Classification of Adverse Event ICD-9-CM Codes. The 2002 report on the findings of rating the Utah/Missouri ICD-9-CM adverse event codes. http://health.utah.gov/psi/pubs/Expertpanel.pdf (accessed Dec 2008).
  • 17. Hougland P, Xu W, Pickard S, et al. Performance of International Classification of Diseases, 9th Revision, Clinical Modification codes as an adverse drug event surveillance system. Med Care 2006; 44: 629-636.
  • 18. Jackson T, Duckett S, Shepheard J, Baxter K. Measurement of adverse events using ‘incidence flagged’ diagnosis codes. J Health Serv Res Policy 2006; 11: 21-25.
  • 19. Ehsani JP, Jackson T, Duckett SJ. The incidence and cost of adverse events in Victorian hospitals 2003–04. Med J Aust 2006; 184: 551-555.
  • 20. Health Data Standards and Systems Unit. Victorian additions to Australian coding standards. Melbourne: Victorian Department of Human Services, 2005. http://www.health.vic.gov.au/hdss/icdcoding/vicadditions/vicadd05.pdf (accessed Mar 2008).
  • 21. National Centre for Classification in Health. Australian coding standards. 6th ed. Sydney: University of Sydney, 2008.
  • 22. Jackson TJ, Michel JL, Roberts R, et al. Development of a validation algorithm for 'present on admission' flagging. BMC Med Inform Decis Mak 2009. In press.
  • 23. McLoughlin V, Millar J, Mattke S, et al. Selecting indicators for patient safety at the health system level in OECD countries. Int J Qual Health Care 2006; 18 Suppl 1: 14-20.
  • 24. Rivard PE, Rosen AK, Carroll JS. Enhancing patient safety through organizational learning: are patient safety indicators a step in the right direction? Health Serv Res 2006; 41 (4 Pt 2): 1633-1653.
  • 25. Victorian ICD Coding Committee. Audits of hospital admitted patient data 2005-08. ICD Coding Newslett 2006-07; First Quarter: 15-18.
  • 26. Agency for Healthcare Research and Quality. Patient Safety Network glossary. http://www.psnet.ahrq.gov/glossary.aspx#refjust culture1 (accessed Oct 2009).
