
Mapping the limits of safety reporting systems in health care — what lessons can we actually learn?

Matthew J W Thomas, Timothy J Schultz, Natalie Hannaford and William B Runciman
Med J Aust 2011; 194 (12): 635-639. doi: 10.5694/j.1326-5377.2011.tb03146.x
Published online: 20 June 2011

Abstract

Objectives: To assess the utility of Australian health care incident reporting systems and determine the depth of information available within a typical system.

Design and setting: Incidents relating to patient misidentification occurring between 2004 and 2008 were selected from a sample extracted from a number of Australian health services’ incident reporting systems using a manual search function.

Main outcome measures: Incident type, aetiology (error type) and recovery (error-detection mechanism). Analyses were performed to determine category saturation.

Results: All 487 selected incidents could be classified according to incident type. The most prevalent incident type was medication being administered to the wrong patient (25.7%, 125), followed by incidents where a procedure was performed on the wrong patient (15.2%, 74) and incidents where an order for pathology or medical imaging was mislabelled (7.0%, 34). Category saturation was achieved quickly, with about half the total number of incident types identified in the first 13.5% of the incidents; all 43 incident types had been identified within the first 76.2% of the dataset. Fifty-two incident reports (10.7%) included sufficient information to classify a specific incident aetiology, and 288 reports (59.1%) had sufficiently detailed information to classify a specific incident recovery mechanism.

Conclusions: Incident reporting systems enable the classification of the surface features of an incident and identify common incident types. However, current systems provide little useful information on the underlying aetiology or incident recovery functions. Our study highlights several limitations of incident reporting systems, and provides guidance for improving the use of such systems in quality and safety improvement.

The collection of data relating to incidents and near-miss events has become an entrenched and critical component of safety management across high-risk industries worldwide.1 According to the quality and safety axiom that “every defect should lead to improvement”, reporting systems exist to provide the raw data for continual improvement processes,2 as well as serving critical functions with respect to the local management of incidents.

In the quest to enhance patient safety, health care has also embraced the collection of near-miss and incident data, with all health services in Australia collecting some form of data. Across most health systems, voluntary reporting systems exist for near-miss and minor incidents, with the primary aim of collecting information about vulnerabilities in the health care system, so that remedies can be applied before an actual adverse event takes place.3 According to the World Health Organization,4 the primary role of such reporting systems is to enable learning across large health systems:

Collecting data is only the first step in a process of organisational learning, through which lessons can be drawn from incidents, modifications made to practice, and the risk of adverse events reduced. At the organisational level, aggregate data are available and more sophisticated monitoring and analysis processes can take place, using data from multiple facilities or multiple units within the organisation. This process of using safety-related data has been termed quadruple-loop learning (personal, local, national and international), emphasising the wide-reaching potential for harnessing lessons from incident report data.5

Recently, a disparity between the number of reports being made and the rate of meaningful evidence-based change to practice has been identified.6 This disparity suggests that ineffective use of data, and a lack of published learning from such systems, may be responsible for the deficit of evidence-based change.

In addition to their primary purpose of systemic quality improvement, incident reporting systems also serve secondary administrative roles, such as local incident management, as well as political and organisational functions. As reporting systems become embedded in the day-to-day process of health care, these administrative functions (which demand less detailed reports and analysis) may have come to outweigh the primary purpose. Furthermore, as the volume of data grows, using it effectively becomes increasingly difficult.

Our study explored the utility of incident reporting systems currently used in most Australian health services. Our primary aim was to determine the depth of information available within a typical incident reporting system and whether more sophisticated “human factors” classification schemes relating to incident aetiology and systemic defences can be applied to incident reporting data.
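
As an illustration of the classification approach described above, the sketch below shows one way an incident report record might be structured for this kind of analysis. It is a minimal sketch only; the field names and the "depth" labels are assumptions for illustration, not the coding scheme used in the study.

```python
# Illustrative sketch only: a minimal record structure for classifying an
# incident report by type, aetiology (error type) and recovery
# (error-detection mechanism). Field names and depth labels are assumptions,
# not the study's actual coding scheme.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClassifiedIncident:
    report_id: str
    incident_type: str                # e.g. "Medication - wrong patient"
    aetiology: Optional[str] = None   # error type, if the narrative supports it
    recovery: Optional[str] = None    # error-detection mechanism, if reported

def classification_depth(incident: ClassifiedIncident) -> str:
    """Describe how deeply a report could be classified from its free text."""
    if incident.aetiology and incident.recovery:
        return "type + aetiology + recovery"
    if incident.recovery:
        return "type + recovery"
    if incident.aetiology:
        return "type + aetiology"
    return "type only"
```

Counting reports at each depth corresponds to the proportions in the Results: only a minority of reports could be classified beyond incident type for aetiology, while a larger share supported classification of a recovery mechanism.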

Methods
Results
Discussion

The principal role of incident reporting systems is to ensure a consistent and coordinated approach to the identification and analysis of incidents so that lessons can be learned and shared across the whole health system.13,14 Reporting of near-miss events has been shown to offer numerous benefits compared with retrospective investigation of adverse events. Perhaps the most important of these benefits are the greater frequency of near-miss events, thus allowing quantitative analysis; and the ability to identify and analyse the recovery functions that enabled the accident trajectory to be stopped before an actual adverse event took place.15

Efficacy of current incident reporting in Australia. Our study shows that the information contained in the current incident reports is frequently inadequate to allow sophisticated analyses of incident aetiology and, more importantly, the recovery functions involved in averting or ameliorating adverse events. The results suggest that incident reporting may not be meeting the primary objective of enabling continuous improvement of safety and quality.

These results echo a report of the Institute of Medicine in the United States, which suggested that the value of near-miss reporting required further investigation.16 Although there is general agreement that near-miss reports contain relevant information about identifiable hazards that cannot be collected by other means,15 most organisations only take action on serious adverse events, which diminishes the value of reporting large numbers of near misses.14,17 Indeed, the sheer volume of incidents reported means that health care organisations tend to investigate most events superficially. For the few incidents that receive thorough investigation, the principal method is root cause analysis, which is seen as the gold standard for gaining deeper insights into the causal features of an adverse event.14,18

Incident reporting systems are also subject to a number of other limitations, including suggestions from international reviews of significant underreporting of incidents,19 and of medical error in particular, as doctors rarely report.20 Reporting is also highly affected by hindsight bias based on the degree of harm the patient has suffered, especially as doctors tend to report the more severe events disproportionately. Incident reports are thus “a nonrandom sample of identified hazards from a larger unknown universe of hazards”.14 In light of these limitations, it is critical that we explore alternative ways to learn from incidents in health care.15-18

Doing more with less — category saturation. Currently, public health systems with established incident reporting systems (such as South Australia, Queensland and New South Wales) receive on average between 30 000 and 100 000 incident reports annually. While there are significant amounts of data available, our study demonstrates that effective learning from incidents requires much less data.

The category saturation results suggest that a critical threshold exists, beyond which quantitative analyses of existing forms of incident data yield limited new information. For health care incident types related to patient misidentification, this threshold was found to be no more than 400 incidents; indeed, a substantial amount of information was available from only 200 incidents.
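
A category saturation analysis of this kind can be sketched in a few lines. The following is a minimal illustration, assuming each report has already been assigned an incident-type label; the labels shown are hypothetical, not drawn from the study data.

```python
# Minimal sketch of a category saturation curve: the cumulative number of
# distinct incident types observed as reports are reviewed in order.
# The example labels are hypothetical.
from typing import Iterable, List

def saturation_curve(incident_types: Iterable[str]) -> List[int]:
    """Cumulative count of distinct incident types, in reporting order."""
    seen = set()
    curve = []
    for label in incident_types:
        seen.add(label)
        curve.append(len(seen))
    return curve

labels = ["medication-wrong-patient", "procedure-wrong-patient",
          "medication-wrong-patient", "request-wrong-label"]
print(saturation_curve(labels))  # [1, 2, 2, 3]
# Saturation is reached where the curve plateaus and further reports add few
# or no new categories; in this study, that occurred within about 400 incidents.
```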

Human factors classification — measuring the resilience of the system. Early studies on adverse events in health care focused on the origins of errors and on the mechanisms involved in their production. Accordingly, most effort was dedicated to the field of error prevention and focused on addressing the contextual and organisational issues in the production of errors.1 However, in a more enlightened world where we accept error to be both inevitable and ubiquitous, current safety science now focuses on the processes of error detection and error recovery.21-23

Our results indicate that incident reporting systems as they are currently designed and administered do not always elicit sufficient detail to enable systematic analysis. While these systems may not be designed to collect details of incident aetiology from a human factors perspective, our study highlights that existing forms of data collection focus predominantly on describing the incident as it unfolded. They contain limited information about causal or contributory factors, and about whether any features of the system were available, or needed, to prevent an actual adverse event. However, given the ubiquity of error, a renewed focus on reporting the aetiology of and recovery from error may place a substantial burden on the time taken to report.

We suggest that the goal of improving learning from patient safety incident data will be best served by exploring innovative approaches to sampling incidents and eliciting more detailed data for each event. What is lacking is the quality, not the quantity, of incident reports. We should be seeking data that will inform a deeper understanding of the system features that promptly detect and mitigate the inevitable failures that occur in a complex system such as health care.

Innovative approaches to maximise the efficacy of incident reporting. To facilitate collection of better-quality incident reports, we propose testing a modified incident reporting system in which new incident reports are selected at random and followed up with the individuals involved, to seek more detailed information pertaining to the incident. We envisage that this type of system would complement the existing approaches of incident reporting and root cause analysis, but with better return on investment.

A novel approach to more effective use of incident report data in industries such as health care would first involve risk-based sampling of incidents from traditional incident reporting systems. The aim of this sampling would be the proactive identification of emergent high-risk safety concerns, rather than the analysis of sentinel events after they have occurred. Risk-based sampling would involve introducing risk assessment into the initial data-collection process within the incident reporting system. This approach already exists within other health care safety and quality methods: health care failure mode and effects analysis, for example, assesses the risk of each failure event in terms of potential severity and probability of occurrence, producing a hazard score that in turn enables prioritisation of safety management activities.18
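
To make the idea concrete, the sketch below shows one way risk-based sampling might prioritise reports using an FMEA-style hazard score (severity multiplied by probability of recurrence). The 1 to 4 rating scales and the sample size are assumptions for illustration, not features of any existing reporting system.

```python
# Illustrative sketch of risk-based sampling: each report is given an
# FMEA-style hazard score (severity x probability), and the highest-scoring
# reports are selected for detailed follow-up. The 1-4 scales and default
# sample size are assumptions, not an established standard.
from dataclasses import dataclass
from typing import List

@dataclass
class ReportedIncident:
    report_id: str
    incident_type: str
    severity: int      # assumed scale: 1 (minimal harm) to 4 (catastrophic)
    probability: int   # assumed scale: 1 (remote) to 4 (frequent)

    @property
    def hazard_score(self) -> int:
        return self.severity * self.probability

def select_for_follow_up(incidents: List[ReportedIncident],
                         top_n: int = 20) -> List[ReportedIncident]:
    """Return the highest-hazard reports, to be followed up for more detailed
    aetiology and recovery data."""
    return sorted(incidents, key=lambda i: i.hazard_score, reverse=True)[:top_n]
```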

Second, this new approach would involve eliciting more detailed data on these high-risk incident types. This process could involve targeting specific incident types and using telephone interview or survey techniques to elicit further information. More detailed information on the aetiology of the event could be obtained in this manner, enabling more sophisticated approaches to analysis. Perhaps more importantly, this approach would enable collection of information on the features of near-miss events that relate to the error tolerance or resilience of the system. In this way, the focus of improving learning from patient safety incidents would shift from the negative analytic frame of “what went wrong” to the more positive analytic frame of “what enabled us to maintain the integrity of the system”. This more positive focus on the systemic defences that enable the detection and timely mitigation of incident trajectories could hold an important key for the renewed vigour of quality and safety interventions in health care. Further research is required to examine the resource implications and practicalities of such changes to incident reporting systems.

1 Principal natural categories of incident type recorded in Australian incident reporting systems, 2004–2008 (n = 487)

Incident type: No. (%)

Admission
    Admitted under wrong medical record number: 6 (1.2%)
Armband
    None: 8 (1.6%)
    Wrong patient: 11 (2.3%)
Autologous product
    Wrong patient: 2 (0.4%)
    Wrong product: 2 (0.4%)
Booking
    Booked under wrong medical record number: 1 (0.2%)
Calls to family
    Wrong patient’s family called: 1 (0.2%)
Clinical information system
    Errors in patient details: 1 (0.2%)
Discharge
    Given wrong summary: 2 (0.4%)
    Given wrong paperwork: 1 (0.2%)
    Wrong patient: 3 (0.6%)
Equipment
    Used on two patients: 2 (0.4%)
Follow-up
    Wrong patient: 1 (0.2%)
Handover instructions
    Wrong patient: 1 (0.2%)
Meal
    Wrong patient: 2 (0.4%)
Medication
    Wrong label: 7 (1.4%)
    Wrong medication: 5 (1.0%)
    Wrong patient: 125 (25.7%)
Multiresistant organism status
    Wrong status: 1 (0.2%)
Notes
    Medication chart for wrong patient: 8 (1.6%)
    Treatment plan in wrong notes: 3 (0.6%)
    Wrong admission filed: 5 (1.0%)
    Wrong chart in notes: 8 (1.6%)
    Wrong label on chart: 25 (5.1%)
    Wrong label on consent: 2 (0.4%)
    Wrong labels in notes: 10 (2.1%)
    Wrong “not for resuscitation” instruction: 1 (0.2%)
    Wrong notes in file: 6 (1.2%)
    Wrong notes with patient: 9 (1.8%)
    Wrong results in file: 9 (1.8%)
Pre-admission
    Letter sent to wrong patient: 1 (0.2%)
Procedure
    Wrong patient: 74 (15.2%)
Request
    Wrong details: 2 (0.4%)
    Wrong label: 34 (7.0%)
    Wrong patient: 23 (4.7%)
Results
    Wrong label: 30 (6.2%)
    Wrong patient: 11 (2.3%)
Script
    Wrong patient: 1 (0.2%)
    Wrong label: 1 (0.2%)
Specimen
    Wrong label: 29 (6.0%)
Transfer
    Wrong patient: 10 (2.1%)
    Wrong unit: 2 (0.4%)
Transfusion
    Wrong patient: 1 (0.2%)


Provenance: Not commissioned; externally peer reviewed.

  • Matthew J W Thomas1
  • Timothy J Schultz2
  • Natalie Hannaford2
  • William B Runciman2

  • 1 School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, SA.
  • 2 Australian Patient Safety Foundation, Adelaide, SA.


Correspondence: matthew.thomas@unisa.edu.au

Acknowledgements: 

We thank the Australian Commission on Safety and Quality in Health Care for funding this study. We also thank the Australian health service for providing access to the data for this study. Ongoing analysis of these incident data is critical to enhancing safety and quality in health care, and we acknowledge the health service’s commitment to safety as a priority over the political and other risks associated with providing access to such data. Sarah Michael conducted the search and extracted incidents for analysis. Peter Hibbert helped to initiate the study.

Competing interests:

None identified.

  • 1. Reason J. Managing the risks of organizational accidents. Aldershot, UK: Ashgate, 1997.
  • 2. Berwick DM. Continuous improvement as an ideal in healthcare. N Engl J Med 1989; 320: 53-56.
  • 3. Institute of Medicine. Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. Washington, DC: National Academies Press, 2000.
  • 4. World Alliance for Patient Safety. WHO draft guidelines for adverse event reporting and learning systems. Geneva: World Health Organization, 2005.
  • 5. Runciman WB, Williamson JAH, Deakin A, et al. An integrated framework for safety, quality and risk management: an information and incident management system based on a universal patient safety classification. Qual Saf Health Care 2006; 15: i82-i90.
  • 6. Battles JB, Stevens DP. Adverse event reporting systems and safer healthcare. Qual Saf Health Care 2009; 18: 2.
  • 7. Runciman WB, Edmonds MJ, Pradhan M. Setting priorities for patient safety. Qual Saf Health Care 2002; 11: 224-229.
  • 8. Hollnagel E. The phenotype of erroneous actions. Int J Man Mach Stud 1993; 39: 1-32.
  • 9. Runciman WB, Merry A, Walton M. Safety and ethics in healthcare. Aldershot, UK: Ashgate, 2007.
  • 10. Amalberti R. The paradoxes of almost totally safe transport systems. Saf Sci 2001; 37: 109-126.
  • 11. Leape LL, Berwick DM. Five years after To err is human: what have we learned? JAMA 2005; 293: 2384-2390.
  • 12. Thomas MJW, Petrilli RM. Interaction between error type and error detection mechanism during normal flight operations. Proceedings of the 27th Conference of the European Association for Aviation Psychology: 2006 Sep 24-28; Potsdam, Germany. Hamburg: European Association for Aviation Psychology, 2006.
  • 13. Braithwaite J, Westbrook M, Travaglia J. Attitudes toward the large-scale implementation of an incident reporting system. Int J Qual Health Care 2008; 20: 184-191.
  • 14. Pronovost P, Morlock LL, Sexton B, et al. Improving the value of patient safety reporting systems. In: Henriksen K, Battles JB, Keyes MA, Grady ML, editors. Advances in patient safety: new directions and alternative approaches. Vol 1. Assessment. Rockville, Md: Agency for Healthcare Research and Quality, 2008.
  • 15. Barach P, Small SD. Reporting and preventing medical mishaps: lessons from non-medical near miss reporting systems. BMJ 2000; 320: 759-763.
  • 16. Institute of Medicine. Aspden P, Corrigan JM, Wolcott J, Erickson SM, editors. Patient safety: achieving a new standard for care. Washington, DC: National Academies Press, 2004.
  • 17. Edmondson A. Learning from failure in health care: frequent opportunities, pervasive barriers. Qual Saf Health Care 2004; 13: ii3-ii9.
  • 18. Senders JW. FMEA and RCA: the mantras of modern risk management. Qual Saf Health Care 2004; 13: 249-250.
  • 19. Shaw C, Coles JT. The reporting of adverse clinical incidents — international views and experience. London: CASPE Research, 2001.
  • 20. Neale G. Are the risks of hospital practice adequately recognised by incident reporting? Qual Saf Health Care 2005; 14: 78-79.
  • 21. Zarbo RJ, Meier FA, Raab SS. Error detection in anatomic pathology. Arch Path Lab Med 2005; 129: 1237-1245.
  • 22. Nyssen A-S, Blavier A. Error detection: A study in anaesthesia. Ergonomics 2006; 49: 517-525.
  • 23. Kanse L, Van Der Schaaf TW, Vrijland ND, Van Mierlo H. Error recovery in a hospital pharmacy. Ergonomics 2006; 49: 503-516.
