Main outcome measures: Engagement of clinicians (number on CRC, number interviewed for the clinical review process, number of specific referrals from clinicians); and numbers of cases reviewed, system issues identified, recommendations made to the hospital board, and ensuing actions.
Results: A multidisciplinary CRC with 34 members established a robust clinical review process and identified 5925 cases for initial case review. Of these, 2776 (46.8%) fulfilled one or more of the specified criteria for adverse events and progressed to detailed review; 342 of these (12.3%) were classed as serious or major. A total of 317 staff (11%) were interviewed, and 881 system issues were identified, resulting in 98 specific recommendations being made to the Clinical Board and implementation of 81 practice changes (including seven hospital-wide projects) to improve patient care.
Conclusion: A robust, multidisciplinary clinical review process with strong links to managers and policymakers can influence an organisation’s response to adverse patient outcomes and underpin a clinical governance framework.
High-profile patient safety inquiries1,2 and persistently high levels of preventable adverse events in health care systems3-5 have led governments to revolutionise their approach to the delivery of safety and quality in health care.6-8 A key component of this revolution has been the adoption of “clinical governance”,6,9 which requires structures and processes that integrate financial control, service performance and clinical quality in ways that engage clinicians and generate service improvement. Good clinical governance ideally shares the responsibility for averting adverse events between clinicians and managers. This includes shared ownership of both the issues and implementation of solutions.
In the Australian Capital Territory, a public review of neurosurgical services at the Canberra Hospital10 recommended that adverse events be identified and monitored to prevent harm to patients. In response to this review, a senior hospital executive decided that both clinicians and managers should be involved in developing a process to not only identify and investigate adverse events but also to create solutions to minimise their recurrence. A multidisciplinary committee was formed to oversee the development and implementation of a hospital-wide clinical review process and to provide the hospital’s Clinical Board with recommendations for reducing the incidence of adverse events.
We undertook a review of documents pertaining to the set-up and maintenance of the Clinical Review Committee (CRC) and recommendations made to and subsequent actions from the Clinical Board during the period 1 September 2002 – 30 June 2006. We assessed the degree of hospital staff engagement in the clinical review process by using the surrogate measures of CRC membership, the number of specific referrals made by clinicians, the number of departmental committees undertaking clinical review, and the number of staff interviewed during investigation of incidents. Other outcome measures were the numbers of cases reviewed, system issues identified, recommendations made to the hospital board, and ensuing actions.
A senior clinical leader was appointed Chair of the CRC by the General Manager, and, together with the Deputy General Manager, appointed a multidisciplinary committee of 12 representative members from September to November 2002. During the first year, the hospital Executive affirmed its commitment to the clinical review process by funding a dedicated team of four skilled clinical nurse reviewers to provide a consistent, objective and timely approach to the process.
A CRC Executive of five members was established to ensure that the weekly CRC meeting only dealt with appropriate cases (severe or significant adverse events) and was not distracted by daily operational matters. The CRC delegated authority to this Executive to prioritise cases by severity of the adverse event, identify the method of case review, and deal with daily operational matters.
CRC members adapted the clinical review process from that of another institution,11 which at the time had neither a multidisciplinary approach to clinical review nor the same hospital structure. The main change was the introduction of a six-tier system of case review, in which the intensity of review depended on the severity of the adverse event, allowing more cases to be reviewed without diminishing the quality of review outcomes.
Cases were identified for initial review using predetermined flags (Box 1). The clinical reviewers would then screen the medical records of flagged cases for the presence of one or more specified adverse events (Box 2), which were developed from a review of published adverse event data,11,12 national core sentinel events13 and aggregated CRC data after 12 months. If a case involved one or more of the specified adverse events, it was tabled at the CRC Executive meeting and evaluated against a severity assessment code (SAC).14 Along with other predetermined criteria relating more specifically to the nature of the case, the SAC determined the method of review (Box 3). The review aimed to determine if any system issues led to the adverse event.
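The triage pipeline described above (detection flags, screening for specified adverse events, then SAC-based routing) can be sketched in code. This is a minimal illustration only: the function name, inputs and return labels are our assumptions, not the hospital's actual six-tier terminology.

```python
# Hypothetical sketch of the CRC triage pipeline; labels are illustrative.

def triage(captured_by_flag: bool, adverse_events: list, sac: int) -> str:
    """Route one case through the tiered review process.

    captured_by_flag -- case was caught by a predetermined flag (Box 1)
    adverse_events   -- specified adverse events found when the medical
                        record was screened (Box 2)
    sac              -- severity assessment code (1 = most severe)
    """
    if not captured_by_flag:
        return "not captured"
    if not adverse_events:
        # Flagged, but screening found none of the specified adverse events.
        return "initial review only"
    if sac in (1, 2):
        # SAC 1 or 2 cases were classed as serious or major.
        return "serious/major: detailed review"
    return "detailed review"
```

The key design point mirrored here is that review intensity follows severity, so the many flagged cases that screen negative consume little review effort.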
The CRC was afforded “qualified privilege” under the ACT Health Act (1993), which encouraged frank discussion of the adverse event during the review process. However, qualified privilege did not prevent the CRC from publishing its findings and recommendations to a wider audience, including the Coroner, the Community and Health Services Complaints Commissioner, patients, and their relatives; nor did it prevent the ACT from supporting the national open disclosure policy.15
When system issues were identified, CRC reviewers, in consultation with the staff members (clinicians and managers) directly or indirectly involved in the adverse event, developed and presented recommendations to the Clinical Board. This peak decision-making body’s role was to accept, reject or modify these recommendations and appoint a senior clinician or manager to ensure they were enacted through policy and practice changes or targeted quality improvement projects.
Over time, common system issues became evident and were aggregated to facilitate prioritisation of clinical improvement initiatives. From cases deemed (based on the SAC) to have had a serious or major patient outcome, 16 broad categories of system issues were identified, then refined after a review of the literature16,17 (Box 4). These system issues were ranked annually by frequency and reported to the Clinical Board, to advise of clinical priority areas requiring attention.
From September 2002 to June 2006, 179 750 inpatients and 1 370 092 occasions of service were screened, capturing 5925 cases involving adverse patient outcomes; many were captured under more than one criterion. Of these events, 2776 (46.8%) progressed to detailed review and, of these, 342 (12.3%) were classed as serious or major (SAC 1 or 2). Investigation of these 342 cases identified at least two system issues associated with each, with a total of 881 system issues being identified.
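As a quick check, the proportions reported above are internally consistent within rounding. The figures below are taken directly from the article; the tolerances simply allow for the one-decimal rounding used in the text.

```python
# Consistency check of the reported case-review proportions
# (all figures from the article; tolerances allow for rounding).
cases, detailed, serious, issues = 5925, 2776, 342, 881

assert abs(100 * detailed / cases - 46.8) < 0.1    # 46.8% progressed to detailed review
assert abs(100 * serious / detailed - 12.3) < 0.1  # 12.3% classed serious or major (SAC 1 or 2)

# 881 system issues across 342 serious/major cases averages about 2.6
# per case, consistent with "at least two" issues per case.
avg_issues = issues / serious
assert avg_issues > 2
```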
Over the 4-year period, the committee grew from 12 to 34 members through active and strategic recruitment, with the Chair and Deputy General Manager meeting 29 clinical directors and nurse managers. New appointments were made deliberately to broaden hospital representation and to embed the clinical review process more deeply in the institution. Additional junior medical officers, registrars, midwives, clinical nurse consultants and senior staff members became CRC members. More recently, a consumer representative has been appointed. Throughout the 4 years, average weekly CRC meeting attendance was sustained at 20 people (60%).
From 2002 to 2006, the number of specific referrals made by clinicians directly to the CRC increased sixfold from 29 to 175 (Box 1), and the number of local morbidity and mortality committees increased from eight to 16. The number of these groups reporting their activity and findings to the CRC increased from two to nine. During the 4-year period, 27 extended reviews (Level 3; see Box 3) were performed, involving interviews with 317 (11%) of the 2854 hospital staff: consultants (70), registrars (43), resident medical officers (20), nursing staff (168), allied health staff (9), ward staff (3) and hospital administrators (4).
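The interview tally above can be verified directly: the per-role counts sum to the stated total of 317, which is 11% of the 2854 hospital staff. The dictionary keys below simply restate the article's role categories.

```python
# Tally of staff interviewed during extended reviews
# (per-role counts as reported in the article).
interviewed = {
    "consultants": 70,
    "registrars": 43,
    "resident medical officers": 20,
    "nursing staff": 168,
    "allied health staff": 9,
    "ward staff": 3,
    "hospital administrators": 4,
}
total_interviewed = sum(interviewed.values())

assert total_interviewed == 317
assert round(100 * total_interviewed / 2854) == 11  # 11% of 2854 hospital staff
```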
Ninety-eight recommendations were made to the hospital’s Clinical Board, of which 81 (83%) have been implemented or continue to be enacted through hospital-wide projects. The actions taken have been far-reaching; examples are detailed in Box 4.
Four of the 16 categories of system issues emerged as the most frequent: clinical assessment and management; clinical guidelines/policy procedure; communication between staff; and skills/education. The most frequently recurring system issues were seen to require large-scale projects to implement hospital-wide changes. These long-term projects include Early Recognition of the Deteriorating Patient; Clinical Handover; Respecting Patient Choices; Review of Resuscitation Processes; Management of the Mentally Ill Patient with Significant Medical Comorbidity; and After-Death Management. Most of these projects have been implemented as hospital-wide programs; for example, the Early Recognition of the Deteriorating Patient project involved development of a new observation chart, an education program, and installation of a track and trigger system. This initiative demonstrated clear changes in clinical practice (increase in the frequency of documentation of observations and calling of the medical emergency team) and an improvement in patient outcome.18
The implementation of a hospital-wide clinical review process in our tertiary hospital has demonstrated that all serious adverse events can be detected in a systematic way using predetermined detection flags and screening criteria. With seven methods of detecting clinical incidents, no significant adverse event has been identified outside the CRC processes. The close relationship with the hospital’s Clinical Board has enabled the CRC to bridge the gap between frontline clinical staff, policymakers and managers, by ensuring that system issues identified in serious adverse events are acknowledged and result in actions and hospital-wide projects to improve patient care.
The role of the independent clinical reviewers has been important in the success of this clinical review process. They have been able to work collaboratively with all clinicians, including senior consultants, and, being located in the hospital’s independent Clinical Practice Improvement Unit, have been able to provide impartial and objective reports. Their independence has also facilitated objective feedback to the clinicians, CRC and Clinical Board.
Another potentially important factor in engaging clinical staff in the review process has been that clinicians themselves drove CRC activities and developed CRC processes, allowing them to “buy into” the CRC and its work. The CRC has also fed back its findings to clinicians, morbidity and mortality committees, and the hospital Executive, and together they developed recommendations for the Clinical Board. Engaging clinicians (nurses, doctors, allied health workers) in developing recommendations facilitates ownership and makes it more likely that the recommendations will be enacted.19
Introduction of the systematic clinical review process has not been without difficulties. Its set-up and maintenance have been time-consuming and dependent on a small number of enthusiastic people. Some craft groups did not initially embrace the clinical review process, but their resistance has declined over time. This change in behaviour occurred through active participation (eg, CRC membership) and through an understanding, gained from face-to-face meetings, that adverse events are investigated consistently and independently under transparent processes.
The qualified privilege conferred on the CRC appears to have helped with acceptance of the clinical review process. Previously, clinicians were reluctant to discuss adverse events9 for fear of reprisal (defamation, litigation). However, with the knowledge that documents relating to CRC investigations were not admissible in a court of law, only rarely did clinicians refuse to take part. With the clinical review process now embedded in the hospital culture, clinicians have welcomed a consumer representative onto the CRC and the introduction of open disclosure.
The CRC was slow to develop rigorous reporting of identified system issues. In the first 2 years, it was difficult to report to the Clinical Board in a meaningful way, due to lack of grouping or prioritisation of identified system issues. Over time, a data dictionary has been developed to enable accurate grouping of identified system issues, which has been essential for the development of hospital-wide projects.
Despite the apparent success of the CRC, this study only reports surrogate markers for engagement of clinical staff in the clinical review process. The failure to conduct interviews with participants and non-participants in the CRC process weakens the evidence for good clinical engagement. Also, in the absence of a CRC database in the early days, much of the data collection was performed manually, increasing the risk of missing data and incorrect analysis.
The CRC, through its multidisciplinary group of clinicians and links with the Clinical Board, has had a visible impact on patient care. The multi-tiered investigative process has been a practical solution to the overwhelming number of cases identified for initial screening, without compromising review outcomes. We see the success of the CRC as twofold: the engagement of clinicians in the process,20 and the development of actions overseen by the peak decision-making body. The consistent methods used for case review of similar incidents, the independent nature of the dedicated reviewers, the penetration of the CRC into the institution and the local university curriculum, and the visible actions that have arisen from the reviews represent some of the evidence of its success.
The clinical review process is itself continually under review, and substantial resources have been invested to not only support the CRC’s processes, but also for clinical improvement projects driven by clinicians. While the system continues to mature, it has led the development of the clinical governance framework in our institution that is now being used territory-wide.
Box 1 Cases detected by Clinical Review Committee (CRC) flags
Box 2 Initial adverse event screening criteria that trigger further review
Box 3 Clinical Review Committee (CRC) review process — levels of review*
Box 4 Frequency of system issues associated with adverse events and examples of actions taken
1. Douglas N, Robinson J, Fahey K. Inquiry into obstetric and gynaecological services at King Edward Memorial Hospital 1990–2000. Final report. Perth: Government of Western Australia, 2001.
2. Walker B. Final report of the Special Commission of Inquiry into Campbelltown and Camden Hospitals. Sydney: New South Wales Attorney General’s Department, 2004.
3. Australian Council for Safety and Quality in Health Care. Charting the safety and quality of health care in Australia. Canberra: Commonwealth of Australia, 2004.
4. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study. N Engl J Med 1991; 324: 370-376.
5. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
6. UK Department of Health. The new NHS: modern, dependable. London: The Stationery Office, 1997. (Series No. Cm 3807.)
7. Australian Council for Safety and Quality in Health Care. Maximising national effectiveness to reduce harm and improve care. Fifth report to the Australian Health Ministers’ Conference. Canberra: Commonwealth of Australia, 2004. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/2D41579F246E93E3CA2571C5002358A0/$File/annualreptjul04.pdf (accessed Dec 2006).
8. Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press, 2000.
9. Braithwaite J, Travaglia JF. An overview of clinical governance policies, practices and initiatives. Aust Health Rev 2008; 32: 10-22.
10. Community and Health Services Complaints Commissioner. A final report of the investigation into adverse patient outcomes of neurosurgical services provided by the Canberra Hospital. Canberra: ACT Government, 2003.
11. Quality Department, Royal North Shore Hospital. QaRNS programme review manual. Sydney: QaRNS, 2002.
12. Wolff AM, Bourke J, Campbell IA, Leembruggen DW. Detecting and reducing hospital adverse events: outcomes of the Wimmera clinical risk management program. Med J Aust 2001; 174: 621-625.
13. Australian Council for Safety and Quality in Health Care. Sentinel events [fact sheet]. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/6A2AB719D72945A4CA2571C5001E5610/$File/sentnlevnt31305.pdf (accessed May 2008).
14. NSW Health. Severity Assessment Code (SAC). November 2005. http://www.health.nsw.gov.au/pubs/2005/pdf/saca4.pdf (accessed Dec 2006).
15. Australian Commission on Safety and Quality in Health Care. Open disclosure standard: a national standard for open communication in public and private hospitals, following an adverse event in health care. Canberra: Commonwealth of Australia, 2008. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/3B994EFC1C9C0B22CA25741F0019FDEE/$File/NOD-Std%20reprinted%202008.pdf (accessed May 2008).
16. Vincent C, Taylor-Adams S, Chapman EJ, et al. How to investigate and analyse clinical incidents: Clinical Risk Unit and Association of Litigation and Risk Management protocol. BMJ 2000; 320: 777-781.
17. Metropolitan Health and Aged Care Services Division. Sentinel event program: annual report 2003–04. Melbourne: Victorian Government Department of Human Services, 2004.
18. Mitchell I, Van Leuvan C, Avard B, et al. Recognising the deteriorating patient reduces unplanned intensive care unit admissions. Anaesth Intensive Care 2007; 35: 1007.
19. Degeling PJ, Maxwell S, Iedema R, Hunter DJ. Making clinical governance work. BMJ 2004; 329: 679-681.
20. Kotter JP. Leading change: why transformation efforts fail. Harvard Bus Rev 1995; (Mar/Apr): 59-67.