
A robust clinical review process: the catalyst for clinical governance in an Australian tertiary hospital

Imogen A Mitchell, Bobby Antoniou, Judith L Gosper, John Mollett, Mark D Hurwitz and Tracey L Bessell
Med J Aust 2008; 189 (8): 451-455. doi: 10.5694/j.1326-5377.2008.tb02120.x
Published online: 20 October 2008

High-profile patient safety inquiries1,2 and persistently high levels of preventable adverse events in health care systems3-5 have led governments to revolutionise their approach to the delivery of safety and quality in health care.6-8 A key component of this revolution has been the adoption of “clinical governance”,6,9 which requires structures and processes that integrate financial control, service performance and clinical quality in ways that engage clinicians and generate service improvement. Good clinical governance ideally shares the responsibility for averting adverse events between clinicians and managers. This includes shared ownership of both the issues and implementation of solutions.

In the Australian Capital Territory, a public review of neurosurgical services at the Canberra Hospital10 recommended that adverse events be identified and monitored to prevent harm to patients. In response to this review, a senior hospital executive decided that both clinicians and managers should be involved in developing a process not only to identify and investigate adverse events but also to create solutions to minimise their recurrence. A multidisciplinary committee was formed to oversee the development and implementation of a hospital-wide clinical review process and to provide the hospital’s Clinical Board with recommendations for reducing the incidence of adverse events.

Here, we report the development and implementation of this clinical review process and its impact on the hospital’s response to adverse patient outcomes.

Methods

We reviewed documents pertaining to the set-up and maintenance of the Clinical Review Committee (CRC), the recommendations it made to the Clinical Board, and the Board’s subsequent actions during the period 1 September 2002 – 30 June 2006. We assessed the degree of hospital staff engagement in the clinical review process using the surrogate measures of CRC membership, the number of specific referrals made by clinicians, the number of departmental committees undertaking clinical review, and the number of staff interviewed during investigation of incidents. Other outcome measures were the numbers of cases reviewed, system issues identified, recommendations made to the Clinical Board, and ensuing actions.

The clinical review process

CRC members adapted the clinical review process from one used at another institution,11 which, at the time, had neither a multidisciplinary approach to clinical review nor the same hospital structure. The main change was the introduction of a six-tier system of case review, in which the intensity of review depended on the severity of the adverse event, allowing more cases to be reviewed without diminishing the review outcomes.

Cases were identified for initial review using predetermined flags (Box 1). The clinical reviewers then screened the medical records of flagged cases for the presence of one or more specified adverse events (Box 2), which were developed from a review of published adverse event data,11,12 national core sentinel events13 and, after the first 12 months, aggregated CRC data. If a case involved one or more of the specified adverse events, it was tabled at the CRC Executive meeting and evaluated against a severity assessment code (SAC).14 Along with other predetermined criteria relating more specifically to the nature of the case, the SAC determined the method of review (Box 3). The review aimed to determine whether any system issues led to the adverse event.
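
To make the tiered triage explicit, the sketch below (in Python) follows a single case from flagging, through screening against the adverse event criteria, to assignment of a level of review. The flag names, screening criteria, SAC cut-offs and level assignments are illustrative assumptions only, not the CRC's actual rules, which are summarised in Boxes 1-3 and the SAC matrix;14 the sketch simply shows how severity and case characteristics can be mapped to an intensity of review.

# A minimal sketch of the tiered review logic, under assumed rules.
# None of the names, codes or thresholds below are the CRC's actual criteria;
# they stand in for the flags (Box 1), screening criteria (Box 2),
# severity assessment code (SAC) and levels of review (Box 3).

from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Case:
    flags: Set[str] = field(default_factory=set)            # predetermined detection flags
    adverse_events: Set[str] = field(default_factory=set)   # events found on record screening
    sac: Optional[int] = None                                # 1 = most severe ... 4 = least severe
    units_involved: int = 1                                  # clinical units involved
    agencies_involved: int = 1                               # ACT health agencies involved

def fulfils_screening_criteria(case: Case, criteria: Set[str]) -> bool:
    """A flagged case progresses only if record review finds a specified adverse event."""
    return bool(case.flags) and bool(case.adverse_events & criteria)

def level_of_review(case: Case) -> int:
    """Assign a level of review; lower numbers mean more intensive review (cf. Box 3)."""
    if case.sac == 1:                  # assumed: actual/potential sentinel or critical incident
        return 1                       # external opinion
    if case.agencies_involved > 1:     # more than one ACT health agency involved
        return 2                       # interdivisional (joint) review
    if case.sac == 2:                  # assumed: significant incident
        return 3                       # CRC extended review
    if case.units_involved > 1:        # not significant, but crosses clinical units
        return 4                       # review and presentation to CRC
    return 5                           # single unit review

# Hypothetical example: a flagged case with a screened adverse event rated SAC 2
criteria = {"unexpected death", "unplanned return to theatre", "medication error with harm"}
case = Case(flags={"unplanned ICU admission"},
            adverse_events={"medication error with harm"},
            sac=2, units_involved=2)
if fulfils_screening_criteria(case, criteria):   # every screened case is tabled (Level 6)
    print(f"Tabled at CRC Executive; level of review: {level_of_review(case)}")  # -> 3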

The CRC was afforded “qualified privilege” under the ACT Health Act (1993), which encouraged frank discussion of the adverse event during the review process. However, qualified privilege did not prevent the CRC from publishing its findings and recommendations to a wider audience, including the Coroner, the Community and Health Services Complaints Commissioner, patients, and their relatives; nor did it prevent the ACT from supporting the national open disclosure policy.15

Results

From September 2002 to June 2006, 179 750 inpatients and 1 370 092 occasions of service were screened, capturing 5925 cases involving adverse patient outcomes; many were captured under more than one criterion. Of these events, 2776 (46.8%) progressed to detailed review and, of these, 342 (12.3%) were classed as serious or major (SAC 1 or 2). Investigation of these 342 cases identified at least two system issues associated with each case, and 881 system issues in total (an average of about 2.6 per case).

Discussion

The implementation of a hospital-wide clinical review process in our tertiary hospital has demonstrated that all serious adverse events can be detected in a systematic way using predetermined detection flags and screening criteria. With seven methods of detecting clinical incidents, no significant adverse event has been identified outside the CRC processes. The close relationship with the hospital’s Clinical Board has enabled the CRC to bridge the gap between frontline clinical staff, policymakers and managers, by ensuring that system issues identified in serious adverse events are acknowledged and result in actions and hospital-wide projects to improve patient care.

The role of the independent clinical reviewers has been important in the success of this clinical review process. They have been able to work collaboratively with all clinicians, including senior consultants, and, being located in the hospital’s independent Clinical Practice Improvement Unit, have been able to provide impartial and objective reports. Their independence has also facilitated objective feedback to the clinicians, CRC and Clinical Board.

Another potentially important factor in engaging clinical staff in the review process has been that clinicians themselves have driven CRC activities and developed CRC processes, allowing them to “buy into” the CRC and its activities. The CRC has also been able to give feedback on its findings to clinicians, morbidity and mortality committees, and the hospital Executive, and to develop recommendations for the Clinical Board together with them. This engagement of clinicians (nurses, doctors, allied health workers) in developing recommendations facilitates ownership and makes it more likely that the recommendations will be enacted.19

Introduction of the systematic clinical review process has not been without difficulties. Its set-up and maintenance have been time-consuming and dependent on a small number of enthusiastic people. Some craft groups did not initially embrace the clinical review process, but their resistance to taking part has declined over time. This change in behaviour occurred through active participation (eg, CRC membership) and through an understanding, gained from face-to-face meetings, that adverse events are investigated consistently and independently in accord with transparent processes.

The qualified privilege conferred on the CRC appears to have helped with acceptance of the clinical review process. Previously, clinicians were reluctant to discuss adverse events9 for fear of reprisal (defamation, litigation). However, with the knowledge that documents relating to CRC investigations were not admissible in a court of law, only rarely did clinicians refuse to take part. With the clinical review process now embedded in the hospital culture, clinicians have welcomed a consumer representative onto the CRC and the introduction of open disclosure.

The CRC was slow to develop rigorous reporting of identified system issues. In the first 2 years, it was difficult to report to the Clinical Board in a meaningful way, due to lack of grouping or prioritisation of identified system issues. Over time, a data dictionary has been developed to enable accurate grouping of identified system issues, which has been essential for the development of hospital-wide projects.
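
As an illustration only (the article does not describe how the data dictionary was implemented), such a dictionary can be as simple as a mapping from the free-text system issues recorded by reviewers to the standard categories used for Clinical Board reporting (the categories shown in Box 4), so that issues can be counted and prioritised consistently. The phrases and matching rules in the Python sketch below are assumptions, not CRC data.

from collections import Counter

# Illustrative data dictionary: maps phrases a reviewer might record against a case
# to the standard system-issue categories reported to the Clinical Board (Box 4).
# The phrases and matching rules are assumptions made for the sake of the example.
DATA_DICTIONARY = {
    "handover not documented": "Communication between staff",
    "no escalation of abnormal vital signs": "Patient observation process",
    "guideline not followed": "Clinical guidelines/policy procedure",
    "junior staff unsupervised": "Staff supervision",
    "infusion pump unavailable": "Equipment",
}

def categorise(recorded_issues):
    """Group free-text system issues into standard categories; unknown terms are
    returned for addition to the dictionary rather than silently dropped."""
    counts, unmapped = Counter(), []
    for issue in recorded_issues:
        category = DATA_DICTIONARY.get(issue.strip().lower())
        if category:
            counts[category] += 1
        else:
            unmapped.append(issue)
    return counts, unmapped

issues = ["Handover not documented", "Guideline not followed",
          "No escalation of abnormal vital signs", "Theatre list overbooked"]
counts, unmapped = categorise(issues)
print(counts)    # tallies per standard category, ready for Clinical Board reporting
print(unmapped)  # ['Theatre list overbooked'] -> candidate new dictionary entry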

Despite the apparent success of the CRC, this study reports only surrogate markers for engagement of clinical staff in the clinical review process. The lack of interviews with participants and non-participants in the CRC process weakens the evidence for good clinical engagement. Also, in the absence of a CRC database in the early days, much of the data collection was performed manually, increasing the risk of missing data and incorrect analysis.

The CRC, through its multidisciplinary group of clinicians and links with the Clinical Board, has had a visible impact on patient care. The multi-tiered investigative process has been a practical solution to the overwhelming number of cases identified for initial screening, without compromising review outcomes. We see the success of the CRC as twofold: the engagement of clinicians in the process,20 and the development of actions overseen by the peak decision-making body. The consistent methods used for case review of similar incidents, the independent nature of the dedicated reviewers, the penetration of the CRC into the institution and the local university curriculum, and the visible actions that have arisen from the reviews represent some of the evidence of its success.

The clinical review process is itself continually under review, and substantial resources have been invested not only to support the CRC’s processes but also to support clinical improvement projects driven by clinicians. While the system continues to mature, it has led the development of the clinical governance framework in our institution, which is now being used territory-wide.

2 Initial adverse event screening criteria that trigger further review

Australian national core sentinel events13

Other triggers

3 Clinical Review Committee (CRC) review process — levels of review*

Level and method of review: Type of adverse event

Level 1 (External opinion): Actual or potential significant/sentinel/critical incident

Level 2 (ACT Clinical Audit Committee [CAC] interdivisional [joint] review): Any incident involving more than one health agency in the ACT

Level 3 (CRC extended review): Actual or potential significant/sentinel/critical clinical incident in accord with severity assessment coding process and Significant Incident Policy

Level 4 (Review and presentation to CRC): Incident involving more than one clinical unit — “not significant” in accord with severity assessment coding process and Significant Incident Policy

Level 5 (Single unit review): Incident involving only one clinical unit — “non-significant” in accord with severity assessment coding process and Significant Incident Policy

Level 6 (CRC Executive): Any case reviewed by the clinical reviewers that fulfils the screening criteria

ACT = Australian Capital Territory. * A review may be escalated to another level at the discretion of the CRC Executive.

4 Frequency of systems associated with adverse events and examples of actions taken

Figures for each system associated with adverse events are shown for 2003–2004, 2004–2005 and 2005–2006, respectively.

Clinical assessment and management: 35%, 57%, 34%
Clinical guidelines/policy procedure: 20%, 43%, 23%
Communication between staff: 37%, 42%, 26%
Skills/education: 33%, 27%, 22%
Patient observation process: 19%, 25%, 9%
Documentation: 20%, 24%, 13%
Coordination of care: 17%, 16%, 20%
Staff supervision: 7%, 14%, 9%
Human resources/staff allocation: 8%, 11%, 4%
Equipment: 5%, 8%, 8%
External factors: 4%, 8%, 5%
Other factors: 6%, 8%, 6%
Physical environment: 4%, 4%, 8%
Communication between staff, patient and family: 3%, 2%, 7%
Security/design: 0, 1%, 1%
Patient site/identification: 0, 1%, 3%

  • Imogen A Mitchell1
  • Bobby Antoniou1
  • Judith L Gosper1
  • John Mollett1
  • Mark D Hurwitz1
  • Tracey L Bessell2

  • 1 Canberra Hospital, Canberra, ACT.
  • 2 Patient Safety and Quality Unit, ACT Health, Canberra, ACT.


Correspondence: imogen.mitchell@act.gov.au

Acknowledgements: 

We would like to acknowledge the contribution of Dr Wayne Ramsey, AM, to the establishment of this clinical review process.

Competing interests:

None identified.

  • 1. Douglas N, Robinson J, Fahey K. Inquiry into obstetric and gynaecological services at King Edward Memorial Hospital 1990–2000. Final report. Perth: Government of Western Australia, 2001.
  • 2. Walker B. Final report of the Special Commission of Inquiry into Campbelltown and Camden Hospitals. Sydney: New South Wales Attorney General’s Department, 2004.
  • 3. Australian Council for Safety and Quality in Health Care. Charting the safety and quality of health care in Australia. Canberra: Commonwealth of Australia, 2004.
  • 4. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study. N Engl J Med 1991; 324: 370-376.
  • 5. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471. <MJA full text>
  • 6. UK Department of Health. The New NHS: modern, dependable. London: The Stationery Office, 1997. (Series No. Cm 3807.)
  • 7. Australian Council for Safety and Quality in Health Care. Maximising national effectiveness to reduce harm and improve care. Fifth report to the Australian Health Ministers’ Conference. Canberra: Commonwealth of Australia, 2004. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/2D41579F246E93E3CA2571C5002358A0/$File/annualreptjul04.pdf (accessed Dec 2006).
  • 8. Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press, 2000.
  • 9. Braithwaite J, Travaglia JF. An overview of clinical governance policies, practices and initiatives. Aust Health Rev 2008; 32: 10-22.
  • 10. Community and Health Services Complaints Commissioner. A final report of the investigation into adverse patient outcomes of neurosurgical services provided by the Canberra Hospital. Canberra: ACT Government, 2003.
  • 11. Quality Department Royal North Shore Hospital. QaRNS programme review manual. Sydney: QaRNS, 2002.
  • 12. Wolff AM, Bourke J, Campbell IA, Leembruggen DW. Detecting and reducing hospital adverse events: outcomes of the Wimmera clinical risk management program. Med J Aust 2001; 174: 621-625. <MJA full text>
  • 13. Australian Council for Safety and Quality in Health Care. Sentinel events [fact sheet]. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/6A2AB719D72945A4CA2571C5001E5610/$File/sentnlevnt31305.pdf (accessed May 2008).
  • 14. NSW Health. Severity Assessment Code (SAC). November 2005. http://www.health.nsw.gov.au/pubs/2005/pdf/saca4.pdf (accessed Dec 2006).
  • 15. Australian Commission on Safety and Quality in Health Care. Open disclosure standard: a national standard for open communication in public and private hospitals, following an adverse event in health care. Canberra: Commonwealth of Australia, 2008. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/3B994EFC1C9C0B22CA25741F0019FDEE/$File/NOD-Std%20reprinted%202008.pdf (accessed May 2008).
  • 16. Vincent C, Taylor-Adams S, Chapman EJ, et al. How to investigate and analyse clinical incidents: Clinical Risk Unit and Association of Litigation and Risk Management protocol. BMJ 2000; 320: 777-781.
  • 17. Metropolitan Health and Aged Care Services Division. Sentinel event program: annual report 2003–04. Melbourne: Victorian Government Department of Human Services, 2004.
  • 18. Mitchell I, Van Leuvan C, Avard B, et al. Recognising the deteriorating patient reduces unplanned intensive care unit admissions. Anaesth Intensive Care 2007; 35: 1007.
  • 19. Degeling PJ, Maxwell S, Iedema R, Hunter DJ. Making clinical governance work. BMJ 2004; 329: 679-681.
  • 20. Kotter JP. Leading change: why transformation efforts fail. Harvard Bus Rev 1995; (Mar/Apr): 59-67.
