This article reports the experience of the Victorian Department of Health in seeking clinician engagement in the testing of 11 quality-of-care indicators in 20 health services in Victoria.
The Department previously developed a suite of 18 core indicators and seven subindicators known as the AusPSI set.
We used routinely collected administrative data from the Victorian Admitted Episodes Dataset to produce variable life-adjusted display (VLAD) control charts for 11 selected indicators.
The Department recognises that clinicians are responsible for the safety and quality of the care they provide, and therefore that it is necessary to engage clinicians in the process of investigating apparent variation in patient care.
Although using readily available and inexpensive routinely collected administrative data to measure clinical performance has a certain appeal, the use of administrative data and VLADs to identify apparent variations has posed significant challenges due to concerns about the quality of the data and resource requirements.
When clinicians at a major Melbourne hospital were engaged, it resulted in an improvement in clinical practice.
Investigating apparent variation in patient care provides an ideal opportunity for emerging clinical leaders to take local ownership, develop expertise in examining processes of care, and implement change as required.
The pursuit of efficient and effective means of monitoring and improving quality and safety of patient care is occurring globally. At a national level, through the 2008 National Healthcare Agreement, the Council of Australian Governments agreed to report on national trends and the performance of all jurisdictions on a variety of health measures, including selected indicators of quality and safety. In November 2009, the Australian Health Ministers’ Conference (AHMC) endorsed the recommendation by the Australian Commission on Safety and Quality in Health Care (ACSQHC) that all Australian hospitals routinely review reports based on a series of hospital-based outcome indicators. This aligns with another AHMC initiative: for the ACSQHC to publish a national safety and quality framework. The proposed draft includes “driven by information” as one of three elements of the national framework’s vision, along with a strategy for action by health systems and providers to “continually monitor the effects of healthcare intervention”.1
At a state level, the Victorian Clinical Governance Policy Framework is used by the Department of Health to map its performance monitoring activities. Clinical effectiveness is one of the Victorian Framework’s domains of quality and safety, and two aspects specified are that:
performance of clinical care processes and clinical outcomes are measured; and
clinical performance measures, peer review and clinical audit are used to evaluate and improve performance.2
The Framework commits the Department to develop a set of core quality and safety indicators for use in clinical governance processes, recognising that this will interdigitate with a range of national processes. Importantly, one of the Framework’s stated principles is that clinicians and clinical teams are directly responsible and accountable for the safety and quality of care they provide.
This article describes the Department’s challenge in developing a suite of indicators that aligns with clinicians’ desire to continually improve the quality of their clinical practice, the Department’s responsibility to monitor, evaluate and improve performance, and the recognition that clinicians are themselves responsible for the safety and quality of the care they provide.
Before the development of the above frameworks, the Victorian Auditor-General recommended the use of routinely collected hospital administrative data, specifically the Victorian Admitted Episodes Dataset (VAED), to monitor patient safety and quality. The Department commenced work to investigate the utility of VAED data as a means of identifying apparent variations in patient safety and quality of care related to clinical outcomes. The use of administrative data for this purpose requires the development of indicators based on appropriate coding algorithms. The Department developed a suite of 18 core indicators and seven subindicators known as the AusPSI set with coding algorithms based on International Classification of Diseases, 10th revision, Australian modification (ICD-10-AM).3
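In essence, such a coding algorithm screens each admitted episode's coded diagnoses against an indicator's inclusion list. The sketch below illustrates the general shape of this kind of screen; the code prefixes, field names and the onset flag convention (a "C" flag assumed here to mark a condition arising during the admission) are hypothetical placeholders, not the actual AusPSI specifications, which involve detailed ICD-10-AM inclusion and exclusion lists.3

```python
def flag_episode(episode, target_codes, require_in_hospital_onset=True):
    """Screen one admitted episode against an indicator's code list.

    episode: {"diagnoses": [(icd10am_code, onset_flag), ...]}
    target_codes: list of code prefixes defining the indicator (illustrative).
    onset_flag: assumed "C" marks a condition that arose during the admission,
    which matters for complication-type indicators.
    """
    for code, onset in episode["diagnoses"]:
        if any(code.startswith(t) for t in target_codes):
            if not require_in_hospital_onset or onset == "C":
                return True
    return False
```

Real indicator definitions also apply exclusions (for example, conditions present on admission, or particular DRGs), which is one reason clinical review of flagged cases remains essential.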
Like the Queensland Health approach,4 our intention was to develop clinical indicators to enable the investigation of apparent variations in practice. Central to this undertaking was the belief that an important distinction can be made between formative and summative assessment.5 Formative assessments (clinical indicators) are diagnostic and help clinicians take control of areas for improvement, which is appropriate as clinicians are responsible and accountable for the safety and quality of care they provide. Summative assessments (performance measures) are evaluative and used primarily for external purposes such as public reporting.
We determined that the measurement of indicators by means of administrative data offers a formative way of driving improvement at local levels. It is not clear that using administrative data to measure performance in a summative way is valid or even appropriate.6,7 This point is well made by Duckett and colleagues.4
Through the Patient Safety Monitoring Initiative (PSMI), the Department drew on the Queensland Health initiative of using statistical process control charts, specifically VLADs, to identify variations in selected outcomes across hospitals. The Queensland Health approach to monitoring clinical outcomes represents a significant increase in centralised monitoring.4 However, Victoria operates a devolved hospital governance model, so the implementation of sequential monitoring using VLAD in this context is very reliant on clinician acceptance.
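The VLAD itself is a simple cumulative statistic: for each patient in admission order, the difference between the risk-adjusted expected outcome (for example, a modelled probability of in-hospital death) and the observed outcome is added to a running total, so the chart drifts upward when outcomes are better than expected and downward when they are worse. A minimal sketch, assuming the expected risks are supplied by a separate risk-adjustment model (the numbers in the usage note are purely illustrative):

```python
def vlad(expected_risks, outcomes):
    """Variable life-adjusted display series.

    expected_risks: per-patient modelled probability of the adverse outcome.
    outcomes: per-patient observed outcome (1 = event occurred, 0 = not).
    Returns the cumulative sum of (expected - observed), i.e. the running
    tally of "statistical lives" saved relative to expectation.
    """
    total = 0.0
    series = []
    for p, observed in zip(expected_risks, outcomes):
        total += p - observed
        series.append(total)
    return series
```

For example, `vlad([0.1, 0.2, 0.05], [0, 1, 0])` drops sharply at the second patient, whose death greatly outweighed the 0.2 expected risk. In practice a flag is raised when the series crosses a predefined control limit, triggering local investigation.
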
Through the establishment of the PSMI, the Department chose to work collaboratively with health services, and clinicians in particular, to develop a suite of indicators based on earlier work. The Department invited 20 Victorian public health services to be involved in assessing the value of 11 selected indicators.
The 11 indicators comprised six Victorian-defined indicators: Death in low mortality DRGs (diagnosis-related groups), Complications of anaesthesia, In-hospital fracture, Postoperative haemorrhage or haematoma, Postoperative deep vein thrombosis/pulmonary embolus, and Obstetric trauma — vaginal or caesarean delivery. Five Queensland Health clinical indicators were selected for initial testing in Victoria: Stroke in-hospital mortality, Heart failure in-hospital mortality, AMI (acute myocardial infarction) in-hospital mortality, Pneumonia in-hospital mortality, and Fractured neck of femur in-hospital mortality.
An introductory forum held on 1 April 2009 was attended by over 40 health service representatives, including senior clinicians, directors and managers of quality and risk, and health information managers. The frank and open discussion at this forum demonstrated that concerns about administrative data presented several hurdles to clinician engagement.
A reference group was established, comprising health service representatives from each participating health service — mainly directors and managers of quality and risk, independent epidemiologists and the Department’s program team. The reference group was intended to provide a forum to continue and improve the collaboration between the Department and health services.
The reference group, through the program team, has sought advice about the PSMI indicators from relevant clinical peak bodies. The Victorian Consultative Council on Anaesthetic Mortality and Morbidity, while acknowledging the need for a robust indicator for complications arising from anaesthesia, highlighted the inability of the PSMI indicator to accurately identify true anaesthetic-related adverse events. In response, the Department stopped reporting this indicator. Advice from clinicians at a major maternity hospital suggested we reconsider the definition for the Obstetric trauma indicator. We responded by amending the definition in line with the advice received.
Clinicians frequently request de-identified “benchmarking” information. There is an apparent desire to know how performance compares both with past achievement and with the achievement of others. Constant reference only to one’s own performance can have a deleterious result, described as self-reference inertia.8 Providing a source of comparison can be a positive motivator to match and exceed the achievements of others.
Because we want to provide a means of comparison, we supply de-identified hospital-level statewide rates in the form of funnel plots, giving hospitals a clearer picture of their performance over a defined period. We posit that control charts complement cross-sectional funnel plots rather than replace them, but agree that more work is needed to determine how best to engage end users of clinical indicators.9
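A funnel plot places each hospital's indicator rate against its case volume, with control limits that narrow as volume grows, so small hospitals are not unfairly flagged on the basis of a handful of cases. A minimal sketch of the underlying calculation, assuming a simple binomial model around the statewide rate with three-standard-error limits (the hospital labels and figures are purely illustrative; in practice the data are de-identified):

```python
import math

def funnel_limits(p0, n, z=3.0):
    """Binomial control limits around the statewide rate p0 for volume n."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return max(0.0, p0 - z * se), min(1.0, p0 + z * se)

def flag_outliers(hospitals, z=3.0):
    """Classify each hospital relative to the funnel.

    hospitals: dict mapping a (de-identified) label to (events, cases).
    Returns the statewide rate and a dict of "high"/"low"/"in-control" flags.
    """
    total_events = sum(e for e, n in hospitals.values())
    total_cases = sum(n for e, n in hospitals.values())
    p0 = total_events / total_cases
    flags = {}
    for label, (events, cases) in hospitals.items():
        lo, hi = funnel_limits(p0, cases, z)
        rate = events / cases
        flags[label] = "high" if rate > hi else "low" if rate < lo else "in-control"
    return p0, flags
```

Note the design choice this encodes: a hospital is flagged only when its rate falls outside limits scaled to its own volume, which is what makes the cross-sectional funnel plot a fairer comparison than a simple league table of raw rates.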
The Department is continually mindful that clinicians and clinical teams are directly responsible and accountable for the safety and quality of care they provide, and that it is important to adopt an inclusive system of review and refinement that is not adversarial or judgemental.
Participating hospitals report limited involvement of clinicians (partly due to a lack of confidence in the data source), a higher-than-anticipated level of strain on resources (although this is difficult to quantify), and too great a delay between incident and report at the hospital level.
Despite the lack of enthusiasm about the process, quality improvement can result when the initiative’s principles are applied at the clinical level. One major Victorian metropolitan hospital demonstrated this when a flag arose in the hospital’s report for Postoperative deep vein thrombosis/pulmonary embolus. An audit revealed patients included in the PSMI report had experienced thrombophlebitis, not thromboembolism.
Embracing the spirit of the PSMI, to identify possible areas for improvement, the hospital asked: “Why are we seeing so many cases of thrombophlebitis?” Clinicians discovered that intravenous (IV) appliances were being left in-situ longer than necessary. Clinicians reviewed guidelines for the removal of IV appliances and implemented an updated process. As a result, the rate of thrombophlebitis as a complication of IV appliances decreased.
This is not an experience reported by other hospitals involved in the PSMI. It indicates, however, that improvement in care by using indicators based on administrative data is possible. We recognise that the key is not the data but the involvement of clinicians.
As already noted, hospitals have expressed concern that investigation of apparent variation flagged by VLADs creates a higher-than-anticipated level of strain on resources. Beyond the investigation of apparent variation in care itself, the number of indicators reported also contributes to the resource burden. It is critical not to divert clinical resources away from patient care to engage clinicians in an investigation. Diverting clinicians from patient care may itself create an environment that compromises safety and quality.
The federal government’s MyHospitals website was scheduled to be launched during August 2010. However, the caretaker period associated with the national election delayed its commencement. Information will be available on the site to compare the performances of public hospitals against national benchmarks. The intention is for information to be nationally consistent and locally relevant. MyHospitals will present hospital-level data as agreed by states and territories in the National Health and Hospitals Network Agreement. Over time, the content of the website will be expanded to add further information, and to include private hospitals.
Publicly reporting information about the performance of hospitals may help the public understand the performance of hospitals in their area. Hospitals may be motivated to improve their performance lest they appear to fall behind the pack. However, critical clinical engagement may be in jeopardy if clinical indicators developed for formative purposes are used as performance measures. Care at the bedside will only improve if clinicians have confidence in the information they receive, and have the resources required to generate improvement in care processes.
In Victoria, there are many well established clinician-driven processes for monitoring quality of patient care in specific clinical areas. Examples include the Surgical Outcomes Information Initiative, the Victorian Cardiac Surgery Database project, the Victorian Intensive Care Data Review Committee, and the Limited Adverse Occurrence Screening process. These all involve senior clinicians from multiple health services collaborating on common improvement activities. The PSMI provides an ideal opportunity for emerging clinical leaders to take local ownership and develop expertise in investigating possible deficiencies in processes of care and implementing change as required.
It must be stated clearly that the PSMI program is in its early stages and there is still much to be explored and learnt. The Department will continue to explore and learn with the assistance of its clinical partners.
The use of administrative data and VLADs to identify apparent variations in patient safety and quality of care has presented significant challenges for the Victorian Department of Health. Working with health services to develop a suite of patient safety indicators has demonstrated the difficulty in successfully engaging clinicians in the process of investigating apparent variation, owing to concerns about the quality of the data and resource requirements. A formative approach to reporting apparent variation in care is consistent with the devolved hospital governance model, which acknowledges that clinicians are themselves responsible for the safety and quality of the care they provide.
Although provision of comparative information can be a strong motivator to improve performance, diverting clinicians from care provision can itself jeopardise patient care. The critical nature of clinician engagement cannot be overstated. Indeed, genuine clinician ownership is the only way to really understand, persuade and lead changes in care processes that arise from apparent variations in clinical outcome measures.
- 1. Australian Commission on Safety and Quality in Health Care. The proposed national safety and quality framework. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/C774AE55079AFD23CA2577370000783C/$File/22177-NSQF.pdf (accessed Aug 2010).
- 2. Victorian Government Department of Human Services. Victorian clinical governance policy framework: enhancing clinical care. Melbourne: DHS, 2009.
- 3. McConchie S, Shepheard J, Waters S, et al. The AusPSI: the Australian version of the Agency of Healthcare Research and Quality patient safety indicators. Aust Health Rev 2009; 33: 334-341.
- 4. Duckett SJ, Coory M, Sketcher-Baker K. Identifying variations in quality of care in Queensland hospitals. Med J Aust 2007; 187: 571-575. <MJA full text>
- 5. Boston C. The concept of formative assessment. Pract Assess Res Eval [internet] 2002; 8 (9). http://PAREonline.net/getvn.asp?v=8&n=9 (accessed Mar 2010).
- 6. Schwartz RM, Gagnon DE, Muri JH, et al. Administrative data for quality improvement. Pediatrics 1999; 103 (1 Suppl E): 291-301.
- 7. Clinical Practice Improvement Centre. VLADs for dummies. Queensland Health ed. Brisbane: Wiley Publishing Australia, 2008.
- 8. Scott I, Phelps G. Measurement for improvement: getting one to follow the other. Intern Med J 2009; 39: 347-351.
- 9. Coory M, Duckett S, Sketcher-Baker K. Using control charts to monitor quality of hospital care with administrative data. Int J Qual Health Care 2008; 20: 31-39.