
Using what we gather — harnessing information for improved care

Neville Board and Diane E Watson
Med J Aust 2010; 193 (8): S93. || doi: 10.5694/j.1326-5377.2010.tb04019.x
Published online: 18 October 2010

Currently available data can be used to focus improvement of the clinical quality, patient-centredness and safety of care in hospitals

Australia has traditionally focused its public reporting efforts concerning hospital care on indicators of volume, costs, length of stay and efficiency at state, territory and national levels. Curiously, there is much less nationally consistent hospital-level reporting on other dimensions of care such as appropriateness, effectiveness, patient-centredness and safety across Australia.

There are two major reasons for measuring, monitoring and reporting on these dimensions of quality of hospital care.

First, hospital-level reporting stimulates and focuses quality improvement initiatives that support better care and better health. Quality improvement techniques such as benchmarking, Six Sigma, “lean” programs, collaboratives and process re-engineering all depend on measurement and reporting to monitor impact. There is evidence, for example, that confidential, hospital- and physician-level reporting can substantially reduce 30-day mortality after cardiac surgery.1

A review of international evidence indicates that there is “strong and consistent evidence that public reporting stimulates quality improvement in hospitals” and “the majority of studies show significant positive impact of public reporting on clinical outcomes”.2

Second, hospital-level reporting is necessary for accountability and transparency as governments, insurers and the public reasonably expect to understand how effectively care is being delivered. Public reporting is also necessary for transparency if ready access to information is expected to influence patient choice. A recent review of international evidence indicated that public reporting “may be able to make significant and policy-important changes in consumers’ decisions in choosing hospitals in some settings”.1

This Supplement features articles that describe how currently available data can be used to focus quality improvement and to support accountability and transparency through the creation and use of timely and accurate information on clinical quality, patient-centredness and safety of care in hospitals.

The article by Sketcher-Baker and colleagues on the use of variable life-adjusted displays (VLADs) describes how inpatient data in Queensland have been used to support quality improvement, accountability and transparency.3 Data collation, calculation of VLADs, and feedback inform a clinical improvement program and support accountability while delivering transparency through public reporting of outcomes. Other important elements of this program are the commitment to ongoing review and consultation around the indicators, and the clinical governance model that underpins the VLAD review and response cycle.
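The standard VLAD construction plots, for each consecutive patient, the cumulative difference between the predicted risk of an adverse outcome and the outcome actually observed, so the curve drifts upward when outcomes are better than expected and downward when they are worse. The following sketch illustrates that general construction; the figures are hypothetical and not drawn from the Queensland program.

```python
# Minimal sketch of a variable life-adjusted display (VLAD) series.
# For each consecutive patient the plotted statistic accumulates
# (expected risk of death) - (observed outcome): the curve rises with
# better-than-expected outcomes and falls with worse ones.
# Data below are hypothetical, not from the Queensland program.

def vlad_series(expected_risks, outcomes):
    """expected_risks: per-patient predicted probability of death (0-1);
    outcomes: 1 if the patient died, 0 otherwise."""
    series, total = [], 0.0
    for risk, died in zip(expected_risks, outcomes):
        total += risk - died          # "statistical lives" gained or lost
        series.append(round(total, 3))
    return series

# Five patients, one death (patient 3) against modest predicted risks:
# the single death pulls the cumulative curve below zero.
print(vlad_series([0.05, 0.10, 0.20, 0.05, 0.10], [0, 0, 1, 0, 0]))
```

In practice the plotted series is read against control limits so that a sustained downward drift triggers the clinical review and response cycle described above.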

The article by Clarke and colleagues on the AusPSI program describes how inpatient data have been used to support routine reporting to the Patient Safety Monitoring Initiative in Victoria.4 This initiative involves the use of risk-adjusted outcome measures that build on patient safety indicator work established by the Agency for Healthcare Research and Quality.5

The main questions to ask in assessing data collections that report on quality of care include:

  • Are we asking the right questions — will each data collection accurately describe significant variance in practice and outcomes?

  • Is the data collection feasible and efficient, or unrealistically burdensome?

  • Is there high-quality scrutiny of data accuracy and reliability?

  • Is there high-quality interpretation and clinical review of reported compliance and variance?

  • Is the risk adjustment fair?

Reid and colleagues describe the Australian Cardiac Procedures Registry (ACPR).6 The ACPR includes patient, procedure and outcome data from 21 participating facilities, generating and feeding back risk-adjusted outcome measures against local and international benchmarks.

In a previous issue of the Journal, McNeil and colleagues recommended the establishment of clinical quality registries for high-cost, high-volume interventions where there is variation in practice and where practice modification can improve outcomes.7 The successes of the National Joint Replacement Registry,8 the National Breast Cancer Audit,9 the Australian and New Zealand Intensive Care Society Centre for Outcome and Resource Evaluation patient databases,10 and the Australia and New Zealand Dialysis and Transplant Registry11 suggest that clinicians are prepared to trade off the burden of submitting a succinct dataset — that they themselves have developed — in return for routine reports showing their own performance, risk-adjusted, against their peers.

Ben-Tovim and colleagues describe their efforts to measure standardised, in-hospital death rates.12 The authors, in collaboration with the Australian Institute of Health and Welfare (AIHW), have led national analyses of hospital mortality data, and refined the Canadian risk-adjustment approach through detailed analyses of the National Hospital Morbidity Database. The calculation, monitoring and reporting of hospital-standardised mortality ratios (HSMRs) is not without controversy.13,14 The Australian Commission on Safety and Quality in Health Care, however, has recommended that hospitals routinely review HSMRs, deaths in low-mortality diagnosis-related groups and condition-specific in-hospital mortality rates to identify opportunities to improve hospital care.15
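An HSMR is conventionally computed by indirect standardisation: observed deaths are divided by the deaths expected from a risk-adjustment model applied to the same casemix, and the ratio is scaled to 100. A minimal sketch, using hypothetical figures rather than AIHW data:

```python
# Minimal sketch of a hospital-standardised mortality ratio (HSMR).
# Expected deaths are the sum of per-patient predicted risks from a
# reference risk-adjustment model (indirect standardisation); the
# observed/expected ratio is conventionally scaled to 100.
# Figures below are hypothetical, not AIHW data.

def hsmr(observed_deaths, expected_risks):
    expected = sum(expected_risks)    # model-predicted deaths for this casemix
    return round(100 * observed_deaths / expected, 1)

# 12 observed deaths against 15.0 expected gives an HSMR of 80.0,
# i.e. fewer deaths than the reference model predicts.
print(hsmr(12, [0.15] * 100))
```

A value above 100 indicates more deaths than the model predicts; the controversy noted above centres on how much of such a gap reflects care quality rather than limitations of the risk adjustment.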

Kennedy and colleagues describe the importance of clinical practice guidelines, and the need to identify and respond to variations in practice, in the context of the rapid escalation of the comparative effectiveness agenda in the United States.16

Leathley and colleagues summarise the results of a forum, convened by the Clinical Excellence Commission in New South Wales, on measuring hospital performance.17 The authors outline key principles for the design of hospital performance measures, and identify measures with the highest potential.

McNeil and colleagues describe the National Antimicrobial Utilisation Surveillance Program approach to measure, monitor and identify significant variance and trends in antibiotic usage.18 They demonstrate how antibiotic usage data from 28 principal referral hospitals and one private hospital have generated interventions and real change in antibiotic prescribing practice.

Current health care reporting

At the national level, the AIHW and the Australian Government Productivity Commission take seriously their charters to “provide information on Australia’s health and welfare, through statistics and data development”19 and “promote public understanding of matters related to industry and productivity”20 on health services. On 20 April 2010, the Council of Australian Governments agreed (with the exception of Western Australia) to sign the National Health and Hospitals Network Agreement, including the establishment of a National Performance Authority, and there are plans to launch a public website with hospital-level information.21

At the state and territory level, several governments release information on the performance of public hospitals, including Victoria,22 Queensland23 and NSW.24 NSW publishes information including waiting lists for elective surgery, health care-associated infections and current safety notices, and hosts a health service website.25

The NSW Bureau of Health Information was recently established to publicly report on the performance of the state’s health system. The Bureau’s first report provided comparative, hospital-level information on patient-centred care in 38 large hospitals.26 Its quarterly hospital reports will provide information on inpatient services, surgical care and emergency departments. The first issue expanded the scope of hospital-level information previously reported to include new measures of accessibility and patient-centred care, and increased the number of hospital emergency departments reported on from 40 to 66.27

The way forward

If we accept the premise that timely, accurate and comparable information about the performance of hospitals is “a good thing”, the most important question remains: “What measures are meaningful and useful?”

If the purpose of reporting is better care, selection of measures should be driven by the priorities of clinicians and health care management and policy communities. If the purpose of reporting is accountability, selection of measures depends on the aim of investments. If the purpose is transparency about the performance of hospitals, a broad and balanced portfolio of measures is important.

Authors of articles in this Supplement highlight how current information systems in Australia can be used to gather meaningful, useful information for clinical, management and policy communities. It is up to the rest of us to build on these initiatives to create and use timely, accurate and comparable information about the performance of hospitals in Australia to support better care.

  • Neville Board1
  • Diane E Watson2

  • 1 Australian Commission on Safety and Quality in Health Care, Sydney, NSW.
  • 2 Bureau of Health Information, Sydney, NSW.

