
Measurement for improvement: a survey of current practice in Australian public hospitals

Caroline A Brand, Joanne Tropea, Joseph E Ibrahim, Shaymaa O Elkadi, Christopher A Bain, David I Ben-Tovim, Tracey K Bucknall, Peter B Greenberg and Allan D Spigelman
Med J Aust 2008; 189 (1): 35-40. doi: 10.5694/j.1326-5377.2008.tb01893.x
Published online: 7 July 2008

Abstract

Objective: To identify patient safety measurement tools in use in Australian public hospitals and to determine barriers to their use.

Design: Structured survey, conducted between 4 March and 19 May 2005, designed to identify tools, and to assess current use of, levels of satisfaction with, and barriers to use of tools for measuring the domains and subdomains of: organisational capacity to provide safe health care; patient safety incidents; and clinical performance.

Participants and setting: Hospital executives, managers and clinicians from a nationwide random sample of Australian public hospitals stratified by state and hospital peer grouping.

Main outcome measures: Tools used by hospitals within the three domains and their subdomains; patient safety tools and processes identified by individuals at these hospitals; satisfaction with the tools; and barriers to their use.

Results: Eighty-two of 167 invited hospitals (49%) responded. The survey ascertained a comprehensive list of patient safety measurement tools in current use for measuring all patient safety domains. Overall, there was a focus on the use of processes rather than quantitative measurement tools. About half of the 182 individual respondents from participating hospitals reported satisfaction with existing tools. The main reported barriers were lack of integrated supportive systems, resource constraints and inadequate access to robust measurement tools validated in the Australian context. Measurement of organisational capacity was reported by 50 hospitals (61%), of patient safety incidents by 81 (99%), and of clinical performance by 81 (99%).

Conclusion: Australian public hospitals are measuring the safety of their health care, with some variation in measurement of patient safety domains and their subdomains. Improved access to robust tools may support future standardisation of measurement for improvement.

The Quality in Australian Health Care Study highlighted the extent of harm to patients in Australia’s health care system in the 1990s,1 and stimulated initiatives to improve the quality, safety, and accountability of patient care. The Australian Council for Safety and Quality in Health Care (ACSQHC) was established in January 2000 as a key national body to drive quality and safety health care reform. A key priority was to increase the use of health care performance measurements to drive quality improvement.2 This priority was underpinned by evidence that performance measurements are associated with improved quality and safety outcomes.3-7

External benchmarking and public reporting of measurements of organisational and individual performance are limited in Australia, in part because of concerns about data validity and the adequacy of models to adjust for differences in casemix.5 Individuals and organisations need access to robust measurement tools to enable internal, local and national comparisons to be made.8-12 To date, there has been no systematic assessment of the tools used by Australian public hospitals to measure their own performance.

In November 2004, the ACSQHC commissioned the development of an evidence-based resource, the Measurement for Improvement Toolkit,13 to help Australian health care professionals access appropriate measurement tools and processes to support their patient safety programs.

Here, we provide the results of a survey of Australian public hospitals undertaken to inform the development and subsequent implementation of the Measurement for Improvement Toolkit. The primary objective of this national survey was to identify patient safety measurement tools across the three domains of patient safety defined by the ACSQHC: organisational capacity to provide safe health care; patient safety incidents; and clinical performance. The second objective was to identify perceived barriers to the use of these tools.

Methods

The project was supervised by a multidisciplinary national panel with expertise in patient safety and quality, in conjunction with a technical team based within the Clinical Epidemiology and Health Service Evaluation Unit, Melbourne Health, Victoria.

Definitions

A patient safety measurement tool was defined as an instrument or device that provides instruction and support for measurement, and is used by organisations and/or individuals to maintain and improve patient safety.

In the absence of internationally accepted definitions, the project team developed a project definition of patient safety subdomains (within the three main domains defined by the ACSQHC) by conducting a comprehensive MEDLINE and CINAHL database search of peer-reviewed literature and a web-based search of literature of key health care safety and quality improvement organisations. These searches, which were also designed to identify existing patient safety tools, were conducted in April 2005. Definitions were ratified for inclusion by the expert panel (Box 1).

The survey was designed through an iterative development process to assess awareness of, use of, and barriers to the use of measurement tools within the specified safety domains. The expert panel agreed by consensus to the inclusion of 46 questions. Draft surveys were piloted with stakeholders and revised before national dissemination. Respondents were asked to state (yes/no) whether they measured each domain and its relevant subdomains, and to indicate (yes/no/not applicable) whether they used the tools identified by the project team, or any other tools. For each subdomain, respondents were asked to rate their satisfaction with these tools on a 5-point scale (from completely satisfied to completely dissatisfied) and to describe, in free-text responses, any limitations to the use of tools within that subdomain.
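As an illustration only (and not the authors' actual instrument), the sketch below shows how one respondent's answers for a single subdomain could be represented using the response options described above; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, Optional

class ToolUse(Enum):
    """Yes/no/not applicable response for use of a given tool."""
    YES = "yes"
    NO = "no"
    NOT_APPLICABLE = "not applicable"

class Satisfaction(Enum):
    """5-point scale from completely satisfied to completely dissatisfied."""
    COMPLETELY_SATISFIED = 5
    SATISFIED = 4
    NEITHER = 3
    DISSATISFIED = 2
    COMPLETELY_DISSATISFIED = 1

@dataclass
class SubdomainResponse:
    """One respondent's answers for a single patient safety subdomain (illustrative only)."""
    subdomain: str                                    # e.g. "clinical governance"
    measured: bool                                    # yes/no: is the subdomain measured?
    tools_used: Dict[str, ToolUse] = field(default_factory=dict)  # tool name -> response
    satisfaction: Optional[Satisfaction] = None       # recorded where tools are in use
    limitations: str = ""                             # free-text description of barriers
```

A hospital-level response could then be treated as a collection of such records across the three domains and their subdomains.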

Approval for the survey process was obtained from the Melbourne Health Human Research Ethics Committee (HREC) as a quality improvement activity.

Hospital sampling methods

A random sample of Australian public hospitals, stratified by state and hospital peer group (location, type and size of hospital), was obtained from the Australian Institute of Health and Welfare’s public hospital list for 2002–03.16 Because they were few in number, all Australian Capital Territory, Northern Territory and Tasmanian public hospitals in the peer groups we chose were included in the sample (Box 2). To maintain a representative sample while keeping the study feasible, a proportion of each stratum (hospital peer group by state) was selected. The random sample was obtained by computer-generated random number sampling using Stata, version 8 (StataCorp, College Station, Tex, USA).
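The sampling itself was performed in Stata, version 8; the Python sketch below only illustrates the stratified approach described above and is not the study code. The hospital list structure, field names and sampling fractions are assumed placeholders.

```python
import random
from collections import defaultdict

# Hypothetical sampling fractions per peer group (compare Box 2); placeholders only.
SAMPLING_FRACTION = {
    "principal_referral": 0.50,
    "small_non_acute": 0.10,
    "tas_act_nt": 1.00,   # all hospitals in these jurisdictions were included
}

def stratified_sample(hospitals, fractions, seed=2005):
    """Sample each stratum (peer group within state) at its specified fraction.

    `hospitals` is assumed to be a list of dicts with 'name', 'state' and
    'peer_group' keys. The seed is arbitrary and only makes this illustration
    reproducible.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for hospital in hospitals:
        strata[(hospital["state"], hospital["peer_group"])].append(hospital)

    sample = []
    for (state, peer_group), members in strata.items():
        fraction = fractions.get(peer_group, 0.50)
        n = min(len(members), max(1, round(fraction * len(members))))  # at least one per stratum (assumption)
        sample.extend(rng.sample(members, n))
    return sample
```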

Respondent sampling and survey dissemination methods

The survey period was 4 March 2005 to 19 May 2005. Repeated survey dissemination (a maximum of three times) was used to increase the likelihood of at least a 50% response rate.17 In keeping with the requirements of the HREC, hospitals were first contacted through their chief executive officers (CEOs) or directors to request their participation, and that of quality/safety/risk management staff, directors of nursing, allied health and pharmacy, and up to three directors of medical departments. In accordance with HREC approval, there was no direct contact with hospital staff other than through the CEO or director. The project team made up to three follow-up calls and sent two global emails to CEOs or their nominated staff as reminders to complete the survey.

Results

A total of 167 public hospitals, representing 22% of all Australian public hospitals, were invited to participate. State and territory response rates and the reported use of measurement tools are summarised in Box 3. Eighty-two invited hospitals (49%) agreed to participate, with representation from each state and territory. The anticipated response rate was 50%, which was achieved in all but two states (New South Wales [36%] and Western Australia [43%]).

Responses on identification and use of patient safety measurement tools, satisfaction with them, and barriers to their use, were received from 182 individuals from the 82 responding hospitals. The tools they identified are summarised in Box 4. Individuals from responding hospitals did not identify any measurement tools that had not already been identified by the literature search and expert panel. In all domains, there was a focus on the use of processes (eg, accreditation) rather than use of tools designed to quantitatively measure responsiveness to change.

The proportions of individual respondents reporting satisfaction, ambivalence or dissatisfaction with existing measurement tools are shown in Box 5. About half the individual respondents indicated they were satisfied with the existing patient safety measurement tools. A high proportion of respondents reported being “neither satisfied nor dissatisfied”, especially with tools measuring organisational capacity and clinical performance. Where measurement tools were not in use, or where there was dissatisfaction with the tools used, the most frequently listed limitations across all three domains (Box 6) were lack of an integrated patient safety system and administrative resource constraints. Lack of well-developed tools for local use was reported to be a major limitation for measuring organisational capacity and clinical performance, but was not reported as a limitation for measuring patient safety incidents.

The 82 hospital responses were from CEOs or directors (20; 24%), quality and safety managers (30; 37%), heads of departments (22; 27%), and other positions, including nurse unit managers and allied health representatives (9; 11%); the respondent’s position was not specified for one hospital. All senior hospital representatives reported measuring at least one patient safety domain, and 47 (56%) reported measuring all three domains.

Fifty hospitals (61%) reported measuring organisational capacity; 81 (99%) measured patient safety incidents; and 81 (99%) measured clinical performance (either organisational/departmental or individual). There was some variation between states and territories (Box 3), but this did not reach statistical significance. There was no significant association between hospital size or peer group and measurement of the three patient safety domains, although the numbers of responses were small and the confidence intervals accordingly wide. There was no difference in reported measurement across these domains according to respondents’ positions.
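As an illustration of how such a comparison of proportions across jurisdictions might be tested (the study's analytic method is not described in detail here), a chi-square test of independence can be applied to a contingency table of hospitals reporting versus not reporting measurement of a domain, by state or territory. The counts below are hypothetical placeholders, not the data in Box 3; with small cell counts an exact test would be preferable.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are jurisdictions, columns are
# [reported measuring organisational capacity, did not report measuring it].
# Placeholder counts for illustration only, not the study's data.
observed = [
    [12, 8],
    [10, 5],
    [9, 9],
    [7, 6],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```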

There was variation in measurement of organisational capacity subdomains among the 50 hospitals that reported measuring organisational capacity. Five respondents (10%) did not indicate which subdomains were measured. Forty-five hospitals (90%) indicated measurement of specific subdomains: 30 (60%) reported measuring clinical governance, 27 (54%) measured organisational leadership, 34 (68%) measured safety culture, 34 (68%) measured professional competence, 33 (66%) measured consumer and community involvement, 31 (62%) measured professional education, and 32 (64%) measured data information management. The number of subdomains measured ranged from one (four hospitals; 8%) to seven (18 hospitals; 36%), with a median of five.

Of the 81 hospital respondents who reported measuring patient safety incidents, 80 (99%) reported detection, investigation and analysis, and management of patient safety incidents. Seventy-nine (98%) used reporting of patient safety incidents, 58 (72%) used patient feedback and 73 (90%) used professional feedback and learning. Most hospitals measured multiple aspects of patient safety incidents, and all reported measuring four or more of the total of six identified subdomains.

Most hospital respondents (57; 70%) reported measuring both organisational and individual clinical performance. Of the 76 respondents (93%) who reported measuring organisational clinical performance, 71 (93%) reported using accreditation, 59 (78%) used certification of staff, 55 (72%) used clinical indicators, 54 (71%) used professional development programs, 49 (64%) used peer review, 46 (61%) used benchmarking and 21 (28%) used organisational competence. Sixty-two hospital respondents (76%) reported measuring individual clinical performance, with 46 (74%) using professional education programs, 44 (71%) using formal assessment of professional competence, 44 (71%) using peer review, and 27 (44%) using clinical indicators.

Discussion

This study provides the first comprehensive overview of patient safety measurement tools currently used in Australian public hospitals. To our knowledge, no other survey of this kind has been undertaken outside Australia. The survey identified a breadth of tools in use, and provided preliminary evidence for variation in the use of tools to measure patient safety domains. It provides insight into barriers that need to be considered in planning implementation strategies for improving access to, and sustained uptake of, high-quality tools by Australian public hospitals.

The response rate and its broad distribution across Australian states and geographical areas suggest that the survey sample was representative of Australian public hospitals. However, as limited numbers of hospital peer groups were included and lower-than-expected response rates were achieved, caution is required in interpreting and generalising our findings. Factors that may have limited the response rate include the short project timeframes, which constrained the mechanism and the “dose” of survey dissemination; the length of the survey instrument; and the use of patient safety terminology, which may have alienated potential respondents, especially non-managerial clinicians. Further, the particularly low response rate in NSW may have been influenced by the major restructuring of the health care system that was occurring in that state at the time of survey dissemination.

Responses to our survey provide little insight into how measurement tools were applied. The results do not allow us to determine how the measurement tools were being used, or whether there was consistency within and across organisations in their use. Nor did the survey assess why, where tools did exist, they were not being used. Further information about awareness and use of tools by staff of varying seniority would have been of interest, but assessing this was not feasible within the project timeframe and HREC specifications.

With these caveats in mind, our findings suggest that, while most hospitals measure some aspect of patient safety, there may not be comprehensive measurement in up to 44% of hospitals. In addition, there was variation noted within measurement of organisational capacity subdomains. This variation may relate to respondent bias, because not all hospitals were represented by executive and senior quality managers. However, as the most senior responses were used to represent the overall hospital response, the expected bias would be towards measurement of organisational capacity. Organisational capacity is the most complex and least intuitive of the domains, and there is no national consensus on the classification and definitions of patient safety domains and terms. Health care organisations may therefore still be in the process of understanding, defining and integrating organisational capacity measurement into their patient safety frameworks. Our survey did not ascertain respondents’ level of education and experience in quality and safety, and an inconsistency within individual survey responses suggests that there may be inadequate understanding of this domain and its measurement.

Our study has ascertained that satisfaction with patient safety measurement tools among health professionals is modest at best. Dissatisfaction may be linked to a range of reported limitations, the most prevalent of which was lack of integrated systems within hospitals. Not perceiving the value of change is one of the most powerful barriers to implementing innovation.18 If an organisational system does not support measurement in all aspects of data management, from collection through to review, timely feedback and response, then measurement is unlikely to be perceived as worthwhile, and hence unlikely to be supported by individuals within the system. In addition, adoption of new information depends not only on awareness and perception of value, but also on the credibility of the information. The second most reported limitation was lack of access to robust measurement tools, a finding supported by additional work which found that most patient safety tools have not been developed through rigorous psychometric methods, nor validated within the Australian context.13

In contrast to organisational capacity, there were high levels of reported measurement of patient safety incidents and more consistent use of measurement of the subdomains within this domain. Organisational changes to support measurement in this domain may have been facilitated by the need to meet external regulatory requirements for reporting and management of sentinel events in some jurisdictions.

In summary, the Measurement for Improvement Toolkit stakeholder survey has played an important role in identifying tools currently in use to measure patient safety in the Australian public hospital sector, and in identifying gaps in access to robust measurement tools. The results of our survey could be used to inform further research on measuring patient safety and developing validated measurement tools for Australian public hospital settings.

2 Summary of the sampling process for inviting Australian public hospitals to participate in the survey

Hospital peer group* | Percentage of hospital peer group sampled in each state† | No. in sample
Principal referral metropolitan (> 20 000 acute weighted separations) and rural (> 16 000 acute weighted separations) | 50% | 28
Large metropolitan (> 10 000 acute weighted separations) | 50% | 14
Medium (metropolitan, 5000–10 000; and rural, 5000–8000 acute weighted separations) | 50% | 17
Small non-acute (< 2000 acute and acute weighted separations, more than 40% not acute or outlier patient days) | 10% | 12
Large rural (> 8000 acute weighted separations) and remote (> 5000 acute weighted separations) | 50% | 11
Medium metropolitan and rural (2000 acute or acute weighted to 5000 acute weighted separations) | 50% | 38
Small rural acute (< 2000 acute and acute weighted separations, less than 40% not acute or outlier patient days) | 10% | 7
Remote acute (< 5000 acute weighted separations) | 10% | 3
Psychiatric | 50% | 11
Specialist women’s and children’s hospitals (> 10 000 acute weighted separations) | 50% | 7
Hospitals from Tasmania, Australian Capital Territory and Northern Territory‡ | 100% | 19
Total | | 167


* National public hospital peer group classification from the Australian Institute of Health and Welfare for use in presenting data on costs per casemix-adjusted separation. † New South Wales, Victoria, Queensland, Western Australia and South Australia. ‡ All public hospitals in these states were included in the sample because of their small numbers.

  • Caroline A Brand1,2
  • Joanne Tropea1
  • Joseph E Ibrahim3
  • Shaymaa O Elkadi4
  • Christopher A Bain5
  • David I Ben-Tovim6
  • Tracey K Bucknall7,8
  • Peter B Greenberg9,10
  • Allan D Spigelman11

  • 1 Clinical Epidemiology and Health Service Evaluation Unit, Royal Melbourne Hospital, Melbourne, VIC.
  • 2 Centre for Research Excellence in Patient Safety, Monash University, Melbourne, VIC.
  • 3 Clinical and Work-Related Liaison Services, State Coroner’s Office and Victorian Institute of Forensic Medicine, Melbourne, VIC.
  • 4 Caraniche Pty Ltd, Melbourne, VIC.
  • 5 Western and Central Melbourne Integrated Cancer Service, Melbourne, VIC.
  • 6 Flinders Medical Centre, Adelaide, SA.
  • 7 Deakin University, Melbourne, VIC.
  • 8 Cabrini-Deakin Centre for Nursing Research, Melbourne, VIC.
  • 9 Royal Melbourne Hospital, Melbourne, VIC.
  • 10 University of Melbourne, Melbourne, VIC.
  • 11 St Vincent’s Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW.


Correspondence: Joanne.Tropea@mh.org.au

Acknowledgements: 

This national survey was undertaken as part of a series of activities commissioned and funded by the Australian Council for Safety and Quality in Health Care towards development of a Measurement for Improvement Toolkit. Thanks to Professor Paddy Phillips for his support as Chair, Australian Council for Safety and Quality in Health Care Measurement for Improvement Group, to other members of the expert working group — Dr Shiong Tan, Mr Tony McBride, Dr Alan Wolff, Ms Cathie Steele, Ms Jane Phelan, Ms Marie Colwell — and to all those who participated in the survey.

Competing interests:

Caroline Brand, Joanne Tropea and Shaymaa Elkadi were project team members, and all other authors were members of the expert working group commissioned to develop the Measurement for Improvement Toolkit.

  • 1. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
  • 2. Australian Council for Safety and Quality in Health Care. Part B. Summary of council’s work 2004–2005. Achieving safety and quality improvements in health care — sixth report to the Australian Health Ministers’ Conference, July 2005. http://www.health.gov.au/internet/safety/publishing.nsf/Content/former-pubs-archive-annrept2005 (accessed Jun 2008).
  • 3. Scott IA, Denaro CP, Bennett CJ, et al. Achieving better in-hospital and after-hospital care of patients with acute cardiac disease. Med J Aust 2004; 180 (10 Suppl): S83-S88.
  • 4. Scott IA, Denaro CP, Flores JL, et al. Quality of care of patients hospitalized with congestive heart failure. Intern Med J 2003; 33: 140-151.
  • 5. Scott IA, Duke AB, Darwin IC, et al. Variations in indicated care of patients with acute coronary syndromes in Queensland hospitals. Med J Aust 2005; 182: 325-330.
  • 6. NSW Institute for Clinical Excellence. Annual report 2003/2004. Let’s make a noticeable difference, together. http://www.cec.health.nsw.gov.au/pdf/ice_ar04.pdf (accessed Jun 2008).
  • 7. NSW Institute for Clinical Excellence; Royal Australian College of Physicians. Towards a safer culture in New South Wales. http://www.racp.edu.au/bp/new_tasc_nsw1.html (accessed Jun 2008).
  • 8. Institute for Healthcare Improvement. Measures. http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/Measures/ (accessed Jun 2008).
  • 9. Barraclough B. Measuring and reporting outcomes can identify opportunities to provide better and safer care. ANZ J Surg 2004; 74: 90.
  • 10. Mannion R, Davies HT. Reporting health care performance: learning from the past, prospects for the future. J Eval Clin Pract 2002; 8: 215-228.
  • 11. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003; 22: 84-94.
  • 12. Scobie S, Thomson R, McNeil JJ, Phillips PA. Measurement of the safety and quality of health care. Med J Aust 2006; 184 (10 Suppl): S51-S55.
  • 13. Australian Council for Safety and Quality in Health Care. Measurement for Improvement Toolkit, 2005. http://www.safetyandquality.org/internet/safety/publishing.nsf/Content/CommissionPubs (accessed Jun 2008).
  • 14. Australian Commission on Safety and Quality in Health Care. Former council terms and definitions for safety and quality concepts. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/former-pubs-archive-definitions (accessed Jun 2008).
  • 15. Daley J, Vogeli C, Blumenthal D, et al. Physician clinical performance assessment. The state of the art: issues, possibilities and challenges for the future. Boston, Mass: Institute for Health Policy, Massachusetts General Hospital, 2002.
  • 16. Australian Institute of Health and Welfare. Australian hospital statistics 2002–03. Health services series No. 22. Appendix 4, Table A4.2. Canberra: AIHW, 2003. (AIHW Cat. No. HSE 32.) http://www.aihw.gov.au/publications/index.cfm/title/10015 (accessed Jun 2008).
  • 17. Dillman DA. Mail and telephone surveys: the total design method. New York: John Wiley & Sons, 1999.
  • 18. Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. J Eval Clin Pract 1999; 5: 401-416.
