
Standards for health care: a necessary but unknown quantity

Caroline A Brand, Joseph E Ibrahim, Peter A Cameron and Ian A Scott
Med J Aust 2008; 189 (5): 257-260. doi: 10.5694/j.1326-5377.2008.tb02017.x
Published online: 1 September 2008
The rationale for having health care standards

In 1995, the Quality in Australian Health Care Study of public hospitals reported high levels of preventable iatrogenic injuries among inpatients.2 The slow pace of health care reform since that time has prompted the monitoring of health care performance and feedback of data to organisations, health care provider groups and consumers to improve quality of care. External regulation in Australia is accepted, and indeed expected, in other domains, such as the food and aviation industries, where operators who do not meet tightly controlled minimum safety standards can have their activities curtailed. External regulation of health care performance is also necessary to ensure accountability and equity in access to high-quality care.3

However, the most effective way to incorporate measurement into health care clinical governance frameworks is yet to be identified,4 as reflected in the variety of measurement, reporting and regulatory processes used worldwide: from profession-led clinical audits to organisation- and system-wide clinical indicators, the latter linked to the use of report cards, league tables and funding incentives such as “pay for performance”.

Recent international and national policy documents predict increased external regulation. In Australia, jurisdictions such as Queensland now require mandatory reporting of safety indicators,5 and, nationally, the Australian Commission on Safety and Quality in Health Care, established in January 2006, aims “to report publicly on the state of safety and quality including performance against national standards”,6 broadening the scope of attention beyond acute in-hospital care to all health settings and health care providers.

Here, we present a point of view regarding the strengths and limitations associated with implementing policies of universal reporting against specified standards. We suggest that a structured framework for developing national health care standards should be applied to high-priority areas in which there are demonstrated gaps in health care performance and where there is strong evidence for effective interventions.

Is there a clear definition of health care standards?

It may be argued that there have always been health care standards: some explicit in nature (in the form of clinical guidelines and position statements), but many implicit, defined and maintained by the professionalism of self-regulating health care providers. From a patient’s perspective, however, there is a reasonable expectation of public accountability for explicitly defined standards that support high-quality health care. In addition, patients may interpret “quality” differently from clinicians.7 Ultimately, the quality domains and standards chosen will reflect community values and social goals.8

Safety standards have been defined as “agreed attributes and processes designed to ensure that a product, service or method will perform consistently at a designated level”.9 A designation of minimally acceptable performance that applies to all cases may attract high-level regulation, even legislation, with which all stakeholders would agree, as is the case for the incorporation of seatbelts in car design. In health care, a similar example would be the use of appropriate sterilisation procedures for surgical instruments. However, there will be many instances where such standards have not been determined or are even undesirable. Quality is a concept rather than a fixed entity, and describes attributes of performance, many of which are amenable to change, preferably improvement. In other jurisdictions, such as the National Health Service in the United Kingdom, improvement measures have been included in national reporting as “optimal” or aspirational standards, in which levels are set using evidence, existing achievement levels or consensus.10

There is a risk that different types of standards will cause conceptual confusion and lead to an unrealistic expectation that aspirational targets are synonymous with fixed minimally acceptable service levels. It will be necessary to review the implications for national reporting of both types of standards, with regard to their limitations and ramifications for regulation.

What is the relationship between health care standards and clinical performance?

In Australia, health care standards monitored within accreditation programs already measure performance across the continuum of care, focusing primarily (but not exclusively) on qualitative structural and process measures rather than on quantitative measures that reflect change over time in response to specific quality improvement interventions.11,12 Despite widespread adoption of accreditation (associated with significant cost and perceived distraction by organisations), there is limited information about the degree to which included standards conform to a commonly agreed-on “standards for standards” development framework, and about whether assessment of such standards correlates with, or improves, quality of care. A recent Australian review of the impact of accreditation reports some evidence for an increased focus on policies and processes to improve quality, but conflicting evidence for an association between accreditation and measures of hospital quality and safety performance.13 Further, accreditation and national safety and quality processes have failed to rectify problems uncovered in well publicised incidents in some Australian hospitals.14,15 Administrative processes such as credentialling and clinical privileging, even when appropriately applied and assessed by accreditation agencies, may also not prevent failures by individual health service providers: the most rigorous structural frameworks and administrative processes are likely to be too far removed from actual service provision to allow assessment of a causal relationship with health outcomes. The current emphasis on structural standards may need to be reviewed from a cost–benefit perspective to select a minimum set of priority standards. Those included would drive improvement in important domains of care; for example, documentation supports communication and other goals of care, but should not burden organisations and health care providers with data collection requirements that distract from efforts to improve quality in high-priority, evidence-based areas.

Quantitative clinical performance measures of both processes and outcomes of care in specific clinical conditions are of increasing interest to regulators. They have a closer causal relationship to quality and safety than structural measures and provide numerical data for monitoring change. However, they also have limitations. Firstly, the reported association between processes of care and patient outcomes is variable.16,17 Bradley et al correlated hospital performance on measures of care for patients with acute myocardial infarction with National Registry of Myocardial Infarction data from 962 hospitals in the United States and found that the process measures captured only a small proportion of the variation in risk-standardised hospital mortality rates.17 This result contrasts with that of Peterson et al, who investigated individual and composite measures of guideline adherence and found a positive correlation between high performance on composite measures and overall organisational performance as measured by in-hospital mortality rates.16 A further study reviewed organisational performance across 18 standardised indicators of care introduced by the Joint Commission on Accreditation of Healthcare Organizations for acute myocardial infarction, heart failure and pneumonia, and found consistent improvement in process-of-care measures but no related improvement in in-hospital mortality.18 Performance indicators based on guideline-endorsed standards of care may also not be the most appropriate measures of high-quality care for specific patient populations.19 Kerr et al reported that lipid levels in patients with diabetes may identify poor control but not necessarily poor care,20 and Guthrie et al reported that high levels of reported adherence to national blood pressure targets by general practitioners may not necessarily translate into clinical action; for quality improvement purposes, additional treatment information needed to be gathered.21

A recent systematic review reported no “consistent nor reliable” relationship between risk-adjusted mortality and quality of hospital care,22 and the validity of using readmission as a measure of quality remains highly controversial.23-25 Even after adjustment for differences in casemix, other confounders that may explain variances between organisations are poorly understood, and methodological differences relating to different data sources (administrative or “coded” data versus clinical data) and data quality can result in erroneous and unfair conclusions regarding organisational performance.26-28 Additional recent work suggests that appropriate patient selection may be even more important than choice of data sources in the assessment of potentially avoidable adverse outcomes for specific clinical conditions.29

How should Australia develop national standards?

Despite the reservations expressed about methodological issues in identifying robust measures for standards, patients are entitled to expect information about their treatment. We suggest that Australian health care standards would be best developed within a broad measurement framework that matches assessment of system-level performance with use of appropriate measurement methods, measures and indicators, reporting and regulation (Box).30,31 Further, we caution against over-reliance on external regulatory systems for driving improvement. Some authors have reported perverse behaviours resulting from the introduction of standards or fixed targets which undermine a holistic approach to quality improvement in all its domains and divert attention from unmeasured areas of care.32,33 The chasing of aspirational targets may incur considerable opportunity costs in the absence of studies that confirm cost-effectiveness from a policy-making perspective.34 Once performance thresholds become entrenched, there may be less flexibility for reviewing and redefining standards according to changing circumstances.

Externally regulated standards should focus on areas with clearly identified major gaps in safety; where these gaps can be accurately measured, and a validated “cut-off” or minimally acceptable threshold can be identified; and where there is good evidence that interventions improve performance in specific gaps. Further, we recommend that in the initial stages of development of Australian health care standards, broad frameworks35 previously used for developing performance indicators could form a basis for setting standards across multiple quality domains system-wide. These broad frameworks should also be modified to guide development of a smaller array of standards and measures targeting a suite of high-priority quality and safety areas. For instance, nosocomial infection is a major safety issue for hospitalised patients and a priority identified for intervention by the Australian Commission on Safety and Quality in Health Care.36 On the rationale that there is evidence for a causal relationship between inadequate hand hygiene and microbial colonisation, a suite of structural standards (policies, environment, building), process standards (credentialling, observational monitoring) and outcome standards (reporting central bloodstream infections) could be considered for development within a defined methodological framework that considers psychometric attributes of performance measures and the implications of data collection, monitoring and remedial intervention for infrastructure development.37 A responsive regulatory process, appropriate to the type of standard and the needs of the setting and providers to which the standard applies, could then be assigned.38 Other priority areas could be addressed in a similar fashion; for instance, venous thromboprophylaxis and prevention of pressure ulcers, wrong-site surgery and handover errors.

Clearly, a diverse group of individuals will be needed to provide the necessary clinical, management, methodological, legal and consumer perspectives and expertise. Engagement of clinicians in quality improvement and in a systems approach to patient safety has been slow and generally narrowly focused on improving evidence-based practice. Despite this, there have been notable examples of the involvement of frontline clinicians in the design and implementation of systems that routinely collect and report high-quality data to improve quality of care.39,40

Ultimately, success in developing effective Australian health care standards will be predicated on access to adequate funding: to develop the standards themselves, to adapt or redesign the monitoring systems required for the reporting, review and remediation that underpin the standards, and to facilitate regular review and reformulation of the standards. It has been suggested that development of standards needs well defined procedures and at least 3 years of preparation and testing.41 Observers in the UK are concerned that lessons learned have not been integrated into the National Health Service plan for the development of standards.42 Let’s hope the Australian experience of standards development will not be reported in the same way.

  • Caroline A Brand1,2
  • Joseph E Ibrahim2,3
  • Peter A Cameron2
  • Ian A Scott4,5

  • 1 Clinical Epidemiology and Health Service Evaluation Unit, Melbourne Health, Melbourne, VIC.
  • 2 Centre of Research Excellence in Patient Safety, Department of Epidemiology and Preventive Medicine, Monash University, Melbourne, VIC.
  • 3 Clinical Liaison Service, Victorian Institute of Forensic Medicine and Department of Forensic Medicine, Monash University, Melbourne, VIC.
  • 4 Princess Alexandra Hospital, Brisbane, QLD.
  • 5 University of Queensland, Brisbane, QLD.


Correspondence: Caroline.Brand@mh.org.au

Acknowledgements: 

We would like to thank Bhasker Amatya, who contributed to editing the final draft of the paper and preparation of the submission.

Competing interests:

None identified.

  • 1. Peacock J. Safety is knowing we have world’s best standards [letter]. The Age (Melbourne) 2007; 17 Oct.
  • 2. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
  • 3. Rubin GL, Leeder SR. Health care safety: what needs to be done? Med J Aust 2005; 183: 529-531.
  • 4. Hearnshaw HM, Harker RM, Cheater FM, et al. Are audits wasting resources by measuring the wrong things? A survey of methods used to select audit review criteria. Qual Saf Health Care 2003; 12: 24-28.
  • 5. Health Quality and Complaints Commission. HQCC Annual Report 2006–2007. Brisbane: HQCC, 2007. http://www.hrc.qld.gov.au/_uploads/263653Annual_Report_2006-2007-correct.pdf (accessed Jan 2008).
  • 6. Australian Commission on Safety and Quality in Health Care. About us. http://www.safetyandquality.org/internet/safety/publishing.nsf/Content/about-us-lp (accessed Nov 2007).
  • 7. Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med 2008; 148: 160-161.
  • 8. MacRae D. Policy indicators. Links between social science and public debate. Chapel Hill, NC: University of North Carolina Press, 1985.
  • 9. Runciman WB. Shared meanings: preferred terms and definitions for safety and quality concepts. Med J Aust 2006; 184 (10 Suppl): S41-S43.
  • 10. UK Department of Health. NHS performance indicators [website]. http://www.performance.doh.gov.uk/nhsperformanceindicators/index.htm (accessed Jan 2008).
  • 11. Quality in Practice/Australian General Practices Accreditation Ltd [website]. http://www.qip.com.au/accreditation.asp (accessed Jan 2008).
  • 12. Australian Council on Healthcare Standards. Accreditation standards. http://www.achs.org.au/accreditstandar/ (accessed Jan 2008).
  • 13. Greenfield D, Braithwaite J. Health sector accreditation research: a systematic review. Int J Qual Health Care 2008; 20: 172-183.
  • 14. Faunce TA, Bolsin SN. Three Australian whistleblowing sagas: lessons for internal and external regulation. Med J Aust 2004; 181: 44-47.
  • 15. Frankum B, Attree D, Gatenby A, et al. The “Cam affair”: an isolated incident or destined to be repeated [letter]? Med J Aust 2004; 180: 362-363; author reply 365-366.
  • 16. Peterson ED, Roe MT, Mulgund J, et al. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006; 295: 1912-1920.
  • 17. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA 2006; 296: 72-78.
  • 18. Williams LK, Pladevall M, Fendrick AM, et al. Differences in the reporting of care-related patient injuries to existing reporting systems. Jt Comm J Qual Saf 2003; 29: 460-467.
  • 19. Krumholz HM. Guideline recommendations and results: the importance of the linkage. Ann Intern Med 2007; 147: 342-343.
  • 20. Kerr EA, Smith DM, Hogan MM, et al. Building a better quality measure: are some patients with “poor quality” actually getting good care? Med Care 2003; 41: 1173-1182.
  • 21. Guthrie B, Inkster M, Fahey T. Tackling therapeutic inertia: role of treatment data in quality indicators. BMJ 2007; 335: 542-544.
  • 22. Pitches DW, Mohammed MA, Lilford RJ. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res 2007; 7: 91.
  • 23. Weissman JS. Readmissions — are we asking too much? Int J Qual Health Care 2001; 13: 183-185.
  • 24. Hasan M. Readmission of patients to hospital: still ill defined and poorly understood. Int J Qual Health Care 2001; 13: 177-179.
  • 25. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med 2000; 160: 1074-1081.
  • 26. Westaby S, Archer N, Manning N, et al. Comparison of hospital episode statistics and central cardiac audit database in public reporting of congenital heart surgery mortality. BMJ 2007; 335: 759.
  • 27. Jollis JG, Ancukiewicz M, DeLong ER, et al. Discordance of databases designed for claims payment versus clinical information systems. Implications for outcomes research. Ann Intern Med 1993; 119: 844-850.
  • 28. Pine M, Norusis M, Jones B, et al. Predictions of hospital mortality rates: a comparison of data sources. Ann Intern Med 1997; 126: 347-354.
  • 29. Scott IA, Thomson PL, Narasimhan S. Comparing risk-prediction methods using administrative or clinical data in assessing excess in-hospital mortality in patients with acute myocardial infarction. Med J Aust 2008; 188: 332-336.
  • 30. Scobie S, Thomson R, McNeil JJ, et al. Measurement of the safety and quality of health care. Med J Aust 2006; 184 (10 Suppl): S51-S55.
  • 31. Smith PC, Mossialos E, Papanicolas I. Performance measurement for health system improvement: experiences, challenges and prospects [background document]. World Health Organization European Ministerial Conference on Health Systems: “Health Systems, Health and Wealth”; 2008 Jun 25–27; Tallinn, Estonia. Copenhagen: WHO, 2008. http://www.euro.who.int/healthsystems/Conference/Documents/20080620_34 (accessed Aug 2008).
  • 32. Sheldon TA. The healthcare quality measurement industry: time to slow the juggernaut? Qual Saf Health Care 2005; 14: 3-4.
  • 33. Bevan G, Hood C. Have targets improved performance in the English NHS? BMJ 2006; 332: 419-422.
  • 34. Mason J, Freemantle N, Nazareth I, et al. When is it cost-effective to change the behavior of health professionals? JAMA 2001; 286: 2988-2992.
  • 35. National Health Performance Committee. National report on health sector performance indicators 2001. Brisbane: Queensland Health, 2002.
  • 36. Australian Commission on Safety and Quality in Health Care. Priority Program 3: healthcare associated infection (HAI). http://www.safetyandquality.org/internet/safety/publishing.nsf/Content/PriorityProgram-03 (accessed Jan 2008).
  • 37. Collignon PJ. Methicillin-resistant Staphylococcus aureus (MRSA): “missing the wood for the trees” [editorial]. Med J Aust 2008; 188: 3-4.
  • 38. Healy J, Braithwaite J. Designing safer health care through responsive regulation. Med J Aust 2006; 184 (10 Suppl): S56-S59.
  • 39. Graves SE, Davidson D, Ingerson L, et al. The Australian Orthopaedic Association National Joint Replacement Registry. Med J Aust 2004; 180 (5 Suppl): S31-S34.
  • 40. Scott IA, Darwin IC, Harvey KH, et al. Multisite, quality-improvement collaboration to optimise cardiac care in Queensland public hospitals. Med J Aust 2004; 180: 392-397.
  • 41. Shaw CD. Toolkit for accreditation programs: some issues in the design and redesign of external health care assessment and improvement systems. Melbourne: International Society for Quality in Health Care, 2004. http://www.isqua.org/isquaPages/Accreditation/ISQuaAccreditationToolkit.pdf (accessed Jan 2008).
  • 42. Shaw CD. Standards for better health: fit for purpose? BMJ 2004; 329: 1250-1251.
