Clinical practice variation

Peter J Kennedy, Colleen M Leathley and Clifford F Hughes
Med J Aust 2010; 193 (8): S97. doi: 10.5694/j.1326-5377.2010.tb04021.x
Published online: 18 October 2010
How common is variation in the health system?

Although difficult to quantify with accuracy, there is clear evidence that gaps exist between what is known to be effective, based on the best available evidence and research, and what happens in practice.1,2

Overseas and Australian reports indicate that variation in clinical practice is common even where agreed clinical practice guidelines exist.2-6 Wide, unwarranted variations that cannot be explained by illness severity or patient factors are frequent, and clinical practice is often idiosyncratic and unscientific.7

Research conducted by the RAND Corporation identified several deficits in adherence to recommended processes for basic care.8 The researchers evaluated performance on 439 indicators of quality of care for 30 acute and chronic conditions and preventive care, and found that the proportion of participants receiving recommended care varied substantially between medical conditions, ranging from 78.7% of participants for senile cataract to 10.5% for alcohol dependence. The study found greater problems with underuse (46.3% of participants not receiving recommended care) than with overuse (11.3% received care that was not recommended and was potentially harmful).
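
As a schematic illustration (not the RAND study's actual data or method in detail), adherence on such indicators is typically scored as the proportion of eligible care events for which recommended care was delivered. A minimal Python sketch with invented figures:

    # Schematic scoring of adherence to quality-of-care indicators:
    # the share of eligible care events for which recommended care
    # was delivered. All figures below are invented for illustration.
    from collections import defaultdict

    # Each record: (condition, recommended care received for an
    # eligible indicator event?)
    records = [
        ("senile cataract", True), ("senile cataract", True),
        ("senile cataract", False), ("alcohol dependence", True),
        ("alcohol dependence", False), ("alcohol dependence", False),
    ]

    eligible = defaultdict(int)
    received = defaultdict(int)
    for condition, got_care in records:
        eligible[condition] += 1
        received[condition] += got_care

    for condition in sorted(eligible):
        pct = 100 * received[condition] / eligible[condition]
        print(f"{condition}: {pct:.1f}% of eligible events received recommended care")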

Similar trends have been identified by research from Dartmouth Medical School, which indicated that 30%–40% of patients in regional America do not receive care consistent with current evidence, and that 20%–25% of care provided is unnecessary or potentially harmful.9 Related comparative effectiveness research, comparing one diagnostic or treatment option to others, has also highlighted major variation in approaches and treatment; this has significant financial and clinical implications.10,11

The lack of good-quality data on processes of care restricts the conduct of similar research in Australia.2 There is evidence, however, that research evidence is not being applied in practice. In 1995, the National Health and Medical Research Council (NHMRC) expressed concern about “unjustifiable variations in clinical practice for the same condition . . . and uncertainty as to the effectiveness of many interventions in improving people’s health”.12 In 2003, the National Institute of Clinical Studies (NICS) released a report highlighting apparent evidence–practice gaps (underuse or overuse of recommended interventions) in 11 clinical areas.1

Follow-up reports by the NICS highlighted some improvement in these areas, but also identified ongoing challenges.13,14

In New South Wales, the Special Commission of Inquiry into Acute Care Services in NSW Public Hospitals (the Garling Report) discerned clear variation in practice, observing that much clinical care reflects clinician or organisational preference, not patient needs.15

The Clinical Excellence Commission has been involved in clinical improvement projects in transfusion medicine, paediatric emergency care, bacteraemia associated with central line insertion, and patients with deteriorating conditions in intensive care units (ICUs).16 This work indicates that clinical practice variation is widespread across NSW hospitals, with baseline measures recording wide variation in practice.16 Its publication of an annual chartbook since 2007 has also revealed widespread variation between area health services in NSW in rates of key clinical interventions, such as caesarean section, that cannot be explained by demographic or acuity factors alone.17
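
A common way to test whether such variation is explained by casemix is indirect standardisation: compare each service's observed intervention count with the count expected if statewide stratum-specific rates applied to its own patient mix. The sketch below is a minimal illustration with invented rates, strata and counts, not the Commission's actual method or data:

    # Minimal sketch of indirect standardisation for comparing
    # caesarean section rates across area health services.
    # All rates, strata and counts are invented for illustration.

    # Statewide (reference) caesarean rates by maternal age stratum.
    reference_rates = {"<30": 0.22, "30-37": 0.30, ">37": 0.40}

    # One area health service's deliveries by stratum, and its
    # observed number of caesarean sections.
    deliveries = {"<30": 500, "30-37": 700, ">37": 200}
    observed = 480

    # Expected count if the service matched the reference rates,
    # given its own casemix.
    expected = sum(reference_rates[s] * n for s, n in deliveries.items())

    ratio = observed / expected
    print(f"expected {expected:.0f}, observed {observed}, ratio {ratio:.2f}")
    # A ratio well above 1 flags variation that the adjustment
    # factors alone do not explain.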

Reasons for variation

There are multiple, diverse reasons behind variation in clinical practice, operating at personal, organisational and systemic levels. The reasons why gaps occur between evidence and practice are complex, and efforts to improve uptake are unlikely to succeed if they are one-dimensional or focus solely on individual health professionals.1,18,19

Ease of use of the guidelines in real-life clinical practice must always be a major consideration if successful implementation is to occur. Clinicians need to believe that the guidelines are appropriate if they are to use them. In particular, doctors in training are unlikely to follow guidelines if senior doctors do not use them.

Research on standardised care in medical oncology across Australia20 and identification of barriers by the NICS21 support the view that multiple factors affect the awareness and application of best-practice protocols and models. These include clinician-specific (autonomy, time) and environmental (policy, infrastructure, education) factors. These, and other studies,1-4 clearly show that availability, dissemination and even awareness of evidence-based approaches do not guarantee their application.

Research commissioned by the Clinical Excellence Commission provides particular insight into variation in clinicians’ prescribing behaviours regarding red blood cells.22 The study found that most senior doctors interviewed were

The study also found that although participants had a broad assumption that transfusion-prescribing guidelines existed (and had been in place nationally since 2001), there was little knowledge of details such as who wrote them, the format in which they were available, or where participants may have encountered them. There was also a common perception “that their own practice was already consistent with guidelines and, as such, the guidelines merely reinforced their own prescribing habits and did not contain any new information”.22 Some respondents felt that the guidelines were insufficiently helpful or applicable to their specialty, and others reported practices that did not comply with the guidelines.22

The importance of reducing unwanted variation

More effective translation of health research into practice has the potential to significantly improve patient care and outcomes.3,6 Overseas studies demonstrate that compliance with evidence-based care processes can lead to significant improvements in public health6 and patient outcomes,23 whereas non-compliance is considered to pose a serious threat to quality of care and overall patient health.8

Recent research by the Health Research and Educational Trust,24 which aimed to identify and disseminate best practices associated with 45 high-performing health systems in the United States, identified standardisation of care processes and associated education and skills development programs as being vital to the spread of best practices. The researchers found that higher-performing systems reported greater standardisation of training and care processes and employed multiple strategies balancing local autonomy with an expectation that evidence-based practices would be consistently implemented throughout the health system.24

In addition to improving safety and quality, more standardised practices can also lead to greater efficiencies. In the Clinical Excellence Commission’s Blood Watch program, an area health service that adopted more appropriate transfusion practices saved an estimated $890 000 in the 2006–07 financial year alone.22

Initiatives to foster best practice and reduce variation in practice

The most common initiative to reduce unwanted variation in clinical practice is the development and implementation of clinical practice guidelines, evidence-based pathways and clinical protocols. As evidence shows, however, development is not enough.2-4 Implementation of guidelines needs to be supported by education, infrastructure, data support, promotion, endorsement and, if applicable, incentives or penalties to encourage uptake. Initiatives to foster best practice and reduce unwarranted variation require local, statewide and national approaches.

In Australia, there have been several exciting recent national initiatives to develop, disseminate and implement best practice more fully. These include the Australian Satellite of the Cochrane EPOC (Effective Practice and Organisation of Care) Group25 and the NICS Clinical Practice Guidelines Portal and Register, which provides a central repository of established and planned clinical practice guidelines across Australia.26

Within NSW, the Bureau for Health Information and Agency for Clinical Innovation, two of the four pillars identified in the Garling Report,15 are intended to foster the development, implementation, measurement and reporting of evidence-based practice. The NSW Department of Health established a collaborative Clinical Variation Committee of key stakeholders to review and promote alignment of evidence and practice.

At the individual level, the value of involving clinicians and patients in the development, implementation and decisions about acceptable variation from clinical protocols is well recognised,3,19 particularly where there are several options for treatment that have comparable value or evidence. Examples of clinical areas where clinician and patient preferences have been successfully incorporated into clinical practice guidelines include prostate and breast cancer treatment, and the ICU.20,27,28

These strategies highlight the importance of involving key stakeholders, and of having easily accessible, reportable data, when developing, monitoring, reporting and refining relevant measures.

Lessons and recommendations

Reducing unwarranted clinical practice variation is important from a quality and safety perspective; it encompasses patient-focused care, appropriateness of care, reduced mortality and morbidity, and improved efficiency in the face of spiralling health care costs.

Although there is generic support for the development, implementation and monitoring of evidence-based guidelines, and many good avenues to build on, there is ample evidence that goodwill alone will not lead to the application of evidence-based care.4,18

Producing data that inform clinicians is a major driver of successful implementation. Collecting valid rates of bloodstream infections and reporting the results to ICU staff were critical components of the change management process used in the Keystone ICU project in Michigan.29 In the United Kingdom, the Health Foundation listed better measurement systems in hospitals as one of seven essentials for the success of improvement projects.30
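
The Keystone measure illustrates the point: central line-associated bloodstream infections are conventionally reported back to ICU teams as a rate per 1000 catheter-days, so that units of different sizes can be compared over time. A minimal sketch with invented figures:

    # Minimal sketch of the conventional feedback measure: bloodstream
    # infections per 1000 catheter-days. Figures are invented.

    def infection_rate(infections, catheter_days):
        """Infections per 1000 catheter-days."""
        return 1000 * infections / catheter_days

    # Quarterly (infections, catheter-days) for one ICU.
    quarterly = [(4, 1800), (3, 1750), (1, 1900), (0, 1850)]
    for q, (inf, days) in enumerate(quarterly, start=1):
        print(f"Q{q}: {infection_rate(inf, days):.2f} per 1000 catheter-days")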

The literature clearly demonstrates that successful reduction of unwarranted variation will require a multipronged and long-term approach. The nature, importance and applicability of initiatives will depend on contextual factors, including the individual condition, patient and clinical environment. A collaborative and concerted effort from clinicians, patients, health care providers and policymakers, supported by appropriate and accessible data collection and reporting mechanisms, will help bring gains in the form of improved patient care and broader health system outcomes.

  • Peter J Kennedy
  • Colleen M Leathley
  • Clifford F Hughes

  • Clinical Excellence Commission, Sydney, NSW.



Competing interests:

None identified.

  • 1. National Institute of Clinical Studies. Evidence–practice gaps report. Vol 1. Melbourne: NICS, 2003.
  • 2. Buchan H. Gaps between best evidence and practice: causes for concern. Med J Aust 2004; 180 (6 Suppl): S48-S49. <MJA full text>
  • 3. Buchan H, Sewell JR, Sweet M. Translating evidence into practice. Med J Aust 2004; 180: S43. <MJA full text>
  • 4. Grol R, Buchan H. Clinical guidelines: what can we do to increase their use? Med J Aust 2006; 185: 301-302. <MJA full text>
  • 5. Davis DA, Taylor-Vaisey A. Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ 1997; 157: 408-416.
  • 6. Evensen AE, Sanson-Fisher R, D’Este C, et al. Trends in publications regarding evidence-practice gaps: a literature review. Implement Sci 2010; 5: 11.
  • 7. Wennberg JE. Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ 2002; 325: 961-964.
  • 8. McGlynn EA, Asch SM, Adams J, et al. Quality of health care delivered to adults in the United States. N Engl J Med 2003; 348: 2635-2645.
  • 9. Fisher ES, Bynum JP, Skinner JS. Slowing the growth of health care costs — lessons from regional variation. N Engl J Med 2009; 360: 849-852.
  • 10. Chalkidou K, Tunis S, Lopert R, et al. Comparative effectiveness research and evidence-based health policy: experience from four countries. Milbank Q 2009; 87: 339-367.
  • 11. Weinstein MC, Skinner JA. Comparative effectiveness and health care spending — implications for reform. N Engl J Med 2010; 362: 460-465.
  • 12. National Health and Medical Research Council. Guidelines for the development and implementation of clinical practice guidelines. Canberra: NHMRC, 1995.
  • 13. National Institute of Clinical Studies. Evidence–practice gaps report. Vol 2. Melbourne: NICS, 2005.
  • 14. National Institute of Clinical Studies. Evidence–practice gaps report. Vol 1: a review of developments: 2004–2007. Melbourne: NICS, 2008.
  • 15. Garling P. Final report of the Special Commission of Inquiry: acute care services in NSW public hospitals. Sydney: NSW Government, 27 Nov 2008.
  • 16. Clinical Excellence Commission [website]. http://www.cec.health.nsw.gov.au/ (accessed Aug 2010).
  • 17. Clinical Excellence Commission. Chartbook. Quality of healthcare in NSW. 2008. Sydney: CEC, 2010. http://www.cec.health.nsw.gov.au/programs/chartbook.html (accessed Aug 2010).
  • 18. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet 2003; 362: 1225-1230.
  • 19. Fisher ES, Berwick DM, Davis K. Achieving health care reform — how physicians can help. N Engl J Med 2009; 360: 2495-2497.
  • 20. Hains IM, Fuller JM, Ward RL, Pearson SA. Standardizing care in medical oncology. Cancer 2009; 115: 5579-5588.
  • 21. National Institute of Clinical Studies. Identifying barriers to evidence uptake. Melbourne: NICS, 2006.
  • 22. Clinical Excellence Commission. Understanding and influencing blood prescription. A market research report prepared by Eureka Strategic Research for the Clinical Excellence Commission and the National Blood Authority. December 2007. Sydney: CEC, 2007.
  • 23. National Institute of Clinical Studies. Do guidelines make a difference to health care outcomes? http://www.nhmrc.gov.au/nics/material_resources/resources/do_guidelines.htm (accessed Aug 2010).
  • 24. Yonek J, Hines S, Joshi MA. A guide to achieving high performance in multi-hospital health systems. March 2010. Chicago: Health Research and Educational Trust, 2010.
  • 25. Australian Satellite of the Cochrane EPOC Group [website]. http://www.epoc.nhmrc.gov.au/ (accessed Aug 2010).
  • 26. National Institute of Clinical Studies. Clinical Practice Guidelines Portal [website]. http://www.clinicalguidelines.gov.au/ (accessed Aug 2010).
  • 27. Krahn M, Naglie G. The next step in guideline development: incorporating patient preferences. JAMA 2008; 300: 436-438.
  • 28. Sepucha KR, Fowler FJ, Mulley AG. Policy support for patient-centred care: the need for measurable improvements in decision quality. Health Aff [web exclusive] 2004; 7 Oct: VAR-55. http://content.healthaffairs.org/cgi/reprint/hlthaff.var.54v1?ck=nck (accessed Aug 2010).
  • 29. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections. N Engl J Med 2006; 355: 2725-2734.
  • 30. The Health Foundation. Patient safety update. London: Health Foundation; 2009. http://www.health.org.uk/document.rm?id=1189 (accessed Aug 2010).
