
Clinical practice variation

Peter J Kennedy, Colleen M Leathley and Clifford F Hughes
Med J Aust 2010; 193 (8): 97.
Abstract
  • Although difficult to quantify, variation in how the best available evidence is applied in clinical practice is known to be widespread.

  • The reasons for gaps between evidence and practice are complex, and efforts to improve uptake are unlikely to be successful if they are one-dimensional or focus on individual health professionals.

  • This article provides context for the articles in this Supplement by addressing how and why clinical variation exists, the importance of reducing it, and strategies to drive a more streamlined approach to evidence-based care in Australian health care systems.

The reasons underlying clinical practice variation are numerous, and the term has no standardised definition. In this article, we use the following definition: patients with similar diagnoses, prognoses and demographic status receive different levels of care depending on when, where and by whom they are treated, despite agreed and documented evidence of “best practice”. Such variation can occur at individual, facility, professional and organisational levels, and aligns with what the National Institute of Clinical Studies (NICS) refers to as “knowing–doing gaps”.1

How common is variation in the health system?

Although difficult to quantify with accuracy, there is clear evidence that gaps exist between what is known to be effective from the best available evidence and research, and what happens in practice.1,2

Overseas and Australian reports indicate that variation in clinical practice is common even where agreed clinical practice guidelines exist.2-6 Wide, unwarranted variations that cannot be explained by illness severity or patient factors are frequent, and clinical practice is often idiosyncratic and unscientific.7

Research conducted by the RAND Corporation identified several deficits in adherence to recommended processes for basic care.8 The researchers evaluated performance on 439 indicators of quality of care for 30 acute and chronic conditions and preventive care, and found that the proportion of participants receiving recommended care varied substantially between medical conditions, ranging from 78.7% of participants for senile cataract to 10.5% for alcohol dependence. The study found greater problems with underuse (46.3% of participants not receiving recommended care) than with overuse (11.3% received care that was not recommended and was potentially harmful).

Similar trends have been identified by research from Dartmouth Medical School, which indicated that 30%–40% of patients across regions of the United States do not receive care consistent with current evidence, and that 20%–25% of care provided is unnecessary or potentially harmful.9 Related comparative effectiveness research, which compares one diagnostic or treatment option with others, has also highlighted major variation in approaches and treatment; this has significant financial and clinical implications.10,11

The lack of good-quality data regarding processes of care restricts the conduct of similar research in Australia.2 There is evidence, however, that research findings are not being applied in practice. In 1995, the National Health and Medical Research Council (NHMRC) expressed concern about “unjustifiable variations in clinical practice for the same condition . . . and uncertainty as to the effectiveness of many interventions in improving people’s health”.12 In 2003, the NICS released a report highlighting apparent evidence–practice gaps (underuse or overuse of recommended interventions) in 11 clinical areas, such as:

  • underprescribing of angiotensin-converting enzyme inhibitors for reducing heart failure symptoms;

  • lack of written information and advice about smoking cessation in antenatal care protocols and policies; and

  • inappropriate preventive care for patients at high risk of developing deep vein thrombosis.1,2

Follow-up reports by the NICS highlighted some improvement in these areas, but also identified ongoing challenges.13,14

In New South Wales, the Special Commission of Inquiry into Acute Care Services in NSW Public Hospitals (the Garling Report) identified clear variation in practice, observing that much clinical care reflects clinician or organisational preference rather than patient needs.15

The Clinical Excellence Commission has been involved in clinical improvement projects in transfusion medicine, paediatric emergency care, bacteraemia associated with central line insertion, and patients with deteriorating conditions in intensive care units (ICUs).16 This work indicates that clinical practice variation is widespread across NSW hospitals, with baseline measures recording wide variation in practice.16 Its publication of an annual chartbook since 2007 has also revealed widespread variation between area health services in NSW in rates of key clinical interventions, such as caesarean section, that cannot be explained by demographic or acuity factors alone.17
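
Funnel-plot-style comparison against binomial control limits is one common way to screen such rates for facilities that sit further from the pooled average than chance alone would allow. The minimal Python sketch below illustrates the idea with entirely hypothetical hospital names and counts; it is not necessarily the method used in the chartbook.

import math

# Hypothetical (hospital, caesarean deliveries, total deliveries)
data = [("A", 310, 1000), ("B", 260, 800), ("C", 450, 1100), ("D", 90, 400)]

# Pooled rate across all facilities serves as the "expected" rate
p = sum(c for _, c, _ in data) / sum(n for _, _, n in data)

for name, c, n in data:
    rate = c / n
    se = math.sqrt(p * (1 - p) / n)        # binomial standard error at volume n
    lo, hi = p - 1.96 * se, p + 1.96 * se  # approximate 95% control limits
    status = "within limits" if lo <= rate <= hi else "outside limits"
    print(f"Hospital {name}: {rate:.1%} (limits {lo:.1%}-{hi:.1%}) {status}")

Facilities falling outside the limits are candidates for review of case mix and acuity before the variation is labelled unwarranted.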

Reasons for variation

There are multiple, diverse reasons behind variation in clinical practice, operating at personal, organisational and systemic levels. The reasons why gaps occur between evidence and practice are complex, and efforts to improve uptake are unlikely to be successful if they are one-dimensional or focus on individual health professionals.1,18,19

Ease of use of guidelines in real-life clinical practice must always be a major consideration if implementation is to succeed. Clinicians need to believe that guidelines are appropriate if they are to use them. In particular, doctors in training are unlikely to follow guidelines if senior doctors do not use them.

Research on standardised care in medical oncology across Australia20 and identification of barriers by the NICS21 support the view that multiple factors affect the awareness and application of best-practice protocols and models. These include clinician-specific (autonomy, time) and environmental (policy, infrastructure, education) factors. These, and other studies,1-4 clearly show that availability, dissemination and even awareness of evidence-based approaches do not guarantee their application.

Research commissioned by the Clinical Excellence Commission provides particular insight into variation in clinicians’ prescribing behaviours regarding red blood cells.22 The study found that most senior doctors interviewed were “not particularly interested in learning more about inappropriate transfusions [because] they believe their current practices are not deficient, and that it is others who need to be encouraged or educated to change their practices”.22

The study also found that although participants had a broad assumption that transfusion-prescribing guidelines existed (and had been in place nationally since 2001), there was little knowledge of details such as who wrote them, the format in which they were available, or where participants may have encountered them. There was also a common perception “that their own practice was already consistent with guidelines and, as such, the guidelines merely reinforced their own prescribing habits and did not contain any new information”.22 Some respondents felt that the guidelines were insufficiently helpful or applicable to their specialty, and others reported practices that did not comply with the guidelines.22

The importance of reducing unwarranted variation

More effective translation of health research into practice has the potential to significantly improve patient care and outcomes.3,6 Overseas studies demonstrate that compliance with evidence-based care processes can lead to significant improvements in public health6 and patient outcomes,23 whereas non-compliance is considered to pose a serious threat to quality of care and overall patient health.8

Recent research by the Health Research and Educational Trust,24 which aimed to identify and disseminate best practices associated with 45 high-performing health systems in the United States, identified standardisation of care processes, together with associated education and skills development programs, as vital to the spread of best practices. The researchers found that higher-performing systems reported greater standardisation of training and care processes, and employed multiple strategies balancing local autonomy with an expectation that evidence-based practices would be consistently implemented throughout the health system.24

In addition to improving safety and quality, more standardised practices can also lead to greater efficiencies. In the Clinical Excellence Commission’s Blood Watch program, an area health service that adopted more appropriate transfusion practices saved an estimated $890 000 in the 2006–07 financial year alone.22

Initiatives to foster best practice and reduce variation in practice

The most common initiative to reduce unwarranted variation in clinical practice is the development and implementation of clinical practice guidelines, evidence-based pathways and clinical protocols. As the evidence shows, however, development alone is not enough.2-4 Implementation of guidelines needs to be supported by education, infrastructure, data support, promotion, endorsement and, if applicable, incentives or penalties to encourage uptake. Initiatives to foster best practice and reduce unwarranted variation require local, statewide and national approaches.

In Australia, there have been several exciting recent national initiatives to develop, disseminate and implement best practice more fully. These include the Australian Satellite of the Cochrane EPOC (Effective Practice and Organisation of Care) Group25 and the NICS Clinical Practice Guidelines Portal and Register, which provides a central repository of established and planned clinical practice guidelines across Australia.26

Within NSW, the Bureau of Health Information and the Agency for Clinical Innovation, two of the four pillars identified in the Garling Report,15 are intended to foster the development, implementation, measurement and reporting of evidence-based practice. The NSW Department of Health has also established a collaborative Clinical Variation Committee of key stakeholders to review and promote alignment of evidence and practice.

At the individual level, the value of involving clinicians and patients in the development and implementation of clinical protocols, and in decisions about acceptable variation from them, is well recognised,3,19 particularly where there are several treatment options of comparable value or evidence. Examples of clinical areas where clinician and patient preferences have been successfully incorporated into clinical practice guidelines include prostate and breast cancer treatment, and intensive care.20,27,28

These strategies highlight the importance of involving key stakeholders, and of having easily accessible, reportable data, when developing, monitoring, reporting and refining relevant measures.

Barriers to implementation

Despite evidence that compliance with evidence-based care processes can lead to improved clinical outcomes,23 barriers to implementing such an approach occur at different levels of health care.4-6 Identifying and addressing these barriers is an important step in helping close evidence–practice gaps.21

The NICS identified six key levels of health care where barriers may impede best practice.21 These are:

  • the guidelines themselves — whether they are considered feasible, credible, accessible and attractive;

  • professionals’ individual levels of awareness, knowledge, attitude, motivation to change and behavioural routines;

  • patients’ knowledge, skills, attitude and compliance;

  • professionals’ social context — opinion of colleagues, culture of the network, and level of collaboration and leadership;

  • organisational context — infrastructural elements supporting or inhibiting uptake (eg, staff, processes, capacities, resources and structures); and

  • economic and political context — broader influences supporting or inhibiting uptake, such as financial arrangements, regulations and policies.

Lessons and recommendations

Reducing unwarranted clinical practice variation is important from a quality and safety perspective; it encompasses patient-focused care, appropriateness of care, reduced mortality and morbidity, and improved efficiency in the face of spiralling health care costs.

Although there is general support for the development, implementation and monitoring of evidence-based guidelines, and many good avenues to build on, there is ample evidence that goodwill alone will not lead to the application of evidence-based care.4,18

Producing data that inform clinicians is a major driver of successful implementation. Collecting valid rates of bloodstream infections and reporting the results to ICU staff were critical components of the change management process used in the Keystone ICU project in Michigan.29 In the United Kingdom, the Health Foundation listed better measurement systems in hospitals as one of seven essentials for project success.30
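
As a concrete illustration of the kind of measurement involved: the Keystone project reported bloodstream infection rates per 1000 catheter-days,29 a calculation sketched below in Python using hypothetical figures.

# Hypothetical quarterly surveillance figures for one ICU
infections = 4        # confirmed catheter-related bloodstream infections
catheter_days = 2500  # total days of central-line exposure across all patients

# Standard surveillance rate: infections per 1000 catheter-days
rate = infections / catheter_days * 1000
print(f"{rate:.1f} infections per 1000 catheter-days")  # prints 1.6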

The literature clearly demonstrates that successful reduction of unwarranted variation will require a multipronged and long-term approach. The nature, importance and applicability of initiatives will depend on contextual factors, including the individual condition, patient and clinical environment. A collaborative and concerted effort from clinicians, patients, health care providers and policymakers, supported by appropriate and accessible data collection and reporting mechanisms, will help bring gains in the form of improved patient care and broader health system outcomes.

Received 22 Apr 2010; accepted 22 Jul 2010.
Peter J Kennedy, MB BS, FRACP, Deputy Chief Executive Officer
Colleen M Leathley, MSocSc(Hons), Statewide Coordinator, Clinical Leadership Program
Clifford F Hughes, MB BS, FRACS, FACS, Chief Executive Officer
Clinical Excellence Commission, Sydney, NSW.
Competing interests: None identified.

References
1. National Institute of Clinical Studies. Evidence–practice gaps report. Vol 1. Melbourne: NICS, 2003.
2. Buchan H. Gaps between best evidence and practice: causes for concern. Med J Aust 2004; 180 (6 Suppl): S48-S49.
3. Buchan H, Sewell JR, Sweet M. Translating evidence into practice. Med J Aust 2004; 180: S43.
4. Grol R, Buchan H. Clinical guidelines: what can we do to increase their use? Med J Aust 2006; 185: 301-302.
5. Davis DA, Taylor-Vaisey A. Translating guidelines into practice: a systematic review of theoretic concepts, practical experience and research evidence in the adoption of clinical practice guidelines. CMAJ 1997; 157: 408-416.
6. Evensen AE, Sanson-Fisher R, D’Este C, et al. Trends in publications regarding evidence-practice gaps: a literature review. Implement Sci 2010; 5: 11.
7. Wennberg JE. Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ 2002; 325: 961-964.
8. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348: 2635-2645.
9. Fisher ES, Bynum JP, Skinner JS. Slowing the growth of health care costs — lessons from regional variation. N Engl J Med 2009; 360: 849-852.
10. Chalkidou K, Tunis S, Lopert R, et al. Comparative effectiveness research and evidence-based health policy: experience from four countries. Milbank Q 2009; 87: 339-367.
11. Weinstein MC, Skinner JA. Comparative effectiveness and health care spending — implications for reform. N Engl J Med 2010; 362: 460-465.
12. National Health and Medical Research Council. Guidelines for the development and implementation of clinical practice guidelines. Canberra: NHMRC, 1995.
13. National Institute of Clinical Studies. Evidence–practice gaps report. Vol 2. Melbourne: NICS, 2005.
14. National Institute of Clinical Studies. Evidence–practice gaps report. Vol 1: a review of developments 2004–2007. Melbourne: NICS, 2008.
15. Garling P. Final report of the Special Commission of Inquiry: acute care services in NSW public hospitals. Sydney: NSW Government, 27 Nov 2008.
16. Clinical Excellence Commission [website]. http://www.cec.health.nsw.gov.au/ (accessed Aug 2010).
17. Clinical Excellence Commission. Chartbook: quality of healthcare in NSW, 2008. Sydney: CEC, 2010. http://www.cec.health.nsw.gov.au/programs/chartbook.html (accessed Aug 2010).
18. Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet 2003; 362: 1225-1230.
19. Fisher ES, Berwick DM, Davis K. Achieving health care reform — how physicians can help. N Engl J Med 2009; 360: 2495-2497.
20. Hains IM, Fuller JM, Ward RL, Pearson SA. Standardizing care in medical oncology. Cancer 2009; 115: 5579-5588.
21. National Institute of Clinical Studies. Identifying barriers to evidence uptake. Melbourne: NICS, 2006.
22. Clinical Excellence Commission. Understanding and influencing blood prescription: a market research report prepared by Eureka Strategic Research for the Clinical Excellence Commission and the National Blood Authority. Sydney: CEC, December 2007.
23. National Institute of Clinical Studies. Do guidelines make a difference to health care outcomes? http://www.nhmrc.gov.au/nics/material_resources/resources/do_guidelines.htm (accessed Aug 2010).
24. Yonek J, Hines S, Joshi MA. A guide to achieving high performance in multi-hospital health systems. Chicago: Health Research and Educational Trust, March 2010.
25. Australian Satellite of the Cochrane EPOC Group [website]. http://www.epoc.nhmrc.gov.au/ (accessed Aug 2010).
26. National Institute of Clinical Studies. Clinical Practice Guidelines Portal [website]. http://www.clinicalguidelines.gov.au/ (accessed Aug 2010).
27. Krahn M, Naglie G. The next step in guideline development: incorporating patient preferences. JAMA 2008; 300: 436-438.
28. Sepucha KR, Fowler FJ, Mulley AG. Policy support for patient-centred care: the need for measurable improvements in decision quality. Health Aff [web exclusive] 2004; 7 Oct: VAR-55. http://content.healthaffairs.org/cgi/reprint/hlthaff.var.54v1?ck=nck (accessed Aug 2010).
29. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006; 355: 2725-2732.
30. The Health Foundation. Patient safety update. London: Health Foundation, 2009. http://www.health.org.uk/document.rm?id=1189 (accessed Aug 2010).