
Gaps between best evidence and practice: causes for concern

Heather Buchan
Med J Aust 2004; 180 (6): S48. doi: 10.5694/j.1326-5377.2004.tb05944.x
Published online: 15 March 2004

Abstract

  • Overseas studies that aim to quantify the evidence base of conventional medical care give varying estimates, but many of these studies have potential for bias.

  • We do not know how much of the total healthcare Australians receive is based on the best available evidence; studies of a number of specific conditions show that there are gaps between what is known and what happens in practice.

  • The National Institute of Clinical Studies aims to identify and test systemic approaches to embed ongoing review and uptake of evidence into routine clinical care.

How much mainstream medical care is based on the best available scientific knowledge about what does or doesn’t work? Claims that only 10%–20% of care was based on good evidence were made by a few influential people such as Kerr White (a US physician who pioneered the discipline of health services research) and Archie Cochrane (the UK epidemiologist who provided inspiration for the Cochrane Collaboration) in 1976,1 and by David Eddy (a leading US health policy expert) in 1991.2 These statements were repeated in widely read publications such as the US Office of Technology Assessment reports3,4 and the BMJ.2

The estimates themselves had a weak evidence base. White’s assessment relied on prescribing data gathered in 1961 from 19 British general practitioners. Eddy’s comments were apparently based on research in selected areas, such as the treatment of glaucoma. The most substantial body of data came from a 1990 study of the evidence used by the US National Institutes of Health to assess 126 therapeutic and diagnostic technologies.5 That study concluded that only 21% of the technologies were supported by controlled research evidence, but it took no account of how often they were actually used in patient care.

Spurred by these dismal pronouncements, clinicians in several specialties have studied the evidence base of their practice. In 1995, an Oxford general medical unit reported that 82% of primary treatments provided to 109 patients were based on evidence, with 53% supported by randomised controlled trials (RCTs).6 Further studies in internal medicine,7 general practice,8 psychiatry,9 surgery,10-13 anaesthesia,14 paediatrics,15 and other areas16,17 have concluded that between 11% and 65% of treatment is supported by findings from RCTs. The percentages are higher when other forms of evidence are included, although there are issues about the quality of the evidence used to derive these higher estimates.

While these figures are substantially higher than earlier estimates, critical evaluation of the reported studies highlights the potential for biased results. Most studies examine treatment decisions made during a series of patient encounters over a few weeks in a particular setting. Some have low numbers of participants, are confined to the practice of a small group of doctors, or are undertaken in atypical settings. Moreover, they often concern only one aspect of care, assume that all relevant information was recorded, and do not adequately note possible underprovision of care. The main value of these studies may be their potential to highlight areas in which the lack of good quality research data is failing doctors and patients who need to make practical decisions about healthcare.

A different picture of the extent to which evidence is used in clinical practice emerges from studies of the care that patients receive for specific illnesses. Recently, researchers from the RAND Corporation interviewed a random sample of over 6000 adults living in 12 metropolitan areas in the United States and examined their healthcare records.18 They compared the care received with evidence-based quality indicators for common acute and chronic conditions. Most indicators were process measures (eg, the provision of counselling for smoking cessation and other lifestyle changes), because these reflect the activities that clinicians control most directly and, unlike outcome measures, do not generally require risk adjustment. Overall, participants in the survey received about half of the recommended care processes. Some specific areas of underprovision included failure to treat with aspirin following myocardial infarction, failure to conduct routine monitoring of glycosylated haemoglobin levels in diabetes, and failure to provide pneumococcal vaccination for elderly people.

Although there have been no similar studies in Australia, we know that, in a number of important areas, good research evidence is not being applied universally in practice. For example:

  • Heart failure affects over 300 000 Australians and is a common reason for hospitalisation.19 Although angiotensin-converting enzyme inhibitors reduce the symptoms of heart failure, reduce hospital admissions and lengthen life,20 they are underprescribed by Australian GPs19 and hospital doctors.21

  • Smoking in pregnancy adversely affects mothers and babies, and smoking-cessation programs can reduce smoking rates among pregnant women.22 However, a study of the protocols and policies used by antenatal care providers in Australia found that 90% did not include written information and advice about smoking cessation.23

  • There is strong evidence for the benefit of measures to prevent deep-vein thrombosis in hospitalised patients,24 but studies of high-risk patients in Australian hospitals suggest that some do not receive appropriate preventive care.25

These Australian examples of opportunities to improve care come from a recent National Institute of Clinical Studies (NICS) report that aims to raise awareness of the potential impact of better use of research findings in practice.25 The NICS was established 3 years ago by the Australian Government to help close gaps between evidence and practice in healthcare. The NICS evidence–practice gaps report does not provide a comprehensive overview of the most important gaps, mainly because we lack good quality routine data describing the processes of care in Australian hospitals and primary care settings. Mostly, we depend on one-off surveys and studies to determine whether the care that is delivered is consistent with the best available evidence. While systems for tracking the cost of healthcare have become increasingly sophisticated, the capacity to monitor and improve the value of what we do remains primitive.

The lack of documentation of important gaps is one challenge that faces the Institute. A second is that the reasons for the gaps between what is known and what actually happens in clinical practice are complex, and efforts to improve uptake are unlikely to be successful if they are unidimensional or focus only on individual health professionals. The primary role of the Institute is not to answer research questions about whether specific methods work in particular settings for specific problems (although better-quality evidence in these areas would be enormously helpful). Rather, its role is to identify and test approaches that might improve evidence uptake across the system and that are feasible and affordable in Australia.

Concerted effort by motivated groups can improve the uptake of evidence and, with it, the quality of care. For example, the incidence of pulmonary embolism at the Canberra Hospital fell by 25% after clinicians reviewed and changed their practice to ensure that patients actually received care in line with the current best available evidence. This work made the Clinical Practice Improvement Unit at the Canberra Hospital the overall winner of the inaugural NICS Cochrane Users’ Awards.26 Identifying ways to achieve these kinds of results in multiple settings for a broad array of problems was the focus of the November 2003 workshop in Hobart on effective implementation strategies.

Better use of research knowledge in practice will require partnerships between a range of organisations and the capacity to combine scientific rigour with a recognition of real-world needs and constraints. In 1991, Trevor Baylis, a British inventor, heard of the difficulties faced by healthcare workers in Africa who were trying to inform the population about ways to prevent AIDS. Radio broadcasts could not reach a wide audience because the cost of batteries was prohibitive for many Africans. Baylis developed clockwork-powered radios that are now distributed throughout the world by agencies such as the United Nations and the Red Cross.27 Improving research implementation will require a similar mix of ingenuity, science and partnership to develop affordable approaches and products that are tailored to the environments where they are most needed.

  • Heather Buchan

  • National Institute of Clinical Studies, Melbourne, VIC.


Competing interests:

The author is Chief Executive Officer of the National Institute of Clinical Studies (NICS), which has purchased this Supplement. The core business of the NICS is to close gaps between evidence and practice.

  • 1. White KL. Evidence-based medicine. Lancet 1995; 346: 837-838.
  • 2. Smith R. Where is the wisdom . . .? The poverty of medical evidence. BMJ 1991; 303: 798-799.
  • 3. Office of Technology Assessment of the Congress of the United States. Assessing the efficacy and safety of medical technologies. Washington, DC: US Government Printing Office, September 1978. Available at: www.wws.princeton.edu/cgi-bin/byteserv.prl/~ota (accessed Dec 2003).
  • 4. Office of Technology Assessment of the Congress of the United States. The impact of randomized clinical trials on health policy and medical practice. Washington, DC: US Government Printing Office, August 1983. Available at: www.wws.princeton.edu/cgi-bin/byteserv.prl/~ota (accessed Dec 2003).
  • 5. Dubinsky M, Ferguson J. Analysis of the National Institutes of Health Medicare Coverage Assessment. Int J Technol Assess Health Care 1990; 6: 480-488.
  • 6. Ellis J, Mulligan I, Rowe J, Sackett DL. Inpatient general medicine is evidence-based. A-Team, Nuffield Department of Clinical Medicine. Lancet 1995; 345: 407-410.
  • 7. Michaud G, McGowan JL, van der Jagt R, et al. Are therapeutic decisions supported by evidence from health care research? Arch Intern Med 1998; 158: 1665-1668.
  • 8. Gill P, Dowell AC, Neal RD, et al. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ 1996; 312: 819-821.
  • 9. Geddes J, Game D, Jenkins N, et al. What proportion of primary psychiatric interventions are based on evidence from randomised controlled trials? Qual Health Care 1996; 5: 215-217.
  • 10. Howes N, Chagla L, Thorpe M, et al. Surgical practice is evidence based. Br J Surg 1997; 84: 1220-1223.
  • 11. Kenny SE, Shankar KR, Rintala R, et al. Evidence-based surgery: interventions in a regional paediatric surgical unit. Arch Dis Child 1997; 76: 50-53.
  • 12. Lee JS, Urschel DM, Urschel JD. Is general thoracic surgical practice evidence based? Ann Thorac Surg 2000; 70: 429-431.
  • 13. Kingston R, Barry M, Tierney S, et al. Treatment of surgical patients is evidence-based. Eur J Surg 2001; 167: 324-330.
  • 14. Myles PS, Bain DL, Johnson F, McMahon R. Is anaesthesia evidence-based? A survey of anaesthetic practice. Br J Anaesth 1999; 82: 591-595.
  • 15. Rudolf MCJ, Lyth N, Bundle A, et al. A search for the evidence supporting community paediatric practice. Arch Dis Child 1999; 80: 257-261.
  • 16. Lai TYY, Wong VWY, Leung GM. Is ophthalmology evidence based? A clinical audit of the emergency unit of a regional eye hospital. Br J Ophthalmol 2003; 87: 385-390.
  • 17. Jemec GB, Thorsteinsdottir H, Wulf HC. Evidence-based dermatologic out-patient treatment. Int J Dermatol 1998; 37: 850-854.
  • 18. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348: 2635-2645.
  • 19. Krum H, Tonkin AM, Currie R, et al. Chronic heart failure in Australian general practice. The Cardiac Awareness Survey and Evaluation (CASE) Study. Med J Aust 2001; 174: 439-444.
  • 20. National Heart Foundation of Australia and Cardiac Society of Australia & New Zealand Chronic Heart Failure Clinical Practice Guidelines Writing Panel. Guidelines for management of patients with chronic heart failure in Australia. Med J Aust 2001; 174: 459-466.
  • 21. Scott IA, Denaro CP, Flore JL, et al. Quality of care of patients hospitalized with congestive heart failure. Intern Med J 2003; 33: 140-151.
  • 22. Lumley J, Oliver S, Waters E. Interventions for promoting smoking cessation during pregnancy (Cochrane Review). In: The Cochrane Library, Issue 2, 2003. Oxford: Update Software.
  • 23. Hunt JM, Lumley J. Are recommendations about routine antenatal care in Australia consistent and evidence based? Med J Aust 2002; 176: 255-259.
  • 24. Kleinbart J, Williams M, Rask K. Prevention of venous thromboembolism. In: Shojania KG, Duncan BW, McDonald KM, Wachter RM, editors. Making health care safer. A critical analysis of patient safety practices. Rockville, MD: Agency for Healthcare Research and Quality, 2001. (AHRQ Publication No. 01-E058.)
  • 25. National Institute of Clinical Studies. Evidence–practice gaps report. Vol 1. Melbourne: NICS, 2003.
  • 26. National Institute of Clinical Studies. NICS projects. The Cochrane Library. 2003. Available at: www.nicsl.com.au/projects_projects_detail.aspx?view=1&subpage=9 (accessed Jan 2003).
  • 27. British Council. Trevor Baylis — inventor of the clockwork radio. 1999. Available at: www.britishcouncil.org/science/science/personalities/text/ukperson/baylis.htm (accessed Sep 2003).
