Improving the effectiveness of clinical medicine: the need for better science

Ian A Scott and Paul P Glasziou
Med J Aust 2012; 196 (5): 304-308. doi: 10.5694/mja11.10364
Published online: 19 March 2012

Internationally, there is a drive to render clinical medicine more effective, evidence-based, cost-efficient and accountable. In Australia, the National Health and Hospitals Reform Commission recommends a strong focus on continuous learning and evidence-based improvements to the delivery of health care.1 In the United States, the Institute of Medicine is forging the concept of a value- and science-driven learning health care system that is effective and efficient.2 In the United Kingdom, guidance from the National Institute for Health and Clinical Excellence, combined with quality and outcome frameworks that include financial incentives, seeks to align clinical practice with best available evidence.3

But translating medical research into practice is not the only challenge — medical research itself has problems. These include poor reporting and non-publication of research, and the questionable relevance and value of much research to clinical practice. Respected former and current editors of high-impact medical journals feel their publications have become marketing vehicles for commercial interests, profiling interventions of marginal clinical value.4 If the effectiveness of clinical medicine (defined as the consistent delivery of appropriate, evidence-based care to individual patients so as to maximise clinical benefit) is to improve, we need to address these failings in clinical science. In this article, we define several problems and propose remedial recommendations.

1. Inequities and gaps in clinical research

The clinical research agenda is becoming increasingly distorted by commercial and academic priorities and pressures that do not match the concerns and needs of patients and clinicians.5 In Australia, funding for trials targeting cardiovascular disease and cancer is considerably greater than that for trials related to conditions with similar or greater disease burden, such as musculoskeletal disorders, diabetes, injury, asthma and mental illness.6 Research preference is also given to manufactured products (drugs and devices), while potentially efficacious non-drug, non-device interventions are largely ignored.7 Trial entry criteria tend to exclude elderly people, children, patients with multiple comorbidities and less adherent patients (the last through run-in phases). Such exclusions may be selected by investigators or mandated by research ethics committees. As a result, on average, fewer than 15% of subjects screened for trial entry are ultimately randomised, limiting the generalisability of trial results.8 This fosters off-label use of interventions in patient groups in whom efficacy and potential for harm remain uncertain.9

There is also an urgent need for more research on the comparative effectiveness of different interventions, as many clinical dilemmas centre on choosing the most clinically effective and cost-effective forms of condition-specific care among multiple options within the constraints of limited resources. More head-to-head randomised trials incorporating economic evaluations are required, with a close watch on the increasing use of non-inferiority designs whose margins are far too liberal.10 More attention should also be directed to evaluating the benefits and harms of implanted devices, which attract far less rigorous research than pharmaceuticals.11 At the level of health services, more research is needed into how services can be redesigned to achieve more efficient, reliable delivery of safe, high-quality care.
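
To make the margin problem concrete, the sketch below shows how the choice of non-inferiority margin alone can decide the verdict. All figures are hypothetical and the simple Wald interval is used for illustration; this is not drawn from any cited study. The same trial result is inconclusive against a 2% absolute margin but "non-inferior" against a liberal 5% margin, even though the new treatment produced more adverse outcomes.

```python
import math

def risk_difference_ci(events_new, n_new, events_std, n_std, z=1.96):
    """95% Wald confidence interval for the risk difference (new minus standard)."""
    p_new, p_std = events_new / n_new, events_std / n_std
    diff = p_new - p_std
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)
    return diff, diff - z * se, diff + z * se

# Hypothetical trial, primary adverse outcome: 12% on the new drug v 10% on standard care.
diff, lo, hi = risk_difference_ci(events_new=120, n_new=1000,
                                  events_std=100, n_std=1000)
print(f"risk difference = {diff:.3f}, 95% CI ({lo:.3f} to {hi:.3f})")

# Non-inferiority is declared when the whole CI lies below the prespecified margin.
for margin in (0.02, 0.05):
    verdict = "non-inferior" if hi < margin else "inconclusive"
    print(f"margin of {margin:.0%}: {verdict}")
```

With the 5% margin, a treatment that may harm an extra one in 50 patients can still be marketed as "no worse", which is why margins should reflect the largest loss of efficacy that patients would genuinely accept.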

Recommendations

Involve consumers of research in shaping the research agenda: Consumers of research include both patients and practising clinicians. The James Lind Alliance (JLA) in the UK (www.lindalliance.org) promotes working partnerships between patients and clinicians to identify and promote shared priorities for therapeutic research, including non-drug and non-device interventions. With funding from the UK Department of Health, the JLA identifies high-priority uncertainties about treatment effects from consumer surveys and focus groups, scans guidelines for uncertain or weak recommendations, and identifies evidence gaps in existing health technology assessments and systematic reviews. These uncertainties are published by NHS (National Health Service) Evidence in its Database of Uncertainties about the Effects of Treatments (DUETs).12 One example of a JLA-sponsored activity is the collaboration between Asthma UK and the British Thoracic Society, which establishes research priorities in asthma that may attract NHS research funding. An advocacy group modelled on the JLA could realise comparable benefits in Australia. Trial registries that identify diseases underrepresented in research relative to their disease burden may also assist.13

Promote independent research in areas of need: The funding available from research agencies such as the National Health and Medical Research Council (NHMRC) for independent, investigator-led, applied clinical research (as opposed to basic science) must be increased from current levels, especially for comparative effectiveness and health services research. In the UK, the National Institute for Health Research is providing around £800 million per year for applied clinical research,14 while in the US, US$1.1 billion of public funds has been allocated to comparative effectiveness research under recently passed legislation.15 In Italy and Spain, independent research on drug effects is being supported with revenue from taxation on commercial drug promotion.16 In Australia, shared funding arrangements involving government and commercial sponsors have been proposed for evaluating promising new interventions.17 Although industry investment in trials is important, an appropriate level of public sector investment must coexist, to support independent research aimed at improving care rather than solely producing marketable interventions. The NHMRC and the federal Department of Health and Ageing should work on establishing a national health research agenda that addresses areas of need based on disease burden.

Undertake more adaptive, pragmatic trials: Pragmatic randomised controlled trials (PRCTs) assess intervention effectiveness under conditions of everyday practice (thus increasing generalisability) by selecting clinically relevant interventions and comparators, by including diverse populations of study participants from different practice settings, and by collecting data on a broad range of patient-important health outcomes.18 In the 1980s and early 1990s, large, multisite, investigator-led trials (such as ISIS 1–4 and GISSI 1–3), conducted at modest cost, involving thousands of patients and using streamlined procedures, transformed the management of acute myocardial infarction.19 Since that time, trial costs have increased, as have administrative and ethical requirements,19 but incentives still exist to promote more PRCTs (Box 1).20 In addition, newer, more flexible trial designs maximise research productivity by allowing changes to the study protocol when emerging data unveil discrepancies between what is being observed and what was expected regarding event rates or clinical processes.21
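
As one deliberately simplified illustration of such flexibility, the sketch below implements blinded sample-size re-estimation, one common adaptive feature: if an interim look shows the pooled event rate is lower than the protocol assumed, the required sample size is recalculated so the trial is not left underpowered. The event rates, effect size and design parameters are our assumptions, not those of any trial cited above.

```python
import math

def per_arm_sample_size(p_control, relative_risk_reduction,
                        z_alpha=1.96, z_beta=0.84):
    """Approximate per-arm sample size for comparing two proportions
    (two-sided alpha = 0.05, power = 80%)."""
    p_treat = p_control * (1 - relative_risk_reduction)
    p_bar = (p_control + p_treat) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_treat * (1 - p_treat))) ** 2
    return math.ceil(numerator / (p_control - p_treat) ** 2)

# Design assumption: 10% control event rate, 25% relative risk reduction.
print(per_arm_sample_size(0.10, 0.25))   # ~2002 patients per arm

# Blinded interim look: the pooled event rate is only 6%, so the trial as
# designed would be underpowered; the adaptive protocol re-estimates.
print(per_arm_sample_size(0.06, 0.25))   # ~3466 patients per arm
```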

Use observational designs where appropriate: RCTs have limitations in some circumstances, such as surgical interventions where randomisation may be difficult,22 and in assessing rare or long-term harms or the real-world effects of complex interventions. High-quality observational studies based on clinical registries, cohort studies and postmarketing surveillance can yield important information in such instances. These deserve wider use, as exemplified by the discovery, from registry data, of high revision rates for metal-on-metal hip replacement prostheses.23

2. Shortcomings in trial design and reporting

Many trials could yield more useful information if, in the design phase, investigators made themselves more aware of what is already known about the issues under study. Fewer than a quarter of clinical trial reports incorporate a systematic synthesis of pre-existing evidence to guide researchers in formulating research questions and study design, and only a third cite such a synthesis when discussing their results.24 On average, more than 75% of trials predating an index trial are not cited, including many with large sample sizes.25

In addition, 30%–50% of randomised trials are seriously limited by one or more methodological flaws (Box 2).26,27 Of greater concern are highly publicised observational studies claiming intervention effects that fail to adequately account for possible confounders and other forms of bias.28 Such claims tend to persist in published literature, despite strong contradictory evidence from subsequent randomised trials or more rigorous analyses of the original studies.29

Many of these research flaws underpin false claims of efficacy appearing in commercial promotions and advertisements. Unfortunately, no more than a third of these flaws are detected in the peer-review process before journal publication,30 and different reviewers, chosen for their content knowledge rather than their methodological expertise, can assign different quality ratings to the same article.31 In particular, many reviewers show bias in recommending publication of studies that report a positive outcome in preference to those with null or negative results, despite no difference in methodological quality.32

Finally, in assessing real-world relevance, users of research often want to know much more about the logistics and processes of implementing interventions (who, what, how, when, how much) than trial reports commonly provide.33

Recommendations

Mandate a synthesis of pre-existing evidence before new studies: Research funders should support grant proposals that build on what is already known from systematic reviews of existing evidence. Similarly, journal editors, in considering manuscripts for publication, should require new research findings to be placed in the context of established knowledge or emerging evidence, as is the current policy of The Lancet.

Expand training and expertise in clinical research methods: Young investigators undertaking clinical research apprenticeships must receive adequate training in research methods, critical appraisal and evidence synthesis. Academic centres and research institutes should ensure unfettered access to methodologists and statisticians at the design stage of clinical trials, and granting bodies should require involvement of these individuals in development of study protocols.

Enhance the reliability of peer review: When considering publication, journal editors should critically appraise the quality of their reviewers’ comments34 and supplement peer review with review by clinically aware methodologists and statisticians (for internal validity) and end users of research (for clinical impact and relevance).35

Implement best-practice reporting processes: Research authors and journal editors should implement minimum standards for study reporting developed by reputable groups of methodologists. Standards for randomised intervention trials, studies of diagnostic tests, systematic reviews, observational studies and other study types exist in readily accessible form (www.equator-network.org).

Require researchers to incorporate implementation and economic evaluations into their study designs and reporting of results: Investigators should describe their intervention in sufficient detail to allow clinicians to replicate it in routine practice. The extent to which the care actually delivered differed from what was originally planned in the study protocol (study fidelity) should also be reported as a measure of feasibility, and information should be provided on the intervention attributes listed in Box 3.36 If a formal cost-effectiveness analysis is not possible, estimates of the direct costs of the intervention and its comparator should at least be provided; correlated with the differences in observed outcomes between the two groups, these give some measure of return on investment.37
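
The sketch below illustrates this minimum economic reporting with purely hypothetical per-patient direct costs and event rates: dividing the incremental cost by the absolute risk reduction yields a crude cost per event avoided, a rough return-on-investment figure rather than a formal cost-effectiveness analysis.

```python
# Illustrative figures only: hypothetical per-patient direct costs and
# primary event rates for an intervention and its comparator.
intervention_cost, intervention_events = 1200.0, 0.08
comparator_cost, comparator_events = 400.0, 0.12

extra_cost = intervention_cost - comparator_cost            # $800 per patient
risk_reduction = comparator_events - intervention_events    # 4 percentage points

print(f"number needed to treat: {1 / risk_reduction:.0f}")  # 25
print(f"incremental cost per event avoided: "
      f"${extra_cost / risk_reduction:,.0f}")               # $20,000
```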

3. Diminishing returns from clinical research

It is estimated that less than 7% of all reported clinical trials are valid and highly relevant to clinical practice.38 In developed countries, this figure is likely to decrease over time as greater affluence, better overall health care and longer life expectancies reduce the potential magnitude of clinical benefit achievable with any new therapy.39 In response, industry-funded trials increasingly resort to inflating event rates by using inappropriate composite end points or overinclusive, subclinical diagnostic criteria or biomarkers.40 Many contemporary trials involve “me too” drugs, report trivial gains in patient benefit, are intended simply to obtain marketing approval, and fail to live up to exaggerated claims of “breakthrough” or “miracle” advances.41
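
To see how composite end points can inflate apparent event rates, consider the crude sketch below. The component rates are hypothetical and, for simplicity, component risks are assumed independent: combining death with softer end points several times more common boosts the headline event rate (and hence statistical power), yet the softest, least patient-important component supplies most of the events.

```python
# Hypothetical component event rates for a cardiovascular composite end point.
death, mi, revascularisation = 0.02, 0.03, 0.07

# Assuming independent risks, the composite rate is one minus the
# probability of avoiding all three components.
composite = 1 - (1 - death) * (1 - mi) * (1 - revascularisation)
print(f"composite event rate: {composite:.1%}")   # ~11.6%, v 2% for death alone

# The softest component dominates the composite.
print(f"approximate share from revascularisation: "
      f"{revascularisation / composite:.0%}")     # ~60%
```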

4. Restricted access to trial data

Non-publication or delayed publication of trials with negative or null results remains a challenge for both industry and non-industry trials, more so for the former.42 Recent exposés of internal drug company documents demonstrate how commercially inconvenient results of industry-sponsored trials continue to be concealed from independent review by regulatory authorities or other researchers.43 Even when released, most such results are buried within dense and misleading approval applications and never see the light of day in product labels and patient information leaflets.44

Conclusion

In many respects, clinical research stands at a crossroads. The reputation of researchers, and of the science they generate, has been tainted by methodological shortcomings, commercial influences, neglect of societal needs and notorious cases of outright fraud. Researchers need to be more proactive in evaluating clinical interventions in terms of patient-important benefit, wide applicability and comparative effectiveness. Funders need to be more supportive of applied clinical research that rigorously evaluates the effectiveness of new treatments and synthesises existing knowledge into clinically useful systematic reviews. Advancing the effectiveness of clinical practice is predicated on producing more good science.


Provenance: Not commissioned; externally peer reviewed.

  • Ian A Scott1
  • Paul P Glasziou2

  • 1 Department of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, QLD.
  • 2 Centre for Research in Evidence-Based Practice, Bond University, Gold Coast, QLD.


Correspondence: ian_scott@health.qld.gov.au

Competing interests: No relevant disclosures.

  • 1. National Health and Hospitals Reform Commission. A healthier future for all Australians: final report June 2009. Canberra: Department of Health and Ageing, 2009.
  • 2. Olsen L, Aisner D, McGinnis JM. Institute of Medicine Roundtable on Evidence-based Medicine. The learning healthcare system: workshop summary. Washington, DC: National Academies Press, 2007.
  • 3. National Institute for Health and Clinical Excellence. Putting guidance into practice. http://www.nice.org.uk/usingguidance/ (accessed Nov 2011).
  • 4. Jelinek GA, Neate SL. The influence of the pharmaceutical industry in medicine. J Law Med 2009; 17: 216-223.
  • 5. Tallon D, Chard J, Dieppe P. Relation between agendas of the research community and the research consumer. Lancet 2000; 355: 2037-2040.
  • 6. Mitchell RJ, McClure RJ, Olivier J, Watson WL. Rational allocation of Australia’s research dollars: does the distribution of NHMRC funding by National Health Priority Area reflect actual disease burden? Med J Aust 2009; 191: 648-652.
  • 7. Glasziou PP. Promoting evidence-based non-drug interventions: time for a non-pharmacopoeia? Med J Aust 2009; 191: 52-53.
  • 8. Sharpe N. Clinical trials and the real world: selection bias and the generalisability of trial results. Cardiovasc Drugs Ther 2002; 16: 75-77.
  • 9. Abernethy AP, Raman G, Balk EM, et al. Systematic review: reliability of compendia methods for off-label oncology indications. Ann Intern Med 2009; 150: 336-343.
  • 10. Scott IA. Non-inferiority trials — determining when alternative treatments are good enough. Med J Aust 2009; 190: 326-330.
  • 11. Dhruva SS, Bero LA, Redberg RF. Strength of study evidence examined by the FDA in premarket approval of cardiovascular devices. JAMA 2009; 302: 2679-2685.
  • 12. Fenton M, Leitch A, Grindlay D. Developing a UK DUETs module 2009. www.lindalliance.org/pdfs/UK_DUETs/DEVELOPING_A_UK_DUETs%20_MODULE_2009.pdf (accessed Feb 2011).
  • 13. Dear RF, Barratt AL, McGeechan K, et al. Landscape of cancer clinical trials in Australia: using trial registries to guide future research. Med J Aust 2011; 194: 387-391.
  • 14. National Institute for Health Research. Transforming health research: the first two years. Progress report 2006–2008. London: NIHR, 2008. http://www.nihr.ac.uk/files/pdfs/NIHR%20Progress%20Report%202006-2008.pdf (accessed Nov 2011).
  • 15. Department of Health and Human Services. Text of the Recovery Act related to comparative effectiveness funding. Excerpt from the American Recovery and Reinvestment Act 2009. March 2009. http://www.hhs.gov/recovery/programs/cer/recoveryacttext.html (accessed Mar 2011).
  • 16. Garattini S, Chalmers I. Patients and the public deserve big changes in evaluation of drugs. BMJ 2009; 338: b1025.
  • 17. Glasziou PP. Support for trials of promising medications through the Pharmaceutical Benefits Scheme. A proposal for a new authority category. Med J Aust 1995; 162: 33-36.
  • 18. Tunis SR, Stryer DB, Clancy CM. Practical clinical trials. Increasing the value of clinical research for decision making in clinical and health policy. JAMA 2003; 290: 1624-1632.
  • 19. Yusuf S. Damage to important clinical trials by over-regulation. Clin Trials 2010; 7: 622-625.
  • 20. Relton C, Torgerson D, O’Cathain A, Nicholl J. Rethinking pragmatic randomised controlled trials: introducing the “cohort multiple randomised controlled trial” design. BMJ 2010; 340: c1066.
  • 21. Luce BR, Kramer JM, Goodman SN, et al. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Ann Intern Med 2009; 151: 206-209.
  • 22. Merkow RP, Ko CY. Evidence-based medicine in surgery. The importance of both experimental and observational study designs. JAMA 2011; 306: 436-437.
  • 23. Australian Orthopaedic Association National Joint Replacement Registry. Hip and knee arthroplasty. Annual report 2010. Adelaide: AOA, 2010. http://www.dmac.adelaide.edu.au/aoanjrr/documents/aoanjrrreport_2010.pdf (accessed Jul 2011).
  • 24. Goudie AC, Sutton AJ, Jones DR, Donald A. Empirical assessment suggests that existing evidence could be used more fully in designing randomised controlled trials. J Clin Epidemiol 2010; 63: 983-991.
  • 25. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med 2011; 154: 50-55.
  • 26. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009; 374: 86-89.
  • 27. Scott IA, Greenberg PB; IMSANZ EBM Working Group. Cautionary tales in the clinical interpretation of therapy trial reports. Intern Med J 2005; 35: 611-621.
  • 28. Groenwold RH, Van Deursen AM, Hoes AW, Hak E. Poor quality of reporting confounding bias in observational intervention studies: a systematic review. Ann Epidemiol 2008; 18: 746-751.
  • 29. Tatsioni A, Bonitsis NG, Ioannidis JP. Persistence of contradicted claims in the literature. JAMA 2007; 298: 2517-2526.
  • 30. Schroter S, Black N, Evans S, et al. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med 2008; 101: 507-514.
  • 31. Kravitz RL, Franks P, Feldman MD, et al. Editorial peer reviewers’ recommendations at a general medical journal: are they reliable and do editors care? PLoS One 2010; 5: e10072.
  • 32. Emerson GB, Warme WJ, Wolf FM, et al. Testing for the presence of positive-outcome bias in peer review: a randomised controlled trial. Arch Intern Med 2010; 170: 1934-1939.
  • 33. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008; 336: 1472-1474.
  • 34. Landkroon AP, Euser AM, Veeken H, et al. Quality assessment of reviewers’ reports using a simple instrument. Obstet Gynecol 2006; 108: 979-985.
  • 35. Cobo E, Selva-O’Callaghan A, Ribera JM, et al. Statistical reviewers improve reporting in biomedical articles: a randomized trial. PLoS One 2007; 2: e332.
  • 36. Glasziou P, Chalmers I, Altman DG, et al. Taking healthcare interventions from trial to practice. BMJ 2010; 341: c3852.
  • 37. Hlatky MA, Owens DK, Sanders GD. Cost-effectiveness as an outcome in randomized clinical trials. Clin Trials 2006; 3: 543-551.
  • 38. McKibbon KA, Wilczynski NL, Haynes RB. What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Med 2004; 2: 33-45.
  • 39. Kent DM, Trikalinos TA. Therapeutic innovations, diminishing returns, and control rate preservation. JAMA 2009; 302: 2254-2256.
  • 40. Alonso-Coello P, Garcia-Franco AL, Guyatt G, Moynihan R. Drugs for pre-osteoporosis: prevention or disease mongering? BMJ 2008; 336: 126-129.
  • 41. Wurtman RJ, Bettiker RL. The slowing of treatment discovery, 1965–1995. Nat Med 1995; 1: 1122-1125.
  • 42. Hopewell S, Clarke MJ, Stewart L, Tierney J. Time to publication for results of clinical trials. Cochrane Database Syst Rev 2007; (2): MR000011.
  • 43. Turner EH, Matthews AM, Linardatos E, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008; 358: 252-260.
  • 44. Schwartz LM, Woloshin S. Lost in transmission — FDA drug information that never reaches clinicians. N Engl J Med 2009; 361: 1717-1720.
  • 45. US Food and Drug Administration. Law strengthens FDA. 2007. www.fda.gov/oc/initiatives/advance/fdaaa.html (accessed Feb 2011).
  • 46. Coombes R. UK government will tighten the law on trial results after weaknesses found in safety legislation. BMJ 2008; 336: 576-577.
  • 47. European Commission. Medicinal products for human use. Transparency of information related to clinical trials. http://ec.europa.eu/health/human-use/clinical-trials/index_en.htm (accessed Jul 2011).
  • 48. Haines IE, Miklos GLG. Time to mandate data release and independent audits for all clinical trials. Med J Aust 2011; 195: 575-577.
  • 49. Starr M, Chalmers I, Clarke M, Oxman AD. The origins, evolution and future of the Cochrane Database of Systematic Reviews. Int J Technol Assess Health Care 2009; 25 (Suppl 1): 182-195.
