Improving effectiveness of clinical medicine: the need for better translation of science into practice

Ian A Scott and Paul P Glasziou
Med J Aust 2012; 197 (7): 374-378. || doi: 10.5694/mja11.10365
Published online: 1 October 2012

In an earlier article we discussed the need for better science in improving health care effectiveness.1 In this article, we focus on the need for better translation of valid and relevant science into routine clinical practice. The path from research to improved patient outcomes has been likened to a “leaky pipe”, which comprises seven sequential steps of evidence translation: (i) awareness — the clinician is aware of valid and relevant research; (ii) acceptance — the clinician accepts that the research should alter current practice; (iii) applicability — the clinician uses interventions in patients who stand to benefit most and avoids interventions in patients who might be harmed; (iv) ability — the clinician feels confident that delivering an intervention is within his or her capacity; (v) acted on — the clinician remembers to consider and prescribe the intervention appropriately; (vi) agreement — the patient accepts the prescribed treatment plan; and (vii) adherence — the patient consistently adheres to the treatment plan.2 Leakage (or “falls in pressure”) throughout this clinician-awareness-to-patient-adherence pipeline helps explain why, on average, no more than a third of evidence-based clinical guideline recommendations are routinely adhered to (based on clinician and patient self-report),3 and no more than 60% of patients at any one time receive the care deemed appropriate by current science (based on case reviews).4 This article presents evidence-based effective means for overcoming evidence-translation barriers.5,6
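The compounding effect of leakage at each step can be illustrated with simple arithmetic; the sketch below assumes a hypothetical 80% retention at every step, chosen only to show how modest per-step losses multiply:

```python
# Illustrative leaky-pipe arithmetic: if each of the seven translation
# steps retains a hypothetical 80% of evidence-based decisions, the
# fraction of care surviving the whole pipeline is 0.8 to the power 7.
steps = ["awareness", "acceptance", "applicability", "ability",
         "acted on", "agreement", "adherence"]

retention_per_step = 0.80  # assumed value, for illustration only

overall = retention_per_step ** len(steps)
print(f"Care surviving all {len(steps)} steps: {overall:.0%}")  # ~21%
```

Even this generous per-step retention leaves only about a fifth of evidence-based care intact at the end of the pipeline, which is broadly consistent with the adherence figures cited above.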

1. Insufficient awareness or acceptance of high-quality evidence

Clinicians (defined here as any certified health professional) must be prepared to question the level of evidentiary certainty underpinning clinical decisions, and actively seek and adopt new evidence that may better inform such decisions, even if this means reversing widely held beliefs and practices. Between 20% and 30% of clinical interventions may be unnecessary or even harmful on the basis of current evidence.4,7,8 Some examples are listed in Box 1.

Clinicians need the skills to quickly locate and interpret the small proportion (about 7%) of published research, scattered among hundreds of journals, that is valid, has clinically meaningful impact, and is applicable to many patients.9 The vast amount of “marketing-based evidence” that is false or misleading can then be quickly dismissed. Accessing and using systematic reviews and secondary sources of prefiltered, preappraised research accelerates evidence uptake.10 Clinicians skilled in evidence-based medicine (EBM) appear to provide better-quality care.11-13


Develop and assess EBM skills among clinicians: The acquisition of EBM skills should be part of the core curricula of all medical schools and specialist colleges, and students should be assessed on these skills. Curricula need to integrate learning of EBM into bedside medicine, clinical tutorials and journal clubs in ways that foster appropriate skills and attitudes.14 Performance appraisals and continuing professional development programs should award credits for activities that maintain and enhance EBM skills.

Provide user-friendly EBM infrastructure: Wherever clinicians work, access should be guaranteed to well designed evidence databases and decision-support systems.15 Reliable search engines such as PubMed Clinical Queries, Cochrane Library and Evidence Updates (BMJ) should be available at every computer workstation.

Generate reliable evidence-based guidance: Professional organisations working with government bodies such as the Australian Commission on Safety and Quality in Health Care (ACSQHC) need to define minimum evidence-based (not opinion-based) practice standards for common conditions associated with a high disease burden.

Clinical guideline recommendations should be unambiguous and consistent, and define target patient populations and expected clinical outcomes. Clinically trained, non-conflicted content experts and methodologists should collaborate in guideline development panels that use structured and transparent processes for grading the quality of evidence and the strength of recommendations.16 The occurrence of conflicting recommendations in different guidelines must be minimised by consensus processes operating across professional jurisdictions.17 Clinicians charged with implementing local guidelines should collaborate with others within their practice settings in developing agreed best practices that account for local contexts.18 Such localised guidelines then need to be made readily accessible at the point of care, and implemented in ways that highlight key decision points and target less experienced or more isolated clinicians most in need of guidance.19

Guidelines need to be updated regularly and should emphasise discontinuation of established practices that new evidence shows to be inappropriate (Box 1), and avoidance of interventions for which evidence of effectiveness is lacking (such as some off-label prescribing). When new drugs or devices become listed for public subsidy, clinicians should, before adopting them, seek out rigorous assessments of efficacy from independent sources such as the National Prescribing Service Rational Assessment of Drugs and Research (RADAR) and Diagnostics Initiative, and national registry studies (for prostheses, devices and procedures).

Restrict commercial influences in clinician education and practice: Misleading claims in journal advertisements, educational events, and media releases paid for by industry distract clinicians from valid, high-quality evidence and should attract hefty sanctions and penalties. Some medical journals such as PLoS Medicine have dispensed with all commercial advertising, relying instead on income from subscribers and non-commercial sources (eg, government health departments and research agencies). Disclosure of all industry funding of expert opinion makers, conference organisers and practising clinicians should be mandated.

2. Suboptimal targeting of clinical interventions

Clinicians often undertreat patients at high absolute risk of disease events and overtreat lower-risk patients or those with irreversibly poor prognosis.20 This risk–treatment paradox is pervasive, particularly in older populations, where underuse and overuse of interventions is a significant contributor to avoidable mortality and hospitalisation.21


Promote the use of risk prediction tools: Professional bodies, guideline writing groups and the ACSQHC should promote greater use of validated, easy-to-use risk prediction tools that enable clinicians and patients to better estimate individual absolute risk of treatment benefit and harm. A recently updated guideline on antithrombotic treatment in acute coronary syndromes includes risk calculators for determining the trade-off between a decreased risk of cardiovascular events and an increased risk of bleeding.22
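The trade-off such calculators quantify rests on elementary absolute-risk arithmetic; the sketch below uses invented input values (not figures from the cited guideline) to show how absolute risk reduction, number needed to treat (NNT) and number needed to harm (NNH) are derived:

```python
# Hedged sketch of the benefit-harm arithmetic behind a risk calculator.
# All input values are hypothetical, chosen for illustration only.
baseline_event_risk = 0.10      # assumed 10% absolute risk of a CV event untreated
relative_risk_reduction = 0.25  # assumed 25% relative risk reduction with treatment
absolute_bleed_increase = 0.01  # assumed 1% absolute increase in major bleeding

arr = baseline_event_risk * relative_risk_reduction  # absolute risk reduction
nnt = 1 / arr                                        # number needed to treat
nnh = 1 / absolute_bleed_increase                    # number needed to harm

per_1000_benefit = arr * 1000
per_1000_harm = absolute_bleed_increase * 1000
print(f"NNT = {nnt:.0f}, NNH = {nnh:.0f}")
print(f"Per 1000 treated: ~{per_1000_benefit:.0f} events avoided, "
      f"~{per_1000_harm:.0f} extra bleeds")
```

With these illustrative numbers, treatment avoids about 25 events per 1000 patients at the cost of about 10 extra bleeds; a patient at lower baseline risk (a smaller `baseline_event_risk`) would see the benefit shrink while the harm stays roughly constant, which is the risk–treatment trade-off the calculators make explicit.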

Promote wider use of care standards and indicators in routine practice: Senior personnel within hospital departments, Medicare Locals, large private group practices and public health facilities should embed evidence-based standards into routine care. Clinical pathways, protocols and checklists can define and reinforce evidence-based indications for specific interventions.23 In maximising compliance with such indications, condition-specific clinical indicators need to be measured continuously or regularly, and results fed back to relevant clinicians.24 Standardised electronic medical records and discharge summaries, clinical registries and administrative datasets can supply the necessary data for feedback in a timely manner. Such feedback has more impact if combined with comparative peer analyses and “how to improve” information and used to develop multidisciplinary quality improvement plans.24
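The measure-and-feed-back loop described here can be sketched as a small aggregation over routinely collected records; the clinicians, records and indicator below are invented for illustration, and the comparative peer analysis uses a simple median:

```python
# Sketch of condition-specific clinical indicator feedback with a
# comparative peer analysis. All records are hypothetical.
from statistics import median

# Each record: (clinician, indicator_met) for one eligible patient
records = [
    ("Dr A", True), ("Dr A", True), ("Dr A", False), ("Dr A", True),
    ("Dr B", True), ("Dr B", False), ("Dr B", False),
    ("Dr C", True), ("Dr C", True), ("Dr C", True), ("Dr C", False),
]

# Tally indicator compliance per clinician
tallies = {}
for clinician, met in records:
    n_met, n_total = tallies.get(clinician, (0, 0))
    tallies[clinician] = (n_met + int(met), n_total + 1)

compliance = {c: m / n for c, (m, n) in tallies.items()}
peer_median = median(compliance.values())

# Feed back each clinician's rate alongside the peer comparison
for clinician, rate in sorted(compliance.items()):
    flag = "at/above peer median" if rate >= peer_median else "below peer median"
    print(f"{clinician}: {rate:.0%} ({flag})")
```

In practice the records would come from electronic medical records, discharge summaries or a clinical registry, and the feedback would be paired with "how to improve" information, as the text describes.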

3. Impaired ability or inadequate incentives to enact evidence-informed decisions

High-quality evidence, even if known and accepted by clinicians, may not guarantee appropriate clinical actions because of limited applicability of evidence to specific patient circumstances; limited clinician self-efficacy (ability to provide required care); professional norms, organisational structures and culture that oppose changes to traditional practice; and the extra effort, time and resources that more appropriate decisions may entail for both clinicians and patients.25,26 Clinicians may also perceive evidence as inferior to organisational clout in influencing policymakers involved in resource allocation decisions.


Provide incentives for clinicians to consistently adopt evidence-informed practice: Implementation science has yielded many evidence-based behaviour change strategies,27-30 which share three key elements: (i) raising awareness of evidence–practice gaps (creating impetus for change); (ii) devising, implementing, testing and refining change strategies (creating the “how-to”); and (iii) dealing with enablers and barriers (creating sustainability and widespread diffusion of change). Box 2 summarises change strategies relevant to clinical microsystems (general practice, specialist clinic, and hospital-unit or ward-based teams) or communities of practice (professional craft groups, networks and collaborations). Research suggests that such strategies can improve the proportion of patients who receive guideline-concordant care by between 6% and 16%.29 Engagement and leadership of clinicians are crucial; strategies unilaterally mediated by managers, such as publicly reported scorecards, service accreditation and pay-for-performance schemes, have shown little evidence of effect on quality of care.29

Align incentives/disincentives to support evidence-informed practice: Health care organisations can suffer loss of reputation, staff and revenue if their activities are perceived as being outdated. Studies from the United Kingdom and the United States show positive associations between science-based innovation and clinical performance among acute care hospitals.31,32 Box 3 details organisational-level strategies used to support evidence-based practice,33,34 which should be scrutinised by service accreditation programs.

Avoid financial reimbursement for interventions of nil or uncertain benefit: Interventions that robust evidence shows to be ineffective, or for which evidence of benefit is lacking, should not be subsidised by public or private health insurance programs. Despite the best efforts of the Pharmaceutical Benefits Advisory Committee and Medicare Services Advisory Committee, new uses and indications of existing drugs, tests and procedures commonly bypass such scrutiny.35

4. Inability to gain patient adherence

Health care effectiveness is considerably compromised by poor patient adherence to appropriate care, which currently averages no more than 50%.36
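Adherence estimates such as this are often derived from dispensing records; one common summary measure is the medication possession ratio (MPR), sketched below with invented data (the 80% threshold is a common but arbitrary convention, not drawn from the cited review):

```python
# Medication possession ratio (MPR): days of medication supplied divided
# by days in the observation period, capped at 1.0. All data invented.
def mpr(days_supplied, observation_days):
    """Crude adherence estimate from dispensing records."""
    return min(sum(days_supplied) / observation_days, 1.0)

# Three 30-day scripts dispensed over a 180-day observation window
ratio = mpr([30, 30, 30], 180)
print(f"MPR = {ratio:.0%}")   # 50%, in line with the average cited above
adherent = ratio >= 0.80      # commonly used (but arbitrary) threshold
print("Classified adherent:", adherent)
```

Metrics like this are imperfect (possession is not ingestion), but they make adherence measurable enough to evaluate the improvement strategies discussed below.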


Endorse strategies to improve patient adherence to agreed medical advice: Virtually all interventions for enhancing medication adherence that are at least partially effective involve combinations of simpler and more convenient dosing, reminders, self-management and reinforcement, and personalised outreach (phone calls or home visits).36

Enhance clinician skills in individualising patient care: Eliciting patients’ needs and preferences, providing personalised estimates of treatment risk and benefit, and identifying and ameliorating patient-perceived barriers to compliance improve patient acceptance of, and adherence to, management plans.37,38 Shared decision making (Box 4) helps patients to be more selective about treatments and invasive procedures, and helps reduce potentially inappropriate polypharmacy in older patients with multiple comorbidities.39 Where patients request ineffective interventions, clinicians should stand by the evidence and offer alternative care of proven value.40


A sustainable, best-value health care system requires awareness of, and system-wide commitment to, the barriers and enablers of evidence-informed care affecting individual clinicians and health care services. Generating and testing strategies for bridging evidence–practice gaps will be an ongoing need in rendering clinical medicine more effective.

1 Contemporary and widely used treatments that have been shown to be ineffective or harmful in many patients

  • Hormone replacement therapy: widely promoted for its possible preventive efficacy, but the Women’s Health Initiative showed net harms (JAMA 2002; 288: 321-333).

  • Arthroscopic lavage: popular treatment, but a randomised trial against sham arthroscopy showed no effect (N Engl J Med 2002; 347: 81-88).

  • Heart failure: popular treatment, but a 2005 trial showed increased mortality (JAMA 2005; 293: 1900-1905).

  • Corticosteroids in acute head injury: often given with the hope of reducing swelling, but a large randomised trial showed increased mortality (Lancet 2005; 365: 1957-1959).

  • Rosiglitazone: widely promoted as a new oral hypoglycaemic agent, but meta-analysis suggested an increase in heart failure and deaths (N Engl J Med 2007; 356: 2457-2471).

  • Vertebroplasty for osteoporotic fractures: wide uptake in the 2000s, but two randomised trials against a sham procedure showed no effect (BMJ 2011; 343: d3952).

  • Tight glucose control: guidelines had recommended progressively tighter glycated haemoglobin (HbA1c) limits, until three recent large randomised trials showed tight glucose control to be harmful or of no benefit (N Engl J Med 2011; 364: 818-828).

  • Self-monitoring of blood glucose levels: guidelines and diabetic educators recommend that patients regularly measure their blood sugar levels with glucometers, but meta-analysis showed this does not lower glycated haemoglobin in patients whose blood sugar levels are not widely fluctuating (BMJ 2012; 344: e486).

  • Percutaneous coronary intervention in stable coronary artery disease: high usage in patients with stable exertional angina was challenged by trials that showed no benefit compared with optimal medical therapy alone (Arch Intern Med 2012; 172: 312-319).

  • Percutaneous revascularisation for atherosclerotic renal artery stenosis: traditionally angioplasty and more recently stenting were used to slow decline in renal function and improve blood pressure control, but a randomised trial showed no benefit and an increased risk of procedure-related events (N Engl J Med 2009; 361: 1953-1962).

  • Early dialysis in end-stage renal failure: early initiation of dialysis was believed to improve patient outcomes, but a randomised trial showed no benefit (N Engl J Med 2010; 363: 609-619).

  • Strict heart rate control in chronic atrial fibrillation: guidelines recommended strict rate control, but a randomised trial showed lenient control was equally effective and easier to achieve (N Engl J Med 2010; 362: 1363-1373).

2 Behaviour change strategies for promoting wider use of evidence by individual clinicians in group practices and hospitals

Raising awareness of new practice-changing evidence and generating motivation to change

  • Present new evidence at journal clubs, grand rounds, conferences (ideally interdisciplinary); raise common problems in the format of clinical questions; request expert facilitators to provide evidence supporting their recommendations (evidence-pull strategy).

  • Raise awareness of new sentinel evidence (that discredits existing clinical practices or strongly mandates new practice) using journal-scanning services (evidence-push strategy).

  • Raise awareness of disagreement between peers about what constitutes appropriate care for a commonly encountered clinical problem; convene focus groups that aim to achieve an evidence-based consensus.

  • Publicise and use role models and senior opinion leaders who actively retrieve and apply evidence in changing routine care.

  • Arrange interactive workshops or online courses that aim to improve evidence-based practice skills.

  • Use academic detailing, train-the-trainer strategies, quality circles and learning collaboratives to educate wider audiences about desirable changes in practice.

  • Develop and format guidelines, pathways and toolkits that make it easier for clinicians to learn new changes in practice.

Devising and implementing change on the basis of evidence

  • Convene interdisciplinary meetings to formulate evidence-based best-practice standards, identify evidence–practice gaps from audits and consider practice changes to close the gaps.

  • Conduct focus group discussions and undertake process mapping to identify evidence–practice gaps and predisposing contextual factors.

  • Assess readiness for change within clinician groups and identify implementation barriers.

  • Review available evidence about others’ experience in instituting practice changes and undertake site visits to locations where similar implementation efforts have been successful.

  • Recruit, designate and train clinical leaders for the change effort; build partnerships with agencies (eg, academic units, quality and safety improvement units) for shared training and research collaborations.

  • Test feasibility of changes to practice at the local level with simulation or modelling exercises, small pilots or demonstration projects before full-scale roll-out.

Sustaining and embedding change into routine care

  • Cultivate and support experts, clinical champions and respected peers who encourage change.

  • Involve existing governance structures (eg, clinical councils, medical staff associations, practice boards) in providing oversight and support for full-scale implementation.

  • Seek advocacy for change from informed patients, carers and patient advocates, and involve them in implementation efforts.

  • Disseminate results of change to others (eg, through small group discussion, clinical presentations, newsletters, mass media) to motivate others to introduce change.

  • Use evidence-based clinical pathways, checklists, reminders, prompts, teaching resources and other decision support to reinforce change.

  • Develop and implement quality monitoring systems that allow an ongoing audit of practice and effects of change implementation, and ensure results are widely distributed.

  • Revise professional roles and reconfigure clinical teams in ways that make it more likely for clinical innovation to be achieved.

3 Organisational strategies for supporting evidence-based practice

Active commitment and support from senior managers (in hospitals, general practice, specialist group practices and public health organisations)

  • Openly endorse evidence-based learning and continuous quality improvement within mission statements and operations of boards, directorships and other subagencies of the organisation.

  • Provide dedicated time, physical resources and remuneration for clinicians to practise evidence-based care; schedule 10% of normal working hours to be spent on evidence-based practice-related activities (eg, journal clubs, clinical audits, quality and safety reviews).

  • Recognise those who have championed evidence-based practice within the organisation through recognition awards, sponsorship of presentations at professional meetings or credits for participation in professional development courses.

Use of evidence to inform care delivery

  • Establish interdisciplinary panels for developing and updating evidence-based clinical standards, guidelines or pathways applicable to key areas of practice within the organisation.

  • Establish organisation-wide literature search services that staff can use to retrieve relevant high-quality evidence in answering important clinical questions.

  • Sponsor clinician-led, organisation-wide restructuring of care processes and service delivery systems in accordance with evidence of effectiveness in optimising care.

  • Mandate that submissions for new clinical technologies or services include a rationale based on a systematic review of evidence of effectiveness compared with existing care.

  • Develop payment formulae that fully remunerate high-value evidence-based practice while disinvesting in interventions that robust evidence shows to be of no or very marginal benefit.

  • Deploy performance appraisal and credentialling policies that restrict the scope of practice of clinicians whose practice is consistently in violation of accepted evidence-based standards.

  • Waive professional indemnity from litigation in cases where care resulting in serious patient harm was in clear violation of accepted evidence-based standards.

Alignment of evidence-based practice with quality and safety improvement

  • Foster wide recognition that evidence-based practice and quality and safety improvement complement and reinforce one another.

  • Establish clinician-led quality and safety teams at the level of group practices and hospital units or departments to identify and remediate shortfalls in care according to the best available evidence.

  • Provide the necessary infrastructure for measuring and providing feedback on defined sets of key clinical indicators for commonly encountered conditions and procedures.

  • Support the creation of clinical registries that monitor care processes and outcomes relating to key areas of practice within the organisation.

  • Maintain an up-to-date inventory of evidence-based quality and safety improvement interventions relevant to key areas of practice within the organisation.

  • Participate in evidence-based quality and safety improvement collaborations with other like-minded organisations.

  • Emphasise the integration of an evidence-based quality and safety improvement framework with mainstream care when seeking organisational accreditation.

4 Key questions that characterise shared decision making*

  • What do you expect from investigation and/or treatment of your condition?

  • Do you have all the information you think you need to weigh up the various options?

  • Thinking about this decision, what is the most important aspect for you to consider?

  • What aspects of management (eg, tests, drugs, procedures or surgery) are you most concerned about?

  • How do the benefits of the various options compare? And how do the harms compare?

  • Are there other important people you want to talk to in making this decision?

*Adapted from Stiggelbout et al.38

Provenance: Not commissioned; externally peer reviewed.

  • Ian A Scott1
  • Paul P Glasziou2

  • 1 Department of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, QLD.
  • 2 Centre for Research in Evidence-Based Practice, Bond University, Gold Coast, QLD.


Competing interests:

No relevant disclosures.

  • 1. Scott IA, Glasziou PP. Improving the effectiveness of clinical medicine: the need for better science. Med J Aust 2012; 196: 304-308.
  • 2. Glasziou P, Haynes B. The paths from research to improved health outcomes. Evid Based Med 2005; 10: 4-7.
  • 3. Mickan S, Burls A, Glasziou P. Patterns of ‘leakage’ in the utilisation of clinical guidelines: a systematic review. Postgrad Med J 2011; 87: 670-679.
  • 4. Runciman WB, Hunt TD, Hannaford NA, et al. CareTrack: assessing the appropriateness of health care delivery in Australia. Med J Aust 2012; 197: 100-105.
  • 5. Scott IA. The evolving science of translating research evidence into clinical practice. Evid Based Med 2007; 12: 4-7.
  • 6. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009; 4: 50. doi: 10.1186/1748-5908-4-50.
  • 7. Prasad V, Cifu A, Ioannidis JPA. Reversals of established medical practices: evidence to abandon ship. JAMA 2012; 307: 37-38.
  • 8. Brownlee S. Overtreated: why too much medicine is making us sicker and poorer. New York: Bloomsbury, 2007.
  • 9. McKibbon KA, Wilczynski NL, Haynes RB. What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals? BMC Med 2004; 2: 33-45.
  • 10. Guyatt GH, Meade MO, Jaeschke RZ, et al. Practitioners of evidence-based care. Not all clinicians need to appraise evidence from scratch but all need some skills. BMJ 2000; 320: 954-955.
  • 11. Straus SE, Ball C, Balcombe N, et al. Teaching evidence-based medicine skills can change practice in a community hospital. J Gen Intern Med 2005; 20: 340-343.
  • 12. Shuval K, Linn S, Brezis M, et al. Association between primary care physicians’ evidence-based medicine knowledge and quality of care. Int J Qual Health Care 2010; 22: 16-23.
  • 13. Flores-Mateo G, Argimon JM. Evidence based practice in postgraduate healthcare education: a systematic review. BMC Health Serv Res 2007; 7: 119-126.
  • 14. Coomarasamy A, Khan KS. What is the evidence that post-graduate teaching in evidence-based medicine changes anything? A systematic review. BMJ 2004; 329: 1017.
  • 15. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 2005; 330: 765.
  • 16. Scott IA, Guyatt GH. Clinical practice guidelines: the need for greater transparency in formulating recommendations. Med J Aust 2011; 195: 29-33.
  • 17. Thomson R, McElroy H, Sudlow M. Guidelines on anticoagulant treatment in atrial fibrillation in Great Britain: variation in content and implications for treatment. BMJ 1998; 316: 509-513.
  • 18. Bosch M, Tavender E, Bragge P, et al. How to define ‘best practice’ for use in Knowledge Translation research: a practical, stepped and interactive process. J Eval Clin Pract 2012; Apr 9. [Epub ahead of print.] doi: 10.1111/j.1365-2753.2012.01835.x.
  • 19. Scott IA, Denaro CP, Bennett CJ, Mudge AM. Towards more effective use of decision support in clinical practice: what the guidelines for guidelines don’t tell you. Intern Med J 2004; 34: 492-500.
  • 20. Scott IA, Derhy PH, O’Kane D, et al; CPIC Cardiac Collaborative. Discordance between level of risk and intensity of evidence-based treatment in patients with acute coronary syndromes. Med J Aust 2007; 187: 153-159.
  • 21. Scott IA, Jayathissa S. Quality of drug prescribing in older patients: is there a problem and can we improve it? Intern Med J 2010; 40: 7-18.
  • 22. Chew DP, Aroney CN, Aylward PE, et al. 2011 addendum to the National Heart Foundation of Australia/Cardiac Society of Australia and New Zealand guidelines for the management of acute coronary syndromes (ACS) 2006. Heart Lung Circ 2011; 20: 487-502.
  • 23. Wolff AM, Taylor SA, McCabe JF. Using checklists and reminders in clinical pathways to improve hospital inpatient care. Med J Aust 2004; 181: 428-431.
  • 24. de Vos M, Graafmans W, Kooistra M, et al. Using quality indicators to improve hospital care: a review of the literature. Int J Qual Health Care 2009; 21: 119-129.
  • 25. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA 1999; 282: 1458-1465.
  • 26. Lugtenberg M, Zegers-van Schaick JM, Westert GP, Burgers JS. Why don’t physicians adhere to guideline recommendations in practice? An analysis of barriers among Dutch general practitioners. Implement Sci 2009; 4: 54-62.
  • 27. Grol R, Wensing M, Eccles M, editors. Improving patient care: the implementation of change in clinical practice. Edinburgh: Elsevier, 2005.
  • 28. Straus S, Tetroe J, Graham ID, editors. Knowledge translation in health care: moving from evidence to practice. Hoboken, NJ: Wiley-Blackwell, 2009.
  • 29. Scott IA. What are the most effective strategies for improving quality and safety of health care? Intern Med J 2009; 39: 389-400.
  • 30. Powell BJ, McMillen JC, Proctor EK, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev 2012; 69: 123-157.
  • 31. Salge TO, Vera A. Hospital innovativeness and organizational performance: evidence from English public acute care. Health Care Manage Rev 2009; 34: 54-67.
  • 32. Bonis PA, Pickens GT, Rind DM, Foster DA. Association of a clinical knowledge support system with improved patient safety, reduced complications and shorter length of stay among Medicare beneficiaries in acute care hospitals in the United States. Int J Med Inform 2008; 77: 745-753.
  • 33. Lukas CV, Holmes SK, Cohen AB, et al. Transformational change in health care systems: an organizational model. Health Care Manage Rev 2007; 32: 309-320.
  • 34. Rundall TG, Martelli PF, Arroyo L, et al. The informed decisions toolbox: tools for knowledge transfer and performance improvement. J Healthc Manag 2007; 52: 325-341; discussion 341-342.
  • 35. Elshaug AG, Moss JR, Littlejohns P, et al. Identifying existing health care services that do not provide value for money. Med J Aust 2009; 190: 269-273.
  • 36. Haynes RB, Ackloo E, Sahota N, et al. Interventions for enhancing medication adherence. Cochrane Database Syst Rev 2008; (2): CD000011.
  • 37. Weiner SJ, Schwartz A, Weaver F, et al. Contextual errors and failures in individualizing patient care: a multicenter study. Ann Intern Med 2010; 153: 69-75.
  • 38. Stiggelbout AM, Van der Weijden T, De Wit MP, et al. Shared decision making: really putting patients at the centre of healthcare. BMJ 2012; 344: e256.
  • 39. Scott IA, Gray LC, Martin JH, Mitchell CA. Effects of a drug minimization guide on prescribing intentions in elderly persons with polypharmacy. Drugs Aging 2012; 29: 659-667. doi: 10.2165/11632600-000000000-00000.
  • 40. Brett AS, McCullough LB. Addressing requests by patients for nonbeneficial interventions. JAMA 2012; 307: 149-150.

