
Setting Medical Research Future Fund priorities: assessing the value of research

Haitham W Tuffaha, Lazaros Andronis and Paul A Scuffham
Med J Aust 2017; 206 (2): 63-65. doi: 10.5694/mja16.00672
Published online: 6 February 2017

Quantitative and qualitative considerations are essential for research funding efficiency

In its 2014–2015 budget, the Australian Government announced the establishment of the $20 billion Medical Research Future Fund (MRFF).1 The MRFF aims to support health and medical research in Australia to drive innovation, improve the delivery of health care, enhance the efficiency and effectiveness of the health system, and contribute to economic growth.1 In April 2016, the government announced the creation of the Australian Medical Research Advisory Board to determine the medical research strategy and priorities that will guide the funding allocated through the MRFF. Although its mission is clear, the board faces the challenging task of identifying research priorities and allocating the available budget across topics and programs competing for funding. The criteria for identifying research priorities to guide government decision making on program-level funding, as set out in the MRFF legislation, focus on the ability of research programs to deliver the greatest value for as many Australians as possible.1 However, there is little mention of how the value of research programs would be objectively, transparently and practically assessed to inform research prioritisation and ensure efficient use of the MRFF budget.

Common approaches to research prioritisation in Australia

Priority areas for medical research in Australia are typically identified through consultation with major stakeholders, including research funding organisations (eg, the National Health and Medical Research Council [NHMRC] and the Australian Research Council [ARC]), researchers, and patients and their representatives. Measures of the burden of disease are often considered during this process, based on the notion that focusing research on diseases with high prevalence and cost will deliver high societal value. However, there is often a chasm between these national priority areas and the bottom-up approach, whereby individual researchers submit grant applications on topics of their own interest and compete with other researchers for funding from a limited budget.

Decisions on which specific research programs (eg, clinical trials) to fund are usually made based on assessments of the merits of the submitted research proposals (eg, scientific rigour, strength of the research team), according to the opinions and judgments of experts sitting on funding panels. However, this approach relies on panel members’ inherently subjective views of the potential value of a piece of research, with little or no reference to explicit estimates of the incremental costs and benefits of the proposed research programs. In addition, there is potential for research duplication due to the lack of coordination across panels in the various funding organisations. Research duplication can also happen when funding is granted to research projects to generate evidence that could be sourced from relevant international research. In such cases, resources may be better deployed on other studies or activities, such as dissemination and implementation of findings.

To maximise benefits from research budgets, funding decisions should be based on each research proposal’s ability to provide the best value for money, based on explicit evidence on the proposals’ cost and potential benefits.2-5 Even when the research project is on a disease with high burden, it may not be worthwhile if the expected costs of conducting a research study exceed its expected benefits. Similar assessments of benefits and costs have been the standard in guiding funding decisions of other health care investments in Australia (eg, pharmaceuticals and health services). There is, therefore, no reason why research funding should not be subjected to the same scrutiny to achieve efficiency in spending public funds.

Analytical approaches for assessing the value of research

A number of analytic approaches have been proposed to quantify the value of research programs, particularly research intended to evaluate health care interventions (eg, clinical trials and observational studies). These approaches estimate the expected benefits of research in improving health care, expressed as improved health outcomes (eg, survival) or as monetary benefit, using a willingness-to-pay value for an additional unit of health outcome (eg, $50 000 per life year gained). The underlying principle in such approaches is that the overall value of a specific research program can be assessed by comparing its cost against its expected benefits. Research costs include the direct costs of setting up a study and recruiting individuals, and the opportunity costs for the population who will not benefit from research until the results are implemented. The proposals with the highest expected net benefit would constitute good candidates for research.6 Two key analytical approaches are the prospective payback of research (PPoR) and the value of information (VoI).6,7

A number of models following the principles of the PPoR approach have been put forward over the past 30 years. Under this approach, the value of a research study is typically inferred from its ability to produce a beneficial change in clinical practice.2,7 In essence, the expected value of research depends on its possible findings, the expected extent of the change in practice triggered by those findings, and the size of the population expected to benefit from that change.5 The approach is based on well established principles of economic impact analysis, is relatively straightforward and can be undertaken within narrow time frames.8 However, it has been argued that changing clinical practice can be achieved in other ways, and undertaking research may not be the most cost-effective approach.2 Moreover, because of the way PPoR estimates the value of research, the approach may favour prioritising research in areas where there is great scope for change in clinical practice over areas where there is great need for information but a smaller opportunity for improvements in clinical practice.5 The approach does not assess whether there is a need for a given research program by considering the level of uncertainty in the available evidence (ie, how much evidence already exists), which may lead to funding unnecessary research.
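The payback logic described above can be sketched as a simple expected net benefit calculation. The figures and the function below are entirely hypothetical and for illustration only; a real PPoR model would consider multiple possible study findings, the timing of uptake and discounting.

```python
def expected_payback(p_favourable, uptake, qaly_gain_per_patient,
                     population, wtp_per_qaly, research_cost):
    """Expected net monetary payback of a proposed study (illustrative).

    p_favourable: probability the study yields practice-changing findings
    uptake: expected proportion of practice that changes in response
    qaly_gain_per_patient: health gain per treated patient (in QALYs)
    population: patients expected to benefit over the decision horizon
    wtp_per_qaly: willingness to pay per QALY (eg, $50 000)
    research_cost: direct cost of conducting the study
    """
    expected_benefit = (p_favourable * uptake * qaly_gain_per_patient
                        * population * wtp_per_qaly)
    return expected_benefit - research_cost

# A trial with a 40% chance of changing practice for 60% of 10 000 patients,
# each gaining 0.05 QALYs valued at $50 000, at a research cost of $3 million:
payback = expected_payback(0.4, 0.6, 0.05, 10_000, 50_000, 3_000_000)
print(f"Expected net payback: ${payback:,.0f}")
```

Note how the calculation rewards a large population and a high chance of practice change, but takes no account of how uncertain the existing evidence is, which is the limitation the VoI approach addresses.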

VoI is an alternative quantitative approach to research prioritisation that has received increasing attention. The method has firm foundations in statistical decision theory and provides a systematic approach to estimating the expected value of acquiring new evidence to inform a decision problem.3,6 It considers the uncertainty in the relevant available evidence, the consequences of this uncertainty (ie, the cost of making a wrong decision), the population that would benefit from the results of the intended research, and the expected cost of the research.3,4,6 Thus, VoI considers both the burden of the disease and the uncertainty in existing evidence in advising whether additional research is potentially worthwhile. This is essential for reducing research duplication and wastage by directing research funds to worthy research programs, and for enhancing equity by improving the chances of funding for programs studying rare diseases, where the population is small but the information need is high. Moreover, value of research estimates obtained using VoI can be adjusted for the expected level of implementation, to reflect the impact of research findings on real-world practice.3,9

VoI analysis is typically conducted alongside economic evaluations of new technologies and health services to inform funding decisions, mainly using decision-analytic models and computer simulation. Nevertheless, great advances have been made in simplifying VoI computation.10 For instance, the Agency for Healthcare Research and Quality in the United States has issued a working paper on research prioritisation using VoI with minimal modelling.11 Moreover, Claxton and colleagues12 demonstrated how VoI analysis can be used to estimate the value of additional research directly from systematic reviews and meta-analyses. A number of research prioritisation initiatives have tested VoI worldwide. The first application was in 2004, through two pilot projects in the United Kingdom: one for the National Coordinating Centre for Health Technology Assessment and another for the National Institute for Health and Care Excellence.2 In Australia, we have applied VoI analysis to a range of research projects under an NHMRC-funded centre for research excellence, and we have demonstrated the value and practicality of the approach in prioritising research and optimising trial design.13 In the US, Carlson and colleagues14 reported the outcomes of incorporating VoI analysis into a stakeholder-driven research prioritisation process within a program to establish comparative effectiveness research in cancer genomics. In addition, Bennette and colleagues15 developed and applied an efficient, customised VoI-based process to prioritise cancer clinical trials within the Southwest Oncology Group.
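The core quantity in many VoI analyses is the expected value of perfect information (EVPI): the difference between the net benefit achievable if uncertainty were fully resolved and the net benefit of deciding on current evidence. The following Monte Carlo sketch uses an entirely hypothetical incremental net monetary benefit distribution; in a real analysis this distribution would come from a decision-analytic model or a meta-analysis, as in the approaches cited above.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Hypothetical decision: adopt a new treatment versus current care.
# The incremental net monetary benefit (INMB) per patient is uncertain;
# assume it is normally distributed with mean $500 and SD $2 000.
inmb = rng.normal(loc=500, scale=2_000, size=n)

# With current information we choose the option with the highest expected
# net benefit (adopt if mean INMB > 0). With perfect information we would
# choose the best option for each realisation of the uncertainty.
nb_current_info = max(inmb.mean(), 0.0)
nb_perfect_info = np.maximum(inmb, 0.0).mean()

evpi_per_patient = nb_perfect_info - nb_current_info

# Scale to the population expected to benefit (say, 20 000 patients):
population_evpi = evpi_per_patient * 20_000
print(f"Per-patient EVPI: ${evpi_per_patient:,.0f}")
print(f"Population EVPI: ${population_evpi:,.0f}")
```

If the population EVPI exceeds the expected cost of the proposed study, further research is potentially worthwhile; if not, the study cannot be justified on value of information grounds, however large the disease burden.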

The way forward

The MRFF Advisory Board needs to develop innovative and flexible frameworks within which research priorities can be set. A preferred framework would combine quantitative and qualitative considerations to ensure that research funding is efficient, sustainable and equitable, while remaining responsive to the clinical need for high quality and innovative medical research. One option would be to use consultations with major stakeholders and considerations of the burden of disease to identify the broad areas of research funding priority, and to use VoI analysis to assess the value of research programs within each priority topic. To further reduce the burden of VoI analysis, this approach could be reserved for the most costly research projects. Committee discussions could then refine decisions, allowing for additional attributes such as capacity building or targeting disadvantaged groups.


Provenance: Not commissioned; externally peer reviewed.

  • Haitham W Tuffaha1
  • Lazaros Andronis2
  • Paul A Scuffham3

  • 1 Centre for Applied Health Economics, Griffith University, Brisbane, QLD
  • 2 Institute of Applied Health Research, University of Birmingham, Birmingham, United Kingdom
  • 3 Menzies Health Institute Queensland, Griffith University, Gold Coast, QLD


Competing interests: No relevant disclosures.

  • 1. Australian Government, Department of Health [website]. Medical Research Future Fund. Canberra: Commonwealth of Australia; 2016. http://www.health.gov.au/internet/main/publishing.nsf/Content/mrff (accessed May 2016).
  • 2. Claxton KP, Sculpher MJ. Using value of information analysis to prioritise health research: some lessons from recent UK experience. Pharmacoeconomics 2006; 24: 1055-1068.
  • 3. Eckermann S, Karnon J, Willan A. The value of value of information: best informing research design and prioritization using current methods. Pharmacoeconomics 2010; 28: 699-709.
  • 4. Tuffaha HW, Gordon LG, Scuffham PA. Value of information analysis in healthcare: a review of principles and applications. J Med Econ 2014; 17: 377-383.
  • 5. Andronis L. Analytic approaches for research priority-setting: issues, challenges and the way forward. Expert Rev Pharmacoecon Outcomes Res 2015; 15: 745-754.
  • 6. Claxton K, Posnett J. An economic approach to clinical trial design and research priority-setting. Health Econ 1996; 5: 513-524.
  • 7. Townsend J, Buxton M, Harper G. Prioritisation of health technology assessment. The PATHS model: methods and case studies. Health Technol Assess 2003; 7: 1-82.
  • 8. Andronis L, Billingham LJ, Bryan S, et al. A practical application of value of information and prospective payback of research to prioritize evaluative research. Med Decis Making 2016; 36: 321-334.
  • 9. Andronis L, Barton P. Adjusting estimates of the expected value of information for implementation: theoretical framework and practical application. Med Decis Making 2016; 36: 296-307.
  • 10. Tuffaha HW, Strong M, Gordon L, Scuffham P. Efficient value of information calculation using a non-parametric regression approach: an applied perspective. Value Health 2016; 19: 505-509.
  • 11. Meltzer DO, Hoomans T, Chung JW, Basu A. Minimal modeling approaches to value of information analysis for health research. Med Decis Making 2011; 31: 1-22.
  • 12. Claxton K, Griffin S, Koffijberg H, McKenna C. How to estimate the health benefits of additional research and changing clinical practice. BMJ 2015; 351: h5987.
  • 13. Tuffaha HW, Gordon LG, Scuffham P. Value of information analysis informing adoption and research decisions in a portfolio of health care interventions. MDM Policy Practice 2016; 1: 1-11.
  • 14. Carlson JJ, Thariani R, Roth J, et al. Value-of-information analysis within a stakeholder-driven research prioritization process in a US setting: an application in cancer genomics. Med Decis Making 2013; 33: 463-471.
  • 15. Bennette CS, Veenstra DL, Basu A, et al. Development and evaluation of an approach to using value of information analyses for real-time prioritization decisions within SWOG, a Large Cancer Clinical Trials Cooperative Group. Med Decis Making 2016; 36: 641-651.
