
National health reform needs strategic investment in health services research

Jane P Hall and Rosalie C Viney
Med J Aust 2008; 188 (1): 33-35. doi: 10.5694/j.1326-5377.2008.tb01502.x
Published online: 7 January 2008

The recently proposed federal government takeover of a state hospital and the federal opposition’s response have focused attention on reform of Australia’s health care financing. Plans of this nature are being drawn up largely uninformed by evidence. The Grant report on health and medical research,1 released in 2004, yet again pointed to the underdevelopment of health services research in Australia. The need for Australia to invest in its own capacity to study the organisation, delivery and financing of health care has been recognised by all reviews of health and medical research conducted over the past 20 years,2 and by expert observers and commentators.3 As the National Health and Medical Research Council (NHMRC) gears up for a major initiative in research that aims to provide an evidence base for policy and practice reform,4 it is worth considering where Australian health services research has had an impact, and the conditions which facilitated this.

While some major policy issues have received little research attention, there are several examples where health services research has made a substantial contribution. We have identified casemix classification, coordinated care trials, development of cost-effectiveness requirements for the Pharmaceutical Benefits Scheme (PBS) and the health workforce as areas from which we might draw lessons about what does and does not work in promoting health services research capacity and activity.

Casemix

Casemix classification5 emerged in the 1980s as a measurement tool that could explain variations in hospital lengths of stay and costs. Its early development, in Australia and other countries, was initiated by academic researchers, with early Australian research followed by growing interest,6 as shown by publications, presence in conference programs and research funding. The research was given great impetus by national leadership and coordination,7 and by the provision of substantial and sustained funding which was initiated in the 1988–89 Medicare Agreement.8 Australian researchers developed a specifically Australian casemix classification — Australian Refined Diagnosis Related Groups (AR-DRGs) — which has been adopted in many other countries, including New Zealand, Ireland and Germany. Thus, funding was provided to consolidate and extend work already underway (rather than for another new idea), building on an environment which was ready for implementation of this approach to hospital funding. The research ultimately had a major impact on health services organisation and delivery through implementation of new funding arrangements, first in Victoria in 1993, and subsequently in most other states.

The coordinated care trials

The coordinated care trials were established in the mid 1990s. There had been numerous commentaries about the inefficiencies engendered by the split in federal–state responsibilities for health care.9 The coordinated care trials were developed to test whether pooling funds from federal and state sources into a common budget would offer better service delivery, improved health outcomes and greater efficiency. They were designed to meet local needs, rather than the demands of research rigour. A commitment to evaluation was made from the outset, both at the local project level and as a national program, but the trial design preceded the evaluation design. Results were equivocal, in terms of health outcomes, costs and financial viability;10 consequently, there was little direct impact on policy. Many factors are considered to have contributed to the limited achievement of the desired objectives of funds pooling, including the short time frame over which trials were run, insufficient preparatory work in developing budgets, the weakness of incentives as budget holding was notional, and lack of experience of case coordinators. This was a major health services research effort with national leadership and funding,11 similar to casemix classification. It did bring together a nationwide network of researchers and evaluators, but with relatively short time frames for design, implementation and evaluation. Unlike casemix classification, this program did not draw on an existing body of work developed by Australian researchers. Rather, the research effort was initiated by the policy response, but the research was limited to measuring trial outcomes and processes. In contrast, the RAND Health Insurance Experiment in the United States not only measured the outcomes of different insurance packages, it also included a major investment in the development of methods,12 such as the SF-36 questionnaire13 for measuring relevant health outcomes. Methodological advances, as well as the results of the experiment per se, have had a major impact on the field.

Pharmaceutical cost-effectiveness

Australia was the first country to require that the cost-effectiveness of new drug therapies be considered explicitly before a drug is added to the list eligible for government subsidisation. Submissions to the Pharmaceutical Benefits Advisory Committee (PBAC), prepared by the manufacturer, are required to provide evidence of the drug’s safety, effectiveness and cost-effectiveness, and are extensively reviewed by an independent advisory committee assisted by a team of independent evaluators.14 The impetus for the new policy was the recognised need to determine whether the health gain delivered by a new expensive pharmaceutical justified its addition to the PBS. This regulatory requirement has provided the impetus for the development of pharmacoeconomics capacity within academic units in universities, within pharmaceutical companies and independent consultancies, and within government. Implementing the policy has required the development of guidelines for consistent methods and standardised costs, as well as the development of the human expertise and capacity to review submitted evidence. The requirement for including a cost-effectiveness analysis was a development of an existing policy mechanism — the PBAC consideration of safety and effectiveness. It drew on well established methods accepted internationally, and a small but significant research capacity in Australia (there were 33 published economic evaluations in the period 1978–1993).15

Workforce planning

Workforce planning is another area where research evidence is potentially relevant, as the challenge is to ensure adequacy of an appropriately trained workforce without major undersupply or oversupply.16 A national structure to undertake health workforce planning has been in place since 1995, initially covering the medical workforce, and then extended to nursing and allied health professions. In spite of this, Australia is facing severe shortages of trained nurses, doctors in primary care and some medical specialties, and professionals in some allied health fields. Workforce research has also been initiated by various inquiries, as well as the national workforce planning agencies. Traditional workforce planning approaches rely on simple projections of demand and supply; government-led planning which emphasises cooperation of the various stakeholders cannot readily encompass more innovative approaches to increasing flexibility and productivity.17 Yet, in this important area of research, with notable exceptions,18 there has been little independent or investigator-initiated research in Australia.

Underpinning effective health services reform

So, what lessons can we draw from these experiences? One factor that may have influenced the outcomes is the time frame over which they were expected to influence policy. In the case of casemix, Australia made a sustained investment over 5 years through the Australian Health Care Agreements before expecting it to have a major impact on policy. This was important, as it allowed the development of trust and confidence in the system by clinicians and policymakers. It was also supported by significant investment in dissemination of the methods used, developments in research, and how casemix was being applied through the Australian Government’s sponsorship of a series of casemix conferences over several years.

Similarly, the use of economic evaluation by the PBAC was preceded by widespread consultation with industry, the development of briefing and technical papers, and a substantial lead-in trial period in which the guidelines and processes were developed before economic evaluation was made mandatory. This took several years. By contrast, the coordinated care trials were designed, implemented, undertaken and evaluated all over a relatively short period that was unlikely to lead to the same level of confidence in their operation or their outcomes.

Workforce planning has been a policy issue over the past decade, but, unlike PBAC economic evaluation and casemix funding, there seems to have been little interest or capacity within the research community to study workforce issues, although there have been numerous reports commissioned by the planning agencies and other inquiries. Unlike the other three case studies, it has not been clear how workforce research would impact on current decision-making or funding mechanisms.

These examples reveal three preconditions that are necessary for health services research to have an impact on policy. Political will is important; this means both a receptiveness to new ideas and the readiness to change the status quo. This must be combined with an appropriate and sustained level of investment over time, and a time frame that allows for the development of rigour in the research and confidence from policymakers and players in the health system. The third and critical precondition that is evident across the success stories is sufficient capacity among independent academic researchers to provide the evidence base that supports and informs the policy initiatives of government.

The need for research capacity

Why has the health services research sector in Australia not been better developed, given the recommendations made in the various reviews of research funding, and the international recognition which Australian health and medical research has achieved? We suggest that there are a number of reasons.

First, the disappearance of specific funding for investigator-initiated health services research projects, and the shift within the NHMRC to a stronger focus on funding around specific diseases and body systems, have discouraged many health services researchers from applying. At the same time, there is growing demand from government departments, other funders of health care, and provider bodies for contract research, which, as a result of both limited funding and confidentiality provisions, is less frequently published.

Second, public health developments supported by the Public Health Education and Research Program have emphasised prevention, health promotion, and the social determinants of disease; these initiatives have generally overlooked health services research. At the same time, university budgets have been contracting, and this has limited the expansion of emerging fields and new continuing appointments. As a result, there are fewer opportunities for an academic career, so fewer health services researchers are engaged in teaching, leading to fewer students being attracted to health services research.

Third, health services researchers have been spread through different academic units, private firms, and small independent consultancies, while government health department restructures have generally limited their investment in these skills, preferring to outsource these functions. This has hindered the development of critical mass. These problems can only be overcome through a comprehensive strategy of capacity building that invests in centres of excellence, research funding, and people support.

  • Jane P Hall
  • Rosalie C Viney

  • Centre for Health Economics Research and Evaluation, University of Technology, Sydney, NSW.


Correspondence: Jane.Hall@chere.uts.edu.au

Competing interests:

None identified.

  • 1. Grant J (Chairman). Sustaining the virtuous cycle for a healthy, competitive Australia. Investment review of health and medical research. Final report. December 2004. Canberra: Commonwealth of Australia, 2004. http://researchaustralia.org/files/IRHMR_Final_Report.pdf (accessed Oct 2007).
  • 2. Hall J. Health services research in Australia. Aust Health Rev 2001; 24: 35-38.
  • 3. Van Der Weyden MB. Australian health policy research and development: where is it [editorial]? Med J Aust 2002; 177: 586.
  • 4. Anderson W. CEO’s newsletter. Canberra: National Health and Medical Research Council, June 2007. http://www.nhmrc.gov.au/about/org/ceo/newsletters/previous/_files/0607.doc (accessed Aug 2007).
  • 5. Duckett SJ. Casemix funding for acute hospital inpatient services in Australia. Med J Aust 1998; 169 (8 Suppl): S17-S21.
  • 6. Stoelwinder J, Viney R. A tale of two States: New South Wales and Victoria. In: Bloom A, editor. Health reform in Australia and New Zealand. Melbourne: Oxford University Press, 2000.
  • 7. Podger A, Hagan P. Reforming the Australian health care system: the role of government. In: Bloom A, editor. Health reform in Australia and New Zealand. Melbourne: Oxford University Press, 2000.
  • 8. Palmer GR, Short SD. Health care and public policy: an Australian analysis. 3rd ed. Melbourne: Macmillan Education Australia, 2000.
  • 9. Paterson J. National healthcare reform: the last picture show. Melbourne: Victorian Government Department of Human Services, 1996.
  • 10. Commonwealth Department of Health and Aged Care. The Australian Coordinated Care Trials: final technical national evaluation report on the first round of trials. Canberra: AGPS, 2001.
  • 11. Duckett SJ. The Australian health care system. 2nd ed. Melbourne: Oxford University Press, 2004.
  • 12. Newhouse JP; Insurance Experiment Group. Free for all?: lessons from the RAND Health Insurance Experiment. Cambridge, Mass: Harvard University Press, 1993.
  • 13. Ware JE Jr, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). 1. Conceptual framework and item selection. Med Care 1992; 30: 473-483.
  • 14. Henry DA, Hill SR, Harris A. Drug prices and value for money: the Australian Pharmaceutical Benefits Scheme. JAMA 2005; 294: 2630-2632.
  • 15. Salkeld G, Davey P, Arnolda G. A critical review of health-related economic evaluations in Australia: implications for health policy. Health Policy 1995; 31: 111-125.
  • 16. Australian Government Productivity Commission. Australia’s health workforce: research report. Canberra: Commonwealth of Australia, 2005.
  • 17. Hall J. Health care workforce planning: can it ever work [editorial]? J Health Serv Res Policy 2005; 10: 65-66.
  • 18. Joyce CM, McNeil JJ, Stoelwinder JU. More doctors, but not enough: Australian medical workforce supply 2001–2012. Med J Aust 2006; 184: 441-446.
