
The Research Quality Framework and its implications for health and medical research: time to take stock?

Louise G Shewan and Andrew J S Coats
Med J Aust 2006; 184 (9): 463-466. doi: 10.5694/j.1326-5377.2006.tb00320.x
Published online: 1 May 2006

Research evaluation is a hot topic across the world as many economies seek to implement research performance exercises to meet the growing demands for accountability and to drive funding allocation.1 Universities are the custodians of significant amounts of publicly invested research funding and, as such, it is incumbent upon the sector to maximise the value of that investment over the longer term. Australian universities, operating with impoverished infrastructure and a declining proportion of government support despite a sizeable and growing federal budget surplus, are fully aware that further taxpayer funding will demand greater accountability. With the release of the Expert Advisory Group’s (EAG) refined Research Quality Framework (RQF) measurement schema (Box 1),2 it is timely to review the outcomes of similar assessment exercises internationally before any final decision is taken.

The crucial question is whether the RQF is the most appropriate and cost-effective mechanism to achieve this. If we cannot at this late stage say with any confidence whether the benefit of introducing an RQF will outweigh the costs, should we proceed at all? The recent change in Federal Minister for Education, Science and Training provides an opportunity for the incoming minister, the Hon Julie Bishop, to consider the option of a modification of existing assessment criteria, which would achieve more at considerably less cost and with much less disturbance to the sector.

Australia’s RQF is based largely on the Research Assessment Exercise in the United Kingdom (UK-RAE) (as the appointment of the UK’s Professor Sir Gareth Roberts as chair of the Expert Advisory Group might suggest), with the addition of a measurement of research impact. The New Zealand Performance Based Review Fund (NZ-PBRF), while itself modelled to a large extent on the UK-RAE, is also instructive in that the unit of assessment is the individual rather than the team.

UK Research Assessment Exercise

The UK-RAE, introduced in 1986 (with substantial changes made in 1992), has concentrated research funding into fewer places and the best departments. The longer term effects of this increasingly selective and concentrated funding are yet to be fully appreciated, even though a polarisation within the sector, a disinclination to take risks, and adverse effects on clinical research have already been noted.3,4

While it has been credited with increasing the global impact of UK research, including its share of the most cited 1% of research papers,5 the RAE has also attracted sharp and fervent criticism.3,6 Some have suggested that the perceived improvement in research performance (55% of the UK’s academic departments deemed to be of “international quality” in 2001, up from 31% in 1996) was not so much a true RAE outcome as an artefact of successful academic “games playing”.6,7 Furthermore, with 55% of researchers placed within the top two grades, the scale now appears to lack discriminatory power.5 Exacerbating the problem was the UK Government’s subsequent failure to fully fund the improved ratings achieved in the 2001 assessment cycle.

The RAE is expensive (in 1996, it was estimated to cost between £27 and £37 million). It has also been claimed to have undermined university autonomy, forced researchers to realign their research pursuits within RAE “friendly” research domains, downgraded teaching and undervalued clinical academic medicine.8,9 In a survey conducted in 2005 by the British Medical Association, 40% of clinical academic and research staff regarded the RAE as having had a negative impact on their career,10 and data produced by the Council of Heads of Medical Schools show a significant decline in clinical academic staffing levels between 2000 and 2004, with the biggest slump reported among clinical lecturers.11

It is widely believed that the RAE has compromised clinical academic medicine through a failure to satisfactorily acknowledge the commitment and contribution of clinical academics, not only to research but also to teaching and clinical practice.9,10,12 Certain disciplines, for example craft specialties such as obstetrics and gynaecology and surgery, have suffered disproportionately.11 By its very nature, clinical research is disadvantaged by the RAE’s focus on short-term research outputs and over-emphasis on publications in high impact-factor journals. In addition, there is concern about a possible emergence of non-research medical schools as a result of the concentration of limited resources.

Little wonder the UK laments the widely accepted decline of its clinical research,11,13 when its own funding mechanism forces universities to ditch clinical academics in favour of more “productive” non-clinical scientists. Research-led teaching has been widely credited with improving the quality of both education and service in the health sector. This is particularly true in the medical arena, where the concept of a university hospital with university clinical departments, clinical schools and affiliated medical research institutes is seen as so important.

Following the Roberts review,14 which recommended a departure from a “one size fits all” assessment approach, the controversial seven-point scale employed in previous assessments has been jettisoned in favour of a research activity quality profile. Each unit of assessment will be given a research profile documenting the proportion of work that falls into five categories, from unclassified to four-star quality (Box 2). However, disappointment and incredulity have been expressed by the research community about the reluctance of the funding councils to disclose how RAE outcomes will be used to calculate funding and what proportion of funding will be assigned to each star grade.15
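
To make the change concrete, consider what a quality profile looks like in practice. The minimal sketch below (in Python; the proportions are invented for illustration, and the per-star funding weights are placeholders, since exactly those weights were what the funding councils had not disclosed) shows why a profile alone cannot determine funding:

```python
# Hypothetical quality profile for one unit of assessment: the proportion
# of its research activity judged to fall in each of the five categories.
# These proportions are invented for illustration only.
profile = {"4*": 0.15, "3*": 0.35, "2*": 0.30, "1*": 0.15, "unclassified": 0.05}
assert abs(sum(profile.values()) - 1.0) < 1e-9  # proportions must sum to 1

# Turning a profile into funding requires a monetary weight per star grade.
# These weights are placeholders, not announced policy: their non-disclosure
# is precisely the complaint described above.
star_weights = {"4*": 7.0, "3*": 3.0, "2*": 1.0, "1*": 0.0, "unclassified": 0.0}

weighted_volume = sum(profile[g] * star_weights[g] for g in profile)
print(weighted_volume)  # approximately 2.4 with these placeholder weights
```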

In the lead-up to the 2008 exercise, there are reports of universities conducting practice runs, shedding less research-active staff, replacing teaching staff with “star” researchers in pursuit of research ratings, and announcing the ratings their academics are expected to gain,4,16 with little evidence that such exercises actually raise the nation’s research output. The medical school at Imperial College, London, has been reported as threatening “disciplinary procedures” for academics who do not secure at least £75 000 in external research revenue yearly,17 despite there being no immediate link between fundraising and the quality of research produced. No mention is made of the need to encourage long-term research themes, for which results may take decades, or the need to balance research with teaching and professional leadership.

Much discussion is now centred on the future of the UK-RAE. In the budget delivered on 22 March 2006, the UK Chancellor declared that the 2008 RAE would indeed be the last and that a working group had been established to develop alternative metrics-based formulae to simplify the distribution of research funding.18 Metric measures, where funding is related to the impact of publications and research grant and contract money, will be tested in conjunction with the 2008 RAE.4,18 This should be of particular interest to Australia as we contemplate discarding our metrics-based system in favour of an unreconstructed UK-RAE.

New Zealand’s Performance Based Research Fund (NZ-PBRF)

The NZ-PBRF, introduced in 2003, is based on a combination of peer review and performance indicators. The research assessment consists of three components: the quality of academics’ research outputs, research degree completions and external research income, weighted at 60%, 25% and 15%, respectively.
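
The arithmetic of that composite is straightforward. A minimal sketch follows (the function name is ours, and it assumes each component has already been normalised to a common 0–100 scale, which simplifies the PBRF’s actual scaling and averaging rules):

```python
def pbrf_composite(output_quality: float, degree_completions: float,
                   external_income: float) -> float:
    """Illustrative NZ-PBRF composite score: the three assessed components
    weighted 60/25/15 as described above. Assumes each input has already
    been normalised to a common 0-100 scale; the real PBRF scaling and
    averaging rules are considerably more involved."""
    return (0.60 * output_quality
            + 0.25 * degree_completions
            + 0.15 * external_income)

# Example: research output quality dominates the result under this weighting.
print(pbrf_composite(80, 50, 30))  # 0.60*80 + 0.25*50 + 0.15*30 = 65.0
```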

The publication of the 2003 results prompted several concerns regarding the appropriateness and efficacy of the funding model.19,20 Echoing the issues generated by the UK-RAE, the New Zealand scheme has been charged with the devaluation of teaching, downgrading of academic autonomy, disadvantaging applied research and creating a deterrent to collaboration.20,21 Additional concerns have been raised in relation to the real cost–benefit ratio of participation in the NZ-PBRF exercise, with reports that many universities had spent more on participating in the exercise than they would gain in funding increases.22 The most trenchant criticism, however, has been reserved for the scoring system, which placed most early career researchers in the lowest category (in essence “research inactive”), and the use of individual academics as the unit of assessment. Following the review by the sector reference group, provisions for new and emerging researchers are to be implemented, and the controversial unit of assessment, believed to have disadvantaged certain groups and negatively affected staff, will be reviewed after the partial round assessment scheduled for this year.23

Research Quality Framework

Submissions received in response to the Expert Advisory Group’s “Preferred Model” paper for the RQF highlighted myriad issues requiring clarification. Not least among the issues raised was the unanticipated announcement in the Minister’s foreword that the outcomes of the RQF may be used to determine the distribution of research funding through the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC). The worrying implication of this is diminished independence of the research councils, and the possibility that the RQF may render some research groups ineligible or disqualified from access to NHMRC and ARC funding. Some commentators advocate the converse, with ARC and NHMRC success informing the RQF.24 Further fears of political intervention have been fuelled by the veto by the previous Minister for Education, Science and Training of several ARC research grants, and the more recent allegations of federal government censorship of CSIRO (Commonwealth Scientific and Industrial Research Organisation) scientists.

The Vice-Chancellors of the Group of Eight universities maintain that there already exist appropriate processes for assessing the quality of national research outcomes, and that, at a small cost, the present research assessment and funding mechanisms could be modified to produce more comprehensive comparative data for the university sector.25 The allocation of funding from the National Competitive Grants Scheme is based on an extensive and rigorous peer review system which supports the highest quality research projects. As such, this income represents an existing and accepted measure of research quality. In the UK, the House of Commons Science and Technology Committee found that external grant income closely matched RAE results in the top 20–30 institutions.15 It is believed that an RQF would produce analogous results, duplicating competitive peer-reviewed processes.25 Given this likelihood, why introduce an enormously expensive experiment in its stead?

Throughout the university sector, there is a general feeling of concern regarding the funding implications of the RQF, and acceptance by the sector will depend on additional funding to meet and exceed the costs of participation in the exercise. It has been suggested that the cost of implementing the RQF could be in the region of $50 million per cycle.25 As yet there is little indication that implementing an RQF would be accompanied by a sufficient increase in funding to make this worthwhile.

If existing sources can be used to obtain detailed quantitative data that closely match the proposed system’s results, there is little to support introducing a potentially burdensome and expensive assessment process, especially in the absence of any pilot data suggesting advantages to a new system. A criticism of the present funding formulae is that they pay little attention to publication quality and impact compared with quantity. This can be readily and cheaply adjusted. A stronger emphasis on publication in prestigious “big name” journals might gather some support, even though, as we all know, significant breakthroughs and important research messages of high societal impact often appear first in lesser-known and specialist publications, with their long-term value being recognised only in retrospect. We should also ask whether our two new Nobel laureates, Barry Marshall and Robin Warren, awarded the Nobel Prize in Physiology or Medicine for their discovery of the bacterium Helicobacter pylori and its role in gastric disorders, would have been supported or closed down by our proposed assessment system. The incoming minister might be well advised to look at what is happening in the UK now, as opposed to when it commenced its experiment with RAEs.

Received 30 January 2006, accepted 29 March 2006

  • Louise G Shewan
  • Andrew J S Coats

  • Faculty of Medicine, University of Sydney, Sydney, NSW.


Correspondence: ajscoats@aol.com

Competing interests:

None identified.
