
An evaluation of methods used in health technology assessments produced for the Medical Services Advisory Committee

Emily S Petherick, Elmer V Villanueva, Jo Dumville, Emma J Bryan and Shyamali Dharmage
Med J Aust 2007; 187 (5): 289-292. doi: 10.5694/j.1326-5377.2007.tb01246.x
Published online: 3 September 2007

Medical technologies are believed to be a major driver of increased health expenditure and are therefore a target of cost containment, which requires a systematic approach to evaluating and introducing new medical technologies and procedures.

Australia implemented one of the world’s first health technology assessment (HTA) programs in 1982.1-3 The government body now responsible for the management of Australia’s HTA program is the Medical Services Advisory Committee (MSAC). The MSAC comprises 22 members drawn from specialist health care, health administration, consumer representation and academic health fields (epidemiologists, health economists, etc), who work together to fulfil the MSAC’s terms of reference.4

Further details including current membership and administrative arrangements are available on the MSAC website (http://www.msac.gov.au).

Although there is an expanding body of literature examining evaluation methods or the results of their application,3,5 little is known about the methods used to produce the evidence that informs policy when new medical technologies are introduced to the Medicare Benefits Schedule (MBS), or about the quality of the methods used to undertake HTAs for the MSAC. Here, we examine the methods used to produce the section of MSAC HTA reports that reviews the effectiveness of the technology or procedure.

Methods
The MSAC process

New medical technologies and procedures are brought to the attention of the MSAC in two ways: (i) a sponsor (eg, a manufacturer, a craft group such as a medical college, or a consumer group) can submit an application to the MSAC for consideration (an “application”); or (ii) the Department of Health and Ageing can refer a particular technology to the MSAC (a “reference”). All technologies considered by the MSAC for listing on the MBS must have prior approval for marketing by the Therapeutic Goods Administration, if such approval is required for the technology in question. Evaluation of pharmaceuticals is administered by a separate body, the Pharmaceutical Benefits Advisory Committee.

An HTA commissioned by the MSAC takes the form of a systematic review. The review itself is undertaken by an independent group contracted by the MSAC under the guidance of an advisory panel formed by the MSAC, whose membership may comprise any combination of clinical experts (nominated by relevant clinical colleges), consumers of the technology, epidemiologists and health economists. On completion of the review, the MSAC can make one of three recommendations to the Minister for Health and Ageing: to fund the technology or procedure without restriction; to provide interim funding and have further evaluation take place at some future point; or to not fund the technology or procedure. This process provides a formal means by which new and emerging technologies and procedures may gain entry to the MBS. Once a technology or procedure has been listed on the MBS, a proportion of its cost will be met by government reimbursement.3

Under the current MSAC arrangements, there are no formal minimum requirements for evidence of safety, effectiveness or cost-effectiveness for a technology or procedure to be funded. Although the MSAC attempts to evaluate new technologies and procedures using an evidence-based approach, in many cases insufficient high-level evidence is available to provide the unequivocal picture required by policymakers.

The Australian National Health and Medical Research Council (NHMRC) has provided much of the methodology for evaluating new technologies, with formalised levels of evidence that reflect susceptibility to bias within particular study designs and by which the validity of primary studies can be assessed.6
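To make the hierarchy concrete, the following minimal Python sketch encodes the modified NHMRC levels of evidence as they appear later in this report (see Results). It is illustrative only, an assumption-laden toy rather than code used by the NHMRC, the MSAC or our study; the level descriptions are taken from the Results section below.

```python
# Illustrative sketch only: the modified NHMRC evidence hierarchy as
# described in this report, not code used in the MSAC process.

NHMRC_LEVELS = {
    "I": "systematic review",
    "II": "randomised controlled trial",
    "III-1": "pseudo-randomised controlled trial",
    "III-2": ("comparative study with concurrent controls and allocation "
              "not randomised, cohort study, case-control study, or "
              "interrupted time series with a control group"),
    "IV": "case series (post-test or pre-test/post-test)",
}

def least_bias_prone(levels_present):
    """Return the highest level (least susceptible to bias) among the
    study designs included in an HTA, following the hierarchy above."""
    for level in ["I", "II", "III-1", "III-2", "IV"]:
        if level in levels_present:
            return level
    return None

# Example: an HTA that included one RCT and several case series.
best = least_bias_prone({"II", "IV"})
print(best, "->", NHMRC_LEVELS[best])  # II -> randomised controlled trial
```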

Data extraction

Data were extracted from the HTA reports by two investigators using a standardised data extraction form. The extracted covariates were predominantly based on the criteria developed by Busse et al to assess the quality of HTA reports;7 however, we restricted our data extraction to the section of each HTA report dealing with effectiveness of the technology or procedure. Details of the data extracted are shown in Box 1. For most of the covariates, we simply assessed whether the variable had been reported in the HTA (yes, no or unclear). Disagreements between the reviewers were resolved by consensus, with a third reviewer available (although not required) for adjudication.
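As an illustration of the dual-extraction process described above, the hypothetical Python sketch below merges two reviewers' yes/no/unclear assessments for one HTA and flags disagreements for consensus or third-reviewer adjudication. The field names are drawn from Box 1 (a subset is shown), but the function itself is our own illustrative construction, not software used in the study.

```python
# Hedged sketch of dual data extraction with consensus resolution;
# field names follow Box 1, the merge logic is hypothetical.

FIELDS = [
    "authorship_reported",
    "conflict_of_interest_reported",
    "research_question_clear",
    "selection_criteria_reported",
    "search_strategy_reported",
    "validity_assessment_described",
]

def reconcile(extraction_a, extraction_b, adjudicate=None):
    """Merge two reviewers' extractions for one HTA report.
    Agreements pass through unchanged; disagreements go to an optional
    third-reviewer callback, or are flagged for consensus discussion."""
    merged = {}
    for field in FIELDS:
        a, b = extraction_a[field], extraction_b[field]
        if a == b:
            merged[field] = a
        elif adjudicate is not None:
            merged[field] = adjudicate(field, a, b)
        else:
            merged[field] = "unclear"  # flag for consensus discussion
    return merged

reviewer_1 = dict.fromkeys(FIELDS, "yes")
reviewer_2 = {**reviewer_1, "research_question_clear": "no"}
print(reconcile(reviewer_1, reviewer_2))
```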

Results

Of the 56 available HTA reports of applications to the MSAC, 31 met the inclusion criteria. Box 2 summarises the selection process for HTAs and the reasons for exclusion. Of the 31 included HTAs, six were published in 1999, five in 2000, six in 2001, six in 2002, six in 2003 and two in 2005. Results of the data extraction are shown in Box 1. Sixteen of the 31 technologies or procedures assessed in these HTAs were recommended for funding, either on a permanent or interim basis.

Results of the search for evidence

In most reports, the type of evidence included in the effectiveness section was classified according to a modified version of the NHMRC designation of levels of evidence.6 Level I evidence (a systematic review) was included in eight HTAs. Level II evidence (randomised controlled trials) was included in 12 HTAs, with the number of randomised controlled trials in each ranging from one to 16. Level III-1 evidence (pseudo-randomised controlled trials) was included in only three HTAs, and level III-2 evidence (comparative studies with concurrent controls and allocation not randomised, cohort studies, case–control studies, or interrupted time series with a control group) was also rare, appearing in only four HTAs. Level IV evidence (case series, either post-test or pre-test/post-test) was included in half of all HTAs (16/31). Notably, in five HTAs the level of evidence of some included studies was not reported.

Further methods used

Eighteen HTAs provided a validity assessment of included studies with explicit details of the methods used. The methods used were unclear in nine HTAs, and in the remaining four reports it was assumed no validity assessment had been undertaken. The most commonly reported approach, used in five HTAs, was that described in the Cochrane handbook for systematic reviews of interventions;8 other methods used in at least two reports were those of the Centre for Reviews and Dissemination handbook,9 the Quality of Reporting of Meta-analyses (QUOROM) checklist10 and the assessment methods proposed by Schulz et al11 and Greenhalgh.12 In total, at least seven different validity appraisal methods were used across these reports. In other cases it was difficult to determine whether formal validity assessments had been undertaken: although the reports contained elements of validity assessment, the exact methods used were not stated. The descriptive elements and results of included studies were presented in both tables and text in 30 HTAs. Owing to a reported lack of appropriate studies, meta-analysis was performed in only three reports.

Discussion

Our examination of 31 HTA reports produced for the MSAC between 1998 and 2006 found considerable variability in quality. Reports did not describe authors’ potential conflicts of interest. Most reports did not formally state the research question, and just over half provided details of any validity assessments of included studies.

Conflict of interest is of concern because of the perception that it could lead to unreasonable bias in an HTA report.13 Neither the published HTA reports nor any publicly available documentation details how conflicts of interest are handled, beyond the recording of such information by the MSAC.9,14 Other international HTA agencies have provided greater detail of how conflicts of interest are identified and recorded.15,16

The reported methods and the information presented differed substantially between HTAs: a little over half described the methods used to assess the validity of included studies, and at least seven different methods were used among these reports. Reporting of methods was frequently incomplete, omitting information needed to judge the internal and external validity of the reports and to facilitate replication. As we did not contact the authors of the HTAs, we cannot determine whether these omissions reflect inadequate reporting or genuine methodological limitations.

In undertaking our study we used the criteria of Busse et al, which were published in 2002.7 Most of the HTAs we evaluated were conducted before these criteria were published, and there is some evidence that reporting is improving, with an increase over the period examined in the number of reports detailing the methods used. Optimal search strategies for primary studies to be included in the effectiveness and cost-effectiveness sections of HTAs have been suggested and may prove useful as a minimum standard to which future report authors could adhere.17 Problems with the reporting of systematic reviews and HTAs have been noted previously. Olsen et al found that 29% of new Cochrane systematic reviews published in 1998 had major problems,18 highlighting three areas of concern: evidence that did not support the conclusions, unsatisfactory conduct or reporting of reviews, and stylistic problems. A Canadian study evaluating reports from four HTA agencies in that country found results similar to ours, with almost half of all reports failing to specify the methods used.19

The MSAC has produced its own guidance for evaluators undertaking HTAs,14 and this has recently been updated to include information on undertaking reviews of diagnostic and screening technologies.20 Other organisations such as the Cochrane Collaboration and the QUOROM group have produced guidance on the undertaking and reporting of systematic reviews and meta-analyses.8,10

While the MSAC does recommend interim funding in some instances, with a set time limit to allow further data collection, there is no mandatory requirement to publish these data. It therefore remains unclear what purpose this further data collection serves, or whether it facilitates the production of the quality research evidence required for further appraisals of the technology or procedure.

Conclusions

Given the policy implications of HTAs produced by the MSAC, it is only right that they are produced to the highest quality standards. Statistical and methodological advances continue to be made in the field of systematic reviewing, and there is an ongoing need to update the methods of conducting and reporting HTAs so that decision makers have access to the most scientifically rigorous information possible. Moving towards a system with entrenched minimum reporting standards would reduce the variability of reports, raise their overall quality, and provide greater transparency about the decisions made.

1 Data extracted from 31 health technology assessment (HTA) reports produced for the Medical Services Advisory Committee, 1999–2005,* based on criteria developed by Busse et al7

Data extracted from HTA report | 1999 (n = 6) | 2000 (n = 5) | 2001 (n = 6) | 2002 (n = 6) | 2003 (n = 6) | 2005 (n = 2)

Basic information related to the HTA (ie, reporting of authorship, conflicts of interest)
Did the HTA report the authorship? (Yes/No) | Yes 1, No 5 | Yes 5 | Yes 6 | Yes 6 | Yes 6 | Yes 2
Did the HTA report provide details of conflict of interest? (Yes/No) | No 6 | No 5 | No 6 | No 6 | No 6 | No 2

General methodological aspects of the HTA (ie, clarity of research question, sources of information used, selection criteria for information)
Did the HTA report the research question in a clear manner (eg, using the PICO format or similar)? (Yes/No/Unclear) | No 6 | No 4, Unclear 1 | Yes 3, No 2, Unclear 1 | Yes 1, No 5 | Yes 4, No 2 | Yes 2
Were selection criteria for study inclusion reported? (Yes/No/Unclear) | Yes 4, No 1, Unclear 1 | Yes 4, Unclear 1 | Yes 6 | Yes 6 | Yes 5, Unclear 1 | Yes 2
Did the authors provide details of resources searched? (Yes/No) | Yes 6 | Yes 5 | Yes 6 | Yes 6 | Yes 6 | Yes 2
Was the search strategy reported? (Yes/No/Unclear) | Yes 6 | Yes 5 | Yes 6 | Yes 6 | Yes 6 | Yes 2
Was more than one reviewer involved in the review process? (Yes/No/Not reported) | Not reported 6 | Yes 1, Not reported 4 | Yes 4, Not reported 2 | Yes 1, Not reported 5 | Yes 3, No 3 | Yes 2

Efficacy/effectiveness (ie, sources of data, selection criteria, assessment of validity/quality of data)
Did the authors provide details of how they assessed the included studies for validity/quality beyond providing details of the study type? (Yes/No/Unclear; if yes, what methods were used?) | Yes 1, No 4, Unclear 1 | Yes 2, Unclear 3 | Yes 4, Unclear 2 | Yes 3, Unclear 3 | Yes 6 | Yes 2
Was more than one reviewer involved in the extraction of data? (Yes/No/Not reported) | Not reported 6 | Yes 1, Not reported 4 | Yes 2, Not reported 4 | Yes 1, Not reported 5 | Yes 2, No 2, Not reported 2 | Yes 2

PICO = patients/interventions/comparisons/outcomes. * None of the evaluated HTAs were published in 1998, 2004 or 2006.

  • Emily S Petherick1
  • Elmer V Villanueva2
  • Jo Dumville1
  • Emma J Bryan3
  • Shyamali Dharmage4

  • 1 Department of Health Sciences, University of York, York, UK.
  • 2 Department of Rural and Indigenous Health, Monash University, Moe, VIC.
  • 3 Monash Institute of Health Services Research, Monash University, Melbourne, VIC.
  • 4 School of Population Health, University of Melbourne, Melbourne, VIC.


Correspondence: ep9@york.ac.uk

Acknowledgements: 

We thank Ms Alexandra Raulli, Dr Alison Orrell and Professor Nicky Cullum for their positive encouragement and feedback on the manuscript.

Competing interests:

Emily Petherick, Elmer Villanueva and Emma Bryan were authors on several of the MSAC HTAs evaluated in this study. No funding was received to carry out this study.

  • 1. Goodman C. An introduction to health technology assessment. Falls Church, Va: The Lewin Group, 1998.
  • 2. Hailey D. Health care technology in Australia. Health Policy 1994; 30: 23-72.
  • 3. Hailey DM. Health technology assessment in Australia: a need to re-focus. J Qual Clin Pract 1996; 16: 123-129.
  • 4. Medical Services Advisory Committee. What is MSAC? http://www.msac.gov.au/internet/msac/publishing.nsf/Content/what-is-1 (accessed Mar 2007).
  • 5. May C, Mort M, Williams T, et al. Health technology assessment in its local contexts: studies of telehealthcare. Soc Sci Med 2003; 57: 697-710.
  • 6. National Health and Medical Research Council. How to use the evidence: assessment and application of scientific evidence. Canberra: NHMRC, 2000.
  • 7. Busse R, Velasco M, Perleth M, et al. Best practice in undertaking and reporting health technology assessments. Working group 4 report. Int J Technol Assess Health Care 2002; 18: 361-422.
  • 8. Higgins J, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 4.2.6. The Cochrane Library, Issue 4, 2006. Chichester, UK: John Wiley & Sons, Ltd.
  • 9. Centre for Reviews and Dissemination. Undertaking systematic review of research on effectiveness: CRD’s guidance for those carrying out or commissioning reviews. York: CRD, 2001. http://www.york.ac.uk/inst/crd/report4.htm (accessed Jul 2007).
  • 10. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999; 354: 1896-1900.
  • 11. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995; 273: 408-412.
  • 12. Greenhalgh T. Assessing the methodological quality of published papers. BMJ 1997; 315: 305-308.
  • 13. Hailey D. Towards transparency in health technology assessment: a checklist for HTA reports. Int J Technol Assess Health Care 2003; 19: 1-7.
  • 14. Medical Services Advisory Committee. Funding for new medical technologies and procedures: application and assessment guidelines. Canberra: MSAC, 2000.
  • 15. Medical Advisory Secretariat, Ontario. Technologies for osteoarthritis of the knee. Integrated health technology policy assessment. October 2005. https://ospace.scholarsportal.info/bitstream/1873/1875/1/259170.pdf (accessed Jul 2007).
  • 16. NHS Research & Development. The HTA programme. The principles underlying the work of the National Coordinating Centre for Health Technology assessment. March 2007. http://www.ncchta.org/sundry/probity.pdf (accessed Jul 2007).
  • 17. Centre for Reviews and Dissemination. Finding studies for systematic reviews: a checklist for researchers. York: CRD, 2006. http://www.york.ac.uk/inst/crd/revsrch.doc (accessed Jul 2007).
  • 18. Olsen O, Middleton P, Ezzo J, et al. Quality of Cochrane reviews: assessment of sample from 1998. BMJ 2001; 323: 829-832.
  • 19. Menon D, Topfer LA. Health technology assessment in Canada. A decade in review. Int J Technol Assess Health Care 2000; 16: 896-902.
  • 20. Medical Services Advisory Committee. Guidelines for the assessment of diagnostic technologies. Canberra: MSAC, 2005.
