Is research ethics regulation really killing people?

David Hunter
Med J Aust 2015; 202 (6): 338-339. doi: 10.5694/mja14.00338
Published online: 6 April 2015

Summary

  • It has been argued that research ethics regulation leads to loss of life by delaying life-saving research. For example, Whitney and Schneider argue that the delays to the ISIS-2 trial cost 6538 lives. If correct, this would provide grounds for rejecting research ethics regulation.
  • However, the methods adopted by critics are flawed because:
    • they conflate regulatory delays with delays due to genuine normative requirements that would apply even in the absence of the regulatory framework; and
    • looking at the impact of regulation on a per-project basis is the wrong metric, because it neglects all the unsuccessful research and because delaying specific projects does not reduce the overall research done by researchers.
  • Research ethics regulation does not lead to substantial losses of life, but we have strong obligations to make it as efficient as possible.

Overseeing the conduct of medical research is complex and time consuming. In most countries, a significant part of this involves review by human research ethics committees (HRECs). Some researchers argue that much of this oversight by HRECs is unjustified, because it creates substantial delays and costs, prevents some research, and can translate into potential harm to patients.1-6 In the face of such claims, ethics committee review requires robust justification.

HREC processes must prevent more harm than they cause.1 In this article, I discuss and respond to the claim that delays created by the ethics oversight process lead to predictable and avoidable deaths of people who could have benefited from the interventions whose introduction has been slowed down.3-6

Unfortunately for those defending the role of research regulation, there is controversy about its benefits, and only a weak supporting evidence base.7 This is by no means a new concern. In 1975, Gray pointed out that although such review was introduced in the United States in 1966, “its adequacy and efficiency have not been sufficiently evaluated. Nor have the various revisions and modifications to the policy been based on systematic empirical knowledge”.8 Others have since criticised the lack of data generated from current systems:

No one can authoritatively report how many or what types of research studies are being conducted, how many people are enrolled in each type of study, how many serious (grade III and IV) and unexpected adverse events occur annually, how many participants die of research-related causes, and so on … Without such data, it is impossible to identify and correct inconsistent practices and determine the extent to which the current system enhances protection of research participants.9

However, even with such evidence available (as it is to some degree in Australia), it is still difficult to assess whether ethics committees are doing a good job, as their decisions are inherently normative and contestable. For example, Goodyear-Smith and colleagues described how the New Zealand arm of their study was hampered by an overzealous ethics committee.10 However, it could equally be said that only the New Zealand committee successfully identified risks and protected participants. No readily accessible facts about the situation settle the debate over which interpretation is correct.

The best evidence we have of benefits from research ethics regulation is indirect. For example, based on a study of ethics committee decision letters, we can infer the kinds of harms prevented by regulation.11 Most of the harms identified were relatively minor, but they are still apparent benefits of regulation. However, they would only count as benefits if the ethics committee's judgement was right — this being of course normatively contestable. Without direct evidence and normative certainty, it is unclear how we might measure the impact of ethics committees to evaluate their benefits and harms.

Without firm evidence for the benefits of research ethics regulation, a demonstration that it is causing serious harm by delaying important research would constitute a prima facie case against regulation.12 The question, then, is whether research ethics regulation can be said to be leading to deaths that could have been prevented.

Counting the cost of research ethics regulation

To what extent do HRECs slow down research? The data on this are sketchy, as not all HRECs collect such data, and what is collected is not necessarily published or readily available.13 However, in 2012 the average total time that applications spent in review by HRECs in New South Wales was 77 days (range, 33–165 days).13 This breaks down into an average of 45 days with the HREC and 32 days with the researchers, who are presumably making changes in response to HREC comments.13 This is in line with international standards, which typically aim for applications to spend 30–40 days with research ethics committees.13 This substantial length of time is central to the argument that regulatory delays cost lives, and it is this argument that needs to be examined.

Whitney and Schneider developed the most sophisticated version of this argument, proposing a method to estimate the cost in lives of the regulatory delay of medical research.3 Although their article is relatively balanced and focused on the institutional review board system in the US, several other critics of research ethics regulation have adopted their argument to make far more sweeping claims.7,14,15 It is therefore important to examine Whitney and Schneider's position.

Their quantitative measure for estimating lives lost due to regulatory delay is based on the Second International Study of Infarct Survival (ISIS-2),16 a multinational trial of antithrombotic agents that had markedly different enrolment rates in the United Kingdom and the US, which Whitney and Schneider attribute to the requirement at the time to seek fully informed consent in the US but not in the UK.3 To produce a more accurate estimate of the cost in lives, their formula takes into account the differential uptake of medical interventions, the delay at each stage of drug development, and the patients already receiving optimal treatment before the completion of the study. For the ISIS-2 trial, the calculation was as follows:

Cost in lives = 19 633 patients per month × 5.6% mortality reduction × 8 months × 75% uptake = 6538 lives

The estimate of 6538 lives lost through delaying one trial alone is alarming. What hope can there be of justifying research ethics regulation? Applied to the Australian average of 77 days spent in review, the same method yields an estimate of 2087 lives lost to review time. Such a loss of life due to regulation seems hard to justify.
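
To make the extrapolation explicit, the same formula can be applied over the 77-day Australian review period. The conversion from days to months is not stated in the calculation above, so, assuming an average month of 365/12 ≈ 30.4 days, the working is:

Cost in lives = 19 633 patients per month × 5.6% mortality reduction × (77 ÷ 30.4) months × 75% uptake ≈ 2087 lives

(The exact figure varies slightly with the rounding used in the day-to-month conversion.)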

However, Whitney and Schneider's method has several problems, with implications for how we ought to weigh the costs and benefits of research ethics regulation. Should we count all regulatory delays as costs of research ethics regulation? Research ethics regulation delays clinical trials in two main ways:

  • by delaying the start of a trial through procedural means, such as requiring review, or changes and further review; and
  • by delaying the progress of trials through substantive normative means, such as the imposition of ethical requirements (eg, informed consent procedures).

While it is legitimate to count procedural delays as a direct cost, it is less clear that delays due to normative requirements should be counted as a cost of regulation. This is because researchers ought to comply with appropriate ethical norms and standards irrespective of whether such compliance is compulsory. In the ISIS-2 example, decreased recruitment was attributed to the stringent informed consent requirements imposed in the US.3 Although it could be argued that these requirements were too stringent, few people would now deny that some substantial form of informed consent is a minimal normative requirement for most research. Hence, at least some (perhaps most) of the delay in the ISIS-2 trial was due to legitimate substantive normative requirements.

The most significant problem with the argument that regulatory delays cost lives is that it focuses on the results of individual research studies as the measure for calculating the impact of research regulation. This is misleading in two ways. First, as Whitney and Schneider acknowledge, the impact can look significantly higher when the study intervention turns out to be particularly effective.3 In the many research trials that have negative results, regulatory delays will cost no lives at all.17

However, the more important way that the argument is misleading is that it focuses on the research project as the productive unit rather than on the researchers who do the actual work. The result is that although one research project may be delayed, the net amount of activity of researchers across all projects in which they are involved stays the same.

While researchers wait for research ethics approval, they are presumably not simply sitting around after downing test-tubes, enduring a loss of their total research productivity. Rather, they will be working on other things, including other research projects and proposals for further research. Even when people are employed specifically for particular projects that are awaiting approval, there is typically constructive work that can be done towards the project while ethics approval is being sought. Although research ethics regulation might delay specific projects, it does not decrease the overall amount of work that researchers are doing. Therefore, roughly the same quantity of research is being carried out despite regulation, presumably saving roughly the same number of lives. (However, this may not hold true if, for example, regulation considerably changes the types of research being conducted.)

Admittedly, research regulation will slightly decrease the overall amount of research, as some researcher time will be taken up by the procedural aspects of research ethics regulation. Much of this time will either be valuable work (by encouraging researchers to think through the ethics of their projects) or parasitic on existing work (such as writing up a research protocol). Although some delay will still result, it is on a relatively small scale compared with that claimed by those who argue that regulatory delays cost lives.

Conclusions

The number of lives lost that can be legitimately attributed to research ethics regulation is considerably smaller than claimed by Whitney and Schneider and proponents of reducing research ethics regulation. There are two important lessons to be learned from this when considering the justifiability of regulation of research more generally. First, we must ensure that any costs and benefits being counted derive from the regulation itself, not the normative standards it is enforcing. Second, we must count the impact on overall research activity, rather than focusing on delays to individual research projects.

This does not mean that research ethics regulatory systems are immune to criticism regarding delayed research. Some lives may be lost by regulatory delays that are not normatively justified. This gives us good reason to make the procedural aspects of our research regulation systems as efficient as possible. Therefore, delays that are not likely to substantially improve the normative quality of decision making (eg, review by multiple committees, as occurs sometimes in Australia and elsewhere) should be removed from research ethics regulatory systems as soon as possible.18


Provenance: Not commissioned; externally peer reviewed.

  • David Hunter

  • Flinders University, Adelaide, SA.



Acknowledgements: 

I thank participants at the 2013 Australasian Ethics Network Conference and the 12th World Congress of Bioethics (2014) for helpful feedback on earlier versions of this article, along with referees and editors for their helpful suggestions.

Competing interests:

No relevant disclosures.

  • 1. Wilson J, Hunter D. Research exceptionalism. Am J Bioeth 2010; 10: 45-54.
  • 2. Dyer S, Demeritt D. Un-ethical review? Why it is wrong to apply the medical model of research governance to human geography. Prog Hum Geog 2009; 33: 46-64. doi: 10.1177/0309132508090475.
  • 3. Whitney SN, Schneider CE. Viewpoint: a method to estimate the cost in lives of ethics board review of biomedical research. J Intern Med 2011; 269: 392-402.
  • 4. Christie DR, Gabriel GS, Dear K. Adverse effects of a multicentre system for ethics approval on the progress of a prospective multicentre trial of cancer treatment: how many patients die waiting? Intern Med J 2007; 37: 680-686.
  • 5. Millar JA. Multicentre drug trials, ethics approval and death of patients. Intern Med J 2008; 38: 298-299.
  • 6. Collins R, Doll R, Peto R. Ethics of clinical trials. In: Williams CJ, editor. Introducing new treatments for cancer: practical, ethical, and legal problems. Chichester, UK: John Wiley & Sons, 1992: 49-65.
  • 7. Dyck M, Allen G. Is mandatory research ethics reviewing ethical? J Med Ethics 2013; 39: 517-520.
  • 8. Gray BH. An assessment of institutional review committees in human experimentation. Med Care 1975; 13: 318-328.
  • 9. Emanuel EJ, Wood A, Fleischman A, et al. Oversight of human participants research: identifying problems to evaluate reform proposals. Ann Intern Med 2004; 141: 282-291.
  • 10. Goodyear-Smith F, Lobb B, Davies G, et al. International variation in ethics committee requirements: comparisons across five Westernised nations. BMC Med Ethics 2002; 3: E2.
  • 11. Angell E, Dixon-Woods M. Do research ethics committees identify process errors in applications for ethical approval? J Med Ethics 2009; 35: 130-132.
  • 12. Hunter D. How not to argue against mandatory ethics review. J Med Ethics 2013; 39: 521-524.
  • 13. NSW Ministry of Health, Office for Health and Medical Research. Health and Medical Research Governance Project: reform of the research pre-approval process. Discussion paper. Sydney: NSW Ministry of Health, 2013. http://www.health.nsw.gov.au/ethics/Documents/discussion-paper-ethics-and-governance-paper.pdf (accessed Jul 2014).
  • 14. Dingwall R. How did we ever get into this mess? The rise of ethical regulation in the social sciences. In: Love K, editor. Ethics in social research. Studies in qualitative methodology, vol. 12. Bingley, UK: Emerald Group Publishing Limited, 2012: 3-25.
  • 15. Schrag ZM. The case against ethics review in the social sciences. Research Ethics 2011; 7: 120-131. doi: 10.1177/174701611100700402.
  • 16. ISIS-2 (Second International Study of Infarct Survival) Collaborative Group. Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17 187 cases of suspected acute myocardial infarction: ISIS-2. Lancet 1988; 332: 349-360.
  • 17. Turner EH, Matthews AM, Linardatos E, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008; 358: 252-260.
  • 18. Hunter DL. The ESRC research ethics framework and research ethics review at UK universities: rebuilding the Tower of Babel RECE by REC. J Med Ethics 2008; 34: 815-820.
