What can we learn from the Hwang and Sudbø affairs?

Paul Gerber
Med J Aust 2006; 184 (12): 632-635. doi: 10.5694/j.1326-5377.2006.tb00420.x
Published online: 19 June 2006


The Hwang fraud

The chronology of the Hwang fraud is brief. The manuscript was received by Science on 15 March 2005 and promptly sent to reviewers for comment. The reviewers’ comments were sent to Hwang, who returned an amended version of the article, gratefully acknowledging “the anonymous reviewers for their constructive critique”. It was this amended article that was accepted for publication on 12 May 2005 (within 8 weeks of receipt).1

The article was unconditionally retracted on 12 January 2006, after an investigation committee of Seoul National University (SNU) concluded that the laboratory at SNU, where the work was claimed to have been carried out, “does not possess patient-specific stem cell lines or any scientific basis for claiming to have created one”. An earlier article by Hwang and colleagues was likewise retracted,2 the investigation committee reporting “that the data showing that the DNA from human embryonic stem cell NT-1 is identical to that of the donor are invalid because they are the result of fabrication, as is the evidence that NT-1 is a bona fide stem cell line”. In March 2006, SNU dismissed Professor Hwang from his post, stating the fraud had brought dishonour to the University, and in May, Hwang was indicted in Seoul on fraud, embezzlement and ethics charges relating to his faked stem cell research. Five other scientists were also indicted on charges in connection with the bogus research and the disappearance of millions of dollars in donations. If convicted on all charges, Hwang could face up to 5 years’ imprisonment.3

The Sudbø fraud

The second fraud involved Dr Jon Sudbø, an oncologist working at the Radium Hospital in Oslo, who published a purported nested case–control study of 908 subjects in the Lancet in 2005.4 There were two problems with the study. Firstly, the 908 subjects, whose names were allegedly taken from a public health database, had never been studied as claimed. Secondly, some 250 subjects in the study shared the same birthdate.
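
A flaw of this kind is the sort of thing a rudimentary automated screen of the raw data would catch. The sketch below is purely illustrative: the file and column names are hypothetical, and it is not the check by which the fraud was actually detected, but it shows how little effort it takes to flag an implausible clustering of birthdates in a subject list.

```python
# Illustrative only: a minimal screen for implausible clustering of birthdates
# in a subject dataset. The file name and column name are hypothetical.
from collections import Counter
import csv


def flag_repeated_birthdates(path, threshold=5):
    """Report any birthdate shared by at least `threshold` subjects."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    counts = Counter(row["birth_date"] for row in rows)
    suspicious = {date: n for date, n in counts.items() if n >= threshold}
    for date, n in sorted(suspicious.items(), key=lambda kv: -kv[1]):
        print(f"{n} of {len(rows)} subjects share the birthdate {date}")
    return suspicious


if __name__ == "__main__":
    # In the Lancet study described above, a screen of this kind would have
    # reported some 250 subjects sharing a single birthdate.
    flag_repeated_birthdates("subjects.csv")
```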

It was a slipshod study, containing sufficient stigmata of fraud to have raised suspicion in reviewers’ minds as to the accuracy of its data. The inconsistencies were first noted by an epidemiologist at Norway’s Institute of Public Health, who knew that the database did not exist in the years listed in the study. In its 21 January 2006 issue, the Lancet noted that it had been informed on 13 January 2006 by officials at the Radium Hospital in Oslo that “they had information that strongly indicates that material published in [the journal] has not been based upon data from our national databases, but on manipulated data”.5

On 4 February 2006, the journal advised its readers that it had retracted the article in its entirety.6

The retraction led to Sudbø’s admission of the fraud.7 He also admitted to having fabricated results in articles published in other journals.8,9

More alarmingly, all three Sudbø studies were co-authored, the study in the Lancet involving no less than 13 “collaborators”.

How the Hwang fraud was perpetrated

As the Hwang fraud received worldwide publicity, the manner in which the fabricated data were cobbled together is of some interest. Professor Hwang and his team of 24 co-authors (of whom all but one were South Koreans hitherto regarded as leaders in stem cell research) described (in the later-retracted article) how they had created 11 human embryonic stem cell colonies from the skin cells of 11 patients, claiming that the DNA in the stem cells and skin cells was an exact match. The abstract proclaimed the work to be “of great biomedical importance for studies of disease and development, and to advance clinical deliberations regarding stem cell transplantation”.1

Unlike the Sudbø data, the Hwang data contained no obvious inconsistencies to arouse the suspicion of even the most cautious reviewer examining this revolutionary technology, which was claimed to have successfully produced human patient-specific stem cells. After all, hadn’t Hwang achieved world renown for his widely reported somatic cell nuclear transfer technology in mice and his cloned Afghan hound?

In fact, we now know how the fraud was perpetrated: Professor Sung Il Roh, the second-listed co-author, has since confessed that the photographs illustrating the 11 patient-specific stem cell lines were faked. What is surprising is that the perpetrators must surely have been aware that the fraud would almost certainly be exposed, sooner rather than later, if only because the discoveries of Wilmut et al10 in sheep cloning had generated much enthusiasm about the possibility of deriving patient-specific, immune-matched human embryonic stem cells (hESCs). It was thus a highly competitive area of research that would not only attract worldwide attention, but would most likely lead to other stem cell researchers trying — and failing — to reproduce hESCs by following the well documented procedure set out in the article by Hwang et al.

Had fraud been suspected, the inappropriate changes to the original data could have been exposed with existing technology: skilled editorial staff can now spot such manipulations using features of standard imaging software.11
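
Checks of the kind Rossner and Yamada describe11 include simple adjustments of brightness and contrast, which can make pasted-in panels, erased bands and mismatched backgrounds stand out. The sketch below is a rough illustration of that idea rather than any journal’s actual workflow; it assumes the Pillow imaging library is available, and the file name is a placeholder.

```python
# Illustrative only: exaggerate greyscale levels in a submitted figure so that
# seams, cloned regions or mismatched backgrounds stand out. Requires Pillow;
# the input file name is a placeholder.
from PIL import Image, ImageEnhance


def exaggerate_levels(path, out_path="levels_check.png",
                      contrast=4.0, brightness=2.0):
    """Save a high-contrast, brightened greyscale copy of an image
    for visual inspection of possible manipulation."""
    img = Image.open(path).convert("L")  # work in greyscale
    img = ImageEnhance.Contrast(img).enhance(contrast)
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img.save(out_path)
    return out_path


if __name__ == "__main__":
    exaggerate_levels("figure1.png")
```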

The pressure to publish

It is to be hoped that the Australian Government’s impending introduction of the Research Quality Framework12 will narrow the research focus — and hence the pressure to publish — at universities other than the “Group of Eight”, which may result in fewer dubious articles being submitted to scientific journals.

An analysis of the Science Citation Index showed that over 55% of published articles are not considered worth citing even once by others.13 It is therefore tempting to ask: if over half of published articles are not worth citing, why are they being published? Are there too few worthwhile articles for too many prestigious journals compelled to fill their fortnightly quota by publishing research in the more arcane areas of medicine and science in which data are rarely closely evaluated? If a high proportion of published research is not worth citing, it is possible that a good deal of published research will contain fabricated data, confidently submitted on the assumption that no one will bother to attempt to replicate the results. In the case of the most recent of the three Sudbø frauds, for example, the data would have been noted as “interesting”, but would then, in all likelihood, have become part of the medical literature and lain undisturbed forever had it not been for the careless error of the identical birthdates.

Having said that, one is left to wonder what some of these “researchers” were thinking when they put their names to these articles. Why were so many scientists in the Hwang case prepared to go to such lengths to manipulate data? How many ensured the veracity of the claimed results before putting their names on the article? Do scientific fraudsters suffer from some kind of compulsion for achieving instant fame and recognition, whatever the ultimate cost? While a folie à deux is well documented, a folie à vingt-cinq is reminiscent only of Jonestown and Waco, whose disciples self-destructed with cyanide and fire.

Browsing through the biomedical literature, one notes the increase in the sheer number of co-authors in published studies. This is inevitable in articles “that include techniques as diverse as molecular biology and economic evaluation, all carried out by different people”.14 As Sahu and Abraham noted, “[t]here was a time when most of the manuscripts were written single handedly. Times have changed. Now, more and more people want to be associated with a manuscript . . . and multi-authored articles are the norm”.15 While the complexity of research work may explain the necessity for multicentre collaboration, it also provides the opportunity for irresponsible authorship claims by researchers seeking to add to their list of publications. As one wag put it, “I can only suggest that holding the door open while rats are brought into the laboratory does not constitute authorship”.16 The all too common practice of routinely including the head of department’s name on manuscripts (frequently as first author), even though the person in question may have had no involvement in the work, must be abandoned.

The role of journal editors

It would seem that editors need to ask how much input each and every collaborator has had in the work undertaken or in the technology claimed to have been perfected. We now know that some “co-authors” have little — if any — input into the work being published under their name. In the well documented exposure of the infamous Dr John Darsee, for example, the New England Journal of Medicine unwittingly launched this serial fraudster on his notorious career when it published two articles with his name as first author.17,18 Darsee was at the time a junior member of the medical faculty at Emory University in Atlanta, and the articles were co-authored by a number of senior faculty members. It was subsequently discovered that Darsee had fabricated his results in both articles, leading to their withdrawal. Surprisingly, his co-authors were exonerated after claiming to have had no involvement in the work. Alas, before Darsee was finally exposed as a compulsive fraudster, his three peer reviewed publications led to his appointment to a fellowship in the Cardiac Research Laboratory of Brigham and Women’s Hospital, a teaching affiliate of Harvard University, from where he continued to publish, as lead author, no less than nine co-authored articles, all of which were subsequently withdrawn on the basis of fabricated data. In the aftermath, Science published an article on coping with fraud,19 and the Editor of the New England Journal of Medicine wrote an editorial on the lessons from the Darsee affair.20 History is silent on what became of the other “contributors” listed on Darsee’s articles.

In the Hwang fraud, the only American scientist listed as a co-author — Professor Gerald Schatten, Director of the Department of Obstetrics, Gynecology and Reproductive Sciences at the University of Pittsburgh School of Medicine — has since claimed that his role was merely “advisory”, admitting that he had taken no active part in the work. It is likely that a more rigorous editorial policy would have culled Schatten (and probably some of the other 23 collaborators) from the submitted work. The University of Pittsburgh subsequently cleared Schatten of any scientific misconduct in his collaboration with the Hwang team, but noted that his failure to take greater steps to ensure the veracity of the data supporting the article’s claim constituted “misbehaviour”.21

The Medical Journal of Australia barely escaped publishing an article containing manipulated data. Some years ago, a manuscript was submitted by several highly respected medical specialists who claimed that, in an authorised trial of a non-approved use of a drug, they had achieved significant rates of remission in treating a form of cancer. The manuscript was peer reviewed by two experts, who made a few suggestions (all adopted by the authors), and the article was accepted for publication. The MJA Editor at the time asked me to write an editorial condemning Australia’s Therapeutic Goods Administration for depriving Australian cancer patients of so beneficial a drug. However, while reading the manuscript and researching the history of this non-approved drug, I was providentially sent a copy of what appeared to be the same manuscript, but which bore a date stamp on the title page showing that it had been received (and presumably rejected) by another journal two years earlier. When I compared the tables in the two versions, it soon became obvious that, in the later version, the authors had not only added three years to the survival time of two patients cited in the earlier version, but had also removed three patients from their current accounting. In other words, the more recent table and revised text suggested that the authors had deliberately set out to under-report their actual experience with the drug. Returning both versions to the Editor led to a more rigorous peer review, subsequent rejection of the article, and heated correspondence between the Editor and one of the co-authors, the former ending the correspondence by stating, more in sorrow than in anger, “I feel dismayed at your apparently cavalier attitude to scholarly publication”.
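
Where an editor does hold two versions of the same dataset, the comparison need not be done by eye. The sketch below is hypothetical (the file and column names are invented, and it is not how the discrepancies described above were found); it simply pairs subjects by an identifier and reports values that differ between versions, together with subjects present in one version but missing from the other.

```python
# Illustrative only: compare two versions of a subject table and report
# changed values and removed subjects. File and column names are hypothetical.
import csv


def load(path, key="subject_id"):
    """Read a CSV file into a dict keyed by subject identifier."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}


def compare_versions(earlier_path, later_path, field="survival_years"):
    earlier, later = load(earlier_path), load(later_path)
    # Subjects dropped between versions
    for sid in sorted(set(earlier) - set(later)):
        print(f"Subject {sid} appears in the earlier version only")
    # Values changed between versions
    for sid in sorted(set(earlier) & set(later)):
        if earlier[sid][field] != later[sid][field]:
            print(f"Subject {sid}: {field} changed from "
                  f"{earlier[sid][field]} to {later[sid][field]}")


if __name__ == "__main__":
    compare_versions("manuscript_v1.csv", "manuscript_v2.csv")
```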

As medical frauds in multi-authored articles, coming from distinguished authors in prestigious institutions, are finding their way into scientific journals,22 editors may want to question whether the structure of their current author contribution disclosure forms addresses compliance with authorship criteria specified by the International Committee of Medical Journal Editors (ICMJE).14 These criteria define who is an “author”, and, more importantly, exclude anyone whose contribution is not sufficient to justify inclusion. In a paper presented at the Fifth International Congress on Peer Review and Biomedical Publication in 2005,23 Marušić and colleagues described a randomised controlled trial to determine the effects of the structure of contribution disclosure forms on the number of authors not meeting criteria of the ICMJE. The conclusion reached was that the structure of forms eliciting author contributions significantly influences the number of contributions reported by authors and their compliance with ICMJE authorship criteria.

This raises the wider question of whether it is the task of editors to question each named author of a work on the extent of his or her contribution and decide whether any individual collaborators had insufficient input to satisfy the criteria for authorship. While the rejection of a manuscript in its entirety is admittedly a judgemental task, editors may balk at assuming the additional role of having to evaluate the significance of individual contributions to cull those deemed inadequate. Nevertheless, it is no longer appropriate — if it ever was — for a person to qualify as an author because he or she is familiar with the work or can attest that it was performed as described.

The MJA, on receipt of every article submitted for publication, routinely sends out a detailed questionnaire seeking assurance that the work is original and does not involve any conflicts of interest. Regrettably, this practice is not universally followed. For example, Psychopharmacology published a number of articles by Professor David Warburton, Professor of Pharmacology at the University of Reading and himself a senior editor of the journal, claiming that nicotine was not addictive. When a reporter from the Guardian Weekly advised Psychopharmacology this year that the University of Reading and Professor Warburton had received substantial grants from British American Tobacco between 1995 and 2003 (a fact not disclosed in the articles), the current Editor replied that the journal had been unaware of such funding, adding, “It is an author’s responsibility to disclose sources of funding, and widely understood that journals themselves do not expect to police this declaration”.24 Astonishingly, the University of Reading, when apprised of this information, responded that to compel an academic to declare sources of funding would amount to “censorship” and restrict “academic freedom”.24

Conclusion

Recent experience has shown that scientific frauds — even unsophisticated ones — can still slip past peer review. Should reviewers adopt a more critical evaluation of the submitted data? After all, reviewers are volunteers with substantial time commitments elsewhere, who generously agree to critically examine articles submitted to them by editors. Their task is not that of detectives looking for fabrications. Nevertheless, the reviewers of the most recent Sudbø fraud have much to answer for.

Whenever reviewers suspect fraud, they should advise the editor that, in order to undertake an adequate review, they will need to peruse the primary data on which the research is based. Asking authors for primary data may be an unpleasant task for editors, if only because it is likely to raise the hackles of innocent contributors. However, if that is the price we have to pay to ensure that the Darsees, Hwangs and Sudbøs no longer find an outlet for their fraudulent work, so be it. But will these more stringent measures invariably reveal a cleverly manipulated fraud? No! A street-smart rogue will generally find a way to avoid detection, despite increasingly sophisticated methods of detecting fraudulent image manipulation. Unfortunately, reviewers reading articles like the recent study by Sudbø and colleagues, “Risk markers of oral cancer in clinically normal mucosa as an aid in smoking cessation counseling”,9 are still likely to take the data on trust, if only because they involve a specialised area of medicine in which replication of the data would be time-consuming, costly and simply not justified.

Having learnt from sobering experience that the medical profession is not immune from rogue scientists prepared to manipulate data for their 15 minutes of fame, we must surely draw some lessons from that experience.

  • Paul Gerber

  • Faculty of Health Sciences, University of Queensland, Brisbane, QLD.


Correspondence: p.gerber@bigpond.net.au

Competing interests:

None identified.

  • 1. Hwang WS, Roh SI, Lee BC, et al. Patient-specific embryonic stem cells derived from human SCNT blastocysts. Science 2005; 308: 1777-1783.
  • 2. Kennedy D. Editorial retraction. Science 2006; 311: 335.
  • 3. “Cloner” indicted. Weekend Australian 2006; 13-14 May: 16.
  • 4. Sudbø J, Lee JJ, Lippman SM, et al. Non-steroidal anti-inflammatory drugs and the risk of oral cancer: a nested case–control study. Lancet 2005; 366: 1359-1366.
  • 5. Horton R. Expression of concern: non-steroidal anti-inflammatory drugs and the risk of oral cancer. Lancet 2006; 367: 196.
  • 6. Horton R. Retraction — Non-steroidal anti-inflammatory drugs and the risk of oral cancer: a nested case–control study. Lancet 2006; 367: 382.
  • 7. Pincock S. Lancet study faked. The Scientist 16 Jan 2006. Available at: http://www.the-scientist.com/news/display/22952 (accessed Apr 2006).
  • 8. Sudbø J, Lippman SM, Lee JJ. The influence of resection and aneuploidy on mortality in oral leukoplakia. N Engl J Med 2004; 350: 1405-1413.
  • 9. Sudbø J, Samuelsson R, Risberg B, et al. Risk markers of oral cancer in clinically normal mucosa as an aid in smoking cessation counseling. J Clin Oncol 2005; 23: 1927-1933.
  • 10. Wilmut I, Schnieke AE, McWhir J, et al. Viable offspring derived from fetal and adult mammalian cells. Nature 1997; 385: 810-813.
  • 11. Rossner M, Yamada KM. What’s in a picture? The temptation of image manipulation. J Cell Biol 2004; 166: 11-15.
  • 12. Australian Government. Department of Education, Science and Training. Research Quality Framework. Available at: http://www.dest.gov.au/sectors/research_sector/policies_issues_reviews/key_issues/research_quality_framework/default.htm (accessed May 2006).
  • 13. Hamilton DP. Publishing by — and for? — the numbers. Science 1990; 250: 1331-1332.
  • 14. Smith R. Authorship: time for a paradigm shift? BMJ 1997; 314: 992.
  • 15. Sahu DR, Abraham P. Authorship: rules, rights, responsibilities and recommendations. J Postgrad Med [serial online] 2000; 46: 205-210.
  • 16. Smith GM. The meaning of authorship. JAMA 1996; 276: 1385.
  • 17. Darsee JR, Heymsfeld SB, Nutter DO. Hypertrophic cardiomyopathy and human leukocyte antigen linkage: differentiation of two forms of hypertrophic cardiomyopathy. N Engl J Med 1979; 300: 877-882.
  • 18. Darsee JR, Heymsfeld SB. Decreased myocardial taurine levels and hypertaurinuria in a kindred with mitral-valve prolapse and congestive cardiomyopathy. N Engl J Med 1981; 304: 129-135.
  • 19. Culliton BJ. Coping with fraud: the Darsee case. Science 1983; 220: 31-35.
  • 20. Relman AS. Lessons from the Darsee affair. N Engl J Med 1983; 308: 1415-1417.
  • 21. Parry J. University clears scientist of misconduct but says his conduct was misbehaviour. BMJ 2006; 332: 382.
  • 22. Rhoades L. Institutional research misconduct activity: 1991–2000 [abstract]. 2004 ORI Research on Research Integrity Conference; 2004 Nov 12–14; San Diego, Calif, USA. Available at: http://ori.dhhs.gov/documents/rri_abstracts_2004.pdf (accessed Apr 2006).
  • 23. Marušić A, Bates T, Anić A, et al. In the eye of the beholder: contribution disclosure practices and inappropriate authorship [abstract]. Fifth International Congress on Peer Review and Biomedical Publication; 2005 Sep 16–18; Chicago, Ill, USA. Available at: http://www.ama-assn.org/public/peer/peerhome.htm (accessed Apr 2006).
  • 24. Monbiot G. Just follow the money. Guardian Weekly 2006; 17–23 Feb: 14.
