A Retracted Stem Cell Study Reveals Science’s Shortcomings

The withdrawal of a controversial stem cell paper after 22 years highlights how perverse incentives can distort scientific progress


Trays of brain cells derived from bone marrow cells in the lab of Dr. Catherine Verfaillie at the University of Minnesota on November 10, 2000.

Bruce Bisping/Star Tribune via Getty Images

In June a notice posted on the website of the journal Nature set a new scientific record. It withdrew what is now the most highly cited research paper ever to be retracted.

The study, published in 2002 by Catherine Verfaillie, then at the University of Minnesota, and her colleagues, had been cited 4,482 times by the time of its retraction, according to the Web of Science. The bone marrow cells it described were lauded as an alternative to embryonic stem cells, offering the same potential to develop into any type of tissue but without the need to destroy an early-stage human embryo. At the time the U.S. government was wrestling with the ethics of funding stem cell research, and politicians opposed to work on embryos championed Verfaillie’s findings.

The paper’s tortured history illustrates some fundamental problems in the way that research is conducted and reported to the public. Too much depends on getting flashy papers making bold claims into high-profile journals. Funding and media coverage follow in their wake. But often, dramatic findings are hard to repeat or just plain wrong.

When such papers start falling apart, they are often vigorously defended. Research institutions and journals sometimes drag their feet in correcting the scientific record. This may partly be driven by legal caution; nobody relishes a libel lawsuit from a prominent researcher who objects to a retraction. The reputations of scientists’ employers and journals also suffer when papers are withdrawn, creating an incentive to let things stand.

Nature’s retraction notice for Verfaillie’s paper says that its editors “no longer have confidence in the reliability of the data.” I have had little confidence in the data since 2006. That’s when Eugenie Reich and I, then working for New Scientist, asked Verfaillie to explain duplications of plots across her Nature paper and another published in Experimental Hematology. By then several research groups had failed to repeat the experiments reported in the Nature paper—which was why we selected it for scrutiny.

We subsequently found multiple examples of reused and manipulated images in papers published by Verfaillie and her colleagues. By 2009, two papers had been retracted, and several more had been corrected—including the Nature paper that was subsequently retracted this June.

The investigations we triggered focused on whether there was deliberate data falsification. This led to a finding of scientific misconduct against a single junior researcher—who was not responsible for the images that ultimately caused the Nature paper to be retracted.

This focus on willful misconduct is itself a problem, in my view: it’s very hard to prove intent and assign blame. Junior scientists are often the ones who take the fall. More importantly, papers beset with errors born of the haste to publish can be just as misleading as outright fraud.

The most disturbing twist for me came when the University of Minnesota declined to investigate our concerns about image manipulation in another Verfaillie paper, this one in the Proceedings of the National Academy of Sciences USA. The researcher previously found guilty of misconduct was not an author on that paper, raising questions about whether justice had been done.

The university was able to let that study slide thanks to a policy that didn’t require the investigation of allegations about research that was conducted seven or more years before the allegations were made. PNAS accepted a correction to one duplicated image in that paper but left the most problematic figure untouched. (The journal told me it is now looking again at the matter in light of the Nature retraction.)

Reich and I eventually moved on to other projects. It wasn’t until 2019 that the research integrity consultant Elisabeth Bik reviewed Verfaillie’s work. She extended our findings and raised concerns about newer papers published since Verfaillie moved to KU Leuven in Belgium. Crucially, Bik also found images in the Nature paper that contained duplications, suggesting they had been edited inappropriately.

It was the failure of Verfaillie and her colleagues to provide original images to address these concerns that led to the paper’s demise. Verfaillie didn’t respond to my request for comment, but I’ve obtained correspondence with Nature that shows she fought to keep the paper alive, only reluctantly agreeing to the retraction almost five years after Bik’s investigation. In a statement, Nature said, “We appreciate that substantial delays to investigations can be frustrating, and we apologise for the length of time taken in this case.” (Nature is owned by Springer Nature, which is also the parent company of Scientific American.)

KU Leuven also looked into Bik’s concerns and said in 2020 that it had found “no breaches of research integrity.” It didn’t review the Nature paper, however, on the grounds that the University of Minnesota had examined that paper. The University of Minnesota told me that it did review the issues raised by Bik but said state law prevented it from sharing any further information.

I understand why universities and journals are reluctant or slow to take corrective action. But the saga of Verfaillie’s Nature paper reveals a deeper problem with the perverse incentives that drive “successful” careers in science. A highly cited paper like this is a gateway to promotions and generous grants, and that can starve more promising research of funding.

My profession of science journalism shares the blame, often fixating on the latest findings touted in journal press releases rather than concentrating on the true measure of scientific progress: the construction of a body of repeatable research. In doing so, we mislead the public, selling a story of “breakthroughs” that frequently amount to little.

Around two thirds of the citations to Verfaillie’s paper accrued after Reich and I first went public with our concerns in 2007. We should rethink the incentives that propelled this paper to prominence and then kept it circulating for so many years.

In recent years publishers have experimented with various forms of “open” peer review, in which expert comments appear alongside the research before, at the time of, or after its publication. That’s a start, but my view is that the formal scientific paper, set in stone at the moment of publication, is an anachronism in the Internet age. The more we can move toward ways of publishing research as “living” documents, informed by constructive critical comment, the better. As for science journalism, let’s report on the bigger picture of scientific progress, warts and all.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
