A community of whistleblowers looking for errors in scientific research has sent shockwaves through some of the world’s leading research institutions — and the scientific community at large.
High-profile cases of alleged image manipulation in papers authored by the former president of Stanford University and by leaders at the Dana-Farber Cancer Institute have made national media headlines, and some top science leaders think this could be just the beginning.
“At the rate it’s going, we expect another one of these to pop up every few weeks,” said Holden Thorp, editor-in-chief of the Science family of journals, which are among the most influential publications in scientific research.
The sleuths argue that their work is necessary to correct the scientific record and prevent generations of researchers from chasing dead ends because of flawed papers. And some scientists say it’s time for universities and academic publishers to reform how they approach flawed research.
“I understand why the people who find these things are so upset,” said Michael Eisen, a biologist, former editor of the journal eLife and a prominent voice for reform in scientific publishing. “Everyone – the author, the journal, the institution, everyone – is motivated to minimize the importance of these things.”
Over the past decade or so, scientists have noticed widespread problems with scientific images in published papers, posting their concerns online, where they drew little attention.
That began to change last summer, when then-Stanford President Marc Tessier-Lavigne, a neuroscientist, resigned after scrutiny of alleged image manipulations in studies he helped author and a report criticizing his lab culture. Tessier-Lavigne was not found to have engaged in misconduct himself, but members of his lab appeared to have manipulated images in questionable ways, according to a report by a scientific panel convened to investigate the allegations.
In January, a blogger’s post exposed questionable work by top executives at the Dana-Farber Cancer Institute, which subsequently asked journals to retract six papers and issue corrections for dozens more.
In a resignation statement, Tessier-Lavigne noted that the panel did not find that he knew of misconduct and that he never submitted papers he believed were inaccurate. In a statement from its research integrity officer, Dana-Farber said it had taken decisive action to correct the scientific record and that image discrepancies were not necessarily evidence that an author was trying to deceive.
“We’re definitely living through a moment – public awareness hit an inflection point when the Marc Tessier-Lavigne thing happened and has continued steadily ever since, with Dana-Farber being the latest,” Thorp said.
Now, the long-standing problem is in the national spotlight, and new artificial intelligence tools are making it easier to spot problems that range from decades-old errors and sloppy science to images unethically doctored with photo-editing software.
This increased scrutiny is reshaping how some publishers operate. And it is putting pressure on universities, journals and researchers to grapple with new technology, a potential backlog of undetected errors and how to be more transparent when problems are identified.
This comes at a tense moment in academia. Hedge fund manager Bill Ackman, in a post on X last month, discussed weaponizing artificial intelligence to identify plagiarism by leaders at elite universities with which he has ideological differences, raising questions about political motivations in plagiarism investigations. More broadly, public trust in scientists and science has steadily declined in recent years, according to the Pew Research Center.
Eisen said he didn’t think people’s concerns about scientific imagery had crossed into “McCarthyist” territory.
“I think they’re targeting a very specific kind of problem in the literature, and they’re right — it’s bad,” Eisen said.
Scientific publishing forms the foundation of what scientists understand about their disciplines, and it is the primary way researchers with new findings describe their work to colleagues. Before publication, journals send submissions to external researchers, who scrutinize them for errors or flawed reasoning, a process known as peer review. Journal editors also screen studies for plagiarism and copyedit them before publication.
That system is not perfect and still depends on good faith efforts by researchers not to manipulate their results.
Over the past 15 years, scientists have grown increasingly concerned that some researchers are digitally altering images in their papers to skew or emphasize results. Catching those irregularities, usually in images of experiments involving mice, gels or blots, has become a growing priority for scientific journals.
Jana Christopher, a scientific imaging expert who works for the Federation of European Biochemical Societies and its journals, said the field of image-integrity screening has grown rapidly since she began working in it about 15 years ago.
At the time, “nobody was doing this, and there was a kind of denial about research fraud,” Christopher said. “The general perception was that it was very rare, and that every now and then you would come across someone who had faked their results.”
Today, scientific journals have entire teams dedicated to vetting images and trying to ensure their accuracy. More papers are being retracted than ever – more than 10,000 last year, according to a Nature analysis.
A loose group of scientific sleuths polices published research from the outside. The sleuths often find and flag possible errors or manipulations on the online forum PubPeer. Many receive little pay or public recognition for their work.
“To some degree, there’s a sense of caution around it,” Eisen said.
An analysis of comments on more than 24,000 articles flagged on PubPeer found that more than 62% of the comments concerned image manipulation.
For years, sleuths relied on sharp eyes, keen pattern recognition and an understanding of photo-manipulation tools. More recently, their work has been bolstered by rapidly developing artificial intelligence tools that can scan papers for irregularities.
Now, scientific journals are adopting similar technology to try to prevent errors from being published. In January, Science announced that it was using an artificial intelligence tool called Proofig to scan papers that were being edited and peer reviewed for publication.
Thorp, the editor-in-chief of Science, said the family of six journals “quietly” incorporated the tool into its workflow about six months before that announcement in January. Previously, the journals relied on checks by eye to find these types of problems.
Thorp said Proofig identified several papers late in the editorial process that were not published because of problematic images that were difficult to explain, as well as other cases in which authors had “logical explanations” for issues, which they corrected before publication.
“The serious errors that prevent us from publishing a paper are less than 1%,” Thorp said.
In a statement, Chris Graf, director of research integrity at publishing company Springer Nature, said his company is developing and testing “in-house AI image integrity software” to check for image duplication. Graf’s research integrity unit currently uses Proofig to help evaluate articles if concerns are raised after publication.
Graf said processes vary across the company’s journals, but some Springer Nature publications manually check images for manipulation using Adobe Photoshop tools and look for discrepancies in the raw data behind experiments that visualize cell components and other common scientific experiments.
“While the AI-based tools are helpful in speeding up and augmenting the investigations, we still consider the human element of all our investigations to be critical,” Graf said, adding that image-recognition software is not perfect and that human expertise is required to protect against false positives and false negatives.
No tool will catch every mistake or cheat.
“There are a lot of people in that process. We’re never going to catch everything,” Thorp said. “We have to get much better at managing this when it happens, as journals, institutions and authors.”
Many people involved in scientific sleuthing say they grow frustrated when their concerns seem to go unheeded or when investigations progress slowly and without public resolution.
Sholto David, who publicly disclosed concerns about Dana-Farber’s research in a blog post, said he had largely “given up” writing letters to journal editors about errors he found because their responses were so inadequate.
Elisabeth Bik, a microbiologist and longtime image sleuth, said that she often flags image problems and “nothing happens.”
Posting public comments on PubPeer can start a conversation about questionable research, but authors and research institutions often don’t respond directly to online critiques.
Although journals can issue corrections or retractions, it is usually a research institution or university that is responsible for investigating cases. When cases involve biomedical research supported by federal funding, the federal Office of Research Integrity can investigate.
Thorp said institutions must move more quickly to take responsibility when errors are discovered and must speak clearly and publicly about what happened to earn public trust.
“Universities are so slow to respond and so slow to run through their processes, and the longer it takes, the more damage is done,” Thorp said. “We don’t know what would have happened if Stanford, instead of launching this investigation, had said: ‘These papers are wrong. We are going to retract them. It is our responsibility. We are taking the blame. This is on us.’”
Some scientists worry that image concerns only scratch the surface of scientific integrity issues, since it is much easier to spot problems in images than errors in spreadsheets of data.
And while it’s important to police bad papers and seek accountability, some scientists think those measures address only the symptoms of a bigger problem: a culture that rewards researchers who publish the most exciting results, rather than the results that stand the test of time.
“Scientific culture itself doesn’t say that we care about being right; it says that we care about getting flashy papers,” Eisen said.
This article was originally published on NBCNews.com