The Dana-Farber Cancer Institute has called for the retraction of six studies and corrections in 31 other papers after a blogger and biologist highlighted alleged errors that he said ranged from sloppiness to “very serious concerns.”
The allegations – against top scientists at the prestigious Boston-based institution, which is a teaching affiliate of Harvard Medical School – have put the institute at the center of a debate about research misconduct, how to police scientific integrity and whether the organizational structure of academic science encourages shortcuts or cheating.
The criticism also points to the growing role of artificial intelligence in catching false or questionable science.
The allegations, related to image duplication and manipulations in biomedical research, are similar to concerns aired last year against former Stanford University president Marc Tessier-Lavigne, who resigned following an investigation.
Biologist and blogger Sholto David drew attention to Dana-Farber after he highlighted problems in several studies by its top researchers.
In early January, David detailed duplications and potentially misleading image alterations in dozens of papers produced primarily by Dana-Farber researchers, writing in a blog post that research from some of the institute’s top scientists appeared “hopelessly corrupted” with errors evident from even a cursory reading.
After the publication of David’s blog post, Dr. Barrett Rollins, the institute’s research integrity officer, said in an emailed statement Wednesday that Dana-Farber scientists had requested that six manuscripts be retracted, that 31 manuscripts were being corrected and that one manuscript remained under examination.
Rollins added that some of the papers David cited had already come up in previous “ongoing reviews” by the institute.
“The presence of image inconsistencies in a paper is not evidence of the author’s intent to deceive,” Rollins said. “That conclusion can only be reached after a careful examination based on facts which are a central part of our response. Our experience is that errors are often unintentional and do not rise to the level of misconduct.”
Ellen Berlin, director of communications at Dana-Farber, wrote in an email that all the allegations related to pure or basic science, rather than studies that led to the approval of cancer drugs.
“The review of Dana-Farber’s research papers does not affect cancer treatment in any way,” Berlin wrote.
David is one of many scientific sleuths who comb journal articles for errors or fabrications. He compared his hobby to playing a game of “spot the difference” or completing a crossword.
“It’s a puzzle,” David said in an interview, adding that he enjoys looking at figures that show the results of common biology experiments, such as those involving cells, mice and western blots, a laboratory method that identifies proteins.
“Of course, I care about getting the science right,” he said.
Scientific errors in published work have become a focal point in the scientific community in recent years. Retraction Watch, a website that tracks retracted papers, lists more than 46,000 of them in its database, with records stretching back to the 1970s. A 2016 Nature article stated that more than a million papers are published in the biomedical field each year.
The PubPeer website, which allows outside researchers to post comments on peer-reviewed research published in journals, is a forum where scientists can raise concerns. David said he has written more than 1,000 anonymous reviews on the site.
David said a trail of questionable science led him to Dana-Farber. In a previous investigation, David scrutinized the work of a Columbia University surgeon. He found flaws in the surgeon’s collaborative work, which eventually drew his attention to the leadership team at Dana-Farber.
David said he went through the leadership page on the Dana-Farber website, checking the work of its top scientists and leaders.
He found a number of image errors, many of which could be explained by copy-and-paste mistakes or mix-ups, but others in which images appeared stretched or rotated, which is harder to explain. Some of the errors had previously been flagged by other users on PubPeer. David combined those earlier concerns with his own findings in a blog post focused on the institute. The Harvard Crimson, a student newspaper, first reported on the allegations.
David said images of mice in one paper looked as if they had been digitally altered in ways that appeared deliberate and could distort the takeaway of the paper.
“I don’t understand how that could be an accident,” David said.
Most of the errors are “less serious” and may have been accidents, he said. But the rash of mistakes, David said, points to a broken research and review process if no one caught them before publication.
“When you see duplication, that’s a symptom of a problem,” David said.
Elisabeth Bik, a scientist who investigates image manipulation and research misconduct, said David’s work was credible.
“The allegations he’s raising are exactly the same ones I would raise. They are spot on,” Bik said.
Bik, who has been doing this type of work for about 10 years, said she is often frustrated by the lack of response from academic institutions when she points out errors. She said she was pleased to see that Dana-Farber had responded and had already taken proactive steps to correct the scientific record.
“I am very surprised that the institute is taking action. I hope they follow up with the publishers,” Bik said. “I’ve reported many of these cases where nothing happened.”
Image manipulation has become a particular concern in scientific communities, especially after Stanford’s Tessier-Lavigne resigned as the university’s president following criticism of his past neuroscience research.
Tessier-Lavigne was cleared of fraud or falsifying data, but members of his lab were found to have improperly manipulated research data or engaged in “deficient scientific practices,” according to a report by a panel of external researchers who assessed the situation.
The report said Tessier-Lavigne’s lab culture rewarded junior scientists whose work produced favorable results and marginalized those whose work did not, a dynamic that could lead young scientists to manipulate results to win favor.
Outside researchers said that kind of culture is not uncommon at the best institutions, where ambitious professors can preside over sprawling labs with dozens of graduate students eager to please their bosses and aware that publishing a splashy paper could quickly advance their careers.
Some scientists are increasingly concerned that limited opportunities for young researchers and a problematic system for publishing scientific work have led to career-driven corner-cutting.
“There’s a lot of incentive to produce mounds of research and publish it in these high-impact journals to make your name,” said Dr. Ferric Fang, a microbiologist and professor at the University of Washington. “We are encouraging this kind of behavior.”
Problems with images published in research are widespread.
In a 2016 article published in a journal of the American Society for Microbiology, Bik and Fang evaluated images from more than 20,600 articles in 40 biomedical journals from 1995 to 2014. They found that about 3.8% of the articles contained “problematic figures” and that at least half of those “suggested deliberate manipulation.”
New tools are helping institutions and others root out mistakes and potential misconduct. David used a program called ImageTwin to identify some of the suspicious figures from the Dana-Farber researchers.
The artificial intelligence-powered software can ingest a study, analyze its images, and within 15 seconds compare them against one another and against about 50 million scientific images in its database, according to ImageTwin co-founder Patrick Starke.
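The comparison Starke describes — checking each figure in a paper against the others — can be illustrated in miniature with a perceptual “average hash,” a standard near-duplicate detection technique. This is a toy sketch of the general idea, not ImageTwin’s actual algorithm; real tools operate on full-resolution figures with much larger hashes and image databases. Images here are represented as plain 2D lists of pixel intensities for illustration:

```python
# Toy near-duplicate detection via perceptual "average hash":
# each image becomes a bit string (1 where a pixel exceeds the image's
# mean intensity), and images whose hashes match closely are flagged.

def average_hash(pixels):
    """Return a bit string: '1' where a pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count the positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def near_duplicates(images, max_distance=0):
    """Return index pairs whose hashes differ by at most max_distance bits."""
    hashes = [average_hash(img) for img in images]
    pairs = []
    for i in range(len(hashes)):
        for j in range(i + 1, len(hashes)):
            if hamming(hashes[i], hashes[j]) <= max_distance:
                pairs.append((i, j))
    return pairs

# Hypothetical 2x2 "figures": blot_b mimics blot_a with tiny pixel noise.
blot_a = [[10, 200], [12, 198]]
blot_b = [[11, 199], [13, 197]]   # near-identical to blot_a
blot_c = [[200, 10], [12, 198]]   # genuinely different layout
print(near_duplicates([blot_a, blot_b, blot_c]))  # → [(0, 1)]
```

Because the hash depends only on each pixel’s relation to the image mean, small brightness or noise differences between a figure and its duplicate do not change the bits, while a genuinely different image does — which is why copy-pasted panels stand out even after minor edits.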
The software has been commercially available since 2021. Starke, who is based in Vienna, said that several hundred academic organizations are using the tool to identify problems before publication.
“It’s great if it gets caught and retracted, and it’s even better if it’s never published,” said Starke, who envisions the program being used in academia as routinely as plagiarism-checking tools are used on text.
But Starke said staying ahead of those who cut corners or cheat will be a challenge. Studies have already shown that AI programs can generate realistic figures of common experiments like western blots, Starke said. His company is developing tools to search for AI-generated patterns in research images.
“If AI can generate realistic photos of faces, it’s probably already happening in the scientific literature,” Bik said. “That’s the next level of corruption, and I’m not sure we’re even ready for that.”
This article was originally published on NBCNews.com