Early COVID-19 research is riddled with poor methods and low-quality results – a problem for science that the pandemic worsened but did not create

Early in the COVID-19 pandemic, researchers flooded journals with studies on the novel coronavirus. Many journals streamlined their peer review process for COVID-19 papers while maintaining relatively high acceptance rates. The assumption was that policymakers and the public would be able to identify valid and useful research within this large volume of rapidly disseminated information.

However, in my review of 74 COVID-19 papers published in 2020 in the 15 general public health journals listed on Google Scholar, I found that many of these studies used poor-quality methods. Several other reviews of studies published in medical journals have likewise shown that much of the early COVID-19 research relied on weak methods.

Some of these papers have been cited many times. For example, the most-cited public health publication listed on Google Scholar used data from a sample of 1,120 people, mostly young, educated women, recruited largely through social media over a period of three days. Findings based on a small, self-selected convenience sample like this cannot be generalized to a wider population. And because the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely to be chance findings. Yet this study has been cited over 11,000 times.
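To see why running so many analyses is a problem, consider a minimal simulation – a hypothetical sketch, not the study’s actual data. At the conventional 5% significance threshold, about 1 in 20 tests of pure noise will come up “significant” by chance, so 500 tests would be expected to produce roughly 25 spurious findings:

```python
# Minimal sketch: how many "significant" results appear by chance alone
# when many tests are run on pure noise. The numbers are hypothetical,
# chosen to mirror the roughly 500 analyses mentioned above.
import random

random.seed(1)
N_TESTS = 500   # number of statistical tests performed
ALPHA = 0.05    # conventional significance threshold

# Under the null hypothesis (no real effect), p-values are uniformly
# distributed between 0 and 1.
p_values = [random.random() for _ in range(N_TESTS)]
false_positives = sum(p < ALPHA for p in p_values)

print(f"{false_positives} of {N_TESTS} tests 'significant' by chance alone")
# Expected count: N_TESTS * ALPHA = 25 spurious findings
```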

A highly cited paper is one that many other researchers have referenced in their own work. But a high citation count is not strongly related to research quality, since these metrics can be gamed and manipulated by researchers and journals. Treating citation counts as a mark of quality increases the chances of low-quality evidence being used to inform policy, further eroding public trust in science.

Methodological matters

I am a public health researcher with a longstanding interest in the quality and integrity of research. This interest is rooted in my belief that science has helped solve important social and public health problems. Unlike the anti-science movement, which spreads misinformation about successful public health measures such as vaccines, I believe that rational criticism is fundamental to science.

The quality and integrity of research depends greatly on its methods. Each type of study design must have certain features so that it can provide valid and useful information.

For example, researchers have known for many years that studies evaluating the effectiveness of an intervention need a control group to know if any observed effects can be attributed to the intervention.

Systematic reviews, which draw together data from existing studies, should pre-register their protocols and describe how the researchers identified the studies to be included, assessed their quality and extracted the data. These elements are essential to ensure that the review covers all the available evidence and tells the reader what is worth paying attention to and what is not.

Certain types of studies, such as one-time surveys of convenience samples that are not representative of the target population, collect and analyze data in ways that do not allow researchers to determine whether one variable caused a particular outcome.

All study designs have standards that researchers can consult. But adhering to these standards slows research down. Having a control group doubles the amount of data that must be collected, and identifying and thoroughly reviewing all the studies on a topic takes far more time than reviewing a few. Generating a representative sample is harder than using a convenience sample, and collecting data at two time points is more work than collecting it all at once.

Studies comparing COVID-19 papers with non-COVID-19 papers published in the same journals found that the COVID-19 papers tended to use lower-quality methods and were less likely to adhere to reporting standards. The COVID-19 papers also rarely had prespecified hypotheses and plans for how the data would be analyzed and the findings reported. This meant there were no safeguards against dredging the data for “statistically significant” results that could be selectively reported.

Such methodological problems were likely overlooked in the much shorter peer review process for COVID-19 papers. One study estimated that the average time from submission to acceptance was 13 days for 686 COVID-19 papers, compared with 110 days for 539 pre-pandemic papers from the same journals. In my own study, I found that two online journals that published a very high number of methodologically weak COVID-19 papers had peer review processes lasting about three weeks.

Publish-or-perish culture

These quality control issues were present before the COVID-19 pandemic. The pandemic merely pushed them into overdrive.

Journals tend to favor “novel” positive results: that is, results that show a statistical association between variables and apparently identify something previously unknown. Since the pandemic was in many ways novel, it allowed some researchers to make bold claims about how COVID-19 would spread, what impact it would have on mental health, how it could be prevented and how it could be treated.

Many researchers feel pressure to publish papers in order to advance their careers. South_agency/E+ via Getty Images

Academics have worked within a publish-or-perish incentive system for decades, in which the number of papers they publish factors into the metrics used to evaluate hiring, promotion and tenure. The flood of COVID-19 information of mixed quality allowed them to increase their publication counts and boost their citation metrics, as journals solicited and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID-19 papers.

Online publishing has also contributed to the decline in research quality. Traditional academic publishing limited the number of articles that could appear because journals were packaged as printed, physical documents, usually produced once a month. In contrast, some of today’s online mega-journals publish thousands of papers per month. Low-quality studies rejected by reputable journals can find an outlet willing to publish them for a fee.

Healthy criticism

Criticizing the quality of published research is very risky. It can be misconstrued as throwing fuel on the raging fire of anti-science. My answer is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the practice of science and the functioning of an open society capable of solving complex problems such as a global pandemic.

At best, a large amount of low-quality research disguised as science published during a pandemic drowns out the real and useful information. At worst, it can lead to poor public health practice and policy.

Science done right produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This means critically examining the quality of studies’ designs, statistical methods, reproducibility and transparency, not the number of times they have been cited or tweeted about.

Science relies on a slow, thoughtful and meticulous approach to data collection, analysis and presentation, especially if it is to inform effective public health policies. Papers that appear in print only three weeks after first being submitted are unlikely to have received thoughtful and thorough peer review. Likewise, disciplines that reward quantity of research over quality are less likely to defend scientific integrity during a crisis.

Rigorous science demands careful thought and attention, not haste. Assembly/Stone via Getty Images

Public health research has much to learn from disciplines that have confronted replication crises, such as psychology, biomedical science and biology. It resembles these disciplines in its incentive structure, study designs and analytic methods, and in its lack of attention to transparent methods and replication. The poor-quality methods in much of the COVID-19 research suggest that public health suffers from similar problems.

Reexamining how the discipline rewards its scholars and evaluates its scholarship could help it better prepare for the next public health crisis.

This article is republished from The Conversation, a non-profit, independent news organization that brings you facts and analysis to help you make sense of our complex world.

It was written by: Dennis M. Gorman, Texas A&M University.


Dennis M. Gorman does not work for, consult with, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.
