Scientists share their work by publishing articles in journals such as Nature, Science or PLOS Biology. A major part of the publishing process involves having these manuscripts reviewed by unpaid peers: scientists who specialize in the same topic and volunteer to make sure the science is sound and that the authors haven't missed anything critical in their data analysis.
The peer review process has reached a critical point where there are too many manuscript submissions and not enough peer reviewers. Carl Bergstrom, a University of Washington professor of biology, and Kevin Gross, a North Carolina State University professor of statistics, used mathematical modeling to demonstrate that this crisis takes the form of a self-perpetuating cycle. The team describes this cycle and potential interventions in a paper published Feb. 24 in PLOS Biology.
UW News reached out to Bergstrom and Gross to learn more about this cycle and how the potential interventions could mitigate this crisis.
Why is the process of peer review important for science?
Carl Bergstrom and Kevin Gross: Peer review helps scientific literature maintain its credibility. The system of peer review guarantees that published research has been scrutinized by experts in the relevant field. While peer review is not, and never has been, a watertight seal of approval — peer reviewers are human, too! — it has proven to be a system that, by and large, helps ensure the reliability of the scientific literature.
What is happening to create and perpetuate this cycle you describe in your paper?
CB and KG: The basic insight that drives our paper is that when peer review functions effectively, it helps journals select the science most worthy of their readers' attention and creates a strong motivation for scientists to be selective about where they submit their work. After all, a scientist gains little by having their paper rejected by a top journal. So high-quality reviewing encourages scientists to choose where they submit their work carefully, and to submit only their very best work to the most prestigious outlets. Thus, effective peer review reinforces itself through a virtuous cycle.
The cycle can spin in the other direction too. If peer reviewers have to dilute their efforts over a larger volume of submitted manuscripts, then each manuscript may receive less scrutiny and editors' decisions consequently become less predictable. This encourages authors to try their luck at journals that might otherwise have been a stretch, increasing the volume of manuscripts that need to be reviewed even further and making editorial decisions even less predictable, and so on.
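The feedback loop described above can be sketched as a toy discrete-time simulation. This is our own illustrative construction, not the model from the Bergstrom and Gross paper, and every parameter value here is an invented assumption: reviewer capacity is fixed, scrutiny per manuscript falls as submissions rise, and noisier decisions draw in more speculative submissions the next cycle.

```python
# Toy sketch of the reviewing feedback loop. Illustrative only; not the
# authors' actual model, and all parameter values are assumptions.
CAPACITY = 1000.0         # total reviewer-hours available per cycle
BASE_SUBMISSIONS = 500.0  # submission volume if decisions were fully predictable
SENSITIVITY = 400.0       # extra "try their luck" submissions per unit of noise

def decision_noise(scrutiny_hours):
    """Less scrutiny per manuscript -> noisier editorial decisions (0 to 1)."""
    return 1.0 / (1.0 + scrutiny_hours)

def step(submissions):
    """One cycle: spread fixed capacity over submissions, compute noise,
    and return next cycle's submission volume."""
    scrutiny = CAPACITY / submissions  # reviewer-hours per manuscript
    noise = decision_noise(scrutiny)
    # Noisier decisions encourage authors to submit to journals that would
    # otherwise be a stretch, raising next cycle's volume.
    return BASE_SUBMISSIONS + SENSITIVITY * noise

volume = 600.0
for _ in range(50):
    volume = step(volume)
print(round(volume, 1))  # under these parameters, volume settles at a fixed point
```

Under these particular parameters the loop converges to a stable equilibrium; steepening the sensitivity of authors to decision noise is what would push such a system toward the runaway cycle the interview describes.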
Why are we seeing this crisis now?
CB and KG: To be fair, scientists have been bemoaning the fragile state of peer review for decades. So we are far from the first to observe that using the goodwill of volunteers as a linchpin of the scientific enterprise may not be a robust model.
But there is reason to believe that the situation is more dire now. There isn't one single cause driving this more recent turn — many factors contribute. For example, over the past few decades, scientific communities have become larger and looser knit, and the willingness to volunteer tends to decline as groups become more diffuse.
Large commercial publishers have also discovered that scientific publishing can be a lucrative business — especially when they can dip into a tradition of free peer-review labor. Drawn by the sizable profits they could make, these publishers have launched countless new journals, crowding the journal landscape. Scientists, in turn, now have more options for what to do with a paper that has been rejected once or numerous times. There's always another journal to send it to. And each time a paper is resubmitted, a new set of peer reviewers must be found.
The pandemic also shocked the system by compelling many researchers to reassess their time commitments. Collectively, we seem not yet to have rebounded to pre-pandemic levels of willingness to review.
Should people be concerned about the science described in current peer-reviewed papers?
CB and KG: Well, to back up a bit, the primary responsibility for the integrity and accuracy of the scientific literature rests squarely with the authors, as it always has. And, thankfully, most authors have strong reputational incentives to make sure that their work is solid and will stand the test of time. But authors have their blind spots.
Peer review isn't going to suddenly collapse and take the literature down with it, but as the system becomes stressed, we might start to see a few more cracks emerge. While that isn't catastrophic, it isn't good for science, either. Social trust in science can wax and wane, and even a little slippage has real consequences for scientists, their livelihoods and society as a whole.
What about this crisis concerns you?
CB and KG: Perhaps our biggest concern is that journal editors who become frustrated with the inability to find willing peer reviewers will turn to AI for machine review instead. There may be ways in which machine review could complement human peer review, but we think it's important that human review continues to be the engine of editorial deliberations at scientific journals.
Peer review is not just a process for making an accept-or-reject decision. Peer reviewers also provide commentary and feedback for the authors. These reports provide a venue for honest dialogue that helps researchers hone their ideas and grow in their careers. Outsourcing manuscript review to robots risks collapsing a discourse that is crucial to scientific progress.
One solution you discuss is to pay reviewers. Is this a viable solution?
CB and KG: Paying reviewers isn't as crazy as it may sound. The landscape of scientific publishing includes both nonprofit and for-profit journals, and all sorts of business models in between. It seems reasonable that scientists who review for for-profit journals, in particular, should be remunerated for their efforts, since they provide a service on which the viability of the journal depends.
Perhaps the most compelling argument for paying reviewers is that, of all the possible interventions one could propose, it requires the least amount of coordination among different stakeholders to succeed. As soon as one journal figures out a working model for paying reviewers, then everyone will notice that paying reviewers is viable, and there will be market pressure on other journals to follow suit.
Another idea that we quite like is for journals to offer substantial monetary awards for the most constructive or helpful reviews. This idea has its drawbacks too. Editors would have to spend a little bit of time choosing the prizewinning reviews, and editors could always select their friends for the prize. But every alternative is going to have its drawbacks, and it's important to focus on the net effect, especially when the viability of the status quo seems so tenuous.
If we want to keep peer review voluntary, what are other possible solutions?
CB and KG: There are lots of possible interventions. But the intervention that probably would enjoy the broadest support would be for university hiring and promotion committees to prioritize quality of publications instead of quantity. Most academic scientists today are working in a system that rewards a researcher for the number of publications above all else. This obviously creates incentives for researchers to submit lots of manuscripts, which puts lots of pressure on peer review. If the norms changed so that hiring and promotion hinged on a candidate's top two or three papers instead, then researchers' incentives would change and the pressure on peer reviewers would diminish.
This research was funded by the National Science Foundation and the Templeton World Charity Foundation.