AI will soon be able to audit all published research. What will that mean for public trust in science?

Self-correction is fundamental to science. One of its most important forms is peer review, in which anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record.

Authors

  • Alexander Kaurov

    PhD Candidate in Science and Society, Te Herenga Waka — Victoria University of Wellington

  • Naomi Oreskes

    Professor of the History of Science, Harvard University

Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource-intensive.

Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn't catching everything

In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing.

This has opened the doors for exploitation. Opportunistic "paper mills" sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees.

Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favour of their products.

These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability. In response, efforts have sprung up to bolster the integrity of the scientific enterprise.

Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures.

Investigative journalists expose corporate influence. A new field of meta-science (the science of science) attempts to measure the processes of science and to uncover biases and flaws.

Not all bad science has a major impact, but some certainly does. It doesn't just stay within academia; it often seeps into public understanding and policy.

In a recent investigation, we examined a widely cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry.

Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide.

When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This "science is broken" narrative undermines public trust.

AI is already helping police the literature

Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation.
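
To illustrate the kind of check involved, the sketch below flags near-duplicate figures using perceptual hashing. It is a minimal toy example, not ImageTwin's or Proofig's actual method, and assumes the Python Pillow and imagehash packages are installed.

```python
# Toy sketch: flag near-duplicate figures via perceptual hashing.
# Not any commercial tool's actual pipeline; illustration only.
from pathlib import Path
from PIL import Image
import imagehash

def find_near_duplicates(figure_dir, max_distance=5):
    """Return pairs of figure files whose perceptual hashes are close."""
    hashes = {}
    for path in Path(figure_dir).glob("*.png"):
        hashes[path] = imagehash.phash(Image.open(path))
    paths = list(hashes)
    pairs = []
    for i, a in enumerate(paths):
        for b in paths[i + 1:]:
            # Subtracting two hashes gives their Hamming distance;
            # small distances suggest duplicated or lightly edited images.
            if hashes[a] - hashes[b] <= max_distance:
                pairs.append((a, b))
    return pairs
```

Real services go much further, detecting rotation, cropping, splicing and AI-generated imagery, but the underlying idea of comparing image fingerprints at scale is the same.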

Natural language processing tools flag "tortured phrases", the telltale word salads of paper mills. Bibliometric dashboards such as one by Semantic Scholar trace whether papers are cited in support or contradiction.
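
As a toy illustration of how such flagging can work, the snippet below matches text against a short list of known tortured phrases. The phrase-to-original pairs shown are documented examples from the tortured-phrases literature; the code itself is a simplification, not any specific tool's implementation.

```python
# Toy sketch: flag known "tortured phrases" (machine-paraphrased terms).
import re

# Documented examples of tortured phrases and the terms they distort.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound neural organization": "deep neural network",
    "bosom peril": "breast cancer",
    "colossal information": "big data",
}

def flag_tortured_phrases(text):
    """Return (tortured phrase, likely original term) pairs found in text."""
    hits = []
    for phrase, original in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", text, re.IGNORECASE):
            hits.append((phrase, original))
    return hits

print(flag_tortured_phrases(
    "We train a profound neural organization on colossal information."
))
# [('profound neural organization', 'deep neural network'),
#  ('colossal information', 'big data')]
```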

AI, especially agentic, reasoning-capable models that are increasingly proficient in mathematics and logic, will soon uncover more subtle flaws.

For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work mentioned above also relies substantially on large language models to process large volumes of text.
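
The general pattern can be sketched simply: hand each published proof to a reasoning-capable model with a verification prompt and collect the ones it flags for human follow-up. The sketch below is purely illustrative, not the Black Spatula Project's code; `ask_model` is a hypothetical stand-in for whichever LLM API is used.

```python
# Illustrative pattern only: batch-verify proofs with an LLM.
# `ask_model` is a hypothetical callable (prompt string -> answer string).

VERIFY_PROMPT = """You are checking a published mathematical proof.
Go through it step by step. For each algebraic manipulation, state
whether it is valid. End with VERDICT: SOUND or VERDICT: SUSPECT."""

def audit_proofs(proofs, ask_model):
    """Run the verification prompt over many proofs; return flagged IDs."""
    suspects = []
    for paper_id, proof_text in proofs.items():
        answer = ask_model(VERIFY_PROMPT + "\n\n" + proof_text)
        if "VERDICT: SUSPECT" in answer:
            suspects.append(paper_id)
    return suspects
```

In practice, anything a model flags would still need human verification, since models can misjudge valid steps as well as miss invalid ones.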

Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record. A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors.

We do not yet know how prevalent fraud is, but we do know that an awful lot of scientific work is inconsequential. Scientists know this; it is widely discussed that a good deal of published work is never or only rarely cited.

To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments.

What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable.

As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.

Reframing the scientific ideal

Safeguarding public trust requires redefining the scientist's role in more transparent, realistic terms. Much of today's research is incremental, career-sustaining work rooted in education, mentorship and public engagement.

If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless.

A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.

A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science.

Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings, or better still takes the lead, the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself.

Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken.

The Conversation

Naomi Oreskes has received funding from various academic and philanthropic organisations. Currently, her research is partly funded by the Rockefeller Family Fund and the Maine Community Fund. She also receives royalties from her publications and honoraria for speaking events.

Alexander Kaurov does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation. This material from the originating organization/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).