Manuscripts written with LLM assistance exhibit more complex writing but lower research quality, according to a Policy Article by Keigo Kusumegi, Paul Ginsparg, and colleagues that evaluated the impacts of widespread use of generative artificial intelligence (AI) technologies on scientific production. "As AI systems advance, they will challenge our fundamental assumptions about research quality, scholarly communication, and the nature of intellectual labor," write the authors. "Science policymakers must consider how to evolve our scientific institutions to accommodate the rapidly changing scientific production process."

Despite enormous enthusiasm and growing concern surrounding the use of generative AI and large language models (LLMs) across research and academia, there has been little systematic evidence about how these technologies are reshaping scientific production. To address this gap, Kusumegi et al. assembled five large datasets spanning 2.1 million preprints, 28,000 peer-review reports, and 246 million online views and downloads of scientific documents. Using text-based detectors to identify first-time LLM use, they then conducted difference-in-differences analyses comparing researchers' work before and after LLM adoption.

They found that LLM adoption increases a researcher's scientific output by 23.7 to 89.3%, with especially large boosts for authors facing higher writing and language barriers. Kusumegi et al. also found that LLM-assisted manuscripts show a reversal of the traditional positive relationship between writing complexity and research quality: after LLM adoption, more sophisticated language appeared in substantively weaker manuscripts. Lastly, the authors show that LLM adopters read and cite more diverse literature, referencing more books, younger works, and less-cited documents.

"Our findings show that LLMs have begun to reshape scientific production. These changes portend an evolving research landscape in which the value of English fluency will recede, but the importance of robust quality-assessment frameworks and deep methodological scrutiny is paramount," write the authors. "For peer reviewers, journal editors, and the broader community who create, consume, and apply this work, this represents a major issue."
LLM Use Reshapes Science: Boosts Output, Cuts Quality
American Association for the Advancement of Science (AAAS)
/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and edited for clarity, style, and length. Mirage.News does not take institutional positions or sides; all views, positions, and conclusions expressed herein are solely those of the author(s).