Informed by community feedback, the detailed guidance addresses research methodology and peer review, while setting standards for disclosure and reproducibility
HOBOKEN, N.J., October 28, 2025 - Wiley (NYSE: WLY), a global leader in authoritative content and research intelligence, has set new standards for responsible and intentional AI use, delivering comprehensive guidelines specifically designed with and for research authors, journal editors, and peer reviewers.
With AI usage among researchers surging to 84%, Wiley is responding directly to the need for publisher guidance articulated by 73% of respondents in the most recent ExplanAItions study. Building on similar guidance for book authors published in March 2025, and shaped by ExplanAItions findings, Wiley's new guidance draws on more than 40 in-depth interviews with research authors and editors across disciplines, as well as the company's experts in AI, research integrity, and copyright and permissions.
It offers the following research-specific provisions:
- Disclosure Standards: Detailed disclosure requirements with practical examples show researchers exactly when and how to disclose AI use, covering drafting and editing, study design, data collection, literature review, data analysis, and visuals. This guidance treats disclosure as an enabling practice, not a barrier, helping researchers use AI confidently and responsibly.
- Peer Review Confidentiality Protections: Clear prohibitions on uploading unpublished manuscripts to AI tools, paired with guidance on responsible AI applications for reviewers and editors, outlining where AI use is and is not appropriate in the peer review process.
- Image Integrity Rules: Explicit prohibition of AI-edited photographs in journals, with clear distinctions between permitted conceptual illustrations and factual or evidential images that require verifiable accuracy, clarifying where AI-generated imagery is acceptable.
- Reproducibility Framework: Comprehensive advice on which AI uses require disclosure, helping researchers understand when transparency is necessary for scientific evaluation.
"Researchers need clear frameworks for responsible AI use. We've worked directly with the community to create them, setting new standards that will benefit everyone involved in the creation and consumption of scientific content," said Jay Flynn, Executive Vice President and General Manager, Research & Learning at Wiley. "By partnering with the research community from the start, we're ensuring these AI guidelines are grounded in the realities researchers navigate every day while continuing to protect the integrity of the scientific record."
As the research publishing industry experiences rapid AI adoption, these guidelines will serve as a model for responsible AI integration across the sector. They emphasize that AI use should not result in automatic manuscript rejection. Instead, editorial evaluation should focus on research quality, integrity, and transparency, using disclosure as a routine, intentional practice. Beyond establishing standards, the guidelines provide practical examples, workflow integration tips, and decision-making frameworks.
This guidance is a key component of Wiley's comprehensive, coordinated effort to support researchers as AI transforms scientific discovery. The Wiley AI Gateway, launched earlier this month, allows scholars to access peer-reviewed research directly within their AI workflows, while the ongoing ExplanAItions study provides continuous benchmarks on researcher perspectives and needs. The company has also established core AI principles that guide its integration of AI features into its products and platforms. Together, these initiatives underscore Wiley's commitment to serving as a partner to the research community as it navigates technological change responsibly.