Scientists Trust AI Over Colleagues: What's Next?

Artificial intelligence has crossed a threshold in the modern workplace. It is being used for everything from helping employees manage schedules to supporting financial forecasts. A similar shift is now unfolding inside research laboratories.

There is currently a boom in national initiatives to accelerate the integration of AI into science. These include the US Genesis Mission and South Korea's AI Co-Scientist Challenge. But despite clear benefits, we believe these institutional drives are neglecting important issues that carry immense risks for scientific research.

Today, more than half of researchers use AI for work tasks, including reviewing academic papers and designing experiments.

AlphaFold is an AI tool developed to predict protein structures for scientific research. Before its release, working out a protein's structure was incredibly time-consuming, sometimes taking years. The same task now takes hours. The work behind AlphaFold was recognised with the 2024 Nobel Prize in Chemistry.

AI tools in medicine now assist with everything from interpreting X-rays and MRIs to supporting doctors' decisions on diagnosis and treatment.

Our key concern is that hasty adoption of AI may gradually erode the scientific culture and human relationships that sustain rigorous research. It starts with the erosion of core thinking skills, as researchers increasingly rely on AI to do that thinking for them. This can alienate researchers from the deeper reasoning behind their work.

Loss of independent thinking

Early-career scientists are particularly vulnerable, because they are still developing their scientific reasoning. Troubleshooting skills and the critical evaluation of ideas may be outsourced to AI systems.

AI's fluent, confident and immediate responses can easily be mistaken for authoritative information. Once researchers begin to treat AI outputs as implicitly correct, the responsibility for judgment calls may gradually shift from them to their machines.

AI's persuasive arguments, probably drawn from mainstream ideas in its training data, could displace more rigorous, time-consuming and creative research approaches. These are traditionally shaped through critical back-and-forth discussion between researchers.

This can evolve into over-dependence. As reasoning is delegated to AI, researchers become less confident in their ability to work unaided. Unfortunately, modern scientific labs are full of conditions that reinforce this dependence, such as intense competition, long hours and frequent isolation.

Limited mentorship, and feedback from colleagues that is delayed, overly critical or politically influenced, can compound the problem. In contrast, AI provides an immediate, patient and nonjudgmental alternative.

Scientists interact with AI systems daily to check computer code, revise illustrations or charts, draft language for grant applications, clarify scientific concepts and, at times, ask for personal advice.

As researchers come to trust an AI assistant, it can begin to function less like a tool and more like a companion. This carries the risk of emotional dependency, too. When OpenAI retired its GPT-4o model, many ChatGPT users expressed a form of grief.

Replacing relationships

Another important concern is that AI could replace human relationships in the office or research lab. AI is always available, nonjudgmental, noncompeting - and indifferent to office politics, with no ego to defend. It remembers context, adapts to individual working styles, and offers reassurance without social cost.

Human scientific relationships are more complicated, involving nuance, criticism, time constraints, hierarchy - and sometimes, ulterior motives. For early-career researchers especially, these interactions can feel risky.

Critical feedback from humans can feel adversarial, while AI responses feel supportive. So, early-career scientists might have good reason to prefer testing ideas or seeking validation through AI, rather than their peers or superiors.

The scientific community cannot thrive without opposing ideas, deep scepticism of consensus, vigorous debate and rigorous mentoring. If AI begins to replace these, it threatens the foundations on which scientific progress has always been built.

The current debate on AI safety mostly focuses on errors in models' responses, or on the circumvention of restrictions imposed on AI systems, known as "jailbreaking". Such safeguards have limited effect when it comes to AI models' societal and cultural impact.

Given the recent drives to get scientists working more closely with AI assistants, we should educate young scientists on the risks of AI dependence. We also need benchmarks that rigorously test AI models' ability to establish boundaries with users, to prevent overdependence and other unhealthy interactions.

Finally, all of us - but especially institutional leaders - should understand the capabilities and permanence of AI companions. They are here to stay, and we should learn to make our relationships with them as healthy as possible.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
