AI Yields Shallower Insights Than Web Search

PNAS Nexus

Learning about a topic by interacting with AI chatbots such as ChatGPT, rather than following the links returned by a web search, can produce shallower knowledge. Advice written on the basis of this shallower knowledge tends to be sparser, less original, and less likely to be adopted by others.

Shiri Melumad and Jin Ho Yun conducted seven experiments with thousands of online participants who were randomly assigned to learn about various topics, such as how to plant a vegetable garden, how to lead a healthier lifestyle, or how to cope with financial scams, using either large language models (LLMs) or traditional Google web search links. Participants then wrote advice based on what they had learned. Those who used LLMs spent less time engaging with the search results and reported developing shallower knowledge than those who followed web links, even when the underlying facts were identical. When writing their advice, LLM users invested less effort and produced content that was objectively shorter, contained fewer factual references, and was more similar to other participants' advice.

In a further experiment with 1,501 independent evaluators, recipients who did not know how the advice had been produced rated advice written after LLM searches as less helpful, less informative, and less trustworthy than advice based on web search, and were less willing to adopt it.

While LLMs are undeniably efficient, relying on their pre-synthesized summaries can turn learning from an active quest into a passive exercise. According to the authors, LLMs are thus potentially less useful than web search when the goal is developing procedural knowledge: an understanding of how to actually do things.