We Should Talk More At School: Researchers Call For More Conversation-rich Learning As AI Spreads

University of Cambridge

Generative Artificial Intelligence could result in a renewed emphasis on conversational approaches to teaching, researchers say, as chatbots make it easier to bypass recall-based learning and test the limits of traditional exams.

In a new conceptual paper, researchers at the University of Cambridge argue that AI raises questions for aspects of traditional models of education which focus on absorbing and memorising information.

The authors suggest that AI, like many earlier communications technologies, is forcing a rethink of education. They urge educators and policymakers to consider moving towards 'dialogic' learning, in which teachers and students talk more, explore problems together, and test ideas from different angles. They argue that AI might in future be used to support students to learn and work collaboratively while drawing on different sources of knowledge.

As an example of how this might be put into practice, their paper reimagines a basic science lesson about gravity.

In a conventional lesson, students might be taught key principles, laws and formulae relating to gravity, which they are expected to memorise and reproduce later. In the dialogic version, they begin with a question, such as "Why do objects fall to the ground?" The paper imagines students discussing this in groups, then running their ideas past an AI chatbot that takes on the guise of different thinkers such as Aristotle, Newton and Einstein.

Approaches like this, the authors suggest, would have the advantage of placing students 'inside' scholarly conversations relevant to the national curriculum, and help them to grasp key concepts by discussing and reasoning their way through them.
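For readers who want a concrete picture of the imagined lesson, the sketch below shows one way a persona-style chatbot prompt might be set up. It is not drawn from the paper: the persona descriptions, the `build_messages` helper and the prompt wording are illustrative assumptions, and the resulting messages could be sent to any chat-capable language model.

```python
# A minimal illustrative sketch (not the authors' tool): building persona prompts
# for a dialogic lesson on gravity. The persona texts and the build_messages helper
# are assumptions for illustration; the messages could be passed to any chat LLM API.

PERSONAS = {
    "Aristotle": "You answer as Aristotle, explaining motion in terms of natural "
                 "place: heavy things seek the centre of the Earth.",
    "Newton": "You answer as Isaac Newton, explaining falling objects through "
              "universal gravitation, F = G*m1*m2/r^2.",
    "Einstein": "You answer as Albert Einstein, explaining gravity as the "
                "curvature of spacetime rather than a force.",
}

def build_messages(thinker: str, student_idea: str) -> list[dict]:
    """Return a chat-style message list that puts a student's idea to one thinker."""
    return [
        {"role": "system", "content": PERSONAS[thinker]
            + " Respond briefly, then ask the students one question back."},
        {"role": "user", "content": student_idea},
    ]

if __name__ == "__main__":
    idea = "We think objects fall because the Earth pulls on them."
    for thinker in PERSONAS:
        print(f"--- {thinker} ---")
        for m in build_messages(thinker, idea):
            print(f"[{m['role']}] {m['content']}")
```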

The paper, in the British Journal of Educational Technology, was co-authored by Rupert Wegerif, Professor of Education, University of Cambridge, and Dr Imogen Casebourne, Researcher at the Digital Education Futures Initiative (DEFI), Hughes Hall, Cambridge.

"Every so often a technology comes along that forces a rethink of how we teach," Wegerif said. "It happened with the internet, with blackboards – even with the development of writing. Now it's happening with AI."

"If ChatGPT can pass the exams we use to assess students, then at the very least we ought to be thinking deeply about what we are preparing them for. One thing we should consider is education as a more conversational, collaborative activity – an approach first advocated by Socrates, but also highly relevant to a digitally connected world with planet-sized problems."

Although schools in the UK are receiving guidance on AI, the paper suggests that many strategies risk bolting the technology on to a system it is already capable of short-circuiting. Students who struggle when writing an essay for their homework, for example, will inevitably be tempted to ask a chatbot to write it for them, with a diminishing risk of being caught.

In such situations, Wegerif argues, AI becomes a "cognitive poison", enabling students to offload their thinking and limiting their progress.

To address this, he proposes that education itself needs to adapt, and that students should enter into conversation with each other and with scholarly ideas. An example prototype tool is the Open University's BCause project, which is exploring the use of technology to support balanced and civil online deliberations between groups of people and which uses AI to create summaries of the discussion.

The paper calls for a "double-dialogic pedagogy" in schools. This means, firstly, foregrounding dialogic methods of teaching in which students and teachers work through problems in conversation, systematically interrogating different perspectives, with AI acting as a guide and support. 'ModeratorBot', currently in development at Cambridge, is one such example: the AI joins group discussions and is intended to intervene gently when some voices dominate, or to introduce open-ended questions that support perspective-switching.

Secondly, the authors argue that AI might induct students into the "dialogue so far" on a given subject, by enabling them to test and develop their ideas against different theories and thinkers – as in the imagined lesson on gravity.

The paper also notes AI's potential to act as a "devil's advocate" that challenges students' ideas to test their reasoning. A relevant example is QReframer, developed by Simon Buckingham Shum: an AI tool that does not answer students' questions but instead interrogates their assumptions, encouraging deeper critical reflection on a given subject.
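As a rough sketch of how such an assumption-questioning prompt might be framed (this is not QReframer's actual implementation; the prompt text and helper function are assumptions), the model can simply be instructed never to answer directly and only to probe the student's reasoning:

```python
# A minimal sketch of a "devil's advocate" prompt in the spirit of tools like
# QReframer (not its actual implementation): the model is told never to give
# the answer, only to surface and question an assumption in the student's claim.

SOCRATIC_PROMPT = (
    "You are a devil's advocate for a school science class. Never give the "
    "answer. Instead, identify one assumption in the student's statement and "
    "ask one open question that challenges it."
)

def build_challenge(student_statement: str) -> list[dict]:
    """Return a chat-style message list asking the model to probe, not answer."""
    return [
        {"role": "system", "content": SOCRATIC_PROMPT},
        {"role": "user", "content": student_statement},
    ]

if __name__ == "__main__":
    for m in build_challenge("Heavier objects always fall faster than light ones."):
        print(f"[{m['role']}] {m['content']}")
```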

Such innovations, the authors argue, demonstrate how Generative AI might be successfully integrated into education, but also how education will need to become more conversational and collaborative to accommodate it.

"Generative AI has arrived at a time when there are many other pressures on educational systems," Casebourne said. "The question is whether it is adopted in ways that enable students to develop skills such as dialogue and critical thinking or ways that undermine this."

The authors add that learning which illuminates different perspectives by placing students inside a dialogue could help equip young people to address the "polycrisis": the term given to interconnected, global challenges – such as climate change, rapid population growth, and threats to democracy – that demand joined-up thinking and collective problem-solving.

"This is a sort of threshold moment," Wegerif added. "The way we teach and learn needs to change. AI can be part of the remedy, but only with approaches to learning and assessment that reward collaborative inquiry and collective reasoning. There is no point just teaching students to regurgitate knowledge. AI can already do that better than we can."
