AUTHOR: Professor Matt McGuire, Dean, School of Humanities and Social Sciences.
When the pocket calculator entered classrooms in the 1970s, educators panicked. If machines could calculate faster and more accurately than humans, what was the point of teaching arithmetic at all?
Yet over time, something important became clear: calculators did not eliminate the need for human thinking - they increased its importance. Students still needed number sense. They still needed judgment. They still needed to know when an answer was wrong.
I increasingly think generative AI represents a similar moment for universities.
Not the end of higher education. Not the collapse of learning. But a profound shift in what learning is for.
Generative AI has arrived in universities with astonishing speed. ChatGPT reached 100 million users in just two months. By 2025, more than a billion people were using AI tools globally. Unsurprisingly, higher education has become one of the primary testing grounds for this technology.
And why wouldn't it?
Universities revolve around precisely the kinds of tasks large language models excel at:
- reading
- writing
- summarising
- synthesising
- generating arguments
The result is both exciting and deeply unsettling.
On one hand, AI creates the possibility of scalable personalised learning at a level universities have never been able to provide before.
For decades, education researchers have known that one-to-one tutoring dramatically improves student outcomes. Benjamin Bloom famously described this as the "2 sigma problem": students with personalised tutoring often perform two standard deviations better than those in conventional classrooms.
The problem was always cost.
Now, for the first time, universities can plausibly imagine providing every student with a 24/7 AI tutor - one capable of scaffolding learning, adapting explanations, generating practice activities, and offering immediate feedback at scale.
That is potentially transformative. But there is another side to this story.
AI also challenges some of the foundational assumptions universities have relied upon for centuries - particularly around assessment, authorship, and intellectual development.
Take the university essay.
For generations, essays functioned as evidence of thinking. But in the age of generative AI, students can now produce polished assignments in minutes without necessarily understanding the material.
This is why debates about AI in education often feel so anxious: universities are confronting a deeper question beneath the technology itself.
How do we know learning has actually occurred?
In my view, this is where many institutions risk heading in the wrong direction. The temptation is to frame AI primarily as an academic integrity problem - something to detect, police, or prohibit.
But AI is exposing weaknesses that already existed in assessment design. Traditional essays were never simply about producing text. They were meant to develop reasoning, synthesis, judgment, and sustained intellectual engagement.
If AI can now generate the product, universities need to refocus on the process.
That means:
- making thinking more visible
- valuing iteration and reflection
- redesigning assessments around judgment and application
- integrating supervised and unsupervised forms of assessment
- teaching students how to use AI critically rather than pretending it does not exist
This is not about abandoning standards. It is about redefining what rigorous learning looks like in an AI-mediated world.
The bigger challenge, however, may be cultural rather than technological.
Universities need to resist both extremes:
- the utopian belief that AI will "solve" education
- the dystopian fear that it will destroy it
The reality is more complicated.
AI is likely to be enormously powerful in structured domains where feedback is clear and knowledge can be scaffolded effectively. But education is also profoundly human. It depends on:
- motivation
- trust
- identity
- mentorship
- belonging
- social interaction
These are not incidental features of university life. They are central to learning itself.
The future university will not simply be "AI-powered." It will be defined by how successfully it combines technological capability with human judgment. That is the real challenge ahead.
The institutions that thrive will not necessarily be those with the biggest AI budgets or the fastest adoption cycles. They will be the ones that remain clear about what universities are actually for.
Because educational technologies do not determine outcomes.
People and institutions do.