Humanities scholars, artists, authors and computer scientists recently came together at the University of Toronto to explore how artificial intelligence is poised to change society.
Co-presented by U of T's BMO Lab for Creative Research in the Arts, Performance, Emerging Technologies and AI and University College, the Who's Afraid of AI? conference bridged disciplines and brought together diverse perspectives on a revolutionary technology that is changing the way we live and work - and perhaps even our place in the world.
The two-day event, which took place alongside an accompanying arts festival, featured a keynote by "godfather of AI" Geoffrey Hinton and computer vision expert Fei-Fei Li, who is sometimes dubbed AI's "godmother," as well as talks by Berlin-based artist Marco Donnarumma, British author Jeanette Winterson and scores of others.
Here are three insights drawn from the conference about how AI's future will shape our own:
Learning to co-exist with AI is more important than controlling it

From early skepticism to technological breakthroughs, Hinton, a U of T University Professor emeritus of computer science and 2024 Nobel Prize winner, and Li, a professor of computer science at Stanford University and co-director of the school's Institute for Human-Centered AI, reflected on the evolution of AI during the conference's keynote and Neil Graham Lecture in Science - and what that means for humanity's future.
Hinton urged the need to design AI systems that can co-exist with humanity, even as they surpass human intelligence. He proposed the idea of a "maternal AI" - one that cares about us and protects us against the systems that do not.
"We have to make it so that when it's more powerful than us, it's not going to want to replace us," he said.
Li, meanwhile, emphasized the importance of shared responsibility in shaping our future.
"Instead of talking about what we are afraid of, we should ask, 'What can we do with AI?'" she said, adding that she was particularly optimistic about the positive influence AI could have on the process of teaching and learning.
If we want AI that includes everyone, we need to question the data that powers it

Jutta Treviranus, director of the Inclusive Design Research Centre and a professor in the faculty of design at OCAD University; Eryk Salvaggio, a media artist and fellow at Tech Policy Press; and Donnarumma, an artist, stage director and inventor, discussed how to design a more inclusive AI.
Treviranus warned that AI's reliance on statistical reasoning often excludes marginalized groups. She urged that we ask whose perspectives are missing and aim to design systems around society's lived experiences.
She called for new approaches to data ownership, including data co-operatives and platform co-operatives that give communities control over how their data is used. Her team at OCAD's Inclusive Design Research Centre is also developing a large language model to help children who are non-verbal and have limited mobility.
Donnarumma, whose hearing impairment has shaped much of his work - including pieces like "I Am Your Body," which emerged from reflections on sound, technology and deafness - responded to an audience question about how society can reclaim agency in the age of AI.
"We need more conferences like this," he said, urging people to connect and understand how the current AI systems work.
AI can talk to us, but conversation remains uniquely human

How do machine minds relate to human minds and what can we learn from one about the other?
A panel featuring Jennifer Nagel, a professor in the department of philosophy at U of T Mississauga; Jeanette Winterson, author and fellow of the Royal Society of Literature; and Leif Weatherby, director of the Digital Theory Lab at New York University, explored AI's impact on how society understands human knowledge and communication.
While AI may be able to outperform humans in mathematics or even playing chess, conversation remains a uniquely human skill that AI has not yet mastered, Nagel argued.
"You might think superficially, these systems should be, in a sense, better at conversation than we are," she said.
"They've read all the books. They've seen everything on YouTube. They have massive vocabularies. They can follow our steps very easily. But if you've conversed with a large language model for any period of time, you may have the sense that there's something missing - there's something that we do that they don't do."
To illustrate her point, she engaged in a conversation with Winterson as the audience looked on.
The exchange included signals like nodding and interjections like "oh" and "yeah," which can carry crucial meaning. AI is not trained in the same way, Nagel said; it operates in "broadcast mode," predicting the text of an exchange rather than engaging with us.
"These models get smarter over time in the sense that their parameters get updated every six months, but they're not learning in real-time conversational exchanges the way that you and I are learning from each other."