Canadian youth have delivered a set of AI policy recommendations to ministers, parliamentarians and senators in Ottawa as they seek to make their voices heard at a pivotal moment in the debate around AI and online harms.
Among the proposals are an age-verification system to restrict users' access to generative AI platforms and a call for AI companies to address the addictive design of AI chatbots.
The report comes after 100 youth, split across four citizens' assemblies, discussed and debated key issues around AI chatbots, information integrity, data privacy and age assurance.
Representatives from the Gen(Z)AI project, spearheaded by Simon Fraser University's Dialogue on Technology Project, delivered their recommendations to Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, and Marc Miller, Minister of Canadian Identity and Culture, earlier today.
Liam McKay-Argyriou, an SFU undergrad in communication, took part in the initiative and is among those making the trip to Ottawa.
"While the economic benefits of AI are exciting, young Canadians have experienced how these new tools can cause serious harms that individual citizens are not equipped to address," says McKay-Argyriou.
"We need legislation that enforces clear guardrails to protect the safety of our data, mental health and democratic institutions, in the same way our government upholds safety standards for automobiles or pharmaceuticals.
"It feels empowering to share my perspective on a topic of importance to me and know my voice will be heard by decision-makers, something many youth don't have the chance to experience."
There is currently no binding legal framework regulating either AI systems or online platforms in Canada, following the collapse of both the Online Harms Act (Bill C-63) and the Artificial Intelligence and Data Act (AIDA) in 2025.
"My message to legislators is that youth need to be consulted and involved in the policy processes," says Joie Marin, an SFU undergrad in communication.
"AI and online harms regulations are a form of care that need to be implemented in a way that supports the digital empowerment of young people.
"Now is a critical time to act and something must be done now to keep Canadians safe in digital environments."
The recommendations put forward in the report feed directly into the process of shaping Canada's digital governance architecture, according to Fergus Linley-Mota, director of the Dialogue on Technology Project.
"Young people are on the front lines of AI technology and they're facing a whole series of disruptive changes to their lives," says Linley-Mota.
"Yet they've been largely absent from the governance processes shaping their digital lives. Gen(Z)AI was set up to change that."
The 100 youth were selected nationally by civic lottery to reflect Canada's geographic, linguistic and demographic diversity.
The four citizens' assemblies each tackled a specific policy theme: AI chatbots in Toronto; information integrity in Montreal; data privacy in Vancouver; and age assurance in Halifax.
After three days of discussions, youth in each location came up with issue statements and a set of recommendations for their policy area.
"Our youth participants expressed a consistent and striking ambivalence - they use AI tools, often extensively, while simultaneously distrusting the platforms that deliver them, the governments that regulate them, and the incentive structures that shape them," says Helen Hayes, project co-lead and a fellow at SFU's Morris J. Wosk Centre for Dialogue.
"This is a rational response to a governance landscape that has, until now, spoken about young people rather than with them.
"The legislation being constructed now will shape the digital lives of young Canadians for decades to come. It's vital that they have a seat at the table."