New Board Game Explores AI Ethics

Concordia University

Artificial intelligence is rapidly reshaping how people study, work and create, but the ethical implications of everyday AI use can be difficult to grasp in the abstract. Concordia researchers are tackling this challenge by turning AI ethics into a board game.

Funded by the International Observatory on the Societal Impacts of AI and Digital Technology (OBVIA) and the Laboratoire vivant d'innovation sur l'apprentissage en enseignement supérieur (LAVIA), the game Feed the Machine places players in the role of a media intern tasked with producing articles while deciding whether, and how, to rely on AI tools.

"We often talk about ethical AI at the level of systems and regulations," says the project's lead researcher Ann-Louise Davidson, a professor in the Department of Education and director of Concordia's Innovation Lab. "But in practice, people experience these ethical questions through the small decisions they make in their daily work. The game helps them reflect on those decisions in a concrete way."

The game is set to be printed in English and French in the coming months, with pilot sessions and research activities planned as part of its rollout.

Playing through the pressure

In Feed the Machine, players progress through a simulated internship, gathering resources to complete stories and earn points. Longer, more in-depth articles yield higher rewards but often require more time and effort. Using AI can speed up the writing process, but doing so may introduce hidden risks.

Each time a player relies on AI or adopts a risky practice, they draw cards representing potential unintended consequences, such as hallucinations or the spread of misinformation. Ethical choices, on the other hand, slow progress but can bring long-term benefits.

Co-creator Scott DeJong, a [program] doctoral candidate, says the game's design models the tensions of integrating AI into our practices. Rather than give players all the answers, it asks them to wrestle with the compromises each decision requires.

The tension intensifies as players watch others around the table advance more quickly by using AI tools, reflecting a real-world pressure many workers are facing today.

"At some point, players stop questioning their choices and start optimizing just to keep up," Davidson explains. "That moment is important: it mirrors what happens in real workplaces, where productivity pressures can overshadow ethical reflection."

At the end of the game, a narrative twist reveals that players' work has been used to train an AI system, prompting a group discussion about responsibility, unintended consequences and collective decision-making.

From research to resource

Originally developed as a research project, Feed the Machine has attracted interest from educators, public organizations and private-sector partners interested in using it to support AI literacy and ethical training.

The team plans to distribute hundreds of copies of the game to institutions and facilitators. A printable version will be made available so that organizations in remote or resource-constrained settings can run sessions without specialized equipment.

The project also explores an emerging research area: how game facilitation can help players process complex and sometimes emotionally challenging topics such as ethics, responsibility and technological risk.

"Facilitation is crucial," Davidson says. "A well-run session helps players reflect on what they felt during the game, not just what they did. That reflection is where the learning really happens."

A suite of AI ethics games

Feed the Machine is one of three tabletop experiences developed by a larger research team, each designed to explore ethical AI from a different angle and level of depth.

The most research-oriented of the three, Ethical Pursuit, is based on systems-mapping tools originally developed to help entrepreneurs analyze the broader impacts of their innovations. Players work through real-world scenarios, such as using AI in recruitment, journalism or social media content, while mapping stakeholders, risks and unintended consequences across 12 dimensions of ethical AI.

A shorter, solo experience, Risky Path, condenses these ideas into a quick reflective exercise that can be completed in about 10 minutes, making it easier to integrate into workshops or classrooms with limited time.

Together, the three games are designed to meet users where they are, whether they are students encountering AI tools for the first time or professionals grappling with complex organizational decisions.

Building AI literacy

Davidson says the project is also part of a broader movement toward "unplugged" approaches to teaching complex technological topics. These methods do not require computers or internet access but still allow participants to explore the social and ethical dimensions of digital systems.

Despite the rapid adoption of generative AI tools since 2022, a recent survey from SOM-Radio-Canada showed that many students have received little formal training in how to use them responsibly. Interactive formats like games could create the space needed to pause, reflect and discuss, the researchers say.

"We rarely slow down to think about the ethical implications of the technologies we use," Davidson says. "Games create a structured moment when people can step back, experiment with different choices and see their consequences, without real-world harm."

The team is currently inviting schools, public organizations and other institutions to request copies of the games and participate in upcoming research on how game-based approaches can support AI ethics education.

Individuals can sign up now to receive a copy of Feed the Machine upon its release.

Discover the Department of Education at Concordia.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions and conclusions expressed herein are solely those of the author(s).