The lecture series "Humanity in the Automated State" continued on 19 March 2026 at Leiden Law School with Dr. Mengchen Dong from the Center for Humans and Machines at the Max Planck Institute for Human Development.
Dr. Dong is a social psychologist and behavioral scientist whose research focuses on AI ethics and governance across interpersonal, organizational, and societal contexts, with particular attention to how power dynamics, personal circumstances, and sociocultural backgrounds shape human-AI interaction.
In her lecture, "False Consensus Biases AI Against Vulnerable Stakeholders," Dr. Dong presented findings from a large-scale empirical study examining public attitudes toward AI-assisted welfare benefit allocation in the United States and the United Kingdom. Drawing on survey data from over 3,200 participants, she explored a central tension: when AI systems offer faster decisions at the cost of higher error rates, whose preferences should count? Her research shows that aggregate public opinion masks deep and consequential divergences. While the general population shows some willingness to accept modest accuracy losses in exchange for speed, welfare claimants, the primary stakeholders in these systems, are significantly more resistant to such trade-offs. This concern is made urgent by the fact that AI deployment in this domain has already led to increases in wrongful benefit denials and erroneous fraud accusations.
A particularly striking finding concerned what Dr. Dong terms "asymmetric insights." Non-claimants consistently overestimate the willingness of welfare claimants to accept AI-driven trade-offs, even when financially incentivized to gauge claimants' perspectives accurately. Claimants, by contrast, demonstrate a much better understanding of non-claimants' views. Because non-claimants constitute the majority and tend to have greater influence over policy, this asymmetry creates conditions for a false consensus: well-intentioned advocacy on behalf of vulnerable groups is no safeguard if it rests on a misreading of those groups' actual preferences. The lecture concluded with a call for direct engagement with vulnerable stakeholders when designing and deploying AI systems in contexts marked by power imbalance, rather than assuming that their preferences can be inferred or adequately represented by others.
The lecture series, organized by Dr. Melanie Fink (Europa Institute) and Dr. Daria Morozova (Department of Business Studies), is funded by the Dutch Research Council (NWO) under the VENI grant "Gateways for Humanity: The Duty to Reason in the Automated State" and supported by Leiden Law School's research focus area "Technology, law, and justice." The series brings together scholars from law, management, public administration, and computer science throughout the 2025/2026 academic year to examine how algorithmic governance reshapes human relationships with public authority.
Upcoming sessions feature Ida Koivisto (University of Helsinki, April 9) and Natali Helberger (University of Amsterdam, May 26).