How To Make 'Smart City' Technologies Behave Ethically

NC State

As local governments adopt new technologies that automate many aspects of city services, there is an increased likelihood of tension between the ethics and expectations of citizens and the behavior of these "smart city" tools. Researchers are proposing an approach that will allow policymakers and technology developers to better align the values programmed into smart city technologies with the ethics of the people who will be interacting with them.

"Our work here lays out a blueprint for how we can both establish what an AI-driven technology's values should be and actually program those values into the relevant AI systems," says Veljko Dubljević, corresponding author of a paper on the work and Joseph D. Moore Distinguished Professor of Philosophy at North Carolina State University.

At issue are smart cities, a catch-all term that covers a variety of technological and administrative practices that have emerged in cities in recent decades. Examples include automated technologies that dispatch law enforcement when they detect possible gunfire, or technologies that use automated sensors to monitor pedestrian and auto traffic to control everything from street lights to traffic signals.

"These technologies can pose significant ethical questions," says Dubljević, who is part of the Science, Technology & Society program at NC State.

"For example, if AI technology presumes it detected a gunshot and sends a SWAT team to a place of business, but the noise was actually something else, is that reasonable?" Dubljević asks. "Who decides to what extent people should be tracked or surveilled by smart city technologies? Which behaviors should mark someone out as an individual who should be under escalated surveillance? These are reasonable questions, and at the moment there is no agreed upon procedure for answering them. And there is definitely not a clear procedure for how we should train AI to answer these questions."

To address this challenge, the researchers looked to something called the Agent Deed Consequence (ADC) model. The ADC model holds that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that results from the deed.
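As a rough illustration only, and not the authors' actual formalization, the three ADC components can be sketched as independent valences that combine into an overall moral judgment. The class names, scoring scheme, and thresholds below are hypothetical:

```python
# Hypothetical sketch of the Agent-Deed-Consequence (ADC) model:
# each component gets a positive or negative valence, and the overall
# judgment combines all three. Illustrative only.
from dataclasses import dataclass

@dataclass
class Scenario:
    agent_valence: int        # +1 good intent, -1 bad intent
    deed_valence: int         # +1 permissible act, -1 impermissible act
    consequence_valence: int  # +1 good outcome, -1 bad outcome

def adc_judgment(s: Scenario) -> str:
    """Combine the three ADC components into a coarse moral verdict."""
    score = s.agent_valence + s.deed_valence + s.consequence_valence
    if score >= 2:
        return "morally acceptable"
    if score <= -2:
        return "morally unacceptable"
    return "morally ambiguous"

# Good intent, permissible act, good outcome:
print(adc_judgment(Scenario(1, 1, 1)))   # morally acceptable
# Good intent and permissible act, but a bad outcome:
print(adc_judgment(Scenario(1, 1, -1)))  # morally ambiguous
```

The point of the sketch is that each of the three factors is judged separately, so a bad outcome alone does not automatically condemn a well-intentioned, permissible act.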

In their paper, the researchers demonstrate that the ADC model can not only capture how humans make value judgments and ethical decisions, but can do so in a way that can be programmed into an AI system. This is possible because the ADC model uses deontic logic, a formal logic of obligation and permission.

"It allows us to capture not only what is true, but what should be done," says Daniel Shussett, first author of the paper and a postdoctoral researcher at NC State. "This is important because it drives action, and can be used by an AI system to distinguish between legitimate and illegitimate orders or requests."

"For example, if an AI system is tasked with managing traffic and an ambulance with flashing emergency lights approaches a traffic light, this may be a signal to the AI that the ambulance should have priority and alter traffic signals to help it travel quickly," says Dubljević. "That would be a legitimate request. But if a random vehicle puts flashing lights on its roof in an attempt to get through traffic more quickly, that would be an illegitimate request and the AI should not give them a green light.

"With humans, it is possible to explain things in a way where people learn what should and shouldn't be done, but that doesn't work with computers. Instead, you have to be able to create a mathematical formula that represents the chain of reasoning. The ADC model allows us to create that formula."

"These emerging smart city technologies are being adopted around the world, and the work we've done here suggests the ADC model can be used to address the full scope of ethical questions these technologies pose," says Shussett. "The next step is to test a variety of scenarios across multiple technologies in simulations to ensure the model works in a consistent, predictable way. If it passes those tests, it would be ready for testing in real-world settings."

The paper, "Applying the Agent-Deed-Consequence (ADC) Model to Smart City Ethics," is published open access in the journal Algorithms.

This work was supported by the National Science Foundation under grant number 2043612.
