The Care Bears taught a generation of kids that sharing is caring, but few carried this principle into adulthood. Researchers at Michigan State University have found a new angle to promote cooperation: artificial intelligence (AI). The results of this study are published in npj Complexity.
"Cooperation is everywhere in nature," said Christoph Adami, professor at Michigan State University and senior author on the study. "But the mathematics of how cooperation can persist is not easy to understand."
The project revolves around the "tragedy of the commons," a concept that examines how people respond to a finite supply of resources. Without universally agreed-upon cooperation, some members of the population will selfishly exploit the resources to the detriment of everyone in the community.
"Being a good citizen is more costly than being a leech," said Adami. "We have studied this problem for more than 15 years to learn how to lower the barrier for cooperative behavior in order to convert a selfish society into a cooperative one."
Adami and his colleague examined this concept using the "Public Goods" game, which presents participants with a choice: either cooperate by contributing to society, or defect by withholding a contribution. The human players in the game behave like separate populations within a community.
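The tension the game creates can be seen in the standard linear public-goods payoff: contributions are pooled, multiplied, and split evenly, so defectors share in the pot without paying the cost. The sketch below uses an endowment of 1 unit and a multiplier of 1.6, both illustrative values chosen for this example; the study's actual parameters may differ.

```python
def public_goods_round(contributions, multiplier=1.6, endowment=1.0):
    """One round of a linear public-goods game.

    Each player's payoff = endowment - own contribution
                           + multiplier * (total contributions) / group size.
    Parameters here are illustrative, not taken from the paper.
    """
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Two full cooperators and two defectors in one group:
payoffs = public_goods_round([1.0, 1.0, 0.0, 0.0])
# Everyone receives the same share of the pot, but cooperators
# paid for it -- so defectors come out ahead individually,
# even though universal cooperation would maximize the group total.
```

This illustrates Adami's point that "being a good citizen is more costly than being a leech": as long as the multiplier is below the group size, each individual earns more by withholding, whatever the others do.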
Unlike past studies, the research team introduced AI agents into the game. They explored three scenarios: 1) the AI agents were required to cooperate all the time; 2) the players controlled the cooperation of the AI agents; and 3) the AI agents mimicked human behavior.
For the first scenario, Adami found that simply injecting cooperative agents into the system was insufficient to influence human behavior.
The second scenario actually made the situation worse. The human players gamed the system, offloading cooperation onto the AI agents while reaping the benefits. According to the authors, this outcome mirrors real-world scenarios where individuals might leverage AI systems for personal gain without contributing to the collective good.
The final scenario produced a significant shift in cooperative dynamics in the game. The agents that mimicked human behavior lowered the threshold to cooperation, producing larger pools of reciprocity. By designing AI agents that reciprocate human actions, the team found it was possible to create an environment that encouraged cooperative behavior and improved collective welfare.
While this study cannot be extrapolated directly to real-world scenarios, the authors speculate that it could be applied to make small improvements in society, such as increasing cooperation among self-driving vehicles.
"The idea of always being a good actor is not always a good strategy in society but being an actor that will stand firm against a bad actor may achieve a better outcome," said Adami. "Imitation is not just the sincerest form of flattery; it is also a form of communication that can provide the incentive to tip a population into cooperation."