Strategies to Boost Group Problem-Solving

University of Pennsylvania

When a crowd gets something right, like guessing how many beans are in a jar, forecasting an election, or solving a difficult scientific problem, it's tempting to credit the sharpest individual in the room. But new research suggests focusing on the 'expert' can lead groups astray.

In a study published in Proceedings of the National Academy of Sciences, researchers led by Joshua Plotkin at the University of Pennsylvania show that collective intelligence, or the "wisdom of crowds"—a phenomenon wherein groups often outperform individuals on complex tasks—is more likely to emerge when individuals are rewarded not for being right themselves, but for helping the group get closer to the truth.

Computer scientists can engineer collective intelligence in algorithms with centralized control, assigning subtasks, tuning whose input counts more, and basically running the whole operation like a tower controller. But real-world groups, whether people, animals, or loose networks of decision-makers, rarely have that kind of top-down, organized control.

Instead, individuals in natural settings tend to learn socially, copying apparently successful strategies from one another.

"Social learning is everywhere," Plotkin says, "but it can cause a problem for collective problem solving. The very mechanism that spreads good ideas can also wipe out the vital variation a group needs to perform well together."

The researchers developed a mathematical model to tease out how a group of relatively uninformed individuals can escape the expert trap: the tendency for a crowd to lean on its sharpest individual until collective diversity wanes.

They tested this against a complex prediction task where the outcome shifts over time based on dozens of random, interconnected factors. Think predicting the weather: no single person can track every gust of wind or humidity spike simultaneously.

The model tasks each individual with watching a single factor. Each person makes a personal prediction based on that factor and their belief about how it relates to the outcome, and the model aggregates those narrow glimpses into a single "crowd" forecast.
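As a rough illustration of that setup, here is a minimal Python sketch in which each individual watches one factor, holds a noisy belief about its weight, and the crowd forecast is a simple average of scaled single-factor guesses. The averaging rule, the noise model, and the scaling are assumptions made for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

n_factors = 20                                # hidden factors driving the outcome
true_weights = rng.normal(size=n_factors)     # how each factor affects the outcome
factors = rng.normal(size=n_factors)          # current value of each factor
outcome = true_weights @ factors              # the quantity the crowd tries to predict

# One individual per factor; each holds a noisy belief about how their
# watched factor relates to the outcome (the noise level is an assumption).
watched_factor = np.arange(n_factors)         # individual i watches factor i
beliefs = true_weights[watched_factor] + rng.normal(scale=0.5, size=n_factors)

# A personal prediction uses only the watched factor; scaling by n_factors is
# an illustrative choice so that a plain average recovers the full sum.
personal_preds = n_factors * beliefs * factors[watched_factor]
crowd_forecast = personal_preds.mean()

print(f"true outcome:   {outcome:+.2f}")
print(f"crowd forecast: {crowd_forecast:+.2f}")
```

No single individual's guess is close to the truth here, but the aggregate can be, which is the sense in which the crowd outperforms its members.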

To determine how individual incentives might produce collective intelligence, they tested three reward schemes: rewarding those whose predictions are accurate (the experts); rewarding "niche experts," those whose predictions are accurate but who focus on underrepresented factors; and rewarding "reformers," those whose contributions improve the collective prediction regardless of their own personal accuracy.
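Continuing the sketch above, the three reward rules might look roughly like the following. The crowding penalty for niche experts and the leave-one-out measure of "improving the collective prediction" are assumed forms chosen for illustration, not necessarily the paper's definitions.

```python
import numpy as np

def expert_rewards(preds, outcome):
    """Standard experts: reward each individual's own accuracy."""
    return -np.abs(preds - outcome)

def niche_expert_rewards(preds, outcome, watched_factor):
    """Niche experts: reward accuracy, but favor individuals who watch
    factors that few others watch (the crowding penalty is an assumed form)."""
    _, inverse, counts = np.unique(watched_factor, return_inverse=True,
                                   return_counts=True)
    crowding = counts[inverse]        # how many individuals watch the same factor
    return -np.abs(preds - outcome) / crowding

def reformer_rewards(preds, outcome):
    """Reformers: reward the improvement each individual brings to the crowd
    forecast, measured here as the leave-one-out change in crowd error."""
    n = len(preds)
    crowd_error = abs(preds.mean() - outcome)
    rewards = np.empty(n)
    for i in range(n):
        # Crowd forecast if individual i had stayed silent.
        loo_forecast = (preds.sum() - preds[i]) / (n - 1)
        rewards[i] = abs(loo_forecast - outcome) - crowd_error
    return rewards
```

Under the last rule, an individual whose own guess is far off can still earn a positive reward, as long as dropping their input would make the crowd forecast worse.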

They found that rewarding the standard experts fails because it inadvertently destroys the diversity of opinion. In this scenario, individuals simply imitate the single most successful peer until everyone is watching the same factor and ignoring the rest of the puzzle.

Rewarding niche experts produces predictions that can be accurate but fragile; the group struggles when the expert is out of their depth. When the problem changes suddenly, when factors are correlated, when some information is missing, or when the environment keeps shifting, the niche-expert approach can still converge, but it may converge to the wrong prediction.

By contrast, rewarding reformers facilitates diverse beliefs and collective accuracy, helps the process recover after changes (e.g., to the task), and keeps working when individual judgments are noisy, biased, overconfident, or anomalous. What matters is not who is right, but whose contribution moves the group's prediction in a better direction.

Speaking to more natural, real-world scenarios, first author Guocheng Wang says, "Reformers don't need to be accurate on their own, but they should be rewarded for improving the collective accuracy of the group."

Scientific collaborations often resemble the "niche expert" system, the team explains. Researchers gain recognition for rare expertise that fills a gap in a larger project. On the other hand, markets, prediction platforms, and even stock trading more closely resemble the reformer model: profits come not from being closest to the truth but from moving collective beliefs in the right direction.

"Hopefully," says Plotkin, "this kind of research will help guide non-market institutions to set up incentive schemes that engender good collective outcomes, even for problems that are too difficult for any one person to solve alone."

Joshua B. Plotkin is the Walter H. and Leonore C. Annenberg Professor of the Natural Sciences in the Department of Biology in the School of Arts & Sciences at the University of Pennsylvania.

Other authors include Guocheng Wang of Penn and Peking University; Qi Su of Shanghai Jiao Tong University; and Long Wang of Peking University.

This research received support from the U.S. Army Research Office (award W911NF2410393) and the U.S. Office of Naval Research (award N000142412778).
