New Framework Boosts Human-AI Team Collaboration

Carnegie Mellon University

As artificial intelligence (AI) becomes embedded in critical decisions about health, safety, finance, and governance, a key challenge is no longer whether people and AI will collaborate, but how to structure that collaboration to achieve true complementarity. In a new paper, "Toward a Science of Human–AI Teaming for Decision Making: A Complementarity Framework," researchers present a framework for understanding and designing human–AI teams for decision making. Drawing on collective intelligence research, the framework focuses on reasoning, memory, and attention as core processes that can be distributed across people and AI systems. Published in PNAS Nexus, the work offers guidance for researchers, practitioners, and policymakers seeking to build human–AI teams that are effective, accountable, and aligned with human values.

A multidisciplinary team from Carnegie Mellon University (CMU), Massachusetts Institute of Technology, the University of Illinois at Urbana-Champaign, Microsoft Research, Harvard University, and the University of Tennessee at Knoxville co-authored the paper.

"Organizations often frame the issue as humans versus AI," says Anita Williams Woolley, Professor of Organizational Behavior at Carnegie Mellon University's Tepper School of Business and a coauthor of the paper. "A better question is how to design teams so AI expands what people can notice, remember, and reason through, while people provide context, judgment, and accountability."

The framework articulates the sociotechnical conditions that determine whether human–AI teams actually achieve complementarity, meaning the team outperforms either humans alone or AI systems alone. These conditions include team composition, trust calibration, shared mental models, training, and task structure.

The paper also outlines design principles for achieving complementarity: defining goals and constraints, partitioning roles, orchestrating attention and interrogation, building knowledge infrastructures, and establishing continuous training and evaluation. The framework provides a common vocabulary for diagnosing where human–AI teams are likely to succeed, where they may fail, and how to improve them.

The authors also point out theoretical, practical, and policy implications of their work, emphasizing alignment with human values, accountability, and equity.

"AI is becoming deeply embedded in collective decision making, marking a profound transformation in how decisions are made across domains, from health care and emergency response to finance, transportation, and governance," explains Cleotilde Gonzalez, Professor of Cognitive Decision Science at CMU, and lead author on the paper.

"Realizing this potential requires deliberate design, rigorous evaluation, and principled governance. Our insights offer a roadmap for building human-AI teams that are not only high-performing and adaptive, but also transparent, trustworthy, and fundamentally human-centered."
