Research: AI Aids Brainstorming, Needs Humans for Decisions

Institute for Operations Research and the Management Sciences

CATONSVILLE, Md., Nov. 11, 2025 – A new peer-reviewed study in the INFORMS journal Decision Analysis finds that while generative AI (GenAI) can help define viable objectives for organizational and policy decision-making, the overall quality of those objectives falls short unless humans intervene.

In the field of decision analysis, defining objectives is a foundational step. Before you can evaluate options, allocate resources or design policies, you need to identify what you're trying to achieve.

The research underscores that AI tools are valuable brainstorming partners, but sound decision analysis still requires a "human in the loop."

The study, "ChatGPT vs. Experts: Can GenAI Develop High-Quality Organizational and Policy Objectives?" was authored by Jay Simon of American University and Johannes Ulrich Siebert of Management Center Innsbruck.

The researchers compared objectives generated by GenAI tools—including GPT-4o, Claude 3.7, Gemini 2.5 and Grok-2—to objectives created by professional decision analysts in six previously published Decision Analysis studies. Each GenAI-generated set was rated across nine key criteria from value-focused thinking (VFT), such as completeness, decomposability and redundancy.

They found that while GenAI frequently produced individually reasonable objectives, the sets as a whole were incomplete, redundant and often included "means objectives" despite explicit instructions to avoid them. "In short, AI can list what might matter, but it cannot yet distinguish what truly matters," the authors wrote.

"Both lists are better than most individuals could create. However, neither list should be used for a quality decision analysis, as you should only include the fundamental objectives in explicitly evaluating alternatives," said Ralph Keeney, a pioneer of value-focused thinking, in response to two AI-produced lists of objectives in the study.

To improve GenAI output, the researchers tested several prompting strategies, including chain-of-thought reasoning and expert critique-and-revise methods. When both techniques were combined, the AI's results significantly improved—producing smaller, more focused and more logically structured sets of objectives.
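The combined strategy can be pictured as a simple prompting pipeline: brainstorm with step-by-step reasoning, then have the model critique and revise its own list. The sketch below is illustrative only, not the authors' actual prompts or code; `ask()` is a stand-in for any chat-model API call, and the prompt wording is assumed.

```python
def ask(prompt: str) -> str:
    """Placeholder for a GenAI chat call (swap in a real client, e.g. an
    OpenAI or Anthropic SDK). Returns canned text so the sketch runs."""
    return f"[model response to: {prompt[:40]}...]"

def generate_objectives(decision_context: str) -> str:
    # Step 1: chain-of-thought -- ask the model to reason step by step
    # before listing candidate fundamental objectives.
    draft = ask(
        "Think step by step about what stakeholders fundamentally value "
        "in this decision, then list candidate objectives.\n"
        f"Context: {decision_context}"
    )
    # Step 2: critique-and-revise -- have the model critique its own list
    # against value-focused-thinking criteria (completeness, redundancy,
    # means vs. fundamental objectives)...
    critique = ask(
        "Acting as an expert decision analyst, critique this list: flag "
        "redundant items and means objectives that should be removed.\n"
        f"List: {draft}"
    )
    # ...then revise the draft in light of that critique.
    revised = ask(
        "Revise the list to address the critique.\n"
        f"List: {draft}\nCritique: {critique}"
    )
    return revised

result = generate_objectives("a city allocating its transportation budget")
print(result)
```

In practice, the study's finding that a human analyst must still validate the final set applies here too: the revised list is a starting point for expert review, not a finished objectives hierarchy.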

"Generative AI performs well on several criteria," said Simon. "But it still struggles with producing coherent and nonredundant sets of objectives. Human decision analysts are essential to refine and validate what the AI produces."

Siebert added, "Our findings make clear that GenAI should augment, not replace, expert judgment. When humans and AI work together, they can leverage each other's strengths for better decision making."

The study concludes with a four-step hybrid model for decision-makers that integrates GenAI brainstorming with expert refinement to ensure the objectives used in analysis are essential, decomposable and complete.

Read the study here.

About INFORMS and Decision Analysis

INFORMS is the world's largest association for professionals and students in operations research, AI, analytics, data science, and related disciplines. It serves as a global authority advancing cutting-edge practices and fostering an interdisciplinary community of innovation.

Decision Analysis, a leading journal published by INFORMS, features research on modeling and supporting decision-making under uncertainty. INFORMS empowers its community to improve organizational performance and drive data-driven decision making through its journals, conferences and resources.

Learn more at www.informs.org or @informs.
