Human Moral Agency Remains Essential in the AI Era

University of Pennsylvania School of Nursing

PHILADELPHIA (March 10, 2026) – As artificial intelligence (AI) rapidly integrates into clinical settings—from predicting patient outcomes to deploying humanoid "robotic nurses"—an article published in the Hastings Center Report warns that the core of nursing, its moral agency, must remain a human-driven responsibility.

The article, "What Does Moral Agency Mean for Nurses in the Era of Artificial Intelligence?," explores the growing tension between advanced algorithmic capabilities and the ethical obligations of the world's most trusted profession.

While AI systems can now simulate empathy and generate context-aware responses, Penn Nursing's Connie M. Ulrich, PhD, RN, FAAN, the Lillian S. Brunner Chair in Medical and Surgical Nursing, Professor of Nursing, and Professor of Medical Ethics and Health Policy, and her co-authors argue that AI lacks sentience, intentionality, and accountability. The authors define a moral agent as a person capable of discerning right from wrong and being held accountable for their actions.

Key findings from the article include:

  • AI as "Moral Zombies": The authors note that algorithms lack the sentience required for true moral accountability.
  • The Relational Value: Nursing is characterized by a "therapeutic presence" and an intuitive exchange of shared humanity that algorithms cannot replicate, particularly in sensitive areas like end-of-life care.
  • A Growing Industry: The global robotic nurse industry is projected to reach over $2.7 billion by 2031, underscoring the urgency of establishing ethical guardrails.

Recommendations for Health Systems

The article emphasizes that nurses must not be passive users of technology but active leaders in its design and implementation. To preserve the public's trust, the authors offer several critical recommendations:

  • Design Participation: Nurses must be part of AI design teams to ensure tools align with clinical values and patient preferences.
  • Transparency as Default: Facilities should explicitly disclose when AI is used to generate summaries or treatment suggestions, allowing patients and clinicians to understand the source of information.
  • Boundaries on AI Hiring: AI should never be used to determine the hiring of nurses, as algorithms cannot identify the human characteristics of empathy and critical reasoning.
  • Preserving Accountability: AI should be treated as a resource to support, rather than supplant, human moral deliberation.

"Patients come to health care settings to be heard, seen, and valued by skilled professionals, not to seek care from machines," the authors state. "While AI may simulate compassion... it cannot 'care' in the moral sense."

Co-authors from Penn Nursing include George Demiris, PhD; Patricia Brennan, PhD; Oonjee Oh, MSN; and Sang Bin You, MSN.
