Survey: Ethical Gaps Hinder AI's Pediatric Surgery Role

Zhejiang University

Artificial intelligence (AI) is rapidly advancing across modern healthcare, yet its role in pediatric surgery remains limited and ethically complex. This study reveals that although surgeons recognize AI's potential to enhance diagnostic precision, streamline planning, and support clinical decision-making, its practical use is still rare and mostly academic. Pediatric surgeons expressed strong concerns about accountability in the event of AI-related harm, the difficulty of obtaining informed consent for children, the risk of data privacy breaches, and the possibility of algorithmic bias. By examining pediatric surgeons' experiences and perceptions, this study highlights the critical barriers that must be addressed before AI can be safely and responsibly integrated into pediatric surgical care.

Worldwide, AI is reshaping how medical data are interpreted, how risks are predicted, and how complex decisions are supported. Yet pediatric surgery faces unique ethical challenges due to children's limited autonomy, the need for parental decision-making, and the heightened sensitivity of surgical risks. In low-resource settings, concerns about infrastructure, data representativeness, and regulatory preparedness further complicate adoption. Pediatric surgeons must balance innovation with the obligation to protect vulnerable patients and maintain trust. These pressures intensify debates around transparency, fairness, and responsibility in the use of AI tools. Such challenges underscore the need for deeper research to guide the ethical and practical integration of AI in pediatric surgical care.

A national team of pediatric surgeons led from the Federal Medical Centre in Umuahia, Nigeria, has released the first comprehensive survey examining how clinicians perceive the ethical and practical implications of integrating AI into pediatric surgical care. Published (DOI: 10.1136/wjps-2025-001089) on 20 October 2025 in the World Journal of Pediatric Surgery (WJPS), the study gathered responses from surgeons across all six of the country's geopolitical zones to assess levels of AI awareness, patterns of use, and key ethical concerns. The findings reveal a profession cautiously weighing AI's potential benefits against unresolved questions regarding accountability, informed consent, data privacy, and regulatory readiness.

The study analyzed responses from 88 pediatric surgeons, most of whom were experienced consultants actively practicing across diverse clinical settings. Despite global momentum in AI-enabled surgical innovation, only one-third of respondents had ever used AI, and their use was largely restricted to tasks such as literature searches and documentation rather than clinical applications. Very few reported using AI for diagnostic support, imaging interpretation, or surgical simulation, highlighting a substantial gap between emerging technological capabilities and everyday pediatric surgical practice.

Ethical concerns were nearly universal. Surgeons identified accountability for AI-related errors, the complexity of securing informed consent from parents or guardians, and the vulnerability of patient data as major sources of hesitation. Concerns also extended to algorithmic bias, reduced human oversight, and unclear legal responsibilities in the event of harm. Opinions on transparency with families were divided. While many supported informing parents about AI involvement, others felt disclosure was unnecessary when AI did not directly influence clinical decisions.

Most respondents expressed low confidence in existing legal frameworks governing AI use in healthcare. Many called for stronger regulatory leadership, clearer guidelines, and standardized training to prepare pediatric surgeons for future AI integration. Collectively, the findings underscore an urgent need for structured governance and capacity building.

"The results show that pediatric surgeons are not opposed to AI—they simply want to ensure it is safe, fair, and well regulated," the research team explained. "Ethical challenges such as accountability, informed consent, and data protection must be addressed before clinicians can confidently rely on AI in settings involving vulnerable children. Clear national guidelines, practical training programs, and transparent standards are essential to ensure that AI becomes a supportive tool rather than a source of uncertainty in pediatric surgical care."

The study underscores the need for pediatric-specific ethical frameworks, clearer consent procedures, and well-defined accountability mechanisms for AI-assisted care. Strengthening data governance, improving digital infrastructure, and expanding AI literacy among clinicians and families will be essential for building trust. As AI continues to enter surgical practice, these measures offer a practical roadmap for integrating innovation while safeguarding child safety and public confidence.
