Pioneers in Ethical Artificial Intelligence

Artificial Intelligence's Impact on Society

Looking back, historians may point to 2023 as the year humans collectively began to take notice of large-scale artificial intelligence's (AI's) potential impact on society. Researchers at Beth Israel Deaconess Medical Center (BIDMC) and colleagues across the country have been thinking about it for more than two decades.

In collaboration with the American Medical Informatics Association (AMIA), BIDMC's Yuri Quintana, PhD, chief of the Division of Clinical Informatics, and Steven E. Labkoff, MD, a collaborating scientist in the same division, recently convened a two-day hybrid symposium to explore the pressing issues surrounding AI in the healthcare setting.

"We are absolutely overwhelmed by digital healthcare data, and we don't know what we can trust, but these are the problems that our field was made to solve," said AMIA President and Board Chair Gretchen Purcell Jackson, MD, PhD, in her welcoming remarks. "The multidisciplinary field of health informatics and AMIA has been waiting for this for possibly three decades, so I'm excited to talk about the frameworks and the potential regulations we need around AI in healthcare. I add my thanks to all of you for spending your time on these important problems and allowing AMIA to take the lessons that we learn and translate those into action."

Symposium panelists and attendees—including patients, physicians, scholars, thought leaders, and industry representatives—focused on improving trust and transparency in AI in three areas:

  • Real-world evidence: AI uses large amounts of data to provide meaningful biomedical insights that may form the basis of policy recommendations in healthcare. But existing data sources may already carry baked-in biases, such as a lack of data from historically underrepresented populations. Participants explored ways of ensuring data sets are unbiased or of correcting for pre-existing bias.
  • AI and decision support: Medical devices undergo rigorous trials before being used in the clinical setting. What kind of testing and validation must AI systems undergo before they can be deemed safe for clinical care?
  • AI and consumer health: Similarly, what safeguards should be put in place for patients who choose to use AI? How should patient privacy be protected? Who accepts responsibility if a patient is harmed by information that came from AI?

"AI has been around for decades, but if we look back at other emerging technologies that have had a great impact—aviation, social media—people didn't pay attention enough to safety," Quintana said in his opening remarks. "AI tends to not always be able to explain how it arrived at its answers, so it can be very difficult for people to trust it. In a world where there's a lot of misinformation, we need to think about some of these issues."

Hosted at Harvard University, "Blueprints for Trust: Best Practices and Regulatory Pathways for Ethical AI in Healthcare" attracted more than 50 in-person attendees and more than 100 virtual participants.

Keynote addresses, white papers, and related webinar content about ethical AI in healthcare may be viewed online. Key panelists will present a report from "Blueprints for Trust" at the AMIA 2023 Annual Symposium on November 15. The group also plans to reconvene for a follow-up symposium in spring 2024.

Overall, Quintana suggested that keeping patients' privacy and wellbeing at the center of AI-based initiatives must remain a priority. He also stressed the importance of experts in the field communicating the scope of the opportunities and challenges of AI in healthcare to government leaders, rather than leaving its growth to for-profit industries that may choose the bottom line over what's best for people and society.

"We need to discuss what kind of society we want," he said. "Technology can accelerate us down a good path or a bad path. The question isn't always how to engineer better software but how to ensure humans behave more ethically."
