AI researchers trust international, scientific organizations most

Researchers working in the areas of machine learning and artificial intelligence trust international and scientific organizations the most to shape the development and use of AI in the public interest.

But who do they trust the least? National militaries, Chinese tech companies and Facebook.

Those are some of the results of a new study led by Baobao Zhang, a Klarman postdoctoral fellow in the College of Arts and Sciences. The paper, "Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers," was published Aug. 2 in the Journal of Artificial Intelligence Research.

"Both tech companies and governments emphasize that they want to build 'trustworthy AI,'" Zhang said. "But the challenge of building AI that can be trusted is directly linked to the trust that people place in the institutions that develop and manage AI systems."

AI is nearly ubiquitous, used in everything from recommending social media content to informing hiring decisions and diagnosing diseases. Although AI and machine learning (ML) researchers are well placed to highlight new risks and develop technical solutions, Zhang said, little is known about this influential group's attitudes toward governance and ethics issues.
