University of Oklahoma Joins National AI Safety Consortium

University of Oklahoma

NORMAN, OKLA. - The University of Oklahoma has joined the newly formed U.S. Artificial Intelligence Safety Institute Consortium (AISIC), led by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST). The consortium aims to bring together the largest group of AI developers, users, researchers and affected communities in the world to promote the creation of safe and trustworthy artificial intelligence.

"OU is a national leader in trustworthy AI for weather research and is at the forefront of AI/ML research in many fields, coordinated by our Data Institute for Societal Challenges. We're excited to apply OU's expertise to support the goals of this national consortium," said OU Vice President for Research and Partnerships Tomás Díaz de la Rubia.

OU's role in the consortium involves its Data Institute for Societal Challenges (DISC), led by director David Ebert, and the OU-led NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES), directed by Amy McGovern, the Lloyd G. and Joyce Austin Presidential Professor in the Gallogly College of Engineering and professor in the School of Meteorology.

DISC will collaborate with fellow consortium members to address AI's complex challenges and help ensure positive outcomes nationally and globally. It will also help shape guidelines that advance industry standards for AI development and deployment.

"At DISC, we understand the critical importance of ensuring the safety and trustworthiness of artificial intelligence as it increasingly shapes our world. Our experience in addressing real-world challenges through data-driven solutions uniquely positions the University of Oklahoma to be a leader in the U.S. Artificial Intelligence Safety Institute Consortium," said Ebert, who is an associate vice president for research and partnerships and the Gallogly Chair in the School of Electrical and Computer Engineering.

"By joining forces with NIST and fellow consortium members, we are committed to advancing AI trustworthiness, fairness and safety measures that align with societal norms and values, ultimately empowering our communities and fostering a future where AI technologies drive positive societal impact," he added.

AI2ES focuses on creating trustworthy AI for a variety of high-impact weather phenomena and on developing a modern workforce that can harness AI and machine learning for the benefit and safety of society.

"AI2ES is delighted to be working with the consortium as part of our core focus on understanding the nature of trust in artificial intelligence," McGovern said. "As AI is growing rapidly, it is clear that we need to develop ways to ensure that it is deployed in an ethical and responsible manner."

Established to support the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" executive order of Oct. 30, 2023, AISIC will spur innovation and advance trustworthy and responsible AI. Consortium participants such as DISC and AI2ES will provide expertise in 20 areas, including human-AI teaming and interaction, AI governance, AI system design and development, responsible AI and more.

Learn more about the Artificial Intelligence Safety Institute Consortium and see the complete list of consortium participants from NIST.
