NIST Requests Information to Help Develop an AI Risk Management Framework

A recent NIST publication proposed a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs – a music selection algorithm and an AI that assists with cancer diagnosis – may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task.
Credit: N. Hanacek/NIST

As a key step in its effort to manage the risks posed by artificial intelligence (AI), the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) is requesting input from the public that will inform the development of AI risk management guidance.

Responses to the Request for Information (RFI), which appears today in the Federal Register, will help NIST draft an Artificial Intelligence Risk Management Framework (AI RMF), a voluntary guidance document intended to help technology developers, users and evaluators improve the trustworthiness of AI systems. The draft AI RMF responds to a congressional directive for NIST to develop the framework, and it also forms part of NIST’s response to the Executive Order on Maintaining American Leadership in AI.

The AI RMF could make a critical difference in whether new AI technologies are competitive in the marketplace, according to Deputy Commerce Secretary Don Graves.

“Each day it becomes more apparent that artificial intelligence brings us a wide range of innovations and new capabilities that can advance our economy, security and quality of life. It is critical that we are mindful and equipped to manage the risks that AI technologies introduce along with their benefits,” Graves said. “This AI Risk Management Framework will help designers, developers and users of AI take all of these factors into account – and thereby improve U.S. capabilities in a very competitive global AI market.”

AI has the potential to benefit nearly all aspects of society, but the development and use of new AI-based technologies, products and services bring technical and societal challenges and risks. NIST is soliciting input to understand how organizations and individuals involved with developing and using AI systems might address the full scope of AI risk, and how a framework for managing these risks might be constructed. The RFI asks about specific topics, including:

  • The greatest challenges in improving management of AI-related risks;
  • How organizations currently define and manage characteristics of AI trustworthiness; and
  • The extent to which AI risks are incorporated into organizations’ overarching risk management, such as the management of risks related to cybersecurity, privacy and safety.

“The AI Risk Management Framework will meet a major need in advancing trustworthy approaches to AI to serve all people in responsible, equitable and beneficial ways,” said Lynne Parker, director of the National AI Initiative Office in the White House Office of Science and Technology Policy. “AI researchers and developers need and want to consider risks before, during and after the development of AI technologies, and this framework will inform and guide their efforts.”

“For AI to reach its full potential as a benefit to society, it must be a trustworthy technology,” said NIST’s Elham Tabassi, federal AI standards coordinator and a member of the National AI Research Resource Task Force. “While it may be impossible to eliminate the risks inherent in AI, we are developing this guidance framework through a consensus-driven, collaborative process that we hope will encourage its wide adoption, thereby minimizing these risks.”

Responses to the RFI are due on Aug. 19, 2021. NIST also plans to hold a workshop in September where attendees can help develop the outline for the draft AI RMF. Information on the workshop will be available on the NIST website when details are finalized. After releasing the initial draft AI RMF, NIST will continue to develop it over several iterations, including additional opportunities for public feedback.

To submit responses to the RFI, download the template response form.
