NIST Proposes Approach for Reducing Risk of Bias in Artificial Intelligence

In an effort to counter the often pernicious effect of biases in artificial intelligence (AI) that can damage people’s lives and public trust in AI, the National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing these biases – and is requesting the public’s help in improving it.

NIST outlines the approach in A Proposal for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), a new publication that forms part of the agency’s broader effort to support the development of trustworthy and responsible AI. NIST is accepting comments on the document until Aug. 5, 2021, and the authors will use the public’s responses to help shape the agenda of several collaborative virtual events NIST will hold in the coming months. This series of events is intended to engage the stakeholder community and give them the opportunity to provide feedback and recommendations for mitigating the risk of bias in AI.

“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear,” said NIST’s Reva Schwartz, one of the report’s authors. “We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause.”

NIST contributes to the research, standards, and data required to realize the full promise of artificial intelligence (AI) as an enabler of American innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. A key building block of trustworthiness, and one that remains insufficiently defined, is freedom from bias in AI-based products and systems. That bias can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move us closer to agreement on understanding and measuring bias in AI systems.

AI has become a transformative technology as it can often make sense of information more quickly and consistently than humans can. AI now plays a role in everything from disease diagnosis to the digital assistants on our smartphones. But as AI’s applications have grown, so has our realization that its results can be thrown off by biases in the data it is fed – data that captures the real world incompletely or inaccurately.

Moreover, some AI systems are built to model complex concepts, such as “criminality” or “employment suitability,” that cannot be directly measured or captured by data in the first place. These systems use other factors, such as area of residence or education level, as proxies for the concepts they attempt to model. The imprecise correlation of the proxy data with the original concept can contribute to harmful or discriminatory AI outcomes, such as wrongful arrests, or qualified applicants being erroneously rejected for jobs or loans.
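The proxy problem described above can be illustrated with a minimal sketch. The scenario, data, and names below are hypothetical and not drawn from the NIST report: a scoring model stands in "neighborhood average" for an applicant's actual ability, so two equally qualified applicants can receive different outcomes based only on where they live.

```python
# Hypothetical illustration of proxy bias: a model cannot observe
# "ability" directly, so it leans heavily on neighborhood statistics
# as a proxy. Because the proxy is only loosely correlated with the
# underlying concept, equally qualified applicants diverge.

def proxy_score(applicant):
    # The model weights the proxy (neighborhood average) far more
    # heavily than the signal it can partially observe.
    return 0.3 * applicant["ability"] + 0.7 * applicant["neighborhood_avg"]

applicants = [
    {"name": "A", "ability": 0.9, "neighborhood_avg": 0.8},
    {"name": "B", "ability": 0.9, "neighborhood_avg": 0.3},  # equally able
]

THRESHOLD = 0.6  # arbitrary approval cutoff for this sketch

for a in applicants:
    score = proxy_score(a)
    decision = "approved" if score >= THRESHOLD else "rejected"
    print(a["name"], round(score, 2), decision)
# A scores 0.83 and is approved; B scores 0.48 and is rejected,
# despite identical ability -- the proxy, not the person, decided.
```

The point of the sketch is that the bias lives in the model's reliance on the proxy, not in any single data record, which is why it can go undetected until the system is audited against the concept it was meant to measure.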

The approach the authors propose for managing bias involves a conscientious effort to identify and manage bias at different points in an AI system’s lifecycle, from initial conception to design to release. The goal is to involve stakeholders from many groups both within and outside the technology sector, bringing in perspectives that traditionally have not been heard.

“We want to bring together the community of AI developers of course, but we also want to involve psychologists, sociologists, legal experts and people from marginalized communities,” said NIST’s Elham Tabassi, a member of the National AI Research Resource Task Force. “We would like perspective from people whom AI affects, both from those who create AI systems and also those who are not directly involved in its creation.”

The NIST authors’ preparatory research involved a literature survey that included peer-reviewed journals, books and popular news media, as well as industry reports and presentations. It revealed that bias can creep into AI systems at all stages of their development, often in ways that differ depending on the purpose of the AI and the social context in which people use it.

“An AI tool is often developed for one purpose, but then it gets used in other very different contexts,” Schwartz said. “Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended. All these factors can allow bias to go undetected.”

Because the team members recognize that they do not have all the answers, Schwartz said that it was important to get public feedback – especially from people outside the developer community who do not ordinarily participate in technical discussions.


“We know that bias is prevalent throughout the AI lifecycle,” Schwartz said. “Not knowing where your model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital next step.”

Comments on the proposed approach can be submitted by Aug. 5, 2021, by downloading and completing the template form.
