To counter the harmful effects of bias in artificial intelligence (AI), which can damage people’s lives and erode public trust in AI, the National Institute of Standards and Technology (NIST) is advancing an approach for identifying and managing these biases. NIST outlines the approach in a publication titled “A Proposal for Identifying and Managing Bias in Artificial Intelligence”.
The publication forms part of the agency’s broader effort to support the development of trustworthy and responsible AI. NIST is accepting comments on the document until Aug. 5, 2021, and the authors will use the public’s responses to help shape the agenda of several collaborative virtual events NIST will hold in the coming months. This series of events is intended to engage the stakeholder community and allow stakeholders to provide feedback and recommendations for mitigating the risk of bias in AI.
Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear. NIST wants to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause.
AI has become a transformative technology because it can often make sense of information more quickly and consistently than humans can. AI now plays a role in everything from disease diagnosis to the digital assistants on our smartphones. But as AI’s applications have grown, researchers have realised that its results can be thrown off by biases in data that capture the real world incompletely or inaccurately.
Moreover, some AI systems are built to model complex concepts, such as criminality or employment suitability, that cannot be directly measured or captured by data in the first place. These systems use other factors, such as area of residence or education level, as proxies for the concepts they attempt to model. The imprecise correlation of the proxy data with the original concept can contribute to harmful or discriminatory AI outcomes such as wrongful arrests, or qualified applicants being erroneously rejected for jobs or loans.
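The proxy mechanism described above can be made concrete with a small sketch. The scenario, names, and numbers below are hypothetical (they do not come from NIST's proposal): a decision rule cannot observe an applicant's true suitability, so it scores a proxy (an area score) that happens to correlate with group membership, and the disparity flows straight into the outcomes.

```python
import random

random.seed(0)

# Hypothetical illustration: "suitability" cannot be measured directly,
# so the decision rule scores applicants by a proxy, their residential
# area. Area correlates with group membership, so the proxy imports
# that correlation into the decision.

def make_applicant(group):
    # Both groups have the same true suitability on average...
    ability = random.gauss(0.5, 0.1)
    # ...but historical segregation means group B lives mostly in
    # areas the proxy scores poorly.
    area_score = 0.7 if group == "A" else 0.3
    return {"group": group, "ability": ability, "area_score": area_score}

applicants = ([make_applicant("A") for _ in range(1000)]
              + [make_applicant("B") for _ in range(1000)])

# The decision uses only the proxy, never the (unobservable) ability.
def approve(app):
    return app["area_score"] > 0.5

def rate(group):
    return sum(approve(a) for a in applicants if a["group"] == group) / 1000

print(f"approval rate, group A: {rate('A'):.2f}")  # 1.00
print(f"approval rate, group B: {rate('B'):.2f}")  # 0.00
```

Even though the two groups are identical on the concept the system is meant to model, the imprecise proxy produces starkly different approval rates, which is exactly the kind of harmful outcome the paragraph above describes.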
The approach for managing bias involves a conscientious effort to identify and manage bias at different points in an AI system’s lifecycle, from initial conception to design to release. The goal is to involve stakeholders from many groups both within and outside of the technology sector, bringing in perspectives that traditionally have not been heard.
An AI tool is often developed for one purpose but then used in other, very different contexts. Many AI applications also have been insufficiently tested, or not tested at all, in the context for which they are intended. All these factors can allow bias to go undetected. The researchers are inviting public feedback because they want perspectives from everyone AI affects: both those who create AI systems and those who play no direct part in their creation.
NIST contributes to the research, standards, and data required to realise the full promise of AI as an enabler of American innovation across industry and economic sectors. Working with the AI community, NIST seeks to identify the technical requirements needed to cultivate trust that AI systems are accurate and reliable, safe and secure, explainable, and free from bias. Bias in AI-based products and systems can be purposeful or inadvertent. By hosting discussions and conducting research, NIST is helping to move us closer to agreement on how to understand and measure bias in AI systems.
The problem of AI bias is especially prevalent in housing, where, as reported by OpenGov Asia, AI has helped perpetuate segregation and discrimination. In one case, U.S. programmers tweaked an AI-enabled housing app to eliminate bias against minority groups: worried that the AI would promote bias, the app’s creators modified it so that tenants could search for apartments using their voucher number alone, without providing any other identifying information.
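The design choice behind that tweak can be sketched in a few lines. Everything here is hypothetical (the article does not describe the app's internals): the search function accepts only a voucher number, so the matching logic never sees a name, address history, or any other attribute that could serve as a proxy for protected characteristics.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    listing_id: str
    max_voucher_tier: int  # highest (hypothetical) voucher tier accepted

# Hypothetical inventory and voucher registry; only a tier is stored
# against each voucher number, nothing identifying about the tenant.
LISTINGS = [
    Listing("apt-101", max_voucher_tier=2),
    Listing("apt-202", max_voucher_tier=1),
]
VOUCHER_TIERS = {"V-9001": 2, "V-9002": 1}

def search(voucher_number: str) -> list[str]:
    # The only input is the voucher number: matching cannot condition
    # on who the tenant is, only on what the voucher qualifies for.
    tier = VOUCHER_TIERS[voucher_number]
    return [l.listing_id for l in LISTINGS if l.max_voucher_tier >= tier]

print(search("V-9002"))  # ['apt-101', 'apt-202']
```

The point of the design is narrow inputs: by stripping every field except the voucher number from the request, the system removes the proxy data through which bias could otherwise enter.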