Federal agencies and officials utilising artificial intelligence systems need to vigilantly monitor and control for the systemic and racial biases embedded in machine learning technology, according to a new report from the National Institute of Standards and Technology (NIST). The recommendation comes from an extensive report on how organisations and enterprises, both private and public, can cultivate greater trust in artificial intelligence.
The main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them but also in the societal context in which AI systems are used.
Context is everything. AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.
– Reva Schwartz, Principal Investigator for AI Bias
Bias in AI can harm humans. AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group. The revised NIST publication acknowledges that while these computational and statistical sources of bias remain highly important, they do not represent the full picture.
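To illustrate the computational side of this problem, a minimal sketch of how one might screen a training set for underrepresented groups. The group labels and the 10% cutoff here are purely hypothetical; in practice the relevant attributes and thresholds depend on the application and its legal context.

```python
from collections import Counter

def representation_report(groups, threshold=0.10):
    """Compute each group's share of the dataset and flag groups
    whose share falls below `threshold` (an illustrative cutoff)."""
    counts = Counter(groups)
    total = sum(counts.values())
    shares = {g: n / total for g, n in counts.items()}
    flagged = [g for g, s in shares.items() if s < threshold]
    return shares, flagged

# Hypothetical demographic labels attached to 100 training records
labels = ["A"] * 90 + ["B"] * 8 + ["C"] * 2
shares, flagged = representation_report(labels)
print(shares)   # {'A': 0.9, 'B': 0.08, 'C': 0.02}
print(flagged)  # ['B', 'C']
```

A check like this only surfaces one narrow, statistical symptom; as the NIST authors stress, it says nothing about the human and systemic biases discussed below.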
A more complete understanding of bias must take into account human and systemic biases, which figure significantly in the new version. Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person’s neighbourhood of residence influencing how likely authorities are to consider the person a crime suspect. When human, systemic and computational biases combine, they can form a pernicious mixture — especially when explicit guidance is lacking for addressing the risks associated with using AI systems.
To address these issues, the NIST authors make the case for a socio-technical approach to mitigating bias in AI. This approach recognises that AI operates in a larger social context, and that purely technical efforts to solve the problem of bias will fall short.
Organisations often default to overly technical solutions for AI bias issues. But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates.
As reported by OpenGov Asia, a related NIST publication titled “Engineering Trustworthy Secure Systems” addresses the engineering-driven perspective and actions necessary to develop more defensible and survivable systems, inclusive of the machine, physical, and human components that compose those systems and the capabilities and services delivered by those systems.
The need for trustworthy secure systems stems from a diverse set of stakeholder needs, driven by mission, business, and other objectives and concerns, and from the adverse effects that follow when those needs are not met. These systems have grown in geographic size and in the number and types of components and technologies that compose them; their behaviours and outcomes have become more complex and dynamic; and our increased dependence on them means that adversity within the global operating environment can bring consequences ranging from major inconvenience to catastrophic loss.