Compared with conventional software, AI poses distinct risks. AI systems are trained on data that can change over time, sometimes dramatically and unexpectedly, affecting the systems in ways that are hard to foresee.
These systems are also “socio-technical,” meaning they are shaped by social dynamics and human behaviour. The complex interplay of these technical and societal factors can produce AI risks that affect people’s lives in settings ranging from interactions with online chatbots to the outcomes of job and loan applications.
As a result, the National Institute of Standards and Technology (NIST) of the United States Department of Commerce has issued its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organisations designing, developing, deploying, or using AI services to help manage the many risks of AI technologies. The AI RMF was developed in close partnership with the private and public sectors in response to a directive from Congress.
“This voluntary framework will assist in developing and deploying AI technology in ways that support the United States, other nations, and organisations to improve AI trustworthiness while limiting risks following our democratic ideals,” said Deputy Commerce Secretary Don Graves. “It should stimulate AI innovation and growth while enhancing — rather than suppressing or undermining — civil freedoms, civil rights, and equity for everyone.”
The AI RMF establishes a flexible, structured, and measurable process for organisations to address AI risks. Following this process can help organisations maximise the benefits of AI technologies while reducing the likelihood of negative impacts on individuals, groups, communities, organisations, and society.
It is designed to adapt to the AI landscape as technologies evolve, and to be used by organisations of varying sizes and capabilities, so that society can benefit from AI while being guarded against its potential harms.
The framework encourages organisations to approach AI risk with a fresh perspective, including how to think about, discuss, assess, and monitor AI risks and their potential positive and negative impacts.
Under Secretary for Standards and Technology and NIST Director Laurie E. Locascio underlined that the framework is part of NIST’s broader effort to foster trust in AI technologies, which is necessary if the technology is to be widely embraced by society.
“The AI Risk Management Framework may assist enterprises and other organisations of any size and sector in launching or improving their AI risk management methods,” Locascio added. “It offers a new way to incorporate responsible practices and actionable recommendations to operationalise trustworthy and responsible AI. We anticipate that the AI RMF will aid in the development of best practices and standards.”
The AI RMF is split into two sections. The first section addresses how organisations should frame AI risks and describes the characteristics of trustworthy AI systems. The second section, the core, describes four specific functions — govern, map, measure, and manage — to help organisations address the risks of AI systems in practice. These functions can be applied in varied contexts and at any point in the AI life cycle.
NIST has been developing the AI RMF for the past 18 months in collaboration with the private and public sectors. The document incorporates roughly 400 sets of formal comments that NIST received from more than 240 organisations on draft versions of the framework. NIST today also released statements from some organisations that have already committed to using or promoting the framework.
The agency also released a voluntary AI RMF Playbook today as a companion guide for navigating and applying the framework. NIST intends to work with the AI community to update the framework regularly and welcomes suggestions and changes to the playbook at any time. In addition, NIST plans to launch a Trustworthy and Responsible AI Resource Centre to help organisations put AI RMF 1.0 into practice.