NIST Releases Artificial Intelligence Risk Management Framework | Practical Law

The National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework, providing guidance for managing risks associated with artificial intelligence (AI) while fostering innovation and promoting trustworthiness.

Practical Law Legal Update w-038-3241

by Practical Law Data Privacy & Cybersecurity
Published on 27 Jan 2023 | USA (National/Federal)
On January 26, 2023, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (NIST AI Framework), developed in response to Congress's directive in the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283). The voluntary framework aims to provide adaptable and practical guidelines designed to encourage artificial intelligence (AI) technology development and adoption, while mitigating the associated risks.
The NIST AI Framework is not restricted to any particular sector or use case. Specifically, the framework:
  • Outlines recommended guidelines for assessing, documenting, and managing AI risks and potential impacts.
  • Discusses the various stakeholders any AI risk management program should consider across different AI lifecycle stages.
  • Highlights trustworthy AI system characteristics, including:
    • validity and reliability;
    • safety;
    • security and resiliency;
    • accountability and transparency;
    • explainability and interpretability, meaning clarity about how and why an AI system made a decision;
    • privacy enhancement, with norms such as anonymity and confidentiality guiding AI system design, development, and deployment; and
    • fairness, with harmful bias managed.
  • Defines the four core functions an AI risk management program should perform:
    • govern, establishing a risk management culture across the organization;
    • map, enhancing teams' visibility into activities throughout the AI lifecycle and recognizing the complex contexts involved in developing and applying new technology;
    • measure, assessing, analyzing, and tracking identified risks; and
    • manage, allocating resources to mapped and measured risks as defined by the organization's AI governance.
The NIST AI Framework breaks each function into categories and subcategories that identify specific actions an organization's AI risk management program should take. NIST also issued a companion Artificial Intelligence Risk Management Framework Playbook that suggests ways for organizations to navigate and operationalize the NIST AI Framework.
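To make the function-category-subcategory structure concrete, an organization might track identified AI risks in a simple register keyed to the four core functions. The following Python sketch is purely illustrative and not part of the NIST framework; the entry fields, category labels, and severity scale are all hypothetical:

```python
from dataclasses import dataclass

# The NIST AI Framework's four core functions.
FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    """One tracked AI risk, tagged with the core function addressing it."""
    function: str     # one of FUNCTIONS
    category: str     # illustrative label; real programs would use RMF categories
    description: str
    severity: int     # organization-defined scale, e.g. 1 (low) to 5 (high)

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"unknown function: {self.function}")

def top_risks(register, threshold=4):
    """Return entries at or above the severity threshold, highest first."""
    return sorted(
        (r for r in register if r.severity >= threshold),
        key=lambda r: r.severity,
        reverse=True,
    )

# Hypothetical register entries for a chatbot deployment.
register = [
    RiskEntry("Map", "Context", "Chatbot deployed in a regulated domain", 5),
    RiskEntry("Measure", "Tracking", "No drift monitoring on model outputs", 4),
    RiskEntry("Govern", "Culture", "Risk ownership assigned to AI lead", 2),
]

for entry in top_risks(register):
    print(entry.function, "-", entry.description)
```

A register like this is one lightweight way to operationalize the framework's mapping and measurement guidance, though the NIST AI Framework itself does not prescribe any particular tooling.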
NIST will update the framework periodically following comments from the AI community. NIST is accepting feedback until February 27, 2023. The first NIST AI Framework updates are planned for spring 2023.