White House Releases Mandatory AI Policy for Federal Agencies | Practical Law

On March 28, 2024, consistent with the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (AI), the White House's Office of Management and Budget (OMB) issued a memorandum directing federal agencies on the use, governance, and risk management of AI.

Practical Law Legal Update w-042-9000 (Approx. 5 pages)


by Practical Law Intellectual Property & Technology
Law stated as of 04 Apr 2024 • USA (National/Federal)
On March 28, 2024, the White House's Office of Management and Budget (OMB) issued a memorandum establishing requirements and guidelines for federal agencies regarding AI use, governance, innovation, and risk management. The memorandum follows President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (EO), which outlined new standards for AI safety and security and directed federal agencies to assess AI's potential risks and implement policies on advancing and using AI technology.
The memorandum focuses on federal agencies' risk management and transparency practices for using and developing AI.
To address federal agencies' internal oversight of AI, the memorandum:
  • Requires federal agencies to:
    • within 60 days of the memorandum, designate a Chief AI Officer (CAIO) and convene senior officials to govern AI issues;
    • within 180 days of the memorandum, and every two years until 2036, submit to the OMB and post publicly either an AI compliance plan or a determination that the agency does not use covered AI; and
    • monitor and report AI use cases at least annually.
  • Outlines the roles and responsibilities of the CAIO and requires that the CAIO possess the experience, expertise, seniority, and authority to oversee the agency's use of AI as required by the memorandum.
  • Requires internal coordination among agency officials responsible for aspects of AI adoption and risk management.
To advance responsible AI innovation and transparency, the memorandum:
  • Requires agencies to, within 365 days, develop and publicly release on the agency's website a strategy for identifying and removing barriers to the responsible use of AI and achieving improvements in AI maturity.
  • Recommends:
    • securing adequate information technology infrastructure to share, curate, and govern agency data for training, testing, and operating AI; updating cybersecurity practices; and assessing beneficial uses of generative AI;
    • prioritizing recruiting, hiring, developing, and retaining AI talent to increase enterprise capacity for innovation;
    • sharing and releasing AI code, models, and data assets collaboratively; and
    • implementing AI management requirements consistently across agencies to promote efficiencies and opportunities for sharing.
To address management of risks from AI use, the memorandum provides that, by December 1, 2024, federal agencies must implement minimum risk management practices and other safeguards whenever their use of AI impacts the public's rights or safety, or else terminate use of the noncompliant AI. Agencies must:
  • Complete an AI impact assessment documenting, at minimum, the intended purpose of the AI and its expected benefit, the potential risks of using AI, and the quality and appropriateness of the data.
  • Conduct testing to ensure the AI and its components perform as intended in their real-world context.
  • Independently review relevant AI documentation to ensure the AI system works appropriately and as intended and that its expected benefits outweigh its potential risks.
  • Provide public notice and documentation in plain language.
  • Ensure adequate training, assessment, and oversight for AI operators.
  • For AI impacting the public's rights:
    • identify and assess AI's impact on equity and fairness and mitigate algorithmic discrimination when present;
    • consult and incorporate feedback from affected communities and the public;
    • conduct ongoing monitoring and mitigation for AI-enabled discrimination and notify negatively affected individuals; and
    • provide an option to opt out of AI-enabled decisions.
  • Continually monitor, evaluate, and review AI risks and mitigate emerging risks of AI to the public's safety and rights.