
President Biden Issues Comprehensive Executive Order on Artificial Intelligence

Practical Law Legal Update w-041-2047 (Approx. 6 pages)

by Practical Law Intellectual Property & Technology
Published on 31 Oct 2023 | USA (National/Federal)
On October 30, 2023, President Biden issued a wide-ranging executive order aimed at advancing safe, secure, and trustworthy artificial intelligence (AI).
On October 30, 2023, President Biden signed an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The comprehensive order sets new standards for artificial intelligence (AI) safety and security, directing federal agencies to assess the potential risks of AI technology and to implement policies governing its advancement and use. Among other objectives, the order aims to protect individual privacy and promote equity, consumer rights, and innovation, while maintaining the US as a world leader in AI.

AI Safety and Security

The order outlines a robust, interagency AI safety and security framework, including directives regarding:
  • AI use and testing guidelines. The Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), with the Secretaries of Energy and Homeland Security and other relevant agencies, must establish:
    • guidelines and best practices to promote consensus industry standards for AI use; and
    • appropriate AI testing procedures, known as "red-teaming" tests (structured tests to find flaws and vulnerabilities in an AI system), particularly for dual-use foundation models (including generative AI and large language models (LLMs)).
      (See NIST's FAQ sheet regarding its responsibilities under the order.)
  • Notification requirements. The Secretary of Commerce must require companies developing dual-use foundation models to report and record efforts regarding:
    • training, developing, and producing the models, including physical and cybersecurity measures; and
    • the results of all red-team testing of the models.
    The Secretary of Commerce must also propose regulations requiring US Infrastructure as a Service (IaaS) providers to report when a foreign person transacts with the provider (or resells a US IaaS product) to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity.
  • AI infrastructure and cybersecurity. Among other directives, the order requires:
    • agency heads to evaluate and report to the Secretary of Homeland Security potential risks related to the use of AI in critical infrastructure, consistent with NIST's AI Risk Management Framework (see Legal Update, NIST Releases Artificial Intelligence Risk Management Framework);
    • agencies to incorporate the AI Risk Management Framework and other guidance into safety and security guidelines for use by critical infrastructure owners and operators;
    • the Secretary of the Treasury to issue a report on best practices for financial institutions to manage AI-specific cybersecurity risks; and
    • the Secretaries of Defense and Homeland Security to identify, develop, test, evaluate, and deploy AI capabilities, including LLMs, through a pilot program to help evaluate vulnerabilities in critical US Government software, systems, and networks.
  • Chemical, biological, radiological, or nuclear (CBRN) weapons. The Secretary of Homeland Security, in consultation with the Director of the Office of Science and Technology Policy (OSTP), must evaluate and report on the risks AI misuse poses in producing CBRN threats, and the benefits of using AI to counter such threats.
  • Authentication. The Department of Commerce must develop guidance for content authentication and watermarking to label AI-generated content clearly.

AI Innovation

The order also aims to promote innovation by:
  • Directing the Secretaries of State and Homeland Security to:
    • streamline processing times of visa petitions and applications for AI-related workers and researchers; and
    • establish a program to identify and attract top talent in AI and other critical and emerging technologies at universities, research institutions, and the private sector overseas.
  • Requiring:
    • the Director of the National Science Foundation (NSF) to launch a National AI Research Resource (NAIRR);
    • the US Patent and Trademark Office to publish guidance on the use of AI, including generative AI, in the inventive process; and
    • the US Copyright Office to recommend potential executive actions to the President regarding copyright and AI.
  • Directing the Secretary of Health and Human Services (HHS) to prioritize grantmaking that advances responsible AI innovation by healthcare technology developers and promotes the welfare of patients and healthcare workers.

AI in the Workplace

The order also aims to support worker welfare, equity, and civil rights by, among other things:
  • Directing the Secretary of Labor to:
    • submit to the President a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements;
    • develop and publish principles and best practices for employers that could be used to mitigate AI's potential harms to employees' well-being and maximize its potential benefits; and
    • publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.
  • Directing the Secretaries of State and Homeland Security to take steps to streamline visa timelines and processing and consider rulemaking regarding nonimmigrant visa processes and protocols to help attract and retain talent in AI and other emerging technologies in the US economy.
For more on the order and other developments regarding AI in the workplace, see Practice Note, Artificial Intelligence (AI) in the Workplace (US).

Consumer Protection

The order aims to protect consumers, patients, passengers, and students by directing:
  • The Secretaries of HHS, Defense, and Veterans Affairs to establish an AI task force to develop policies on responsible deployment and use of AI in the health and human services sector, including in drug safety.
  • The Secretary of Transportation to assess the need for information, technical assistance, and guidance regarding the use of AI in transportation.
  • The Secretary of Education to develop resources, policies, and guidance regarding safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities.
  • The Federal Housing Finance Agency, the Consumer Financial Protection Bureau, and the Department of Housing and Urban Development to:
    • evaluate bias in underwriting models; and
    • issue guidance on combating unlawful housing discrimination in the use of algorithmic tools.

Privacy

To mitigate privacy risks associated with AI, the order:
  • Requires the Director of the Office of Management and Budget (OMB) to:
    • evaluate and take steps to identify commercially available information (CAI) that agencies have procured from sources including data brokers and vendors, particularly CAI that contains personally identifiable information; and
    • evaluate agency standards and procedures associated with the collection, processing, maintenance, use, sharing, dissemination, and disposition of CAI that contains personally identifiable information.
  • Directs NIST to create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI.
  • Requires the Director of the NSF to:
    • collaborate with the Secretary of Energy to fund the creation of a Research Coordination Network (RCN) aimed at advancing privacy research and developing, deploying, and scaling privacy enhancing technologies (PETs); and
    • engage with agencies to identify ongoing work and potential opportunities to incorporate PETs into their operations.

Government AI Use

The order also aims to advance federal government use of AI by requiring:
  • The formation of an interagency council to issue guidance on government AI development and use.
  • The Director of OMB, together with the interagency council, to provide guidance and recommendations to agencies regarding appropriate AI use, including:
    • required minimum risk-management procedures;
    • external testing (red-teaming) of generative AI;
    • testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs; and
    • reasonable steps to watermark or otherwise label output from generative AI.
  • The Director of OMB to develop a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with federal policy on AI.
  • Relevant agencies to develop and issue a framework for prioritizing critical and emerging technologies, including generative AI and LLM-based chat interfaces, code-generation and debugging tools, and associated application programming interfaces, as well as prompt-based image generators.