On October 30, 2023, the Biden administration issued a long-awaited executive order (EO) on artificial intelligence (AI). The EO expands on previous AI initiatives, such as the Blueprint for an AI Bill of Rights, and lays out the most comprehensive set of directions to date for federal agencies and the largest AI developers. The goal of the EO is to create a broad framework for “responsible AI” that can protect against potential harms without stifling innovation. To that end, the EO instructs agencies to use regulatory and enforcement tools to address safety, privacy, discrimination, and collaboration with global AI regulatory efforts.
While the EO contains instructions to the various agencies and executive branch offices, it does not create new mandates. Moreover, implementation of the EO’s requirements takes place over various time frames – ranging from 90 days to 365 days from the date of signing – making it difficult to predict with specificity when guidance will be issued or regulations promulgated. Also, many of the initiatives in the EO will require congressional action before taking effect.
Below, we’ve outlined the key elements for understanding the scope of the EO. Cooley practitioners will be discussing these and other parts of the EO during our AI Talks webinar series.
AI safety and security
- The Department of Commerce, in coordination with other federal agencies, shall issue guidelines and best practices – with the aim of promoting consensus industry standards – for developing and deploying safe, secure, and trustworthy AI systems.
- Companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and they must share the results of all red-team safety tests.
- Beginning 90 days after the issuance of the EO, and at least annually thereafter, the head of each agency with relevant regulatory authority over critical infrastructure shall evaluate and provide to the secretary of homeland security an assessment of potential risks related to the use of AI in the critical infrastructure sectors involved. The EO defines “critical infrastructure” as “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.”
- The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content, as well as establish standards and best practices for detecting AI-generated content and authenticating official content. The National Security Council and White House chief of staff will develop a national security memorandum that directs further actions on AI and security.
Privacy
- Agencies shall use available policy and technical tools, including privacy-enhancing technologies where appropriate, to protect privacy and to combat the broader legal and societal risks – including the chilling of First Amendment rights – that result from the improper collection and use of people’s data.
- Independent regulatory agencies are encouraged to use their full range of authorities, including issuing new regulations, to protect American consumers from threats to their privacy.
- Through the EO, the president calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially children.
Equity and civil rights
- Agencies are to address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
- Agencies including the Consumer Financial Protection Bureau and the Department of Labor shall use their respective civil rights and civil liberties offices and authorities to prevent and address unlawful discrimination, as well as other harms that result from uses of AI in federal government programs and benefits administration.
- The Department of Justice and other law enforcement agencies shall ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
Healthcare and education
- The Department of Health and Human Services will develop a strategic plan that includes policies and frameworks – possibly including regulatory action – to ensure responsible use of AI in healthcare, including drug development.
- The Department of Health and Human Services will also establish a safety program to receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI.
- The federal government will provide resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.
Workplace fairness
- The Department of Labor is directed to address job displacement, labor standards, workplace equity, health and safety, and data collection.
- The chair of the Council of Economic Advisers shall prepare a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions.
Innovation and competition
- The under secretary of commerce for intellectual property and the director of the US Patent and Trademark Office (USPTO) shall publish guidance for USPTO patent examiners and applicants addressing inventorship and the use of AI – including generative AI – in the inventive process, with illustrative examples in which AI systems play different roles in the inventive process and guidance on how inventorship issues should be analyzed in each example.
- After the US Copyright Office of the Library of Congress publishes its forthcoming AI study that will address copyright issues raised by AI, the under secretary of commerce for intellectual property and USPTO director shall consult with the director of the US Copyright Office and issue recommendations to the president on potential executive actions relating to copyright and AI.
- To stop unlawful collusion, prevent dominant firms from disadvantaging competitors, and ensure that consumers and workers are protected from harms that may be enabled by the use of AI, the EO encourages the Federal Trade Commission to consider whether to exercise its existing authorities, including its rulemaking authority under the Federal Trade Commission Act.
- The National Science Foundation shall launch a pilot of the National AI Research Resource – a tool that will provide AI researchers and students access to key AI resources and data to foster public-private collaboration.
Federal government use and procurement of AI
- The director of the Office of Management and Budget shall convene and chair an interagency council to coordinate the development and use of AI in agencies’ programs and operations.
- The head of each agency shall implement or increase the use of existing training and familiarization programs, with the goal of acquiring specified AI products and services faster, more cheaply, and more effectively through more rapid and efficient contracting.
International collaboration
- The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety.
Cooley will monitor the implementation of this EO. For additional updates on the EO and other developments in AI, sign up to receive Cooley thought leadership content or follow us on LinkedIn.