
Understanding the United States Executive Order on Safe, Secure, and Trustworthy AI

November 1, 2023
5 mins

On October 30, 2023, President Joe Biden issued an Executive Order aimed at fostering a balanced approach to the development and deployment of Artificial Intelligence (AI) to ensure it is safe, secure, and trustworthy. The Order acknowledges the potential of AI technologies to solve urgent societal challenges and to enhance prosperity, productivity, innovation, and security.

At the same time, the Executive Order highlights the adverse effects that irresponsible use of artificial intelligence could have, such as fraud, discrimination, bias, misinformation, and threats to national security, underscoring the need for guardrails. The Order calls for a collective effort from the federal government (including the Department of Homeland Security, the Department of Health and Human Services, the Department of Energy, the Department of Commerce, and more), the private sector, academia, and civil society to mitigate these harms while maximizing the benefits of AI.

Here are the three main guiding principles behind this Executive Order:

  • Safety and security: The Order emphasizes the need for robust, reliable, repeatable, and standardized evaluations of AI systems. It mandates addressing security risks, including those related to biotechnology, cybersecurity, and critical infrastructure. The document also highlights the importance of testing, post-deployment monitoring, and effective labeling to ensure that AI systems are ethically developed, securely operated, and compliant with federal laws.
  • Responsible innovation: It encourages promoting responsible innovation, competition, and collaboration to maintain U.S. leadership in AI. The Order calls for investments in AI-related education, training, development, and research, and for tackling intellectual property issues. It also emphasizes creating a fair, open, and competitive AI ecosystem and marketplace, supporting small developers, and addressing potential risks from dominant firms' control over critical assets like semiconductors, computing power, cloud storage, and data.
  • Supporting American workers: As AI creates new jobs and industries, the Order stresses adapting job training and education to support a diverse workforce. It advises against deploying AI in ways that undermine rights, worsen job quality, or cause harmful labor-force disruptions. The Order encourages basing the next steps in AI development on the views of workers, labor unions, educators, and employers to support responsible AI uses that improve workers' lives and augment human work.

In subsequent sections of this article, we will examine the directives in this Executive Order and the actions they mandate. First, let's explore how we got here.

How did we get here? The History of AI Regulation in the United States of America

President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is the result of years of developing insights and responses to emerging technologies in the field of AI.

To show how we arrived at this turning point, this section walks you through the path of AI regulation in the United States.

Early Engagement: Regulating Open- and Closed-Source LLMs

Navigating the spectrum between open and closed LLM systems is critical for effective AI policy. Striking the right balance will promote innovation and competition while managing the potential risks of AI.

In 2024, the National Telecommunications and Information Administration (NTIA) under the U.S. Department of Commerce will report on the risks and benefits of foundation models whose weights are made widely available under public licenses. This is bound to stir up debate over whether open model weights should be treated as free speech, along with accusations that big tech companies are lobbying to protect their moat.

As these LLM systems permeated various sectors, the need for a regulatory framework became apparent. Policymakers grappling with rapid advances in AI models and tools began discussing how to balance promoting U.S. global leadership in AI against the risks to individuals, businesses, and national security.

Legislative Efforts

That early engagement translated into legislative action, with U.S. House and Senate committees holding numerous hearings on AI. The hearings featured big names such as Elon Musk, CEO of SpaceX and Tesla and owner of X (formerly Twitter); Mark Zuckerberg, CEO of Meta; Microsoft co-founder Bill Gates; and Sam Altman, CEO of OpenAI, the company behind the AI chatbot ChatGPT.

Biden Administration’s Early Steps

In October 2022, the Biden administration issued the non-binding Blueprint for an AI Bill of Rights, marking an early step toward delineating the government's stance on governing automated systems, with a focus on protecting civil rights.

Then, on September 12, 2023, several tech companies signed voluntary commitments to follow the safeguards President Biden set out for AI. This was a first step toward encouraging responsible AI use through partnerships with the private sector.

SAFE Innovation—A Values-Based Framework and New Legislative Process

Despite strong bipartisan interest, the challenge of passing comprehensive AI legislation persisted, paving the way for the SAFE Innovation Framework proposed by Senate Majority Leader Chuck Schumer.

The Executive Order

The culmination of these efforts and the evolving understanding of AI's impact led to the issuance of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. This Executive Order embodies a more structured approach to AI governance, reflecting the administration’s commitment to promoting responsible AI development and deployment while addressing the associated potential risks of AI.

What are the Executive Order Directives?

We have summarized the Executive Order Directives below so you can easily skim through and find the directives and the corresponding actions relevant to you.

Directive 1: New Standards for AI Safety and Security

Actions:

  • Require developers to share safety test results with the U.S. government.
  • Develop standards and tools to ensure AI systems are safe and secure.
  • Protect against AI-enabled risks to national security and public health.
  • Establish strong standards for biological synthesis screening.

Directive 2: Protecting Americans’ Privacy

Actions:

  • Prioritize federal support for privacy-preserving techniques in AI.
  • Strengthen privacy-preserving research and technologies.
  • Evaluate how agencies collect and use commercially available data.
  • Develop guidelines for federal agencies to evaluate privacy-preserving techniques.
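To make the privacy directive above concrete, here is a minimal sketch of one well-known privacy-preserving technique the Order's language could cover: differential privacy, which adds calibrated noise to aggregate statistics so no single individual's record can be inferred from the result. The `dp_count` function, the salary data, and the parameter choices are all illustrative assumptions, not anything specified in the Executive Order.

```python
import random

def dp_count(values, threshold, epsilon=1.0):
    """Differentially private count of values above a threshold.

    Adds Laplace noise with scale = sensitivity / epsilon. A counting
    query has sensitivity 1 (adding or removing one record changes the
    count by at most 1), so the noise scale is 1 / epsilon.
    """
    true_count = sum(1 for v in values if v > threshold)
    # A Laplace sample: a random sign times an exponential variate
    # with rate epsilon (mean 1 / epsilon).
    noise = random.choice([-1, 1]) * random.expovariate(epsilon)
    return true_count + noise

# Illustrative use: publish how many salaries exceed 100k without
# exposing whether any specific individual's salary is in the data.
salaries = [72_000, 95_000, 110_000, 130_000, 88_000, 150_000]
noisy = dp_count(salaries, 100_000, epsilon=0.5)
```

Smaller `epsilon` values give stronger privacy but noisier answers; agencies evaluating such techniques (per the directive) would tune this trade-off per dataset.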

Directive 3: Advancing Equity and Civil Rights

Actions:

  • Provide guidance to keep AI algorithms from being used to exacerbate discrimination.
  • Address algorithmic discrimination through training and coordination.
  • Ensure fairness in the criminal justice system's use of AI.

Directive 4: Standing Up for Consumers, Patients, and Students

Actions:

  • Make advances in the responsible use of AI in healthcare.
  • Shape AI’s potential in education.
  • Protect consumers and patients while ensuring AI benefits.

Directive 5: Promoting Innovation and Competition

Actions:

  • Catalyze AI research and provide grants in vital areas.
  • Promote a fair and competitive AI ecosystem.
  • Streamline visa criteria for skilled immigrants.

Directive 6: Supporting Workers

Actions:

  • Develop principles and best practices for worker protection.
  • Produce a report on AI’s labor-market impacts.

Directive 7: Advancing American Leadership Abroad

Actions:

  • Expand collaborations on AI at bilateral, multilateral, and multistakeholder levels.
  • Accelerate the development of AI standards with international partners.
  • Promote responsible AI development abroad.

Directive 8: Ensuring Responsible and Effective Government Use of AI

Actions:

  • Issue guidance for agencies’ AI use.
  • Streamline AI product and service acquisition.
  • Accelerate the hiring of AI professionals in government.

Now that we've discussed the key directives of the US Executive Order on AI, let's compare and contrast them with the European Union's approach to AI regulation, known as the EU Artificial Intelligence Act (AI Act).

US Executive Order on Safe, Secure, and Trustworthy AI vs. the European Union AI Act

In the table below, we present a comparative overview of the key aspects and focus areas of the US Executive Order on Safe, Secure, and Trustworthy AI and the EU Artificial Intelligence Act (AI Act).

Read more in “Proposed AI Regulation: EU AI Act, UK's Pro-Innovation, US AI Bill of Rights” by Encord’s co-founder and president.

As you saw in the comparison, while both regulations aim to foster a safe and responsible AI ecosystem, they approach AI governance from slightly different vantage points, reflecting the distinct priorities and regulatory philosophies of the US and the EU.

What does the European AI Act mean for you as an AI developer? Learn more in this article by Ulrik Stig Hansen, Encord’s co-founder and president.

Conclusion

Increased involvement from policymakers, legislative efforts, and joint initiatives between the public and private sectors have all contributed to the current AI regulatory landscape. The issuance of the Executive Order represents a significant milestone in the ongoing journey toward establishing a robust framework for AI governance in the U.S., one aimed at harnessing the benefits of AI while mitigating its potential perils. But will regulation stifle open-source AI efforts? Or will it encourage an ecosystem of open innovation while regulating risks at the application layer?

In this article, you learned about the evolution of AI regulation in the U.S.: the key legislative efforts, the Biden administration's early steps toward AI governance, and the collaborative initiatives that led to the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. These included congressional hearings, voluntary commitments from tech companies, and values-based frameworks such as the SAFE Innovation Framework.

Finally, we compared the directives with the proposed European Union AI Act, highlighting the clearly different priorities and regulatory philosophies of the US and the EU.

Get access to our new AI Act Learning Pack, which includes all the key resources you need to ensure forward compatibility.

Written by

Stephen Oladele