Understanding the United States Executive Order on Safe, Secure, and Trustworthy AI
Contents
How did we get here? The History of AI Regulation in the United States of America
The Executive Order
What are the Executive Order Directives?
US Executive Order on Safe, Secure, and Trustworthy AI vs European Union AI Act
Conclusion
Written by
Stephen Oladele
On October 30, 2023, the White House announced an Executive Order issued by President Joe Biden aimed at fostering a balanced approach to the development and deployment of Artificial Intelligence (AI), ensuring it is safe, secure, and trustworthy. The Order acknowledges the potential of AI technologies to solve urgent societal challenges and to enhance prosperity, productivity, innovation, and security.
However, the Executive Order also highlights the potential harms of irresponsible AI use, such as fraud, discrimination, bias, misinformation, and threats to national security, and the consequent need for guardrails. The Order calls for a collective effort from the federal government (including the Department of Homeland Security, the Department of Health and Human Services, the Department of Energy, the Department of Commerce, and more), the private sector, academia, and civil society to mitigate these harms while maximizing the benefits of AI.
Here are the three main guiding principles behind this Executive Order:
- Safety and security: The Order emphasizes the need for robust, reliable, repeatable, and standardized evaluations of AI systems. It mandates addressing security risks, including those related to biotechnology, cybersecurity, and critical infrastructure. The document also highlights the importance of testing, post-deployment monitoring, and effective labeling to ensure that AI systems are ethically developed, securely operated, and compliant with federal laws.
- Responsible innovation: It encourages promoting responsible innovation, competition, and collaboration to maintain U.S. leadership in AI. The Order calls for investments in AI-related education, training, development, research, and tackling intellectual property issues. It also emphasizes creating a fair, open, and competitive AI ecosystem and marketplace, supporting small developers, and addressing potential risks from dominant firms' control over critical assets like semiconductors, computing power, cloud storage, and data.
- Supporting American workers: As AI creates new jobs and industries, the Order stresses adapting job training and education to support a diverse workforce. It advises against deploying AI in ways that undermine rights, worsen job quality, or cause harmful labor-force disruptions. The Order encourages building the next steps in AI development based on the views of workers, labor unions, educators, and employers to support responsible AI uses that improve workers' lives and augment human work.
In subsequent sections of this article, we will examine the actions under each of the AI directives in this Executive Order. First, let's explore how we got here.
How did we get here? The History of AI Regulation in the United States of America
President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is the result of years of developing insights and responses to emerging technologies in the field of AI.
To show how we arrived at this turning point, this section walks through the path of AI regulation in the United States.
Early Engagement: Regulating Open- and Closed-Source LLMs
Navigating the spectrum between open and closed LLM systems is critical for effective AI policy. Striking the right balance will promote innovation and competition while managing the potential risks of AI.
Under the Executive Order, the U.S. Department of Commerce must report in 2024 on the risks and benefits of foundation models whose weights are made widely available under public licenses, informing whether and how such open model weights should be regulated. This, of course, is bound to stir up debate over treating open model weights as free speech, along with accusations that big tech companies are lobbying to protect their moat.
As these LLM systems permeated various sectors, the need for a regulatory framework became apparent. Policymakers grappling with the rapid advancements in AI models and tools started the conversation about how to balance promoting US global leadership in AI against the risks to individuals, businesses, and national security.
Legislative Efforts
This early engagement translated into legislative action, with House and Senate committees holding numerous hearings on AI. Witnesses included big names such as Elon Musk, CEO of SpaceX, Tesla, and X (formerly Twitter); Mark Zuckerberg, CEO of Meta; Microsoft co-founder Bill Gates; and Sam Altman, CEO of OpenAI, the company behind the AI chatbot ChatGPT.
Biden Administration’s Early Steps
In October 2022, the Biden administration issued the non-binding Blueprint for an AI Bill of Rights, an early step toward delineating the government's stance on governing automated systems, with a focus on protecting civil rights.
Then, in July and September 2023, leading tech companies signed voluntary commitments to follow the AI safeguards President Biden set out, an early step toward encouraging responsible AI use through partnerships with the private sector.
SAFE Innovation—A Values-Based Framework and New Legislative Process
Despite strong bipartisan interest, passing comprehensive AI legislation remained a challenge, paving the way for the SAFE Innovation Framework proposed by Senate Majority Leader Chuck Schumer.
The Executive Order
The culmination of these efforts and the evolving understanding of AI's impact led to the issuance of the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Order embodies a more structured approach to AI governance, reflecting the administration's commitment to promoting responsible AI development and deployment while addressing the associated risks.
What are the Executive Order Directives?
We have summarized the Executive Order directives below so you can skim them and find the actions relevant to you.
Directive 1: New Standards for AI Safety and Security
Actions:
- Require developers to share safety test results with the U.S. government.
- Develop standards and tools to ensure AI systems are safe and secure.
- Protect against AI-enabled risks to national security and public health.
- Establish strong standards for biological synthesis screening.
Directive 2: Protecting Americans’ Privacy
Actions:
- Prioritize federal support for privacy-preserving techniques in AI.
- Strengthen privacy-preserving research and technologies.
- Evaluate how agencies collect and use commercially available data.
- Develop guidelines for federal agencies to evaluate privacy-preserving techniques.
Directive 3: Advancing Equity and Civil Rights
Actions:
- Offer advice to stop AI programs from making discrimination worse.
- Address algorithmic discrimination through training and coordination.
- Ensure fairness in the criminal justice system's use of AI.
Directive 4: Standing Up for Consumers, Patients, and Students
Actions:
- Make advances in the responsible use of AI in healthcare.
- Shape AI’s potential in education.
- Protect consumers and patients while ensuring AI benefits.
Directive 5: Promoting Innovation and Competition
Actions:
- Catalyze AI research and provide grants in vital areas.
- Promote a fair and competitive AI ecosystem.
- Streamline visa criteria for skilled immigrants.
Directive 6: Supporting Workers
Actions:
- Develop principles and best practices for worker protection.
- Produce a report on AI’s labor-market impacts.
Directive 7: Advancing American Leadership Abroad
Actions:
- Expand collaborations on AI at bilateral, multilateral, and multistakeholder levels.
- Accelerate the development of AI standards with international partners.
- Promote responsible AI development abroad.
Directive 8: Ensuring Responsible and Effective Government Use of AI
Actions:
- Issue guidance for agencies’ AI use.
- Streamline AI product and service acquisition.
- Accelerate the hiring of AI professionals in government.
Now that we've discussed the key directives of the US Executive Order on AI, let's compare and contrast them with the European Union's approach to AI regulation, known as the EU Artificial Intelligence Act (AI Act).
US Executive Order on Safe, Secure, and Trustworthy AI vs European Union AI Act
The table below presents a comparative overview of the key aspects and focus areas of the US Executive Order on Safe, Secure, and Trustworthy AI and the EU Artificial Intelligence Act (AI Act).

| Aspect | US Executive Order | EU AI Act |
| --- | --- | --- |
| Legal nature | Presidential directive that primarily binds federal agencies | Binding legislation applying to providers and deployers of AI systems in the EU market |
| Regulatory approach | Directive-based: agencies develop standards, guidance, and reporting requirements | Risk-based: systems classified as unacceptable, high, limited, or minimal risk |
| Key obligations | Sharing safety test results for powerful models, privacy-preserving techniques, equity and worker protections | Conformity assessments, transparency obligations, and registration of high-risk systems in an EU-wide database |
| Enforcement | Agency rulemaking and oversight under existing legal authorities | National supervisory authorities, with fines of up to €35M or 7% of worldwide annual turnover |
| Primary focus | Safety and security, innovation and competition, workers, equity, and government use of AI | Health, safety, and fundamental rights, including outright bans on unacceptable-risk uses |

As the comparison shows, while both regulations aim to foster a safe and responsible AI ecosystem, they approach AI governance from different vantage points, reflecting the distinct priorities and regulatory philosophies of the US and the EU.
Conclusion
Increased involvement from policymakers, legislative efforts, and joint initiatives between the public and private sectors have all contributed to the current AI regulatory landscape. The issuance of the Executive Order represents a significant milestone in the ongoing journey toward establishing a robust framework for AI governance in the U.S., one aimed at harnessing the benefits of AI while mitigating its potential perils. But will regulation stifle open-source AI? Or will it encourage an ecosystem of open innovation while regulating the risks at the application layer?
In this article, you learned about the evolution of AI regulation in the U.S., focusing on key legislative efforts, the Biden Administration's early steps toward AI governance, and the collaborative initiatives that culminated in the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. These included congressional hearings, voluntary commitments from tech companies, and values-based frameworks like the SAFE Innovation Framework.
Finally, we compared the directives with the proposed European Union AI Act, where you saw the clearly different priorities and regulatory philosophies of the White House and the European Parliament.
What the European AI Act Means for You, AI Developer [Updated December 2023]
TL;DR AI peeps, brace for impact! The EU AI Act is hitting the stage with the world's first-ever legislation on artificial intelligence. Imagine GDPR but for AI. Say 'hello' to legal definitions of 'foundation models' and 'general-purpose AI systems' (GPAI). The Act rolls out a red carpet of dos and don'ts for AI practices, mandatory disclosures, and an emphasis on 'trustworthy AI development.' The wild ride doesn't stop there - we've got obligations to follow, 'high-risk AI systems' to scrutinize, and a cliffhanger ending on who's the new AI sheriff in town. Hang tight; it's a whole new world of AI legislation out there! The European Parliament recently voted to adopt the EU AI Act, marking the world's first piece of legislation on artificial intelligence. The legislation intends to ban systems with an "unacceptable level of risk" and establish guardrails for developing and deploying AI systems into production, particularly in limited risk and high risk scenarios, which we’ll get into later. Like GDPR (oh, how don't we all love the "Accept cookie" banners), which took a few years from adoption (14 April 2016) until enforceability (25 May 2018), the legislation will have to pass through final negotiations between various EU institutions (so-called 'trilogues') before we have more clarity on concrete timelines for enforcement. As an AI product developer, the last thing you probably want to be spending time on is understanding and complying with regulations (you should've considered that law degree, after all, huh), so I decided to stay up all night reading through the entirety of The Artificial Intelligence Act - yes, all 167 pages, outlining the key points to keep an eye on as you embark on bringing your first AI product to market. We've collated all the key resources you need to learn about the EU AI Act, including our latest webinar, here. The main pieces of the legislation and corresponding sections that I'll cover in this piece are: Definitions, general principles & prohibited practices - Article 3/4/5 Fixed and general-purpose AI systems and provisions - Article 3/28 High-risk AI classification - Articles 6/7 High-risk obligations - Title III (Chapter III) Transparency obligations - Article 52 Governance and enforcement - Title VI/VII It's time to grab that cup of ☕ and get ready for some legal boilerplate crunching 🤡 Definitions, general principles & prohibited practices As with most EU legislation, the AI Act originated with a set of committees based in the European Union. The Act is the brainchild of two bodies - specifically, The European Internal Market and Consumer Protection ('IMCO') and Civil Liberties, Justice, and Home Affairs ('LIBE') committees - which seem to have an even greater fondness for long-winded acronyms than developers - who first brought forward the Act through the European Commission on 21 April 2021. Now that we've got that settled let's move on to some legal definitions of AI 🎉 In addition to defining 'artificial intelligence systems' (which has been left deliberately neutral in order to cover techniques which are not yet known/developed), lawmakers distinguish between 'foundation models' and a 'general-purpose AI systems' (GPAI), adopted in the more recent versions to cover development of models that can be applied to a wide range of tasks (as opposed to fixed-purpose AI systems). 
Article 3(1) of the draft act states that ‘artificial intelligence system’ means: ...software that is developed with [specific] techniques and approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. Notably, the IMCO & LIBE committees have aligned their definition of AI with the OECD's definition and proposed the following definitions of GPAI and foundation models in their article: (1c) 'foundation model' means an AI model that is trained on broad data at scale, is designed for the generality of output, and can be adapted to a wide range of distinctive tasks (1d) 'general-purpose AI system' means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed These definitions encompass both closed-sourced and open-source technology. The definitions are important as they determine what bucket you fall into and thus what obligations you might have as a developer. In other words, the set of requirements you have to comply with are different if you are a foundation model developer than if you are a general-purpose AI system developer, and different still to being a fixed-purpose AI system developer. The GPAI and foundation model definitions might appear remarkably similar since a GPAI can be interpreted as a foundation model and vice versa. However, there's some subtle nuance in how the terms are defined. The difference between the two concepts focuses specifically on training data (foundation models are trained on 'broad data at scale') and adaptability. Additionally, generative AI systems fall into the category of foundation models, meaning that providers of these models will have to comply with additional transparency obligations, which we'll get into a bit later. The text also includes a set of general principles and banned practices that both fixed-purpose and general-purpose AI developers - and even adopters/users - must adhere to. Specifically, the language adopted in Article 4 expands the definitions to include general principles for so-called 'trustworthy AI development.' It encapsulates the spirit that all operators (i.e., developers and adopters/users) make the best effort to develop ‘trustworthy’ (I won’t list all the requirements for being considered trustworthy, but they can be found here) AI systems. In the spirit of the human-centric European approach, the most recent version of the legislation that ended up going through adoption also includes a list of banned and strictly prohibited practices (so-called “unacceptable risk”) in AI development, for example, developing biometric identification systems for use in certain situations (e.g., kidnappings or terrorist attacks), biometric categorization, predictive policing, and using emotion recognition software in law enforcement or border management. Risk-based obligations for developers of AI systems Now, this is where things get interesting and slightly heavy on the legalese, so if you haven't had your second cup of coffee yet, now is a good time. Per the text, any AI developer selling services in the European Union, or the EU internal market as it is known, must adhere to general, high-risk, and transparency obligations, adopting a "risk-based" approach to regulation. 
This risk-based approach means that the set of legal requirements (and thus, legal intervention) you are subject to depends on the type of application you are developing, and whether you are developing fixed-purpose AI systems, GPAI, a foundation model, or generative AI. The main thing to call out is the different risk-based "bucket categories", which fall into minimal/no-risk, high-risk, and unacceptable risk categories, with an additional ‘limited risk’ category for AI systems that carry specific transparency obligations (i.e., generative AI like GPT): Minimal/no-risk AI systems (e.g., spam filters and AI within video games) will be permitted with no restrictions. Limited risk AI systems (e.g. image/text generators) are subject to additional transparency obligations. High-risk AI systems (e.g recruitment, medical devices, and recommender systems used by social media platforms - I’ve included an entire section on what constitutes high-risk AI systems later in the post - stay tuned) are allowed but subject to compliance with AI requirements and conformity assessments - more on that later. Unacceptable risk systems, which we touched on before, are prohibited. We’ve constructed the below decision tree based on our current understanding of the regulation, and what you’ll notice is that there is a set of “always required” obligations for general-purpose AI system, foundation model, and generative AI developers irrespective of whether they are deemed high-risk or not. Foundation models developers will have to do things like disclose how much compute (so the total training time, model size, compute power, and so on) and measure the energy consumption used to train the model - and similar to high-risk AI system developers - also conduct conformity assessments, register the model in a database, do technical audits, and so on. If you’re a generative AI developer, you’ll also have to document and disclose any use of copyrighted training data. Fixed and general-purpose AI developers (Article 3/28) Most AI systems will not be high-risk (Titles IV, IX), which carry no mandatory obligations, so the provisions and obligations for developers mainly centre around high-risk systems. However, the act envisages the creation of “codes of conduct” to encourage developers of non-high-risk AI systems to voluntarily apply the mandatory requirements. The developers building high-risk fixed and certain types of general-purpose AI systems must comply with a set of rules, including an ex-ante conformity assessment, as mentioned above, alongside other extensive requirements such as risk management, testing, technical robustness, appropriate training data, etc. Articles 8 to 15 in the Act list all requirements, which are too lengthy to recite here. As an AI developer, you should pay particular attention to Article 10 concerning data and data governance. Take, for example, Article 10 (3): Training, validation and testing data sets shall be relevant, representative, free of errors and complete. As a data scientist, you can probably appreciate how difficult it will be to prove compliance 💩 Separately, the conformity assessment (in case you have to do one - see decision tree above) stipulates that you must register the system in an EU-wide database before placing them on the market or in service. You’re not off the hook if you’re a provider selling into the EU - in that case, you have to appoint an authorised representative to ensure the conformity assessment and establish a post-market monitoring system. 
On a more technical legalese point (no, you’re not completely off the 🪝if you are an AI platform selling models via API or deploying an open-source model), the AI Act mandates that GPAI providers actively support downstream operators in achieving compliance by sharing all necessary information and documentation regarding an AI model for general-purpose AI systems. However, the provision stipulates that if a downstream provider employs any GPAI system in a high-risk AI context, they will bear the responsibility as the provider of 'high-risk AI systems'. So, suppose you're running a model off an AI platform or via an API and deploying it in a high-risk environment as the downstream deployer. In that case, you're liable - not the upstream provider (i.e., the AI platform or API in this example). Phew. Providers of foundation models (Article 28b) The lawmakers seem to have opted for a stricter approach to foundation models (and conversely, generative AI systems) than general fixed-purpose systems and GPAI, as there is no notion of a minimal/no-risk system. Specifically, foundation model developers must comply with obligations related to risk management, data governance, and the level of robustness of the foundation model to be vetted by independent experts. These requirements mean foundation models must undergo extensively documented analysis, testing, and vetting - similar to high-risk AI systems - before developers can deploy them into production. Who knows, 'AI foundation model auditor' might become the hottest job of the 2020s. As with high-risk systems, EU lawmakers demand foundation model providers implement a quality management system to ensure risk management and data governance. These providers must furnish the pertinent documents for up to 10 years after launching the model. Additionally, they are required to register their foundation models on the EU database and disclose the computing power needed alongside the total training time of the model. Providers of generative AI models (Article 28b 4) As an addendum to the requirements for foundation model developers, generative AI providers must disclose that content (text, video, images, and so on) has been artificially generated or manipulated under the transparency obligations outlined in Article 52 (which provides the official definition for deep fakes, exciting stuff) and also implement adequate safeguards against generating content in breach of EU law. Moreover, generative AI models must "make publicly available a summary disclosing the use of training data protected under copyright law." Ouch, we're in for some serious paperwork ⚖️ High-risk AI systems and classifications (Articles 6/7) I've included the formal definition of high-risk AI systems given its importance in the regulation for posterity. Here goes! High-risk systems are AI products that pose significant threats to health, safety, or the fundamental rights of persons, requiring compulsory conformity assessments to be undertaken by the provider. 
High-risk AI systems and classifications (Articles 6/7)

I've included the formal definition of high-risk AI systems, given its importance in the regulation, for posterity. Here goes! High-risk systems are AI products that pose significant threats to health, safety, or the fundamental rights of persons, requiring compulsory conformity assessments to be undertaken by the provider. A system is considered high-risk when both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

Annex II lists the directives covering the regulation of things like medical devices, heavy machinery, the safety of toys, and so on. Furthermore, the text explicitly provides that AI systems in the following areas are always considered high-risk (Annex III):

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management, and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes

A ChatGPT-generated joke about high-risk AI systems is in order (full disclosure: this joke was created by a generative model). Q: Why did all the AI systems defined by the AI Act form a support group? A: Because they realized they were all "high-risk" individuals and needed some serious debugging therapy! Lol.

Governance and enforcement

Congratulations! You've made it through what we at Encord think are the most pertinent sections of the 167-page document to familiarise yourself with. However, many unknowns remain about how the AI Act will play out. The relevant legal definitions and obligations are still vague, raising questions about how enforcement will work in practice. For example, what does 'broad data at scale' in the foundation model definition mean? It will mean something very different to Facebook AI Research (FAIR) than it does to smaller research labs and some of the recently emerged foundation model startups like Anthropic and Mistral. The ongoing debate on enforcement revolves around the limited powers of the proposed AI Office, which is intended to play a supporting role in providing guidance and coordinating joint investigations (Title VI/VII). Meanwhile, the European Commission is responsible for settling disputes among national authorities regarding dangerous AI systems, adding to the complexity of determining who will ultimately police compliance and ensure obligations are met. What is clear is that the fines for non-compliance can be substantial: up to €35M or 7% of total worldwide annual turnover, depending on the severity of the offence (a quick illustration of that cap follows below).
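To put the headline number in perspective, here is a tiny illustrative calculation. We assume the cap is "whichever is higher" of the two figures, which is how the provisional deal has generally been reported; treat that reading as an assumption, not legal advice.

```python
# Illustrative fine cap for the most serious breaches: EUR 35M or 7% of
# worldwide annual turnover. We assume "whichever is higher", as generally
# reported for the provisional deal (an assumption, not legal advice).

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000 for a EUR 2B-turnover firm
print(f"EUR {max_fine_eur(100_000_000):,.0f}")    # EUR 35,000,000 (the floor applies)
```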
December amendments and ratification

On December 9th, after nearly 15 hours of negotiations and an almost 24-hour debate, the European Parliament and EU countries reached a provisional deal outlining the rules governing the use of artificial intelligence in the AI Act. The negotiations primarily focused on unacceptable-risk and high-risk AI systems, amid concerns about excessive regulation hurting innovation among European companies. Unacceptable-risk AI systems will still be banned in the EU, with the Act specifically prohibiting emotion recognition in workplaces and the use of AI that exploits individuals' vulnerabilities, among other practices. Remote biometric identification (RBI) systems are generally banned, although there are limited exceptions for their use in publicly accessible places for law enforcement purposes.

High-risk systems will adhere to most of the key principles previously laid out in June, and detailed above, with a heightened emphasis on transparency. However, there are exceptions for law enforcement in cases of extreme urgency, allowing the 'conformity assessment procedure' to be bypassed.

Another notable aspect is the regulation of GPAIs, which is divided into two tiers (summarised as a small data structure below). Tier 1 applies to all GPAIs and includes requirements such as maintaining technical documentation for transparency, complying with EU copyright law, and providing summaries of the content used for training. Note that the transparency requirements do not apply to open-source models and models in research and development that do not pose systemic risk. Tier 2 sets additional obligations for GPAIs with systemic risk, including conducting model evaluations, assessing and mitigating systemic risk, conducting adversarial testing, reporting serious incidents to the Commission, ensuring cybersecurity, and reporting on energy efficiency.

There were several other notable amendments, including the promotion of sandboxes to facilitate real-world testing and support for SMEs. Most of the measures will come into force in two years, but the prohibitions will take effect after six months and the GPAI model obligations after twelve months. The focus now shifts to the Parliament and the Council, with the Parliament's Internal Market and Civil Liberties committees expected to vote on the agreement - a vote expected to be mostly a formality.
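Here is the two-tier structure written out as a plain data structure, which some readers may find easier to scan than prose. The wording of each obligation is our paraphrase of the deal as described above, not quoted text from the Act.

```python
# Illustrative summary of the two-tier GPAI obligations in the December
# provisional deal, as described above. Obligation wording is paraphrased.
TIER_1_ALL_GPAI = [
    "maintain technical documentation for transparency",
    "comply with EU copyright law",
    "publish summaries of the content used for training",
]
TIER_2_SYSTEMIC_RISK = TIER_1_ALL_GPAI + [
    "conduct model evaluations",
    "assess and mitigate systemic risk",
    "conduct adversarial testing",
    "report serious incidents to the Commission",
    "ensure cybersecurity",
    "report on energy efficiency",
]

def obligations(systemic_risk: bool, open_source_rnd_only: bool = False) -> list[str]:
    # Per the deal as summarised above, the transparency requirements do not
    # apply to open-source / R&D models that pose no systemic risk.
    if open_source_rnd_only and not systemic_risk:
        return []
    return TIER_2_SYSTEMIC_RISK if systemic_risk else TIER_1_ALL_GPAI

print(len(obligations(systemic_risk=True)))  # 9 obligations for Tier 2 models
```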
Final remarks

On a more serious note, the EU AI Act is an unprecedented step toward the regulation of artificial intelligence, marking a new era of accountability and governance in the realm of AI. As AI developers, we now operate in a world where considerations around our work's ethical, societal, and individual implications are no longer optional but mandated by law. The Act brings substantial implications for our practice, demanding an understanding of the regulatory landscape and a commitment to uphold the principles at its core.

As we venture into this new landscape, the challenge lies in navigating the complexities of the Act and embedding its principles into our work. In a field as dynamic and rapidly evolving as AI, the Act serves as a compass, guiding us towards responsible and ethical AI development. The task at hand is by no means simple: it demands patience, diligence, and commitment from us. But it is precisely through these challenges that we will shape an AI-driven future that prioritizes the rights and safety of individuals and society at large.

We stand at the forefront of a new era, tasked with translating this legislation into action. The road ahead may seem daunting, but it offers us an opportunity to set a new standard for the AI industry, one that champions transparency, accountability, and respect for human rights. As we step into this uncharted territory, let us approach the task with the seriousness it demands, upholding our commitment to responsible AI and working towards a future where AI serves as a tool for good.

Get access to our new AI Act Learning Pack, which includes all the key resources you need to ensure forward compatibility, here.
Proposed AI Regulation: EU AI Act, UK's Pro-Innovation, US AI Bill of Rights
Should AI be regulated? How might governments attempt to regulate AI? What would regulation mean for AI startups, scaleups, and the tech giants supporting and innovating in the AI sector?

While humanity has not yet developed artificial intelligence that can unequivocally pass the Turing test, fears about where the technology is heading are not unfounded. We don't have AIs on the level of Iain M. Banks's ship "Minds" in the Culture series, Alastair Reynolds's "Mechanism" in the Blue Remembered Earth series, or the "SI" in Peter F. Hamilton's books about a space-faring, wormhole-connected human society known as The Commonwealth. Even so, discussions and debates around AI regulation have gained significant traction recently. Governments, regulatory bodies, and policymakers around the world are grappling with the challenges and potential risks associated with the inevitable widespread adoption of artificial intelligence, and there has been considerable talk and speculation among the media, regulatory bodies, and political leaders at every level about proposed AI regulations across the EU, UK, US, and worldwide. As a result, governments are moving fast to propose regulations and new laws to govern the development and use of Artificial Intelligence.

It is worth noting the stark contrast between the great sense of urgency for adopting AI regulation and the slow progress of cryptocurrency regulation. It's evident that lawmakers see AI as a potential threat to human society, the economy, and the environment. Let's dive into why governments are moving quickly, what regulation means for the AI sector, some of the challenges, and the regulations proposed thus far.

Public Discourse around AI Regulation

There has been significant pressure from prominent figures in the industry to get AI regulation right. On May 30, 2023, hundreds of leading AI experts, including OpenAI CEO Sam Altman and Microsoft founder Bill Gates, signed the following Statement of AI Risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

This declaration is not without precedent: in 2015, Elon Musk, Professor Stephen Hawking, and numerous others met for an AI ethics conference, where it was agreed that an "uncontrolled hyper-leap in the cognitive ability of AI . . . could one day spell doom for the human race." The sentiment is best summed up by what the Future of Life Institute published in March 2023: "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." In theory, OpenAI CEO Sam Altman agrees: "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models."

Google's 'Godfather of AI', Geoffrey Hinton, quit in May 2023 over fears that the industry and governments still aren't moving fast enough to contain the risks posed by unregulated AI systems. Hinton is particularly fearful of "bad actors", such as Russia, China, North Korea, Iran, and global terrorist organizations, using AI to attack Western democracies. Around the same time, Elon Musk warned Google's co-founders that they are "not taking AI safety seriously enough".
Despite Musk's involvement in co-founding OpenAI and benefiting from AI advancements in his other businesses, such as Tesla and SpaceX, he has been increasingly vocal about the dangers of AI recently.

Why governments are moving quickly to regulate AI

In short, governments are moving quickly because AI technologies are moving incredibly fast. Several considerations are drawing particular attention:

- Economic disruption: The potential eradication of millions of jobs through the automation of numerous professions could cause economic disruption.
- Security risks: The increased reliance on AI systems introduces new security risks. Bad actors could use AI for automated cyberattacks, or even give AI systems control over aerial weapons such as drones, chemical weapons, and nuclear warheads.
- Misinformation: In the wrong hands, generative AI could be used to spread misinformation and manipulate populations, economies, and political debates if no regulations govern how the technology is used.
- Ethical concerns: There are worries about the ethical implications of AI, particularly regarding its use in military applications and surveillance. The lack of transparency in AI processes is a concern, as it can lead to biased outcomes. As the Center for AI Safety states: "AI systems are trained using measurable objectives, which may only be indirect proxies for what we value."
- Lack of control: Some people fear that AI systems may become too autonomous and surpass human intelligence, resulting in a loss of control over their actions and decision-making.

There are numerous other concerns about advancements in AI technologies and applications. How businesses, academia, and governments influence the way AI evolves and iterates now will directly impact the way AI shapes humanity, the economy, and the environment for years to come.

What regulation means for the AI sector

Every industry that impacts society in significant ways has regulatory oversight: laws that govern the use of technology, and safeguards to prevent risks to life, health, the economy, or the environment. Nuclear, healthcare, finance, and communications are some of the most heavily regulated sectors. The challenge is finding balance. Governments don't want to prevent innovation, especially in the technology sector; innovation creates jobs, economic growth, and new tax revenues. Other challenges are more practical, such as working out how much money and how many people it will take to regulate AI businesses. AI is advancing fast, with new models and developments emerging every week. How can governments handle a fast-moving volume of applications and models to test, and what tests can be applied to determine whether an AI is safe or not? These are some of the many questions AI experts, industry leaders, and lawmakers are wrestling with as they look for the best ways to regulate the sector without negatively impacting it.

OpenAI CEO Sam Altman has been amongst the most vocal in calling for laws to regulate the AI industry. In a US congressional hearing, Altman said, "We think it can be a printing press moment. We have to work together to make it so." He called for the creation of regulatory bodies for AI, similar to the Food and Drug Administration (FDA). As for what this means in practice, the AI industry, businesses that use AI tools, and consumers will have to see what laws are passed before government agencies are established to put them into practice.
In reality, this is a process that normally takes several years.

The FDA's process for AI developers to understand whether their model needs FDA approval

Assuming governments take AI seriously, we might see legislation move more quickly than previous attempts to regulate technological advances, such as crypto. AI is already impacting society, businesses, and the economy much faster, and for that reason political leaders are accelerating the legislative process. We are already seeing swift movement in the drafting of AI laws. Now let's look at what the US, European, and British governments are doing about this...

What AI Regulations have been proposed?

Let's dive into the AI regulation that has been proposed in the EU, UK, US, and around the world.

European Union's Artificial Intelligence Act (AI Act)

The EU is proposing a "risk-based approach" to ensure that any AI system considered "a clear threat to the safety, livelihoods and rights of people will be banned." The AI Act is part of a wider, coordinated approach to AI development and use across Europe, including a Coordinated Plan on AI. A key part of this proposal is the conformity assessment, which will be required before an AI system enters the market. This way, the EU will assess the risk factor of every commercial AI model active in Europe. Depending on the outcome of the assessment, an AI system could be banned, or placed into an EU-wide database and granted a CE mark to show security compliance. The EU notes that "the vast majority of AI systems currently used in the EU fall into this [minimal and no risk] category." In the US, the FDA is already processing and regulating hundreds of AI models for healthcare, so we could see a similar process in the EU with a broader range of applications.

UK Government's Pro-Innovation Policy Paper

The UK government isn't being as proactive. The British Department for Science, Innovation and Technology has published "AI regulation: a pro-innovation approach - policy proposals". As the name implies, the British government aims to demonstrate that it's pro-business and pro-innovation. Speaking at London Tech Week, British Prime Minister Rishi Sunak said, "I want to make the UK not just the intellectual home, but the geographical home of global AI safety regulation." However, so far the UK approach to AI regulation isn't as robust as the EU's or the US's. No new laws or regulatory bodies are being created. Instead, the UK is passing the responsibility to the Information Commissioner's Office (ICO) and the Financial Conduct Authority (FCA).

US Government: The Biden Administration's proposed AI Bill of Rights

In the US, the National Telecommunications and Information Administration (NTIA), a Commerce Department agency, and the White House Office of Science and Technology Policy put together recommendations that President Biden's administration is acting on. The result is a blueprint for a proposed AI Bill of Rights, now on the table.
The aim of this legislation, at the federal and state level, is to protect "the public's rights, opportunities, or access to critical needs." At the heart of this proposed AI Bill of Rights are five public and economic safeguards:

- Protection from unsafe or ineffective systems: AI "systems should undergo pre-deployment testing, risk identification, and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards."
- Protection from algorithmic discrimination: AI systems should be "used and designed in an equitable way."
- Protection from abusive data practices: Building on current data protection legislation to ensure greater security and agency over how data is used and processed by AI systems.
- Protection from AI systems taking actions or decisions without people understanding how or why: in other words, transparency about AI decision-making processes.
- Protection from AI having the final say, with opt-outs and human remedies to overrule AI systems.

Alongside the White House's proposal, the US Senate is holding hearings and deliberating on how to regulate AI. No regulations have been finalized. However, recent AI developments, especially the prominence of ChatGPT, have firmly put AI regulation on the legislative agenda at every level of government.

Around the world, several other countries are also taking AI regulation seriously. China and Japan have both taken a human-centric, safety-first approach. Japan has a vast IT industry, and AI development there is accelerating, so it has adopted the OECD AI Principles, which align with its plans for "Society 5.0." Brazil, Canada, and several other countries are also drafting AI legislation. India hasn't made any formal moves so far; however, given the size of the Indian tech sector, it's likely that regulation will soon have to be considered. There's no global approach to AI regulation yet, but that might be something we see develop in time, especially since AI can and will impact everyone in some way.

Key Takeaways: What to expect in the short term?

Government legislation takes time: passing laws and regulations, establishing agencies, and allocating budgets all move slowly. In most cases, a proposal can take several years to become a law, and then an agency is needed to oversee and coordinate the legislative mandate. However, it is worth getting familiar with what the EU and US are proposing so your organization is ready. It's even more important for those already operating in regulated sectors, such as healthcare, weapons, transport, and financial services. Although we aren't likely to see any sudden changes, lawmakers are moving quicker than normal, so it's worth being prepared.

Ready to improve the performance, security, and audit trails of your active learning for computer vision and AI projects? Sign up for an Encord Free Trial: The Active Learning Platform for Computer Vision, used by the world's leading computer vision teams. AI-assisted labeling, model training & diagnostics, find & fix dataset errors and biases, all in one collaborative active learning platform, to get to production AI faster. Try Encord for Free Today.

Want to stay updated? Follow us on Twitter and LinkedIn for more content on computer vision, training data, and active learning. Join our Discord channel to chat and connect.