
Proposed AI Regulation: EU AI Act, UK's Pro-Innovation, US AI Bill of Rights

June 23, 2023 | 4 mins

Should AI be regulated? How might governments attempt to regulate AI? What would regulation mean for AI startups, scaleups, and tech giants supporting and innovating in the AI sector?

Humanity has not yet developed artificial intelligence that can unequivocally pass the Turing test, but fears about where the technology is heading are not unfounded.

We don’t yet have AIs on the level of the ship “Minds” in Iain M. Banks’s Culture series, the “Mechanism” in Alastair Reynolds’s Blue Remembered Earth series, or the “SI” in Peter F. Hamilton’s Commonwealth books, set in a space-faring, wormhole-connected human society.

Discussions and debates around AI regulation have gained significant traction recently. Governments, regulatory bodies, and policymakers around the world are grappling with the challenges and potential risks associated with the inevitable widespread adoption of artificial intelligence. 

There has recently been considerable talk and speculation among the media, regulatory bodies, and political leaders at every level about proposed AI regulations across the EU, UK, US, and worldwide.

As a result, governments are moving fast to propose regulations and new laws to govern the development and use of artificial intelligence. It is worth noting the stark contrast between this sense of urgency and the slow progress of cryptocurrency regulation.

It’s evident that lawmakers see AI as a potential threat to human society, the economy, and the environment.

Let’s dive into why governments are moving quickly, what regulation means for the AI sector, some of the challenges, and the proposed regulations thus far. 

Public Discourse around AI Regulation

There has been significant pressure from prominent figures in the industry to get AI regulation right. 

On May 30, 2023, hundreds of leading AI experts, including OpenAI CEO Sam Altman and Microsoft founder Bill Gates, signed the following Statement on AI Risk:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

This declaration is not without precedent: in 2015, Elon Musk, Professor Stephen Hawking, and numerous others met for an AI ethics conference, where it was agreed that an “uncontrolled hyper-leap in the cognitive ability of AI . . . could one day spell doom for the human race.”

The sentiment is best summed up by what the Future of Life Institute published in March 2023: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” 

In theory, OpenAI CEO Sam Altman agrees: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”


Google’s ‘Godfather of AI’, Geoffrey Hinton, quit in May 2023 over fears the industry and governments still aren’t moving fast enough to contain the risks posed by unregulated AI systems. Hinton is particularly fearful of “bad actors”, such as Russia, China, North Korea, Iran, and global terrorist organizations, using AI to attack Western democracies.

Around the same time, Elon Musk warned Google’s co-founders that they are “not taking AI safety seriously enough”. Despite co-founding OpenAI and benefiting from AI advancements in his other businesses, such as Tesla and SpaceX, Musk has recently been increasingly vocal about the dangers of AI.

Why governments are moving quickly to regulate AI

In short, governments are moving quickly because AI technologies are moving incredibly fast. 

There are several considerations that are drawing particular attention: 

  • Economic Disruption: The automation of numerous professions could eradicate millions of jobs, causing economic disruption.
  • Security Risks: Increased reliance on AI systems also introduces new security risks. Bad actors could use AI for automated cyberattacks, or hand AI systems control over weapons such as drones, chemical weapons, and nuclear warheads.
  • Misinformation: In the wrong hands, generative AI could be used to spread misinformation and manipulate populations, economies, and political debates if its use isn’t regulated.
  • Ethical Concerns: There are worries about the ethical implications of AI, particularly its use in military applications and surveillance. The lack of transparency in AI processes is also a concern, as it can lead to biased outcomes. As the Center for AI Safety states: “AI systems are trained using measurable objectives, which may only be indirect proxies for what we value.”
  • Lack of Control: Some people fear that AI systems may become too autonomous and surpass human intelligence, resulting in a loss of control over their actions and decision-making.

There are numerous other concerns with advancements in AI technologies and applications. 

How businesses, academia, and governments influence the way AI evolves and iterates now will directly impact the way AI shapes humanity, the economy, and the environment for years to come.


What regulation means for the AI sector

Every industry that impacts society in significant ways has regulatory oversight, laws that govern the use of technology, and safeguards to prevent risks to life, health, the economy, or the environment. Nuclear, healthcare, finance, and communications are some of the most heavily regulated sectors. 

The challenge is finding balance. Governments don’t want to prevent innovation, especially in the technology sector. Innovation creates jobs, economic growth, and new tax revenues for governments. 

Other challenges are more practical, such as working out how much money and how many people it will take to regulate AI businesses. AI is advancing fast, with new models and developments emerging every week. How can governments handle a fast-moving volume of applications and models to test, and what tests can determine whether an AI system is safe?

These are some of the many questions AI experts, industry leaders, and lawmakers are wrestling with to find the best ways to regulate the sector without negatively impacting it.

OpenAI CEO Sam Altman has been among the most vocal in calling for laws to regulate the AI industry. In a US congressional hearing, Altman said, “We think it can be a printing press moment. We have to work together to make it so.” He called for the creation of regulatory bodies for AI, similar to the Food and Drug Administration (FDA).

As for what this means in practice, the AI industry, businesses that use AI tools, and consumers will have to wait and see what laws are passed, and how government agencies are established to put them into practice. In reality, this process normally takes several years.


The FDA's process for AI developers to understand whether their model needs FDA approval

Assuming governments take this seriously, we might see legislation move more quickly than attempts to regulate other technological advances, such as crypto. AI is already impacting society, businesses, and the economy faster than crypto did, which is why political leaders are accelerating the legislative process.

We are already seeing swift movement in the drafting of AI laws. Now let’s look at what the US, European, and British governments are doing about it.

What AI Regulations have been proposed?  

Let’s dive into the AI regulation that has been proposed in the EU, UK, US, and around the world. 

European Union’s Artificial Intelligence Act (AI Act) 

The EU is proposing a “risk-based approach” to ensure that any AI system considered “a clear threat to the safety, livelihoods and rights of people” will be banned.

The AI Act is part of a wider, coordinated approach to AI development and use across Europe, including a Coordinated Plan on AI.

A key part of this proposal is the conformity assessment, which will be required before an AI system enters the market.


This way, the EU will assess the risk factor of every commercial AI model active in Europe. Depending on the outcome of the assessment, an AI system could be banned, or registered in an EU-wide database and granted a CE mark to show compliance. The EU notes that “the vast majority of AI systems currently used in the EU fall into this [minimal and no risk] category.”
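To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch of how obligations might branch on the Act’s published risk tiers (unacceptable, high, limited, minimal). The tier names reflect the EU’s risk-based framework; the example use cases, the EXAMPLE_TIERS mapping, and the required_steps function are hypothetical simplifications for illustration, not the Act’s legal criteria.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # conformity assessment, EU database, CE mark
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new obligations

# Hypothetical mapping of example use cases to tiers. The real Act defines
# tiers through detailed legal criteria, not a simple lookup table.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def required_steps(use_case: str) -> list[str]:
    """Return simplified compliance steps for an example use case."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited: may not enter the EU market"]
    if tier is RiskTier.HIGH:
        return [
            "pre-market conformity assessment",
            "registration in the EU-wide database",
            "CE marking",
        ]
    if tier is RiskTier.LIMITED:
        return ["transparency disclosure to users"]
    return ["no new obligations"]

if __name__ == "__main__":
    for case in EXAMPLE_TIERS:
        print(f"{case}: {required_steps(case)}")
```

The branching captures the basic idea of the framework: obligations scale with assessed risk, and only systems in the highest non-banned tier carry the full conformity-assessment and CE-marking burden.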

In the US, the FDA is already processing and regulating hundreds of AI models for healthcare, so we could see a similar process in the EU with a broader range of applications.
 

UK Government’s Pro-Innovation Policy Paper  

The UK government isn’t being as proactive. The British Department for Science, Innovation and Technology has published AI regulation: a pro-innovation approach – policy proposals.

As the name implies, the British government aims to demonstrate it’s pro-business and pro-innovation.

Speaking at London Tech Week, British Prime Minister Rishi Sunak said “I want to make the UK not just the intellectual home, but the geographical home of global AI safety regulation.”

However, so far the UK’s approach to AI regulation isn’t as robust as the EU’s or the US’s. No new laws or regulatory bodies are being created. Instead, the UK is passing responsibility to existing regulators, such as the Information Commissioner’s Office (ICO) and the Financial Conduct Authority (FCA).

US Government: The Biden Administration’s Proposed AI Bill of Rights


In the US, the National Telecommunications and Information Administration (NTIA), a Commerce Department agency, and the White House Office of Science and Technology Policy put together recommendations that President Biden’s administration is acting on.

The result is a blueprint for a proposed AI Bill of Rights. The aim of this legislation, at both the federal and state level, is to protect “the public’s rights, opportunities, or access to critical needs.”

At the heart of this proposed AI Bill of Rights are five public and economic safeguards: 

  • Protection from unsafe or ineffective systems: AI systems “should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards.”
  • Protection from algorithmic discrimination: AI systems should be “used and designed in an equitable way.” 
  • Protection from abusive data practices: Building on current data protection legislation to ensure greater security and agency over how data is used and processed by AI systems. 
  • Protection from AI systems taking action or decisions without people understanding how or why: In other words, transparency on AI decision-making processes. 
  • Protection from AI having the final say, with opt-out and human remedies to overrule AI systems. 


No regulations have been finalized. However, recent AI developments, especially the prominence of ChatGPT, have firmly put AI regulation on the legislative agenda at every level of government.

Around the world, several other countries are also taking AI regulation seriously. China and Japan have both taken a human-centric, safety-first approach. Japan has a vast IT industry and accelerating AI development, so it has adopted the OECD AI Principles, which align with its plans for “Society 5.0.”

Brazil, Canada, and several other countries are also drafting AI legislation. India hasn’t made any formal moves so far, but given the size of the Indian tech sector, it’s likely that regulation will soon have to be considered. There’s no global approach to AI regulation yet, though that may develop in time, especially since AI can and will impact everyone in some way.


Key Takeaways: What to expect in the short term?

Passing laws and regulations, establishing agencies, and allocating budgets all take time. In most cases, it takes several years for a proposal to become law and for an agency to be established to oversee and coordinate the legislative mandate.

However, it is worth getting familiar with what the EU and US are proposing so your organization is ready.

It’s even more important for those already operating in regulated sectors, such as healthcare, weapons, transport, and financial services. Although we aren’t likely to see any sudden changes, lawmakers are moving more quickly than normal, so it’s worth being prepared.

Ready to improve the performance, security, and audit trails of your active learning for computer vision and AI projects? 

Sign up for an Encord Free Trial: The Active Learning Platform for Computer Vision, used by the world’s leading computer vision teams.

AI-assisted labeling, model training and diagnostics, and finding and fixing dataset errors and biases, all in one collaborative active learning platform, to get to production AI faster. Try Encord for Free Today.

Written by Ulrik Stig Hansen