EU Negotiators Reach Agreement on World’s First AI Regulations

European Union AI Act

After lengthy negotiations, European Union officials have reached an agreement on the Artificial Intelligence Act, a comprehensive set of regulations governing artificial intelligence (AI) that could provide a blueprint for other nations and regions aiming to limit the dangers of AI.

At this point, negotiators have only agreed on the text that EU lawmakers will ultimately vote on; no regulations have actually been enacted. The European Parliament, which comprises more than 700 members, will vote on the AI Act by the end of the month or early next year, and any ratified legislation will not go into effect until 2025 at the earliest.

As the United States and United Kingdom, among others, rapidly work to legislate and guide the development and deployment of AI technologies, the EU, as is often the case, is first across the line with proposed legislation.

The AI Act is far-reaching, establishing safeguards for the general use of AI, limiting how law enforcement agencies can use biometric identification, banning AI from manipulating social media users, giving consumers the right to lodge official complaints about AI, and outlining massive punishments for violators.

“This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high risk AI, while boosting innovation and making Europe a leader in the field. The rules establish obligations for AI based on its potential risks and level of impact,” writes the European Parliament in an official press release.

“It was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise — ensuring that rights and freedoms are at the center of the development of this ground-breaking technology,” says co-rapporteur Brando Benifei (S&D, Italy). “Correct implementation will be key — the Parliament will continue to keep a close eye, to ensure support for new business ideas with sandboxes, and effective rules for the most powerful models.”

The agreed-upon iteration of the provisional AI Act includes numerous “banned applications.” One such banned use of AI is categorization based on “sensitive characteristics” like political affiliation, religious beliefs, philosophical ideologies, sexual orientation, or race. The AI Act also prohibits untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Additional banned applications include using AI to measure and recognize the emotional state of employees, as well as “social scoring based on social behavior and personal characteristics.”

Two more banned applications are more general and perhaps the most important. The AI Act says that companies cannot deploy “AI systems that manipulate human behavior to circumvent their free will” or “exploit the vulnerabilities of people (due to their age, disability, social or economic situation).”

The AI Act has been years in the making, with the first draft having arrived in April 2021, long before many of today’s most widespread AI technologies had reached the public. The speed with which AI systems are developed, deployed, and spread makes highly targeted legislation remarkably challenging. The EU has instead opted for a more general approach, aiming to limit the risk of AI at its source.

To that end, as promised, the final version of the AI Act requires companies to disclose details about how their AI models are trained and, perhaps most importantly, to comply with EU copyright law. How this will shape the training of image generation models, many of which have shady origins, will be extremely interesting.

More generally, any AI system classified as high-risk due to potential harm to health, safety, rights, the environment, democracy, or the rule of law will be subject to a fundamental rights impact assessment. Under the AI Act, citizens can also file official complaints about AI systems and receive explanations about decisions made by high-risk AI systems that affect their rights.

“The EU is the first in the world to set in place robust regulation on AI, guiding its development and evolution in a human-centric direction. The AI Act sets rules for large, powerful AI models, ensuring they do not present systemic risks to the Union and offers strong safeguards for our citizens and our democracies against any abuses of technology by public authorities. It protects our SMEs, strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy. The European Union has made impressive contributions to the world; the AI Act is another one that will significantly impact our digital future,” says co-rapporteur Dragos Tudorache (Renew, Romania).

The AI Act also aims to nurture the development of innovative AI systems. The provisional law promotes “regulatory sandboxes” and other real-world testing environments where companies, especially smaller ones, can develop and train innovative AI systems before releasing them to the general public.

As for the AI Act’s teeth, EU negotiators have also agreed on potential punishments, with fines of up to 35 million euros ($37.7 million) or 7% of a company’s “global turnover,” depending on the severity of the infringement and the offending company’s size.


Image credits: Header photo licensed via Depositphotos.
