Europe agreed on world-leading AI rules. How do they work and will they affect people everywhere?

LONDON (AP) – European Union officials worked into the night last week to reach an agreement on the world's first comprehensive rules governing the use of artificial intelligence in the 27-nation bloc.

The Artificial Intelligence Act is the latest in a series of European regulations governing technology that are likely to have a global impact.

Here’s a closer look at the AI rules:

WHAT IS THE AI ACT AND HOW DOES IT WORK?

The AI Act takes a “risk-based approach” to products or services that use artificial intelligence, focusing on regulating the uses of AI rather than the technology itself. The legislation is designed to protect democracy, the rule of law and fundamental rights such as freedom of speech, while still encouraging investment and innovation.

The riskier the AI application, the stricter the rules. Systems posing limited risk, such as content recommendation engines or spam filters, would only need to follow light rules, such as disclosing that they are powered by AI.

High-risk systems, such as medical devices, have more stringent requirements, such as using high-quality data and providing clear information to users.

Some uses of AI are banned outright because they are deemed to pose an unacceptable risk, such as social scoring systems that govern people’s behavior, some types of predictive policing, and emotion recognition systems in schools and workplaces.

Police cannot scan people’s faces in public using AI-powered remote “biometric identification” systems, except in cases of serious crimes such as kidnapping or terrorism.

The AI Act will not take effect until two years after receiving final approval from European lawmakers, expected in a rubber-stamp vote in early 2024. Violations could draw fines of up to 35 million euros ($38 million) or 7% of a company’s global revenue.

HOW DOES THE AI ACT AFFECT THE REST OF THE WORLD?

The AI Act will affect nearly 450 million EU residents, but experts say its impact could be felt much further afield, thanks to Brussels’ leading role in drafting rules that act as a de facto global standard.

The EU has played a similar role with previous technology regulations, most notably by mandating a common charging plug that forced Apple to abandon its in-house Lightning cable.

While many other countries are still figuring out whether and how they can rein in AI, the EU’s comprehensive regulations are poised to serve as a blueprint.

“The AI Act is the world’s first comprehensive, horizontal and binding AI regulation that will not only be a game-changer in Europe but is likely to significantly increase the global momentum to regulate AI across jurisdictions,” said Anu Bradford, a Columbia Law School professor who is an expert in EU law and digital regulation.

“It puts the EU in a unique position to lead the way and show the world that AI can be regulated and that its development can be subject to democratic oversight,” she said.

Even what the law does not do could have global consequences, rights groups said.

By not pursuing a full ban on live facial recognition, Brussels has effectively “greenlighted dystopian digital surveillance in the 27 EU member states, setting a devastating precedent globally,” Amnesty International said.

The partial ban, the group said, is a sorely missed opportunity to stop and prevent colossal damage to human rights, civic space and the rule of law, which are already under threat across the EU.

Amnesty also decried lawmakers’ failure to ban the export of AI technologies that could harm human rights, including for use in social scoring, which China employs to reward obedience to the state through surveillance.

WHAT ARE OTHER COUNTRIES DOING ABOUT AI REGULATION?

The world’s two major AI powers, the US and China, have started the ball rolling on their own rules.

US President Joe Biden signed a sweeping executive order on AI in October that is expected to be bolstered by legislation and global agreements.

It requires leading AI developers to share safety test results and other information with the government. Agencies will create standards to ensure AI tools are safe before public release and issue guidance for labeling AI-generated content.

Biden’s order builds on earlier voluntary commitments made by tech companies including Amazon, Google, Meta and Microsoft to ensure their products are safe before release.

Meanwhile, China has issued “interim measures” for managing generative AI, which apply to text, pictures, audio, video and other content generated for people inside China.

President Xi Jinping has also proposed a Global AI Governance Initiative, calling for an open and fair environment for AI development.

HOW DOES THE AI ACT HANDLE CHATBOTS LIKE CHATGPT?

The spectacular rise of OpenAI’s ChatGPT showed that the technology was making dramatic advances and sent European policymakers scrambling to update their proposal.

The AI Act includes provisions for chatbots and other so-called general purpose AI systems that can perform many different tasks, from composing poetry to creating video and writing computer code.

Officials took a two-tiered approach: most general-purpose systems face basic transparency requirements, such as disclosing details about their data governance and, in a nod to the EU’s environmental sustainability efforts, how much energy they used to train their models on the vast troves of written works and images scraped from the internet.

They must also comply with EU copyright law and summarize the material they used for training.

The most advanced AI systems, built with the most computing power, face tougher rules because they pose “systemic risks” that officials want to keep from spreading to services that other software developers build on top of them.

___

AP writer Frank Bajak of Boston contributed.
