Monday, December 23, 2024

Europe’s New “Historic” AI Law Divides Tech Companies and Civil Rights Groups


European Union policymakers have made history, becoming the first legislative body worldwide to pass legislation regulating the use of artificial intelligence technology and to put up guardrails governing the commercial and public applications of AI.

“Europe has positioned itself as a pioneer, understanding the importance of its role as a global standard setter,” said Thierry Breton, the European commissioner who helped negotiate the deal on the AI Act, which was agreed on by the European Commission, the European Council, and the European Parliament on Friday, Dec. 8, after a 36-hour negotiating marathon.

Calling the vote “a historic moment,” EU Commission President Ursula von der Leyen said the AI Act would provide “legal certainty and opens the way for innovation in trustworthy AI” and would make “a substantial contribution to the development of global guardrails for trustworthy AI.”

The AI Act aims to serve as a benchmark for countries worldwide looking to balance the promise and risks of artificial intelligence technology. The legislation still needs to win final approval from the European Parliament and the Council before becoming law. There will be a push to hold the vote before EU parliamentary elections in early June 2024. If the law is passed on time, parts of the legislation may go into effect starting next year, but the majority will take effect in 2025 and 2026.

By then, critics note, many of the technologies the AI Act hopes to regulate may have changed substantially. A first draft of the AI Act was released in 2021, but the launch of ChatGPT and other so-called general-purpose AI models forced a major rewriting of the legislation to take the new tech breakthroughs into account.

During the recent writers and actors strikes, negotiations around AI focused on issues such as the protection of actors’ likenesses and assurances for writers and other creatives that artificial intelligence systems will not be used to replace them. The EU legislation is much broader in scope and covers AI uses by companies and governments, including crucial sectors such as law enforcement and energy.

Key provisions include restrictions on facial recognition software by police and governments outside of certain safety and national security exemptions, such as its use to prevent terrorist attacks or to locate the victims or suspects of a pre-defined list of serious crimes. The AI Act also introduces new transparency requirements for makers of the largest general-purpose AI systems, like those powering ChatGPT. Here, the EU has used the standard applied by U.S. President Joe Biden in his Oct. 30 executive order, requiring only the most powerful large language models — defined as those built on foundation models whose training requires upwards of 10^25 FLOPs (floating-point operations, a measure of the computing power used) — to abide by the new transparency rules. Companies that violate the regulations could face fines of up to 7 percent of their total global sales.

Just how impactful the new legislation will be depends to a large extent on enforcement. The European Union was at the forefront of digital privacy regulation, drafting the landmark General Data Protection Regulation (GDPR) in 2016, but that law has been criticized for being unevenly enforced across the 27 nations of the EU.

Companies impacted by the AI Act are expected to challenge some of its provisions in the courts, which could further delay implementation across the continent.

“There’s a lot for businesses to consider,” Irish-based AI legal expert Barry Scannell wrote in a post following Friday’s vote, noting that “enhanced transparency requirements” may challenge “the protection of intellectual property,” requiring “major strategic shifts” from firms using artificial intelligence systems.

In a statement on Saturday, Dec. 9, the Computer and Communications Industry Association in Europe (CCIA), a corporate lobby group representing major internet service, software and telecom companies, including Amazon, Google and Apple, called the EU’s proposal “half-baked,” warning it could over-regulate many aspects of AI and slow down tech innovation on the continent.

“This could lead to an exodus of European AI companies and talent seeking growth elsewhere,” asserted the CCIA.

Civil rights groups, in contrast, criticized the new legislation for not going far enough, particularly in regulating the use of AI-assisted facial recognition technology by governments and police.

“The three European institutions — Commission, Council and the Parliament — in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning artificial intelligence (AI) regulation,” said Mher Hakobyan, an advocacy advisor on AI for human rights group Amnesty International. “Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space and rule of law that are already under threat throughout the EU.”
