The European Union’s Artificial Intelligence Act, known as the EU AI Act, has been described by the European Commission as “the world’s first comprehensive AI law.” After years in the making, it is progressively becoming a part of reality for the 450 million people living in the 27 countries that comprise the EU.
The EU AI Act, however, is more than a European affair. It applies to companies both local and foreign, and it can affect both providers and deployers of AI systems; the European Commission cites examples of how it would apply to a developer of a CV screening tool and to a bank that buys that tool. All of these parties now operate under a legal framework that sets the stage for their use of AI.
Why does the EU AI Act exist?
As usual with EU legislation, the EU AI Act exists to ensure a uniform legal framework on a given topic across EU countries, the topic this time being AI. Now that the regulation is in place, it should "ensure the free movement, cross-border, of AI-based goods and services" without diverging local restrictions.
With timely regulation, the EU seeks to create a level playing field across the region and foster trust, which could also create opportunities for emerging companies. However, the common framework it has adopted is not exactly permissive: despite the relatively early stage of widespread AI adoption in most sectors, the EU AI Act sets a high bar for what AI should and shouldn't do for society more broadly.
What is the purpose of the EU AI Act?
According to European lawmakers, the framework’s main goal is to “promote the uptake of human centric and trustworthy AI while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation.”
Yes, that's quite a mouthful, but it's worth parsing carefully. First, a lot will depend on how you define "human centric" and "trustworthy" AI. Second, it gives a good sense of the precarious balance to be struck between diverging goals: innovation vs. harm prevention, and uptake of AI vs. environmental protection. As usual with EU legislation, again, the devil will be in the details.
How does the EU AI Act balance its different goals?
To balance harm prevention against the potential benefits of AI, the EU AI Act adopts a risk-based approach: it bans a handful of "unacceptable risk" use cases, flags a set of "high-risk" uses that call for tight regulation, and applies lighter obligations to "limited risk" scenarios.