EU’s Sweeping AI Act Takes Tough Stance on High-Risk Use
April 6, 2022
The European Union’s pending Artificial Intelligence Act — the world’s first comprehensive effort to regulate AI — is coming under scrutiny as it moves toward becoming law. The Act proposes unplugging AI systems deemed a risk to society. Critics say it draws too heavily on general consumer product safety rules, overlooking aspects unique to AI, and is too closely tied to EU market law. This could limit its usefulness as a template for other regions evaluating AI legislation, undermining the EU’s desired first-mover status in the digital sphere.
Also of concern is the carve-out for military use of AI, “most of which you’d expect to be risk-ridden by default,” writes TechCrunch, concluding that while “there’s significant enforcement firepower lurking” in the draft law as written, there is still room for fine-tuning (though major change is said to “seem unlikely at this relatively late stage of the EU’s co-legislative process”).
The European Commission introduced the AI Act just over a year ago, and the Council and Parliament have since been debating their positions, with final passage expected sometime in 2023. The goal is to foster “trustworthy” and “human-centric” AI.
The existing framework prohibits outright only a small number of AI use cases “considered too dangerous to people’s safety or EU citizens’ fundamental rights.” These include “a China-style social credit scoring system,” according to TechCrunch, which notes that most of the regulation targets “a subset of ‘high risk’ use cases subject to a regime of both ex ante (before) and ex post (after) market surveillance.”
As per TechCrunch, high-risk implementations in the draft include:
- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to essential private services and public services
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
“There is also another category of AIs, such as deepfakes and chatbots, which are judged to fall in the middle and are given some specific transparency requirements to limit their potential to be misused and cause harms,” TechCrunch reports.
The original proposal bans “almost nothing” outright, and “most use cases for AI won’t face serious regulation under the Act as they would be judged to pose ‘low risk,’” leaving them largely to self-regulation under a “voluntary code of standards and a certification scheme to recognize compliant AI systems.”