European Council Weighs in on the Artificial Intelligence Act

The European Council (EU’s governing body) has adopted a position on the Artificial Intelligence Act, which aims to ensure that AI systems used or marketed in the European Union are safe and respect existing laws on fundamental rights. In addition to defining artificial intelligence, the European Council’s general approach specifies prohibited AI practices, calls for risk level allocation, and stipulates ways to deal with those risks. The Council — composed of EU heads of state — becomes the first co-legislator to complete this initial step, with the European Parliament expected to offer its version of the AIA in the first half of 2023.

Calling AI “of paramount importance for our future,” Czech deputy prime minister for digitalization and minister of regional development Ivan Bartoš said in an announcement that the Council “managed to achieve a delicate balance which will boost innovation and uptake of artificial intelligence technology across Europe, with all the benefits it presents, on the one hand, and full respect of the fundamental rights of our citizens, on the other.”

The AIA draft regulation presented by the Commission in April 2021 is a key element of the EU’s policy to foster the development and implementation of artificial intelligence.

The Council’s risk-based approach offers what the group calls “a uniform, horizontal legal framework for AI that aims to ensure legal certainty” while promoting investment, innovation and safety. It complements other initiatives, like the Coordinated Plan on Artificial Intelligence that focuses on accelerating AI investment in Europe.

“How AI is defined was a critical part of the discussions as that defines the scope of the regulation,” writes Euractiv, noting that “member states were concerned that traditional software would be included, so they put forth a narrower definition of systems developed through machine learning, logic- and knowledge-based approaches, elements that the Commission can specify or update later via delegated acts.”

Banned practices include the subliminal use of AI and “social scoring” by public and private actors. Prevalent in China, social scoring involves collecting data about people and assigning them a score that is used to assess and categorize them for things like employment and public services.

AI defined as high-risk — potentially causing harm to people or property — will be subjected to stricter regulation. “Notably, the Czech presidency introduced an extra layer, meaning that, to be classified as high-risk, the system should have a decisive weight in the decision-making process and not be ‘purely accessory’,” Euractiv says.

The Council removed deepfake detection by law enforcement, crime analytics and authenticity verification of travel documents from the high-risk list, while adding critical digital infrastructure and health insurance.

Providers of high-risk AI must register on an EU database, and the high-risk systems “will have to comply with requirements such as the dataset’s quality and detailed technical documentation,” Euractiv reports, explaining that “the general approach also attempts to clarify the allocation of responsibility along the complex AI value chains and how the AI Act will interact with existing sectorial legislation.”

Related:
How to Fix Canada’s Proposed Artificial Intelligence Act, Tech Policy Press, 12/6/22
The EU’s AI Act: Is It Unfair to Insurers?, Insurance Business America, 12/6/22
The EU AI Act Must Protect People on the Move, Refugees International, 12/6/22
