EU Makes Provisional Agreement on Artificial Intelligence Act

The EU has reached a provisional agreement on the Artificial Intelligence Act, making the bloc the first Western power to establish comprehensive AI regulations. The sweeping new law focuses predominantly on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are settled.

The AI Act “sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security,” writes The New York Times.

The rules, which limit biometric use — in many cases banning it outright — include consumer complaint mechanisms and a sliding scale of fines of up to 7 percent of global revenue.

The Parliament’s news release describes the AI Act as ambitious, striving to ensure fairness, transparency and environmental sustainability while at the same time boosting innovation and creating opportunities for smaller players. It provides the basic scope of the rules, setting out as banned practices:

  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • Emotion recognition in the workplace and educational institutions
  • Social scoring based on social behavior or personal characteristics

“Makers of the largest general-purpose AI systems, like those powering the ChatGPT chatbot, would face new transparency requirements,” NYT says, referring to OpenAI. “Chatbots and software that creates manipulated images such as ‘deepfakes’ would have to make clear that what people were seeing was generated by AI,” NYT adds.

Bloomberg calls the nascent AI Act “a key part of the world’s first comprehensive artificial intelligence regulation.” Developers of so-called “general purpose AI” — which Bloomberg describes as “powerful models that have a wide range of possible uses” — will be required to meet basic transparency standards, “unless they’re provided free and open-source,” the news outlet says.

Those transparency requirements, according to Bloomberg, include:

  • Keeping up-to-date information on how they trained their models
  • Reporting a detailed summary of the data used to train their models
  • Having a policy to respect copyright law

Models deemed to pose a “systemic risk” would be subject to additional rules.

Risk is based on factors including compute power and training data, with the threshold set at models trained using more than 10 trillion trillion (or septillion) floating-point operations, reports Bloomberg, which cites experts claiming that “currently, the only model that would automatically meet this threshold is OpenAI’s GPT-4,” although others are rapidly gaining.
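To make that figure concrete: 10 septillion is 10²⁵, and the threshold refers to total compute used during training. The sketch below checks hypothetical models against it using the common 6 × parameters × tokens rule of thumb for transformer training compute — the heuristic and the model sizes are illustrative assumptions, not figures from the article or the Act:

```python
# Sketch: comparing estimated training compute against the reported
# 10^25-FLOP "systemic risk" threshold. The 6 * N * D estimate and the
# example model sizes below are assumptions for illustration only.

AI_ACT_THRESHOLD_FLOPS = 1e25  # training-compute threshold reported by Bloomberg


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs via the 6 * N * D rule of thumb."""
    return 6 * n_params * n_tokens


def meets_threshold(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute reaches the 10^25-FLOP threshold."""
    return training_flops(n_params, n_tokens) >= AI_ACT_THRESHOLD_FLOPS


# Hypothetical examples (not real model figures):
print(meets_threshold(70e9, 15e12))   # ~6.3e24 FLOPs, below the threshold
print(meets_threshold(200e9, 10e12))  # ~1.2e25 FLOPs, above the threshold
```

Note that this estimate ignores architecture details and repeated epochs; it is only a back-of-the-envelope way to see which scale of model the rule targets.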

The European Commission, the EU’s executive arm, will curate the list of such models and add to it as needed.
