European Union Takes Steps to Regulate Artificial Intelligence

The European Parliament on Wednesday took a major step toward regulating artificial intelligence, passing a draft of the AI Act that places restrictions on many of what are considered the technology’s riskiest uses. The EU has led the world in advancing AI regulation, and observers are already citing the developing law as a model framework for policymakers worldwide who are eager to place guardrails on a rapidly advancing technology. Among the Act’s key provisions: it will dramatically curtail the use of facial recognition software and require AI firms such as OpenAI to disclose more about their training data.

“We have made history today,” said EU Parliamentarian Brando Benifei of the draft AI Act that lawmakers agreed upon. They will now negotiate the final language with EU member states and the European Council, CNN reports. A final version of the law is expected to be passed later this year.

The fact that the EU has already put the proposed AI Act through several iterations puts the bloc further along than the U.S. and the rest of the free world. China is also making significant progress, and its totalitarian system of government gives it a fast track. Canada, Australia, India, Japan and South Korea are also pressing ahead.

The UK closes its consultation on its AI Regulation White Paper on June 21.

“Policymakers everywhere from Washington to Beijing are now racing to control an evolving technology that is alarming even some of its earliest creators,” The New York Times reports.

Acting on behalf of the bloc’s 27 member countries, EU lawmakers have worked on the AI Act for more than two years, focusing on the technology’s potential effects on the workforce and civil liberties, as well as its potential to fuel misinformation.

“In a sign that the technology’s new abilities are emerging seemingly faster than lawmakers are able to address them, earlier versions of the EU law did not give much attention to so-called generative AI systems like ChatGPT,” the NYT writes. The EU takes a “risk-based approach,” and the proposal MEPs just voted on contains the following provisions:

  • Full ban on artificial intelligence for biometric surveillance, emotion recognition and predictive policing
  • Generative AI systems like ChatGPT must disclose that content was AI-generated
  • AI systems used to influence voters in elections to be classified as high-risk

AI providers and those deploying AI systems will face obligations graded according to the level of risk. Systems posing an unacceptable level of risk to people’s safety, such as those used for social scoring (classifying people based on their social behavior or personal characteristics), will be prohibited.

MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces
  • “Post” remote biometric identification systems (with an exception for law enforcement that, much like a search warrant, will require judicial authorization)
  • Biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation)
  • Predictive policing systems (based on profiling, location or past criminal behaviour)
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions
  • Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy)
