U.S. and Europe Sign the First Legally Binding Global AI Treaty

The first legally binding international treaty on artificial intelligence was signed last week by the countries that negotiated it, including the United States, the United Kingdom and European Union members. The Council of Europe Framework Convention on Artificial Intelligence is “aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.” Drawn up by the Council of Europe (COE), an international human rights organization, the treaty was signed at the COE’s Conference of Ministers of Justice in Lithuania. Other signatories include Israel, Iceland, Norway, the Republic of Moldova and Georgia.

“The list means the COE’s framework has netted a number of countries where some of the world’s biggest AI companies are either headquartered or are building substantial operations,” TechCrunch writes, adding that “perhaps as important are the countries not included so far: none in Asia, the Middle East, nor Russia, for example.”

“I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible,” Council of Europe Secretary General Marija Pejčinović Burić said in a COE announcement.

Described as “technology-neutral” so it may stand the test of time, the treaty “provides a legal framework covering the entire lifecycle of AI systems,” according to the COE. “It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law.”

TechCrunch points out that “the COE is not a lawmaking entity, but was founded in the wake of World War II with a function to uphold human rights, democracy and Europe’s legal systems,” but nonetheless calls the framework “a high-level treaty.”

TechRepublic reports that, to protect these three core values, the Framework Convention requires signatories to:

  • Ensure AI systems respect human dignity, autonomy, equality, non-discrimination, privacy, transparency, accountability and reliability.
  • Provide information about decisions made using AI and allow people to challenge the decisions or the use of the AI itself.
  • Offer procedural safeguards, including complaint mechanisms and notice of AI interactions.
  • Conduct ongoing risk assessments for human rights impacts and establish protective measures.
  • Allow authorities to ban or pause certain AI applications if necessary.

TechRepublic notes that the treaty covers the use of AI systems by public and private entities, but “does not apply to activities relating to national security, national defense matters, or research and development unless they have the potential to interfere with human rights, democracy, or the rule of law.”

It is meant to enhance, not replace, existing laws like the UK’s Online Safety Act and the EU’s AI Act.

Related:
UK Signs First International Treaty to Implement AI Safeguards, The Guardian, 9/5/24
U.S. Signs International Treaty on AI, What it Means for Tech Industry, Newsweek, 9/6/24
