U.S., Britain and 16 Nations Aim to Make AI Secure by Design

The United States, Britain and 16 other countries have signed a 20-page agreement on working together to keep artificial intelligence safe from bad actors, calling for collaborative efforts to create AI systems that are “secure by design.” The 18 countries said they will aim to ensure that companies that design and use AI develop and deploy it in a way that protects their customers and the public from abuse. The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) jointly released the Guidelines for Secure AI System Development.

“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” CISA Director Jen Easterly told Reuters, saying the guidelines represent “an agreement that the most important thing that needs to be done at the design phase is security.”

While it is significant that 18 countries signed onto the idea that AI systems must prioritize public safety, Reuters points out that the agreement is the latest of many, “few of which carry teeth, by governments around the world to shape the development of AI, whose weight is increasingly being felt in industry and society at large.”

Forbes says the guidelines are “aimed mainly at providers of AI systems that are using models hosted by an organization, or that are using external application programming interfaces (APIs),” adding that “the aim is to help developers make sure that cybersecurity is baked in as an essential pre-condition of AI system safety.”

The guidelines “were formulated in cooperation with 21 other agencies and ministries from across the world — including all members of the Group of 7 major industrial economies — and are the first of their kind to be agreed to globally,” CISA announced in a news release that called the agreement “historic.”

The UK, which positioned itself as leading the negotiations, said in an NCSC statement that the guidelines “will help developers of any systems that use AI make informed cybersecurity decisions at every stage of the development process — whether those systems have been created from scratch or built on top of tools and services provided by others.”

The guidelines are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.
