U.S. AI Safety Institute Consortium Debuts with 200 Members

The U.S. has established the AI Safety Institute Consortium (AISIC), uniting artificial intelligence researchers, creators, academics and other users across government, industry and civil society organizations to support the development and deployment of safe and trustworthy AI. The group launches with more than 200 member entities ranging from tech giants Google, Microsoft and Amazon to AI-first firms OpenAI, Cohere and Anthropic. Secretary of Commerce Gina Raimondo announced the move the day after naming Elizabeth Kelly director of the new U.S. AI Safety Institute, housed at the National Institute of Standards and Technology (NIST).

Operating as part of the AI Safety Institute, the Consortium will contribute to the priority actions outlined in President Biden’s October Executive Order on managing the risks of AI. The coalition’s activities will include developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content, according to a Department of Commerce announcement.

The AISIC “represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety,” explains an NIST news post.

“Participants who were selected (and are required to pay a $1,000 annual fee) entered into a ‘Consortium Cooperative Research and Development Agreement’ with NIST,” explains VentureBeat, noting “there have been few details disclosed about how the institute would work and where its funding would come from, especially since NIST itself, with a reported staff of about 3,400 and an annual budget of just over $1.6 billion, is known to be underfunded.”

The AISIC was first announced last fall, shortly after President Biden signed his AI Executive Order. “Participation in the consortium is open to all interested organizations that can contribute their expertise, products, data, and/or models to the activities of the Consortium,” says the NIST website, outlining the activities and responsibilities of the group.

Among them: developing “new guidelines, tools, methods, protocols and best practices to facilitate the evolution of industry standards for developing or deploying AI in safe, secure, and trustworthy ways” and establishing “guidance and benchmarks for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm.”

Incoming AISI Chief Kelly, an economic policy adviser to President Biden, “was a driving force behind the domestic components of the AI executive order,” according to an appointment announcement. A Yale Law School graduate, she previously worked in the Obama administration and at Capital One. Elham Tabassi, who has been integral to Commerce’s AI work at NIST, will serve as the AISI’s chief technology officer.

“The federal government recently began requiring AI companies to test their systems, but so far, those tests lack the universal set of standards that the institute plans to finalize this summer,” reports ABC News.
