Apple Joins the Safe AI Initiative as NIST Amps Up Outreach

The U.S. Commerce Department has issued a large package of materials designed to help AI developers and deployers identify and mitigate risks stemming from generative AI and foundation models. Prepared by the National Institute of Standards and Technology (NIST) and the AI Safety Institute, the guidance includes the initial public draft of guidelines on “Managing Misuse Risk for Dual-Use Foundation Models.” Dual-use refers to models that can be put to beneficial or harmful ends. The release also includes an open-source software testing platform called Dioptra. Apple is the latest company to join the government’s voluntary commitments to responsible AI innovation.

Dioptra aims to help “small to medium-sized businesses,” as well as government agencies, “conduct evaluations to assess AI developers’ claims about their systems’ performance,” NIST notes in an explainer.
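
To make that idea concrete, here is a minimal sketch of claim-checking in Python: measure a model’s accuracy on held-out data and compare it against the developer’s stated figure. This is illustrative only, not Dioptra’s actual interface; the dataset, model, and claimed_accuracy value are all hypothetical stand-ins.

```python
# Illustrative only -- NOT Dioptra's API. A minimal sketch of checking a
# claimed accuracy figure against performance measured on held-out data.
# The dataset, model, and claimed_accuracy are hypothetical stand-ins.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

claimed_accuracy = 0.99  # hypothetical vendor claim
measured_accuracy = accuracy_score(y_test, model.predict(X_test))

print(f"claimed:  {claimed_accuracy:.2%}")
print(f"measured: {measured_accuracy:.2%}")
if measured_accuracy < claimed_accuracy:
    print("Claim not reproduced on held-out data.")
```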

Testing the effects of adversarial attacks on machine learning models is one of Dioptra’s goals, NIST says, citing the example of poisoning training data with inaccuracies, such as an attack that causes a model to “misidentify stop signs as speed limit signs.”
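
As a rough illustration of that kind of data-poisoning test, the sketch below flips training labels for one class and measures how the poisoned model’s predictions shift, in the spirit of the stop-sign example. It is not a Dioptra workflow; the digits dataset and the 3-to-8 label flip are hypothetical stand-ins for the street-sign scenario.

```python
# Illustrative only -- a minimal label-flipping poisoning sketch, not a
# Dioptra workflow. It mimics the stop-sign example by mislabeling every
# training "3" as an "8", then measuring how often the poisoned model
# misreads genuine 3s as 8s compared with a cleanly trained model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Poison the training data: relabel every "3" as an "8".
y_poisoned = y_train.copy()
y_poisoned[y_train == 3] = 8

clean = LogisticRegression(max_iter=5000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=5000).fit(X_train, y_poisoned)

# Evaluate both models on the genuine 3s in the test set.
threes = X_test[y_test == 3]
print("clean model,    3 -> 8 rate:", np.mean(clean.predict(threes) == 8))
print("poisoned model, 3 -> 8 rate:", np.mean(poisoned.predict(threes) == 8))
```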

The Dioptra software is available for free download on GitHub. While NIST’s target audience is users with “a wide variety of familiarity with and expertise in machine learning,” an info page on Dioptra points out that “newcomers to the platform will be able to run the included demonstrations of attacks and defenses even if they have very little programming experience.”

NIST says the materials are being released in response to the executive order on “Safe, Secure and Trustworthy Development of AI” signed by President Biden on October 30, 2023. On Friday, the administration announced that Apple has signed on, joining the voluntary commitments to responsible AI innovation. Earlier signatories include Amazon, Google, Meta Platforms, Microsoft, Nvidia and OpenAI.

The public draft guidance from the U.S. AI Safety Institute and the Dioptra testing platform are both NIST developments, the agency says, adding that three other publications in the package first appeared in draft form on April 29.

Two are geared toward managing generative AI risks and “serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF).” The third proposes a plan for U.S. stakeholders to work with others around the globe on AI standards.

The Dioptra release “follows the launch of the UK AI Safety Institute’s Inspect, a toolset similarly aimed at assessing the capabilities of models and overall model safety,” reports TechCrunch.

The principles behind Biden’s EO “call for companies to transparently share the results of those tests with governments, civil society and academia — and to report any vulnerabilities,” according to Bloomberg.
