Apple Joins the Safe AI Initiative as NIST Amps Up Outreach

The U.S. Commerce Department has issued a large package of materials designed to help AI developers and deployers identify and mitigate risks stemming from generative AI and foundation models. Prepared by the National Institute of Standards and Technology and the AI Safety Institute, the guidance includes the initial public draft of guidelines on “Managing Misuse Risk for Dual-Use Foundation Models.” Dual-use refers to models that can be used for good or ill. The release also includes Dioptra, an open-source software platform for testing AI models. Apple is the latest company to sign on to the government’s voluntary commitments to responsible AI innovation. Continue reading Apple Joins the Safe AI Initiative as NIST Amps Up Outreach

UK Launches New Open-Source Platform for AI Safety Testing

The UK AI Safety Institute has announced the availability of Inspect, a new platform for evaluating and testing artificial intelligence systems to help develop safe AI models. The Inspect toolset enables testers, including researchers worldwide, government agencies, and startups, to analyze the specific capabilities of such models and assign scores based on various criteria. According to the Institute, the “release comes at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.” Continue reading UK Launches New Open-Source Platform for AI Safety Testing

U.S. and UK Form Partnership to Accelerate AI Safety Testing

The United States has entered into an agreement with the United Kingdom to collaboratively develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Following commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the two countries “working to align their scientific approaches” and accelerating evaluations of AI models, systems and agents. Continue reading U.S. and UK Form Partnership to Accelerate AI Safety Testing

CES: Panelists Weigh Need for Safe AI That Serves the Public

A CES session on government AI policy featured an address by Alan Davidson, Assistant Secretary of Commerce for Communications and Information and administrator of the National Telecommunications and Information Administration (NTIA), followed by a discussion of government activities and, finally, industry perspectives from executives at Google, Microsoft and Xperi. Davidson studied at MIT under nuclear scientist Professor Philip Morrison, who spent the first half of his career developing the atomic bomb and the second half trying to stop its use. That lesson was not lost on Davidson: at NTIA, he said, they are working to ensure “that new technologies are developed and deployed in the service of people and in the service of human progress.” Continue reading CES: Panelists Weigh Need for Safe AI That Serves the Public