Anthropic Updates ‘Responsible Scaling’ to Minimize AI Risks

Anthropic, maker of the popular Claude AI chatbot, has updated its Responsible Scaling Policy (RSP), designed to mitigate the risks of advanced AI systems. The policy was introduced last year and has since been refined, with new protocols added to ensure AI models are developed and deployed safely as they grow more powerful. This latest update offers “a more flexible and nuanced approach to assessing and managing AI risks while maintaining our commitment not to train or deploy models unless we have implemented adequate safeguards,” according to Anthropic.

MIT’s AI Risk Assessment Database Debuts with 700 Threats

The list of potential risks associated with artificial intelligence continues to grow. “Global AI adoption is outpacing risk understanding,” warns the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL), which has joined with FutureTech, MIT’s multidisciplinary research group, to compile the AI Risk Repository, a “living database” of more than 700 unique risks extracted from 43 sources. Organized by cause, classifying “how, when and why these risks occur,” the repository comprises seven risk domains (for example, “misinformation”) and 23 subdomains (such as “false or misleading information”).

Copyright Office Calls for Federal Law Regulating Deepfakes

The U.S. Copyright Office is warning of an urgent national need for protection against deepfakes. In the first installment of a multipart report on the adverse effects of artificial intelligence on copyright, the office recommends the immediate enactment of a law to combat AI-driven “digital replicas.” Acknowledging that copyright has always had a symbiotic relationship with technology, as well as AI’s tremendous potential, the report nonetheless decries the proliferation of AI-generated deepfakes, “from celebrities’ images endorsing products to politicians’ likenesses seeking to affect voter behavior.”

Apple Joins the Safe AI Initiative as NIST Amps Up Outreach

The U.S. Commerce Department has issued a large package of material designed to help AI developers and users identify and mitigate risks stemming from generative AI and foundation models. Prepared by the National Institute of Standards and Technology and the AI Safety Institute, the guidance includes the initial public draft of guidelines on “Managing Misuse Risk for Dual-Use Foundation Models.” Dual-use refers to models that can be used for good or ill. The release also includes Dioptra, an open-source platform for testing AI models. Apple is the latest company to join the government’s voluntary commitments to responsible AI innovation.

Justice Department Appoints Jonathan Mayer Chief AI Officer

Jonathan Mayer has been named the Justice Department’s first chief science and technology advisor and will also hold the title of chief artificial intelligence officer, another first. The announcement was made by Attorney General Merrick Garland, who said “the Justice Department must keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe, and protect civil rights.” Mayer will advise Garland and department leaders and collaborate with other departments “on complex issues requiring technical expertise,” including cybersecurity, AI and other areas of emerging technology.

U.S. AI Safety Institute Consortium Debuts with 200 Members

The U.S. has established the AI Safety Institute Consortium (AISIC), uniting artificial intelligence researchers, creators, academics and other users across government, industry and civil society organizations to support the development and deployment of safe and trustworthy AI. The group launches with more than 200 member entities ranging from tech giants Google, Microsoft and Amazon to AI-first firms OpenAI, Cohere and Anthropic. Secretary of Commerce Gina Raimondo announced the move the day after naming Elizabeth Kelly director of the new U.S. AI Safety Institute, housed at the National Institute of Standards and Technology (NIST).

Major Tech Players Launch Frontier Model Forum for Safe AI

Advancing President Biden’s push for responsible development of artificial intelligence, top AI firms including Anthropic, Google, Microsoft and OpenAI have launched the Frontier Model Forum, an industry body that will work collaboratively with outside researchers and policymakers to implement best practices. The new group will focus on AI safety, research into its risks, and disseminating information to the public, governments and civil society. Other companies building bleeding-edge AI models will also be invited to join and participate in technical evaluations and benchmarks.

Anthropic Shares Details of Constitutional AI Used on Claude

AI startup Anthropic is sharing new details of the “safe AI” principles that helped train its Claude chatbot. Known as “Constitutional AI,” the method draws on sources ranging from the Universal Declaration of Human Rights to Apple’s Terms of Service, along with Anthropic’s own research. “What ‘values’ might a language model have?” Anthropic asks, noting that “our recently published research on Constitutional AI provides one answer by giving language models explicit values determined by a constitution, rather than values determined implicitly via large-scale human feedback.”
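
To illustrate the mechanism at a glance, here is a minimal sketch of the critique-and-revision loop from the supervised phase of Constitutional AI, as described in Anthropic’s published research. The `generate` callable and the sample principles are illustrative stand-ins, not Anthropic’s actual model API or constitution.

```python
import random
from typing import Callable

# Illustrative principles only; not Anthropic's actual constitution.
PRINCIPLES = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that best respects privacy and human rights.",
]

def critique_and_revise(
    generate: Callable[[str], str],  # any text-in/text-out model call (assumption)
    user_prompt: str,
    rounds: int = 2,
) -> str:
    """Have the model critique its own draft against a sampled constitutional
    principle, then rewrite it to address the critique."""
    response = generate(user_prompt)
    for _ in range(rounds):
        principle = random.choice(PRINCIPLES)  # sample one principle per round
        critique = generate(
            f"Critique the response below according to this principle: {principle}\n"
            f"Prompt: {user_prompt}\nResponse: {response}"
        )
        response = generate(
            "Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response  # revised responses become supervised finetuning data
```

In the published method, the revised responses are used to finetune the model, and a later reinforcement-learning phase uses AI feedback guided by the same principles rather than human preference labels.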