Tech Firms Push Back Against California AI Safety Regulation

California tech companies are bristling at a state bill that would force them to adopt strict safety protocols, including installing “kill switches” to shut down AI models that pose a public risk. Silicon Valley has emerged as a global AI leader, and the proposed law would affect not only OpenAI but also Anthropic, Cohere, Google and Meta Platforms. The bill, SB 1047, focuses on what its lead sponsor, State Senator Scott Wiener, calls “common sense safety standards” for frontier models. Should the bill become law, it could also reach firms such as Amazon that provide AI cloud services to California customers despite not being based in the state.

OpenAI Is Working on New Frontier Model to Succeed GPT-4

OpenAI has begun training a new flagship artificial intelligence model to succeed GPT-4, the technology currently underlying ChatGPT. The new model, which some are already calling GPT-5 although OpenAI hasn’t yet shared its name, is expected to bring the company’s capabilities to the next level as it works toward artificial general intelligence (AGI), intelligence equal to or surpassing human cognitive abilities. The company also announced it has formed a new Safety and Security Committee, two weeks after dissolving its predecessor upon the departure of OpenAI co-founder and Chief Scientist Ilya Sutskever.

U.S. and UK Form Partnership to Accelerate AI Safety Testing

The United States has entered into an agreement with the United Kingdom to jointly develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by such models. Coming on the heels of commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. Signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, the agreement envisions the two countries “working to align their scientific approaches” and accelerating evaluations of AI models, systems and agents.

OpenAI Creates a Team to Examine Catastrophic Risks of AI

OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats posed by emerging “superintelligence” AI, also called frontier models. Topics under review include the parameters required for a robust monitoring and prediction framework and how malicious actors might exploit stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing the potential risks associated with their models.