California tech companies are bristling at a state bill that would force them to enact strict safety protocols, including installing “kill switches” to turn off AI models that present a public risk. Silicon Valley has emerged as a global AI leader, and the proposed law would impact not only OpenAI but also Anthropic, Cohere, Google and Meta Platforms. The bill, SB 1047, focuses on what its lead sponsor, State Senator Scott Wiener, calls “common sense safety standards” for frontier models. Should the bill become law, it could also affect firms like Amazon that provide AI cloud services to California customers, even though they are not based in the state.
Passed by the California Senate in May and set for a general vote in August, the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act “requires AI groups in California to guarantee to a newly created state body that they will not develop models with ‘a hazardous capability,’ such as creating biological or nuclear weapons or aiding cybersecurity attacks,” Ars Technica reports.
Under the pending law, AI developers would be required to keep the state government apprised of safety testing and compliance with the kill switch feature. Ars Technica quotes Amazon board member Andrew Ng, a “renowned computer scientist who led AI projects at Alphabet’s Google and China’s Baidu,” as saying such a law would create “massive liabilities for science-fiction risks” and, in doing so, stifle innovation.
It is unclear whether California Governor Gavin Newsom would sign such a bill into law. Politico reports that, speaking last week at an AI event in San Francisco, Newsom signaled caution to Democratic lawmakers “who are advancing dozens of AI bills in the state Legislature,” saying “if we over-regulate … we could put ourselves in a perilous position.”
Bloomberg writes that “in the absence of federal legislation on AI, U.S. states are increasingly pushing for their own regulations,” quoting Wiener as saying he would “much prefer” the issue be handled at the federal level, “but as we’ve seen with data privacy, with social media, with net neutrality, even for technology issues with strong bipartisan support, it’s been very hard and at times impossible for Congress to act.”
“The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Elon Musk’s AI startup, xAI,” writes Ars Technica, adding that “CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.”
Related:
States Take Up AI Regulation Amid Federal Standstill, The New York Times, 6/10/24
Can California Fill the Federal Void on Frontier AI Regulation?, Brookings, 6/4/24
California’s SB 1047 Won’t Address Existential Risks from AI, Forbes, 6/5/24
Proposed Law to Control Powerful AI Models Will Destroy California’s Nascent Industry, VentureBeat, 6/11/24