OpenAI Bestows Independent Oversight on Safety Committee

The OpenAI board’s Safety and Security Committee (SSC) will become an independent board oversight committee, chaired by Zico Kolter, machine learning department chair at Carnegie Mellon University. The committee will be responsible for “the safety and security processes guiding OpenAI’s model deployment and development.” Three OpenAI board members will move from their current SSC roles to the new committee: Quora founder Adam D’Angelo, former Sony Corporation EVP Nicole Seligman and former NSA chief Paul Nakasone. OpenAI is currently putting together a new funding round that reportedly aims to value the company at $150 billion.

OpenAI’s announcement did not specify whether the three SSC members will continue in dual roles (though its designation as a “board committee” suggests they will). Seligman and Nakasone were named to the OpenAI board this year with the formation of the original SSC, while D’Angelo joined in 2018.

Generally speaking, independent oversight excludes those with financial ties to the company, whereas regular board members are paid for their service.

OpenAI’s announcement did not mention plans to expand the SSC beyond the four named members. TechCrunch concluded that it will not include OpenAI co-founder and CEO Sam Altman. Neither he nor board chairman Bret Taylor, a former CEO of Salesforce, was mentioned in the OpenAI post. Both served on the earlier SSC incarnation.

OpenAI says “independent governance” for the safety and security overseers was recommended by the SSC itself. The company also lists increased transparency, external collaboration and unified safety frameworks for model development as recommended measures.

The SSC will have “the authority to delay a release until safety concerns are addressed,” according to the announcement, which states that privilege also extends to “the full board.”

“Last week, OpenAI released o1, a preview version of its new AI model focused on reasoning and ‘solving hard problems,’” writes CNBC, noting that “the company said the committee ‘reviewed the safety and security criteria that OpenAI used to assess OpenAI o1’s fitness for launch,’ as well as safety evaluation results.”

TechCrunch reports that “nearly half of the OpenAI staff that once focused on AI’s long-term risks have left,” and ex-OpenAI researchers have accused Altman, who has publicly called for government oversight of AI, “of opposing ‘real’ AI regulation.”

OpenAI launched the Safety and Security Committee as part of its board in May to replace the Superalignment Team, dissolved after the departure of OpenAI co-founder and chief scientist Ilya Sutskever earlier that month.

The move to shore up the committee comes as Sutskever’s new company, Safe Superintelligence, draws considerable attention. Launched in June, the company made news this month by raising $1 billion in venture funding for its work on safe, ethical AI.
