Sutskever Targets Safe Superintelligence with New Company

Ilya Sutskever — who last month exited his post as chief scientist at OpenAI after a highly publicized power struggle with CEO Sam Altman — has launched a new AI company, Safe Superintelligence Inc. Sutskever’s partners in the new venture are his former OpenAI colleague Daniel Levy and Daniel Gross, who founded the AI startup Cue, which was acquired by Apple, where Gross went on to hold an AI leadership role. “Building safe superintelligence (SSI) is the most important technical problem of our time,” the trio posted on the company’s one-page website, stating that the company’s goal is to “scale in peace.”

“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” the announcement-cum-recruitment post states.

VentureBeat describes superintelligence, a concept that goes a step beyond human-level artificial general intelligence, as “a hypothetical agent with intelligence far superior to that of the smartest human.” Whether, or when, it may be possible to build a machine with that capability is the subject of much debate.

The SSI team thinks it is possible. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever tells Bloomberg, which summarizes SSI’s goal as “to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services.”

In other words, Bloomberg reports, “he’s attempting to continue his work without many of the distractions that rivals such as OpenAI, Google and Anthropic face.” The company “will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race,” Sutskever said.

The New York Times reached out to SSI, which “declined to name who is funding the company or how much it has raised.” Sutskever’s title at SSI will be chief scientist, according to NYT, although an SSI spokesperson says he describes his actual job function as “responsible for revolutionary breakthroughs.”

At OpenAI, Sutskever helped found what was called the “Superalignment Team,” which “aimed to ensure that future AI technologies would not do harm,” writes NYT, adding that “like others in the field, he had grown increasingly concerned that AI could become dangerous and perhaps even destroy humanity.”

VentureBeat writes that with Sutskever’s departure from OpenAI, “that group was disbanded, a move that was heavily criticized by one of the former leads, Jan Leike,” who now works at competitor Anthropic.
