Safe Superintelligence Raises $1 Billion to Develop Ethical AI

OpenAI co-founder and former chief scientist Ilya Sutskever, who exited the company in May after a power struggle with CEO Sam Altman, has raised $1 billion for his new venture, Safe Superintelligence (SSI). The cash infusion from major Silicon Valley venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel and NFDG has resulted in a $5 billion valuation for the startup. As its name implies, SSI is focused on developing artificial intelligence that does not pose a threat to humanity, a goal that will be pursued “in a straight shot” with “one product,” Sutskever has stated.

“On its website, SSI writes that it will eschew the productization that OpenAI and other AI startups have pursued — and which allegedly led to Sutskever and other researchers’ increasing disillusionment with Altman — instead focusing entirely on developing a ‘safe’ artificial ‘superintelligence,’” with the latter term referring to “AI that is vastly smarter and more capable than most (or all) human beings,” writes VentureBeat.

AI safety “refers to preventing AI from causing harm,” explains Reuters, adding that it “is a hot topic amid fears that rogue AI could act against the interests of humanity or even cause human extinction.”

Reuters, which was first to report the raise, said “the funding underlines how some investors are still willing to make outsized bets on exceptional talent focused on foundational AI research … despite a general waning in interest towards funding such companies which can be unprofitable for some time.” SSI declined to confirm Reuters’ valuation.

Sutskever’s tussle with Altman, which resulted in the CEO’s temporary ouster, was rooted in complex issues, notes CNBC, which writes that “The Wall Street Journal and other media outlets reported that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.”

SSI “plans to partner with cloud providers and chip companies to fund its computing power needs but hasn’t yet decided which firms it will work with,” says Reuters, listing “companies such as Microsoft and Nvidia as go-to choices for addressing infrastructure needs.”

In an interview, Reuters asked Sutskever if he plans on open-sourcing SSI’s work and was told that “at this point, all AI companies are not open-sourcing their primary work. The same holds true for us,” though he left the door open for sharing “relevant superintelligence safety work.”

Sutskever was part of OpenAI’s safety-focused Superalignment team, which was disbanded shortly after his exit. He launched SSI in June with fellow former OpenAI researcher Daniel Levy and Apple’s erstwhile AI lead Daniel Gross, co-founder of Cue.
