OpenAI Launches a Task Force to Control Superintelligent AI

OpenAI believes artificial intelligence exceeding human intelligence “could arrive this decade.” The company uses the term “superintelligence rather than AGI to stress a much higher capability level,” and warns that while this technology holds great promise, it will not necessarily be benevolent. Preparing for the worst, OpenAI has formed an internal unit charged with developing ways to keep superintelligent AI in check. Led by OpenAI’s Ilya Sutskever and Jan Leike, the Superalignment Team will work toward “steering or controlling a potentially superintelligent AI and preventing it from going rogue.”