Concerned Thought Leaders Call for Pause on "Giant AI Experiments"

Elon Musk and Steve Wozniak are among a group of more than 1,100 tech leaders, researchers and AI stakeholders who have signed an open letter calling for a pause on “giant AI experiments.” The missive, published by the Future of Life Institute, warns of “profound risks to society and humanity” that could be caused by an “out-of-control race” to develop and commercially deploy artificial intelligence systems “that no one — not even their creators — can understand, predict, or reliably control.” Other signatories include politician Andrew Yang, Skype co-founder Jaan Tallinn, Pinterest co-founder Evan Sharp and Stability AI CEO Emad Mostaque. 

Machine learning experts from academia who signed include University of Southern California computer science professor emeritus Paul Rosenbloom, a fellow of the Association for the Advancement of Artificial Intelligence; the University of Montreal’s Yoshua Bengio, a Turing Award winner; NYU AI researcher Gary Marcus; and Stuart Russell, director of the Center for Intelligent Systems at UC Berkeley.

The authors “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” a benchmark for “giant” models. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The full text of the letter and the list of signatories are posted on the Future of Life Institute website.

“The letter is unlikely to have any effect on the current climate in AI research, which has seen tech companies like Google and Microsoft rush to deploy new products,” writes The Verge, calling it “a sign of the growing opposition to this ‘ship it now and fix it later’ approach; an opposition that could potentially make its way into the political domain for consideration by actual legislators.”

The communiqué recognizes AI’s potential but warns against reckless advancement. It adds that “society has hit pause on other technologies with potentially catastrophic effects,” listing human cloning and eugenics.

Elon Musk, whose carmaker Tesla uses AI in its Autopilot system, “has expressed frustration over regulators critical of efforts to regulate the autopilot system” and “has sought a regulatory authority to ensure that development of AI serves the public interest,” explains Reuters. “AI stresses me out,” Musk said.

The call for caution contrasts with a UK regulatory white paper released this week that offers five guiding “principles” but leaves rules governing adoption and deployment of machine learning systems to “existing regulators such as the Competition and Markets Authority and Health and Safety Executive,” writes The Guardian.

“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow,” UK science, innovation and technology secretary Michelle Donelan is quoted as saying in The Guardian. The paper lists the Ada Lovelace Institute among critics of that strategy; the Institute says the approach has “significant gaps, which could leave harms unaddressed, and is underpowered relative to the urgency and scale of the challenge.”

Related:
The Case for Slowing Down AI, Vox, 3/20/23
AI’s Great “Pause” Debate, Axios, 3/30/23
Pausing AI Developments Isn’t Enough. We Need to Shut it All Down, TIME, 3/29/23
FTC Should Stop OpenAI from Launching New GPT Models, Says AI Policy Group, The Verge, 3/30/23
