Advancing President Biden’s push for responsible development of artificial intelligence, top AI firms including Anthropic, Google, Microsoft and OpenAI have launched the Frontier Model Forum, an industry body that will work collaboratively with outside researchers and policymakers to implement best practices. The new group will focus on AI safety, research into AI risks, and disseminating information to the public, governments and civil society. Other companies building bleeding-edge AI models will also be invited to join and participate in technical evaluations and benchmarks.
The Frontier Model Forum will establish an advisory board to help guide its strategy and priorities. Its four stated goals, according to a blog post, are to help:
- Advance AI safety research to promote responsible development of frontier models and minimize potential risks.
- Identify safety best practices for frontier models.
- Share knowledge with policymakers, academics, civil society, and others to advance responsible AI development.
- Support efforts to leverage AI to address society’s biggest challenges.
“Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by U.S. and European Union lawmakers to craft binding legislation for the industry,” writes CNN.
The announcement follows a White House summit at which the four founding FMF members joined Amazon and Meta Platforms in pledging “to subject their AI systems to third-party testing before releasing them to the public and to clearly label AI-generated content.”
The FMF was unveiled a day after leading AI experts testified before the Senate Judiciary Subcommittee on Technology and the Law, warning of grave, potentially even ‘catastrophic’ societal dangers stemming from unchecked AI development.
“In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Anthropic CEO Dario Amodei told the senators.
AI pioneer Yoshua Bengio and others agreed that “the best way to prevent major harms, is to restrict access to AI systems; develop standard and effective testing regimes to ensure those systems reflect shared societal values; limit how much of the world any single AI system can truly understand; and constrain the impact that AI systems can have on the real world,” reports CNN.
Amodei said that in two or three years AI could be advanced enough to help malicious actors build sophisticated biological weapons.
The FMF defines frontier models as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models.” The blog post includes membership criteria, inviting participation from organizations that develop and deploy what the Forum defines as frontier models and are willing to commit to frontier model safety, including “technical and institutional approaches.”