CES: Panelists Weigh Need for Safe AI That Serves the Public
January 16, 2024
A CES session on government AI policy featured an address by Assistant Secretary of Commerce for Communications and Information Alan Davidson (who is also administrator of the National Telecommunications and Information Administration), followed by a discussion of government activities, and finally industry perspective from execs at Google, Microsoft and Xperi. Davidson studied at MIT under nuclear scientist Professor Philip Morrison, who spent the first part of his career developing the atomic bomb and the second half trying to stop its use. That lesson was not lost on Davidson. At NTIA they are working to ensure “that new technologies are developed and deployed in the service of people and in the service of human progress.”
There is no better example of the impact of technology on humanity than the global conversation we are having now around AI. Davidson said that we will only achieve the promise of AI if we address the real risks that it poses today — including safety, security, privacy, innovation, and IP concerns. There is a strong sense of urgency across the Biden administration and in governments around the world to address these concerns.
He described President Biden’s executive order (EO) on AI. Directed by the EO, NIST is establishing a new AI safety institute, the Patent and Trademark Office is exploring copyright issues, and NTIA is working on AI accountability and the benefits and risks of openness. Governments, businesses, and civil society are stepping up to engage in the dialogue that will shape AI’s future.
U.S. government activities were discussed by Stephanie Nguyen, CTO of the Federal Trade Commission, and Sam Marullo, counselor to the Secretary of Commerce. Nguyen explained that the FTC’s mission is to protect against unfair, deceptive, and anticompetitive acts and practices. There are “no AI exemptions on the books,” she said.
The President’s EO on AI aspires to safe, secure, and trustworthy AI. Both the FTC and Department of Commerce are working to build the best multidisciplinary subject-matter-expertise teams to address key aspects of this challenge. Those teams will include government, private sector, and civil society representation.
Open questions that the panel discussed include how AI will impact establishing provenance, ‘know your customer’ models, and patent and copyright issues. There is bipartisan interest in Congress to do “something,” Marullo said, but discussions are in their very early stages.
Industry perspective was articulated by Xperi VP of Marketing David McIntyre, Google Global Cloud AI Policy Lead Addie Cooke, and Microsoft Senior Director for AI Policy Danyelle Solomon.
McIntyre made a key point about AI and regulation when he said that AI is not a fundamentally new problem for the agencies. It is simply a new tool to be applied to the use cases for which the agencies have already developed regulations. The agencies should focus on the end goal of the use cases, he suggested, and identify the gaps and holes that AI introduces, and develop regulations to address these issues rather than develop a separate AI regulatory regime.
Cooke was heartened to hear the FTC suggest that a diversity of people, skills, and ideas is important for policy development. She stressed the need to coordinate standards development nationally and internationally, rather than have individual jurisdictions develop their own local regulations. She noted that the first ISO standard for AI (42001) was recently published, and NIST has put out AI standards. At the federal level, Solomon listed a number of bills being developed for which she has a positive opinion.
“Something I’m sure we all feel pretty strongly about,” said McIntyre, “is making sure you’re regulating the use-case rather than the underlying technology.” How do you make sure your AI rules and safeguards are part of the engineering mindset and then how do you build them into the test framework? How do you incorporate tests for potential risks at a micro level so they are as integral to the QC process as any other feature that is prone to attack?
The President’s EO on AI set up the NIST’s Artificial Intelligence Safety Institute with a proposed $10 million budget, and a pilot program through the National Science Foundation and partners for a National Artificial Intelligence Research Resource (NAIRR) that will establish a shared national research infrastructure. Solomon called these critical resources and expressed her support for them.
What stood out in the EO for McIntyre was the comment that AI is not an exception or something new. The agencies’ fundamental rule-making roles continue. “AI is a tool that achieves an end, and you’re regulating that end,” he said.
Cooke added that privacy regulators need to remind players in their jurisdictions that the rules around privacy, security and other issues have not changed. It doesn’t matter what the technology is. You have to respect the rules and regulations.
Solomon called for a national privacy law. The three panelists agreed that national policies and laws on many of these key issues, rather than state level laws, would be desirable.
No Comments Yet
You can be the first to comment!
Leave a comment
You must be logged in to post a comment.