By Paula Parisi, October 18, 2024
A new LLM framework evaluates how well generative AI models are meeting the challenge of compliance with the legal parameters of the European Union’s AI Act. The free and open-source software is the product of a collaboration between ETH Zurich; Bulgaria’s Institute for Computer Science, Artificial Intelligence and Technology (INSAIT); and Swiss startup LatticeFlow AI. It is being billed as “the first evaluation framework of the EU AI Act for Generative AI models.” Already, it has found that some of the top AI foundation models are falling short of European regulatory goals in areas including cybersecurity resilience and discriminatory output. Continue reading ‘EU AI Act Checker’ Holds Big AI Accountable for Compliance
By Paula Parisi, September 27, 2024
The European Commission has released a list of more than 100 companies that have become signatories to the EU’s AI Pact. While Google, Microsoft and OpenAI are among them, Apple and Meta are not. The voluntary AI Pact is aimed at eliciting policies on AI deployment during the period before the legally binding AI Act takes full effect. The EU AI Pact focuses on transparency in three core areas: internal AI governance, mapping of high-risk AI systems, and promoting AI literacy and awareness among staff to support ethical development. It is aimed at “relevant stakeholders” across industry, civil society and academia. Continue reading Amazon, Google, Microsoft and OpenAI Join the EU’s AI Pact
By Paula Parisi, September 9, 2024
The first legally binding international treaty on artificial intelligence was signed last week by the countries that negotiated it, including the United States, United Kingdom and European Union members. The Council of Europe Framework Convention on Artificial Intelligence is “aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.” Drawn up by the Council of Europe (COE), an international human rights organization, the treaty was signed at the COE’s Conference of Ministers of Justice in Lithuania. Other signatories include Israel, Iceland, Norway, the Republic of Moldova and Georgia. Continue reading U.S. and Europe Sign the First Legally Binding Global AI Treaty
By Paula Parisi, May 24, 2024
Leading AI firms spanning Europe, Asia, North America and the Middle East have signed a new voluntary commitment to AI safety. The 16 signatory companies — including Amazon, Google DeepMind, Meta Platforms, Microsoft, OpenAI, xAI and China’s Zhipu AI — will publish outlines indicating how they will measure the risks posed by their frontier models. “In the extreme, leading AI tech companies including from China and the UAE have committed to not develop or deploy AI models if the risks cannot be sufficiently mitigated,” according to UK Technology Secretary Michelle Donelan. Continue reading Global Technology Companies Sign Pledge to Foster AI Safety
By ETCentric Staff, March 25, 2024
The United Nations General Assembly on Thursday adopted a U.S.-led resolution to promote “safe, secure and trustworthy” artificial intelligence systems and their sustainable development for the benefit of all. The non-binding proposal, which was adopted without a formal vote, drew support from more than 122 co-sponsors, including China and India. It emphasizes “the respect, protection and promotion of human rights in the design, development, deployment and use” of responsible and inclusive AI. “The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms. Continue reading UN Adopts Global AI Resolution Backed by U.S., 122 Others
By ETCentric Staff, March 15, 2024
The European Union has passed the Artificial Intelligence Act, becoming the first global entity to enact a comprehensive law regulating AI’s development and use. Member states agreed on the framework in December 2023, and it was adopted Wednesday by the European Parliament with 523 votes in favor, 46 against and 49 abstentions. The legislation establishes what are being called “sweeping rules” for those building AI as well as those who deploy it. The rules, which will take effect gradually, implement new risk assessments, ban AI uses deemed “high risk,” and mandate transparency requirements. Continue reading EU Lawmakers Pass AI Act, World’s First Major AI Regulation
By Paula Parisi, December 12, 2023
The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western democracy to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set. Continue reading EU Makes Provisional Agreement on Artificial Intelligence Act
By Paula Parisi, December 7, 2023
IBM and Meta Platforms have launched the AI Alliance, a coalition of companies and educational institutions committed to responsible, transparent development of artificial intelligence. The group launched this week with more than 50 global founding participants from industry, startups, academia, research and government. Among the members and collaborators: AMD, CERN, Cerebras, Cornell University, Dell Technologies, Hugging Face, Intel, Linux Foundation, NASA, Oracle, Red Hat, Sony Group, Stability AI, the University of Tokyo and Yale Engineering. The group’s stated purpose is “to support open innovation and open science in AI.” Continue reading IBM and Meta Debut AI Alliance for Safe Artificial Intelligence
By Paula Parisi, November 21, 2023
Germany, France and Italy have reached an agreement on a strategy to regulate artificial intelligence. The agreement comes on the heels of infighting among key European Union member states that has held up legislation, and it could accelerate the broader EU negotiations. The three governments support binding voluntary commitments for large and small AI providers and endorse “mandatory self-regulation through codes of conduct” for foundation models while opposing “un-tested norms.” The paper underscores that “the AI Act regulates the application of AI and not the technology as such,” locating the “inherent risks” in how AI is applied rather than in the technology itself. Continue reading Germany, France and Italy Strike AI Deal, Pushing EU Forward
By Paula Parisi, June 26, 2023
Senate Majority Leader Chuck Schumer unveiled his approach toward regulating artificial intelligence, beginning with nine listening sessions to explore topics including AI’s impact on the job market, copyright, national security and “doomsday scenarios.” Schumer’s plan — the SAFE (Security, Accountability, Foundations, Explainability) Innovation framework — isn’t proposed legislation, but a discovery roadmap. Set to begin in September, the panels will draw on members of industry, academia and civil society. “Experts aren’t even sure which questions policymakers should be asking,” said Schumer of the learning curve. “In many ways, we’re starting from scratch.” Continue reading Schumer Shares Plan for SAFE AI Senate Listening Sessions
By Paula Parisi, June 16, 2023
The European Parliament on Wednesday took a major step to legislate artificial intelligence, passing a draft of the AI Act, which puts restrictions on many of what are believed to be the technology’s riskiest uses. The EU has been leading the world in advancing AI regulation, and observers are already citing this developing law as a model framework for global policymakers eager to place guardrails on this rapidly advancing technology. Among the Act’s key tenets: it will dramatically curtail use of facial recognition software and require AI firms such as OpenAI to disclose more about their training data. Continue reading European Union Takes Steps to Regulate Artificial Intelligence
By Paula Parisi, June 1, 2023
Mitigating the risk of extinction due to AI should be as much a global priority as pandemics and nuclear war, according to the non-profit Center for AI Safety, which this week released a warning that artificial intelligence systems may pose an existential threat to humanity. Among the more than 350 executives, researchers and engineers who signed the statement are the CEOs of three leading AI firms: OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and Anthropic’s Dario Amodei. The statement comes as rapid advancements in large language models raise fears of societal disruption through job loss and widespread misinformation. Continue reading Industry Leaders Caution That AI Presents ‘Risk of Extinction’
By Paula Parisi, May 23, 2023
Leaders at the G7 Summit in Hiroshima, Japan, are calling for discussions that could lead to global standards and regulations for generative AI, with the aim of ensuring responsible use of the technology. The leaders of the world’s largest economies — which in addition to host nation Japan include Canada, France, Germany, Italy, the UK and the U.S., joined by the EU — expressed the goal of forming a G7 working group to establish by the end of the year a “Hiroshima AI process” for discussion of uniform policies for dealing with AI technologies including chatbots and image generators. Continue reading G7 Leaders Call for Global AI Standards at Hiroshima Summit
By Paula Parisi, April 25, 2023
The European Union, which has been working on artificial intelligence legislation for the past two years, is playing last-minute catch-up with rapidly evolving technology as it retools a final draft law for adoption, possibly by the end of the year. While the European Council in December thought it had completed its framework in all but the details, that version largely deferred attaching specific rules to generative AI. The technology’s explosive growth since then has triggered a movement among member states to add those guardrails, along with rules for general-purpose AI. Continue reading EU Considers Technology Updates for Next Draft of the AI Act
By Paula Parisi, October 6, 2022
The White House has issued a “blueprint” for consumer protections with regard to artificial intelligence. Aimed at guiding federal agencies while setting the bar for future legislation, the voluntary directive offers five areas of focus — safety, algorithmic discrimination protection, data privacy, notice and human alternatives — and a section on applying the rules. “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public,” begins the document, which says such tools are “too often used” to limit opportunities and prevent access to critical resources or services. Continue reading White House Creates a ‘Blueprint’ of AI Rights for Consumers