Amazon, Google, Microsoft and OpenAI Join the EU’s AI Pact

The European Commission has released a list of more than 100 companies that have become signatories to the EU’s AI Pact. While Amazon, Google, Microsoft and OpenAI are among them, Apple and Meta are not. The voluntary AI Pact is intended to elicit commitments on responsible AI deployment during the period before the legally binding AI Act takes full effect. The pact focuses on transparency in three core areas: internal AI governance, mapping of high-risk AI systems, and promoting AI literacy and awareness among staff to support ethical development. It is aimed at “relevant stakeholders” across industry, civil society and academia.

New Microsoft Safety Tools Fix AI Flubs, Detect Proprietary IP

Microsoft has released a suite of “Trustworthy AI” features that address concerns about AI security and reliability. The four new capabilities include Correction, a content detection upgrade in Microsoft Azure that “helps fix hallucination issues in real time before users see them.” Embedded Content Safety lets customers run Azure AI Content Safety directly on devices where cloud connectivity is intermittent or unavailable, while two new filters flag AI output that reproduces protected material. Additionally, a transparency safeguard that gives the company’s AI assistant, Microsoft 365 Copilot, specific “web search query citations” is coming soon.
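
Under the hood, Correction builds on Azure AI Content Safety’s groundedness detection, which checks a model’s answer against supplied source documents before the answer reaches the user. Below is a minimal Python sketch of that flow; the endpoint path, API version and request fields are assumptions for illustration rather than the documented API surface.

```python
# Illustrative sketch only -- the endpoint path, API version, and field names
# below are assumptions, not the documented Azure AI Content Safety contract.
import os
import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]

def check_and_correct(model_answer: str, grounding_sources: list[str]) -> dict:
    """Ask the service whether an answer is grounded in the sources and,
    if supported, return a corrected version before showing it to users."""
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",  # hypothetical path
        params={"api-version": "2024-02-15-preview"},          # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json={
            "text": model_answer,
            "groundingSources": grounding_sources,
            "correction": True,  # assumed flag enabling the new Correction behavior
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # expected to flag ungrounded spans and suggest a fix
```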

Cloudflare Tool Can Prevent AI Bots from Scraping Websites

Cloudflare has released AI Audit, a free set of tools designed to help websites analyze and control how their content is used by artificial intelligence models. Cloudflare describes the feature as “one-click blocking” of unauthorized AI scraping and says it will also make it easier to identify the content bots scan most, so site owners can wall it off and negotiate payment in exchange for access. Looking toward a sustainable model for its clients, Cloudflare is also creating a marketplace where sites can negotiate fees with AI companies based on audit data showing how crawlers access their content.
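
Conceptually, blocking AI scrapers comes down to recognizing crawler traffic (most AI bots identify themselves via the User-Agent header) and refusing or logging it. The sketch below shows that idea as a simple origin-server filter in Python with Flask; it is a stand-in for illustration, not how Cloudflare’s edge-level AI Audit is implemented, and the crawler list is illustrative rather than exhaustive.

```python
# Minimal sketch of blocking AI crawlers by User-Agent at the origin server.
# Cloudflare's AI Audit operates at the network edge; this is only a
# simplified illustration of the underlying idea.
from flask import Flask, abort, request

app = Flask(__name__)

# Common AI crawler User-Agent substrings (illustrative, not exhaustive).
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider")

@app.before_request
def block_ai_bots():
    ua = request.headers.get("User-Agent", "")
    if any(bot in ua for bot in AI_CRAWLERS):
        # A real deployment might log the hit for audit or fee-negotiation
        # purposes instead of (or in addition to) blocking it outright.
        abort(403)

@app.route("/")
def index():
    return "Hello, human readers."
```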

Google Unveils Gemini-Powered Ad Features and AI Image ID

AI-powered ad campaigns “are continuing to deliver big results for businesses large and small,” according to Google, which has put Gemini to work for Google Ads. At the DMEXCO digital marketing event in Cologne, the company announced a new suite of Gemini-powered tools aimed at improving the experience by providing additional insights and more control over where and how marketing assets are deployed globally using Google Ads. For starters, Gemini’s “conversational experience” for search campaigns will expand its language palette, making auto-generated headlines and images available in German, French and Spanish in the months ahead.

OpenAI Bestows Independent Oversight on Safety Committee

The OpenAI board’s Safety and Security Committee will become an independent board oversight committee, chaired by Zico Kolter, machine learning department chair at Carnegie Mellon University. The committee will be responsible for “the safety and security processes guiding OpenAI’s model deployment and development.” Three OpenAI board members segue from their current SSC roles to the new committee: Quora founder Adam D’Angelo, former Sony Corporation EVP Nicole Seligman and erstwhile NSA chief Paul Nakasone. OpenAI is currently putting together a new funding round that reportedly aims to value the company at $150 billion.

U.S. and Europe Sign the First Legally Binding Global AI Treaty

The first legally binding international treaty on artificial intelligence was signed last week by the countries that negotiated it, including the United States, United Kingdom and European Union members. The Council of Europe Framework Convention on Artificial Intelligence is “aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.” Drawn up by the Council of Europe (COE), an international human rights organization, the treaty was signed at the COE’s Conference of Ministers of Justice in Lithuania. Other signatories include Israel, Iceland, Norway, the Republic of Moldova and Georgia.

Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’

In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The information is now available on the web and in the Claude iOS and Android apps. The prompts are instruction sets that reveal what the models can and cannot do. Anthropic says it will regularly update the information, emphasizing that evolving system prompts do not affect the API. Examples of Claude’s prompts include “Claude cannot open URLs, links, or videos” and, when dealing with images, “avoid identifying or naming any humans.”
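
The published prompts govern Claude in the consumer apps; API callers set (or omit) a system prompt themselves, which is why prompt updates do not affect the API. A brief sketch using the Anthropic Python SDK’s Messages API is below; the model ID shown is the Claude 3.5 Sonnet snapshot available at the time of writing and may change.

```python
# Sketch using the Anthropic Python SDK's Messages API: API callers supply
# their own system prompt via the `system` parameter, independent of the
# published prompts that apply to the Claude apps.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=512,
    system="You are a concise assistant. Do not open URLs, links, or videos.",
    messages=[{"role": "user", "content": "Summarize why system prompts matter."}],
)
print(message.content[0].text)
```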

YouTube Tests Expanded Community Fact-Checking for Video

YouTube, which began testing crowdsourced fact-checking in June, is now expanding the experiment by inviting users to try the feature. Likened to the Community Notes accountability method introduced by Twitter and continued under X, YouTube’s as-yet-unnamed feature lets users provide context and corrections to posts that might be misleading or false. “You can sign up to submit notes on videos you find inaccurate or unclear,” YouTube explains, adding that “after submission, your note is reviewed and rated by others.” Notes widely rated as helpful “may be published and appear below the video.”

Latest Gemma 2 Models Emphasize Security and Performance

Google has unveiled three additions to its Gemma 2 family of compact yet powerful open AI models, emphasizing safety and transparency. Gemma 2 2B is a lightweight 2.6-billion-parameter addition to the line, with built-in improvements in safety and performance. Built on Gemma 2, ShieldGemma is a suite of safety content classifier models that “filter the input and outputs of AI models and keep the user safe.” Interpretability tool Gemma Scope offers what Google calls “unparalleled insight into our models’ inner workings.”
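
For developers who want to try the smallest new model, a minimal sketch of loading it with Hugging Face Transformers follows; the “google/gemma-2-2b-it” model ID reflects the instruction-tuned checkpoint published on the Hugging Face Hub, and access may require accepting the Gemma license and authenticating.

```python
# Minimal sketch of loading the 2B-class Gemma 2 checkpoint with Hugging Face
# Transformers. Accepting the Gemma license on the Hub may be required before
# the weights can be downloaded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # instruction-tuned variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Explain what a content-safety classifier does.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```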

Federal Policy Specifies Guidelines for Risk Management of AI

The White House is rolling out a new AI policy across the federal government that will be implemented by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require every federal agency to designate a senior leader to oversee its use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on “a core component” of President Biden’s AI Executive Order (EO), issued in October. Federal agencies also reported completing the 150-day actions tasked by the EO.

YouTube Adds GenAI Labeling Requirement for Realistic Video

YouTube has added new rules requiring those uploading realistic-looking videos that are “made with altered or synthetic media, including generative AI” to label them using a new tool in Creator Studio. The new labeling “is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube says, listing examples of content that requires disclosure, including using the “likeness of a realistic person” (voice as well as image), “altering footage of real events or places” and “generating realistic scenes” of fictional major events, “like a tornado moving toward a real town.”

EU Lawmakers Pass AI Act, World’s First Major AI Regulation

The European Union has passed the Artificial Intelligence Act, becoming the first global entity to enact a comprehensive law regulating AI’s development and use. Member states agreed on the framework in December 2023, and it was adopted Wednesday by the European Parliament with 523 votes in favor, 46 against and 49 abstentions. The legislation establishes what are being called “sweeping rules” for those building AI as well as those who deploy it. The rules, which will take effect gradually, introduce new risk assessments, ban AI uses deemed to pose unacceptable risk, and mandate transparency requirements.

Researchers Call for Safe Harbor for the Evaluation of AI Tools

Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that would allow them to conduct “good-faith” evaluations of various AI products and services without fear of reprisal. As of last week, more than 300 researchers, academics, creatives, journalists and legal professionals had signed an open letter calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems that, the signatories say, remain shrouded in opaque rules and secrecy even though millions of consumers are already using them.

EU Makes Provisional Agreement on Artificial Intelligence Act

The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western democracy to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set.

Altman Reinstated as CEO of OpenAI, Microsoft Joins Board

Sam Altman has wasted no time since being rehired as CEO of OpenAI on November 22, four days after being fired. This week, the 38-year-old leader of one of the most influential artificial intelligence firms outlined his “immediate priorities” and announced a newly constituted “initial board” that includes a non-voting seat for investor Microsoft. The three voting members thus far are two newcomers, former Salesforce co-CEO Bret Taylor as chairman and former U.S. Treasury Secretary Larry Summers, along with returning member Adam D’Angelo, CEO of Quora. Mira Murati, interim CEO during Altman’s brief absence, returns to her role as CTO.