Amazon, Google, Microsoft and OpenAI Join the EU’s AI Pact

The European Commission has released a list of more than 100 companies that have become signatories to the EU’s AI Pact. While Amazon, Google, Microsoft and OpenAI are among them, Apple and Meta are not. The voluntary AI Pact is intended to elicit policies on AI deployment during the period before the legally binding AI Act takes full effect. The pact focuses on transparency in three core areas: internal AI governance, mapping of high-risk AI systems, and promoting AI literacy and awareness among staff to support ethical development. It is aimed at “relevant stakeholders” across industry, civil society and academia.

OpenAI Bestows Independent Oversight on Safety Committee

The OpenAI board’s Safety and Security Committee will become an independent board oversight committee, chaired by Zico Kolter, machine learning department chair at Carnegie Mellon University. The committee will be responsible for “the safety and security processes guiding OpenAI’s model deployment and development.” Three OpenAI board members segue from their current SSC roles to the new committee: Quora founder Adam D’Angelo, former Sony Corporation EVP Nicole Seligman and erstwhile NSA chief Paul Nakasone. OpenAI is currently putting together a new funding round that reportedly aims to value the company at $150 billion.

Safe Superintelligence Raises $1 Billion to Develop Ethical AI

OpenAI co-founder and former chief scientist Ilya Sutskever, who exited the company in May after a power struggle with CEO Sam Altman, has raised $1 billion for his new venture, Safe Superintelligence (SSI). The cash infusion from major Silicon Valley venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel and NFDG has resulted in a $5 billion valuation for the startup. As its name implies, SSI is focused on developing artificial intelligence that does not pose a threat to humanity, a goal that will be pursued “in a straight shot” with “one product,” Sutskever has stated.

Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’

In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The information is now available on the web and in the Claude iOS and Android apps. The prompts are instruction sets that reveal what the models can and cannot do. Anthropic says it will regularly update the information, emphasizing that evolving system prompts do not affect the API. Examples of Claude’s prompts include “Claude cannot open URLs, links, or videos” and, when dealing with images, “avoid identifying or naming any humans.”
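The published prompts govern Anthropic’s own Claude apps; developers calling the API supply their own system prompt with each request, which is why changes to the published prompts leave API behavior unchanged. A rough, hypothetical sketch of that distinction using the Anthropic Python SDK (the model ID and prompt text below are illustrative, not Anthropic’s published wording):

```python
# Minimal sketch: API callers pass their own system prompt per request,
# so updates to the prompts Anthropic publishes for its apps do not apply here.
# Requires the "anthropic" package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model ID
    max_tokens=256,
    # A caller-defined system prompt, similar in spirit to the published ones
    # (e.g., instructing the model not to identify people in images).
    system="You are a concise assistant. Do not identify or name any humans in images.",
    messages=[{"role": "user", "content": "Briefly explain what a system prompt does."}],
)
print(message.content[0].text)
```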

OpenAI Teams with Los Alamos for Frontier Model Research

OpenAI has partnered with the Los Alamos National Laboratory to study the ways artificial intelligence frontier models can assist with scientific research in an active lab environment. Established in 1943, the New Mexico facility is best known as home to the Manhattan Project and the development of the world’s first atomic bomb. It currently focuses on national security challenges under the direction of the Department of Energy. As part of the new partnership, the lab will work with OpenAI to produce what it describes as a first-of-its-kind study on artificial intelligence and biosecurity.

Cloudflare Blocking Web Bots from Scraping AI Training Data

Cloudflare has a new tool that can block AI from scraping a website’s content for model training. The no-code feature is available even to customers on the free tier. “Declare your ‘AIndependence’” by blocking AI bots, scrapers and crawlers with a single click, the San Francisco-based company urged last week, simultaneously releasing a chart of frequent crawlers by “request volume” on websites using Cloudflare. The ByteDance-owned Bytespider was number one, presumably gathering training data for its large language models “including those that support its ChatGPT rival, Doubao,” Cloudflare says. Amazonbot, ClaudeBot and GPTBot rounded out the top four.
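Cloudflare applies the block as a managed, one-click setting at its network edge, but the basic idea can be illustrated with a hypothetical origin-server sketch that refuses requests from the crawlers named above. This is a toy example of user-agent filtering, not Cloudflare’s implementation, which also draws on traffic analysis:

```python
# Hypothetical sketch: refuse to serve pages to known AI crawlers by user agent.
# Crawler names are taken from the Cloudflare chart mentioned above.
from flask import Flask, request, abort

AI_CRAWLERS = ("Bytespider", "Amazonbot", "ClaudeBot", "GPTBot")

app = Flask(__name__)

@app.before_request
def block_ai_crawlers():
    ua = request.headers.get("User-Agent", "")
    if any(bot.lower() in ua.lower() for bot in AI_CRAWLERS):
        abort(403)  # deny content to requests identifying as AI scrapers

@app.route("/")
def index():
    return "Hello, human visitors."

if __name__ == "__main__":
    app.run()
```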

Sutskever Targets Safe Superintelligence with New Company

Ilya Sutskever — who last month exited his post as chief scientist at OpenAI after a highly publicized power struggle with CEO Sam Altman — has launched a new AI company, Safe Superintelligence Inc. Sutskever’s partners in the new venture are his former OpenAI colleague Daniel Levy and Daniel Gross, who founded the AI startup Cue, which was acquired by Apple, where Gross continued in an AI leadership role. “Building safe superintelligence (SSI) is the most important technical problem of our time,” the trio posted on the company’s one-page website, stating its goal is to “scale in peace.”

ElevenLabs Launches an AI Tool for Generating Sound Effects

ElevenLabs has launched its text-to-sound generator Sound Effects for all users, available now at the company’s website. The new AI tool can create audio effects, short instrumental tracks, soundscapes and even character voices. Sound Effects “has been designed to help creators — including film and television studios, video game developers, and social media content creators — generate rich and immersive soundscapes quickly, affordably and at scale,” according to the startup, which developed the tool in partnership with Shutterstock, using its library of licensed audio tracks.

IBM Introduces Granite LLMs for Enterprise Code Developers

IBM has released a family of its Granite AI models to the open-source community. The series of decoder-only Granite code models is purpose-built to write computer code for enterprise developers and was trained on 116 programming languages. The models range in size from 3 billion to 34 billion parameters, in base and instruction-tuned variants. They cover a range of uses, from modernizing older code with newer languages to optimizing programs for on-device memory constraints, such as those of mobile devices. In addition to generating code, the models can repair and explain it.
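Because the code models are open source, they can presumably be run locally with standard tooling. The sketch below uses Hugging Face Transformers; the repository ID is an assumption and should be checked against IBM’s ibm-granite listings:

```python
# Hypothetical sketch of prompting a Granite code model with Hugging Face
# Transformers. The repo ID is assumed; adjust it to the actual ibm-granite listing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-instruct"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

# Print only the newly generated tokens, not the echoed prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```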

Microsoft, Amazon Commit to Expanding Operations in France

France has been pursuing Big Tech, and Microsoft and Amazon are among the first to express interest. Microsoft has committed $4.3 billion to expand cloud and AI infrastructure there, sharing plans to bring as many as 25,000 advanced GPUs to France by the close of 2025. The software giant will also train one million people for AI and data jobs while supporting 2,500 AI startups over the next three years. Meanwhile, Amazon announced that it would invest up to $1.3 billion to expand its existing footprint of 35 logistics facilities in the country. The deals were announced Monday during the Choose France summit hosted by French President Emmanuel Macron.

U.S. and UK Form Partnership to Accelerate AI Safety Testing

The United States has entered into an agreement with the United Kingdom to collaboratively develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Coming after commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the countries “working to align their scientific approaches” and accelerating evaluations for AI models, systems and agents.

OpenAI Voice Cloning Tool Needs Only a 15-Second Sample

OpenAI has debuted a new text-to-voice generation platform called Voice Engine, available in limited access. Voice Engine can generate a synthetic voice from a 15-second clip of someone’s voice. The synthetic voice can then read a provided text, even translating to other languages. For now, only a handful of companies are using the tech under a strict usage policy as OpenAI grapples with the potential for misuse. “These small scale deployments are helping to inform our approach, safeguards, and thinking about how Voice Engine could be used for good across various industries,” OpenAI explained.

Federal Policy Specifies Guidelines for Risk Management of AI

The White House is rolling out a new AI policy across the federal government that will be implemented by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require that all federal agencies have a senior leader overseeing their use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on “a core component” of President Biden’s AI Executive Order (EO), issued in October. Federal agencies reported completing the 150-day actions tasked by the EO.

Google GenAI Accelerator Launches with $20 Million in Grants

Google.org, the charitable arm of tech giant Alphabet, has launched a program to help fund non-profits working on technology to support “high-impact applications of generative AI.” The Google.org Accelerator: Generative AI is a six-month program that kicks off with more than $20 million in grants for 21 non-profits. Among them are student writing aid Quill.org; Tabiya, a job-seeker platform for low- to middle-income countries; and Benefits Data Trust, which helps low-income applicants access and enroll in public benefits. In addition to funds, the new unit provides mentorship, technical training and pro bono support from “a dedicated AI coach.”

UN Adopts Global AI Resolution Backed by U.S., 122 Others

The United Nations General Assembly on Thursday adopted a U.S.-led resolution to promote “safe, secure and trustworthy” artificial intelligence systems and their sustainable development for the benefit of all. The non-binding resolution, which was adopted without a formal vote, drew support from 122 co-sponsors, including China and India. It emphasizes “the respect, protection and promotion of human rights in the design, development, deployment and use” of responsible and inclusive AI. “The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms.