Safe Superintelligence Raises $1 Billion to Develop Ethical AI

OpenAI co-founder and former chief scientist Ilya Sutskever, who exited the company in May after a power struggle with CEO Sam Altman, has raised $1 billion for his new venture, Safe Superintelligence (SSI). The cash infusion from major Silicon Valley venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel and NFDG has resulted in a $5 billion valuation for the startup. As its name implies, SSI is focused on developing artificial intelligence that does not pose a threat to humanity, a goal that will be pursued “in a straight shot” with “one product,” Sutskever has stated.

Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’

In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The information is now available on the web and in the Claude iOS and Android apps. The prompts are instruction sets that reveal what the models can and cannot do. Anthropic says it will regularly update the information, emphasizing that evolving system prompts do not affect the API. Examples of Claude’s prompts include “Claude cannot open URLs, links, or videos” and, when dealing with images, “avoid identifying or naming any humans.”
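A system prompt sits ahead of every conversation and steers the model before any user input arrives. The published prompts govern Claude’s own apps; API callers supply their own. A minimal sketch using the anthropic Python SDK (the model ID and prompt text here are illustrative, not Anthropic’s published prompt):

```python
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

# The system prompt constrains behavior for the whole conversation,
# in the spirit of the published rules quoted above.
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model ID
    max_tokens=300,
    system="You are a concise assistant. Do not open URLs, links, or videos.",
    messages=[{"role": "user", "content": "Why do system prompts matter?"}],
)
print(response.content[0].text)
```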

OpenAI Teams with Los Alamos for Frontier Model Research

OpenAI has partnered with the Los Alamos National Laboratory to study the ways artificial intelligence frontier models can assist with scientific research in an active lab environment. Established in 1943, the New Mexico facility is best known as home to the Manhattan Project and the development of the world’s first atomic bomb. It currently focuses on national security challenges under the direction of the Department of Energy. As part of the new partnership, the lab will work with OpenAI to produce what it describes as a first-of-its-kind study on the impact of artificial intelligence on biosecurity.

Cloudflare Blocking Web Bots from Scraping AI Training Data

Cloudflare has a new tool that can block AI from scraping a website’s content for model training. The no-code feature is available even to customers on the free tier. “Declare your ‘AIndependence’” by blocking AI bots, scrapers and crawlers with a single click, the San Francisco-based company urged last week, simultaneously releasing a chart of frequent crawlers by “request volume” on websites using Cloudflare. The ByteDance-owned Bytespider was number one, presumably gathering training data for its large language models “including those that support its ChatGPT rival, Doubao,” Cloudflare says. Amazonbot, ClaudeBot and GPTBot rounded out the top four.
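Cloudflare applies the block at its network edge with a single click, but the core mechanism is recognizable: match each request’s User-Agent against a list of known AI crawlers. A minimal sketch of that idea as Python WSGI middleware (the blocklist names only the crawlers cited above; Cloudflare’s managed rule also draws on IP and behavioral signals):

```python
# Illustrative blocklist: the AI crawlers named above, matched by User-Agent.
AI_CRAWLERS = ("Bytespider", "Amazonbot", "ClaudeBot", "GPTBot")

def block_ai_bots(app):
    """WSGI middleware that returns 403 Forbidden to known AI crawlers."""
    def wrapper(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(bot.lower() in ua for bot in AI_CRAWLERS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"AI crawling not permitted."]
        return app(environ, start_response)
    return wrapper
```

Wrapping any WSGI application (`app = block_ai_bots(app)`) is enough to turn away self-identifying bots; crawlers that spoof their User-Agent are the reason edge providers layer on additional signals.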

Sutskever Targets Safe Superintelligence with New Company

Ilya Sutskever, who last month exited his post as chief scientist at OpenAI after a highly publicized power struggle with CEO Sam Altman, has launched a new AI company, Safe Superintelligence Inc. Sutskever’s partners in the new venture are his former OpenAI colleague Daniel Levy and Daniel Gross, who co-founded the startup Cue, which was acquired by Apple, where Gross went on to lead AI efforts. “Building safe superintelligence (SSI) is the most important technical problem of our time,” the trio posted on the company’s one-page website, stating its goal is to “scale in peace.”

ElevenLabs Launches an AI Tool for Generating Sound Effects

ElevenLabs has launched its text-to-sound generator Sound Effects for all users, available now at the company’s website. The new AI tool can create audio effects, short instrumental tracks, soundscapes and even character voices. Sound Effects “has been designed to help creators — including film and television studios, video game developers, and social media content creators — generate rich and immersive soundscapes quickly, affordably and at scale,” according to the startup, which developed the tool in partnership with Shutterstock, using its library of licensed audio tracks.
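The tool is driven by a plain-text description of the desired sound. A hedged sketch of calling it over HTTP with Python requests (the endpoint path and payload fields are assumptions; verify against ElevenLabs’ current API documentation before relying on them):

```python
import requests

API_KEY = "your-elevenlabs-api-key"  # placeholder credential

# Assumed endpoint and payload shape for the sound-generation API;
# check ElevenLabs' published docs, as both may differ.
resp = requests.post(
    "https://api.elevenlabs.io/v1/sound-generation",
    headers={"xi-api-key": API_KEY},
    json={"text": "rain on a tin roof with distant thunder", "duration_seconds": 5},
    timeout=60,
)
resp.raise_for_status()
with open("rain.mp3", "wb") as f:
    f.write(resp.content)  # the response body is the rendered audio
```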

IBM Introduces Granite LLMs for Enterprise Code Developers

IBM has released a family of its Granite AI models to the open-source community. The series of decoder-only Granite code models is purpose-built to write computer code for enterprise developers and was trained on code spanning 116 programming languages. The models range in size from 3 billion to 34 billion parameters, in base and instruction-tuned variants. They cover a range of uses, from modernizing older code into newer languages to optimizing programs for on-device memory constraints, such as those encountered on mobile devices. In addition to generating code, the models can also repair and explain it.
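Because the models are open sourced, they load through the standard Hugging Face transformers API. A minimal completion sketch (the repository ID is an assumption; browse the ibm-granite organization on Hugging Face for the current list):

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID; see huggingface.co/ibm-granite for actual names.
model_id = "ibm-granite/granite-3b-code-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Base code models continue a code prefix, here completing a function body.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The instruction-tuned variants instead take natural-language requests (“explain this function,” “fix this bug”), which is how the repair and explanation uses above are typically exercised.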

Microsoft, Amazon Commit to Expanding Operations in France

France has been courting Big Tech, and Microsoft and Amazon are among the first to respond. Microsoft has committed $4.3 billion to expand cloud and AI infrastructure there, sharing plans to bring as many as 25,000 advanced GPUs to France by the close of 2025. The software giant will also train one million people for AI and data jobs and support 2,500 AI startups over the next three years. Meanwhile, Amazon announced that it would invest up to $1.3 billion to expand its existing footprint of 35 logistics facilities in the country. The deals were announced Monday during the Choose France summit hosted by French President Emmanuel Macron.

U.S. and UK Form Partnership to Accelerate AI Safety Testing

The United States has entered into an agreement with the United Kingdom to collaboratively develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Following commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the countries “working to align their scientific approaches” and accelerating evaluations of AI models, systems and agents.

OpenAI Voice Cloning Tool Needs Only a 15-Second Sample

OpenAI has debuted a new text-to-voice generation platform called Voice Engine, available in limited access. Voice Engine can generate a synthetic voice from a 15-second clip of someone’s voice. The synthetic voice can then read a provided text, even translating to other languages. For now, only a handful of companies are using the tech under a strict usage policy as OpenAI grapples with the potential for misuse. “These small scale deployments are helping to inform our approach, safeguards, and thinking about how Voice Engine could be used for good across various industries,” OpenAI explained.

Federal Policy Specifies Guidelines for Risk Management of AI

The White House is rolling out a new AI policy across the federal government that will be implemented by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require every federal agency to designate a senior leader to oversee its use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on “a core component” of President Biden’s AI Executive Order (EO), issued in October. Federal agencies reported completing the 150-day actions tasked by the EO.

Google GenAI Accelerator Launches with $20 Million in Grants

Google.org, the philanthropic arm of Alphabet’s Google, has launched a program to help fund nonprofits working on technology to support “high-impact applications of generative AI.” The Google.org Accelerator: Generative AI is a six-month program that kicks off with more than $20 million in grants for 21 nonprofits. Among them are Quill.org, which builds writing aids for students; Tabiya, which supports job seekers in low- and middle-income countries; and Benefits Data Trust, which helps low-income applicants access and enroll in public benefits. In addition to funds, the new unit provides mentorship, technical training and pro bono support from “a dedicated AI coach.”

UN Adopts Global AI Resolution Backed by U.S., 122 Others

The United Nations General Assembly on Thursday adopted a U.S.-led resolution to promote “safe, secure and trustworthy” artificial intelligence systems and their sustainable development for the benefit of all. The non-binding proposal, which was adopted without a formal vote, drew support from more than 122 co-sponsors, including China and India. It emphasizes “the respect, protection and promotion of human rights in the design, development, deployment and use” of responsible and inclusive AI. “The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms.

EU Lawmakers Pass AI Act, World’s First Major AI Regulation

The European Union has passed the Artificial Intelligence Act, becoming the first global entity to enact a comprehensive law regulating AI’s development and use. Member states agreed on the framework in December 2023, and it was adopted Wednesday by the European Parliament with 523 votes in favor, 46 against and 49 abstentions. The legislation establishes what are being called “sweeping rules” for those building AI as well as those who deploy it. The rules, which will take effect gradually, require new risk assessments, ban AI uses deemed to pose unacceptable risk, and mandate transparency requirements.

Researchers Call for Safe Harbor for the Evaluation of AI Tools

Artificial intelligence stakeholders are calling for safe harbor legal and technical protections that would let them conduct “good-faith” evaluations of various AI products and services without fear of reprisal. As of last week, more than 300 researchers, academics, creatives, journalists and legal professionals had signed an open letter calling on companies including Meta Platforms, OpenAI and Google to allow access for safety testing and red teaming of systems they say are shrouded in opaque rules and secrecy, even as millions of consumers already use them.