By Paula Parisi, October 25, 2024
Runway is launching Act-One, a motion capture system that uses video and voice recordings to map human facial expressions onto characters using the company’s latest model, Gen-3 Alpha. Runway calls it “a significant step forward in using generative models for expressive live action and animated content.” Unlike past facial capture techniques, which typically require complex rigging, Act-One is driven directly by the performance of an actor, requiring “no extra equipment,” making it more likely to capture and preserve an authentic, nuanced performance, according to the company. Continue reading Runway’s Act-One Facial Capture Could Be a ‘Game Changer’
By Paula Parisi, October 18, 2024
Anthropic, maker of the popular Claude AI chatbot, has updated its Responsible Scaling Policy (RSP), designed and implemented to mitigate the risks of advanced AI systems. The policy was introduced last year and has since been improved, with new protocols added to ensure AI models are developed and deployed safely as they grow more powerful. This latest update offers “a more flexible and nuanced approach to assessing and managing AI risks while maintaining our commitment not to train or deploy models unless we have implemented adequate safeguards,” according to Anthropic. Continue reading Anthropic Updates ‘Responsible Scaling’ to Minimize AI Risks
By Paula Parisi, October 16, 2024
OpenAI has announced Swarm, an experimental framework that coordinates networks of AI agents, and, true to its name, the news has kicked over a hornet’s nest of contentious debate about the ethics of artificial intelligence and the future of enterprise automation. OpenAI emphasizes that Swarm is not an official product and says that, though it has shared the code publicly, it has no intention of maintaining it. “Think of it more like a cookbook,” OpenAI engineer Shyamal Anadkat said in a social media post, calling it “code for building simple agents.” Continue reading OpenAI Tests Open-Source Framework for Autonomous Agents
By Paula Parisi, September 27, 2024
The European Commission has released a list of more than 100 companies that have become signatories to the EU’s AI Pact. While Google, Microsoft and OpenAI are among them, Apple and Meta are not. The voluntary AI Pact is aimed at eliciting policies on AI deployment during the period before the legally binding AI Act takes full effect. The EU AI Pact focuses on transparency in three core areas: internal AI governance, mapping of high-risk AI systems, and promoting AI literacy and awareness among staff to support ethical development. It is aimed at “relevant stakeholders” across industry, civil society and academia. Continue reading Amazon, Google, Microsoft and OpenAI Join the EU’s AI Pact
By Paula Parisi, September 18, 2024
The OpenAI board’s Safety and Security Committee will become an independent board oversight committee, chaired by Zico Kolter, machine learning department chair at Carnegie Mellon University. The committee will be responsible for “the safety and security processes guiding OpenAI’s model deployment and development.” Three OpenAI board members will segue from their current SSC roles to the new committee: Quora founder Adam D’Angelo, former Sony Corporation EVP Nicole Seligman and erstwhile NSA chief Paul Nakasone. OpenAI is currently putting together a new funding round that reportedly aims to value the company at $150 billion. Continue reading OpenAI Bestows Independent Oversight on Safety Committee
By Paula Parisi, September 6, 2024
OpenAI co-founder and former chief scientist Ilya Sutskever, who exited the company in May after a power struggle with CEO Sam Altman, has raised $1 billion for his new venture, Safe Superintelligence (SSI). The cash infusion from major Silicon Valley venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel and NFDG has resulted in a $5 billion valuation for the startup. As its name implies, SSI is focused on developing artificial intelligence that does not pose a threat to humanity, a goal that will be pursued “in a straight shot” with “one product,” Sutskever has stated. Continue reading Safe Superintelligence Raises $1 Billion to Develop Ethical AI
By Paula Parisi, August 29, 2024
In a move toward increased transparency, San Francisco-based AI startup Anthropic has published the system prompts for three of its most recent large language models: Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku. The information is now available on the web and in the Claude iOS and Android apps. The prompts are instruction sets that reveal what the models can and cannot do. Anthropic says it will regularly update the information, emphasizing that evolving system prompts do not affect the API. Examples of Claude’s prompts include “Claude cannot open URLs, links, or videos” and, when dealing with images, “avoid identifying or naming any humans.” Continue reading Anthropic Publishes Claude Prompts, Sharing How AI ‘Thinks’
By Paula Parisi, July 15, 2024
OpenAI has partnered with the Los Alamos National Laboratory to study the ways artificial intelligence frontier models can assist with scientific research in an active lab environment. Established in 1943, the New Mexico facility is best known as home to the Manhattan Project and the development of the world’s first atomic bomb. It currently focuses on national security challenges under the direction of the Department of Energy. As part of the new partnership, the lab will work with OpenAI to produce what it describes as a first-of-its-kind study on the impact of artificial intelligence and biosecurity. Continue reading OpenAI Teams with Los Alamos for Frontier Model Research
By Paula Parisi, July 9, 2024
Cloudflare has a new tool that can block AI from scraping a website’s content for model training. The no-code feature is available even to customers on the free tier. “Declare your ‘AIndependence’” by blocking AI bots, scrapers and crawlers with a single click, the San Francisco-based company urged last week, simultaneously releasing a chart of frequent crawlers by “request volume” on websites using Cloudflare. The ByteDance-owned Bytespider was number one, presumably gathering training data for its large language models “including those that support its ChatGPT rival, Doubao,” Cloudflare says. Amazonbot, ClaudeBot and GPTBot rounded out the top four. Continue reading Cloudflare Blocking Web Bots from Scraping AI Training Data
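Cloudflare’s block is applied with a single click at the network edge, but the same crawlers also publish user-agent tokens that a site can refuse in its robots.txt. A minimal sketch, assuming the bots honor the convention (the user-agent names below match those the crawler operators document, but robots.txt compliance is voluntary):

```
# robots.txt — ask AI training crawlers to stay away
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: Amazonbot
Disallow: /
```

Because the advisory file depends on crawler cooperation, Cloudflare positions its edge-level block as the stronger option: it drops the requests regardless of whether a bot reads robots.txt.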
By Paula Parisi, June 21, 2024
Ilya Sutskever, who last month exited his post as chief scientist at OpenAI after a highly publicized power struggle with CEO Sam Altman, has launched a new AI company, Safe Superintelligence Inc. Sutskever’s partners in the new venture are his former OpenAI colleague Daniel Levy and Daniel Gross, who founded the AI startup Cue, which was acquired by Apple, where Gross went on to hold an AI leadership role. “Building safe superintelligence (SSI) is the most important technical problem of our time,” the trio posted on the company’s one-page website, stating its goal is to “scale in peace.” Continue reading Sutskever Targets Safe Superintelligence with New Company
By Paula Parisi, June 6, 2024
ElevenLabs has launched its text-to-sound generator Sound Effects for all users, available now at the company’s website. The new AI tool can create audio effects, short instrumental tracks, soundscapes and even character voices. Sound Effects “has been designed to help creators — including film and television studios, video game developers, and social media content creators — generate rich and immersive soundscapes quickly, affordably and at scale,” according to the startup, which developed the tool in partnership with Shutterstock, using its library of licensed audio tracks. Continue reading ElevenLabs Launches an AI Tool for Generating Sound Effects
By Paula Parisi, May 15, 2024
IBM has released a family of its Granite AI models to the open-source community. The series of decoder-only Granite code models are purpose-built to write computer code for enterprise developers, with training in 116 programming languages. These Granite models range in size from 3 billion to 34 billion parameters, in base and instruction-tuned variants. They offer a range of uses, from modernizing older code with new languages to optimizing programs for on-device memory constraints, such as those encountered on mobile devices. In addition to generation, the models can repair and explain code. Continue reading IBM Introduces Granite LLMs for Enterprise Code Developers
By Paula Parisi, May 15, 2024
France has been courting Big Tech, and Microsoft and Amazon are among the first to express interest. Microsoft has committed $4.3 billion to expand cloud and AI infrastructure there, sharing plans to bring as many as 25,000 advanced GPUs to France by the close of 2025. The software giant will also train one million people for AI and data jobs while supporting 2,500 AI startups over the next three years. Meanwhile, Amazon announced that it would invest up to $1.3 billion to expand its existing footprint of 35 logistics facilities in the country. The deals were announced Monday during the Choose France summit hosted by French President Emmanuel Macron. Continue reading Microsoft, Amazon Commit to Expanding Operations in France
By ETCentric Staff, April 3, 2024
The United States has entered into an agreement with the United Kingdom to collaboratively develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Coming after commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the countries “working to align their scientific approaches” and to accelerate evaluations for AI models, systems and agents. Continue reading U.S. and UK Form Partnership to Accelerate AI Safety Testing
By ETCentric Staff, April 2, 2024
OpenAI has debuted a new text-to-voice generation platform called Voice Engine, available in limited access. Voice Engine can generate a synthetic voice from a 15-second clip of someone’s voice. The synthetic voice can then read a provided text, even translating to other languages. For now, only a handful of companies are using the tech under a strict usage policy as OpenAI grapples with the potential for misuse. “These small scale deployments are helping to inform our approach, safeguards, and thinking about how Voice Engine could be used for good across various industries,” OpenAI explained. Continue reading OpenAI Voice Cloning Tool Needs Only a 15-Second Sample