By ETCentric Staff, March 11, 2024
Alibaba is touting a new artificial intelligence system that can animate portraits, making people sing and talk in realistic fashion. Researchers at the Alibaba Group’s Institute for Intelligent Computing developed the generative video framework, calling it EMO, short for Emote Portrait Alive. Input a single reference image along with “vocal audio,” as in talking or singing, and “our method can generate vocal avatar videos with expressive facial expressions and various head poses,” the researchers say, adding that EMO can generate videos of any duration, “depending on the length of video input.” Continue reading Alibaba’s EMO Can Generate Performance Video from Images
By ETCentric Staff, February 22, 2024
“What if you could describe a sound and generate it with AI?” asks startup ElevenLabs, which set out to do just that, and says it has succeeded. The two-year-old company explains it “used text prompts like ‘waves crashing,’ ‘metal clanging,’ ‘birds chirping,’ and ‘racing car engine’ to generate audio.” Best known for using machine learning to clone voices, the AI firm founded by Google and Palantir alums has yet to make its new text-to-sound model publicly available, but began teasing it by releasing online demos this week. Some see the technology as a natural complement to the latest wave of image generators. Continue reading ElevenLabs Promotes Its Latest Advances in AI Audio Effects
By Paula Parisi, December 12, 2023
The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western democracy to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set. Continue reading EU Makes Provisional Agreement on Artificial Intelligence Act
By Paula Parisi, November 8, 2023
CBS is launching a unit charged with identifying misinformation and avoiding deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation, CBS News and Stations. CBS plans to hire forensic journalists and will expand training and invest in technologies to assist them in their role. In addition to flagging deepfakes, CBS News Confirmed will also report on them. Continue reading CBS News Confirmed: New Fact-Checking Unit Examining AI
By Paula Parisi, November 1, 2023
President Biden has signed a far-ranging executive order establishing guardrails for artificial intelligence. Companies are now required to report to the federal government on risks related to their AI systems should they fall into the hands of terrorists or be used for weapons of mass destruction. The order also attempts to mitigate the dangers of deepfakes that could be used to manipulate elections or defraud consumers. “Deepfakes use AI-generated audio and video to smear reputations, spread fake news and commit fraud,” Biden said as he signed the order at the White House. Continue reading President Biden Signs Executive Order to Contain Risks of AI
By Paula Parisi, September 27, 2023
Spotify is using AI to drive podcast language translation in what sounds like the podcaster’s own voice, which has obvious implications for film and television dubbing. Working with podcast notables including Dax Shepard, Monica Padman and Bill Simmons, Spotify used AI to mimic their voices in Spanish, French and German for several episodes. The proprietary Spotify technology uses OpenAI’s new text-to-speech voice-generation technology as well as its open-source Whisper speech recognition system, which transcribes spoken words into text. The result, Spotify says, is “more authentic” and “more personal and natural” than traditional dubbing. Continue reading Spotify Uses AI to Copy Host Voices for Podcast Translations
By Paula Parisi, September 22, 2023
OpenAI has released the DALL-E 3 generative AI imaging platform in research preview. The latest iteration features more safety options and integrates with OpenAI’s ChatGPT, currently driven by the now seasoned large language model GPT-4. That is the ChatGPT version to which Plus subscribers and enterprise customers have access — the same users who will be able to preview DALL-E 3. The free chatbot is built around GPT-3.5. OpenAI says GPT-4 gives DALL-E better contextual understanding, an area where even version 2 evidenced some glaring comprehension glitches. Continue reading OpenAI’s Latest Version of DALL-E Integrates with ChatGPT
By Paula Parisi, September 1, 2023
Google DeepMind and Google Cloud have teamed to launch what they claim is an indelible AI watermark tool, which, if it works, would mark an industry first. Called SynthID, the technique for identifying AI-generated images is being launched in beta. The technology embeds its digital watermark “directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification,” according to DeepMind. SynthID is being released to a limited number of Google’s Vertex AI customers using Imagen, a Google AI model that generates photorealistic images from text. Continue reading Google Introduces an AI Watermark That Cannot Be Removed
By Paula Parisi, June 8, 2023
The European Union wants deepfakes and other AI-generated content labeled, and is pressing signatories to its Code of Practice on Online Disinformation to adopt technology that will clearly identify output that is generated or manipulated by machines. “The new AI technologies can be a force for good” that offer “new avenues for increased efficiency and creative expression. But, as always, we have to mention the dark side,” EU values and transparency commissioner Vera Jourova said, citing “new risks and the potential for negative consequences for society.” Continue reading EU Urges Tech Companies to Label All AI-Generated Content
By Paula Parisi, June 8, 2023
Deezer, the global music streaming platform based in France, claims to have developed a technique for flagging — and potentially deleting — songs that use artificial intelligence to simulate the performance of popular singers. “We need to take a stand now,” Deezer CEO Jeronimo Folgueira said in an interview. “We are at a pivotal moment in music.” His company plans to “weed out illegal and fraudulent content” in an effort to protect artists. Deezer’s detection technology is still under development. It relies on AI, which Folgueira said he is not against if it is used ethically. Continue reading Deezer Says Its Tech Can Flag and Delete Deepfake AI Tunes
By Paula Parisi, June 1, 2023
Twitter is emphasizing crowdsourced moderation. The launch of Community Notes for images in posts seeks to address instances where morphed or AI-generated images are posted. The idea is to expose altered content before it goes viral, as did the image of Pope Francis wearing a Balenciaga puffy coat in March and the fake image of an explosion at the Pentagon in May. Twitter says Community Notes about an image will appear with “recent and future” posts containing the graphic in question. Currently in the test phase, the feature works with tweets featuring a single image. Continue reading Twitter Community Notes Aim to Curb Impact of Fake Images
By Paula Parisi, April 24, 2023
There’s been a lot of noise recently about music generated by artificial intelligence tools. The clamor is on multiple fronts: generative mimicry of specific artists’ vocal styles, the potential to put Muzak-style background tunesmiths out of business with potentially cheaper alternatives, and the particulars of takedown orders. The matter came to a head this month after generative AI vocals prompted to sound like Drake and The Weeknd performed a song called “Heart on My Sleeve,” written and produced by a TikTok user. The tune quickly went viral, raising numerous concerns. Continue reading Music Industry Contends with Artificial Intelligence Disruption
By Paula Parisi, April 6, 2023
After many years of academia leading the way in the development of artificial intelligence, the tides have shifted and industry has taken over, according to the 2023 AI Index, a report created by Stanford University with help from companies including Google, Anthropic and Hugging Face. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” the report says. The shift in influence is attributed mainly to the large resource demands — in staff, computing power and training data — required to create state-of-the-art AI systems. Continue reading Report: Enterprise Supplants Academia as Driving Force of AI
By Paula Parisi, December 9, 2022
The European Council (EU’s governing body) has adopted a position on the Artificial Intelligence Act, which aims to ensure that AI systems used or marketed in the European Union are safe and respect existing laws on fundamental rights. In addition to defining artificial intelligence, the European Council’s general approach specifies prohibited AI practices, calls for risk level allocation, and stipulates ways to deal with those risks. The Council — composed of EU heads of state — becomes the first co-legislator to complete this initial step, with the European Parliament expected to offer its version of the AIA in the first half of 2023. Continue reading European Council Weighs in on the Artificial Intelligence Act
By Paula Parisi, November 22, 2022
Intel has debuted FakeCatcher, touting it as the first real-time deepfake detector, capable of determining whether digital video has been altered to change context or meaning. Intel says FakeCatcher has a 96 percent accuracy rate and returns results in milliseconds by analyzing the “blood flow” of pixel patterns, a process called photoplethysmography (PPG) that Intel borrowed from medical research. The company says potential use cases include social media platforms screening to prevent uploads of harmful deepfake videos and helping global news organizations avoid inadvertent amplification of deepfakes. Continue reading Intel Promises 96 Percent Accuracy with New Deepfake Filter
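The PPG idea is that live skin shows tiny periodic color shifts with each heartbeat, which naive synthetic faces lack. The toy sketch below — purely illustrative, not Intel’s actual method, with all function names invented for this example — averages the green channel of each video frame into a signal and checks whether its dominant frequency falls in a plausible human heart-rate band.

```python
import numpy as np

def ppg_signal(frames):
    """Mean green-channel intensity per frame (frames: N x H x W x 3)."""
    return frames[:, :, :, 1].mean(axis=(1, 2))

def dominant_freq_hz(signal, fps):
    """Frequency (Hz) of the strongest non-DC spectral component."""
    centered = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

def looks_like_pulse(frames, fps, lo=0.7, hi=4.0):
    """True if the dominant frequency sits in a heart-rate band (~42-240 bpm)."""
    f = dominant_freq_hz(ppg_signal(frames), fps)
    return lo <= f <= hi

# Synthetic "real" clip: a faint 1.2 Hz (72 bpm) flicker in the green channel.
fps, n = 30, 300
t = np.arange(n) / fps
frames = np.full((n, 8, 8, 3), 128.0)
frames[:, :, :, 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(looks_like_pulse(frames, fps))  # True: dominant peak near 1.2 Hz
```

A production system would track the face region, compensate for motion and lighting, and classify the spatial pattern of the signal rather than apply a simple frequency threshold.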