By Paula Parisi, October 29, 2024
Marking its first news deal in years, Meta Platforms entered into an agreement with Reuters to use its content to answer user questions posed to its Meta AI chatbot. The arrangement comes as Meta has been minimizing news content on its services in response to publisher demands for revenue sharing and regulatory criticism over misinformation. Terms of the partnership were not disclosed, nor were details provided as to whether Meta plans to use Reuters content for model training. Meta AI is available across its Facebook, WhatsApp, Instagram and Messenger services. Continue reading Meta, Reuters Sign Multi-Year AI Content Licensing Agreement
By Paula Parisi, August 19, 2024
The list of potential risks associated with artificial intelligence continues to grow. “Global AI adoption is outpacing risk understanding,” warns the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL), which has joined with the MIT multidisciplinary computer group FutureTech to compile the AI Risk Repository, a “living database” of more than 700 unique risks extracted from 43 source categories. Organized by cause, classifying “how, when and why these risks occur,” the repository comprises seven risk domains (for example, “misinformation”) and 23 subdomains (such as “false or misleading information”). Continue reading MIT’s AI Risk Assessment Database Debuts with 700 Threats
By Paula Parisi, August 19, 2024
Grok-2 and Grok-2 mini, the latest generative chatbots from Elon Musk’s xAI, create images with seemingly few guardrails. Early pictures of notable personalities such as Bill Gates, Donald Trump and Kamala Harris in questionable or compromising settings may not appear photorealistic to a trained eye, but many have been described as quite realistic nonetheless. Powered by the FLUX.1 AI model from Black Forest Labs, Grok-2 and Grok-2 mini are available in beta on X for Premium and Premium+ subscribers and will be coming to xAI’s enterprise API later this month, according to the company. Continue reading xAI’s Grok-2 Generates Realistic Images with Few Guardrails
By Paula Parisi, August 14, 2024
YouTube, which began testing crowdsourced fact-checking in June, is now expanding the experiment by inviting users to try the feature. Likened to the Community Notes accountability method introduced by Twitter and continued under X, YouTube’s as-yet-unnamed feature lets users provide context and corrections to posts that might be misleading or false. “You can sign up to submit notes on videos you find inaccurate or unclear,” YouTube explains, adding that “after submission, your note is reviewed and rated by others.” Notes widely rated as helpful “may be published and appear below the video.” Continue reading YouTube Tests Expanded Community Fact-Checking for Video
By Paula Parisi, July 30, 2024
Microsoft has begun rolling out Bing generative search, making it available for “a small percentage of user queries.” The company says it will solicit user feedback and undertake further testing prior to a broader rollout. Google began dabbling in what it called the Search Generative Experience last summer, then upped the ante by adding a search-optimized version of its Gemini model this spring. The journey was not without controversy, something Microsoft will surely try to avoid. Microsoft says its new AI-driven search functionality “combines the foundation of Bing’s search results with the power of large and small language models (LLMs and SLMs).” Continue reading Microsoft Testing Bing Generative Search for User Feedback
By Paula Parisi, July 2, 2024
Deepfake videos are becoming increasingly problematic, not only in spreading disinformation on social media but also in enterprise attacks. Now researchers at Drexel University College of Engineering say they have developed an advanced algorithm with a 98 percent accuracy rate in detecting deepfake videos. Named MISLnet for the school’s Multimedia and Information Security Lab, where it was developed, the algorithm uses machine learning to recognize and extract the “digital fingerprints” of video generators including Stable Video Diffusion, VideoCrafter and CogVideo. Continue reading Drexel Claims Its AI Has 98 Percent Rate Detecting Deepfakes
By Paula Parisi, June 20, 2024
Meta Platforms is publicly releasing five new AI models from its Fundamental AI Research (FAIR) team, which has been experimenting with artificial intelligence since 2013. The models include image-to-text, text-to-music generation, and multi-token prediction tools. Among them is AudioSeal, an audio watermarking technique designed for the localized detection of AI-generated speech. “AudioSeal makes it possible to pinpoint AI-generated segments within a longer audio snippet,” according to Meta. The feature is timely in light of concern about potential misinformation surrounding the fall presidential election. Continue reading Meta’s FAIR Team Announces a New Collection of AI Models
By Paula Parisi, June 20, 2024
YouTube is experimenting with a feature that allows viewers to add contextual “Notes” under videos, similar to what X does with its Community Notes. The Google-owned company says the intent is to provide clarity around things like “when a song is meant to be a parody,” when newly reviewed products are available for purchase, or “when older footage is mistakenly portrayed as a current event.” However, the timing, ahead of a pivotal U.S. presidential election and amid concerns about deepfakes and misinformation, is no doubt intentional. The pilot will initially be available on mobile in the United States. Continue reading YouTube to Tackle Misinformation with Crowdsourced Notes
By ETCentric Staff, March 20, 2024
YouTube has added new rules requiring those uploading realistic-looking videos that are “made with altered or synthetic media, including generative AI” to label them using a new tool in Creator Studio. The new labeling “is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube says, listing examples of content that require disclosure as “likeness of a realistic person” including voice as well as image, “altering footage of real events or places” and “generating realistic scenes” of fictional major events, “like a tornado moving toward a real town.” Continue reading YouTube Adds GenAI Labeling Requirement for Realistic Video
By ETCentric Staff, March 11, 2024
Alibaba is touting a new artificial intelligence system that can animate portraits, making people sing and talk in realistic fashion. Researchers at the Alibaba Group’s Institute for Intelligent Computing developed the generative video framework, calling it EMO, short for Emote Portrait Alive. Input a single reference image along with “vocal audio,” as in talking or singing, and “our method can generate vocal avatar videos with expressive facial expressions and various head poses,” the researchers say, adding that EMO can generate videos of any duration, “depending on the length of video input.” Continue reading Alibaba’s EMO Can Generate Performance Video from Images
By Paula Parisi, January 31, 2024
As parents and educators grapple with figuring out how AI will fit into education, OpenAI is preemptively acting to help answer that question, teaming with learning and child safety group Common Sense Media on informational material and recommended guidelines. The two will also work together to curate “family-friendly GPTs” for the GPT Store that are “based on Common Sense ratings and standards,” the organization said. The partnership aims “to help realize the full potential of AI for teens and families and minimize the risks,” according to Common Sense. Continue reading OpenAI Partners with Common Sense Media on AI Guidelines
By Paula Parisi, December 12, 2023
The EU has reached a provisional agreement on the Artificial Intelligence Act, making it the first Western democracy to establish comprehensive AI regulations. The sweeping new law predominantly focuses on so-called “high-risk AI,” establishing parameters — largely in the form of reporting and third-party monitoring — “based on its potential risks and level of impact.” Parliament and the 27-country European Council must still hold final votes before the AI Act is finalized and goes into effect, but the agreement, reached Friday in Brussels after three days of negotiations, means the main points are set. Continue reading EU Makes Provisional Agreement on Artificial Intelligence Act
By Paula Parisi, November 29, 2023
California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential pluses include improving access to government services by identifying groups hindered by language barriers or other obstacles, while the potential harms underscore the need to equip citizens with next-generation skills so they don’t get left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.” Continue reading Newsom Report Examines Use of AI by California Government
By Paula Parisi, November 27, 2023
Stability AI has opened a research preview of its first foundation model for generative video, Stable Video Diffusion, offering text-to-video and image-to-video. Based on the company’s Stable Diffusion text-to-image model, the new open-source model generates video by animating existing still frames, including “multi-view synthesis.” While the company plans to enhance and extend the model’s capabilities, it currently comes in two versions: SVD, which transforms stills into 576×1024 videos of 14 frames, and SVD-XT, which generates up to 24 frames — each at between three and 30 frames per second. Continue reading Stability Introduces GenAI Video Model: Stable Video Diffusion
By Paula Parisi, November 8, 2023
CBS is launching a unit charged with identifying misinformation and flagging deepfakes. Called CBS News Confirmed, it will operate out of the news-and-stations division, ferreting out false information generated by artificial intelligence. Claudia Milne, senior VP of CBS News and Stations and its standards and practices chief, will run the new group with Ross Dagan, EVP and head of news operations and transformation, CBS News and Stations. CBS plans to hire forensic journalists, expand training and invest in technologies to assist them in their role. In addition to flagging deepfakes, CBS News Confirmed will also report on them. Continue reading CBS News Confirmed: New Fact-Checking Unit Examining AI