By Paula Parisi, June 24, 2024
MainFunc Inc. has raised $60 million on the strength of its principal technology — a free, AI-powered search engine called Genspark. The platform responds to queries by writing custom summaries that are presented in a “Sparkpage,” a one-page overview featuring content from around the web. Genspark joins a growing field of generative AI search engines, the best-known of which is Perplexity, which has raised $250 million since its 2022 launch and is currently valued at about $2.5 billion. Reuters says Genspark’s funding values the company at $260 million. Google also offers “AI Overviews” as part of Google search. Continue reading Genspark Joins Collection of GenAI-Powered Search Engines
By Paula Parisi, June 21, 2024
Anthropic has launched a powerful new AI model, Claude 3.5 Sonnet, that can analyze text and images and generate text. That its release comes a mere three months after Anthropic debuted Claude 3 indicates just how quickly the field is developing. The Google-backed company says Claude 3.5 Sonnet has set “new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval).” Sonnet is Anthropic’s mid-tier model, between Haiku and, on the high-end, Opus. Anthropic says 3.5 Sonnet is twice as fast as 3 Opus, offering “frontier intelligence at 2x the speed.” Continue reading Anthropic’s Claude 3.5: ‘Frontier Intelligence at 2x the Speed’
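For developers, the new model is available through Anthropic's Messages API. Below is a minimal sketch using the official Python SDK; the model identifier shown matches the June 2024 release, and the prompt is purely illustrative.

```python
# pip install anthropic
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Send a text prompt to the mid-tier Sonnet model; the same Messages API
# also accepts base64-encoded images for vision tasks.
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # identifier for the June 2024 release
    max_tokens=512,
    messages=[{"role": "user", "content": "Explain what the GPQA benchmark measures."}],
)
print(message.content[0].text)
```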
By Paula Parisi, June 20, 2024
Meta Platforms is publicly releasing five new AI models from its Fundamental AI Research (FAIR) team, which has been experimenting with artificial intelligence since 2013. The models include image-to-text, text-to-music generation, and multi-token prediction tools. Meta is also introducing AudioSeal, an audio watermarking technique designed for the localized detection of AI-generated speech. “AudioSeal makes it possible to pinpoint AI-generated segments within a longer audio snippet,” according to Meta. The feature is timely in light of concern about potential misinformation surrounding the fall presidential election. Continue reading Meta’s FAIR Team Announces a New Collection of AI Models
By Paula Parisi, June 19, 2024
Runway ML has introduced a new foundation model, Gen-3 Alpha, which the company says can generate high-quality, realistic scenes up to 10 seconds long from text prompts, still images or a video sample. Offering a variety of camera movements, Gen-3 Alpha will initially roll out to Runway’s paid subscribers, but the company plans to add a free version in the future. Runway says Gen-3 Alpha is the first of a new series of models trained on the company’s new large-scale multimodal infrastructure, which offers improvements “in fidelity, consistency, and motion over Gen-2,” released last year. Continue reading Runway’s Gen-3 Alpha Creates AI Videos Up to 10-Seconds
By Paula Parisi, June 14, 2024
Northern California startup Luma AI has released Dream Machine, a model that generates realistic videos from text prompts and images. Built on a scalable and multimodal transformer architecture and “trained directly on videos,” Dream Machine can create “action-packed scenes” that are physically accurate and consistent, says Luma, which has a free version of the model in public beta. Dream Machine is what Luma calls the first step toward “a universal imagination engine,” while others are calling it “powerful” and “slammed with traffic.” Though Luma has shared scant details, each posted sequence looks to be about 5 seconds long. Continue reading Luma AI Dream Machine Video Generator in Free Public Beta
By Paula Parisi, June 14, 2024
China’s Kuaishou Technology has a video generator called Kling AI in public beta that is getting great word-of-mouth, with comments ranging from “incredibly realistic” to “Sora killer,” a reference to OpenAI’s video generator, which is still in closed beta. Kuaishou claims that using only text prompts, Kling can generate “AI videos that closely mimic the real world’s complex motion patterns and physical characteristics,” in sequences as long as two minutes at 30 fps and 1080p, while supporting various aspect ratios. Kuaishou is China’s second most popular short-form video app, after ByteDance’s Douyin, the Chinese version of TikTok. Continue reading ByteDance Rival Kuaishou Creates Kling AI Video Generator
By Paula Parisi, May 29, 2024
Music startup Suno, which leverages ChatGPT tech with the goal of emulating that app’s success in music, has raised $125 million in Series B funding, resulting in a valuation of $500 million. Founded by Harvard physics PhD turned tech entrepreneur Mikey Shulman, the company is being called “a rising star” in the realm of generative AI. Suno lets people generate original songs by using text prompts or lyrics, with the AI supplying the melodies and harmonies for fully-formed compositions. “We started Suno to build a future where anyone can make music,” according to the company. Continue reading AI Startup Suno Raises Funds to ‘Democratize Music Creation’
By Paula Parisi, May 28, 2024
Meta Platforms has unveiled its first natively multimodal model, Chameleon, which observers say could make the company competitive with frontier model firms. Although Chameleon is not yet released, Meta says internal research indicates it outperforms the company’s own Llama 2 in text-only tasks and “matches or exceeds the performance of much larger models” including Google’s Gemini Pro and OpenAI’s GPT-4V in a mixed-modal generation evaluation “where either the prompt or outputs contain mixed sequences of both images and text.” In addition, Meta calls Chameleon’s image generation “non-trivial,” noting that’s “all in a single model.” Continue reading Meta Advances Multimodal Model Architecture with Chameleon
By Paula Parisi, May 16, 2024
Google has infused search with more Gemini AI, adding expanded AI Overviews and more planning and research capabilities. “Ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork” culling from “a knowledge base of billions of facts about people, places and things,” explained Google and Alphabet CEO Sundar Pichai at the Google I/O developer conference. AI Overviews will roll out to all U.S. users this week. Coming soon are customizable AI Overview options that can simplify language or add more detail. Continue reading Google Ups AI Quotient with Search-Optimized Gemini Model
Meta Platforms announced an expanded collection of generative AI features, tools and services for advertisers and businesses. The enhanced AI features include full image and text generation, text overlay capabilities, and image expansion for Reels and the Feed in Facebook and Instagram. The updated tools will be available via Meta Ads Manager through Advantage+ creative. According to Meta: “Our goal is to help you at every step of your journey, whether that’s improving ad performance by helping you develop creative variations, automating certain parts of the ad creation process, or increasing your credibility and engagement through Meta Verified.” Continue reading Meta Launches Enhanced Generative AI Tools for Advertisers
By ETCentric Staff, April 24, 2024
Adobe plans to add generative AI capabilities to its Premiere Pro editing platform and is exploring the update with third-party AI technologies including OpenAI’s Sora, as well as models from Runway and Pika Labs, making it easier “to draw on the strengths of different models” within everyday workflows, according to Adobe. Editors will gain the ability to generate and add objects into scenes or shots, remove unwanted elements with a click, and even extend frames and footage length. The company is also developing a video model for its own Firefly AI for video and audio work in Premiere Pro. Continue reading Adobe Considers Sora, Pika and Runway AI for Premiere Pro
By ETCentric Staff, April 18, 2024
Airchat is the latest app to take tech leaders in Silicon Valley by storm. Described as a “combination of voice notes and Twitter,” Airchat lets you follow other users and scroll through posts — adding replies, likes and shares — but the twist is that posts originate as audio recordings, which the app then transcribes. Airchat ranked 27th on the App Store’s social networking chart, even though users must be invited to join. Launched last year by Naval Ravikant, founder of AngelList, and erstwhile Tinder product exec Brian Norgard, Airchat was just relaunched on iOS and Android. Continue reading Audio-First Social Platform Airchat Has Successful Relaunch
By ETCentric Staff, April 11, 2024
Google is moving its most powerful artificial intelligence model, Gemini 1.5 Pro, into public preview for developers and Google Cloud customers. Gemini 1.5 Pro includes what Google claims is a breakthrough in long context understanding, with the ability to process up to 1 million tokens of information, “opening up new possibilities for enterprises to create, discover and build using AI.” Gemini’s multimodal capabilities allow it to process audio, video, text, code and more, which, when combined with long context, “enables enterprises to do things that just weren’t possible with AI before,” according to Google. Continue reading Google Offers Public Preview of Gemini Pro for Cloud Clients
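As a rough illustration of the long-context, multimodal workflow Google describes, here is a minimal sketch using the google-generativeai Python SDK (Vertex AI customers would use the vertexai SDK instead); the model alias, file name and prompt are assumptions for illustration only.

```python
# pip install google-generativeai
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: a Gemini API key

model = genai.GenerativeModel("gemini-1.5-pro-latest")

# Upload a (hypothetical) video file; large media uploads are processed
# asynchronously before they can be referenced in a prompt.
video = genai.upload_file("product_briefing.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

# Mixed-modal prompt: the long context window lets the model reason over
# the full recording alongside the text instruction.
response = model.generate_content(
    [video, "List every product announcement mentioned in this recording."]
)
print(response.text)
```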
By ETCentric Staff, April 5, 2024
OpenAI has updated the editor for DALL-E, the artificial intelligence image generator that is part of the ChatGPT premium tiers. The update, based on the DALL-E 3 model, makes it easier for users to adjust their generated images. Shortly after DALL-E 3’s September debut, OpenAI integrated it into ChatGPT, enabling paid subscribers to generate images from text or image prompts. The new DALL-E editor interface lets users edit images “by selecting an area of the image to edit and describing your changes in chat.” Desired changes can also be prompted “in the conversation panel” without using the selection tool, according to OpenAI. Continue reading OpenAI Integrates New Image Editor for DALL-E into ChatGPT
By ETCentric Staff, March 21, 2024
Deepgram’s new Aura software turns text into generative audio with a “human-like voice.” The 9-year-old voice recognition company has raised nearly $86 million to date on the strength of its Voice AI platform. Aura is an extremely low-latency text-to-speech model that the company says can be used to build voice AI agents. Paired with Deepgram’s Nova-2 speech-to-text API, developers can use it to “easily (and quickly) exchange real-time information between humans and LLMs to build responsive, high-throughput AI agents and conversational AI applications,” according to Deepgram. Continue reading Deepgram’s Speech Portfolio Now Includes Human-Like Aura
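Aura is exposed through Deepgram's REST API. The snippet below is a minimal sketch of a text-to-speech call over plain HTTP; the voice name (aura-asteria-en) was among the launch voices, and the endpoint and output details should be checked against Deepgram's current documentation.

```python
# pip install requests
import os
import requests

DEEPGRAM_API_KEY = os.environ["DEEPGRAM_API_KEY"]

# Text-to-speech with Aura: POST the text, get audio bytes back.
resp = requests.post(
    "https://api.deepgram.com/v1/speak?model=aura-asteria-en",
    headers={
        "Authorization": f"Token {DEEPGRAM_API_KEY}",
        "Content-Type": "application/json",
    },
    json={"text": "Hello! This is a short test of Deepgram Aura."},
)
resp.raise_for_status()

# Assumes the default MP3 output; other encodings can be requested via query parameters.
with open("aura_reply.mp3", "wb") as f:
    f.write(resp.content)
```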