By Paula Parisi, February 21, 2025
ThinkAnalytics has launched an AI-powered platform designed for video service providers. Called ThinkMediaAI, it is said to unify content monetization — including contextual advertising, content curation and content bundling — across a variety of services, from live to CTV, FAST, VOD and more. Headquartered in the UK, with offices in Los Angeles, Singapore and India, ThinkAnalytics leverages a recommendation engine, claiming to track more than 475 million real-time data records and 8 billion recommendations per day. The company plans to showcase the new tech at NAB 2025, April 5-9 in Las Vegas. Continue reading ThinkAnalytics Bows Advertising, Curation Tool ThinkMediaAI
By Paula Parisi, February 10, 2025
Model training continues to hit new lows in terms of cost, a phenomenon known as the commoditization of AI that has rocked Wall Street. An AI reasoning model created for under $50 in cloud compute credits is reportedly performing comparably to established reasoning models such as OpenAI o1 and DeepSeek-R1 on tests of math and coding aptitude. Called s1-32B, it was created by researchers at Stanford and the University of Washington by customizing Alibaba’s Qwen2.5-32B-Instruct, feeding it 1,000 prompts with responses sourced from Google’s new Gemini 2.0 Flash Thinking Experimental reasoning model. Continue reading Reasoning Model Competes with Advanced AI at a Lower Cost
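The s1 recipe is essentially distillation: a small set of prompts is answered by a stronger "teacher" reasoning model, and the resulting (prompt, reasoning, answer) triples become supervised fine-tuning data for the student. The sketch below illustrates only the data-preparation step, with a stub standing in for the teacher call; in the actual work the responses came from Gemini 2.0 Flash Thinking and the student was Qwen2.5-32B-Instruct.

```python
# Sketch of the distillation data-prep behind s1-32B. The teacher_model
# function is a hypothetical stand-in for a reasoning-model API call.

def teacher_model(prompt: str) -> dict:
    """Stub for a teacher reasoning-model call (illustrative only)."""
    return {"reasoning": f"Step-by-step thoughts about: {prompt}",
            "answer": f"Answer to: {prompt}"}

def build_sft_dataset(prompts: list[str]) -> list[dict]:
    """Turn teacher responses into chat-style fine-tuning examples."""
    examples = []
    for p in prompts:
        out = teacher_model(p)
        examples.append({
            "messages": [
                {"role": "user", "content": p},
                # Reasoning trace precedes the final answer so the student
                # learns to emit its chain of thought before answering.
                {"role": "assistant",
                 "content": out["reasoning"] + "\n\n" + out["answer"]},
            ]
        })
    return examples

dataset = build_sft_dataset(["What is 17 * 24?"])
```

The striking part of the result is the scale: roughly 1,000 such examples were enough to approach the benchmark performance of much more expensively trained reasoning models.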
By Paula Parisi, December 18, 2024
Twelve Labs has raised $30 million in funding for its efforts to train video-analyzing models. The San Francisco-based company has received strategic investments from notable enterprise infrastructure providers Databricks and SK Telecom as well as Snowflake Ventures and HubSpot Ventures. Twelve Labs targets customers using video across a variety of fields including media and entertainment, professional sports leagues, content creators and business users. The funding coincides with the release of Twelve Labs’ new video foundation model, Marengo 2.7, which applies a multi-vector approach to video understanding. Continue reading Twelve Labs Creating AI That Can Search and Analyze Video
By Paula Parisi, December 2, 2024
Anticipating what one outlet calls “the likely imminent release of OpenAI’s Sora,” generative AI video competitors are compelled to step up their game. Luma AI has released a major upgrade to its Dream Machine, speeding its already quick video generation and enabling a chat function for natural language prompts, so you can talk to it as with OpenAI’s ChatGPT. In addition to the new interface, Dream Machine is going mobile and adding a new foundation image model, Luma AI Photon, which “has been purpose built to advance the power and capabilities of Dream Machine,” according to the company. Continue reading Luma AI Upgrades Its Video Generator and Adds Image Model
By Paula Parisi, November 22, 2024
Microsoft’s expansion of AI agents within the Copilot Studio ecosystem was a central focus of the company’s Ignite conference. Since the launch of Copilot Studio, more than 100,000 enterprise organizations have created or edited AI agents using the platform. Copilot Studio is getting new features to increase productivity, including multimodal capabilities that take agents beyond text and Retrieval Augmented Generation (RAG) enhancements to enable agents with real-time knowledge from multiple third-party sources, such as Salesforce, ServiceNow, and Zendesk. Integration with Azure is expanded as 1,800 large language models in the Azure catalog are made available. Continue reading Microsoft Pushes Copilot Studio Agents, Adds Azure Models
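The RAG pattern the Copilot Studio enhancements build on can be sketched in a few lines: fetch relevant records from an external source at query time and prepend them to the model prompt. The keyword-overlap retriever and sample tickets below are toy stand-ins for a real vector-search or connector-based lookup against sources like Salesforce or Zendesk.

```python
# Minimal Retrieval Augmented Generation (RAG) sketch: retrieve the most
# relevant documents for a query, then ground the model prompt in them.

def score(query: str, doc: str) -> int:
    """Count shared lowercase terms between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest term overlap."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the answer is grounded in it."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

kb = ["Ticket 101: VPN login fails after password reset.",
      "Ticket 102: Printer driver crashes on startup.",
      "Ticket 103: VPN client update resolves login failures."]
prompt = build_prompt("Why does VPN login fail?", kb)
```

The design point is that the agent's knowledge stays current without retraining: only the retrieval index (or the connector behind it) needs to be refreshed.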
By Paula Parisi, November 18, 2024
YouTube has added a new feature to its Dream Track toolset, which lets select U.S. creators use AI to generate songs using the vocals of artists including John Legend, Demi Lovato, Charli XCX, Charlie Puth and others. Now users can remix Dream Track songs using natural language to describe the changes they would like, stylistic and otherwise. Selecting the “restyle a track” option will steer users to creating a 30-second generative snippet for use in YouTube Shorts. The remixed snippets will credit the original song with “clear attribution” through the Short itself and the Shorts audio pivot page. It will also clearly indicate that the track was restyled with AI, according to Google. Continue reading YouTube Dream Track Toolset Introduces an AI Remix Feature
By Paula Parisi, August 22, 2024
Google DeepMind has made its latest AI image generator, Imagen 3, free for use in the U.S. via the company’s ImageFX platform. Imagen 3 will be available in multiple versions, “each optimized for different types of tasks, from generating quick sketches to high-resolution images.” Google announced Imagen 3 at Google I/O in May, and in June made it available to enterprise users through Vertex. Using simplified natural language text input rather than “complex prompt engineering,” Google says Imagen 3 generates high-quality images in a range of styles, from photorealistic, painterly and textured to whimsically cartoony. Continue reading Google DeepMind Releases Imagen 3 for Free to U.S. Users
By Paula Parisi, July 29, 2024
Airtable, a 10-year-old firm focused on customized apps, is launching Cobuilder, which uses AI to turn a concept into a customizable application “in seconds,” without the need for human coding. The debut adds to a rapidly expanding field of no-code platforms that help non-technical types develop software suitable for enterprise use. “Within the next five years, teams will build the vast majority of applications in-house, customizing them to transform their most critical workflows,” predicts Airtable co-founder and CEO Howie Liu. “To get there, knowledge workers who are closest to the work need to be empowered to build.” Continue reading Airtable Enters No-Code Enterprise App Space with Cobuilder
By Paula Parisi, July 17, 2024
Microsoft is working on a new productivity tool that helps artificial intelligence better understand spreadsheets. Still in the experimental phase, SpreadsheetLLM addresses challenges unique to applying AI to spreadsheets, “with their extensive two-dimensional grids, various layouts, and diverse formatting options,” the company explains. The tool has been hailed as a significant development in the enterprise space, where spreadsheets are used for everything from data entry to financial modeling and are shared among departments. Microsoft points out that as a research area, spreadsheet-optimized AI has generally been overlooked in favor of flashier use cases. Continue reading Microsoft Targets Enterprise Productivity with Spreadsheet AI
By Paula Parisi, July 3, 2024
Apple has released a public demo of the 4M AI model it developed in collaboration with the Swiss Federal Institute of Technology Lausanne (EPFL). The technology debuts seven months after the model was first open-sourced, allowing informed observers the opportunity to interact with it and assess its capabilities. Apple says 4M was built by applying masked modeling to a single unified Transformer encoder-decoder “across a wide range of input/output modalities — including text, images, geometric and semantic modalities, as well as neural network feature maps.” Continue reading Apple Launches Public Demo of Its Multimodal 4M AI Model
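The masked-modeling objective Apple describes can be illustrated with a toy example: tokens from several modalities share one sequence, a random subset is hidden behind a mask token, and the model is trained to reconstruct the hidden tokens. The modality tags and sequence below are illustrative inventions; the real model operates on tokenized text, image patches, and other feature maps.

```python
# Toy sketch of the masked-modeling setup behind 4M: mask a random subset
# of a mixed-modality token sequence and record the reconstruction targets.
import random

MASK = "<mask>"

def mask_sequence(tokens, ratio=0.5, seed=0):
    """Replace a random subset of tokens with MASK; return (inputs, targets)."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * ratio))
    positions = set(rng.sample(range(len(tokens)), n_mask))
    inputs = [MASK if i in positions else t for i, t in enumerate(tokens)]
    targets = {i: tokens[i] for i in positions}  # what the model must predict
    return inputs, targets

# One sequence mixing "text" and "image-patch" tokens (illustrative).
seq = ["txt:a", "txt:dog", "img:p00", "img:p01", "txt:runs", "img:p02"]
inputs, targets = mask_sequence(seq)
```

Because any modality can appear on either side of the mask, a single encoder-decoder trained this way can translate between modalities at inference time, which is what the public demo lets visitors try.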
By Paula Parisi, June 19, 2024
Google DeepMind has unveiled new research on AI tech it calls V2A (“video-to-audio”) that can generate soundtracks for videos. The initiative complements the wave of AI video generators from companies ranging from biggies like OpenAI and Alibaba to startups such as Luma and Runway, all of which require a separate app to add sound. V2A technology “makes synchronized audiovisual generation possible” by combining video pixels with natural language text prompts “to generate rich soundscapes for the on-screen action,” DeepMind writes, explaining that it can “create shots with a dramatic score, realistic sound effects or dialogue.” Continue reading DeepMind’s V2A Generates Music, Sound Effects, Dialogue
By ETCentric Staff, March 19, 2024
Apple researchers have gone public with new multimodal methods for training large language models using both text and images. The results are said to enable AI systems that are more powerful and flexible, which could have significant ramifications for future Apple products. These new models, which Apple calls MM1, support up to 30 billion parameters. The researchers identify multimodal large language models (MLLMs) as “the next frontier in foundation models,” which exceed the performance of LLMs and “excel at tasks like image captioning, visual question answering and natural language inference.” Continue reading Apple Unveils Progress in Multimodal Large Language Models
By ETCentric Staff, March 14, 2024
Months-old startup Cognition AI has emerged from stealth mode with Devin, a generative platform it is calling “the world’s first fully autonomous AI software engineer.” Although Cognition has yet to make Devin widely available, much less allow independent testing, if its claims are true it would mark a turning point in the AI coding space, moving it from a field of AI assistants to a full-fledged AI engineer. Based on natural language instruction, Devin could potentially take a project from concept to execution rather than simply suggesting code snippets or offering barebones frameworks. Continue reading Startup Cognition Launches AI Software Coding Engine Devin
By ETCentric Staff, February 22, 2024
“What if you could describe a sound and generate it with AI?” asks startup ElevenLabs, which set out to do just that, and says it has succeeded. The two-year-old company explains it “used text prompts like ‘waves crashing,’ ‘metal clanging,’ ‘birds chirping,’ and ‘racing car engine’ to generate audio.” Best known for using machine learning to clone voices, the AI firm founded by Google and Palantir alums has yet to make its new text-to-sound model publicly available, but began teasing it by releasing online demos this week. Some see the technology as a natural complement to the latest wave of image generators. Continue reading ElevenLabs Promotes Its Latest Advances in AI Audio Effects
By ETCentric Staff, February 21, 2024
Researchers at Amazon have trained what they are calling the largest text-to-speech model ever created, which they claim is exhibiting “emergent” qualities — the ability to improve on its own at speaking complex sentences naturally. Called BASE TTS, for Big Adaptive Streamable TTS with Emergent abilities, the new model could pave the way for more human-like interactions with AI, reports suggest. Trained on 100,000 hours of public domain speech data, BASE TTS offers “state-of-the-art naturalness” in English as well as some German, Dutch and Spanish. Text-to-speech models are used in developing voice assistants for smart devices and apps, as well as in accessibility tools. Continue reading Amazon Claims ‘Emergent Abilities’ for Text-to-Speech Model