OpenAI In-House Chip Could Be Ready for Testing This Year

OpenAI is getting close to finalizing its first custom chip design, according to an exclusive report from Reuters that emphasizes the Microsoft-backed AI giant’s goal of reducing its dependency on Nvidia chips. The blueprint for the first-generation OpenAI chip could be finalized as soon as the next few months and sent to Taiwan’s TSMC for fabrication, which will take about six months — “unless OpenAI pays substantially more for expedited manufacturing” — according to the report. Even by industry standards, the training-focused chip is on a fast track to deployment. Continue reading OpenAI In-House Chip Could Be Ready for Testing This Year

Reasoning Model Competes with Advanced AI at a Lower Cost

Model training continues to hit new lows in terms of cost, a phenomenon known as the commoditization of AI that has rocked Wall Street. An AI reasoning model created for under $50 in cloud compute credits is reportedly performing comparably to established reasoning models such as OpenAI o1 and DeepSeek-R1 on tests of math and coding aptitude. Called s1-32B, it was created by researchers at Stanford and the University of Washington, who fine-tuned Alibaba’s Qwen2.5-32B-Instruct on 1,000 prompts paired with responses sourced from Google’s new Gemini 2.0 Flash Thinking Experimental reasoning model. Continue reading Reasoning Model Competes with Advanced AI at a Lower Cost
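The recipe described above — collecting a small set of prompts, capturing a stronger model’s reasoning traces, and fine-tuning a base model on the pairs — can be sketched in outline. This is a hypothetical illustration of the data-preparation step only, not the researchers’ actual code; the record format, tags and function names are assumptions.

```python
import json

def build_distillation_example(prompt, teacher_reasoning, teacher_answer):
    """Pack one prompt plus a teacher model's reasoning trace and final
    answer into a chat-style record for supervised fine-tuning."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant",
             "content": f"<think>{teacher_reasoning}</think>\n{teacher_answer}"},
        ]
    }

def write_sft_corpus(examples, path):
    """Write (prompt, reasoning, answer) triples as JSON Lines, one
    fine-tuning record per line."""
    with open(path, "w", encoding="utf-8") as f:
        for prompt, reasoning, answer in examples:
            record = build_distillation_example(prompt, reasoning, answer)
            f.write(json.dumps(record) + "\n")
```

A corpus of roughly a thousand such records, as in the s1-32B work, is small enough that the fine-tuning run itself costs only tens of dollars in cloud compute.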

Snap Develops a Lightweight Text-to-Image AI Model In-House

Snap has created a lightweight AI text-to-image model that runs on-device and is expected to power some Snapchat mobile features in the months ahead. Running on an iPhone 16 Pro Max, the model can produce high-resolution images in approximately 1.4 seconds, entirely on the phone, which reduces computational costs. Snap says the research model “is the continuation of our long-term investment in cutting edge AI and ML technologies that enable some of today’s most advanced interactive developer and consumer experiences.” Among the Snapchat AI features the new model will enhance are AI Snaps and AI Bitmoji Backgrounds. Continue reading Snap Develops a Lightweight Text-to-Image AI Model In-House

Hugging Face Has Developed Tiny Yet Powerful Vision Models

Most people know Hugging Face as a resource-sharing community, but it also builds open-source applications and tools for machine learning. Its recent release of vision-language models small enough to run on smartphones while outperforming competitors that rely on massive data centers is being hailed as “a remarkable breakthrough in AI.” The new models — SmolVLM-256M and SmolVLM-500M — are optimized for “constrained devices” with less than around 1GB of RAM, making them well suited to laptops and mobile devices, and convenient for anyone looking to process large amounts of data cheaply and with a low energy footprint. Continue reading Hugging Face Has Developed Tiny Yet Powerful Vision Models

CES: Image Sensors Adapt to Light Changes Like Human Eye

CES’s Eureka Park is a section of exhibits where startups and early-stage products from all over the world solicit feedback and explore opportunities. Among this year’s Italian delegation at Eureka Park, our team found EYE2DRIVE, a semiconductor company that develops CMOS chips for digital imaging inspired by the human eye. Its image sensors use AI to mimic the human eye’s ability to adapt to changing environmental light conditions. As a result, the quality and color of the captured image remain unaffected. While currently focused on autonomous navigation applications, the tech has potential for media production as well. Continue reading CES: Image Sensors Adapt to Light Changes Like Human Eye

CES: How Brands and Marketers Are Integrating AI, Creativity

Billed as a conversation among CMOs, this CES panel — moderated by Consumer Technology Association VP of Marketing & Communications Melissa Harrison — drilled down into how major brands and advertising technology companies are integrating artificial intelligence into their pipelines and organizations. The panelists agreed that, although adoption is still at an early stage and requires experimentation, those who freeze and have not yet begun engaging with AI will quickly fall behind on the learning curve. Still, they emphasized that AI will not replace human creativity. Continue reading CES: How Brands and Marketers Are Integrating AI, Creativity

CES: Show Features a Surprisingly Small Number of AI Agents

In the never-ending smorgasbord of AI hype, “agents” represent practical and worthwhile potential. AI agents are autonomous programs that can understand some context and take action within it. An agent can map a goal to its context and parameters (even when they are not explicitly laid out), process data across multiple formats and ontologies to understand the goal and work through the task, call functions across multiple apps, and take action to achieve the goal. Unfortunately, while many at CES are talking about AI agents, few are promoting actual products. Continue reading CES: Show Features a Surprisingly Small Number of AI Agents
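The goal-to-action loop described above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: the tool names are invented, and the hard-coded planner takes the place of the LLM that, in a real agent, would decide which tools to call and in what order.

```python
# Minimal tool-calling agent loop: map a goal to a sequence of tool
# invocations, execute each one, and accumulate the results.

def search_flights(destination):
    # Stand-in for a real travel-search API call.
    return f"3 flights found to {destination}"

def book_calendar(event):
    # Stand-in for a real calendar API call.
    return f"calendar blocked for {event}"

# Registry of callable tools the agent can dispatch across "apps".
TOOLS = {"search_flights": search_flights, "book_calendar": book_calendar}

def plan(goal):
    # Stand-in planner: a real agent would ask an LLM to turn the goal
    # into this ordered list of (tool, argument) steps.
    if "trip" in goal:
        return [("search_flights", "Las Vegas"), ("book_calendar", "CES")]
    return []

def run_agent(goal):
    # The agent loop: plan, then execute each step via the tool registry.
    results = []
    for tool_name, arg in plan(goal):
        results.append(TOOLS[tool_name](arg))
    return results
```

The gap between this toy and a shipping product — robust planning, error recovery, real API integrations — is exactly why so few vendors at CES had actual agents to show.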

Amazon Testing ‘AI Topics’ Recommendations for Prime Video

Amazon is testing a new way to deliver content recommendations: AI Topics, now in limited beta release for Prime Video. AI Topics eschews traditional recommendation algorithms in favor of AI that “discovers” Prime Video content based on a combination of viewing history and personal interests. Users can request “mind-bending sci-fi” or “fantasy quests,” then navigate seamlessly through topics curated for them that appear on the Prime Video home page. Once a topic is selected, movies, series and linear channels will populate alongside additional related topics. Continue reading Amazon Testing ‘AI Topics’ Recommendations for Prime Video

Meta Rolls Out Watermarking, Behavioral and Concept Models

Meta’s FAIR (Fundamental AI Research) team has unveiled recent work in areas ranging from transparency and safety to agents and machine learning architectures. The projects include Meta Motivo, a foundation model for controlling the behavior of virtual embodied agents, and Video Seal, an open-source model for video watermarking. All were developed in the unit’s pursuit of advanced machine intelligence, helping “models to learn new information more effectively and scale beyond current limits.” Meta announced it is sharing the new FAIR research, code, models and datasets so the research community can build upon its work. Continue reading Meta Rolls Out Watermarking, Behavioral and Concept Models

Google DeepMind Touts AI-Powered Quantum Error Detection

Google DeepMind has come up with an error correction technique it says will make quantum computers more reliable, particularly at scale. While quantum computing holds tremendous promise — potentially able to solve in just a few hours problems it would take a conventional computer “billions of years” to figure out, Google claims — the systems are notoriously unstable, due to the delicacy of the “quantum state.” AlphaQubit is an AI-based decoder that identifies quantum computing errors with state-of-the-art accuracy. Combining DeepMind’s machine learning expertise with Google Quantum AI’s error correction work, the technique advances efforts to create a reliable quantum computer. Continue reading Google DeepMind Touts AI-Powered Quantum Error Detection

Tubi Introduces Short-Form Video Clips with Scenes Feature

Tubi has come up with a unique way to showcase its catalog of 250,000 movies and TV episodes: a feed of short-form videos similar to TikTok content. Called “Scenes,” the feature is available via Tubi’s mobile app for Android and iOS. Tubi, the Fox Corporation free ad-supported streaming television (FAST) service, hopes Scenes will help viewers find what to watch as part of a “strategy to provide effortless entertainment on mobile.” Tubi already leverages machine learning and AI models to help personalize its recommendation experience and encourage discovery. Continue reading Tubi Introduces Short-Form Video Clips with Scenes Feature

Startup Noma Aims to Secure the Entire Data and AI Lifecycle

As enterprises move to leverage their proprietary data in generative AI applications, they are finding that existing security solutions may be inadequate for the task. Israeli startup Noma Security is addressing that concern. Just out of stealth mode, Noma has raised $32 million in a Series A round led by Ballistic Ventures with support from Glilot Capital Partners, Cyber Club London and a collection of angel investors. While enterprise firms that host their models at large cloud outfits have access to built-in MLOps security tools, those who are self-hosting, using smaller cloud operations, or want added protection might be interested in Noma. Continue reading Startup Noma Aims to Secure the Entire Data and AI Lifecycle

MIT Intros LLM-Inspired Teacher for General Purpose Robots

The Massachusetts Institute of Technology has come up with what it thinks is a better way to teach robots general purpose skills. Derived from LLM techniques, the method provides robot intelligence access to an enormous amount of data at once, rather than exposing it to individual programs for specific tasks. Faster and more cost efficient, the approach has been referred to as a “brute force” approach to problem-solving, and researchers have adopted it in lieu of individualized, task-specific “imitation learning.” Early tests show it outperforming traditional training by more than 20 percent in both simulated and real-world conditions. Continue reading MIT Intros LLM-Inspired Teacher for General Purpose Robots

Digital Domain Leverages AWS for Its Virtual Human Initiative

Visual effects studio Digital Domain has brought its Autonomous Virtual Human project to Amazon Web Services, which will supply generative AI and machine learning tools and give Digital Domain’s creations and processes a home in the global cloud. The collaboration “aims to propel the evolution and global reach of Digital Domain’s AVH technology and expand its use for multiple industries, including entertainment, gaming, healthcare, hospitality, and commercial applications,” Amazon said in a statement that emphasizes “AWS cloud services, particularly Amazon Bedrock,” as providing the infrastructure and adaptability “to drive AVH’s growth.” Continue reading Digital Domain Leverages AWS for Its Virtual Human Initiative

OpenAI Bestows Independent Oversight on Safety Committee

The OpenAI board’s Safety and Security Committee will become an independent board oversight committee, chaired by Zico Kolter, machine learning department chair at Carnegie Mellon University. The committee will be responsible for “the safety and security processes guiding OpenAI’s model deployment and development.” Three OpenAI board members will move from their current SSC roles to the new committee: Quora founder Adam D’Angelo, former Sony Corporation EVP Nicole Seligman and erstwhile NSA chief Paul Nakasone. OpenAI is currently putting together a new funding round that reportedly aims to value the company at $150 billion. Continue reading OpenAI Bestows Independent Oversight on Safety Committee