By Paula Parisi, March 13, 2025
Feeling the pressure from the “open agent” movement and specifically Chinese startup Butterfly Effect and its new product Manus, OpenAI has expanded the capabilities of its own AI technology, launching new tools to help businesses and developers build their own agents. The company’s new Responses API combines the functionality of two earlier tools: the Chat Completions API (facilitating ChatGPT queries and responses) and the Assistants API (for multi-step reasoning and file access). The company is also issuing an Agents SDK, a suite of tools for creating and deploying agents that bundles the Responses API. Continue reading OpenAI Ramps Up Its Agent Functions as Competition Surges
By Paula Parisi, March 13, 2025
Meta Platforms has reportedly begun “a small deployment” of its first in-house chip designed for AI training. The accelerator chip is engineered around the open-standard RISC-V architecture. TSMC produced the working samples now being tested. The goal is to create purpose-specific chips that are more efficient than Nvidia’s general purpose GPUs, enjoying the cost-savings that would come with wide use and reducing reliance on outside chip suppliers in a tight market. If the tests go well, Meta plans to scale up production for expanded use by 2026. Details of the new chip’s specifications remain unknown at this time. Continue reading Meta Tests New AI Accelerator Chip Designed with Broadcom
By Paula Parisi, March 12, 2025
Amazon is experimenting with AI dubbing so Prime Video customers globally can experience content from other territories, gaining access more quickly and efficiently to licensed films and TV series. The company is using a hybrid “AI-aided” system in which localization professionals oversee the AI output to ensure quality control. Currently limited to a dozen movies and series that will be AI-dubbed in English and Latin American Spanish, the pilot will expand if the results prove popular with audiences. In December, Netflix experienced backlash against AI-assisted dubbing, with viewers complaining generative mouth adjustments looked unnatural. Continue reading Amazon Prime Video Tests AI Dubbing for Movies and Series
By Paula Parisi, March 12, 2025
Taiwan’s Foxconn, the contract manufacturer that assembles Apple’s iPhones, has built its own AI. Called FoxBrain, the company says the large language model was trained in just four weeks with help from Nvidia, using 120 of that company’s H100 chips. FoxBrain has reasoning and mathematical skills and can analyze data and generate code. Initially built for in-house use, Foxconn says it intends to open source the model and hopes it will become a collaborative tool for its partners and enable advancements in manufacturing techniques and supply-chain management. Continue reading Foxconn AI Trained in Four Weeks, Suggesting Industry Shift
By Paula Parisi, March 12, 2025
Popular social media platform Pinterest is now labeling generative AI content. The app, which earned a reputation as fertile ground for design inspiration related to hand-crafted goods and human artistry, has recently been plagued by an onslaught of “AI slop,” something its regular users have been complaining of on Reddit and to Pinterest directly. The GenAI content was often used to redirect people to spammy sites, according to a recent report. Pinterest’s labeling news coincides with an earnings report of $1.15 billion in Q4 revenue, marking an 18 percent increase year-over-year. Continue reading Pinterest AI Labeling Policy Unveiled as Q4 Earnings Top $1B
By ETCentric Staff, March 11, 2025
CES 2025 welcomed over 141,000 attendees from around the globe to Las Vegas. With more than 4,500 exhibitors, including 1,400 startups, and more than 6,000 media attendees, CES highlights the innovation and technology trends addressing global challenges and shaping the future. This year’s show focused on artificial intelligence, unveiling a wave of innovative offerings — whether practical, visionary or experimental. Among the show’s major trends were AI integration across all industries, shifting demographics and purchasing patterns (with Gen Z the one to watch), sustainability and security, and smart devices and smarter homes. ETC@USC attended the conference for live reporting on products and services. Our post-show report features extensive coverage and perspectives related to key creative, business, and technology areas. Continue reading ETC’s CES 2025 Report: Focus on AI Innovation & Integration
By Paula Parisi, March 11, 2025
Butterfly Effect is the latest Chinese AI firm to get global attention, having drummed up interest in Manus, positioned as a “general agent” that can scour online resources to produce reports. Companies like OpenAI and Google are competing in this space, called deep research. Butterfly Effect says Manus has surpassed OpenAI Deep Research on the GAIA benchmark, and the world is listening. The Manus Discord server has swelled to more than 138,000 members in recent weeks, and “invite codes” to gain access during this “invitation-only” phase are allegedly going for thousands of dollars on Chinese sales app Xianyu. Continue reading Startup Claims AI Agent Manus Is an Autonomy Breakthrough
By Paula Parisi, March 11, 2025
Rivalry between World Network, also known as OpenAI CEO Sam Altman’s “other company,” and Elon Musk’s X is heating up in an escalating race to be first out with an “everything app.” World Network is trying to accelerate adoption of a log-in system that relies on “ocular verification” — mapping the unique pattern of the iris — for “anonymous proof-of-human” validation. World already has a free app for iOS and Android, and recently launched a “mini app store” within it, including functions such as chat, transferring cryptocurrency and shopping for microloans. Continue reading Altman’s World Takes on Musk’s X in Race to Everything App
By Paula Parisi, March 10, 2025
Google has added Gemini Embedding to its Gemini developer API. This new experimental model for text translates words, phrases and other text inputs into numerical representations, otherwise known as embeddings, which capture their semantic meaning. Embeddings are used in a wide range of applications including document retrieval and classification, potentially reducing costs and improving latency. Google is also testing an expansion of its AI Overviews search feature as part of a Gemini 2.0 update. Called AI Mode, it helps explain complex topics by generating search results that use advanced reasoning and thinking capabilities. Continue reading Google Updates AI Search and Intros Gemini Text Embedding
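In retrieval applications like those mentioned above, embeddings are typically compared with cosine similarity, which scores how closely two vectors point in the same direction. A minimal sketch of the idea, using hand-picked toy vectors rather than actual Gemini Embedding output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: ~1.0 means semantically close, ~0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings (illustrative only; real embedding
# models return vectors with hundreds or thousands of dimensions).
doc_cat    = [0.9, 0.1, 0.0, 0.2]   # "cats are small pets"
doc_kitten = [0.8, 0.2, 0.1, 0.3]   # "kittens are young cats"
doc_car    = [0.0, 0.9, 0.8, 0.1]   # "cars need fuel"

# Documents on related topics score higher than unrelated ones.
assert cosine_similarity(doc_cat, doc_kitten) > cosine_similarity(doc_cat, doc_car)
```

In a retrieval system, a query is embedded the same way and documents are ranked by their similarity to it, which is how embeddings support document retrieval and classification.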
By Paula Parisi, March 10, 2025
Alibaba is making AI news again, releasing another Qwen reasoning model, QwQ-32B, which was trained and scaled using reinforcement learning (RL). The Qwen team says it “has the potential to enhance model performance beyond conventional pretraining and post-training methods.” QwQ-32B, a 32 billion parameter model, “achieves performance comparable to DeepSeek-R1, which boasts 671 billion parameters (with 37 billion activated),” Alibaba claims. While parameters refer to the total set of adjustable weights and biases in the model’s neural network, “activated” parameters are a subset used for a specific inference task, like generating a response. Continue reading Alibaba Says Qwen Reasoning Model on Par with DeepSeek
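The gap those figures describe is easy to see with a little arithmetic (a sketch using only the numbers quoted above):

```python
# Parameter counts quoted above, in billions.
qwq_total = 32      # QwQ-32B: dense, so every parameter is used per token
r1_total = 671      # DeepSeek-R1: total parameters across the whole network
r1_active = 37      # DeepSeek-R1: parameters activated for a given inference

# Only a small fraction of DeepSeek-R1's weights are used per token...
active_fraction = r1_active / r1_total   # ~0.055, i.e. about 5.5%

# ...so QwQ-32B's entire dense model is roughly the same size as the
# slice of DeepSeek-R1 that does the work on any one response.
size_ratio = qwq_total / r1_active       # ~0.86

print(f"R1 activates {active_fraction:.1%} of its weights per token")
print(f"QwQ-32B is {size_ratio:.2f}x the size of R1's active slice")
```

Framed this way, Alibaba's comparison is between one dense 32B model and the roughly 37B-parameter active subset of a much larger model, which is why the claimed parity is notable.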
By Paula Parisi, March 7, 2025
Staircase Studios AI — the film, television and gaming studio launched by “Divergent” franchise producer Pouya Shahbazian — has announced its investors and shared plans to produce more than 30 projects at budgets under $500,000 over the next 3-4 years. The company will use a proprietary AI workflow it invented, called ForwardMotion, which it says will revolutionize film and television production. It has acquired multiple pieces of IP, including more than 20 scripts that have appeared on the Black List, which tallies the most popular unproduced scripts. Continue reading Staircase Studios AI Plans 30 Projects Over Next 3 to 4 Years
By Paula Parisi, March 7, 2025
Sesame, an AI startup from Oculus co-founder Brendan Iribe, has created a conversational voice model that many feel has achieved uncanny levels of authenticity. Drawing comparisons to the charismatic vocal centerpiece of the 2013 Warner Bros. film “Her,” Sesame seems to have achieved a new level of engagement among AI voice assistants. While some are describing the tech as “amazing,” others have expressed concern over its capabilities. “Our goal is to achieve ‘voice presence’ — the magical quality that makes spoken interactions feel real, understood and valued,” explains a blog post by Iribe and others. Continue reading AI Startup Sesame Develops Next Stage of Voice Generation
By Paula Parisi, March 7, 2025
Samsung shook things up at Mobile World Congress 2025 with a display of its Project Moohan XR headset, which CNBC confirms will be released this year. While the MWC display was just a teaser, and Samsung remained tight-lipped about its specs, the mirrored goggles generated a lot of coverage, including speculation that Samsung may deploy Sony’s 4K Micro OLEDs in the new device, increasing Moohan’s resolution over the Apple Vision Pro by nearly 2 million pixels per eye and offering superior color, too. Samsung worked with Qualcomm and Google to develop Moohan, which will use the new Android XR operating system. Continue reading New Samsung XR Headset Could Use Sony 4K Micro OLEDs
By Paula Parisi, March 6, 2025
Amazon is ramping up its AI activity, reportedly planning to release its own advanced reasoning model as part of the company’s Nova family. The Nova line was introduced in December at re:Invent and the new addition could debut as early as June. Its reasoning prowess is being compared to the abilities of OpenAI’s o3-mini and DeepSeek-R1. But reports say Amazon is taking the hybrid reasoning approach embraced by Anthropic’s Claude 3.7 Sonnet (Amazon has a 10 percent stake in Anthropic). The e-retail giant is also preparing for an agentic AI push, having established a dedicated unit, reports say. Continue reading Amazon Plans an AI Push with Nova Reasoning Model, Agents
By Paula Parisi, March 6, 2025
Google plans to launch video- and screen-sharing capabilities for Gemini Live by the end of the month as part of the Gemini app on Android, according to discussions coming out of Mobile World Congress in Barcelona this week. Previewed a year ago as Project Astra, the new functionality will allow Gemini Live to accept a video stream captured in real time by the phone’s camera and answer questions about the feed conversationally, based on voice input, bringing to mobile the live video and screen-sharing capabilities that Gemini 2.0 currently offers desktop users. Continue reading Google Live Gets Computer Vision Screen Sharing This Month