By Paula Parisi, December 11, 2024
Reddit has launched a new AI-powered search tool called Reddit Answers. Reddit content already appears regularly in Google Search results, and the new interface lets users query a conversational model to get answers directly from the social platform. “Once a question is asked, curated summaries of relevant conversations and details across Reddit will appear, including links to related communities and posts,” according to Reddit. Whether users will skip their usual go-to search engines in favor of querying Reddit alone could have long-term ramifications for the 19-year-old social platform, which went public in 2024. Continue reading ‘Reddit Answers’ Wants to Gain More Users Searching In-App
By Paula Parisi, November 14, 2024
Ernie, the foundation model for Baidu’s generative AI, has been updated with iRAG technology to mitigate visual hallucinations and a no-code tool called Miaoda that creates apps using natural language. The company behind China’s largest search engine says Ernie now handles 1.5 billion daily user queries, up from 50 million around its March 2023 launch (a 30x increase). Baidu also debuted Ernie-powered smart glasses from its Xiaodu Technology hardware unit. The Xiaodu AI Glasses feature built-in voice activation and cameras for taking photos and video. The news was shared at this week’s Baidu World 2024 in Shanghai. Continue reading Baidu’s Ernie AI Gets Improved Text-to-Image and App Builder
By Paula Parisi, September 26, 2024
Microsoft has released a suite of “Trustworthy AI” features that address concerns about AI security and reliability. The four new capabilities include Correction, a content detection upgrade in Microsoft Azure that “helps fix hallucination issues in real time before users see them.” Embedded Content Safety allows customers to embed Azure AI Content Safety on devices where cloud connectivity is intermittent or unavailable, while two new filters flag AI output of protected material. Additionally, a transparency safeguard providing the company’s AI assistant, Microsoft 365 Copilot, with specific “web search query citations” is coming soon. Continue reading New Microsoft Safety Tools Fix AI Flubs, Detect Proprietary IP
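Microsoft did not publish client code with the announcement, so the snippet below is only a self-contained toy illustrating the general pattern a real-time correction step implies: screen a model's draft answer against source passages before it reaches the user. It is not the Azure AI Content Safety API, and the word-overlap heuristic is an arbitrary stand-in for real groundedness detection.

```python
# Toy illustration of "fix it before the user sees it": keep only sentences
# of a draft answer that have some support in the source passages.
# NOT Microsoft's algorithm or API; the overlap threshold is arbitrary.
import re

def naive_grounding_filter(draft_answer: str, sources: list[str]) -> str:
    """Keep only sentences that share enough content words with the sources."""
    source_words = set(re.findall(r"[a-z0-9]+", " ".join(sources).lower()))
    kept = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft_answer.strip()):
        words = set(re.findall(r"[a-z0-9]+", sentence.lower()))
        # Require at least half of the sentence's words to appear in the sources.
        if words and len(words & source_words) >= len(words) / 2:
            kept.append(sentence)
    return " ".join(kept)

# The unsupported second sentence is dropped before display.
print(naive_grounding_filter(
    "Azure supports content filters. The moon is made of cheese.",
    ["Azure AI Content Safety supports configurable content filters."],
))
```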
By Paula Parisi, September 16, 2024
OpenAI is previewing a new series of AI models that can reason through and correct complex coding mistakes, providing a more efficient solution for developers. The new OpenAI o1 models are “designed to spend more time thinking before they respond, much like a person would,” and as a result can “solve harder problems than previous models in science, coding, and math,” OpenAI claims, noting that “through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.” The first model in the series is being released in preview in OpenAI’s popular ChatGPT and in the company’s API. Continue reading OpenAI Previews New LLMs Capable of Complex Reasoning
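For developers trying the preview through the API, a request looks much like any other chat completion call. The sketch below assumes the openai Python SDK (v1.x), an OPENAI_API_KEY environment variable, and the “o1-preview” model name; at launch the preview models did not accept some familiar parameters (system prompts, temperature settings), so the request is kept minimal.

```python
# Minimal sketch: querying the o1-preview reasoning model through the
# OpenAI Python SDK (v1.x). Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",  # first model in the o1 series, per the announcement
    messages=[
        {
            "role": "user",
            "content": "Find and fix the bug: def avg(xs): return sum(xs) / len(xs)",
        }
    ],
)

# The model "thinks" longer before answering, so expect higher latency
# than earlier models for the same prompt.
print(response.choices[0].message.content)
```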
By Paula Parisi, June 10, 2024
Work management platform Asana is attempting to rebrand the “AI assistant” as “AI teammates” — custom assistants designed to execute key parts of workflows. Currently in beta, AI teammates are more collaborative and goal-oriented, able to “take action with the relevant context and rules of engagement,” Asana explains, contrasting that with “other AI solutions that aimlessly scour data and take action based on unreliable information.” The firm is also debuting a conversational AI chat agent that can answer questions such as: “What blockers are putting this goal at risk?” or “Who at my company knows about this topic?” Continue reading Asana’s Customizable AI Teammates Work Alongside Humans
By ETCentric Staff, March 6, 2024
Anthropic has released Claude 3, claiming new industry benchmarks that see the family of three new large language models approaching “near-human” cognitive capability in some instances. Accessible via Anthropic’s website, the three new models — Claude 3 Haiku, Claude 3 Sonnet and Claude 3 Opus — represent successively increased complexity and parameter count. Sonnet is powering the current Claude.ai chatbot and is free, for now, requiring only an email sign-in. Opus comes with the $20 monthly subscription for Claude Pro. Both are generally available from the Anthropic website and via API in 159 countries, with Haiku coming soon. Continue reading Anthropic’s Claude 3 AI Is Said to Have ‘Near-Human’ Abilities
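Since Sonnet and Opus are available via API, a minimal one-shot request with the anthropic Python SDK looks like the sketch below; the dated model identifier “claude-3-opus-20240229” is the launch snapshot name and an assumption about which alias a given account exposes.

```python
# Minimal sketch: one-shot request to Claude 3 Opus via the Anthropic API.
# Assumes the `anthropic` Python package and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-opus-20240229",  # dated Opus snapshot; Sonnet/Haiku use their own IDs
    max_tokens=512,                  # required: upper bound on the reply length
    messages=[
        {"role": "user", "content": "Summarize the difference between Haiku, Sonnet and Opus."}
    ],
)

# The response is a list of content blocks; text replies carry a .text field.
print(message.content[0].text)
```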
By Phil Lelyveld, January 10, 2024
Dr. Fei-Fei Li, Stanford professor and co-director of Stanford HAI (Human-Centered AI), and Andrew Ng, venture capitalist and managing general partner at Palo Alto-based AI Fund, discussed the current state and expected near-term developments in artificial intelligence. As a general-purpose technology, AI will both deepen, as private-sector LLMs are developed for industry-specific needs, and broaden, as open-source public-sector LLMs emerge to address broad societal problems. Expect exciting advances in image models — what Li calls “pixel space.” When implementing AI, think about teams rather than individuals, and think about tasks rather than jobs. Continue reading CES: Session Details the Impact and Future of AI Technology
By Paula Parisi, October 27, 2023
The University of Science and Technology of China (USTC) and Tencent YouTu Lab have released a research paper on a new framework called Woodpecker, designed to correct hallucinations in multimodal large language models (MLLMs). “Hallucination is a big shadow hanging over the rapidly evolving MLLMs,” writes the group, describing the phenomenon as when MLLMs “output descriptions that are inconsistent with the input image.” Solutions to date focus mainly on “instruction-tuning,” a form of retraining that is data and computation intensive. Woodpecker takes a training-free approach that purports to correct hallucinations by working directly from the model’s generated text. Continue reading Woodpecker: Chinese Researchers Combat AI Hallucinations
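As described in the paper, Woodpecker's pipeline extracts key concepts from the generated text, formulates diagnostic questions, and validates them against visual evidence (object detection and VQA) before rewriting the offending claims. The toy sketch below is not the authors' code; it compresses that idea into a single step, with a hand-supplied object-count dictionary standing in for a detector, just to show what training-free, text-level correction looks like.

```python
# Toy, training-free correction pass loosely inspired by Woodpecker: compare
# numeric object claims in generated text against "visual evidence" (a
# hand-supplied object-count dict standing in for an object detector) and
# rewrite contradicted spans. Not the authors' implementation.
import re

NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
WORDS_FOR_NUMBER = {v: k for k, v in NUMBER_WORDS.items()}

def correct_caption(caption: str, detected_counts: dict[str, int]) -> str:
    """Rewrite simple '<number> <object>' claims that contradict the detections."""

    def fix(match: re.Match) -> str:
        count_word, noun = match.group(1), match.group(2)
        claimed = NUMBER_WORDS[count_word.lower()]
        actual = detected_counts.get(noun.lower(), claimed)  # no evidence -> keep claim
        if actual == claimed:
            return match.group(0)
        new_word = WORDS_FOR_NUMBER.get(actual, str(actual))
        if count_word[0].isupper():
            new_word = new_word.capitalize()
        plural = "s" if actual != 1 else ""
        return f"{new_word} {noun}{plural}"

    # Match e.g. "three dogs" / "Two frisbees"; the lazy group strips a plural "s".
    pattern = r"\b(one|two|three|four|five)\s+(\w+?)s?\b"
    return re.sub(pattern, fix, caption, flags=re.IGNORECASE)

# The "detector" sees 2 dogs and 1 frisbee, so both counts get corrected.
print(correct_caption("Three dogs are playing with two frisbees.",
                      {"dog": 2, "frisbee": 1}))
```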
By Paula Parisi, July 17, 2023
The Federal Trade Commission has opened a civil investigation into OpenAI to determine the extent to which its data policies harm consumers, as well as the potentially deleterious effects of misinformation spread through “hallucinations” by its ChatGPT chatbot. The FTC sent OpenAI dozens of questions last week in a 20-page letter instructing the company to contact FTC counsel “as soon as possible to schedule a telephonic meeting within 14 days.” The questions deal with everything from how the company trains its models to the handling of personal data. Continue reading FTC Investigates OpenAI Over Data Policies, Misinformation