New Microsoft Safety Tools Fix AI Flubs, Detect Proprietary IP

Microsoft has released a suite of “Trustworthy AI” features that address concerns about AI security and reliability. The four new capabilities include Correction, a content-detection upgrade in Microsoft Azure that “helps fix hallucination issues in real time before users see them.” Embedded Content Safety lets customers run Azure AI Content Safety on devices where cloud connectivity is intermittent or unavailable, while two new filters flag AI output of protected material. Additionally, a transparency safeguard that gives the company’s AI assistant, Microsoft 365 Copilot, specific “web search query citations” is coming soon.
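
For orientation, here is a minimal sketch of how the underlying Azure AI Content Safety service is typically called from Python. The endpoint and key names are placeholder assumptions, and the new Correction capability layers onto this service rather than being shown directly here.

```python
# Minimal sketch: screening text with Azure AI Content Safety (Python SDK).
# Endpoint/key environment variable names are placeholders; the new
# Correction feature extends this service and is not shown here.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

# Screen a piece of model output before it reaches the user.
result = client.analyze_text(AnalyzeTextOptions(text="Text to screen goes here."))
for category in result.categories_analysis:
    print(category.category, category.severity)
```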

OpenAI Previews New LLMs Capable of Complex Reasoning

OpenAI is previewing a new series of AI models that can reason through and correct complex coding mistakes, providing a more efficient solution for developers. The new OpenAI o1 models are “designed to spend more time thinking before they respond, much like a person would,” and as a result can “solve harder problems than previous models in science, coding, and math,” OpenAI claims, noting that “through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.” The first model in the series is being released in preview in OpenAI’s popular ChatGPT and in the company’s API.
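
As a rough illustration of the API access mentioned above, a call to the preview model through the official Python SDK might look like the following. The prompt is invented, and at launch the preview models restricted several parameters.

```python
# Sketch: calling the o1 preview model via the OpenAI Python SDK.
# The prompt is illustrative; at launch, o1-preview restricted some
# parameters (e.g., system messages, temperature, streaming).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {"role": "user", "content": "Find and fix the bug: def add(a, b): return a - b"},
    ],
)
print(response.choices[0].message.content)
```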

Asana’s Customizable AI Teammates Work Alongside Humans

Work management platform Asana is attempting to rebrand the “AI assistant” as “AI teammates” — custom assistants designed to execute key parts of workflows. Currently in beta, AI teammates are more collaborative and goal-oriented, able to “take action with the relevant context and rules of engagement,” Asana explains, contrasting them with “other AI solutions that aimlessly scour data and take action based on unreliable information.” The firm is also debuting a conversational AI chat agent that can answer questions such as “What blockers are putting this goal at risk?” or “Who at my company knows about this topic?”

Anthropic’s Claude 3 AI Is Said to Have ‘Near-Human’ Abilities

Anthropic has released Claude 3, claiming new industry benchmark results that show the family of three new large language models approaching “near-human” cognitive capability in some instances. Accessible via Anthropic’s website, the three new models — Claude 3 Haiku, Claude 3 Sonnet and Claude 3 Opus — represent successively increased complexity and parameter count. Sonnet powers the current Claude.ai chatbot and is free, for now, requiring only an email sign-in. Opus comes with the $20 monthly Claude Pro subscription. Both are generally available from the Anthropic website and via API in 159 countries, with Haiku coming soon.
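
For readers using the API access mentioned above, a basic call to Anthropic’s Messages API with the official Python SDK looks roughly like this; the prompt and token limit are illustrative values.

```python
# Sketch: querying Claude 3 Opus via Anthropic's Messages API (Python SDK).
# Prompt and max_tokens are illustrative values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the Claude 3 model family in two sentences."}],
)
print(message.content[0].text)
```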

CES: Session Details the Impact and Future of AI Technology

Dr. Fei-Fei Li, Stanford professor and co-director of Stanford HAI (Human-Centered AI), and Andrew Ng, venture capitalist and managing general partner at Palo Alto-based AI Fund, discussed the current state of artificial intelligence and expected near-term developments. As a general-purpose technology, AI will both deepen, as private-sector LLMs are developed for industry-specific needs, and broaden, as open-source public-sector LLMs emerge to address broad societal problems. Expect exciting advances in image models — what Li calls “pixel space.” When implementing AI, think about teams rather than individuals, and about tasks rather than jobs.

Woodpecker: Chinese Researchers Combat AI Hallucinations

The University of Science and Technology of China (USTC) and Tencent YouTu Lab have released a research paper on a new framework called Woodpecker, designed to correct hallucinations in multimodal large language models (MLLMs). “Hallucination is a big shadow hanging over the rapidly evolving MLLMs,” writes the group, describing the phenomenon as when MLLMs “output descriptions that are inconsistent with the input image.” Solutions to date have focused mainly on “instruction-tuning,” a form of retraining that is data- and computation-intensive. Woodpecker instead takes a training-free approach that purports to correct hallucinations directly in the generated text.
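
To make the training-free idea concrete, the sketch below mirrors the pipeline the paper describes (concept extraction, question formulation, validation against the image with visual expert models, then correction). Every helper here is a hypothetical placeholder passed in by the caller, not code from the paper.

```python
# Hypothetical sketch of a Woodpecker-style, training-free correction pass.
# All helpers are illustrative placeholders, not the authors' actual code.

def woodpecker_correct(image, caption, extract_concepts, ask_questions,
                       visual_expert, rewrite):
    # 1. Key concept extraction: objects/attributes the caption asserts.
    concepts = extract_concepts(caption)

    # 2. Question formulation: turn each claimed concept into checkable
    #    questions (e.g., "Is there a dog?", "How many chairs?").
    questions = ask_questions(concepts)

    # 3. Visual knowledge validation: answer the questions with expert
    #    models (object detection, VQA) grounded in the actual image.
    evidence = {q: visual_expert(image, q) for q in questions}

    # 4-5. Claim generation and correction: rewrite the caption so every
    #    claim is consistent with the collected visual evidence. No model
    #    retraining occurs anywhere in this pass.
    return rewrite(caption, evidence)
```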

FTC Investigates OpenAI Over Data Policies, Misinformation

The Federal Trade Commission has opened a civil investigation into OpenAI to determine the extent to which its data policies are harmful to consumers as well as the potentially deleterious effects of misinformation spread through “hallucinations” by its ChatGPT chatbot. The FTC sent OpenAI dozens of questions last week in a 20-page letter instructing the company to contact FTC counsel “as soon as possible to schedule a telephonic meeting within 14 days.” The questions deal with everything from how the company trains its models to the handling of personal data.