By Paula Parisi, November 27, 2024
Anthropic is releasing what it hopes will become a new standard in data integration for AI. Called the Model Context Protocol (MCP), its goal is to eliminate the need to write custom integration code each time a company’s data is connected to a model. The open-source MCP tool could become a universal way to link data sources to AI, with the aim of having models query databases directly. MCP is “a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments,” according to Anthropic. Continue reading Anthropic Protocol Intends to Standardize AI Data Integration
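MCP is built on JSON-RPC 2.0 message exchange, so a client asking a server which data resources it exposes might look roughly like the sketch below. This is illustrative only: the helper name `make_request` is ours, and the `resources/list` method string is an assumption about the protocol rather than a quotation from Anthropic’s specification.

```python
import json

def make_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request envelope of the kind MCP exchanges."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# A hypothetical client request asking an MCP server to enumerate
# the data sources (repositories, databases, documents) it offers.
wire = make_request(1, "resources/list")
print(wire)
```

The point of the standard is that this one request shape works against any conforming server, whether it fronts a content repository, a business tool, or a development environment.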
By Paula Parisi, October 21, 2024
Nvidia has debuted a new AI model, Llama-3.1-Nemotron-70B-Instruct, that it claims outperforms competitors GPT-4o from OpenAI and Anthropic’s Claude 3.5 Sonnet. The impressive showing has prompted speculation of an AI shakeup and a significant shift in Nvidia’s AI strategy, which has thus far been focused primarily on chipmaking. The model was quietly released on Hugging Face, and Nvidia says as of October 1 it ranked first on three top automatic alignment benchmarks, “edging out strong frontier models” and vaulting Nvidia to the forefront of the LLM field in areas like comprehension, context and generation. Continue reading Nvidia’s Impressive AI Model Could Compete with Top Brands
By Paula Parisi, July 25, 2024
In April, Meta Platforms revealed that it was working on an open-source AI model that performed as well as proprietary models from top AI companies such as OpenAI and Anthropic. Now, Meta CEO Mark Zuckerberg says that model has arrived in the form of Llama 3.1 405B, “the first frontier-level open-source AI model.” The company is also releasing “new and improved” Llama 3.1 70B and 8B models. In addition to general cost and performance benefits, the fact that the Llama 3.1 405B model is open source “will make it the best choice for fine-tuning and distilling smaller models,” according to Meta. Continue reading Meta Calls New Llama the First Open-Source Frontier Model
By Paula Parisi, June 11, 2024
California tech companies are bristling at a state bill that would force them to enact strict safety protocols, including installing “kill switches” to turn off AI models that present a public risk. Silicon Valley has emerged as a global AI leader, and the proposed law would impact not only OpenAI, but Anthropic, Cohere, Google and Meta Platforms. The bill, SB 1047, focuses on what its lead sponsor, State Senator Scott Wiener, calls “common sense safety standards” for frontier models. Should the bill become law, it could also affect firms such as Amazon that provide AI cloud services to California customers even though they are not based in the state. Continue reading Tech Firms Push Back Against California AI Safety Regulation
By Paula Parisi, May 30, 2024
OpenAI has begun training a new flagship artificial intelligence model to succeed GPT-4, the technology currently associated with ChatGPT. The new model — which some are already calling GPT-5, although OpenAI hasn’t yet shared its name — is expected to take the company’s compute to the next level as it works toward artificial general intelligence (AGI), intelligence equal to or surpassing human cognitive abilities. The company also announced it has formed a new Safety and Security Committee two weeks after dissolving the old one upon the departure of OpenAI co-founder and Chief Scientist Ilya Sutskever. Continue reading OpenAI Is Working on New Frontier Model to Succeed GPT-4
By ETCentric Staff, April 3, 2024
The United States has entered into an agreement with the United Kingdom to collaboratively develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Coming after commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the countries “working to align their scientific approaches” and to accelerate evaluations for AI models, systems and agents. Continue reading U.S. and UK Form Partnership to Accelerate AI Safety Testing
By Paula Parisi, November 2, 2023
OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threat assessment involving imminent “superintelligence” AI, also called frontier models. Topics under review include the required parameters for a robust monitoring and prediction framework and how malicious actors might seek to leverage stolen AI model weights. The announcement was made shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI