By Paula Parisi, September 19, 2023
The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amidst rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA. Continue reading UK’s Competition Office Issues Principles for Responsible AI
By Paula Parisi, September 18, 2023
The Department of Homeland Security is harnessing artificial intelligence, according to a memo from Secretary Alejandro Mayorkas explaining that the department will use AI to keep Americans safe while implementing safeguards to ensure civil rights, privacy rights and the U.S. Constitution are not violated. The DHS appointed Eric Hysen as chief AI officer, moving him into the role from his previous post as CIO. “DHS must master this technology, applying it effectively and building a world class workforce that can reap the benefits of AI, while meeting the threats posed by adversaries that wield AI,” Mayorkas wrote. Continue reading DHS Moves to ‘Master’ AI While Keeping It Safe, Trustworthy
By Paula Parisi, July 27, 2023
Advancing President Biden’s push for responsible development of artificial intelligence, top AI firms including Anthropic, Google, Microsoft and OpenAI have launched the Frontier Model Forum, an industry body that will work collaboratively with outside researchers and policymakers to implement best practices. The new group will focus on AI safety, research into its risks, and disseminating information to the public, governments and civil society. Other companies building bleeding-edge AI models will also be invited to join and participate in technical evaluations and benchmarks. Continue reading Major Tech Players Launch Frontier Model Forum for Safe AI
By Paula Parisi, July 24, 2023
President Biden has secured voluntary commitments from seven leading AI companies that say they will support the executive branch goal of advancing safe, secure and transparent development of artificial intelligence. Executives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI convened at the White House on Friday to support the accord, which some criticized as a half-measure, noting the companies had already embraced independent security testing and a commitment to collaborating with each other and the government. Biden stressed the need to deploy AI altruistically, “to help address society’s greatest challenges.” Continue reading Top Tech Firms Support Government’s Planned AI Safeguards
By Paula Parisi, July 14, 2023
As a next step in its advances in ethical AI, Adobe has announced that its Firefly generative AI platform now supports text prompts in more than 100 international languages. The company says Firefly has generated over one billion images across the standalone app and Photoshop since its launch in March. Adobe has also deployed artificial intelligence in Express, Illustrator and Creative Cloud. Positioning the news as a global expansion, Adobe says its generative AI products will now support text prompts in native dialects in the standalone Firefly web service, with localization coming to more than 20 additional languages. Continue reading Adobe Pursues Ethical, Responsible AI in the Creative Space
By Paula Parisi, April 24, 2023
In a move it sees as a force multiplier, Alphabet is consolidating the Brain team from Google Research with DeepMind, the UK-based artificial intelligence research lab acquired in 2014, into a single unit called Google DeepMind. Collective accomplishments include AlphaGo, Transformers, WaveNet and AlphaFold, as well as software frameworks like TensorFlow and JAX for expressing, training and deploying large-scale ML models. “Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI,” the company announced. Continue reading Google Restructures AI Research Units into Google DeepMind
By Paula Parisi, March 27, 2023
After several months of testing, Anthropic is making its AI chatbot Claude available for general release in two configurations: the high-performance Claude and a lighter, cheaper, faster option called Claude Instant. Anthropic was launched in 2021 by a pair of former OpenAI employees, and its Claude chatbots are competitors to that firm’s ChatGPT. Accessible through a chat interface and API in Anthropic’s developer console, Claude is being marketed as the product of training designed to produce “helpful, honest, and harmless” AI systems. To that end, Anthropic says “Claude is much less likely to produce harmful outputs.” Continue reading Anthropic Takes Claude Chatbot Public After Months of Tests
By Paula Parisi, February 28, 2023
Snapchat is launching My AI, a new chatbot running a customized version of the latest GPT technology from OpenAI. Available as an experimental feature to subscribers with a $3.99 per month Snapchat+ account, My AI rolls out starting this week, offering everything from birthday gift recommendations to weekend recreational plans, recipes and auto-generated poetry and prose. “As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything,” Snapchat cautions, explaining that “all conversations with My AI will be stored and may be reviewed to improve the product experience.” Continue reading Snapchat’s New AI Chatbot Is Powered by OpenAI GPT Tech
By Paula Parisi, February 21, 2023
With language models like ChatGPT dominating recent tech news, Meta Platforms has unveiled a new artificial intelligence model of its own called Toolformer that breaks new ground in that it can teach itself to use external apps and APIs. The result, Meta says, is that Toolformer retains the conversational aptitude and other strengths of large language models while shoring up the areas in which they typically do not excel, such as math and fact-checking, by figuring out how to use external tools like search engines, calculators and calendars. Continue reading Meta Toolformer Sidesteps AI Language Limits with API Calls
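The tool-use pattern described above can be illustrated with a minimal sketch. The snippet below is a hypothetical, simplified illustration of the general idea, not Meta’s actual implementation: the model emits an inline marker such as [Calculator(12 * 3.55)] in its output, and a small runtime executes the named tool and splices the result back into the text. The marker syntax, tool registry and calculator helper are all assumptions made for illustration.

```python
import re

# Hypothetical sketch of the tool-call pattern described above: the model's
# output contains inline markers like [Calculator(12 * 3.55)]; this runtime
# finds each marker, runs the matching tool and substitutes the result.
# Marker syntax and tool names are illustrative assumptions, not Meta's API.

def calculator(expression: str) -> str:
    """Evaluate a simple arithmetic expression (digits and + - * / ( ) only)."""
    if not re.fullmatch(r"[\d\s\.\+\-\*\/\(\)]+", expression):
        return "[error]"
    return str(eval(expression))  # input is character-whitelisted above

TOOLS = {"Calculator": calculator}

def execute_tool_calls(text: str) -> str:
    """Replace every [Tool(args)] marker in model output with the tool's result."""
    pattern = re.compile(r"\[(\w+)\(([^\]]*)\)\]")

    def run(match: re.Match) -> str:
        tool, args = match.group(1), match.group(2)
        return TOOLS[tool](args) if tool in TOOLS else match.group(0)

    return pattern.sub(run, text)

if __name__ == "__main__":
    model_output = "A dozen eggs at $3.55 each costs [Calculator(12 * 3.55)] dollars."
    print(execute_tool_calls(model_output))  # -> "... costs 42.6 dollars."
```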
By Paula Parisi, June 24, 2022
As part of an overhaul of its AI ethics policies, Microsoft is retiring several AI-powered facial analysis tools from public use, including a controversial algorithm that purports to identify a subject’s emotion from images. Other features Microsoft will remove for new users this week and phase out for existing users within a year include those that claim the ability to identify gender and age. Advocacy groups and academics have expressed concern regarding such facial analysis features, characterizing them as unreliable, invasive and subject to bias. Continue reading Microsoft Pulls AI Analysis Tool Azure Face from Public Use