By Paula Parisi, October 25, 2023
OpenAI is developing an AI tool that can identify images created by artificial intelligence, specifically those made in whole or in part by its DALL-E 3 image generator. Calling it a “provenance classifier,” company CTO Mira Murati began publicly discussing the detection tool last week but said not to expect a general release anytime soon, despite her claim that it is “almost 99 percent reliable.” That is still not good enough for OpenAI, which knows much is at stake when public perception of an artist’s work can hinge on a label applied by a notoriously capricious AI. Continue reading OpenAI Developing ‘Provenance Classifier’ for GenAI Images
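OpenAI has not said how the provenance classifier works under the hood. As a rough illustration only, detectors of this kind are typically framed as binary image classifiers that output a confidence score and flag an image only above a high threshold; the sketch below uses a stand-in, untrained PyTorch model and a hypothetical 99 percent cutoff, not OpenAI’s actual system.

```python
# Illustrative sketch only: OpenAI has not published how its provenance
# classifier works. This shows the general shape of a binary
# "AI-generated vs. real" detector with a stand-in (untrained) PyTorch model
# and a hypothetical high-confidence threshold.
import torch
import torch.nn as nn

class ToyProvenanceClassifier(nn.Module):
    """Stand-in detector; a real system would be trained on labeled images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)  # single logit: likelihood the image is AI-made

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def flag_as_ai_generated(image: torch.Tensor, threshold: float = 0.99) -> bool:
    """Flag an image only when the score clears a very high (hypothetical) bar."""
    model = ToyProvenanceClassifier().eval()
    with torch.no_grad():
        score = model(image.unsqueeze(0)).item()
    return score >= threshold

# Example: a random 256x256 RGB tensor standing in for a decoded image.
print(flag_as_ai_generated(torch.rand(3, 256, 256)))
```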
By Paula Parisi, October 2, 2023
Nvidia’s Picasso continues to gain market share among visual companies looking for an AI foundry to train models for generative use. Getty Images has partnered with Nvidia to create custom foundation models for still images and video. Generative AI by Getty Images lets customers create visuals using Getty’s library of licensed photos. The tool is trained on Getty’s own creative library and has the company’s guarantee of “full indemnification for commercial use.” Getty joins Shutterstock and Adobe among enterprise clients using Picasso. Runway and Cuebric are using it, too — and Picasso is still in development. Continue reading Getty GenAI Tool for Images and Video Is Powered by Nvidia
By Paula Parisi, September 19, 2023
The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amid rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA. Continue reading UK’s Competition Office Issues Principles for Responsible AI
By Paula Parisi, September 18, 2023
The Department of Homeland Security is harnessing artificial intelligence, according to a memo by Secretary Alejandro Mayorkas explaining that the department will use AI to keep Americans safe while implementing safeguards to ensure civil rights, privacy rights and the U.S. Constitution are not violated. The DHS appointed Eric Hysen as chief AI officer, moving him into the role from his previous post as CIO. “DHS must master this technology, applying it effectively and building a world class workforce that can reap the benefits of AI, while meeting the threats posed by adversaries that wield AI,” Mayorkas wrote. Continue reading DHS Moves to ‘Master’ AI While Keeping It Safe, Trustworthy
By Paula Parisi, September 8, 2023
California Governor Gavin Newsom signed an executive order for state agencies to study artificial intelligence and its impact on society and the economy. “We’re only scratching the surface of understanding what GenAI is capable of,” Newsom suggested. Recognizing “both the potential benefits and risks these tools enable,” he said his administration is “neither frozen by the fears nor hypnotized by the upside.” The move was couched as a “measured approach” that will help California “focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader.” Continue reading Governor Newsom Orders Study of GenAI Benefits and Risks
By Paula Parisi, September 1, 2023
Google DeepMind and Google Cloud have teamed to launch what they claim is an indelible AI watermark tool, which, if it works as advertised, would mark an industry first. Called SynthID, the technique for identifying AI-generated images is launching in beta. The technology embeds its digital watermark “directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification,” according to DeepMind. SynthID is being released to a limited number of Google’s Vertex AI customers using Imagen, Google’s text-to-image model that generates photorealistic images. Continue reading Google Introduces an AI Watermark That Cannot Be Removed
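DeepMind has not disclosed how SynthID embeds or detects its mark. For intuition only, the toy sketch below hides a short bit pattern in an image’s least-significant bits, a classic pixel-level watermark that is invisible to the eye; the `embed` and `detect` helpers and the 8-bit pattern are illustrative assumptions, not Google’s method.

```python
# Conceptual illustration only: SynthID's technique is unpublished and far more
# robust. This toy least-significant-bit (LSB) watermark shows the basic idea
# of hiding a mark "directly into the pixels" without a visible change.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit ID

def embed(image: np.ndarray, mark: np.ndarray = WATERMARK) -> np.ndarray:
    """Write the mark into the least-significant bit of the first len(mark) values."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[: mark.size] = (flat[: mark.size] & 0xFE) | mark
    return out

def detect(image: np.ndarray, mark: np.ndarray = WATERMARK) -> bool:
    """Check whether the expected bit pattern is present."""
    return bool(np.array_equal(image.reshape(-1)[: mark.size] & 1, mark))

img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed(img)
print(detect(marked), detect(img))  # True, and (almost certainly) False
print(int(np.abs(marked.astype(int) - img.astype(int)).max()))  # per-pixel change is at most 1
```

A production scheme has to survive cropping, resizing and recompression, which is precisely what this toy LSB approach cannot do and what makes SynthID’s robustness claim notable.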
By Paula Parisi, August 22, 2023
YouTube is developing a plan for responsible AI that includes creating a framework to compensate recording artists and copyright holders for machine-generated music. YouTube’s Music AI Incubator — with support from early partner Universal Music Group — aims to help singers, songwriters, musicians and producers sort out issues like compensation and intellectual property protections, and work with trade groups and government officials on means of enforcement. YouTube CEO Neal Mohan says creators “have embraced AI to streamline and boost their creative processes,” with YouTube logging “more than 1.7 billion views of videos related to AI tools” this year. Continue reading YouTube Launches Music AI Incubator with UMG as Partner
By Paula Parisi, August 15, 2023
X is developing a video-calling feature as part of its rebranding as an “everything app.” X CEO Linda Yaccarino shared the news in her first television interview since leaving NBCUniversal in June to lead Elon Musk’s social media platform, then still known as Twitter. Yaccarino said X users will soon be able to make video calls based on their social ID alone, without sharing phone numbers. Long-form videos, creator subscriptions and the ability to make payments on the platform are additional features Yaccarino said are coming to X. Continue reading Yaccarino: X Getting Video Calls with Its ‘Everything’ Rebrand
By Paula Parisi, August 3, 2023
Meta Platforms is amping up its AI play, with plans to launch a suite of personality-driven chatbots as soon as next month. The company has been developing the series of artificially intelligent character bots with the goal of boosting engagement with its social media brands by making them available for “humanlike discussions” on platforms including Facebook, Instagram and WhatsApp. Internally dubbed “personas,” the chatbots simulate characters ranging from historical figures like Abraham Lincoln to a surfer dude who dispenses travel advice. Continue reading Meta Plans Personality-Driven Chatbots to Boost Engagement
By Paula Parisi, August 2, 2023
Local TV news may soon undergo an AI-driven revolution that will make artificially generated newscasts a reality nearly 40 years after digital anchor Max Headroom introduced the concept. Veteran newsman and author Hank Price predicts that while the full transition is still a few years away, the process is already underway: AI is being used to alter the voices and images of human anchors, and could eventually yield computer-generated newsreaders with their own personalities. Comparing the advent of newsroom AI to the switch to robotic cameras, he says the move will be costly up front but save money over time. Continue reading Artificial Intelligence Will Likely Impact the Future of TV News
By Paula Parisi, July 24, 2023
President Biden has secured voluntary commitments from seven leading AI companies that say they will support the executive branch’s goal of advancing safe, secure and transparent development of artificial intelligence. Executives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI convened at the White House on Friday to support the accord, which some criticized as a half measure, since the companies claim they have already embraced independent security testing and a commitment to collaborating with each other and the government. Biden stressed the need to deploy AI altruistically, “to help address society’s greatest challenges.” Continue reading Top Tech Firms Support Government’s Planned AI Safeguards
By Paula Parisi, July 14, 2023
As a next step in its push for ethical AI, Adobe has announced that its Firefly generative AI platform now supports text prompts in more than 100 languages. The company says Firefly has generated over one billion images across the standalone app and Photoshop since its March debut. Adobe has also deployed artificial intelligence in Express, Illustrator and Creative Cloud. Positioning the news as a global expansion, Adobe says its generative AI products will now support text prompts in users’ native languages in the standalone Firefly web service, with localization coming to more than 20 additional languages. Continue reading Adobe Pursues Ethical, Responsible AI in the Creative Space
By Paula Parisi, June 2, 2023
Snapchat is rolling out a new feature for its premium Snapchat+ tier: subscribers who send Snaps to My AI to let the artificial intelligence know what they’re up to will “receive a unique generative Snap back that keeps the conversation going” via My AI Snaps. The feature was previewed at the Snap Partner Summit in April as part of a larger push on AI updates, including the ability to invite the My AI chatbot into group chats with friends and to get AI Lens suggestions and place recommendations. In addition, the My AI chatbot, made free to all users this year, was updated to reply to users’ Snaps with a text-based response. Continue reading Snapchat+ Introduces ‘My AI Snaps’ for Chatbot Snap Backs
By Paula Parisi, May 23, 2023
Leaders at the G7 Summit in Hiroshima, Japan, are calling for discussions that could lead to global standards and regulations for generative AI, with the aim of ensuring responsible use of the technology. The heads of the member nations, which in addition to host Japan include Canada, France, Germany, Italy, the UK and the U.S. (plus the European Union), expressed the goal of forming a G7 working group to establish, by the end of the year, a “Hiroshima AI process” for discussing uniform policies for dealing with AI technologies including chatbots and image generators. Continue reading G7 Leaders Call for Global AI Standards at Hiroshima Summit
By Paula Parisi, May 11, 2023
AI startup Anthropic is sharing new details of the “safe AI” principles that helped train its Claude chatbot. Known as “Constitutional AI,” the method draws inspiration from sources ranging from the UN’s Universal Declaration of Human Rights to Apple’s Terms of Service, as well as Anthropic’s own research. “What ‘values’ might a language model have?” Anthropic asks, noting “our recently published research on Constitutional AI provides one answer by giving language models explicit values determined by a constitution, rather than values determined implicitly via large-scale human feedback.” Continue reading Anthropic Shares Details of Constitutional AI Used on Claude
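Anthropic’s published recipe has the model critique and revise its own drafts against written principles rather than relying on large-scale human feedback. The sketch below shows that critique-and-revise loop in broad strokes; `ask_model`, the two example principles and the single pass per principle are illustrative placeholders, not Anthropic’s implementation.

```python
# Minimal sketch of a Constitutional AI-style critique-and-revise loop, loosely
# following the recipe Anthropic describes in its research. `ask_model` is a
# placeholder for any large-language-model call, and the two principles are
# illustrative, not Anthropic's actual constitution.
from typing import Callable

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that most respects privacy and human rights.",
]

def constitutional_revision(prompt: str, ask_model: Callable[[str], str],
                            principles: list[str] = CONSTITUTION) -> str:
    """Draft a response, then critique and revise it once per principle."""
    response = ask_model(prompt)
    for principle in principles:
        critique = ask_model(
            f"Principle: {principle}\n\nCritique this response to '{prompt}':\n{response}"
        )
        response = ask_model(
            f"Rewrite the response to address this critique:\n{critique}\n\n"
            f"Original response:\n{response}"
        )
    return response

# Example with a trivial stand-in "model" that just truncates its instructions.
print(constitutional_revision("Explain photosynthesis.", lambda p: p[:80] + "..."))
```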