OpenAI Developing ‘Provenance Classifier’ for GenAI Images
October 25, 2023
OpenAI is developing an AI tool that can identify images created by artificial intelligence, specifically those made in whole or in part by its DALL-E 3 image generator. Calling it a "provenance classifier," company CTO Mira Murati began publicly discussing the detection tool last week but said not to expect a general release anytime soon, despite her claim that it is "almost 99 percent reliable." Even that is not good enough for OpenAI, which knows how much is at stake when a notoriously capricious AI filter can shape public perception of an artist's work.
OpenAI knows first-hand how inaccurate such detection tools can be: in January it released one it said could identify AI-generated text, only to shutter it in July over reliability issues.
The company says it is continuing to improve that software as well, part of its overarching goal of building a full suite of detection tools that will also cover audio.
In a blog post about DALL-E 3, OpenAI writes that its provenance classifier is "over 99 percent accurate at identifying whether an image was generated by DALL-E when the image has not been modified" and "over 95 percent accurate" with images that have been cropped, resized or otherwise modified.
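Those two figures imply a straightforward robustness check: run the classifier over images known to come from DALL-E 3, both as-is and after common edits, and compare the two hit rates. Below is a minimal Python sketch of such a harness; the `detector` callable is a hypothetical stand-in, since OpenAI has not published an API for the classifier, and the specific crop and resize amounts are our own assumptions.

```python
from pathlib import Path
from typing import Callable
from PIL import Image

def perturbations(img: Image.Image) -> list[Image.Image]:
    """The kinds of edits OpenAI's post mentions: cropping and resizing."""
    w, h = img.size
    return [
        img.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10)),  # center crop
        img.resize((max(1, w // 2), max(1, h // 2))),            # downscale
    ]

def hit_rates(paths: list[Path],
              detector: Callable[[Image.Image], bool]) -> tuple[float, float]:
    """Return (accuracy on unmodified images, accuracy on modified copies)
    for a set of images known to be DALL-E output."""
    clean = modified = total_mod = 0
    for p in paths:
        img = Image.open(p).convert("RGB")
        clean += detector(img)            # True counts as a correct hit
        for variant in perturbations(img):
            modified += detector(variant)
            total_mod += 1
    return clean / len(paths), modified / total_mod
```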
TechCrunch writes that in addition to achieving something closer to 100 percent accuracy across the board, OpenAI is also concerned about “the philosophical question of what, exactly, constitutes an AI-generated image.”
Artwork wholly generated by DALL-E 3 is an obvious example, “but what about an image from DALL-E 3 that’s gone through several rounds of edits, has been combined with other images and then was run through a few post-processing filters?” muses TechCrunch.
“The need for such detection tools is only growing in importance as AI tools can be used to manipulate or fabricate news reports of global events,” Bloomberg writes, noting that Adobe’s Firefly image generator “addresses another aspect of the challenge, by promising to not create content that infringes on intellectual property rights of creators.”
Likewise, Microsoft has pledged to protect enterprise customers from copyright infringement claims that may result from use of its Copilot AI services.
The usefulness of an OpenAI image detector optimized for DALL-E 3 may be limited if it cannot spot pictures generated by competing technologies like Midjourney, Stable Diffusion, and Firefly, Digital Trends points out, adding that "anything that can highlight fake images could have a positive impact."
Apps like Bing Image Creator and Google’s Search Generative Experience (SGE) “are, or will soon be, capable of creating convincing photorealistic images,” per Mashable. Companies that offer such tools often attempt to safeguard “against harmful behavior by adding watermarks or blocking users from creating images with famous people or using identifiable personal information.”
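As a toy illustration of the watermarking safeguard mentioned above, the sketch below hides a short tag in the least-significant bits of an image's red channel and checks for it later. Everything here (the tag, the function names) is our own invented example, not any vendor's actual scheme.

```python
import numpy as np
from PIL import Image

# 40-bit tag derived from an arbitrary marker string (our assumption).
TAG = np.unpackbits(np.frombuffer(b"GENAI", dtype=np.uint8))

def embed(img: Image.Image) -> Image.Image:
    """Overwrite the red-channel LSBs of the first pixels with the tag."""
    px = np.array(img.convert("RGB"))
    flat = px.reshape(-1, 3)                   # contiguous view into px
    red = flat[: TAG.size, 0]
    flat[: TAG.size, 0] = (red & 0xFE) | TAG   # clear LSB, then set tag bit
    return Image.fromarray(px)

def is_marked(img: Image.Image) -> bool:
    """Check whether the red-channel LSBs match the tag."""
    px = np.array(img.convert("RGB")).reshape(-1, 3)
    return bool(np.array_equal(px[: TAG.size, 0] & 1, TAG))
```

A naive LSB mark like this survives lossless formats such as PNG but is destroyed by JPEG compression, resizing or cropping, which is exactly why detection tools that do not depend on the image surviving intact remain attractive.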
DALL-E will "modify prompts without notice, and quietly throw errors when asked to generate specific outputs even if they comply with published guidelines" in order to "avoid creating sensitive content involving specific names, artist styles, and ethnicities," according to Decrypt.