Cryptographic C2PA Protocol Pursues Labeling of AI Content

Launched two years ago, C2PA is an open-source Internet protocol that cryptographically encodes origin metadata into content. The protocol, a more secure form of watermarking, is being put forth as a way of disclosing when material has been created wholly or in part using artificial intelligence, something the White House has said it wants companies to do. Impending European Union regulations will also mandate that some tech platforms label images, audio, and video generated by artificial intelligence using “prominent markings.” More than 1,500 companies are involved with C2PA through the Content Authenticity Initiative, making it a viable candidate for meeting those disclosure requirements.
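
To illustrate the idea of cryptographically binding origin metadata to content, here is a minimal, hypothetical sketch in Python. It is only a conceptual illustration: real C2PA manifests are CBOR/JUMBF structures signed with certificate-based (COSE) signatures, and the key, field names, and HMAC signature below are simplifications invented for this example.

```python
# Simplified illustration only: real C2PA manifests are CBOR/JUMBF structures
# signed with X.509 certificates, not ad-hoc JSON with a shared-key HMAC.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; C2PA uses certificate-based signing


def make_claim(content: bytes, generator: str, ai_generated: bool) -> dict:
    """Bind origin metadata to the content via a hash, then sign the claim."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_claim(content: bytes, claim: dict) -> bool:
    """Re-hash the content and re-check the signature to detect tampering."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and body["content_sha256"] == hashlib.sha256(content).hexdigest())


image = b"\x89PNG...fake image bytes..."
claim = make_claim(image, generator="ExampleImageModel", ai_generated=True)
print(verify_claim(image, claim))          # True: content matches the claim
print(verify_claim(image + b"x", claim))   # False: content has been altered
```

The point of the design, as in C2PA itself, is that the provenance record travels with the content and any modification to the content (or the record) breaks the cryptographic check.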

The Coalition for Content Provenance and Authenticity (C2PA) project was initiated by Adobe, Arm, Intel, Microsoft and Truepic, which came together under the non-profit Joint Development Foundation. Other companies, including the BBC, Nikon and Sony, subsequently joined the effort to develop a standard way of tagging audio and visual material with provenance information, including whether it is synthetic.

“Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam,” writes MIT Technology Review. C2PA Chair Andrew Jenks says membership has grown by 56 percent in the past six months. Shutterstock recently announced its intent to use C2PA to identify its AI-generated content, including that created by its DALL-E-powered image generator.

Shutterstock CTO Sejal Amin told Technology Review that the company wants to protect artists and users by “supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art.”

Adobe has already incorporated C2PA, “which it calls content credentials,” into products including Photoshop and Firefly, the company’s Content Authenticity Initiative Senior Director Andy Parsons told Technology Review.

Content verification firm Truepic demonstrates how C2PA works using a deepfake video made with Revel AI. When viewers hover over a small icon in the top right corner of the screen, an information box appears with provenance details, including a disclosure that reads “contains AI-generated content.”
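
That “contains AI-generated content” disclosure comes from assertions inside the embedded manifest. The sketch below, again in Python, shows how a viewer might decide whether to surface such a label, assuming a simplified JSON-like view of a decoded manifest; the exact layout is an assumption for illustration, though the `digitalSourceType` value is the IPTC term the C2PA specification uses for AI-generated media.

```python
# Hypothetical, simplified view of a decoded C2PA manifest. Real tooling
# (e.g. the open-source c2pa SDKs) exposes a richer structure; this layout
# is an assumption made for illustration.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

manifest = {
    "claim_generator": "ExampleGenerator/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {"action": "c2pa.created",
                     "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA}
                ]
            },
        }
    ],
}


def contains_ai_generated_content(manifest: dict) -> bool:
    """Return True if any recorded action declares the content AI-generated."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False


if contains_ai_generated_content(manifest):
    print("Contains AI-generated content")  # what the viewer surfaces on hover
```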

“Real success here depends on widespread, if not universal, adoption,” another MIT Technology Review article states, emphasizing that usability is key. “You wanna make sure no matter where it goes on the Internet, it’ll be read and ingested in the same way, much like SSL encryption,” said Truepic Public Affairs VP Mounir Ibrahim.

The protocol could play a critical role in U.S. election integrity for 2024, “when all eyes will be watching for AI-generated misinformation,” with project researchers “racing to release new functionality and court more social media platforms before the expected onslaught,” Technology Review reports.

Related:
Shutterstock Joins the Content Authenticity Initiative, Yahoo! Finance, 7/25/23
Publicis Groupe Joins Content Authentication Group C2PA, MediaPost, 6/5/23
What Will Stop AI from Flooding the Internet with Fake Images?, Vox, 6/3/23
Microsoft Pledges to Watermark AI-Generated Images and Videos, TechCrunch, 5/23/23
CAI Promises to Deliver More Transparency in AI Generated Images, PetaPixel, 3/21/23
