Adobe has developed a new tool that makes it easier for creatives to securely attribute their work, even if someone takes a screenshot and reposts it online. The Content Authenticity web app, which went into public beta today, allows you to embed invisible, tamper-resistant metadata into images and photos to help identify who owns them.
The new web app, first announced in October, is built on Adobe’s Content Credentials attribution system. Artists and creatives can attach information directly to their work, including links to their social media accounts, websites, and other details that identify them online. The app can also record an image’s editing history and lets creatives signal that they don’t want AI models trained on their work.
For added security, the Adobe Content Authenticity app lets creatives verify their identity through LinkedIn, and verified Behance portfolio profiles can also be linked in Content Credentials. This should make it harder to attach Content Credentials to fake online profiles, and given that LinkedIn isn’t exactly known for its creative community (yet), the choice could also be read as a dig at X. Formerly known as Twitter, X was one of the founding members of the Adobe-led Content Authenticity Initiative in 2019, before withdrawing from the partnership after Elon Musk acquired the company and turned its verification system into a paid subscription service.

According to Adobe, the Content Authenticity web app “is currently free while it is in beta,” although the company has not said whether this will change when the app fully launches. All you need is an Adobe account (which does not require an active Creative Cloud subscription).
Images don’t need to have been created or edited with Adobe apps to receive Content Credentials. While Adobe apps like Photoshop can already embed Content Credentials in images, the Content Authenticity web app not only gives users more control over what information to embed, but also lets them apply credentials to batches of up to 50 images at once, rather than one image at a time. Currently, only JPEG and PNG files are supported, but Adobe says that support for larger files and additional media, including video and audio, “will be coming soon.”
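Content Credentials are built on the C2PA standard, which binds cryptographically signed metadata to an image so that any alteration is detectable. Adobe’s actual implementation uses public-key signatures and invisible watermarking (which is how credentials can survive screenshots), but the core tamper-evidence idea can be illustrated with a simpler symmetric-signature sketch. The function names and HMAC scheme below are illustrative stand-ins, not Adobe’s or C2PA’s real API:

```python
import hashlib
import hmac
import json

def sign_credentials(image_bytes: bytes, metadata: dict, key: bytes) -> dict:
    """Bind metadata to an image by signing the image hash plus the metadata.

    Illustrative only: real Content Credentials use C2PA manifests with
    public-key signatures, not a shared-secret HMAC.
    """
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return payload

def verify_credentials(image_bytes: bytes, record: dict, key: bytes) -> bool:
    """Recompute the signature; any change to image or metadata breaks it."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False  # the image bytes were altered
    blob = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Editing the image, or changing the attribution metadata (say, swapping in a different author name), invalidates the signature, which is the property that makes the credentials “tamper-resistant.”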

Creators can also use the app to add tags to their work that signal to AI developers that they do not have permission to use it for AI training. This is far more efficient than opting out with each AI provider directly, a process that typically requires protecting each image individually, but there is no guarantee that every AI company will recognize or honor the tags.
Adobe says it is working with policymakers and industry partners to “create effective, creator-friendly opt-out mechanisms for Content Credentials-based tags.” The tag is one of several protections creatives can apply to keep AI models from training on their work, alongside systems such as Glaze and Nightshade. Andy Parsons, senior director of Content Authenticity at Adobe, told The Verge that third-party AI protections are unlikely to interfere with Content Credentials, so creatives can apply them to the same work in tandem.
However, the Content Authenticity app isn’t just for creatives: anyone can use it to check whether Content Credentials have been applied to an image they find online, much like the Content Authenticity extension for Google Chrome that launched last year. The web app’s verification tool will recover and display Content Credentials even if image hosting platforms have stripped them, along with an edit history, where available, that can show whether generative AI tools were used to create or manipulate the image.
An added benefit is that the Chrome extension and verification tool don’t depend on platform support, making it easier to authenticate content on sites where images are commonly shared without attribution. With AI-powered editing apps becoming more accessible, and manipulations correspondingly harder to detect, Adobe’s content authentication tools could help keep people from being misled by convincing fakes online.