Google is First Tech Giant to Test Watermarks to Label AI Images

A GIF shows various photos, each split to compare the watermarked and unwatermarked versions.

As images made by artificial intelligence (AI) become more advanced, companies are racing to find ways to combat misinformation. On Tuesday, Google announced it is testing a watermarking tool that can identify whether an image was made using AI.

The digital watermark was created by the tech giant’s AI division, DeepMind. The new tool, SynthID, is still in beta and is being released to a limited number of Vertex AI customers using Imagen, according to a release from DeepMind.

Imagen is Google’s text-to-image diffusion model, comparable to Midjourney and DALL-E.

“This technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye, but detectable for identification,” the release explains.
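Google has not published how the pattern is actually embedded, but the general idea behind pixel-level, imperceptible watermarking can be shown with a classic spread-spectrum sketch: add a tiny, key-seeded pseudo-random perturbation to the pixels, then detect it later by correlating the image against the same key. The Python below is a toy illustration of that principle only, not SynthID’s algorithm; the function names, key, and strength value are all invented for the example.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint, key-seeded +/-1 pattern to the pixel values.

    A perturbation of ~2 gray levels is invisible to the eye but remains
    statistically detectable by anyone who knows the key.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Correlate the image against the key's pattern.

    Scores near 0 suggest no watermark; a score near `strength` means
    the key's pattern is present.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    residual = image.astype(np.float64) - image.mean()
    return float((residual * pattern).mean())

# Toy demo on a random 256x256 grayscale "image".
original = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.uint8)
marked = embed_watermark(original, key=42)
print(detect_watermark(original, key=42))  # close to 0: not watermarked
print(detect_watermark(marked, key=42))    # close to 2: watermarked
```

Even at a strength of a couple of gray levels, far too small to see, the correlation score of a marked image stands out clearly from the near-zero score of an unmarked one.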

SynthID provides three levels of confidence: “Digital watermark detected,” “Digital watermark not detected,” and “Digital watermark possibly detected.” DeepMind describes the last of these as “Could be generated. Treat with caution.” The first two simply mean the work is likely or unlikely, respectively, to have been generated by Imagen. Notably, none of these options claims complete certainty.
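Google does not say how those buckets are produced, but a plausible reading is that the detector returns a confidence score that is thresholded into the three labels. A hypothetical sketch, where the score range and both thresholds are invented purely for illustration:

```python
def classify_watermark(score: float, low: float = 0.25, high: float = 0.75) -> str:
    """Bucket a hypothetical detector confidence in [0, 1] into SynthID's
    three reported outcomes. The thresholds are invented for illustration;
    Google has not published how the real scores are computed or cut.
    """
    if score >= high:
        return "Digital watermark detected"
    if score <= low:
        return "Digital watermark not detected"
    return "Digital watermark possibly detected"  # "Treat with caution."
```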

DeepMind emphasizes that SynthID is not perfect, especially regarding “extreme image manipulations.” However, Google does not explain what would be considered extreme or what might make something more challenging for SynthID to parse.

Beyond identifying AI-generated work made by others, the watermark makes it possible for Imagen users to communicate that their images are AI-made as well, even if those images are later edited or shared by others.

The release illustrates this with a photo that went through multiple edits but still reportedly carries a detectable SynthID watermark. Google adds that because the watermark is imperceptible, it does not diminish an image’s aesthetics and is harder to edit out.
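For a rough intuition about why such a watermark can survive editing, the toy spread-spectrum sketch from earlier behaves the same way: because detection averages over every pixel, a mild edit barely moves the score. Reusing the embed_watermark and detect_watermark functions defined above (again, an illustration of the principle, not SynthID’s actual robustness):

```python
# Reusing embed_watermark/detect_watermark and `marked` from the earlier
# sketch: a brightness shift plus mild noise barely moves the detection
# score, because the correlation averages over every pixel in the image.
rng = np.random.default_rng(1)
edited = np.clip(marked.astype(np.float64) + 10 + rng.normal(0, 5, marked.shape),
                 0, 255).astype(np.uint8)
print(detect_watermark(edited, key=42))  # still close to 2
```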

Crucially, however, it is unclear whether SynthID can be used on AI-generated images not made with Imagen or if users can attach a SynthID watermark to AI-generated content created with another model. It seems unlikely, based on the release, which states, “SynthID could be expanded for use across other AI models.” PetaPixel reached out to Google for clarification, and Google explained that SynthID is not available for other AI models, but that it hopes to add it in the future.

“While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally,” DeepMind’s release reads. “Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”

Misinformation driven by AI-generated images has become increasingly common in the past year. Just last week, an AI-made mugshot of former President Donald Trump made the rounds online before the real one was released.

Image generators typically have rules regarding appropriate content. But “racist and conspiratorial” false images can still be made, as is the case with Midjourney, according to a recent study.


Update: This article has been updated with clarification from Google concerning the availability of SynthID for third-party AI models.


Image credits: Google
