Google’s Lack of AI Transparency in the Pixel 9 Pro is Downright Dangerous


Pretty much as expected, Google leaned hard on the AI capabilities of the Pixel 9 series as selling points over major hardware improvements (a nice new design notwithstanding), but its implementation of these features, especially the generative AI additions, lacks anything close to the proper level of disclosure it needs.

Allison Johnson, formerly of DPReview and now at The Verge, was able to generate wrecks, disasters, drug use, and corpses and add them to real photos in a way that didn't make them look obviously fake. While Google seemingly has some guardrails in place to block certain prompts, Johnson was clearly able to bypass them.

The generative AI is sometimes pretty convincing. Not perfect of course, but good enough to fool many. | Photo by Chris Niccolls, processed with Google Magic Editor on the Pixel 9 Pro XL

“It’s also never been easier to circulate misleading photos quickly. The tools to convincingly manipulate your photos exist right inside the same device you use to capture it and publish it for all the world to see,” Johnson writes.

Even worse, Google doesn’t appear to be adding any transparency indicating that AI was used to create these images. We noticed this almost immediately last week when we got the Pixel 9 Pro XL in hand: Google doesn’t add a watermark or show anything in the “info” tab of an AI-adjusted image that would call out the generative edits, although the company claims it does adjust the metadata.

With that in mind, Johnson went a step further and uploaded an AI-edited photo to Instagram, where it wasn’t flagged with an “AI Info” label either, which means Google isn’t adding anything to the metadata that Instagram looks for, unlike Adobe.

Images made from scratch using Gemini’s image generator carry SynthID, Google DeepMind’s invisible watermark that marks them as AI-generated, but images altered using generative AI in the Pixel 9’s Magic Editor do not. It’s a frankly shocking, woeful oversight.

Samsung’s addition of an AI watermark was a weak effort at marking AI-generated content, but it was at least something. Samsung’s generative AI also wasn’t very good, but a lot has changed in the last seven months and the technology has advanced significantly in the Pixel 9 series. Google’s lack of any transparency here is therefore not only concerning but downright dangerous.

“Photos that have been edited with Magic Editor include metadata built upon technical standards from IPTC. Our work on Magic Editor is guided by our AI Principles, and we’re focused on moving forward deliberately and carefully, while also providing a helpful editing experience that is informed by user feedback so we can learn, improve, and innovate responsibly on AI together,” the company says regarding the Reimagine prompt in Magic Editor.

“The metadata is built upon technical standards from The International Press Telecommunications Council (IPTC) which provides widely used, common open standards to improve the management and exchange of information about media files between content providers, intermediaries and consumers. We are following its guidance for tagging images edited using generative AI. We will continue to fine-tune our approach for adding transparency around edits.”

Misinformation is already running rampant on social media using fully generated images, including those shared by a U.S. presidential nominee, but those still don’t look like “real” photos upon close inspection. That said, people still fall for them. It’s not hard to imagine that the rate of misinformation will increase with what Google has created here.

Sure, using creative language to get around what guardrails exist violates Google’s policies, but that is unlikely to stop anyone who can still do it.

“What’s most troubling about all of this is the lack of robust tools to identify this kind of content on the web. Our ability to make problematic images is running way ahead of our ability to identify them,” Johnson says.

The fact that so much of a photo can be altered believably using Google’s Magic Editor, with basically no indication that the results aren’t real, is downright dangerous.


Update 8/21: Added a statement from Google.
