You Can Now Ask Google Gemini Whether an Image is AI-Generated or Not

A photo of a dog jumping to catch a red frisbee appears next to an AI analysis that confirms the image is mostly AI-generated, with a question below asking if the image was generated by AI.
Google Keyword

Google has a new feature that allows users to find out whether an image is AI-generated or not — a much-needed tool in a world of AI slop.

The new feature is available via Google Gemini 3, the latest version of the company’s multimodal large language model. To ascertain whether an image is AI-generated, simply open the Gemini app, upload the image, and ask something like: “Is this image AI-generated?”

Gemini will give an answer, but it is predicated on whether that image contains SynthID, Google’s digital watermarking technology that “embeds imperceptible signals into AI-generated content.” Images generated by one of Google’s models, such as Nano Banana, will be flagged by Gemini as AI-generated.

“We introduced SynthID in 2023,” Google says in a blog post. “Since then, over 20 billion AI-generated pieces of content have been watermarked using SynthID, and we have been testing our SynthID Detector, a verification portal, with journalists and media professionals.”

While SynthID is Google’s technology, the company says that it will “continue to invest in more ways to empower you to determine the origin and history of content online.” It plans to incorporate the Coalition for Content Provenance and Authenticity (C2PA) standard so users will be able to check the provenance of an image created by AI models outside of Google’s ecosystem.

“As part of this, rolling out this week, images generated by Nano Banana Pro (Gemini 3 Pro Image) in the Gemini app, Vertex AI, and Google Ads will have C2PA metadata embedded, providing further transparency into how these images were created,” Google adds. “We look forward to expanding this capability to more products and surfaces in the coming months.”
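To give a rough sense of what “C2PA metadata embedded” means at the byte level: in JPEG files, C2PA manifests are carried inside APP11 marker segments as JUMBF boxes. The sketch below is a minimal Python heuristic (the function names are mine, not from any official tool) that checks whether a JPEG even contains such a segment. It only detects the presence of candidate data; real provenance verification, including cryptographic signature checks, requires a proper implementation such as the C2PA project’s `c2patool`.

```python
def find_app11_segments(jpeg_bytes: bytes) -> list:
    """Walk JPEG marker segments and collect the payloads of APP11 (0xFFEB) segments."""
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or SOS: metadata segments are behind us
            break
        if 0xD0 <= marker <= 0xD7 or marker == 0x01:  # standalone markers, no length field
            i += 2
            continue
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")  # includes the 2 length bytes
        if marker == 0xEB:  # APP11, where JUMBF/C2PA data lives
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments


def looks_like_c2pa(jpeg_bytes: bytes) -> bool:
    """Heuristic only: C2PA manifest stores in JPEG are JUMBF boxes labelled 'c2pa'."""
    return any(b"c2pa" in seg for seg in find_app11_segments(jpeg_bytes))
```

Stripping or forging metadata is trivial, which is why the standard pairs the embedded manifest with cryptographic signatures; a missing manifest therefore proves nothing about whether an image is AI-generated.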

Can Google Gemini Tell You if an Image is AI-Generated?

I put Gemini’s latest model to the test to see whether it can accurately spot an AI-generated image. Results below.

Screenshot of an AI image detection result showing a cityscape photo through a window. The tool says the image was not created with Google AI but can’t confirm if other AI tools were used.
First, I uploaded a real photo to Gemini. It correctly declared the image “was not created with Google AI.”
A girl with long brown hair and a small dog sit at a table with Starbucks drinks, surrounded by autumn leaves. The image is labeled as an AI-generated image detection result.
Then, I uploaded an AI image made by ChatGPT. OpenAI does not use SynthID, so Gemini found no watermark. However, it did pick up on “several tell-tale signs” typical of AI-generated imagery, highlighting the distorted Starbucks logos on the cups and the “blocky” look of the cartoon. It even went on to specifically name ChatGPT as the potential source.
A man squats outdoors holding a camera, surrounded by meerkats. A Google AI tool interface indicates the image is AI-generated, with a message confirming detection of a Google AI watermark.
Finally, I uploaded a photo edited in Google AI Studio. Gemini detected the SynthID watermark and declared the image to be “all or part” created with Google AI. It also, comically, picked up on the “unrealistic animal behavior.”

So far, so good, and once C2PA support is added, the system will feel much more complete. The best part is that Gemini offers a relatively simple way to check whether an image was generated by AI. Photographers should consider adding a C2PA signature to their own photos, which can be done easily in Lightroom or Photoshop.
