At its I/O developer event yesterday, Google announced two new features that will better help users understand the origins of an image — including if it is AI-generated.
The first addition to Google Images is the “About this image” feature. It will show when the specific photo the user is looking at was first indexed by Google, where the image first appeared online, and which other websites it has appeared on.
These sources will include news media and fact-checking websites, which Google hopes will give users more context about the image they are searching for.
If, for example, an image was first uploaded by Reuters or CNN, it is more likely to be genuine than if it first appeared on a random subreddit.
To use the “About this image” feature, users will have to click on the three dots that appear above an image in the search results. Alternatively, they can search with an image on Google Lens or swipe up when viewing an image on the Google App.
The feature is expected to roll out in the coming months, and later this year users of the Google Chrome web browser will be able to access “About this image” by right-clicking on a photo.
How Will Google Identify AI-Generated Images?
Yesterday, Google also announced that the artificial intelligence (AI) image generator Adobe Firefly will be integrated into its conversational generative AI chatbot, Bard, and that Google Photos will let users edit their pictures with AI.
AI-generated images from Google’s platforms will feature a label that marks them clearly as synthetic.
Creators and publishers outside Google’s walls will also be able to add similar markup to AI-generated images, and the company says that major AI image players such as Midjourney and Shutterstock are already on board to adopt the feature.
This means that viral AI images, such as those depicting Donald Trump being arrested or Pope Francis wearing a puffer jacket, which some people believed were genuine, will be more easily identified as fake.
The AI-generated label will roll out in the “coming months,” and Google says it hopes the change will improve visual literacy and help people assess whether an image is genuine or AI-generated — a problem the world didn’t have 12 months ago.