Adobe May Be Using Your Photos to Train its AI

Adobe Content Analysis

Adobe has added a “Content Analysis” section to its privacy and personal data collection permissions that, unless opted out of, allows the company to use photographers’ images to train its artificial intelligence and machine learning models.

Content Analysis may have been added to Adobe’s Privacy and Personal Data page as far back as last summer, since the company last updated its FAQ page for the program on August 10, 2022.

A user’s Privacy and Personal Data settings are located in the Account and Security section of the Creative Cloud web interface, and the Content Analysis section specifies that users give Adobe permission to analyze content for use in training its machine learning models. This can be toggled off, but is enabled by default.

“Adobe may analyze your content using techniques such as machine learning (e.g., for pattern recognition) to develop and improve our products and services,” the permission line reads. “If you prefer that Adobe not analyze your files to develop and improve our products and services, you can opt out of content analysis at any time. This setting does not apply in certain limited circumstances.”


Adobe specifies that turning content analysis off doesn’t apply when users choose to participate in programs where they proactively submit content to develop and improve its products and services, such as beta and pre-release programs, or to any images listed for sale on Adobe Stock.

As Baldur Bjarnason — a web developer and consultant based in Hveragerði, Iceland — says on Mastodon, this program is opt-out, not opt-in, which means that everyone who uses Creative Cloud must actively toggle this option off if they don’t want to be included in the data gathering program.

“Adobe may analyze your Creative Cloud or Document Cloud content to provide product features and improve and develop our products and services,” Adobe explains.

“Creative Cloud and Document Cloud content include but aren’t limited to image, audio, video, text or document files, and associated data. Adobe performs content analysis only on content processed or stored on Adobe’s servers; we don’t analyze content processed or stored locally on your device.”

Basically, Lightroom users who take advantage of Adobe’s photo syncing services have been giving Adobe permission to use their images if the opt-out has not been toggled. Of note, Bjarnason says that this program only applies if photos find their way onto Adobe’s servers.

“This obviously only applies if the pictures touch Adobe’s servers in some way, such as cloud syncing. That’s basically every picture ever uploaded into Lightroom,” he writes. “I’ve been using Lightroom to sync photos from my Windows desktop to my iPad. Now I need to reconsider that.”

“Machine learning-enabled features can help you become more efficient and creative,” Adobe says in defense of its data gathering. “For example, we may use machine learning-enabled features to help you organize and edit your images more quickly and accurately. With object recognition in Lightroom, we can auto-tag photos of your dog or cat.”

Adobe says that the service is used to “develop and improve [its] products and services,” and some have taken that to mean that the company could be gathering photo data to inform an AI-based generative image system like DALL-E 2 or Stable Diffusion. Generative AI has come under fire from artists who say such systems are trained on their work without permission, and while Adobe is technically asking for permission here, the company’s chosen method of enrolling users automatically without actively informing them has not been well-received.

DPReview points out that under certain circumstances, Adobe says that it is possible that a user’s content could be manually reviewed by humans for product improvement and development purposes.

“This means humans within Adobe (or contracted personnel) could possibly review sensitive media from users’ Creative Cloud files, should a user fall within one of the categories of exceptions mentioned… on Adobe’s Content Analysis FAQ page. This clearly causes concern for photographers and other creatives whose media involves more sensitive imagery,” Gannon Burgett of DPReview writes.

Obviously, the situation raises significant privacy concerns. Adobe likely chose to enroll users automatically because, had it made the program opt-in instead, few users would likely have volunteered to participate.

“We give customers full control of their privacy preferences and settings. The policy in discussion is not new and has been in place for a decade to help us enhance our products for customers. For anyone who prefers their content be excluded from the analysis, we offer that option here,” a spokesperson from Adobe’s public affairs office tells PetaPixel.

“When it comes to Generative AI, Adobe does not use any data stored on customers’ Creative Cloud accounts to train its experimental Generative AI features. We are currently reviewing our policy to better define Generative AI use cases.”


Update 1/6: Added a statement from Adobe.


Image credits: Header photo licensed via Depositphotos.
