Adobe Says It Isn’t Using Your Photos to Train AI Image Generators
In early January, Adobe came under fire for language in its terms and conditions that seemed to indicate it could use photographers’ photos to train generative artificial intelligence systems. The company has reiterated that this is not the case.
The language of the “Content analysis” section in its Privacy and Personal Data settings says that by default, users give Adobe permission to “analyze content using techniques such as machine learning (e.g., for pattern recognition) to develop and improve our products and services.” To many photographers, that sounded a lot like the training process behind artificial intelligence (AI) image generators.
One of the sticking points of this particular section is that Adobe makes it opt-out rather than opt-in, so many photographers likely had no idea they had already agreed to it.
“Machine learning-enabled features can help you become more efficient and creative,” Adobe explains. “For example, we may use machine learning-enabled features to help you organize and edit your images more quickly and accurately. With object recognition in Lightroom, we can auto-tag photos of your dog or cat.”
When pressed for comment in PetaPixel’s original coverage on January 5, Adobe didn’t immediately respond, leaving many to assume the worst. However, a day later, the company did provide some clarity on the issue to PetaPixel that some photographers may have missed.
“We give customers full control of their privacy preferences and settings. The policy in discussion is not new and has been in place for a decade to help us enhance our products for customers. For anyone who prefers their content be excluded from the analysis, we offer that option here,” a spokesperson from Adobe’s public affairs office told PetaPixel.
“When it comes to Generative AI, Adobe does not use any data stored on customers’ Creative Cloud accounts to train its experimental Generative AI features. We are currently reviewing our policy to better define Generative AI use cases.”
Adobe saying customer data has never been used to train generative AI. @scottbelsky told me they're working on making the policy more explicit. https://t.co/4CuzFjEEt8 https://t.co/QahNv9Q78Q
— Brody Ford (@BrodyFord_) January 18, 2023
It is that stance that was reiterated to Bloomberg’s Brody Ford this week when he interviewed Adobe’s Chief Product Officer Scott Belsky. In that interview, Belsky says that the mass criticism of the language in its Privacy and Personal Data settings served as a “wake-up call” and promises that the policy isn’t intended to apply to AI image generators.
“We are rolling out a new evolution of this policy that is more specific. If we ever allow people to opt-in for generative AI specifically, we need to call it out and explain how we’re using it,” Belsky tells Bloomberg.
“We have to be very explicit about these things.”
AI image generators have been a hot-button topic for many artists over the last year as platforms like Stable Diffusion, DALL-E, and Midjourney have all become exponentially more adept at mimicking human art and photography. Additionally, many take issue with the fact that these AI systems had to be trained on existing images, which raises serious content ownership concerns. This week, a group of artists and Getty Images separately filed lawsuits against Stability AI, the company behind Stable Diffusion, claiming it infringed on copyrights.
Image credits: Header photo licensed via Depositphotos.