Adobe’s Tech Eroded the Public’s Trust in Photos. Now It’s Trying to Fix It

As the tools to manipulate images become more capable – and as more people gain access to them – a new study says that the public’s trust in the photos they see online has been significantly harmed.

Adobe published a study earlier this week with the goal of better understanding how people think about what they see online and how they decide what to trust. The company also wanted to see how consumers and creatives distinguish between “altered content for good” and “altered content for bad,” and to learn what tools or solutions would help people feel more able to recognize and evaluate the authenticity and trustworthiness of digital content – a goal it has to balance against creating ever more capable creative tools.

The insights from the study were based on qualitative and quantitative research that included an online survey of 800 consumers and 400 creative professionals in the United States, as well as online focus groups.

Slide from Adobe’s Research Deck

What the study found is rather bleak: the widespread availability and ease of use of creative tools have dramatically eroded the public’s trust in images and its ability to discern real from fake.

Nearly two-thirds of those surveyed reported that they “frequently” came across fake images online, and 42% said they had seen someone else like or share a fake image.

Though 60% said they personally do a good job of working out the veracity of an image they find online, respondents admitted they lack the education and training to identify images that have been edited by professionals. In other words, a majority of people think they can identify a fake image only if it is obviously fake, and are far less certain of their abilities when an image is edited well – which is not entirely reassuring.

Roughly twice as many people worry about images altered to influence impressions of a politician (72%), an issue or event (69%), or to convey news and information (67%) as worry about images altered for artistic expression (34%).

Slide from Adobe’s Research Deck

Roughly two-thirds or more expressed concern that fake images will cause people to believe misleading information or fake news (74%), distrust the news (69%), no longer take the news seriously (68%), and tune out the news (63%). Some might argue that the entire point of spreading disinformation is to achieve that last outcome.

When asked who is most responsible for addressing the issue, 35% named the media, followed by companies that provide a platform to share images publicly – like social media (25%), companies that provide tools to alter images (19%), and the government (19%). 19% of consumers place the onus on creative professionals, while an even greater share of creatives hold themselves accountable (27%).

The study confirms that the public believes widespread access to the tools needed to make falsified images has led to distrust in the media.

“People do not place sole responsibility for solving the problem on any one group. Although media companies and social media rise to the top of the list, research suggests that everyone has a role to play,” the report reads.

Adobe seems to understand that part of the blame for this problem rests on its shoulders. By creating the tools to make these changes, for good or for ill, and by developing software improvements that only streamline that ability, Adobe is providing people with everything they need to continue driving a wedge between truth and lies.

We really believe in what we’re doing with the CAI… our commitment to this is really huge

Earlier this year, Adobe announced Neural Filters as part of its 2021 Photoshop update, which allow for unprecedented adjustments to images, from changing light direction and a subject’s age to altering facial expressions. It was argued that these filters opened the door to digital mayhem, as the tools of deception suddenly became sliders.

Adobe is in a tough spot. On one hand, the company wants to continue to develop software that makes creativity easier; on the other, it knows those same tools can be misused. Will Allen, Vice President of Community Products at Adobe, tells PetaPixel that Adobe isn’t adding anything to its platform that hasn’t been possible for decades.

“Are some of the things you can do with our tools getting easier over time? Absolutely. Is it new? No. These things are as old as the very medium itself, like the image of Teddy Roosevelt riding a moose,” Allen argues. “Nothing we launched this year wasn’t possible before; it just cuts out a lot of the drudgery work that creatives were already doing.”

“We believe in the power of creativity and the enormous benefits it can bring,” Allen says. “From family photography to making the latest and greatest film, we believe in widening access to powerful creative tools.”

Allen says Adobe and its many groups stand behind the concept of “creativity for all,” but are fully aware that if they did nothing, the situation would only get worse.

Adobe is the founder of the Content Authenticity Initiative (CAI), and one of the main reasons it created the organization was to combat this trust problem head-on.

“We want to enable responsible creativity,” he says. When asked about the ramifications of some of the new additions like Neural Filters, Allen conceded that the company fully realizes the potential for harm. “We are absolutely aware of it, it’s one of the reasons we launched the Content Authenticity Initiative.”

So while the company knows it is partially to blame – there is a significant difference between the skill it takes to manipulate film in a darkroom to put a president on a moose and moving a slider left and right to turn a frown into a smile – it has chosen to act to rectify the issue.

This research… reaffirms what we are doing with the Content Authenticity Initiative.

“We really grapple with this,” Allen attests. “We really believe in what we’re doing with the CAI. I am fortunate to work with an absolutely amazing group of folks on this, and our commitment to this is really huge. We’re doing this as an open standard, and working with people to design this in such a way to accommodate all use cases.”

The company wants to continue to push the envelope with technology and feature enhancements, but at the same time wants to find a solution to the problem of image trust. Adobe formed the Content Authenticity Initiative to help re-establish trust in what is seen online. The primary way it is doing this is through an opt-in attribution solution that it is architecting as an open standard in conjunction with numerous parties.

“We want this to be as easy as possible to adopt across the digital media and online ecosystem, and the standards will be written in the open as a JDF Project,” the CAI says. “Our goal is to increase online transparency and accelerate progress for consumers and creators alike. Adobe, along with Twitter, Qualcomm, Truepic, The New York Times, and others are collaborating through the CAI to create open standards and technology that enable everyone to know the source of visual content and to see how it was edited.”

Often referred to as provenance, attribution empowers content creators and editors, regardless of their geographic location or degree of access to technology, to disclose information about who created or changed an asset, what was changed, and how. While detection can help address the problem of trust in media reactively by identifying content suspected to be deceptive, attribution proactively adds a layer of transparency so consumers can be informed in their decisions.

Content with attribution exposes indicators of authenticity so that consumers can see who has altered a piece of content and what exactly has been changed. This ability to provide content attribution for creators, publishers, and consumers is essential to engender trust online.
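To make the idea concrete, here is a minimal sketch in Python of what such an attribution record might look like. This is purely illustrative: the field names, the SHA-256 content hashing, and the HMAC stand-in for a real digital signature are our assumptions, not the CAI’s actual specification.

```python
import hashlib
import hmac
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

# Hypothetical key for illustration only; a real system would use
# public-key signatures (e.g. Ed25519) tied to a verified identity.
SIGNING_KEY = b"demo-secret"

@dataclass
class AttributionRecord:
    """Illustrative provenance record that travels with an image."""
    creator: str
    created_at: str
    content_hash: str                           # SHA-256 of the image bytes
    edits: list = field(default_factory=list)   # e.g. [{"tool": "Photoshop", "action": "crop"}]
    signature: str = ""

def sign(record: AttributionRecord) -> str:
    # Serialize everything except the signature itself, then MAC it.
    payload = json.dumps({**asdict(record), "signature": ""}, sort_keys=True)
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def make_record(image_bytes: bytes, creator: str) -> AttributionRecord:
    record = AttributionRecord(
        creator=creator,
        created_at=datetime.now(timezone.utc).isoformat(),
        content_hash=hashlib.sha256(image_bytes).hexdigest(),
    )
    record.signature = sign(record)
    return record

def verify(record: AttributionRecord, image_bytes: bytes) -> bool:
    """A consumer-side check: the image and its stated history must both match."""
    hash_ok = record.content_hash == hashlib.sha256(image_bytes).hexdigest()
    sig_ok = hmac.compare_digest(record.signature, sign(record))
    return hash_ok and sig_ok
```

In a scheme like this, a viewer’s app would recompute the content hash and the signature; a mismatch on either means the image or its claimed history has been tampered with since the record was made.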

We are absolutely aware of [the ramifications of Neural Filters], it’s one of the reasons we launched the Content Authenticity Initiative.

“Creatives put a lot of the onus on themselves,” says Graeme Traynor, Managing Director of Insights at the Glover Park Group and one of the leaders of the study. “Having led the focus groups and the survey, creative professionals are naturally more aware than consumers of how images can be altered. They put an onus on themselves to use these tools in an appropriate and responsible way. It’s perfectly fine to change an image for artistic expression, but not for an image of news events that could impact politics.”

The goals of the CAI, through this open-source attribution system, are twofold. On one hand, it will give creatives the ability to tag their images so that those viewing them know who made the image, when, and what – if anything – was altered. On the other, it will directly address a second issue the study discovered: fear of theft and plagiarism.

According to the study, 86% of creative professionals worry about having their work stolen or plagiarized, and around 75% have already experienced this at least once. The idea of placing a tag on digital images received strong support from creative professionals, who view it as a potential solution to plagiarism: 73% supported the idea, and 83% said it would go a long way toward ensuring creators and artists get credit for their work.

The CAI believes an open-source attribution system is the key to solving the problem of trust.

“This research tells me a number of interesting things, and reaffirms what we are doing with the Content Authenticity Initiative,” Allen says. “Creatives worry, rightfully, about not getting credit for their work. How do I enable them to get more credit and attribution for the work they’ve done? How can you know and trust a piece of content, and know key facts about it? Who created it, where did it come from, and how was it edited? Was it pulled into Photoshop or another program? The initiative was launched to answer how to address these things.”

“Not only do creatives want to get credit for their work, they see this as a necessary step for combating misinformation.”

By attaching the answers to all of those questions to an image as data points that travel with it through the ecosystem, Allen and the group believe the system can solve multiple problems at once.
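As a rough illustration of how those data points could resist tampering in transit, each edit might append a record that references a hash of the record before it, so removing or altering any step breaks the chain. Again, this is a hypothetical Python sketch, not the initiative’s published design.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Hash a record deterministically so the next edit can reference it."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_edit(history: list, tool: str, action: str) -> None:
    """Append an edit record linked to the previous one."""
    prev = record_hash(history[-1]) if history else None
    history.append({"tool": tool, "action": action, "prev": prev})

def verify_chain(history: list) -> bool:
    """Walk the chain; any altered or deleted record breaks a link."""
    return all(
        history[i]["prev"] == record_hash(history[i - 1])
        for i in range(1, len(history))
    )

# Hypothetical example: an image edited in two programs, each logging its change.
history = []
append_edit(history, "Photoshop", "neural filter: adjust expression")
append_edit(history, "Lightroom", "exposure +0.3")
assert verify_chain(history)   # True: the history is intact
```

The design choice mirrors any append-only log: each entry commits to the one before it, so a verifier who walks the chain can detect whether any earlier step was rewritten or removed.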

For more on how the solution being built by the Content Authenticity Initiative works, read PetaPixel’s previous coverage here.


Editor’s note: A previous version of this article stated the study was performed by the Content Authenticity Initiative when it was in fact commissioned by Adobe directly. We apologize for this error.
