Photoshop’s New ‘Generative Fill’ Uses AI to Expand or Change Photos

Adobe is adding its Firefly generative artificial intelligence (AI) directly into Photoshop, enabling a new Generative Fill function that can extend images or remove objects while giving the AI access to Photoshop’s power and precision.

Firefly is what Adobe calls its generative AI — essentially the company’s answer to Midjourney. It was announced in March and, while its results aren’t quite at the level of its competitors’, Adobe distinguishes it with the claim that, unlike its contemporaries, it wasn’t trained on artists’ work without permission.

The combination of Firefly and Photoshop is being billed as “deep integration” of the generative AI with Photoshop’s core tools, so it results in a lot more than just image generation.

The new feature that Adobe is mainly focusing on is called Generative Fill which, on paper, sounds akin to what OpenAI’s DALL-E calls outpainting. In short, the feature automatically matches the perspective, lighting, and style of an image and allows users to extend or remove content from that image non-destructively in a few seconds using only simple text prompts.

Adobe says that Generative Fill allows editors to rapidly iterate on different concepts because it adds or subtracts from images non-destructively.

“Create newly generated content in generative layers, enabling you to rapidly iterate through a myriad of creative possibilities and reverse the effects when you want, without impacting your original image,” the company says. “Experiment with off-the-wall ideas, ideate different concepts and produce boundless variations of high-quality content as fast as you can type.”

Adobe provided a few examples of what the technology is capable of:

[Before-and-after examples: original images and results after Generative Fill]

Where Generative Fill separates itself from OpenAI’s outpainting is in its ability to completely reimagine a background, which Adobe shows is possible with its new feature (with varying success).

[Before-and-after examples: original images and backgrounds reimagined with Generative Fill]

In many of the examples, the expanded areas of the image are easy to miss at first glance. Only on closer inspection does a lack of sharpness and clarity reveal the difference between the original image and what Generative Fill produced. That said, some of the provided examples very clearly look like edits, so the AI has mixed success at this early stage.

[Before-and-after example: original image and result after Generative Fill]

Generative Fill is also being made available as a module in the Firefly beta which works in a browser, allowing users to experiment with the new feature on the web.

That’s not the only feature coming to Photoshop that is powered by Firefly. Adobe also previewed a “Contextual Task Bar” that is able to intelligently select and mask a subject at the push of a button, a new “Remove” tool that allows editors to select an object and immediately remove it, and a set of Adjustment Presets that can automatically edit a photo to fit a certain look.

[Images: Adjustment Presets and the Remove Tool]

Adobe says that all these AI enhancements are based on the same core principles that Firefly was built on.

“Adobe develops and deploys all AI capabilities with a customer-centric approach and according to its AI Ethics principles to ensure content and data transparency. Generative Fill supports Content Credentials, serving an essential role in ensuring people know whether a piece of content was created by a human, AI-generated or AI-edited,” the company claims.

“Content Credentials are like ‘nutrition labels’ for digital content and remain associated with content wherever it is used, published or stored, enabling proper attribution and helping consumers make informed decisions about digital content.”

Adobe Photoshop’s Generative Fill feature is being made available as a desktop-based beta today and will roll out in general availability at some point in the second half of 2023.


Image credits: Adobe
