Adobe is Adding Firefly Generative AI into Video, Audio, and Animation Apps
Hot on the heels of the debut of its generative artificial intelligence (AI) Firefly, Adobe has announced that it will be integrating its capabilities into the company’s video, audio, animation, and motion graphics design apps.
Announced last month, Firefly is Adobe’s answer to the likes of DALL-E and Midjourney and can create images and text effects from written descriptions alone. Adobe separated itself from the pack, though, by noting that Firefly wasn’t trained on “stolen” photos like other generative AI platforms but instead was trained exclusively on Adobe Stock images, openly licensed content, and public domain content whose copyright has expired. The result is that Firefly is nowhere near as capable as its competitors yet, but it should get there with time and use.
Most creatives probably didn’t expect Adobe to limit Firefly to image generation, and today they have been proven correct: the company has announced that it will be bringing Firefly’s capabilities to a suite of its apps.
“We are expanding the vision for Firefly to imagine ways we can bring generative AI into Adobe’s video, audio, animation, and motion graphics design apps,” Adobe’s Ashley Still writes on the company’s blog.
“We are truly in the golden age of video — short-form video is ubiquitous in news, social media, and entertainment. And, insatiable demand, multiplying channels, and globally distributed teams make it extra challenging to scale the production of high-quality creative work efficiently.”
Adobe says that it is exploring a range of ways to integrate Firefly, including text-to-color enhancements, advanced music and sound effects, text effects and logos, AI-powered script analysis and storyboard generation, and creative assistants.
The company intends to add the ability to change color schemes, the time of day, or even the visible season in videos that have already been recorded, which Adobe says will allow editors to instantly alter the mood of a shot and evoke a specific tone or feel.
“With a simple prompt like ‘Make this scene feel warm and inviting,’ the time between imagination and final product can all but disappear,” Still says.
For music, Adobe intends to allow editors to generate royalty-free custom sounds and music either as a placeholder or for final publication.
Additionally, the company says that Firefly may be able to dramatically accelerate pre-production, production, and post-production workflows using AI script analysis to automatically create storyboards and pre-visualizations.
Firefly should even be able to recommend b-roll clips for rough or final cuts.
Since Firefly can already create text effects for still images, it shouldn’t be a surprise to hear that this capability is being extended to motion graphics.
“With a few simple words and in a matter of minutes, creators can generate subtitles, logos and title cards and custom contextual animations,” Still adds.
Finally, Adobe believes Firefly will be able to generate personalized AI-powered “how-tos” that users can tap in order to learn new skills and “accelerate processes from initial vision to creation and editing.”
Adobe isn’t ready to actually launch these features just yet. The company says it is still working on them, but editors can expect to see them arrive in its suite of video and audio apps starting later this year.
Image Credits: Adobe