Firefly is Adobe’s flagship generative AI, a text-to-image tool that powers Generative Fill in Photoshop and Text to Vector in Illustrator. Now, the 2024 Firefly video model will power three AI video tools inside the Premiere Pro video editing software: Generative Extend, Object Addition and Removal, and Text to Video.
Premiere already allows video editors to integrate Adobe’s Creative Cloud software such as After Effects (motion graphics) and Audition (audio mixing). Today’s announcement shows that Adobe will take the same approach to AI. The company also announced that it is in “early exploration” of integrating video generation from OpenAI (Sora), Runway, and Pika Labs into Premiere Pro.
Last week, Adobe showed off these new gen AI tools to IndieWire and shared examples (see the video at the top of the page) of what will be available inside Premiere Pro before the end of the year.
Generative Extend
Every editor has wished for a few more frames to facilitate an edit or smooth a transition. The Generative Extend tool would allow editors to use AI to create up to “a few seconds” of new frames that Adobe promises will “seamlessly add frames to make longer clips.”
In the preview video, the tool extends an actor’s performance by a few seconds to fill a gap in the editor’s timeline. While stretching a performance raises ethical and union issues, filmmakers have long used VFX tricks to blend footage and buy a few extra frames. This tool could let editors create those solutions in real time within Premiere.
Object Addition & Removal
Adding and removing objects from a frame is often VFX-intensive work, but Firefly’s tool would allow an editor to do it in real time. The key will be whether gen AI can maintain photorealism. The examples offered in the Adobe preview (adding stacks of diamonds to a briefcase and removing an electrical box from the frame) appear photorealistic, maintaining image and lighting consistency. In its press release and briefing with IndieWire, Adobe insisted that the upcoming Firefly video model generated these pixels without further manipulation.
Early results look promising, but until editors test these tools on a wide variety of footage, we won’t know how useful they will be.
Text to Video
The ability to create shots with text-to-video tools is nothing new. What’s notable is that Adobe will give filmmakers access to text-to-video from within Premiere, with options that will include Firefly as well as competitors Sora, Runway, and Pika Labs. Adobe clearly expects editors to use these tools in their daily workflow.
Copyright & Content Credentials
Many gen AI image and video models are built on copyrighted material. In an embarrassing interview, OpenAI CTO Mira Murati couldn’t even answer simple questions about what data trained Sora, the company’s video generation model.
However, filmmakers praised Adobe for using only licensed images to train Firefly. Adobe executives confirmed to IndieWire that the soon-to-be-released Firefly video model will continue to follow the same rules.
It’s strange, then, that Adobe will integrate third-party AI companies that are far less ethical when it comes to copyright. However, Adobe said it will continue to use Content Credentials.
“As a founding member of the Content Authenticity Initiative, Adobe promises to attach Content Credentials — a free, open-source technology that acts as a nutrition label for online content — to assets produced within its applications, so that users can see how content was made and which AI models were used to generate content created on Adobe platforms,” reads today’s press release.
The durability of these nutrition labels — and how easily they can be peeled off the jar — remains to be seen.
AI Audio Tools
In May, Premiere’s audio workflow will receive a series of upgraded tools, some of which are AI-powered and have been available in beta for a while. These include:
Interactive fade handles: Editors can create custom audio transitions by dragging handles on clips.
Audio category tagging: AI automatically tags audio clips as dialogue, music, sound effects, or ambience.
Effect badges: Visual indicators show which clips have effects applied, let editors quickly add new ones, and open effect parameters directly from the sequence.
Redesigned waveforms in the timeline: Clip waveforms intelligently resize as track height changes, and new colors make sequences easier to read.
AI-powered speech enhancement tool: Eliminates unwanted noise and improves poorly recorded dialogue; this one has been generally available since February.