Adobe is ushering in the next generation of AI art with an upcoming version of Premiere Pro.
It’s been about two years since Midjourney popularized AI art, which spans images generated entirely from scratch as well as “inpainting” and “outpainting.” Outpainting attracted attention because AI was being used to essentially extend the boundaries of photographs and paintings, generating a plausible continuation of what wasn’t there. Now Adobe is bringing the same idea to Premiere Pro.
On Monday, Adobe showed off a video version of what it calls Generative Fill, the same technique it uses in Adobe Photoshop with its Firefly generative AI. “Crop” a photo outside of its boundaries and Photoshop will extend it. You can also remove or replace an element in a photo, such as swapping a crown for a baseball cap.
Generative AI within Premiere Pro will do the same thing, Adobe said. In the demonstration video below, Premiere allows an editor to select an object in a scene (via its Magic Lasso) and remove it from the entire clip, across multiple frames. Likewise, an editor will be able to use the AI-powered Generative Fill to “extend” a scene. In the demonstration, the extension is applied to a calm, focused shot of a single individual, and that is presumably where it will be most effective; extending complex motion or transitions will be far more difficult. It’s not clear whether Adobe has that capability yet.
Adobe / YouTube
But Adobe is also showing off its own text-to-video generation tools, which it’s calling an extension of Firefly. Users will be able to create short video clips from a text prompt. Adobe is also demonstrating integration with AI video makers whose tools compete with its own: video editors and content creators will be able to stitch in clips from Sora, OpenAI’s text-to-video generative AI tool, as well as from Runway’s AI video generator. They’ll be treated just like any other piece of footage, Adobe said.
Adobe also said that it plans to treat AI-generated footage, whether created with its own tools or stitched in from third-party AI clips, the same way it treats AI-generated static images: by identifying it as AI generated via its “Content Credentials” label, which is stored in the file’s metadata.
Finally, AI will be used to improve audio editing: it will identify clips as video, audio, or other data and launch the appropriate tools, and it will let editors drag clip handles to create audio fades, Adobe said.
Unfortunately, Adobe isn’t saying when these new AI features will arrive.