The Verge - Artificial Intelligence | October 14, 2024
Adobe’s AI video model is here, and it’s already inside Premiere Pro

Adobe is moving into generative AI video with the Firefly Video Model, which powers several new tools: Generative Extend in Premiere Pro for fine-tuning video clips, plus web-based Text-to-Video and Image-to-Video tools that generate footage from text and image prompts, though with limits on clip length, image quality, and more. Adobe also stresses that the tools are commercially safe and can embed Content Credentials. They were announced at Adobe's MAX conference.

🎬 Adobe's Firefly Video Model can generate a range of styles. The Generative Extend tool in Premiere Pro, now in beta, lengthens video clips and is best suited to small tweaks; extensions are generated at 720p or 1080p and 24 FPS, and the tool also works on audio edits, within certain limits.

📝 The web-based Text-to-Video tool works like other video generators: users type a text description and get a clip back. It can emulate a variety of styles, and generated clips can be refined further with camera-control options such as camera angle, motion, and shooting distance.

🖼 Image-to-Video goes a step further, letting users add a reference image alongside a text prompt for more control over the result. It can be used to create b-roll or to help visualize reshoots, though the output still shows glitches such as wobbling cables and shifting backgrounds.

Adobe’s Firefly Video Model can generate a range of styles, including ‘realism’ (as pictured). | Image: Adobe

Adobe is making the jump into generative AI video. The company’s Firefly Video Model, which has been teased since earlier this year, is launching today across a handful of new tools, including some right inside Premiere Pro that will allow creatives to extend footage and generate video from still images and text prompts.

The first tool — Generative Extend — is launching in beta for Premiere Pro. It can be used to extend the end or beginning of footage that’s slightly too short, or make adjustments mid-shot, such as to correct shifting eye-lines or unexpected movement.

Clips can only be extended by two seconds, so Generative Extend is only really suitable for small tweaks, but that could replace the need to retake footage to correct tiny issues. Extended clips can be generated at either 720p or 1080p at 24 FPS. It can also be used on audio to help smooth out edits, albeit with limitations. It’ll extend sound effects and ambient “room tone” by up to ten seconds, for example, but not spoken dialog or music.

Image: Adobe
The new Generative Extend tool in Premiere Pro can fill gaps in footage that would ordinarily require a full reshoot, such as adding a few extra steps to this person walking next to a car.
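For a concrete sense of those limits, here is a minimal Python sketch that encodes the constraints described above. Everything in it (the function name, the constants) is illustrative; Adobe exposes Generative Extend through the Premiere Pro UI, not a public programmatic interface.

```python
# Illustrative only: a small helper that encodes the Generative Extend
# limits reported above. These names are NOT Adobe's API.

MAX_VIDEO_EXTEND_S = 2.0     # video can be extended by up to two seconds
MAX_AMBIENT_EXTEND_S = 10.0  # sound effects / "room tone": up to ten seconds
ALLOWED_RESOLUTIONS = {(1280, 720), (1920, 1080)}  # 720p or 1080p
FRAME_RATE = 24              # extensions render at 24 FPS

def validate_extend_request(kind: str, seconds: float,
                            resolution: tuple[int, int] | None = None) -> None:
    """Raise ValueError for requests outside the reported limits."""
    if kind == "video":
        if seconds > MAX_VIDEO_EXTEND_S:
            raise ValueError("clips can only be extended by two seconds")
        if resolution not in ALLOWED_RESOLUTIONS:
            raise ValueError("extended clips render at 720p or 1080p only")
    elif kind in ("sound_effect", "room_tone"):
        if seconds > MAX_AMBIENT_EXTEND_S:
            raise ValueError("ambient audio extends by at most ten seconds")
    elif kind in ("dialog", "music"):
        raise ValueError(f"Generative Extend does not extend {kind}")
    else:
        raise ValueError(f"unknown media kind: {kind!r}")

validate_extend_request("video", 1.5, resolution=(1920, 1080))  # OK
validate_extend_request("room_tone", 8.0)                       # OK
```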

Two other video generation tools are launching on the web. Adobe’s Text-to-Video and Image-to-Video tools, first announced in September, are now rolling out as a limited public beta in the Firefly web app.

Text-to-Video functions similarly to other video generators like Runway and OpenAI’s Sora — users just need to plug in a text description for what they want to generate. It can emulate a variety of styles like regular “real” film, 3D animation, and stop motion, and the generated clips can be further refined using a selection of “camera controls” that simulate things like camera angles, motion, and shooting distance.

Image: Adobe
This is what some of the camera control options look like to adjust the generated output.
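As a rough illustration of the knobs Adobe describes (a text prompt, a style, and the camera controls), here is a hypothetical request payload. Adobe has not published a schema for the Firefly web app, so every field name below is an assumption; only the values reflect details from this article.

```python
# Hypothetical Text-to-Video job. The field names are assumptions,
# not Adobe's published schema.

text_to_video_request = {
    "prompt": "a stop-motion paper boat drifting down a rainy street gutter",
    "style": "stop_motion",           # e.g. "realism", "3d_animation", ...
    "camera": {                       # the article's "camera controls"
        "angle": "low",               # simulated camera angle
        "motion": "slow_dolly_in",    # simulated camera motion
        "shot_distance": "close_up",  # simulated shooting distance
    },
    "duration_seconds": 5,            # clips currently top out at 5 seconds
    "resolution": "720p",             # ... at 720p
    "fps": 24,                        # ... and 24 frames per second
}
```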

Image-to-Video goes a step further by letting users add a reference image alongside a text prompt to provide more control over the results. Adobe suggests this could be used to make b-roll from images and photographs, or help visualize reshoots by uploading a still from an existing video. The before and after example below shows this isn’t really capable of replacing reshoots directly, however, as several errors like wobbling cables and shifting backgrounds are visible in the results.

Video: Adobe
Here’s the original clip...
Video: Adobe
...and this is what it looks like when Image-to-Video ‘remakes’ the footage. Notice how the yellow cable is wobbling for no reason?
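An Image-to-Video request presumably adds one more input, the reference image, on top of the same prompt and limits. Again, this is a hypothetical sketch; the field names are not Adobe's.

```python
# Hypothetical Image-to-Video payload: the same fields as Text-to-Video,
# plus a reference image that steers the output. Names are assumptions.

image_to_video_request = {
    "prompt": "the camera slowly pans across the cluttered workbench",
    "reference_image": "still_from_existing_clip.png",  # placeholder path
    "duration_seconds": 5,
    "resolution": "720p",
    "fps": 24,
}
```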

You won’t be making entire movies with this tech any time soon, either. The maximum length of Text-to-Video and Image-to-Video clips is currently five seconds, and the quality tops out at 720p and 24 frames per second. By comparison, OpenAI says that Sora can generate videos up to a minute long “while maintaining visual quality and adherence to the user’s prompt” — but that’s not available to the public yet despite being announced months before Adobe’s tools.

Video: Adobe
The model is restricted to producing clips just a few seconds long, like this example of an AI-generated baby dragon scrambling around in magma.

Text-to-Video, Image-to-Video, and Generative Extend all take about 90 seconds to generate, but Adobe says it’s working on a “turbo mode” to cut that down. And restricted as it may be, Adobe says the tools powered by its AI video model are “commercially safe” because they’re trained on content that the creative software giant was permitted to use. Given that models from other providers like Runway are being scrutinized for allegedly being trained on thousands of scraped YouTube videos — or in Meta’s case, maybe even your personal videos — commercial viability could be a deal clincher for some users.

One other benefit is that videos created or edited using Adobe’s Firefly video model can be embedded with Content Credentials to help disclose AI usage and ownership rights when published online. It’s not clear when these tools will be out of beta, but at least they’re publicly available — which is more than we can say for OpenAI’s Sora, Meta’s Movie Gen, and Google’s Veo generators.
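Content Credentials are built on the open C2PA standard, so an embedded manifest can be inspected with generic C2PA tooling. Here’s a minimal Python sketch using the C2PA project’s open-source c2patool CLI, assuming the tool is installed and on PATH and that the clip actually carries a manifest; the file name is a placeholder.

```python
# Sketch: inspecting a clip's embedded Content Credentials manifest
# with c2patool, the open-source CLI from the C2PA project.

import json
import subprocess

result = subprocess.run(
    ["c2patool", "generated_clip.mp4"],  # prints the manifest store as JSON
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
# The active manifest records what produced the asset and the
# assertions attached when it was exported.
print(report.get("active_manifest"))
```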

The AI video launches were announced today at Adobe’s MAX conference, where the company is also introducing a number of other AI-powered features across its creative apps.
