The Verge · Artificial Intelligence · July 17, 21:00
Adobe’s new AI tool turns silly noises into realistic audio effects

Adobe has added two new capabilities to its Firefly AI platform aimed at reshaping generative audio and video creation. The first is a Generate Sound Effects tool, which lets users create custom sound effects by imitating a sound (such as a rocket launch) or recording a specific noise (such as hoofbeats), pairing it with a text description, and timing the result precisely to their footage. The feature builds on the Project Super Sonic experiment and supports a range of impact and ambient sounds. The second is a Composition Reference option for the Firefly text-to-video generator: users can upload a reference video to guide the composition of the generated clip, with support for keyframe cropping and style presets such as anime and claymation. Some presets still need polish, but Adobe is also working to integrate third-party AI models and to consolidate its lead in creative software.

🔊 **A new way to generate sound effects:** Adobe's new Generate Sound Effects tool lets users create custom audio by recording an imitation of a sound or typing a text description. For example, a user can record a "whoosh" to stand in for a rocket launch, or record "clip clop" noises timed to a horse in their footage and pair them with the description "hooves on concrete"; the AI then generates several sound-effect options to choose from, making audio creation far more flexible and convenient.

🎬 **Composition reference for video creation:** The Firefly text-to-video generator gains a Composition Reference feature: users can upload a reference video and the generated clip will mirror its composition and layout. In addition, keyframe cropping lets users upload the first and last frames of a shot and have the AI generate the transition between them, giving much finer control over the visual result and enabling more personalized output.

🎨 **Style presets and what comes next:** The update also introduces style presets such as anime and claymation that can be applied to generated video with a click. Some presets still have room to improve (the claymation option currently looks more like early 3D animation), but Adobe says it will support more third-party AI models and may extend these controls and presets to them. That signals Adobe is positioning itself to keep up with the rapid pace of generative AI and stay competitive in creative software.

⏱️ **Matching effects to the timeline:** The Generate Sound Effects interface resembles a video-editing timeline, so users can line up the effects they create with uploaded footage. They can play the video and record a matching sound in real time, for example hoofbeats in step with a walking horse, keeping the audio locked to the picture for a more convincing, immersive result.

*Image caption: In this example, you’d record yourself mimicking the sound of a rocket taking off. Have at it.*

Adobe is launching new generative AI filmmaking tools that provide fun ways to create sound effects and control generated video outputs. Alongside the familiar text prompts that typically allow you to describe what Adobe’s Firefly AI models should make or edit, users can now use onomatopoeia-like voice recordings to generate custom sounds, and use reference footage to guide the movements in Firefly-generated videos.

The Generate Sound Effects tool that’s launching in beta on the Firefly app can be used with recorded and generated footage, and provides greater control over audio generation than Google’s Veo 3 video tool. The interface resembles a video editing timeline and allows users to match the effects they create in time with uploaded footage. For example, users can play a video of a horse walking along a road and simultaneously record “clip clop” noises in time with its hoof steps, alongside a text description that says “hooves on concrete.” The tool will then generate four sound effect options to choose from.
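To make that workflow concrete, here is a minimal sketch of the inputs involved; the `SoundEffectRequest` object, its field names, and the `build_request` helper are illustrative assumptions only and are not Adobe's product code or any public API. It simply bundles the three things the article describes: a vocal recording, a text description, and a timeline position for syncing against the footage.

```python
# Hypothetical sketch of the "voice recording + text prompt" workflow described
# above. None of these names come from Adobe's tools or APIs; they are
# illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class SoundEffectRequest:
    voice_clip_path: str      # user's onomatopoeia recording, e.g. "clip clop" noises
    text_prompt: str          # e.g. "hooves on concrete"
    video_path: str           # footage the effect should be timed against
    start_time_s: float       # where on the timeline the recording began
    num_variations: int = 4   # the tool returns four options to choose from


def build_request(voice_clip: str, prompt: str, video: str, start_s: float) -> SoundEffectRequest:
    """Bundle the inputs such a tool would need: the vocal timing reference,
    a text description, and the timeline position that keeps the generated
    effect in sync with the video."""
    return SoundEffectRequest(voice_clip, prompt, video, start_s)


if __name__ == "__main__":
    req = build_request("clip_clop_take1.wav", "hooves on concrete",
                        "horse_walking.mp4", start_s=2.5)
    print(req)
```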

This builds on the Project Super Sonic experiment that Adobe showed off at its Max event in October. It doesn’t work for speech, but does support the creation of impact sounds like twigs snapping, footsteps, zipper effects, and more, as well as atmospheric noises like nature sounds and city ambience.

New advanced controls are also coming to the Firefly Text-to-Video generator. Composition Reference allows users to upload a video alongside their text prompt to mirror the composition of that footage in the generated video, which should make it easier to achieve specific results, compared to repeatedly inputting text descriptions alone. Keyframe cropping will let users crop and upload images of the first and last frames that Firefly can use to generate video between, and new style presets provide a selection of visual styles that users can quickly select, including anime, vector art, claymation, and more.
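As a rough illustration of how those controls fit together, here is a minimal sketch in the same hypothetical style; `TextToVideoJob` and its fields are assumptions for clarity, not Adobe's actual Firefly API. It groups the text prompt with the optional composition-reference video, the first and last keyframe images, and a style preset.

```python
# Hypothetical sketch of the advanced Text-to-Video controls described above
# (composition reference, keyframe cropping, style presets). Field names are
# illustrative assumptions, not Adobe's actual Firefly API.

from dataclasses import dataclass
from typing import Optional

STYLE_PRESETS = {"anime", "vector art", "claymation"}  # subset named in the article


@dataclass
class TextToVideoJob:
    prompt: str
    composition_reference: Optional[str] = None  # video whose framing/motion is mirrored
    first_frame: Optional[str] = None            # cropped image used as the opening keyframe
    last_frame: Optional[str] = None             # cropped image used as the closing keyframe
    style_preset: Optional[str] = None           # one of STYLE_PRESETS (Firefly video model only)

    def validate(self) -> None:
        if self.style_preset and self.style_preset not in STYLE_PRESETS:
            raise ValueError(f"unknown style preset: {self.style_preset}")


if __name__ == "__main__":
    job = TextToVideoJob(
        prompt="a horse walking down a rainy city street at dusk",
        composition_reference="reference_shot.mp4",
        first_frame="frame_first.png",
        last_frame="frame_last.png",
        style_preset="claymation",
    )
    job.validate()
    print(job)
```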

These style presets are only available to use with Adobe’s own Firefly video AI model. The results leave something to be desired, if the live demo I saw was any indication: the “claymation” option just looked like early 2000s 3D animation. But Adobe is continuing to add support for rival AI models within its own tools, and Adobe’s Generative AI lead Alexandru Costin told The Verge that similar controls and presets may be available to use with third-party AI models in the future. That suggests that Adobe is vying to keep its place at the top of the creative software food chain as AI tools grow in popularity, even if it lags behind the likes of OpenAI and Google in the generative models themselves.
