MarkTechPost@AI · 2 days ago, 01:00
Stanford Researchers Propose FramePack: A Compression-based AI Framework to Tackle Drifting and Forgetting in Long-Sequence Video Generation Using Efficient Context Management and Sampling

Researchers at Stanford University have developed a new framework called FramePack to tackle the "drifting" and "forgetting" problems in long-sequence video generation. By hierarchically compressing input frames and improving the sampling strategy, FramePack achieves efficient, high-quality video generation. The framework keeps the Transformer context length fixed, enabling efficient scaling. It compresses earlier frames with a geometric progression and applies 3D patchify kernels, substantially reducing computational cost. Its anti-drifting sampling and inverted temporal sampling markedly improve video quality, and the module integrates easily into existing models.

🎬 At FramePack's core is a fixed Transformer context length, which lets the model handle longer video sequences without increasing compute cost.

📐 FramePack compresses earlier frames with a geometric progression (λ = 2), sharply reducing the context length even for large numbers of input frames.

🧱 FramePack applies 3D patchify kernels such as (2, 4, 4), (4, 8, 8), and (8, 16, 16), each trained with independent parameters to stabilize learning.

🔄 FramePack uses an anti-drifting sampling method that exploits bi-directional context and early endpoint generation to improve overall video quality.

🖼️ Inverted temporal sampling excels at image-to-video generation, anchoring on the high-quality user-supplied frame to improve results.

🚀 FramePack enables image-diffusion-scale batch sizes during training, yielding efficient learning and higher throughput.

➕ FramePack can be integrated into existing models such as HunyuanVideo and Wan without full retraining.

✂️ FramePack offers several tail-handling strategies (e.g., global pooling, minimal inclusion) with negligible impact on visual fidelity.

Video generation, a branch of computer vision and machine learning, focuses on creating sequences of images that simulate motion and visual realism over time. It requires models to maintain coherence across frames, capture temporal dynamics, and generate new visuals conditioned on prior frames or inputs. This domain has seen rapid advances, especially with the integration of deep learning techniques such as diffusion models and transformers. These models have empowered systems to produce increasingly longer and higher-quality video sequences. However, generating coherent frames across extended sequences remains computationally intensive and prone to quality degradation due to memory limitations and accumulated prediction errors.

A major challenge in video generation is maintaining visual consistency while minimizing computational overhead. As frames are generated sequentially, any error in earlier frames tends to propagate, leading to noticeable visual drift in longer sequences. Simultaneously, models struggle to retain memory of initial frames, causing inconsistencies in motion and structure, often referred to as the forgetting problem. Efforts to address one issue tend to worsen the other. Increasing memory depth enhances temporal coherence but also accelerates the spread of errors. Reducing dependence on prior frames helps curb error accumulation but increases the likelihood of inconsistency. Balancing these conflicting requirements is a fundamental obstacle in next-frame prediction tasks.

Various techniques have emerged to mitigate forgetting and drifting. Noise scheduling and augmentation methods adjust the input conditions to modulate the influence of past frames, as seen in frameworks like DiffusionForcing and RollingDiffusion. Anchor-based planning methods and guidance using history frames have also been tested. A range of architectural techniques further target efficiency: linear and sparse attention mechanisms, low-bit computation, and distillation all help reduce resource demands. Long video generation frameworks like Phenaki, NUWA-XL, and StreamingT2V introduce structural changes or novel generation paradigms to extend temporal coherence. Despite these innovations, the field still lacks a unified and computationally efficient approach that can reliably balance memory and error control.

Researchers at Stanford University introduced a new architecture called FramePack to address these interlinked challenges. This structure hierarchically compresses input frames based on their temporal importance, ensuring that recent frames receive higher fidelity representation while older ones are progressively downsampled. By doing so, the method maintains a fixed transformer context length regardless of the video’s duration. This effectively removes the context length bottleneck and allows for efficient scaling without exponential growth in computation. In parallel, FramePack incorporates anti-drifting sampling techniques that utilize bi-directional context by generating anchor frames first, particularly the beginning and end of a sequence, before interpolating the in-between content. Another variant even reverses the generation order, starting from the last known high-quality frame and working backward. This inverted sampling proves particularly effective in scenarios such as image-to-video generation, where a static image is used to generate a full motion sequence.
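The two sampling orders described above can be sketched as frame-index schedules. This is an illustrative simplification only: it schedules which frame indices are generated when, and omits the actual diffusion sampler and conditioning logic from the paper.

```python
# Illustrative sketch of the two generation orders described above.
# Only frame *indices* are scheduled here; the diffusion sampler and
# conditioning logic from the paper are omitted.

def anti_drifting_order(n: int) -> list[int]:
    """Endpoints (anchor frames) first, then the in-between frames."""
    if n <= 2:
        return list(range(n))
    return [0, n - 1] + list(range(1, n - 1))

def inverted_order(n: int) -> list[int]:
    """Backward generation: start at the last known high-quality frame."""
    return list(range(n - 1, -1, -1))

print(anti_drifting_order(6))  # [0, 5, 1, 2, 3, 4]
print(inverted_order(4))       # [3, 2, 1, 0]
```

Generating the endpoints first gives every in-between frame bi-directional context, which is what curbs drift; the inverted variant is the image-to-video case where the known endpoint is the user's input image.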

The FramePack design is built around a prioritized compression system that limits the transformer’s total context length. In standard video diffusion models like Hunyuan or Wan, each 480p frame generates approximately 1560 tokens of context. When predicting the next frame using a Diffusion Transformer (DiT), the total context length increases linearly with the number of input and output frames. For example, with 100 input frames and one predicted frame, the context length could exceed 157,000 tokens, which becomes computationally impractical.
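The linear growth above is easy to verify with the article's own numbers; the per-frame token count of 1560 comes from the text, and the rest is plain arithmetic:

```python
TOKENS_PER_FRAME = 1560  # ~tokens per 480p frame, per the article

def naive_context_length(num_input_frames: int, num_predicted: int = 1) -> int:
    """Context length in a vanilla DiT setup grows linearly with frame count."""
    return (num_input_frames + num_predicted) * TOKENS_PER_FRAME

print(naive_context_length(100))  # 101 * 1560 = 157,560 tokens
```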

FramePack addresses this by applying a progressive compression schedule based on frame importance. More recent frames are considered more relevant and are allocated higher resolution, while older frames are increasingly downsampled. The compression follows a geometric progression controlled by a parameter, typically set to 2, which reduces the context length for each earlier frame by half. For instance, the most recent frame may use full resolution, the next one half, then a quarter, and so on. This design ensures that the total context length stays within a fixed limit, no matter how many frames are input.
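A minimal sketch of this schedule, modeling the per-frame token budget directly: note that real FramePack realizes the compression through progressively coarser patchify kernels rather than token truncation, so this is a simplified bound on the context length, with λ = 2 and the 1560-token figure taken from the text.

```python
TOKENS_FULL = 1560  # tokens for a full-resolution 480p frame (from the article)

def framepack_context_length(num_frames: int, lam: int = 2) -> int:
    """Total context when frame i (0 = most recent) keeps ~1/lam**i of its tokens.

    Simplified model: real FramePack realizes this compression through
    progressively coarser patchify kernels, not token truncation.
    """
    return sum(max(1, TOKENS_FULL // lam**i) for i in range(num_frames))

print(framepack_context_length(1))    # 1560
print(framepack_context_length(100))  # 3205 -- barely twice a single frame
```

Because the geometric series converges, the total stays near 2 × 1560 tokens no matter how many frames are fed in, versus 157,560 tokens for the naive linear scheme at 100 frames.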

Compression is implemented using 3D patchifying kernels, such as (2, 4, 4), (4, 8, 8), and (8, 16, 16), which control how frames are broken into smaller patches before processing. These kernels are trained with independent parameters to stabilize learning. For cases where the input sequence is extremely long, low-importance tail frames are either dropped, minimally included, or globally pooled to avoid unnecessary overhead. This allows FramePack to manage videos of arbitrary length efficiently while maintaining high model performance.
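The following sketch shows how a 3D patchify kernel (kt, kh, kw) determines the token count of a frame; the latent shape used here is hypothetical, chosen only so the divisions come out exact.

```python
# How a 3D patchify kernel (kt, kh, kw) sets the token count for a frame.
# The latent shape below is hypothetical, for illustration only.

def token_count(latent_shape, kernel):
    t, h, w = latent_shape
    kt, kh, kw = kernel
    return (t // kt) * (h // kh) * (w // kw)

latent = (8, 128, 128)  # (time, height, width) of a hypothetical frame latent
for kernel in [(2, 4, 4), (4, 8, 8), (8, 16, 16)]:
    print(kernel, token_count(latent, kernel))
# (2, 4, 4) -> 4096 tokens, (4, 8, 8) -> 512, (8, 16, 16) -> 64:
# each step up in kernel size shrinks the context by 8x,
# which is how older frames are progressively downsampled.
```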

Performance metrics confirm the practical value of FramePack. When integrated into pretrained diffusion models like HunyuanVideo and Wan, FramePack reduced the memory usage per step while enabling larger batch sizes, up to the scale commonly used in image diffusion training. The anti-drifting techniques substantially improved visual quality. By reducing the diffusion scheduler’s aggressiveness and balancing the shift timesteps, the models showed fewer artifacts and greater frame-to-frame coherence. The inverted sampling approach, particularly, resulted in better approximation of known frames, enabling high-fidelity generation when a target image is known. These improvements occurred without additional training from scratch, demonstrating the adaptability of the FramePack module as a plug-in enhancement to existing architectures.

This research thoroughly examines and addresses the core difficulties of next-frame video generation. The researchers developed FramePack, an approach that applies progressive input compression and modified sampling strategies to ensure scalable, high-quality video generation. Through fixed context lengths, adaptive patchifying, and innovative sampling order, FramePack succeeds in preserving both memory and visual clarity over long sequences. Its modular integration into pretrained models highlights its practical utility and future potential across varied video generation applications.

Check out the Paper and GitHub Page.


