MarkTechPost@AI · 18 hours ago
MDM-Prime: A generalized Masked Diffusion Models (MDMs) Framework that Enables Partially Unmasked Tokens during Sampling

MDM-Prime is an improved masked diffusion model (MDM) that raises data-generation efficiency through a partial masking scheme. When generating discrete data, conventional MDMs spend many steps making no change to the sequence, wasting computation. MDM-Prime decomposes tokens into sub-tokens, allowing a token to occupy intermediate states, which reduces redundant computation and improves prediction quality. Experiments show that MDM-Prime performs strongly on both text and image generation: it achieves lower perplexity on the OpenWebText dataset and competitive FID scores on CIFAR-10 and ImageNet-32, surpassing existing models without resorting to autoregressive techniques.

💡 **Limitations of MDMs:** When generating discrete data, traditional masked diffusion models (MDMs) repeatedly process identical inputs, leaving a large share of steps (up to 37%) that do not update the sequence at all. This wasted computation has motivated researchers to develop more efficient sampling methods.

✨ **The core of Prime:** Prime is a partial masking scheme that, unlike conventional binary masking, lets a token take on intermediate states by masking only sub-parts of its encoding. This helps the model reveal token information gradually, improving prediction quality and reducing redundant computation.

🚀 **Architectural improvements in MDM-Prime:** MDM-Prime modifies the MDM to apply partial masking at the sub-token level. It decomposes each token into a sequence of sub-tokens using an invertible function, which lets the model pass through smoother intermediate states during diffusion and reduces idle steps. The reverse process is trained with a variational bound over these sub-tokens.

🥇 **Experimental results:** On text generation, MDM-Prime delivers markedly lower perplexity and fewer idle steps on the OpenWebText dataset, especially at sub-token granularity ℓ ≥ 4. On image generation, MDM-Prime with ℓ = 2 achieves better sample quality and lower FID scores on CIFAR-10 and ImageNet-32 while being more efficient. It also performs well on conditional image generation, producing coherent outputs by predicting the masked sub-tokens of partially observed images.

Introduction to MDMs and Their Inefficiencies

Masked Diffusion Models (MDMs) are powerful tools for generating discrete data, such as text or symbolic sequences, by gradually unmasking tokens over time. At every step, each token is either fully masked or fully unmasked; there is no intermediate state. As a result, many steps in the reverse process leave the sequence unchanged, so the model repeatedly processes identical inputs and wastes computation: up to 37% of steps may not update the sequence at all. This inefficiency highlights a key limitation of current MDMs and motivates sampling methods that minimize idle steps and make every generation step count.
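To make the idle-step problem concrete, here is a toy Python simulation (our illustration, not the paper's experiment) of an MDM reverse process under a linear masking schedule, where each still-masked position is revealed with the standard carry-over probability. The function name `simulate_idle_steps` and all sizes are hypothetical choices:

```python
import random

def simulate_idle_steps(seq_len=32, num_steps=1024, trials=100, seed=0):
    """Toy reverse-process simulation under a linear schedule alpha(t) = 1 - t.
    At the step from noise level t down to s, each still-masked position is
    independently revealed with probability (t - s) / t. We count the fraction
    of steps in which nothing changes (idle steps)."""
    rng = random.Random(seed)
    idle_fracs = []
    for _ in range(trials):
        masked = seq_len  # number of still-masked positions
        idle = 0
        for i in range(num_steps):
            t = 1.0 - i / num_steps        # current noise level (t > 0 here)
            s = 1.0 - (i + 1) / num_steps  # next noise level
            p_reveal = (t - s) / t         # per-position unmasking probability
            revealed = sum(rng.random() < p_reveal for _ in range(masked))
            if revealed == 0:
                idle += 1                  # the sequence did not change
            masked -= revealed
        idle_fracs.append(idle / num_steps)
    return sum(idle_fracs) / trials

print(f"average idle-step fraction: {simulate_idle_steps():.2f}")
```

With many more sampling steps than tokens, most steps reveal nothing, which is exactly the waste MDM-Prime targets.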

Evolution and Enhancements in MDMs

The concept of discrete diffusion models originated from early work on binary data, later expanding to practical applications such as text and image generation through various noise strategies. Recent efforts have refined MDMs by simplifying training objectives and exploring alternative latent representations. Enhancements include blending autoregressive methods with MDMs, guiding sampling with energy-based models, and selectively remasking tokens to boost output quality. Other studies have focused on distillation to reduce the number of sampling steps efficiently. Additionally, some methods use continuous noise (e.g., Gaussian) to model discrete data; however, approaches like Bit Diffusion struggle with intractable likelihoods due to their reliance on quantization.

Introducing Prime: A Partial Masking Scheme

Researchers from the Vector Institute, NVIDIA, and National Taiwan University introduced a method called Partial Masking (Prime) to enhance MDMs. Unlike traditional binary masking, Prime lets tokens assume intermediate states by masking sub-parts of a token’s encoded form. This allows the model to gradually reveal token information, improving prediction quality and reducing redundant computation. The enhanced model, MDM-Prime, achieves strong results, with lower perplexity on text (15.36 on OpenWebText) and competitive FID scores on image tasks (3.26 on CIFAR-10, 6.98 on ImageNet-32), outperforming previous MDMs and autoregressive models without utilizing autoregressive techniques.
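A quick back-of-the-envelope count (our illustration, assuming each sub-token of a token can be masked independently) shows where the intermediate states come from: a token encoded as ℓ sub-tokens has 2^ℓ masking patterns, all but two of which are partially masked.

```python
from itertools import product

# Count masking patterns for a token encoded as `ell` sub-tokens,
# assuming each sub-token is independently either revealed or masked.
ell = 4
patterns = list(product(("revealed", "masked"), repeat=ell))
# Everything except "all revealed" and "all masked" is an intermediate state.
intermediate = [p for p in patterns if len(set(p)) > 1]
print(len(patterns), len(intermediate))  # 16 patterns, 14 of them intermediate
```

A standard MDM (ℓ = 1) has no such intermediate states, which is why its reverse process so often leaves the sequence unchanged.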

Architecture and Training Improvements

MDM-Prime is a modified masked diffusion model that introduces partial masking at the sub-token level. Instead of treating each token as a single unit, it decomposes each token into a sequence of sub-tokens using an invertible function. This lets the model pass through smoother intermediate states during diffusion, reducing the number of idle steps. The reverse process is trained using a variational bound over these sub-tokens. To capture dependencies among sub-tokens and avoid invalid outputs, the model learns a joint probability distribution while filtering out inconsistent sequences. The architecture includes an efficient encoder-decoder design optimized for sub-token processing.
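One natural choice of invertible function is a base-b positional encoding. The sketch below is hypothetical code (names such as `make_codec` are ours, not from the authors' implementation); it maps a token id to ℓ base-b digits and back, and shows how masking a subset of digits yields a partially masked token:

```python
import math

MASK = -1  # sentinel marking a masked sub-token

def make_codec(vocab_size: int, ell: int):
    """Build an invertible map between a token id and `ell` base-b sub-tokens,
    with b the smallest base such that b ** ell covers the vocabulary."""
    base = math.ceil(vocab_size ** (1.0 / ell))
    while base ** ell < vocab_size:  # guard against float rounding
        base += 1

    def encode(token: int) -> list[int]:
        digits = []
        for _ in range(ell):
            digits.append(token % base)
            token //= base
        return digits[::-1]  # most-significant digit first

    def decode(digits: list[int]) -> int:
        token = 0
        for d in digits:
            token = token * base + d
        return token

    return base, encode, decode

base, encode, decode = make_codec(vocab_size=50257, ell=4)  # GPT-2-sized vocab
subs = encode(1234)
assert decode(subs) == 1234                   # the map is invertible
half_masked = [subs[0], MASK, subs[2], MASK]  # a partially masked state
print(base, subs, half_masked)
```

Because b^ℓ can exceed the vocabulary size, some sub-token combinations decode to no valid token, which is why the model must filter out inconsistent sequences as described above.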

Empirical Evaluation on Text and Image Tasks

The study evaluates MDM-Prime on both text and image generation tasks. On text generation using the OpenWebText dataset, MDM-Prime shows significant improvements in perplexity and idle step ratio, especially when the sub-token granularity ℓ ≥ 4. It outperforms previous methods without relying on autoregressive strategies and generalizes well across various zero-shot benchmarks. For image generation on CIFAR-10 and ImageNet-32, MDM-Prime with ℓ = 2 achieves better sample quality and lower FID scores compared to baselines, while being more efficient. It also performs well in conditional image generation tasks, producing coherent outputs by predicting masked sub-tokens from partially observed images.

Conclusion and Broader Implications

In conclusion, scientific understanding has evolved from viewing atoms as the smallest units of matter to recognizing more fundamental particles, as evidenced by discoveries such as the electron and the Standard Model. Similarly, in generative modeling, the study introduces Prime, a method that breaks discrete data tokens down into finer sub-token components. Built on MDMs, Prime improves efficiency by allowing tokens to exist in intermediate states, avoiding repeated computation on unchanged inputs. This enables more detailed and expressive modeling. The approach outperforms previous methods in both text generation (with a perplexity of 15.36) and image generation (achieving competitive FID scores), offering a powerful tool for precise data generation.


Check out the Paper, Project Page, and GitHub Page. All credit for this research goes to the researchers of this project.
