cs.AI updates on arXiv.org, July 21, 12:06
GIFT: Gradient-aware Immunization of diffusion models against malicious Fine-Tuning with safe concepts retention

This article introduces GIFT, a gradient-aware immunization technique designed to defend diffusion models against malicious fine-tuning while preserving their ability to generate safe content. By framing immunization as a bi-level optimization problem, GIFT degrades the model's capacity to represent harmful concepts while maintaining its performance on safe data; experiments show the method is markedly effective at resisting malicious fine-tuning attacks.

arXiv:2507.13598v1 Announce Type: cross Abstract: We present GIFT: a Gradient-aware Immunization technique to defend diffusion models against malicious Fine-Tuning while preserving their ability to generate safe content. Existing safety mechanisms like safety checkers are easily bypassed, and concept erasure methods fail under adversarial fine-tuning. GIFT addresses this by framing immunization as a bi-level optimization problem: the upper-level objective degrades the model's ability to represent harmful concepts using representation noising and maximization, while the lower-level objective preserves performance on safe data. GIFT achieves robust resistance to malicious fine-tuning while maintaining safe generative quality. Experimental results show that our method significantly impairs the model's ability to re-learn harmful concepts while maintaining performance on safe content, offering a promising direction for creating inherently safer generative models resistant to adversarial fine-tuning attacks.
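To make the bi-level framing concrete, here is a minimal NumPy sketch on a toy linear model. It is not the paper's method: the function name, the linear model, the noised-target term standing in for "representation noising," and the single combined gradient step are all illustrative assumptions. The idea shown is only that one update simultaneously pushes outputs on harmful data toward noise while descending the loss on safe data.

```python
import numpy as np

def immunization_step(theta, x_harm, y_harm, x_safe, y_safe,
                      lr=0.01, lam=1.0, noise_scale=0.1, rng=None):
    """One toy immunization update for a linear model y = x @ theta.

    Upper level (illustrative): corrupt the harmful mapping by pulling
    outputs on harmful inputs toward randomly noised targets.
    Lower level: preserve utility by descending the loss on safe data.
    All names and the combined-gradient form are assumptions, not the
    paper's actual objective.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Noised targets for harmful data (stand-in for representation noising).
    noised = noise_scale * rng.standard_normal(y_harm.shape)
    # Gradient of 0.5 * mean ||x @ theta - y||^2 w.r.t. theta.
    g_noise = x_harm.T @ (x_harm @ theta - noised) / len(x_harm)
    g_safe = x_safe.T @ (x_safe @ theta - y_safe) / len(x_safe)
    # Single descent step on: safe loss + lam * noise-matching loss.
    return theta - lr * (g_safe + lam * g_noise)
```

Run for a few hundred steps and the model fits the safe mapping while the harmful mapping is driven toward noise; in the real setting these gradients would come from a diffusion model's training loss, not a linear regression.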


Related tags

diffusion models, malicious fine-tuning, GIFT, immunization techniques, safe generation