cs.AI updates on arXiv.org, July 8, 13:54
From 2:4 to 8:16 sparsity patterns in LLMs for Outliers and Weights with Variance Correction
This paper examines 8:16 semi-structured sparsity for compressing large language models, showing that it can surpass the performance threshold and that, compared with 2:4 sparsity, it offers greater flexibility at a small storage cost. It also applies structured sparse patterns to salient weights, and shows that simple techniques such as variance correction and SmoothQuant can improve sparse-model performance.

arXiv:2507.03052v1 Announce Type: cross Abstract: As large language models (LLMs) grow in size, efficient compression techniques like quantization and sparsification are critical. While quantization maintains performance at reduced precision, structured sparsity methods such as N:M sparsification often fall short due to limited flexibility and sensitivity to outlier weights. We explore 8:16 semi-structured sparsity, demonstrating its ability to surpass the Performance Threshold, the point at which a compressed model matches the accuracy of its uncompressed or smaller counterpart under equivalent memory constraints. Compared to 2:4 sparsity, 8:16 offers greater flexibility with minimal additional storage overhead (0.875 vs. 0.75 bits/element). We also apply structured sparse patterns to salient weights, showing that structured sparsity for outliers is competitive with unstructured approaches, leading to equivalent or better results. Finally, we demonstrate that simple techniques such as variance correction and SmoothQuant-like weight equalization improve sparse model performance.
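
The 0.75 and 0.875 bits/element figures are consistent with counting admissible masks per group: an N:M pattern keeps N of every M weights, so a group's mask can be packed into ceil(log2(C(M, N))) bits. A minimal sketch of that arithmetic, assuming this per-group encoding (the abstract does not spell out the exact storage format):

```python
from math import comb, ceil, log2

def mask_bits_per_element(n: int, m: int) -> float:
    # Bits per weight needed to record which N of each M positions are kept,
    # assuming each group's mask is packed into ceil(log2(C(M, N))) bits.
    return ceil(log2(comb(m, n))) / m

print(mask_bits_per_element(2, 4))   # 0.75:  C(4, 2)  = 6 patterns     -> 3 bits per 4 weights
print(mask_bits_per_element(8, 16))  # 0.875: C(16, 8) = 12870 patterns -> 14 bits per 16 weights
```

The same counts explain the flexibility claim: a 2:4 group admits only 6 mask patterns, while an 8:16 group admits 12,870, at a cost of just 0.125 extra bits per element.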
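
For concreteness, here is a generic magnitude-based N:M pruning sketch (keep the N largest-magnitude weights in every contiguous group of M). This is a standard baseline for N:M sparsification, not the paper's method; its salient-weight selection and variance correction are not reproduced, and the helper name `prune_n_m` is illustrative:

```python
import numpy as np

def prune_n_m(weights: np.ndarray, n: int = 8, m: int = 16) -> np.ndarray:
    # Magnitude-based N:M pruning: within every group of M weights along the
    # last axis, keep the N largest-magnitude entries and zero the rest.
    w = weights.reshape(-1, m)
    # Indices of the (m - n) smallest-magnitude entries in each group.
    drop = np.argsort(np.abs(w), axis=1)[:, : m - n]
    w_pruned = w.copy()
    np.put_along_axis(w_pruned, drop, 0.0, axis=1)
    return w_pruned.reshape(weights.shape)

w = np.random.randn(4, 32).astype(np.float32)
w_8_16 = prune_n_m(w, 8, 16)  # every group of 16 keeps its 8 largest weights
assert (np.count_nonzero(w_8_16.reshape(-1, 16), axis=1) <= 8).all()
```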


Related tags

LLM compression, sparsity, 8:16 semi-structured, performance improvement, model optimization