cs.AI updates on arXiv.org · Apr 29, 12:08
Understanding the Skill Gap in Recurrent Language Models: The Role of the Gather-and-Aggregate Mechanism

This paper examines how Transformer- and SSM-based (state space model) language models perform in-context retrieval. The study finds that both architectures develop the same Gather-and-Aggregate (G&A) mechanism: a Gather Head first identifies and extracts relevant information from the context, and an Aggregate Head then integrates it into the final representation. G&A concentrates in just a few heads, which makes those heads critical performance bottlenecks. For example, disabling a single Gather or Aggregate Head in a pruned Llama-3.1-8B sharply reduces its ability to retrieve the correct answer letter on MMLU. The study also finds that SSMs struggle to implement G&A, which shows up as attention patterns that are less sharp than those of Transformer models. The work offers a new perspective on the in-context retrieval differences between Transformers and SSMs and points to ways of combining their strengths.

🔍 The study finds that both Transformers and SSMs rely on a Gather-and-Aggregate (G&A) mechanism for in-context retrieval. G&A consists of Gather Heads, which extract relevant information from the context, and Aggregate Heads, which integrate that information into the final representation.
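
As a rough mental model (not the paper's code), the two-step pattern can be sketched as two hand-crafted attention steps: a "gather" step that copies the value of a relevant context token to the query position, and an "aggregate" step that folds the gathered value into that position's representation. All shapes, indices, and names below are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

seq_len, d = 6, 8
hidden = torch.randn(seq_len, d)          # toy hidden states, one row per token
relevant_idx, query_idx = 2, 5            # context token holding the answer; final position

# --- Gather step: an attention head at the query position puts almost all of
# its probability mass on the relevant context token and copies its value.
gather_attn = torch.full((seq_len,), 0.01)
gather_attn[relevant_idx] = 1.0
gather_attn = gather_attn / gather_attn.sum()
gathered = gather_attn @ hidden           # approximately hidden[relevant_idx]

# --- Aggregate step: a later head integrates the gathered value into the
# query position's representation (here a simple residual-style mix).
aggregated = hidden[query_idx] + gathered

print("attention mass on relevant token:", gather_attn[relevant_idx].item())
print("cosine(gathered, relevant hidden):",
      torch.cosine_similarity(gathered, hidden[relevant_idx], dim=0).item())
```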

💡 G&A concentrates in just a few heads, and these heads become critical performance bottlenecks. For example, disabling a single G&A head causes a sharp performance drop on tasks such as MMLU (in the pruned Llama-3.1-8B, accuracy falls from 66% to 25%).
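
A hedged sketch of the kind of head ablation described above: zero out one head's contribution in a toy multi-head attention layer and compare outputs. This is a minimal stand-in, not the authors' pruning or evaluation code; in a real model such as Llama-3.1-8B one would instead hook the corresponding attention module.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def multi_head_attention(x, wq, wk, wv, n_heads, disabled_head=None):
    """Toy multi-head self-attention with an optional per-head ablation."""
    seq_len, d_model = x.shape
    head_dim = d_model // n_heads
    q = (x @ wq).view(seq_len, n_heads, head_dim)
    k = (x @ wk).view(seq_len, n_heads, head_dim)
    v = (x @ wv).view(seq_len, n_heads, head_dim)

    outputs = []
    for h in range(n_heads):
        attn = F.softmax(q[:, h] @ k[:, h].T / head_dim ** 0.5, dim=-1)
        head_out = attn @ v[:, h]
        if h == disabled_head:
            head_out = torch.zeros_like(head_out)   # ablate this head
        outputs.append(head_out)
    return torch.cat(outputs, dim=-1)

d_model, n_heads, seq_len = 16, 4, 10
x = torch.randn(seq_len, d_model)
wq, wk, wv = (torch.randn(d_model, d_model) * d_model ** -0.5 for _ in range(3))

full = multi_head_attention(x, wq, wk, wv, n_heads)
ablated = multi_head_attention(x, wq, wk, wv, n_heads, disabled_head=1)
print("output change from disabling one head:", (full - ablated).norm().item())
```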

⚠️ SSMs struggle to implement G&A, which leaves their attention patterns less sharp than those of Transformer models: SSM attention is smoother, whereas effective G&A relies on sharp token transitions.
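
One hedged way to make "smooth vs. sharp" concrete is the entropy of each attention row: a sharp gather pattern concentrates mass on a single token (low entropy), while a smooth pattern spreads it across many tokens (high entropy). The example matrices below are made up for illustration and are not the paper's measurements.

```python
import torch

def attention_entropy(attn):
    """Mean row entropy (in nats) of an attention matrix; lower = sharper."""
    attn = attn.clamp_min(1e-12)
    return -(attn * attn.log()).sum(dim=-1).mean()

seq_len = 8
# Sharp, Transformer-like pattern: each query focuses on a single key.
sharp = torch.eye(seq_len)
# Smooth, SSM-like pattern: mass spread over all previous tokens.
smooth = torch.tril(torch.ones(seq_len, seq_len))
smooth = smooth / smooth.sum(dim=-1, keepdim=True)

print("sharp pattern entropy :", attention_entropy(sharp).item())   # near 0
print("smooth pattern entropy:", attention_entropy(smooth).item())  # much larger
```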

🛠️ To address the G&A challenge in SSMs, the study points to several remedies. For example, in pretrained hybrid models the attention components naturally take on the role of Aggregate Heads, and in a pretrained pure SSM, replacing a single G&A head with an attention-based variant significantly improves retrieval performance.
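
A hedged sketch of the hybrid idea: a stack of simple recurrent blocks in which a single block is swapped for a self-attention block. The GRU here is only a stand-in for a real SSM layer, and the layer index to replace is arbitrary; the paper replaces a specific G&A head with an attention-based variant, not a whole layer chosen at random.

```python
import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    """Stand-in for an SSM layer: fixed-size state, processed with a GRU."""
    def __init__(self, d_model):
        super().__init__()
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(x)
        return x + out                      # residual connection

class AttentionBlock(nn.Module):
    """Attention block that can host sharp Gather/Aggregate patterns."""
    def __init__(self, d_model, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return x + out

def build_hybrid(d_model=32, n_layers=6, attn_layer=3):
    """Mostly recurrent stack with one attention block swapped in."""
    layers = [RecurrentBlock(d_model) for _ in range(n_layers)]
    layers[attn_layer] = AttentionBlock(d_model)
    return nn.Sequential(*layers)

model = build_hybrid()
x = torch.randn(2, 16, 32)                  # (batch, seq_len, d_model)
print(model(x).shape)                       # torch.Size([2, 16, 32])
```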

arXiv:2504.18574v1 Announce Type: cross

Abstract: SSMs offer efficient processing of long sequences with fixed state sizes, but struggle with algorithmic tasks like retrieving past context. In this work, we examine how such in-context retrieval operates within Transformer- and SSM-based language models. We find that both architectures develop the same fundamental Gather-and-Aggregate (G&A) mechanism. A Gather Head first identifies and extracts relevant information from the context, which an Aggregate Head then integrates into a final representation. Across both model types, G&A concentrates in just a few heads, making them critical bottlenecks even for benchmarks that require a basic form of retrieval. For example, disabling a single Gather or Aggregate Head of a pruned Llama-3.1-8B degrades its ability to retrieve the correct answer letter in MMLU, reducing accuracy from 66% to 25%. This finding suggests that in-context retrieval can obscure the limited knowledge demands of certain tasks. Despite strong MMLU performance with retrieval intact, the pruned model fails on other knowledge tests. Similar G&A dependencies exist in GSM8K, BBH, and dialogue tasks. Given the significance of G&A in performance, we show that retrieval challenges in SSMs manifest in how they implement G&A, leading to smoother attention patterns rather than the sharp token transitions that effective G&A relies on. Thus, while a gap exists between Transformers and SSMs in implementing in-context retrieval, it is confined to a few heads, not the entire model. This insight suggests a unified explanation for performance differences between Transformers and SSMs while also highlighting ways to combine their strengths. For example, in pretrained hybrid models, attention components naturally take on the role of Aggregate Heads. Similarly, in a pretrained pure SSM, replacing a single G&A head with an attention-based variant significantly improves retrieval.

Related tags

Transformer, SSM, in-context retrieval, G&A mechanism, language models