MarkTechPost@AI, February 19
DeepSeek AI Introduces NSA: A Hardware-Aligned and Natively Trainable Sparse Attention Mechanism for Ultra-Fast Long-Context Training and Inference

DeepSeek AI introduces NSA, a hardware-aligned and natively trainable sparse attention mechanism designed for ultra-fast long-context training and inference. It uses a dynamic hierarchical approach that first compresses tokens, then selectively retains the most important ones, and adds a sliding-window branch to preserve local context. This three-pronged strategy of compression, selection, and sliding window produces a condensed representation that captures both global and local dependencies. NSA's design accounts for hardware constraints: by optimizing GPU resource utilization, it significantly reduces training and inference latency, offering a promising path toward better long-context modeling.

🚀**Dynamic hierarchical sparse strategy**: NSA processes long sequences in three steps (compression, selection, and a sliding window), cutting computational cost while preserving key information.

💽**Hardware-aware design**: NSA is built around hardware constraints; by optimizing GPU utilization and reducing memory accesses, it significantly speeds up training and inference. Experiments report forward-pass speedups of up to 9× and backward-pass speedups of up to 6×.

🎯**Strong benchmark performance**: On benchmarks such as MMLU, GSM8K, and DROP, NSA performs on par with or better than traditional full-attention models. It is especially competitive in long-context settings, where it maintains both global awareness and local precision.

🔍**High retrieval accuracy**: In needle-in-a-haystack tasks, NSA achieves high retrieval accuracy even at sequence lengths of 64k tokens, thanks to a hierarchical design that combines coarse global scanning with fine-grained local selection.

In recent years, language models have been pushed to handle increasingly long contexts, and this need has exposed inherent problems in standard attention mechanisms. The quadratic complexity of full attention quickly becomes a bottleneck on long sequences: memory usage and compute grow rapidly, making practical applications such as multi-turn dialogue or complex reasoning difficult. Moreover, while sparse attention methods promise theoretical improvements, they often struggle to translate those gains into real-world speedups.
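To make the scaling problem concrete, here is a back-of-the-envelope sketch (ours, not from the paper) of how the number of query-key score entries in full attention grows with sequence length; the 32-head layer is an arbitrary assumption:

```python
# Illustrative only: full attention computes one score per query-key pair,
# so per-layer work grows with the square of the sequence length.
def full_attention_scores(seq_len: int, num_heads: int = 32) -> int:
    """Number of query-key score entries a full-attention layer materializes."""
    return num_heads * seq_len * seq_len

for n in (4_096, 16_384, 65_536):
    print(f"{n:>6} tokens -> {full_attention_scores(n):,} score entries per layer")
```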

Many of these challenges arise from a disconnect between theoretical efficiency and practical implementation. Reducing computational overhead without losing essential information is not a simple task. This has led researchers to rethink attention mechanisms so that they can better balance performance with efficiency. Addressing these issues is a crucial step toward building models that are both scalable and effective.

DeepSeek AI researchers introduce NSA, a hardware-aligned and natively trainable sparse attention mechanism for ultra-fast long-context training and inference. NSA integrates both algorithmic innovations and hardware-aligned optimizations to reduce the computational cost of processing long sequences. NSA uses a dynamic hierarchical approach. It begins by compressing groups of tokens into summarized representations. Then, it selectively retains only the most relevant tokens by computing importance scores. In addition, a sliding window branch ensures that local context is preserved. This three-pronged strategy—compression, selection, and sliding window—creates a condensed representation that still captures both global and local dependencies.
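A minimal sketch of this three-branch idea, written from the description above rather than from the paper's implementation (the block size, window length, mean-pooled compression, and fixed gates are placeholder assumptions; NSA learns its compression module and gating):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, K, V):
    """Single-query scaled dot-product attention over a (possibly reduced) K/V set."""
    scores = K @ q / np.sqrt(q.shape[-1])
    return softmax(scores) @ V

def nsa_style_output(q, K, V, block=8, top_blocks=2, window=16, gates=(1/3, 1/3, 1/3)):
    """Toy combination of NSA's three branches for one query position:
    1) compression: attend over block summaries (mean pooling stands in for the
       learned compression module),
    2) selection: attend over the top-scoring blocks at full resolution,
    3) sliding window: attend over the most recent `window` tokens.
    Branch outputs are mixed with fixed gates here; NSA learns its gates."""
    n, d = K.shape
    nb = n // block
    Kb = K[: nb * block].reshape(nb, block, d).mean(axis=1)   # block-level key summaries
    Vb = V[: nb * block].reshape(nb, block, d).mean(axis=1)

    cmp_out = attend(q, Kb, Vb)                               # branch 1: compression

    block_scores = Kb @ q                                     # importance score per block
    keep = np.argsort(block_scores)[-top_blocks:]             # indices of the top-n blocks
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    sel_out = attend(q, K[idx], V[idx])                       # branch 2: selection

    win_out = attend(q, K[-window:], V[-window:])             # branch 3: local window

    g1, g2, g3 = gates
    return g1 * cmp_out + g2 * sel_out + g3 * win_out

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=32), rng.normal(size=(64, 32)), rng.normal(size=(64, 32))
print(nsa_style_output(q, K, V).shape)  # (32,)
```

Even in this toy form, the query only ever touches the block summaries, a handful of selected blocks, and the local window rather than every token; that reduced footprint is what NSA exploits at scale.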

The design of NSA is also mindful of hardware constraints. By implementing specialized kernels optimized for modern GPUs, NSA achieves reduced latency in both inference and training. This careful blend of algorithmic strategy and hardware alignment makes NSA a promising candidate for improving long-context modeling.

Technical Details and Benefits

NSA’s architecture rests on two main pillars: a hardware-aware design and a training-friendly algorithm. The compression mechanism uses a learnable multilayer perceptron to aggregate sequential tokens into block-level representations. This captures high-level patterns while reducing the need for full-resolution processing.
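As a rough sketch of what such a compression step might look like (the module name, block length, hidden size, and exact parameterization below are our assumptions, not the paper's):

```python
import torch
import torch.nn as nn

class BlockCompressor(nn.Module):
    """Hypothetical stand-in for NSA's compression branch: a small MLP maps each
    block of `block_len` consecutive key (or value) vectors to one summary vector."""

    def __init__(self, dim: int, block_len: int, hidden: int = 256):
        super().__init__()
        self.block_len = block_len
        self.mlp = nn.Sequential(
            nn.Linear(block_len * dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim), with seq_len assumed divisible by block_len
        b, n, d = x.shape
        blocks = x.view(b, n // self.block_len, self.block_len * d)
        return self.mlp(blocks)  # (batch, num_blocks, dim)

keys = torch.randn(2, 1024, 64)
print(BlockCompressor(dim=64, block_len=32)(keys).shape)  # torch.Size([2, 32, 64])
```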

Following compression, the token selection module operates in a blockwise manner. It selects continuous token blocks that show similar attention scores, which helps minimize random memory access. The sliding window component is responsible for handling local context. By separating local and global information, NSA manages to preserve fine details essential for many tasks. On the hardware side, NSA optimizes the use of GPU resources. Queries are loaded into SRAM in groups, and redundant key-value transfers are minimized by sharing memory efficiently. These optimizations lead to noticeable speedups in both forward and backward computations. Experimental results indicate improvements of up to 9× in forward propagation and 6× in backward propagation for long sequences.
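The selection step can be pictured with another small, assumption-level sketch: block importance is scored against the compressed keys, heads share one choice, and whole contiguous blocks are kept so that key/value reads stay coalesced rather than scattered:

```python
import torch

def select_blocks(q: torch.Tensor, k_cmp: torch.Tensor, top_n: int = 4) -> torch.Tensor:
    """Toy blockwise selection: score blocks against compressed keys, pool the
    scores across heads so they agree on one set of blocks, and return the
    indices of the top-n blocks in ascending order."""
    # q: (heads, dim) queries for one position; k_cmp: (num_blocks, dim) block summaries
    scores = torch.einsum("hd,bd->hb", q, k_cmp)   # per-head block scores
    pooled = scores.softmax(dim=-1).sum(dim=0)     # shared importance across heads
    top = pooled.topk(min(top_n, pooled.numel())).indices
    return top.sort().values                       # whole contiguous blocks, in order

q = torch.randn(8, 64)          # 8 heads in one group sharing the selection
k_cmp = torch.randn(32, 64)     # 32 compressed block summaries
print(select_blocks(q, k_cmp))  # e.g. tensor([ 3, 11, 20, 27])
```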

Core components of NSA:

- Token compression: a learnable module aggregates consecutive tokens into block-level summaries, providing coarse global context.
- Blockwise token selection: importance scores identify the most relevant token blocks, which are kept at full resolution.
- Sliding window: a dedicated branch attends over recent tokens to preserve local context.

Results and Insights

The research presents a careful evaluation of NSA across various tasks. On benchmarks such as MMLU, GSM8K, and DROP, NSA achieves performance comparable to, or even better than, traditional full attention models. The design also proves effective in long-context scenarios, where maintaining both global awareness and local precision is critical.

One interesting observation is NSA’s high retrieval accuracy in needle-in-a-haystack tasks with sequences as long as 64k tokens. This is largely due to its hierarchical design that blends coarse global scanning with detailed local selection. The results also show that NSA’s decoding speed scales well with increasing sequence length, thanks to its reduced memory access footprint. These insights suggest that NSA’s balanced approach—combining compression, selection, and sliding window processing—offers a practical way to handle long sequences efficiently without sacrificing accuracy.

Conclusion

NSA marks a thoughtful step forward in the design of sparse attention mechanisms. By integrating trainability with hardware-aligned optimizations, NSA addresses the dual challenges of computational efficiency and effective long-context modeling. Its three-tiered approach, which includes token compression, selective attention, and sliding window processing, reduces computational overhead while preserving important context.


Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 75k+ ML SubReddit.

