MarkTechPost@AI · February 26
Enhancing Instruction Tuning in LLMs: A Diversity-Aware Data Selection Strategy Using Sparse Autoencoders

Researchers at Meta GenAI propose a diversity-aware data selection strategy that uses sparse autoencoders (SAEs) to improve instruction tuning of large language models (LLMs). The approach quantifies data diversity with SAEs and introduces two selection algorithms, SAE-GreedSelect and SAE-SimScale, for limited and larger data regimes respectively. Experiments on the Alpaca and WizardLM_evol_instruct_70k datasets show that the strategy outperforms prior techniques, lowers training costs, and yields deeper insight into model behavior, making instruction tuning more efficient and interpretable.

💡 Researchers at Meta GenAI use sparse autoencoders (SAEs) to build a diversity-aware data selection strategy aimed at improving LLM instruction tuning while also making model behavior more interpretable.

📚 The work develops two selection algorithms: SAE-GreedSelect, which optimizes feature utilization when the data budget is limited, and SAE-SimScale, which scales selection to larger datasets via similarity-based sampling.

📈 Experiments show that SAE-GreedSelect and SAE-SimScale outperform existing baselines across multiple datasets and evaluation metrics, with SAE-SimScale delivering especially large gains at larger data scales.

Pre-trained LLMs require instruction tuning to align with human preferences, yet the sheer volume of collected data and rapid model iteration often lead to oversaturation, making efficient data selection a crucial but underexplored problem. Existing quality-driven selection methods, such as LIMA and AlpaGasus, tend to overlook data diversity and complexity, both of which are essential for improving model performance. While scaling LLMs has proven beneficial, optimizing instruction fine-tuning (IFT) depends on the quality, diversity, and complexity of the training data. Measuring these factors remains challenging, however, and recent research has called for quantifiable metrics of dataset diversity rather than subjective claims. Sparse autoencoders (SAEs) have recently emerged as effective tools for interpreting LLMs by encouraging monosemantic representations, making them well suited to analyzing data selection mechanisms.

Sparse autoencoders have significantly improved LLM interpretability by enforcing sparsity in representations, thereby enhancing feature independence. Early works in sparse coding and dictionary learning laid the foundation for structured data representations, later applied to transformers to decode contextual embeddings. Recent research has highlighted the challenges of polysemantic neurons encoding multiple concepts, prompting efforts to develop monosemantic neurons for better interpretability. In parallel, data selection methods, such as ChatGPT-based scoring and gradient-based clustering, have been explored to refine instruction tuning. Despite advancements, accurately quantifying data quality, diversity, and complexity remains complex, necessitating further research into effective metrics and selection strategies to optimize instruction tuning in LLMs.
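To make the idea of sparsity-enforced, monosemantic features concrete, the sketch below shows a minimal sparse autoencoder of the general kind used in this line of work: a linear encoder with a ReLU nonlinearity, a linear decoder, and a reconstruction loss with an L1 penalty on the feature activations. The class name, dimensions, and the `l1_coeff` penalty weight are illustrative assumptions, not the architecture or hyperparameters used in the paper.

```python
# Minimal sparse-autoencoder sketch (illustrative only; the paper's exact
# architecture, dictionary size, and training recipe are not reproduced here).
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # maps LLM activations to sparse features
        self.decoder = nn.Linear(d_features, d_model)  # reconstructs the original activation

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))  # non-negative, sparse feature activations
        x_hat = self.decoder(f)
        return x_hat, f


def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that drives most features to zero,
    # which is what encourages (near-)monosemantic, interpretable features.
    recon = ((x - x_hat) ** 2).mean()
    sparsity = f.abs().mean()
    return recon + l1_coeff * sparsity
```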

Researchers at Meta GenAI introduce a diversity-aware data selection strategy that uses SAEs to improve instruction tuning. SAEs quantify data diversity and enhance model interpretability, helping explain why heuristics such as selecting the longest responses work. The researchers develop two selection algorithms: SAE-GreedSelect for limited data budgets and SAE-SimScale for larger datasets. Experiments on the Alpaca and WizardLM_evol_instruct_70k datasets demonstrate superior performance over prior techniques. The approach refines data selection, reduces training costs, and offers deeper insight into model behavior, making instruction tuning more efficient and interpretable.

The study introduces two diversity-driven data selection methods using SAEs. SAE-GreedSelect optimizes feature utilization for selecting limited data, while SAE-SimScale scales data selection using similarity-based sampling. Experiments on Llama-2-13b, Gemma-2-9b, and Llama-2-7b-base validate the approach using Alpaca-52k and WizardLM_evol_instruct_70k datasets. Comparisons with baselines like Longest-response, #InsTag, and Repr Filter demonstrate superior performance. Models are trained using standardized settings and evaluated with IFEval, LLM- and Human-as-a-Judge methods, and benchmarks like MMLU and TruthfulQA. Results highlight improved instruction tuning efficiency and interpretability while maintaining simplicity in parameter tuning.
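As a rough illustration of what "optimizing feature utilization" could look like in practice, here is a greedy coverage heuristic over per-example SAE feature activations: at each step it picks the example that activates the most SAE features not yet covered by the selection. The function name, the binary activation threshold, and the `feature_matrix` preprocessing are hypothetical; the paper's actual SAE-GreedSelect objective may differ.

```python
import numpy as np


def greedy_select_by_feature_coverage(feature_matrix: np.ndarray, budget: int):
    """Diversity-aware greedy selection sketch.

    feature_matrix: (n_examples, n_sae_features) SAE activations per
    instruction-response pair (hypothetical preprocessing step).
    Returns indices of up to `budget` examples chosen to maximize the number
    of distinct SAE features covered -- one plausible reading of
    "optimizing feature utilization", not the paper's exact algorithm.
    """
    active = feature_matrix > 0                 # which features each example activates
    covered = np.zeros(active.shape[1], dtype=bool)
    selected: list[int] = []
    for _ in range(budget):
        # Marginal gain: how many not-yet-covered features each example would add.
        gains = (active & ~covered).sum(axis=1)
        gains[selected] = -1                    # never re-pick an already selected example
        best = int(gains.argmax())
        if gains[best] <= 0:
            break                               # no remaining example adds new features
        selected.append(best)
        covered |= active[best]
    return selected
```

A similarity-based variant in the spirit of SAE-SimScale would instead sample candidates while down-weighting examples whose SAE feature vectors are too similar to those already chosen, which scales more gracefully to large pools.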

Selecting the 1,000 longest responses is an effective baseline for supervised fine-tuning (SFT), likely because longer responses contain more learnable information. A strong correlation (r = 0.92) between text length and feature richness in an SAE supports this hypothesis. The proposed data selection methods, SAE-GreedSelect and SAE-SimScale, outperform existing baselines, particularly at larger data scales. SAE-SimScale achieves notable improvements across multiple datasets and evaluation metrics, highlighting its robustness. Further experiments confirm its effectiveness across model sizes and architectures, reinforcing its potential for optimizing scalable data selection strategies.
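The length-versus-feature-richness relationship can be checked with a simple Pearson correlation between token counts and the number of distinct active SAE features per example, as sketched below. The helper name and the whitespace tokenization are assumptions for illustration, not the paper's exact measurement protocol behind the reported r = 0.92.

```python
import numpy as np


def length_feature_correlation(responses: list[str], feature_matrix: np.ndarray) -> float:
    """Pearson correlation between response length and SAE feature richness.

    `responses` holds the raw response texts and `feature_matrix` the
    per-example SAE activations (same hypothetical preprocessing as above).
    A high correlation would suggest that longer responses activate more
    distinct SAE features, i.e. carry more learnable information.
    """
    lengths = np.array([len(r.split()) for r in responses], dtype=float)   # crude word count
    richness = (feature_matrix > 0).sum(axis=1).astype(float)              # distinct active features
    return float(np.corrcoef(lengths, richness)[0, 1])
```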

In conclusion, the study introduces an approach to measuring data diversity using learned monosemanticity in sparse autoencoders. A new data selection algorithm for instruction tuning was developed, improving model performance across various datasets. The method consistently outperforms existing selection techniques and demonstrates that longer instruction-response pairs enhance model capabilities. The approach also improves efficiency by reducing data requirements and training costs. Additionally, it offers insights into model behavior and can be extended to preference data selection or improving model safety. This strategy ensures better alignment with human preferences while maintaining diversity and complexity in training data.


Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 80k+ ML SubReddit.


