cs.AI updates on arXiv.org, June 5, 12:54
Model Alignment Search

This paper explores how neural representational similarity can be connected to behavior through causal interventions. It proposes a method that learns orthogonal transformations to isolate and interchange behavioral information from multiple distributed networks' representations within an aligned subspace. The authors show that the method can transfer behavior between frozen neural networks and that it complements correlative similarity measures. They also introduce an efficient subspace orthogonalization technique and explore how the method can be used to compare specific kinds of representational information across models and tasks. The results highlight the importance of causality when comparing neural representations and show how interventions can lead to a deeper understanding of how neural systems work.

🧠 At its core, the work proposes a method for connecting neural representational similarity to behavior through causal interventions. The method learns orthogonal transformations that find an aligned subspace in which behavioral information from multiple distributed networks' representations can be isolated and interchanged, enabling a deeper understanding of the underlying neural systems.
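As an illustration of the basic idea (not the paper's code), an interchange intervention in an aligned subspace can be sketched as follows; the rotation matrices, the hidden width `d`, and the subspace size `k` are hypothetical placeholders:

```python
import torch

d, k = 64, 8                                  # hidden width and aligned-subspace size (assumed)
h_a = torch.randn(1, d)                       # representation read from network A
h_b = torch.randn(1, d)                       # representation read from network B
Q_a = torch.linalg.qr(torch.randn(d, d)).Q    # stand-ins for the learned orthogonal transforms
Q_b = torch.linalg.qr(torch.randn(d, d)).Q

def interchange(h_target, h_source, Q_target, Q_source, k):
    """Swap the first k aligned dimensions of the target with the source's."""
    r_target = h_target @ Q_target            # rotate into the shared, aligned basis
    r_source = h_source @ Q_source
    patched = torch.cat([r_source[:, :k], r_target[:, k:]], dim=1)
    return patched @ Q_target.T               # rotate back into the target's native space

# Patch network A's representation with network B's behavioral subspace.
h_a_patched = interchange(h_a, h_b, Q_a, Q_b, k)
```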

💡 The study shows that the method can transfer behavior between frozen neural networks, in a manner similar to model stitching. It also complements correlative similarity measures such as RSA, offering an additional, causal perspective on neural representations.
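For context, a purely correlative measure such as RSA compares representational geometry without any intervention. A minimal sketch of an RSA comparison, assuming arbitrary stimulus-by-unit activation matrices (the variable names and data are placeholders):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical activations: rows are stimuli, columns are units of each network.
acts_a = np.random.randn(50, 128)
acts_b = np.random.randn(50, 256)

# Representational dissimilarity matrices (condensed form), one per network.
rdm_a = pdist(acts_a, metric="correlation")
rdm_b = pdist(acts_b, metric="correlation")

# RSA score: rank correlation between the two RDMs. This is purely correlative;
# it says nothing about whether the shared structure is causally used for behavior.
rsa_score, _ = spearmanr(rdm_a, rdm_b)
print(f"RSA (Spearman) similarity: {rsa_score:.3f}")
```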

⚙️ The study introduces an efficient subspace orthogonalization technique based on the Gram-Schmidt process, which can also be used for Distributed Alignment Search (DAS) and makes it feasible to analyze larger models. In addition, the overall approach reduces the number of matrices required to compare n models from quadratic to linear in n.
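A minimal, differentiable Gram-Schmidt sketch of the general idea (function and parameter names are assumptions, not the paper's implementation): a freely learnable parameter matrix is orthonormalized so that its rows span an aligned subspace without needing an explicit orthogonality penalty.

```python
import torch

def gram_schmidt(vectors: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Orthonormalize the rows of `vectors` (k x d) via classical Gram-Schmidt.

    Each step subtracts the projections onto the previously accepted basis
    vectors and normalizes the residual, so the result has orthonormal rows
    and stays differentiable with respect to the input parameters.
    """
    basis = []
    for v in vectors:
        for b in basis:
            v = v - (v @ b) * b             # remove the component along b
        basis.append(v / (v.norm() + eps))  # normalize what remains
    return torch.stack(basis)

# Usage: a learnable (k x d) parameter defines a k-dimensional aligned subspace.
k, d = 8, 64
params = torch.nn.Parameter(torch.randn(k, d))
subspace = gram_schmidt(params)             # rows are orthonormal
assert torch.allclose(subspace @ subspace.T, torch.eye(k), atol=1e-4)
```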

🔬 Through experiments and theory, the study shows that the method can be made equivalent to model stitching when desired, or it can take a more restrictive form that isolates causal information. It also shows how an auxiliary loss can be used to train causally relevant alignments even when the representations of only one of the two networks can be read during training (as with biological networks).
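A rough sketch of what such a combined objective could look like, purely as an assumption about its structure (the paper's actual losses may differ): a behavioral loss on the patched system's observable outputs plus an auxiliary term computed only from the readable network's representations.

```python
import torch.nn.functional as F

# Illustrative only: `readable_reps` come from the one network we can read;
# `behavior_logits` are the observable outputs of the patched system;
# `targets` and `latent_labels` are task labels; `probe` is a small decoder.
def training_step(readable_reps, behavior_logits, targets, latent_labels,
                  probe, aux_weight=0.1):
    # Main objective: the patched system should still produce correct behavior.
    behavior_loss = F.cross_entropy(behavior_logits, targets)

    # Auxiliary objective: the aligned subspace of the readable network should
    # carry the behaviorally relevant variable (decoded by the probe).
    aux_loss = F.cross_entropy(probe(readable_reps), latent_labels)

    return behavior_loss + aux_weight * aux_loss
```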

🔢 Finally, using number representations as a case study, the work explores how the method can compare specific types of representational information across tasks and models, giving researchers a new tool for analyzing and comparing representations in different neural systems.

arXiv:2501.06164v5 Announce Type: replace-cross Abstract: When can we say that two neural systems are the same? The answer to this question is goal-dependent, and it is often addressed through correlative methods such as Representational Similarity Analysis (RSA) and Centered Kernel Alignment (CKA). What nuances do we miss, however, when we fail to causally probe the representations? Do the dangers of cause vs. correlation exist in comparative representational analyses? In this work, we introduce a method for connecting neural representational similarity to behavior through causal interventions. The method learns orthogonal transformations that find an aligned subspace in which behavioral information from multiple distributed networks' representations can be isolated and interchanged. We first show that the method can be used to transfer the behavior from one frozen Neural Network (NN) to another in a manner similar to model stitching, and we show how the method can complement correlative similarity measures like RSA. We then introduce an efficient subspace orthogonalization technique using the Gram-Schmidt process -- that can also be used for Distributed Alignment Search (DAS) -- allowing us to perform analyses on larger models. Next, we empirically and theoretically show how our method can be equivalent to model stitching when desired, or it can take a form that is more restrictive to causal information, and in both cases, it reduces the number of required matrices for a comparison of n models from quadratic to linear in n. We then show how we can augment the loss objective with an auxiliary loss to train causally relevant alignments even when we can only read the representations from one of the two networks during training (like with biological networks). Lastly, we use number representations as a case study to explore how our method can be used to compare specific types of representational information across tasks and models.
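To make the abstract's quadratic-to-linear claim concrete: pairwise stitching needs a transformation for roughly every model pair, whereas mapping each model into one shared aligned space needs only one transformation per model. A trivial count, assuming one matrix per unordered pair for the pairwise case:

```python
def matrices_needed(n: int) -> tuple[int, int]:
    """Number of alignment matrices for n models: pairwise vs. shared space."""
    pairwise = n * (n - 1) // 2   # one matrix per unordered model pair (quadratic in n)
    shared = n                    # one matrix per model into a common space (linear in n)
    return pairwise, shared

print(matrices_needed(10))  # (45, 10)
```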


Related tags

Neural representations · Causal interventions · Behavior linkage · Neural networks · Model comparison