computational biology blog, November 27, 2024
What’s wrong with modeling?
This article examines the limitations of the bottom-up modeling approach common in neuroscience, arguing that it focuses too heavily on detail and on existing information while overlooking factors that may be crucial to neuronal function, such as a neuron's intrinsic excitability. It argues that a top-down modeling approach, starting from function and evolution, is better suited to understanding what neurons do, and uses the role of intrinsic excitability in learning as an example of the advantages of top-down modeling. The article stresses that modeling must consider the overall function of the biological system and the evolutionary pressures on it, rather than only local details.

🤔**Limitations of bottom-up modeling:** This approach focuses on building detailed network models that incorporate information such as cell types and synaptic connections, but it may overlook factors crucial to neuronal function, such as a neuron's intrinsic excitability.

💡**Advantages of top-down modeling:** Starting from function and evolution, and considering the overall function of the biological system and the evolutionary pressures on it, makes it easier to understand neuronal function, for example the important role of intrinsic excitability in learning.

⏳**The long neglect of intrinsic excitability:** For decades, the adaptive changes in ion channels that neurons undergo during behavioral learning were ignored. Only after the concept of "intrinsic excitability" gained acceptance was it gradually incorporated into computational models, where its role in a variety of functions is now being discovered.

📚**Publication history:** The paper discussed here first appeared on arXiv in 2005 but was not formally published until 2013, reflecting how the field's awareness of, and regard for, neuronal intrinsic excitability changed over that period.

A typical task in theoretical neuroscience is “modeling”, for instance, building and analysing network models of cerebellar cortex that incorporate the diversity of cell types and synaptic connections observed in this structure. The goal is to better understand the functions of these diverse cell types and connections. Approaches from statistical physics, nonlinear dynamics and machine learning can be used, and models should be “constrained” by electrophysiological, transcriptomic and connectomic data.

This all sounds very well and quite innocuous. It is “state of the art”. So what, then, is wrong with this approach, which dominates current computational neuroscience?

What is wrong is the belief that a detailed, bottom-up model can fulfill the goal of understanding the function of the system it models. Such a model is only a way to synthesize existing information. Many, perhaps most, aspects of the model are selected in advance, and much existing information is left out because it is irrelevant for the model. But irrelevant for the model does not mean that it is not crucial for the function of the real biological object, such as a neuron. For instance, for decades the fact that many neurons adapt their ion channels under behavioral learning was simply ignored. Then the notion of whole-neuron learning became acceptable, and under the term “intrinsic excitability” it slowly became part of computational modeling. Various functions of these changes are now being discovered, where they were first dismissed as merely “homeostatic”, i.e. only house-keeping roles were accepted for them.

If we had started with a top-down model, in terms of what makes sense (what evolution would prefer) and what is logically required or useful for the neuron to operate, we would have realized long ago that whole-neuron (intrinsic) excitability is a crucial part of learning. The paper was first published on arXiv in 2005, but accepted for publication only in 2013, when intrinsic excitability had become more widely known.
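Whole-neuron learning of this kind is easy to express in a model. Below is a minimal, hypothetical sketch (not taken from the post) of a rate neuron in which intrinsic excitability parameters, a gain and a threshold, adapt alongside Hebbian synaptic weights. All names, learning rates, and the homeostatic-style intrinsic rule are illustrative assumptions, chosen only to show the two kinds of plasticity operating together.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # Numerically safe logistic function.
    return 1.0 / (1.0 + np.exp(-np.clip(u, -50.0, 50.0)))

# Toy rate neuron: r = sigmoid(gain * (w @ x - threshold)).
# Both the synaptic weights w and the intrinsic parameters
# (gain, threshold) adapt during learning; the latter stand in for
# "intrinsic excitability" changes via ion-channel regulation.
n_inputs = 5
w = rng.normal(0.0, 0.1, n_inputs)   # synaptic weights
gain, threshold = 1.0, 0.0           # intrinsic excitability parameters

eta_w, eta_i = 0.01, 0.01            # learning rates (assumed values)
target_rate = 0.2                    # desired mean firing rate (assumed)

for _ in range(5000):
    x = rng.random(n_inputs)          # random input pattern
    drive = w @ x - threshold
    r = float(sigmoid(gain * drive))  # firing rate

    # Hebbian synaptic update with a small weight decay.
    w += eta_w * (r * x - 0.01 * w)

    # Intrinsic plasticity: adjust threshold and gain so the firing
    # rate drifts toward target_rate (a simple homeostatic-style rule).
    threshold += eta_i * (r - target_rate)
    gain -= eta_i * (r - target_rate) * drive
    gain = max(gain, 0.05)            # keep excitability positive
```

In a purely bottom-up model, `gain` and `threshold` would be fixed constants fitted from data; making them plastic is exactly the kind of choice a top-down, function-first view suggests.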
