cs.AI updates on arXiv.org, June 5, 10:53
Modelling the Effects of Hearing Loss on Neural Coding in the Auditory Midbrain with Variational Conditioning

This paper proposes a novel variational conditioning model that learns how hearing loss is encoded in neural activity in the auditory midbrain. Trained on neural recordings from healthy and noise-exposed animals, the model accurately predicts 62% of the explainable variance in neural responses of normal-hearing animals and 68% for hearing-impaired animals. Hearing loss is characterised with only 6 free parameters per animal, and by fitting those parameters with Bayesian optimisation the model can simulate realistic activity from out-of-sample animals. The results provide a foundation for parametrised hearing-loss compensation models that aim to directly restore normal neural coding in hearing-impaired brains and can be rapidly fitted to new users through human-in-the-loop optimisation.
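For context on the 62%/68% figures: "explainable variance" discounts trial-to-trial noise so a model is only judged on the repeatable part of a neuron's response. Below is a minimal sketch of one common estimator of the fraction of explainable variance explained; the array shapes and the exact estimator are assumptions, not the paper's own code.

```python
import numpy as np

def explainable_variance_explained(responses, prediction):
    """One simple estimator of the fraction of explainable variance a model
    captures. `responses` is (n_repeats, n_time): spike counts from repeated
    presentations of the same stimulus; `prediction` is (n_time,).
    Illustrative only -- the paper may use a different estimator."""
    mean_resp = responses.mean(axis=0)
    # Variance over time of the trial-averaged response.
    total_var = mean_resp.var()
    # Trial-to-trial noise still left in the average after pooling n_repeats trials.
    noise_var = responses.var(axis=0, ddof=1).mean() / responses.shape[0]
    # Only the noise-corrected ("explainable") part counts towards the ceiling.
    explainable_var = max(total_var - noise_var, 1e-12)
    residual_var = ((mean_resp - prediction) ** 2).mean()
    return 1.0 - residual_var / explainable_var
```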

👂 The challenge of modelling the auditory system: traditional auditory modelling has focused on the early stages of processing in the cochlea. Central auditory processing is too complex to model by hand, and the datasets needed to train deep neural networks (DNNs) directly have been unavailable.

🧠 A novel variational conditioning model: the paper proposes a model that learns how hearing loss is encoded in auditory midbrain neural activity. By analysing neural recordings from healthy and noise-exposed animals, it parametrises hearing loss (a rough sketch of the conditioning idea follows after this list).

🔬 Model performance and applications: the model accurately predicts neural responses of both normal-hearing and hearing-impaired animals, and can simulate realistic activity from out-of-sample animals by fitting only the conditioning parameters with Bayesian optimisation. The work lays the groundwork for parametrised hearing-loss compensation models that restore normal neural coding in hearing-impaired brains and adapt quickly to new users.
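The architecture is not spelled out in this summary. Purely as an illustration of variational conditioning, the sketch below modulates a shared stimulus-to-response network with a 6-dimensional per-animal code drawn from a learned Gaussian posterior (a FiLM-style modulation); all class names, layer sizes, and shapes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConditionedResponseModel(nn.Module):
    """Illustrative sketch: a shared sound-to-neural-response network whose
    hidden features are modulated by a 6-dim per-animal "hearing loss" code."""

    def __init__(self, n_freq_bins=32, n_units=64, cond_dim=6):
        super().__init__()
        # Shared front end: maps a spectrogram frame to hidden features.
        self.backbone = nn.Sequential(
            nn.Linear(n_freq_bins, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # Per-animal conditioning: scale and shift the hidden features (FiLM).
        self.film = nn.Linear(cond_dim, 2 * 128)
        # Read out firing probabilities for the recorded units.
        self.readout = nn.Linear(128, n_units)

    def forward(self, spectrogram, cond):
        h = self.backbone(spectrogram)              # (batch, 128)
        gamma, beta = self.film(cond).chunk(2, -1)  # (batch, 128) each
        h = gamma * h + beta                        # animal-specific modulation
        return torch.sigmoid(self.readout(h))       # spiking probability per unit


class AnimalPosterior(nn.Module):
    """Variational treatment of the conditioning code: each animal gets a
    learned Gaussian posterior over its 6 hearing-loss parameters."""

    def __init__(self, cond_dim=6):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(cond_dim))
        self.log_sigma = nn.Parameter(torch.zeros(cond_dim))

    def sample(self, batch_size):
        eps = torch.randn(batch_size, self.mu.numel())
        return self.mu + eps * self.log_sigma.exp()  # reparameterisation trick

    def kl_to_standard_normal(self):
        var = (2 * self.log_sigma).exp()
        return 0.5 * (var + self.mu**2 - 1 - 2 * self.log_sigma).sum()
```

Training such a sketch would minimise a spike-prediction loss (e.g. Bernoulli cross-entropy) plus the KL terms across all animals, which is one way a smooth, low-dimensional hearing-loss space could be learned.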

arXiv:2506.03088v1 Announce Type: cross Abstract: The mapping from sound to neural activity that underlies hearing is highly non-linear. The first few stages of this mapping in the cochlea have been modelled successfully, with biophysical models built by hand and, more recently, with DNN models trained on datasets simulated by biophysical models. Modelling the auditory brain has been a challenge because central auditory processing is too complex for models to be built by hand, and datasets for training DNN models directly have not been available. Recent work has taken advantage of large-scale high resolution neural recordings from the auditory midbrain to build a DNN model of normal hearing with great success. But this model assumes that auditory processing is the same in all brains, and therefore it cannot capture the widely varying effects of hearing loss. We propose a novel variational-conditional model to learn to encode the space of hearing loss directly from recordings of neural activity in the auditory midbrain of healthy and noise-exposed animals. With hearing loss parametrised by only 6 free parameters per animal, our model accurately predicts 62% of the explainable variance in neural responses from normal hearing animals and 68% for hearing impaired animals, within a few percentage points of state-of-the-art animal-specific models. We demonstrate that the model can be used to simulate realistic activity from out-of-sample animals by fitting only the learned conditioning parameters with Bayesian optimisation, achieving cross-entropy loss within 2% of the optimum in 15-30 iterations. Including more animals in the training data slightly improved the performance on unseen animals. This model will enable future development of parametrised hearing loss compensation models trained to directly restore normal neural coding in hearing impaired brains, which can be quickly fitted for a new user by human-in-the-loop optimisation.
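The abstract's out-of-sample fitting keeps the shared network frozen and searches only over the 6 conditioning parameters. The sketch below shows the shape of that loop, with plain random search standing in for the Bayesian optimiser; the model interface (e.g. the hypothetical ConditionedResponseModel above), spike format, and search range are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fit_conditioning_params(model, stimuli, spikes, n_iters=30, cond_dim=6):
    """Fit only the per-animal conditioning vector of a frozen, pretrained
    shared model to a new animal's recordings by black-box search over the
    6 parameters. Plain random search stands in here for the Bayesian
    optimisation used in the paper. `model(stimuli, cond)` is assumed to
    return per-unit spiking probabilities; `spikes` holds 0/1 floats."""
    model.eval()
    best_cond, best_loss = None, float("inf")
    for _ in range(n_iters):
        # Propose a candidate conditioning vector (a Bayesian optimiser's
        # acquisition step would replace this uniform draw).
        cond = torch.empty(1, cond_dim).uniform_(-2.0, 2.0)
        with torch.no_grad():
            probs = model(stimuli, cond.expand(stimuli.shape[0], -1))
            # Bernoulli cross-entropy between predictions and recorded spikes.
            loss = F.binary_cross_entropy(probs, spikes).item()
        if loss < best_loss:
            best_cond, best_loss = cond, loss
    return best_cond, best_loss
```

With a proper Bayesian optimiser (a Gaussian-process surrogate proposing each new candidate instead of the uniform draw above), the paper reports reaching cross-entropy within 2% of the optimum in 15-30 iterations.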


Related tags

Hearing, Neural activity, Deep learning, Hearing loss, Models