MarkTechPost@AI July 8, 2024
Researchers at IT University of Copenhagen Propose Self-Organizing Neural Networks for Enhanced Adaptability

Researchers at the IT University of Copenhagen have proposed a new approach called Lifelong Neural Developmental Programs (LNDPs), which combines a graph transformer architecture with gated recurrent units (GRUs) to let neural networks self-organize and adapt throughout their lifetime. By dynamically adjusting network structure and connectivity, LNDPs markedly improve adaptation speed and learning efficiency across a range of reinforcement learning tasks.

🌐 By introducing a graph transformer architecture and GRUs, LNDPs enable neural networks to self-organize and differentiate, adapting dynamically on the basis of local neuronal activity and global environmental rewards. This overcomes the limitation of traditional neural networks being confined to static, pre-defined developmental phases and gives AI systems a more naturalistic capacity to adapt.

🔄 The core components of LNDPs include node and edge models plus synaptogenesis and pruning functions, all integrated into a graph transformer layer. Node states are updated from the output of the graph transformer layer, while edges are updated by GRUs based on the pre- and post-synaptic neuron states and the received reward. Structural plasticity is realized by dynamically adding or removing connections between nodes.

🚀 The study shows that LNDPs perform well across several reinforcement learning tasks, including Cartpole, Acrobot, Pendulum, and a foraging task. Especially in environments demanding rapid adaptation and exhibiting non-stationary dynamics, LNDP networks with structural plasticity significantly outperform static networks. Moreover, introducing a spontaneous activity (SA) phase greatly boosts performance, allowing networks to develop functional structure before ever contacting the environment.

Artificial neural networks (ANNs) traditionally lack the adaptability and plasticity seen in biological neural networks. This limitation poses a significant challenge for their application in dynamic and unpredictable environments. The inability of ANNs to continuously adapt to new information and changing conditions hinders their effectiveness in real-time applications such as robotics and adaptive systems. Developing ANNs that can self-organize, learn from experiences, and adapt throughout their lifetime is crucial for advancing the field of artificial intelligence (AI).

Current methods addressing neural plasticity include meta-learning and developmental encodings. Meta-learning techniques, such as gradient-based methods, aim to create adaptable ANNs but often come with high computational costs and complexity. Developmental encodings, including Neural Developmental Programs (NDPs), show potential in evolving functional neural structures but are confined to pre-defined growth phases and lack mechanisms for continuous adaptation. These existing methods are limited by computational inefficiency, scalability issues, and an inability to handle non-stationary environments, making them unsuitable for many real-time applications.

The researchers from the IT University of Copenhagen introduce Lifelong Neural Developmental Programs (LNDPs), a novel approach extending NDPs to incorporate synaptic and structural plasticity throughout an agent’s lifetime. LNDPs utilize a graph transformer architecture combined with Gated Recurrent Units (GRUs) to enable neurons to self-organize and differentiate based on local neuronal activity and global environmental rewards. This approach allows dynamic adaptation of the network’s structure and connectivity, addressing the limitations of static and pre-defined developmental phases. The introduction of spontaneous activity (SA) as a mechanism for pre-experience development further enhances the network’s ability to self-organize and develop innate skills, making LNDPs a significant contribution to the field.

LNDPs involve several key components: node and edge models, synaptogenesis, and pruning functions, all integrated into a graph transformer layer. Nodes’ states are updated using the output of the graph transformer layer, which includes information about node activations and structural features. Edges are modeled with GRUs that update based on pre- and post-synaptic neuron states and received rewards. Structural plasticity is achieved through synaptogenesis and pruning functions that dynamically add or remove connections between nodes. The framework is implemented using various reinforcement learning tasks, including Cartpole, Acrobot, Pendulum, and a foraging task, with hyperparameters optimized using the Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES).

The researchers demonstrate the effectiveness of LNDPs across several reinforcement learning tasks, including Cartpole, Acrobot, Pendulum, and a foraging task. Key performance metrics reported in the paper show that networks with structural plasticity significantly outperform static networks, especially in environments requiring rapid adaptation and exhibiting non-stationary dynamics. In the Cartpole task, LNDPs with structural plasticity achieved higher rewards in early episodes, demonstrating faster adaptation. The inclusion of spontaneous activity (SA) phases further improved performance, enabling networks to develop functional structures before interacting with the environment. Overall, LNDPs showed superior adaptation speed and learning efficiency, highlighting their potential for building adaptable, self-organizing AI systems.

In conclusion, LNDPs provide a framework for evolving self-organizing neural networks with lifelong plasticity and structural adaptability. By addressing the limitations of static ANNs and existing developmental encodings, they offer a promising path towards AI systems capable of continuous learning and adaptation. The method delivers notable improvements in adaptation speed and learning efficiency across a range of reinforcement learning tasks, marking a substantial step towards more naturalistic and adaptable AI systems.


Check out the Paper. All credit for this research goes to the researchers of this project.

