MarkTechPost@AI · July 10, 2024
Efficient Continual Learning for Spiking Neural Networks with Time-Domain Compression

 

This research proposes a rehearsal-based continual learning method tailored to Spiking Neural Networks (SNNs) that leads in memory efficiency and is designed to work seamlessly on resource-constrained devices. The method relies on a technique called Latent Replay (LR), which stores a subset of past experiences and uses them to train the network on new tasks. By applying lossy compression along the time axis, the researchers significantly reduce the SNNs' memory requirements. The method was evaluated in both Sample-Incremental and Class-Incremental continual learning configurations, showing that it preserves high accuracy while sharply cutting memory needs.

🤔 **The challenge of continual learning:** Conventional artificial neural networks (ANNs) suffer from catastrophic forgetting in continual learning: when learning a new task, they lose previously acquired knowledge. To address this, the researchers adopt a rehearsal-based continual learning approach that stores a subset of past experiences and replays them during training to prevent forgetting.

🤖 **Advantages of Spiking Neural Networks (SNNs):** SNNs are models inspired by biological neurons that transmit information in the form of spikes. Compared with conventional ANNs, they offer low power consumption and high efficiency, giving them great potential for deployment on edge devices.

⏳ **Time-domain compression:** To further improve memory efficiency, the researchers apply lossy compression along the time axis, reducing the memory needed to store past experiences. This compression substantially lowers memory requirements without a notable loss of accuracy.

📈 **Experimental results:** The method was tested in both Sample-Incremental and Class-Incremental continual learning configurations and shown to maintain high accuracy while significantly reducing memory requirements. In the Sample-Incremental configuration, for example, it reached 92.46% Top-1 accuracy on the SHD test set while requiring only 6.4 MB of LR data.

🏆 **Contribution:** This work offers a new path toward efficient and accurate continual learning on edge devices, which matters for building intelligent systems that can adapt to continuously changing data streams.

Advances in hardware and software have enabled AI integration into low-power IoT devices, such as ultra-low-power microcontrollers. However, deploying complex ANNs on these devices requires techniques like quantization and pruning to meet their tight constraints. Edge AI models can also suffer errors due to shifts in data distribution between the training and operational environments. Furthermore, many applications now need AI algorithms to adapt to individual users while preserving privacy and reducing reliance on internet connectivity.

One paradigm that has emerged to address these problems is continual learning (CL): the capacity to keep learning from new situations without losing information that has already been acquired. The best-performing CL solutions, known as rehearsal-based methods, reduce forgetting by continually training the learner on fresh data interleaved with examples from previously learned tasks. However, this approach requires extra storage space on the device. Rehearsal-free approaches, which rely on specific adjustments to the network architecture or learning strategy to make models resilient to forgetting without storing samples on-device, may instead involve a trade-off in accuracy. Many ANN models, such as CNNs, require large amounts of on-device storage for complex learning data, which can burden CL at the edge, particularly for rehearsal-based approaches.
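To make the rehearsal idea concrete, here is a minimal sketch of a rehearsal-based training step in PyTorch: a small on-device buffer keeps a subset of past examples and mixes them into every batch of new-task data. The class and parameter names (`RehearsalBuffer`, `replay_n`, buffer capacity) are illustrative assumptions, not the paper's implementation.

```python
import random
import torch


class RehearsalBuffer:
    """Fixed-size reservoir of past (input, label) pairs kept on-device."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []
        self.seen = 0

    def add(self, x, y):
        # Reservoir sampling keeps a roughly uniform subset of everything seen.
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append((x.clone(), y.clone()))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = (x.clone(), y.clone())

    def sample(self, n):
        batch = random.sample(self.samples, min(n, len(self.samples)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def train_step(model, optimizer, loss_fn, new_x, new_y, buffer, replay_n=16):
    """One CL step: new-task data is interleaved with replayed old examples."""
    if len(buffer.samples) > 0:
        old_x, old_y = buffer.sample(replay_n)
        x = torch.cat([new_x, old_x])
        y = torch.cat([new_y, old_y])
    else:
        x, y = new_x, new_y

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

    # Store the new samples so they can be rehearsed in later tasks.
    for xi, yi in zip(new_x, new_y):
        buffer.add(xi, yi)
    return loss.item()
```

The storage cost of this scheme is exactly what the buffer holds, which is why the size and format of the stored samples dominate the memory budget of rehearsal-based CL on embedded devices.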

Given this, Spiking Neural Networks (SNNs) are a promising paradigm for energy-efficient time-series processing thanks to their accuracy and efficiency. SNNs mimic the behavior of biological neurons by exchanging information through spikes, which are brief, discrete changes in a neuron's membrane potential. These spikes can easily be recorded as 1-bit data in digital architectures, opening up opportunities for building CL solutions. Online learning in software and hardware SNNs has been studied, but the investigation of rehearsal-free CL techniques in SNNs remains limited.
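As a rough illustration of why spike trains are cheap to store, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron and packs the resulting binary spike train into bits with NumPy. The decay and threshold values are arbitrary illustrative choices, not the paper's neuron model.

```python
import numpy as np


def lif_spikes(inputs, decay=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    inputs: 1-D array of input currents, one value per timestep.
    Returns a binary (0/1) spike train of the same length.
    """
    v = 0.0
    spikes = np.zeros(len(inputs), dtype=np.uint8)
    for t, i_t in enumerate(inputs):
        v = decay * v + i_t          # leaky integration of the input
        if v >= threshold:           # emit a spike and reset the membrane
            spikes[t] = 1
            v = 0.0
    return spikes


rng = np.random.default_rng(0)
spike_train = lif_spikes(rng.uniform(0.0, 0.5, size=100))

# Each timestep is a single bit, so the train packs densely in memory.
packed = np.packbits(spike_train)    # 100 timesteps -> 13 bytes
print(spike_train.sum(), "spikes;", packed.nbytes, "bytes packed")
```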

New research by a team at the University of Bologna, Politecnico di Torino, and ETH Zurich introduces a state-of-the-art, memory-efficient implementation of rehearsal-based CL for SNNs designed to work seamlessly on resource-constrained devices. The researchers use a rehearsal-based technique, Latent Replay (LR), to enable CL on SNNs. LR stores a subset of past experiences and uses them to train the network on new tasks, and it has been shown to reach state-of-the-art classification accuracy on CNNs. Exploiting the resilience of SNNs' information encoding to reduced precision, they apply lossy compression along the time axis, a novel way to shrink the rehearsal memory.
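The stored latent replays in an SNN are themselves spike trains, so one natural way to compress them along the time axis is to merge consecutive timesteps into coarser bins. The sketch below shows this idea under the assumption that merged bins keep only "at least one spike occurred" (a logical OR); the actual compression scheme in the paper (window size, whether counts or binary values are kept) may differ.

```python
import numpy as np


def compress_time(spike_train, factor):
    """Lossy time-axis compression of a binary spike train.

    spike_train: array of shape (timesteps, neurons) with 0/1 entries.
    factor: number of consecutive timesteps merged into one bin.
    """
    t, n = spike_train.shape
    t_trim = (t // factor) * factor                     # drop the ragged tail
    binned = spike_train[:t_trim].reshape(-1, factor, n)
    return binned.max(axis=1).astype(np.uint8)          # OR over each window


def decompress_time(compressed, factor):
    """Approximate reconstruction: repeat each compressed step `factor` times."""
    return np.repeat(compressed, factor, axis=0)


# Example: a latent spike train of 200 timesteps x 64 neurons.
rng = np.random.default_rng(1)
latent = (rng.random((200, 64)) < 0.05).astype(np.uint8)

compressed = compress_time(latent, factor=4)
restored = decompress_time(compressed, factor=4)

print("stored timesteps:", latent.shape[0], "->", compressed.shape[0])
print("memory ratio: %.1fx smaller" % (latent.shape[0] / compressed.shape[0]))
```

Because the stored representation shrinks linearly with the compression factor, the rehearsal buffer can either hold the same experiences in a fraction of the memory or hold more experiences in the same budget.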

The team’s approach is not only robust but also impressively efficient. They evaluate it in two popular CL configurations, Sample-Incremental and Class-Incremental CL, targeting a keyword-spotting application built on a recurrent SNN. To demonstrate its efficiency, they also run an extensive Multi-Class-Incremental CL experiment in which ten new classes are learned on top of an initial set of ten pre-learned ones. On the Spiking Heidelberg Dataset (SHD) test set, the approach reached a Top-1 accuracy of 92.46% in the Sample-Incremental setting with 6.4 MB of LR data, improving accuracy on a newly added scenario by 23.64% while retaining all previously learned ones. In the Class-Incremental setting, the method learned a new class with 92.50% accuracy and achieved 92% Top-1 accuracy overall while consuming 3.2 MB of data, with a loss of at most 3.5% on the previous classes. By combining compression with the selection of the best LR index, the memory needed for the rehearsal data was reduced by 140x, with an accuracy loss of only up to 4% compared to the naïve method. Finally, when learning the set of 10 new keywords in the Multi-Class-Incremental setup, the team attained 78.4% accuracy using compressed rehearsal data. These findings lay the groundwork for a novel method of CL at the edge that is both power-efficient and accurate.
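To give a feel for where such memory figures come from, a back-of-envelope estimate is sketched below: rehearsal memory scales with the number of stored samples times the number of timesteps times the latent width times the bits per value, so dividing the time axis by a compression factor divides the footprint by the same factor. The concrete numbers used here are illustrative assumptions, not the paper's configuration.

```python
def rehearsal_memory_mb(n_samples, timesteps, latent_dim,
                        bits_per_value=1, time_compression=1):
    """Back-of-envelope rehearsal-memory estimate in megabytes."""
    effective_t = timesteps // time_compression
    total_bits = n_samples * effective_t * latent_dim * bits_per_value
    return total_bits / 8 / 1e6


# Hypothetical setup: 2000 stored latent spike trains, 100 timesteps,
# 256 latent neurons, 1-bit spikes.
baseline = rehearsal_memory_mb(2000, 100, 256)
compressed = rehearsal_memory_mb(2000, 100, 256, time_compression=10)
print(f"baseline: {baseline:.2f} MB, 10x time compression: {compressed:.2f} MB")
```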


Check out the Paper. All credit for this research goes to the researchers of this project.




Tags: Continual Learning, Spiking Neural Networks, Edge Computing, Deep Learning, Artificial Intelligence