cs.AI updates on arXiv.org, April 14, 12:02
Federated Class-Incremental Learning with Prompting
This article examines federated learning in the context of evolving Web technology, focusing on the federated class-incremental learning (FCIL) problem, in which client data is continuously updated and new classes appear. Because traditional methods struggle with dynamically changing, non-independent and identically distributed (non-IID) data, the study proposes FCILPT. By introducing a prompting mechanism, the method effectively mitigates catastrophic forgetting of old classes while preserving data privacy under limited memory. Experiments show that FCILPT achieves significant performance gains on CIFAR-100, Mini-ImageNet, and Tiny-ImageNet.

💡 Federated learning has become increasingly important as Web technology develops, especially for protecting data privacy. However, existing methods typically assume that client data is fixed, which does not match the dynamically changing data of real-world scenarios.

🔄 To address this, the work focuses on federated class-incremental learning (FCIL), which handles continuously generated client data and the arrival of new classes. FCIL faces two challenges: catastrophic forgetting of old classes and non-IID data distributions across clients.

✨ To tackle these problems, the paper proposes FCILPT. The method uses prompts to mitigate catastrophic forgetting of old classes, encoding task-relevant and task-irrelevant knowledge into prompts so that local clients retain both old and new knowledge.
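The prompt idea can be sketched as a small prompt pool whose entries are selected by key-query matching and prepended to the input's patch tokens, in the style of prompt-pool methods for continual learning. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the `select_prompts` helper, and the cosine-similarity selection rule are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8          # toy embedding size (a real ViT would use e.g. 768)
PROMPT_LEN = 2         # tokens per prompt
POOL_SIZE = 4          # number of prompts in the pool
TOP_K = 2              # prompts selected per input

# Prompt pool: learnable prompt tokens plus a matching key per prompt.
prompt_pool = rng.standard_normal((POOL_SIZE, PROMPT_LEN, EMBED_DIM))
prompt_keys = rng.standard_normal((POOL_SIZE, EMBED_DIM))

def select_prompts(query):
    """Pick the TOP_K prompts whose keys best match the query feature
    (cosine similarity), returned in a stable sorted order."""
    sims = prompt_keys @ query / (
        np.linalg.norm(prompt_keys, axis=1) * np.linalg.norm(query) + 1e-8)
    idx = np.argsort(-sims)[:TOP_K]
    return np.sort(idx)

def prepend_prompts(patch_tokens, query):
    """Prepend the selected prompt tokens to the patch-token sequence."""
    idx = select_prompts(query)
    chosen = prompt_pool[idx].reshape(-1, EMBED_DIM)
    return np.concatenate([chosen, patch_tokens], axis=0)

query = rng.standard_normal(EMBED_DIM)        # e.g. a [CLS]-style query feature
patches = rng.standard_normal((16, EMBED_DIM))
extended = prepend_prompts(patches, query)
print(extended.shape)  # (TOP_K * PROMPT_LEN + 16, EMBED_DIM) -> (20, 8)
```

Because only the small prompt tensors are trained and later communicated, this style of method avoids storing exemplars of old data, which is what makes it compatible with the privacy and memory constraints described above.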

🤝 Before global aggregation, FCILPT sorts the task information in each local client's prompt pool so that task information is aligned across clients. This ensures that knowledge of the same task is fully integrated, resolving the non-IID problem caused by different clients lacking some classes within the same incremental task.
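The alignment step amounts to sorting each client's prompt entries by task identifier before averaging, so that the server aggregates prompts belonging to the same task together. A minimal sketch, assuming a simplified representation where each client's pool is a mapping from task id to a single prompt vector and aggregation is a plain FedAvg-style mean (the real method operates on full prompt pools):

```python
import numpy as np

EMBED_DIM = 4

# Each client holds {task_id: prompt_vector}; note the slot order and the
# set of tasks differ between clients (the non-IID situation in the text).
client_pools = [
    {2: np.full(EMBED_DIM, 2.0), 0: np.full(EMBED_DIM, 0.0)},
    {0: np.full(EMBED_DIM, 0.5), 2: np.full(EMBED_DIM, 1.0)},
    {1: np.full(EMBED_DIM, 3.0), 0: np.full(EMBED_DIM, 1.0)},
]

def aggregate_prompt_pools(pools):
    """Sort task ids so identical tasks line up across clients, then
    average each task's prompt over the clients that actually hold it."""
    all_tasks = sorted({t for pool in pools for t in pool})
    merged = {}
    for t in all_tasks:
        contribs = [pool[t] for pool in pools if t in pool]
        merged[t] = np.mean(contribs, axis=0)
    return merged

global_pool = aggregate_prompt_pools(client_pools)
print(global_pool[0])  # mean of 0.0, 0.5, 1.0 -> [0.5 0.5 0.5 0.5]
```

Without the sort-by-task step, slot-wise averaging would mix prompts from different tasks whenever clients store them in different orders, which is exactly the misalignment the paper's aggregation is designed to avoid.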

✅ Experiments on CIFAR-100, Mini-ImageNet, and Tiny-ImageNet show that FCILPT outperforms existing methods in accuracy, demonstrating its effectiveness.

arXiv:2310.08948v2 Announce Type: replace-cross Abstract: As Web technology continues to develop, it has become increasingly common to use data stored on different clients. At the same time, federated learning has received widespread attention due to its ability to protect data privacy while letting models learn from data distributed across various clients. However, most existing works assume that each client's data are fixed. In real-world scenarios, such an assumption is most likely not true, as data may be continuously generated and new classes may also appear. To this end, we focus on the practical and challenging federated class-incremental learning (FCIL) problem. In FCIL, the local and global models may suffer from catastrophic forgetting on old classes caused by the arrival of new classes, and the data distributions of clients are non-independent and identically distributed (non-iid). In this paper, we propose a novel method called Federated Class-Incremental Learning with PrompTing (FCILPT). Given the privacy and limited memory, FCILPT does not use a rehearsal-based buffer to keep exemplars of old data. We instead use prompts to ease the catastrophic forgetting of the old classes. Specifically, we encode the task-relevant and task-irrelevant knowledge into prompts, preserving the old and new knowledge of the local clients and solving the problem of catastrophic forgetting. We first sort the task information in the prompt pool in the local clients to align the task information on different clients before global aggregation. This ensures that the same task's knowledge is fully integrated, solving the problem of non-iid caused by the lack of classes among different clients in the same incremental task. Experiments on CIFAR-100, Mini-ImageNet, and Tiny-ImageNet demonstrate that FCILPT achieves significant accuracy improvements over the state-of-the-art methods.
