cs.AI updates on arXiv.org, July 25, 12:28
Caching Techniques for Reducing the Communication Cost of Federated Learning in IoT Environments

This paper proposes intelligent caching strategies, including FIFO, LRU, and priority-based policies, to reduce unnecessary model-update transmissions in federated learning and improve bandwidth efficiency. Experiments on CIFAR-10 and medical datasets show that these strategies effectively lower communication cost while maintaining model accuracy.

arXiv:2507.17772v1 Announce Type: cross Abstract: Federated Learning (FL) allows multiple distributed devices to jointly train a shared model without centralizing data, but communication cost remains a major bottleneck, especially in resource-constrained environments. This paper introduces caching strategies - FIFO, LRU, and Priority-Based - to reduce unnecessary model update transmissions. By selectively forwarding significant updates, our approach lowers bandwidth usage while maintaining model accuracy. Experiments on CIFAR-10 and medical datasets show reduced communication with minimal accuracy loss. Results confirm that intelligent caching improves scalability, memory efficiency, and supports reliable FL in edge IoT networks, making it practical for deployment in smart cities, healthcare, and other latency-sensitive applications.
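The abstract does not give implementation details, but the core idea of "selectively forwarding significant updates" under FIFO, LRU, or priority-based eviction can be sketched roughly as follows. This is a minimal illustration, not the authors' code; the class name `UpdateCache`, the L2-norm significance score, and the `threshold` parameter are all assumptions for the sketch.

```python
import collections
import math


class UpdateCache:
    """Hypothetical client-side cache for federated-learning model updates.

    An update is forwarded to the server only when its magnitude exceeds
    `threshold`; otherwise it is held locally. When the cache is full, an
    entry is evicted according to `policy` ("fifo", "lru", or "priority").
    """

    def __init__(self, capacity=4, threshold=0.1, policy="priority"):
        self.capacity = capacity
        self.threshold = threshold
        self.policy = policy
        self.entries = collections.OrderedDict()  # update_id -> magnitude

    @staticmethod
    def magnitude(update):
        # L2 norm of a flat list of parameter deltas, used here as an
        # (assumed) significance score for the update.
        return math.sqrt(sum(v * v for v in update))

    def offer(self, update_id, update):
        """Return True if the update should be transmitted to the server now."""
        mag = self.magnitude(update)
        if mag >= self.threshold:
            return True  # significant: transmit immediately
        if len(self.entries) >= self.capacity:
            self._evict()
        self.entries[update_id] = mag
        return False  # insignificant: cached locally, not transmitted

    def touch(self, update_id):
        # For LRU: mark an entry as recently used so it is evicted last.
        if self.policy == "lru" and update_id in self.entries:
            self.entries.move_to_end(update_id)

    def _evict(self):
        if self.policy == "priority":
            # Priority-based: drop the least significant cached update.
            victim = min(self.entries, key=self.entries.get)
            del self.entries[victim]
        else:
            # FIFO and LRU both evict from the front of the ordered dict;
            # LRU ordering is maintained by touch().
            self.entries.popitem(last=False)
```

Under this sketch, only updates whose norm clears the threshold cost a transmission, which is one plausible way the reported bandwidth savings with minimal accuracy loss could arise.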


Related tags

Federated Learning, Intelligent Caching, Communication Efficiency, Model Accuracy, Edge IoT