MarkTechPost@AI — September 27, 2024
DP-Norm: A Novel AI Algorithm for Highly Privacy-Preserving Decentralized Federated Learning (FL)

DP-Norm is a novel AI algorithm for highly privacy-preserving decentralized federated learning that addresses the challenges posed by data privacy and non-IID data.

🎯 DP-Norm introduces a DP diffusion process into Edge Consensus Learning (ECL) as linear constraints on the model variables, strengthening robustness against non-IID data.

💪 To handle noise and interference, the team adds a denoising process that mitigates the explosive norm growth caused by dual-variable exchanges, ensuring privacy-preserving message passing.

📈 The DP-Norm update rule is derived with operator-splitting techniques, specifically Peaceman-Rachford splitting, alternating between local updates of the primal and dual variables and privacy-preserving message passing over a graph, which improves model convergence.

🔬 In experiments on the Fashion MNIST dataset, DP-Norm outperformed other decentralized methods in test accuracy, especially under stricter privacy settings.

Federated Learning (FL) is a successful solution for decentralized model training that prioritizes data privacy, allowing several nodes to learn together without sharing data. It’s especially important in sensitive areas such as medical analysis, industrial anomaly detection, and voice processing. 

Recent FL advancements emphasize decentralized network architectures to address challenges posed by non-IID (non-independent and identically distributed) data, which can compromise privacy during model updates. Studies show that even small differences in model parameters may leak confidential information, underscoring the need for effective privacy strategies. Differential privacy (DP) techniques have been integrated into decentralized FL to enhance privacy by adding controlled Gaussian noise to the exchanged information. While these methods can be adapted from single-node training to decentralized settings, their introduction may degrade learning performance due to interference and the non-IID way data is allocated across nodes.
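For concreteness, here is a minimal sketch of the kind of Gaussian-noise perturbation such DP methods apply before a node shares its update. The clipping bound, noise multiplier, and function names are illustrative assumptions, not values taken from any specific method discussed here.

```python
# Minimal sketch: clip a model update to a norm bound, then add Gaussian
# noise calibrated to that bound before sharing it with neighbors.
# clip_norm and noise_multiplier are illustrative, not values from the paper.
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float = 1.0,
                     noise_multiplier: float = 1.0) -> np.ndarray:
    """Clip `update` to `clip_norm` and perturb it with Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Example: privatize a flattened gradient before exchanging it on the graph.
shared = privatize_update(np.random.randn(1000), clip_norm=1.0, noise_multiplier=0.8)
```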

To overcome these problems, a research team from Japan proposes a primal-dual differential privacy algorithm with denoising normalization, termed DP-Norm. The approach introduces a DP diffusion process into Edge Consensus Learning (ECL) as linear constraints on the model variables, enhancing robustness against non-IID data. To address noise and interference, the team incorporates a denoising process that mitigates the explosive norm growth caused by dual-variable exchanges, ensuring privacy-preserving message passing.
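For orientation, the constrained problem behind ECL-style primal-dual methods can be written schematically as below; the notation (local losses f_i, model variables w_i, edge matrices A_{i|j}) follows the usual ECL/PDMM convention and is an assumption about the paper's exact symbols.

```latex
\min_{\{w_i\}} \sum_{i \in \mathcal{V}} f_i(w_i)
\quad \text{subject to} \quad
A_{i|j} w_i + A_{j|i} w_j = 0 \quad \forall (i,j) \in \mathcal{E}
```

With A_{i|j} = −A_{j|i} = I, each edge constraint forces neighboring nodes' model variables to agree; the dual variables attached to these constraints are what DP-Norm exchanges and perturbs with Gaussian noise.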

In particular, the approach applies DP diffusion to message forwarding in the ECL framework, adding Gaussian noise to the dual variables to limit information leakage. In preliminary tests, however, the researchers found that this noise caused learning to stall because the norm of the dual variables kept growing. To curb this noise buildup, the cost function incorporates a denoising normalization term ρ(λ), which keeps the norm from expanding rapidly while preserving the privacy benefits of the DP diffusion process.

The DP-Norm update rule is derived using operator-splitting techniques, specifically Peaceman-Rachford splitting, and alternates between local updates of the primal and dual variables and privacy-preserving message passing over a graph. This structure lets the model variables at each node approach the stationary point more effectively, even in the presence of noise and non-IID data, and the denoising term ρ(λ) further stabilizes the algorithm. Compared to DP-SGD for decentralized FL, DP-Norm with denoising reduces the gradient drift caused by non-IID data and excessive noise, improving model convergence. Finally, the algorithm is analyzed through privacy and convergence evaluations: the minimal noise level required for (ε,δ)-DP is derived, and the effects of DP diffusion and denoising on convergence are characterized.
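The alternating structure described above can be sketched, in highly simplified form, roughly as follows. This is not the paper's actual Peaceman-Rachford update; the step sizes, gradient call, and normalization threshold are illustrative assumptions meant only to show where the noisy exchange and the denoising normalization fit in each round.

```python
# Highly simplified single-edge sketch of one DP-Norm-style round:
# local primal/dual updates, a Gaussian-perturbed exchange of the dual
# variable, and a denoising normalization that bounds its norm.
# All step sizes, thresholds, and names here are illustrative assumptions.
import numpy as np

def noisy_exchange(dual: np.ndarray, sigma: float) -> np.ndarray:
    """Privacy-preserving message passing: send the dual variable plus Gaussian noise."""
    return dual + np.random.normal(0.0, sigma, size=dual.shape)

def denoise_normalize(dual: np.ndarray, max_norm: float) -> np.ndarray:
    """Shrink the dual variable when its norm grows too large (the role played by rho(lambda))."""
    norm = np.linalg.norm(dual)
    return dual if norm <= max_norm else dual * (max_norm / norm)

def local_round(w, dual_in, local_gradient, eta=0.1, sigma=0.5, max_norm=10.0):
    # Primal step: descend the local loss while the dual variable received
    # from the neighbor pulls the model toward consensus.
    w = w - eta * (local_gradient(w) + dual_in)
    # Dual step (schematic): move the dual variable toward the constraint residual.
    dual_out = dual_in + eta * w
    # Exchange the dual variable under Gaussian noise, then denoise-normalize it.
    return w, denoise_normalize(noisy_exchange(dual_out, sigma), max_norm)
```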

The researchers used the Fashion MNIST dataset to compare DP-Norm against previous approaches (DP-SGD and DP-ADMM) on image classification. Each node had access to a non-IID subset of the data, and both a convex logistic regression model and the non-convex ResNet-10 model were tested. Five approaches, including DP-Norm with and without normalization, were investigated under various privacy settings (ε={∞,1,0.5}, δ=0.001). DP-Norm (α>0) surpasses the other decentralized approaches in test accuracy, especially in the stricter privacy settings: the denoising step reduces the DP diffusion noise, keeping performance steady even under tighter privacy constraints.
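For readers who want to reproduce a comparable setup, a label-skewed (non-IID) split of Fashion MNIST across nodes can be generated along these lines; the Dirichlet partition and the use of torchvision are assumptions for illustration, as the paper's exact partitioning scheme is not specified here.

```python
# Sketch of a non-IID (label-skewed) split of Fashion MNIST across nodes
# using a Dirichlet partition. The partitioning scheme and alpha value are
# illustrative assumptions, not the paper's exact experimental setup.
import numpy as np
from torchvision import datasets

def dirichlet_partition(labels: np.ndarray, num_nodes: int, alpha: float = 0.3,
                        seed: int = 0) -> list[np.ndarray]:
    """Split sample indices across nodes with Dirichlet-distributed per-class shares."""
    rng = np.random.default_rng(seed)
    node_indices = [[] for _ in range(num_nodes)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(num_nodes))
        cut_points = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for node, part in zip(node_indices, np.split(idx, cut_points)):
            node.extend(part.tolist())
    return [np.array(ix) for ix in node_indices]

train = datasets.FashionMNIST(root="./data", train=True, download=True)
parts = dirichlet_partition(np.array(train.targets), num_nodes=8, alpha=0.3)
```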

In conclusion, the study presented DP-Norm, a privacy-preserving method for decentralized federated learning that guarantees (ε, δ)-DP. The approach combines message forwarding, local model updates, and denoising normalization. The theoretical analysis shows that DP-Norm outperforms DP-SGD and DP-ADMM in terms of required noise level and convergence. Experimentally, DP-Norm consistently performed close to the single-node reference scores, demonstrating its stability and usefulness in non-IID settings.


Check out the Paper. All credit for this research goes to the researchers of this project.

