arXiv:2410.15045v3 Announce Type: replace-cross Abstract: Federated Unlearning (FU) enables the removal of specific clients' data influence from trained models. In non-IID settings, however, removing clients creates critical side effects: remaining clients with similar data distributions suffer disproportionate performance degradation, and the global model's stability deteriorates. These vulnerable clients then have weaker incentives to stay in the federation, potentially triggering a cascade of withdrawals that further destabilizes the system. To address this challenge, we develop a theoretical framework that quantifies how data heterogeneity impacts unlearning outcomes. Building on these insights, we model FU as a Stackelberg game in which the server strategically offers payments to retain crucial clients based on their contributions to both unlearning effectiveness and system stability. Our equilibrium analysis reveals how data heterogeneity fundamentally shapes the trade-offs between system-wide objectives and client interests. Our approach improves global stability by up to 6.23%, reduces worst-case client degradation by 10.05%, and improves runtime efficiency by up to 38.6% over complete retraining.
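
To make the leader-follower structure concrete, here is a minimal sketch of the server-client retention game described above. All names (Client, client_stays, server_payments), the linear utility form, the participation constraint, and the numeric values are illustrative assumptions for exposition, not the paper's actual formulation.

from dataclasses import dataclass

@dataclass
class Client:
    name: str
    degradation: float      # assumed performance loss the client suffers after unlearning
    base_utility: float     # assumed utility of staying in the federation with no payment
    stability_value: float  # assumed marginal contribution to global stability if retained

def client_stays(c: Client, payment: float) -> bool:
    # Follower best response: stay iff net utility is non-negative.
    return c.base_utility - c.degradation + payment >= 0.0

def server_payments(clients: list[Client]) -> dict[str, float]:
    # Leader move: offer each client the smallest payment satisfying its
    # participation constraint, but only when the client's stability value
    # exceeds that cost (otherwise letting it withdraw is cheaper).
    offers = {}
    for c in clients:
        deficit = max(0.0, c.degradation - c.base_utility)
        offers[c.name] = deficit if c.stability_value > deficit else 0.0
    return offers

if __name__ == "__main__":
    clients = [
        Client("similar-dist", degradation=0.8, base_utility=0.5, stability_value=1.0),
        Client("dissimilar",   degradation=0.1, base_utility=0.6, stability_value=0.2),
    ]
    offers = server_payments(clients)
    for c in clients:
        print(c.name, "offer:", offers[c.name], "stays:", client_stays(c, offers[c.name]))

In this toy instance the client with a similar data distribution (hence high post-unlearning degradation) receives a positive payment exactly covering its participation deficit, while the dissimilar client needs none, mirroring the abstract's point that heterogeneity determines who must be retained and at what cost.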