Craft-DFL: Navigating the Performance of Decentralized Federated Learning Deployments
Date
2024-08-27
Authors
Jiang, Chengyan
Abstract
The proliferation of smartphones and smart wearable devices has driven the widespread adoption of Centralized Federated Learning (CFL) for training powerful machine learning models while preserving data privacy. However, CFL is limited by its reliance on a central server, which hurts latency and system robustness. Decentralized Federated Learning (DFL) addresses these challenges by letting participating devices collaborate directly, without a central server: each device independently connects to other devices and exchanges model parameters with them.
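To make the peer-to-peer exchange concrete, the following is a minimal sketch of one synchronous DFL round in which every device averages its parameters with those of its direct neighbors (gossip averaging). This is an illustration only, not the thesis's implementation; Device, neighbors, and gossip_round are hypothetical names.

```python
import numpy as np

class Device:
    """A participant holding a parameter vector and links to direct peers."""
    def __init__(self, params):
        self.params = np.asarray(params, dtype=float)
        self.neighbors = []  # direct peers in the overlay topology

def gossip_round(devices):
    """One synchronous DFL round: each device replaces its parameters with
    the average of its own and its neighbors' pre-round parameters."""
    snapshot = {id(d): d.params.copy() for d in devices}
    for d in devices:
        group = [snapshot[id(d)]] + [snapshot[id(n)] for n in d.neighbors]
        d.params = np.mean(group, axis=0)

# Example: a 4-device ring; repeated rounds drive all devices to consensus.
devices = [Device(np.random.randn(3)) for _ in range(4)]
for i, d in enumerate(devices):
    d.neighbors = [devices[(i - 1) % 4], devices[(i + 1) % 4]]
for _ in range(50):
    gossip_round(devices)
```

Interleaving such consensus steps with local training on each device's private data is the basic DFL pattern the abstract describes.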
This work explores crucial factors influencing the convergence and generalization capacity of DFL models, emphasizing network topologies, non-IID data distributions, and training strategies. We first derive the convergence rates of different DFL deployment strategies. We then comprehensively analyze various network topologies (e.g., linear, ring, star, and mesh) under varying degrees of non-IID data, evaluating them on widely adopted machine learning models (classical models, deep neural networks, and Large Language Models) and real-world datasets.
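In DFL convergence analyses, such topologies are commonly encoded as doubly stochastic mixing matrices, whose spectral gap governs how quickly consensus is reached. The sketch below shows one common way to do this (the thesis does not state its exact construction); Metropolis-Hastings weights and the helper names are assumptions for illustration.

```python
import numpy as np

def linear(n):  # path graph
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return A

def ring(n):
    A = linear(n)
    A[0, n - 1] = A[n - 1, 0] = 1
    return A

def star(n):  # device 0 acts as the hub
    A = np.zeros((n, n), dtype=int)
    A[0, 1:] = A[1:, 0] = 1
    return A

def mesh(n):  # fully connected
    return np.ones((n, n), dtype=int) - np.eye(n, dtype=int)

def mixing_matrix(adj):
    """Metropolis-Hastings weights: symmetric and doubly stochastic
    for any connected undirected graph."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def spectral_gap(W):
    """1 - |second-largest eigenvalue|; a larger gap means faster consensus."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(W)))
    return 1.0 - eig[-2]

for name, topo in [("linear", linear), ("ring", ring),
                   ("star", star), ("mesh", mesh)]:
    print(name, spectral_gap(mixing_matrix(topo(8))))
```

Running this shows the intuition behind comparing topologies: sparser graphs such as the linear chain have a much smaller spectral gap than the mesh, so information propagates, and models converge, more slowly.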
The results reveal that, with IID data, models converge to the optimal model, whereas with non-IID data the convergence rate is inversely proportional to the degree of non-IID-ness in the data distribution. Our findings serve as valuable guidelines for designing effective DFL deployments in practical applications.
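One standard way to realize controllable "degrees of non-IID" in experiments like these is Dirichlet label partitioning, sketched below; the concentration parameter alpha tunes the skew. The thesis does not state its exact partitioning scheme, so this is only an assumed illustration.

```python
import numpy as np

def dirichlet_partition(labels, n_devices, alpha, seed=0):
    """Split sample indices across devices; smaller alpha => more skewed
    (more non-IID) local label distributions."""
    rng = np.random.default_rng(seed)
    device_idx = [[] for _ in range(n_devices)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_devices))  # per-device share of class c
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for dev, part in enumerate(np.split(idx, cuts)):
            device_idx[dev].extend(part)
    return [np.array(ix) for ix in device_idx]

# alpha = 100 approximates IID; alpha = 0.1 yields highly skewed shards.
labels = np.random.default_rng(0).integers(0, 10, size=1000)
shards = dirichlet_partition(labels, n_devices=8, alpha=0.1)
```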
Keywords
Decentralized Federated Learning