Title: MPLS: Stacking Diverse Layers into One Model for Decentralized Federated Learning

Conference: EURO-PAR 2025

Tags: Decentralized Federated Learning, Device Heterogeneity, Edge Network

Abstract: Traditional Federated Learning (FL) enables collaborative training of deep neural networks (DNNs) across massive edge devices while preserving data privacy. However, its reliance on a centralized parameter server (PS) introduces communication bottlenecks and security risks. To address these issues, Decentralized Federated Learning (DFL) has emerged, which adopts peer-to-peer (P2P) communication to eliminate the PS. Despite its promise, DFL faces critical challenges: (1) limited bandwidth resources, (2) dynamic network conditions, and (3) data heterogeneity among devices. To address these challenges, we design and implement a communication-efficient DFL framework with peer and layer selection, namely MPLS, which has the following advantages. 1) Unlike previous works, which exchange an entire model between two workers, each worker collects only multiple sub-models (i.e., some critical layers) from the chosen peers and stacks them into one model for aggregation. 2) MPLS adopts asynchronous training among workers without any coordinator and enables each worker to develop its peer and layer selection strategy adaptively via the proposed list scheduling algorithm. We implement MPLS on a physical platform, and extensive experiments on real-world DNNs and datasets demonstrate that MPLS achieves 2.1-4.2× speedup compared to the baselines.
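The core mechanism described in the abstract, pulling selected layers from different peers and stacking them into one model before aggregation, can be illustrated with a minimal sketch. This is not the paper's implementation: the function and variable names (`stack_and_aggregate`, `layer_plan`, `alpha`) are hypothetical, and a simple convex-combination averaging step is assumed for the aggregation.

```python
# Minimal sketch (assumptions, not the authors' code) of layer stacking:
# each worker pulls chosen layers from chosen peers, stacks them into one
# composite model, and aggregates it with its local model.
import numpy as np

def stack_and_aggregate(local_model, peer_models, layer_plan, alpha=0.5):
    """local_model / peer_models[p]: dicts mapping layer name -> np.ndarray.
    layer_plan: dict mapping layer name -> peer id whose copy of that layer
    is pulled; layers not in the plan stay local.
    alpha: assumed mixing weight between the stacked and local models."""
    # Build one composite model by stacking the chosen peers' layers.
    stacked = {}
    for name, local_param in local_model.items():
        peer_id = layer_plan.get(name)
        stacked[name] = peer_models[peer_id][name] if peer_id is not None else local_param
    # Aggregate the stacked model with the local one (simple averaging assumed).
    return {name: alpha * stacked[name] + (1 - alpha) * local_model[name]
            for name in local_model}

# Toy usage: two peers, three layers, two layers pulled from different peers.
local = {f"layer{i}": np.zeros(4) for i in range(3)}
peers = {p: {f"layer{i}": np.full(4, p + 1.0) for i in range(3)} for p in (0, 1)}
plan = {"layer0": 0, "layer1": 1}  # layer2 is kept local
print(stack_and_aggregate(local, peers, plan))
```

In this sketch each worker only needs to transfer the layers named in its plan rather than entire peer models, which reflects the communication-saving intent of the layer selection described in the abstract; how the plan itself is chosen (the list scheduling algorithm) is not modeled here.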
