Abstract:
In cross-device federated learning (FL) [1], a machine learning model is trained through communication between a central server and a large number of edge-device clients. In practical scenarios, the ability of these edge devices to reliably relay updates can vary significantly, so some devices may communicate more frequently than others. This paper explores strategies for forming groups of active clients to improve the accuracy of the server model. Specifically, we examine the case where these groups are organized and activated cyclically. We address the open question of whether leveraging the heterogeneity in communication capabilities by allowing certain clients to belong to multiple groups is beneficial. Our theoretical convergence analysis and experimental results show that enabling clients to join multiple groups significantly improves accuracy. We also show empirically that partitioning local data does not cause significant performance loss, while offering substantial benefits such as reduced local computation, lower power consumption, and shorter training times. The proposed strategies have the potential to boost the effectiveness of cross-device FL in environments with heterogeneous communication capabilities.