Federated learning (FL) has been proposed to address the security vulnerabilities of conventional distributed deep learning. Because the capabilities of participating FL clients vary widely in both statistical and system aspects, FL training suffers from degraded convergence accuracy and speed. We therefore propose CHAFL (client-heterogeneity-aware federated learning) to address client heterogeneity (i.e., statistical and system heterogeneity) in synchronous FL. CHAFL selects clients based on their contribution to the global loss, defined as the local loss, achieving better round-to-accuracy for the global model than previous studies. To handle system heterogeneity, it also introduces a lightweight algorithm that eliminates the profiling process used in existing studies to compute adaptive local epochs, thereby improving time-to-accuracy. To verify the effectiveness of CHAFL, we conduct an empirical evaluation on three benchmark datasets under non-IID and system-heterogeneous settings. Compared to the baseline, CHAFL improves accuracy by 3.1 to 5.7%, with 1.04 to 1.73x better round-to-accuracy and 1.03 to 1.98x better time-to-accuracy.
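The contribution-based selection described above can be sketched minimally. The abstract does not give the exact selection rule, so the snippet below assumes, as an illustration, that a client's contribution is its reported local loss and that clients with larger local loss are prioritized; the function name `select_clients` and the toy loss values are hypothetical.

```python
def select_clients(local_losses, num_select):
    """Pick the clients whose reported local loss is largest.

    Assumption (not specified in the abstract): 'contribution' is
    read as the client's local loss, so clients with higher loss
    are prioritized for the next training round.
    """
    ranked = sorted(local_losses, key=local_losses.get, reverse=True)
    return ranked[:num_select]

# Toy example: five clients report their local training loss.
losses = {"c1": 0.9, "c2": 0.2, "c3": 1.4, "c4": 0.7, "c5": 0.5}
print(select_clients(losses, 2))  # -> ['c3', 'c1']
```

In a synchronous FL round, the server would collect these losses alongside model updates, so the ranking adds no extra profiling pass.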
This work was jointly supported by the Basic Science Research Program of the National Research Foundation of Korea (2021R1F1A1062779), the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2023-2018-0-01431) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and the Korea Institute of Science and Technology Information (KISTI) (KSC-2022-CRE-0406).