As data privacy becomes increasingly important, federated learning, which trains deep learning models while preserving the data privacy of participating devices, is entering the spotlight. Federated learning allows many devices to train a shared model on their local data independently, without collecting the distributed data in a central server. However, challenges remain for the participating devices, such as communication overhead and system heterogeneity. In this paper, we propose the Adjusting Mini-Batch and Local Epoch (AMBLE) approach, which adaptively adjusts the local mini-batch and local epoch sizes for heterogeneous devices in federated learning and updates the parameters synchronously. AMBLE improves computational efficiency by removing stragglers and scales the local learning rate to improve the convergence rate and accuracy of the model. We verify that federated learning with AMBLE trains models stably, with faster convergence and higher accuracy than FedAvg and an adaptive batch size scheme, in both independently and identically distributed (IID) and non-IID settings.
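To illustrate the idea of adapting per-device local work, the following is a minimal sketch, not the paper's exact formulation: it assumes each client reports a per-sample throughput, scales its mini-batch size and local epoch count in proportion to that throughput so slower clients receive less work per round, and applies a linear learning-rate scaling with batch size. All names and the scaling rules here are illustrative assumptions.

```python
# Illustrative sketch of per-client work adjustment (assumed formulation,
# not the exact AMBLE rule). Each client k reports throughput_k: the number
# of samples it can process per second.

def adjust_local_work(throughputs, base_batch, base_epochs, base_lr):
    """Scale mini-batch size, epoch count, and learning rate per client so
    that slower clients do proportionally less local work per round and no
    client becomes a straggler."""
    fastest = max(throughputs)
    plans = []
    for t in throughputs:
        ratio = t / fastest                        # relative speed in (0, 1]
        batch = max(1, int(round(base_batch * ratio)))
        epochs = max(1, int(round(base_epochs * ratio)))
        lr = base_lr * batch / base_batch          # assumed linear LR scaling
        plans.append({"batch_size": batch, "epochs": epochs, "lr": lr})
    return plans

if __name__ == "__main__":
    # Three heterogeneous clients: fast, medium, slow (samples/sec).
    print(adjust_local_work([1000.0, 400.0, 100.0],
                            base_batch=64, base_epochs=5, base_lr=0.1))
```

Under these assumptions, every client finishes its local update in roughly the same wall-clock time, so the server can aggregate synchronously without waiting on stragglers.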
This research was jointly supported by the Basic Science Research Program (2021R1F1A1062779) of the National Research Foundation of Korea (NRF) funded by the Ministry of Education, the supercomputing application department at the Korea Institute of Science and Technology Information (KSC-2021-CRE-0363), and the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2018-0-01431) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). We would like to thank Editage (www.editage.co.kr) for English language editing.