Federated Learning (FL) is an emerging machine learning technique for training models on large-scale data in resource-constrained environments. However, three main challenges related to communication resources arise in FL. First, the parameter server (PS) that collects user devices' models resides in the remote cloud, so model aggregation may burden the links between the PS and high-traffic local nodes. Second, the network can become congested owing to the large size of the model parameters. Third, PS-side links may be highly stressed when the number of participating clients is large. In the present study, we propose a resource-efficient FL scheme that clusters clients based on their locations and communication ranges, and selects a subset of clients in each cluster to update the model by exploiting the Pareto principle. Simulation results show that our proposed scheme reduces wireless network traffic while achieving slightly higher accuracy than the legacy FL mechanism.
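The abstract does not specify implementation details. As a minimal illustrative sketch, the following assumes clients are clustered by a coarse location grid (a stand-in for location/communication-range clustering) and interprets the Pareto principle as selecting the top 20% of clients per cluster, ranked here by a hypothetical per-client score; the grid cell size, the scoring criterion, and the function names `cluster_clients` and `select_pareto` are all assumptions for illustration, not the paper's actual method.

```python
import random
from collections import defaultdict

def cluster_clients(clients, cell_size=100.0):
    """Group clients into clusters by a coarse location grid
    (assumed proxy for location/communication-range clustering)."""
    clusters = defaultdict(list)
    for c in clients:
        key = (int(c["x"] // cell_size), int(c["y"] // cell_size))
        clusters[key].append(c)
    return clusters

def select_pareto(cluster, fraction=0.2):
    """Select the top `fraction` of clients in a cluster (at least one),
    ranked by a per-client score; 20% reflects the Pareto principle."""
    k = max(1, int(len(cluster) * fraction))
    return sorted(cluster, key=lambda c: c["score"], reverse=True)[:k]

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical clients with random positions and scores.
    clients = [{"id": i,
                "x": random.uniform(0, 500),
                "y": random.uniform(0, 500),
                "score": random.random()} for i in range(50)]
    for key, cluster in cluster_clients(clients).items():
        chosen = select_pareto(cluster)
        print(key, [c["id"] for c in chosen])
```

Under these assumptions, only the selected minority of clients per cluster would upload model updates each round, which is how the scheme would reduce traffic on PS-side links.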
This work was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) (NRF-2020R1A2C1102284 and NRF-2021R1A2C1012776). This work was also supported by the BK21 FOUR program of the NRF of Korea funded by the Ministry of Education (NRF-5199991514504).