Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jung, June Pyo | - |
dc.contributor.author | Ko, Young Bae | - |
dc.contributor.author | Lim, Sung Hwa | - |
dc.date.issued | 2024-04-01 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/34162 | - |
dc.description.abstract | Federated learning (FL) is an emerging distributed learning technique through which models can be trained on data collected by user devices in resource-constrained situations while protecting user privacy. However, FL has three main limitations. First, the parameter server (PS), which aggregates the local models trained on local user data, is typically far from users; this large distance burdens the path links between the PS and local nodes, increasing network and computing resource consumption. Second, user device resources are limited, but this is not considered when training the local model or transmitting the model parameters. Third, the PS-side links tend to become highly loaded as the number of participating clients increases, and they become congested owing to the large size of the model parameters. In this study, we propose a resource-efficient FL scheme. We follow the Pareto optimality concept with biased client selection to limit client participation, thereby ensuring efficient resource consumption and rapid model convergence. In addition, we propose a hierarchical structure with location-based clustering for device-to-device communication using k-means clustering. Simulation results show that, with a participation rate (prate) of 0.75, the proposed scheme reduces transmitted and received network traffic by 75.89% and 78.77%, respectively, compared with FedAvg, and achieves faster model convergence than other FL mechanisms such as FedAvg and D2D-FedAvg. | - |
dc.description.sponsorship | This research was funded by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) (NRF-2020R1A2C1102284 & NRF-2021R1A2C1012776). | - |
dc.language.iso | eng | - |
dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | - |
dc.subject.mesh | Efficiency models | - |
dc.subject.mesh | FAST model | - |
dc.subject.mesh | Federated learning | - |
dc.subject.mesh | Local model | - |
dc.subject.mesh | Mobile communications | - |
dc.subject.mesh | Model convergence | - |
dc.subject.mesh | Modeling parameters | - |
dc.subject.mesh | Pareto-optimality | - |
dc.subject.mesh | Resource efficiencies | - |
dc.subject.mesh | User devices | - |
dc.title | Federated Learning with Pareto Optimality for Resource Efficiency and Fast Model Convergence in Mobile Environments † | - |
dc.type | Article | - |
dc.citation.title | Sensors | - |
dc.citation.volume | 24 | - |
dc.identifier.bibliographicCitation | Sensors, Vol.24 | - |
dc.identifier.doi | 10.3390/s24082476 | - |
dc.identifier.pmid | 38676094 | - |
dc.identifier.scopusid | 2-s2.0-85191409670 | - |
dc.identifier.url | http://www.mdpi.com/journal/sensors | - |
dc.subject.keyword | federated learning | - |
dc.subject.keyword | mobile communication | - |
dc.subject.keyword | Pareto optimality | - |
dc.description.isoa | true | - |
dc.subject.subarea | Analytical Chemistry | - |
dc.subject.subarea | Information Systems | - |
dc.subject.subarea | Atomic and Molecular Physics, and Optics | - |
dc.subject.subarea | Biochemistry | - |
dc.subject.subarea | Instrumentation | - |
dc.subject.subarea | Electrical and Electronic Engineering | - |
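The abstract's location-based clustering step — grouping clients by position with k-means so that each cluster can aggregate locally over device-to-device links before contacting the parameter server — can be sketched as below. This is a minimal illustration under assumed inputs (2-D client coordinates, a chosen `k`, a fixed iteration cap), not the authors' implementation from the paper.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal 2-D k-means over client locations (illustrative sketch).

    Returns (centroids, clusters); in a hierarchical FL setup, each
    cluster would elect a head to aggregate local models over D2D links.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # distinct initial centroids
    for _ in range(iters):
        # Assign each client to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda c: (x - centroids[c][0]) ** 2
                                  + (y - centroids[c][1]) ** 2)
            clusters[i].append((x, y))
        # Recompute each centroid as the mean of its cluster members.
        updated = []
        for c, members in enumerate(clusters):
            if members:
                updated.append((sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members)))
            else:
                updated.append(centroids[c])  # keep empty cluster's centroid
        if updated == centroids:  # converged: assignments are stable
            break
        centroids = updated
    return centroids, clusters
```

For example, six clients at two well-separated sites would be partitioned into two clusters of three, giving two aggregation neighborhoods instead of six direct links to the PS.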
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.