Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yun, Won Joon | - |
dc.contributor.author | Kwak, Yunseok | - |
dc.contributor.author | Baek, Hankyul | - |
dc.contributor.author | Jung, Soyi | - |
dc.contributor.author | Ji, Mingyue | - |
dc.contributor.author | Bennis, Mehdi | - |
dc.contributor.author | Park, Jihong | - |
dc.contributor.author | Kim, Joongheon | - |
dc.date.issued | 2023-12-01 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/33225 | - |
dc.description.abstract | Federated learning (FL) is a key enabler for efficient communication and computing, leveraging devices' distributed computing capabilities. However, applying FL in practice is challenging due to the local devices' heterogeneous energy, wireless channel conditions, and non-independently and identically distributed (non-IID) data distributions. To cope with these issues, this paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs). Integrating FL with SNNs is challenging due to time-varying channel conditions and data distributions. In addition, existing multi-width SNN training algorithms are sensitive to the data distributions across devices, which makes SNNs ill-suited for FL. Motivated by this, we propose a communication- and energy-efficient SNN-based FL framework (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models. By applying SC, SlimFL exchanges the superposition of multiple width configurations, decoded as many times as possible for a given communication throughput. Leveraging ST, SlimFL aligns the forward propagation of different width configurations while avoiding inter-width interference during backpropagation. We formally prove the convergence of SlimFL. The result reveals that SlimFL is not only communication-efficient but also copes with non-IID data distributions and poor channel conditions, which is also corroborated by data-intensive simulations. | - |
dc.description.sponsorship | This work was supported in part by the National Research Foundation of Korea (NRF-Korea) under Grant 2022R1A2C2004869 and in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) Grant through the Korea Government [Ministry of Science and Information and Communications Technology (MSIT)], Intelligent 6G Wireless Access System, under Grant 2021-0-00467. | - |
dc.language.iso | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.subject.mesh | Convergence | - |
dc.subject.mesh | Data distribution | - |
dc.subject.mesh | Encodings | - |
dc.subject.mesh | Federated learning | - |
dc.subject.mesh | Heterogeneous devices | - |
dc.subject.mesh | Neural-networks | - |
dc.subject.mesh | Slimmable neural network | - |
dc.subject.mesh | Super-position coding | - |
dc.title | SlimFL: Federated Learning With Superposition Coding Over Slimmable Neural Networks | - |
dc.type | Article | - |
dc.citation.endPage | 2514 | - |
dc.citation.startPage | 2499 | - |
dc.citation.title | IEEE/ACM Transactions on Networking | - |
dc.citation.volume | 31 | - |
dc.identifier.bibliographicCitation | IEEE/ACM Transactions on Networking, Vol.31, pp.2499-2514 | - |
dc.identifier.doi | 10.1109/TNET.2022.3231864 | - |
dc.identifier.scopusid | 2-s2.0-85147216166 | - |
dc.identifier.url | https://ieeexplore.ieee.org/servlet/opac?punumber=90 | - |
dc.subject.keyword | Federated learning | - |
dc.subject.keyword | heterogeneous devices | - |
dc.subject.keyword | slimmable neural network | - |
dc.description.isoa | true | - |
dc.subject.subarea | Software | - |
dc.subject.subarea | Computer Science Applications | - |
dc.subject.subarea | Computer Networks and Communications | - |
dc.subject.subarea | Electrical and Electronic Engineering | - |