Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Seungjun | - |
dc.contributor.author | Yu, Miri | - |
dc.contributor.author | Yoon, Daegun | - |
dc.contributor.author | Oh, Sangyoon | - |
dc.date.issued | 2023-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36969 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85169297175&origin=inward | - |
dc.description.abstract | Federated learning (FL) was proposed for training a deep neural network model on data from millions of users. The technique has attracted considerable attention owing to its privacy-preserving characteristic. However, two major challenges exist. The first is the limit on the number of simultaneously participating clients: as the number of clients grows, the single parameter server easily becomes a bottleneck and is prone to stragglers. The second is data heterogeneity, which adversely affects the accuracy of the global model. Because data must remain on user devices to preserve privacy, we cannot use data shuffling, which homogenizes training data in traditional distributed deep learning. We propose a client clustering and model aggregation method, CCFed, to increase the number of simultaneously participating clients and mitigate the data heterogeneity problem. CCFed improves learning performance by using set-partition modeling to distribute data evenly between clusters and mitigate the effect of a non-IID environment. Experiments show that CCFed achieves 2.7-14% higher accuracy than FedAvg while requiring approximately 50% fewer training rounds on benchmark datasets. | - |
dc.description.sponsorship | This research was supported by the Korea Institute of Science and Technology Information (KISTI) (P22010) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2022R1F1A1062779). | - |
dc.language.iso | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.subject.mesh | Client clustering | - |
dc.subject.mesh | Clusterings | - |
dc.subject.mesh | Data heterogeneity | - |
dc.subject.mesh | Federated learning | - |
dc.subject.mesh | Heterogeneity effects | - |
dc.subject.mesh | Hierarchical aggregation | - |
dc.subject.mesh | Neural network model | - |
dc.subject.mesh | Privacy preserving | - |
dc.subject.mesh | Single parameter | - |
dc.subject.mesh | User data | - |
dc.title | Can hierarchical client clustering mitigate the data heterogeneity effect in federated learning? | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2023.5.15. ~ 2023.5.19. | - |
dc.citation.conferenceName | 2023 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2023 | - |
dc.citation.edition | 2023 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2023 | - |
dc.citation.endPage | 808 | - |
dc.citation.startPage | 799 | - |
dc.citation.title | 2023 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2023 | - |
dc.identifier.bibliographicCitation | 2023 IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2023, pp.799-808 | - |
dc.identifier.doi | 10.1109/ipdpsw59300.2023.00134 | - |
dc.identifier.scopusid | 2-s2.0-85169297175 | - |
dc.identifier.url | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=10196463 | - |
dc.subject.keyword | client clustering | - |
dc.subject.keyword | data heterogeneity | - |
dc.subject.keyword | federated learning | - |
dc.subject.keyword | hierarchical aggregation | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | false | - |
dc.subject.subarea | Computer Networks and Communications | - |
dc.subject.subarea | Hardware and Architecture | - |
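The abstract describes two ideas: partitioning clients into clusters so that data is evenly distributed (set-partition modeling), and aggregating models hierarchically (within each cluster, then across clusters). The sketch below illustrates both with a greedy balanced partition and a two-level weighted average over scalar "models"; it is an illustrative stand-in, not the paper's actual CCFed algorithm, and all function names are hypothetical.

```python
def greedy_partition(client_sizes, num_clusters):
    """Assign clients to clusters so total data per cluster is balanced
    (a greedy stand-in for the paper's set-partition modeling)."""
    clusters = [[] for _ in range(num_clusters)]
    loads = [0] * num_clusters
    # Largest-first greedy: place each client on the currently lightest cluster.
    for cid, size in sorted(enumerate(client_sizes), key=lambda x: -x[1]):
        k = loads.index(min(loads))
        clusters[k].append(cid)
        loads[k] += size
    return clusters, loads

def hierarchical_aggregate(models, client_sizes, clusters):
    """Two-level FedAvg-style aggregation: weighted average within each
    cluster, then a weighted average of the cluster models."""
    cluster_models, cluster_sizes = [], []
    for members in clusters:
        total = sum(client_sizes[c] for c in members)
        avg = sum(models[c] * client_sizes[c] for c in members) / total
        cluster_models.append(avg)
        cluster_sizes.append(total)
    grand = sum(cluster_sizes)
    return sum(m * s for m, s in zip(cluster_models, cluster_sizes)) / grand
```

With size-weighted averaging at both levels, the hierarchical result equals the flat FedAvg average; the clustering's role (per the abstract) is to balance data across clusters so each cluster-level aggregate sees a less skewed, more IID-like mix of client data.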
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.