Ajou University repository

Staleness-aware semi-asynchronous federated learning
  • 유미리 (Miri Yu)
Citations (SCOPUS): 0

DC Field | Value
dc.contributor.advisor | Sangyoon Oh
dc.contributor.author | 유미리 (Miri Yu)
dc.date.issued | 2024-02
dc.identifier.other | 33730
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/39203
dc.description | Thesis (Master's) -- Department of Artificial Intelligence, February 2024
dc.description.abstract | As attempts to train deep learning models on distributed personal data have increased, so has the importance of federated learning (FL). Synchronous and asynchronous protocols have been applied to overcome the core challenges of FL (i.e., statistical and system heterogeneity), but stragglers reduce training efficiency under each protocol, in terms of latency and accuracy respectively. To address the straggler problem, a semi-asynchronous protocol that combines the two can be applied to FL; however, effectively handling the staleness of local models remains difficult. We propose SASAFL to resolve the training inefficiency caused by staleness in semi-asynchronous FL. SASAFL enables stable training by considering the quality of the global model when synchronising the server and clients. In addition, it achieves high accuracy and low latency by adjusting the number of participating clients in response to changes in the global loss and by immediately processing clients that did not participate in the previous round. An evaluation was conducted under various conditions to verify the effectiveness of SASAFL. SASAFL achieved 19.69% higher accuracy than the baseline, 2.32 times faster round-to-accuracy, and 2.24 times faster latency-to-accuracy. Moreover, SASAFL always reached target accuracies that the baseline could not.
dc.description.tableofcontents |
  1 Introduction 1
  2 Background and related work 4
    2.1 Synchronous FL 4
    2.2 Asynchronous FL 5
    2.3 Semi-asynchronous FL 8
  3 Motivation 12
  4 SASAFL: Staleness Aware Semi Asynchronous Federated Learning 16
    4.1 Global model reception policy 20
      4.1.1 Various types of clients 20
      4.1.2 Details of global model reception policy 21
    4.2 Adjusting number of participating clients 23
      4.2.1 Details of adjusting number of participating clients 23
    4.3 The SASAFL protocol 25
  5 Experiment and evaluation 29
    5.1 Experiment setup 29
      5.1.1 Testbed 29
      5.1.2 Benchmark 30
      5.1.3 Model and dataset 30
      5.1.4 Training parameters 30
      5.1.5 Metrics 31
    5.2 Experiment results 31
      5.2.1 Lag tolerance 31
      5.2.2 Training curve 32
      5.2.3 Accuracy and latency performance 34
      5.2.4 Limitation of SASAFL 35
  6 Conclusion and future works 48
dc.language.iso | eng
dc.publisher | The Graduate School, Ajou University
dc.rights | Ajou University theses are protected by copyright.
dc.title | Staleness-aware semi-asynchronous federated learning
dc.type | Thesis
dc.contributor.affiliation | Ajou University Graduate School
dc.contributor.alternativeName | Miri Yu
dc.contributor.department | Department of Artificial Intelligence, Graduate School
dc.date.awarded | 2024-02
dc.description.degree | Master
dc.identifier.url | https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000033730
dc.subject.keyword | Federated Learning
dc.subject.keyword | Global loss
dc.subject.keyword | Semi-asynchronous
dc.subject.keyword | Staleness
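The two mechanisms summarised in the abstract above — down-weighting stale local updates during aggregation and adjusting the number of participating clients in response to changes in the global loss — can be illustrated with a generic sketch. This is not the thesis's actual SASAFL algorithm: the polynomial decay exponent `alpha`, the loss-comparison rule in `adjust_num_clients`, and all function names are illustrative assumptions commonly seen in semi-asynchronous FL.

```python
import numpy as np

def staleness_weight(staleness: int, alpha: float = 0.5) -> float:
    """Polynomial decay: the staler a local update, the less it contributes.
    alpha is an illustrative hyperparameter, not a value from the thesis."""
    return (1.0 + staleness) ** (-alpha)

def aggregate(updates):
    """Staleness-weighted average of local models.
    updates: list of (local_model: np.ndarray, staleness: int) pairs,
    where staleness counts how many global rounds the local model lags."""
    weights = np.array([staleness_weight(s) for _, s in updates])
    models = np.stack([m for m, _ in updates])
    # Broadcast each weight over its model's parameters, then normalise.
    return (weights[:, None] * models).sum(axis=0) / weights.sum()

def adjust_num_clients(num_clients, prev_loss, curr_loss, lo=2, hi=100):
    """Hypothetical adjustment rule: if the global loss stopped improving,
    involve more clients next round; if it is still falling, fewer clients
    suffice, which keeps per-round latency low."""
    if curr_loss >= prev_loss:
        return min(hi, num_clients + 1)
    return max(lo, num_clients - 1)
```

With zero staleness the aggregation reduces to a plain average of the local models; as staleness grows, an update's contribution shrinks smoothly rather than being discarded outright, which is the usual motivation for staleness-aware weighting in semi-asynchronous protocols.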

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

File Download

  • There are no files associated with this item.