Ajou University repository

DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification
Citations (SCOPUS)
0
Publication Year
2023-08-07
Journal
ACM International Conference Proceeding Series
Publisher
Association for Computing Machinery
Citation
ACM International Conference Proceeding Series, pp.746-755
Keyword
distributed deep learning; gradient sparsification; scalability
Mesh Keyword
Balanced loads; Bin packing algorithm; Computational costs; Computational loads; Distributed deep learning; Gradient sparsification; Sparsification; Subtask; User requirements; Workers
All Science Classification Codes (ASJC)
Software; Human-Computer Interaction; Computer Vision and Pattern Recognition; Computer Networks and Communications
Abstract
Gradient sparsification is a widely adopted solution for reducing the excessive communication traffic in distributed deep learning. However, most existing gradient sparsifiers scale poorly because of the considerable computational cost of gradient selection and/or the increased communication traffic caused by gradient build-up. To address these challenges, we propose a novel gradient sparsification scheme, DEFT, that partitions the gradient selection task into subtasks and distributes them to workers. This differs from existing sparsifiers, in which every worker selects from the entire set of gradients. Consequently, the computational cost decreases as the number of workers increases. Moreover, gradient build-up is eliminated because DEFT lets workers select gradients from partitions that do not intersect between workers. Therefore, even as the number of workers grows, the communication traffic can be kept at the level the user requires. To avoid losing significant gradients, DEFT selects more gradients in layers with larger gradient norms than in the other layers. Because every layer imposes a different computational load, DEFT allocates layers to workers using a bin-packing algorithm to keep the gradient selection load balanced across workers. In our empirical evaluation, DEFT shows a significant improvement in gradient selection speed over existing sparsifiers while achieving high convergence performance.
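To make the two mechanisms mentioned in the abstract concrete (norm-proportional per-layer selection budgets, and bin-packing of layers onto workers so that each worker only scans its own non-intersecting partition), the following is a minimal PyTorch-style sketch. It is not the authors' implementation: the function names, the greedy longest-processing-time packing heuristic, and the example sizes are hypothetical assumptions made for illustration only.

    # Illustrative sketch, not the DEFT implementation from the paper.
    import torch

    def norm_proportional_budgets(grads, density):
        # Give each layer a top-k budget roughly proportional to its gradient norm.
        norms = torch.stack([g.norm() for g in grads])
        total_k = int(density * sum(g.numel() for g in grads))
        weights = norms / norms.sum()
        return [min(g.numel(), max(1, int(w.item() * total_k)))
                for g, w in zip(grads, weights)]

    def bin_pack_layers(costs, num_workers):
        # Greedy bin packing (assumed heuristic): place each layer, largest
        # cost first, onto the currently least-loaded worker.
        loads = [0.0] * num_workers
        assignment = [[] for _ in range(num_workers)]
        for layer_id in sorted(range(len(costs)), key=lambda i: -costs[i]):
            w = min(range(num_workers), key=lambda j: loads[j])
            assignment[w].append(layer_id)
            loads[w] += costs[layer_id]
        return assignment

    def select_sparse_gradients(grads, my_layers, budgets):
        # Each worker runs top-k only on its own layer partition, so selected
        # indices never overlap across workers (no gradient build-up).
        selected = {}
        for i in my_layers:
            flat = grads[i].flatten()
            _, idx = torch.topk(flat.abs(), budgets[i])
            selected[i] = (idx, flat[idx])
        return selected

    # Example: 4 layers of gradients, 2 workers, keep roughly 1% of all gradients.
    grads = [torch.randn(n) for n in (1000, 4000, 2000, 500)]
    budgets = norm_proportional_budgets(grads, density=0.01)
    assignment = bin_pack_layers([g.numel() for g in grads], num_workers=2)
    sparse_on_worker0 = select_sparse_gradients(grads, assignment[0], budgets)

In this sketch each worker performs top-k selection only on its assigned layers, so the union of the workers' selections stays within the overall density budget without duplicate indices, which corresponds to the elimination of gradient build-up described in the abstract.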
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/36993
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85179891167&origin=inward
DOI
https://doi.org/10.1145/3605573.3605609
Journal URL
http://portal.acm.org/
Type
Conference
Funding
The authors would like to thank the anonymous reviewers for their insightful feedback. This work was jointly supported by the Korea Institute of Science and Technology Information (KSC-2022-CRE-0406), BK21 FOUR program (NRF5199991014091), and Basic Science Research Program (2021R1F1A1062779) of National Research Foundation of Korea.


Related Researcher

Oh, Sangyoon (오상윤)
Department of Software and Computer Engineering
