Ajou University repository

DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification
Citations (SCOPUS): 0

DC Field: Value
dc.contributor.author: Yoon, Daegun
dc.contributor.author: Oh, Sangyoon
dc.date.issued: 2023-08-07
dc.identifier.uri: https://aurora.ajou.ac.kr/handle/2018.oak/36993
dc.identifier.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85179891167&origin=inward
dc.description.abstract: Gradient sparsification is a widely adopted solution for reducing the excessive communication traffic in distributed deep learning. However, most existing gradient sparsifiers have relatively poor scalability because of the considerable computational cost of gradient selection and/or increased communication traffic owing to gradient build-up. To address these challenges, we propose a novel gradient sparsification scheme, DEFT, that partitions the gradient selection task into subtasks and distributes them to workers. DEFT differs from existing sparsifiers, in which every worker selects gradients from among all gradients. Consequently, the computational cost can be reduced as the number of workers increases. Moreover, gradient build-up can be eliminated because DEFT allows workers to select gradients in partitions that are non-intersecting (between workers). Therefore, even if the number of workers increases, the communication traffic can be maintained as per the user requirement. To avoid losing the significance of gradient selection, DEFT selects more gradients in the layers that have a larger gradient norm than the other layers. Because every layer has a different computational load, DEFT allocates layers to workers using a bin-packing algorithm to maintain a balanced gradient-selection load between workers. In our empirical evaluation, DEFT shows a significant improvement in training performance in terms of gradient-selection speed over existing sparsifiers while achieving high convergence performance.
dc.description.sponsorship: The authors would like to thank the anonymous reviewers for their insightful feedback. This work was jointly supported by the Korea Institute of Science and Technology Information (KSC-2022-CRE-0406), BK21 FOUR program (NRF5199991014091), and Basic Science Research Program (2021R1F1A1062779) of National Research Foundation of Korea.
dc.language.iso: eng
dc.publisher: Association for Computing Machinery
dc.subject.mesh: Balanced loads
dc.subject.mesh: Bin packing algorithm
dc.subject.mesh: Computational costs
dc.subject.mesh: Computational loads
dc.subject.mesh: Distributed deep learning
dc.subject.mesh: Gradient sparsification
dc.subject.mesh: Sparsification
dc.subject.mesh: Subtask
dc.subject.mesh: User requirements
dc.subject.mesh: Workers'
dc.title: DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification
dc.type: Conference
dc.citation.conferenceDate: 2023.8.7. ~ 2023.8.10.
dc.citation.conferenceName: 52nd International Conference on Parallel Processing, ICPP 2023
dc.citation.edition: 52nd International Conference on Parallel Processing, ICPP 2023 - Main Conference Proceedings
dc.citation.startPage: 746
dc.citation.endPage: 755
dc.citation.title: ACM International Conference Proceeding Series
dc.identifier.bibliographicCitation: ACM International Conference Proceeding Series, pp.746-755
dc.identifier.doi: 10.1145/3605573.3605609
dc.identifier.scopusid: 2-s2.0-85179891167
dc.identifier.url: http://portal.acm.org/
dc.subject.keyword: distributed deep learning
dc.subject.keyword: gradient sparsification
dc.subject.keyword: scalability
dc.type.other: Conference Paper
dc.description.isoa: true
dc.subject.subarea: Software
dc.subject.subarea: Human-Computer Interaction
dc.subject.subarea: Computer Vision and Pattern Recognition
dc.subject.subarea: Computer Networks and Communications
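The abstract above names two mechanisms: giving each layer a gradient-selection budget proportional to its gradient norm, and assigning whole layers to workers with a bin-packing algorithm so that the selection load stays balanced and the workers' selections do not overlap. The following is a minimal Python/PyTorch sketch of those two ideas only, not the authors' implementation; the function names, the greedy least-loaded-worker packing, and the density value are illustrative assumptions.

import heapq
import torch

def layer_budgets(grads, density=0.01):
    # Split a global top-k budget across layers in proportion to each layer's
    # gradient norm, so layers with larger norms keep more gradients
    # (the 1% density is an illustrative choice, not from the paper).
    norms = [float(g.norm()) for g in grads]
    total_k = int(density * sum(g.numel() for g in grads))
    total_norm = sum(norms) or 1.0
    return [max(1, int(total_k * n / total_norm)) for n in norms]

def assign_layers(costs, num_workers):
    # Greedy bin packing: visit layers from most to least costly and place each
    # on the currently least-loaded worker to balance the selection load.
    heap = [(0.0, w, []) for w in range(num_workers)]  # (load, worker id, owned layers)
    heapq.heapify(heap)
    for layer_id, cost in sorted(enumerate(costs), key=lambda x: -x[1]):
        load, w, layers = heapq.heappop(heap)
        layers.append(layer_id)
        heapq.heappush(heap, (load + cost, w, layers))
    return {w: layers for _, w, layers in heap}

# Toy usage: three "layers" of gradients split across two workers. Each worker
# runs top-k only on the layers it owns, so the selected index sets are
# disjoint across workers.
grads = [torch.randn(1000), torch.randn(4000), torch.randn(250)]
budgets = layer_budgets(grads)
ownership = assign_layers([g.numel() for g in grads], num_workers=2)
selected = {
    layer: torch.topk(grads[layer].abs(), k=min(budgets[layer], grads[layer].numel()))
    for layers in ownership.values()
    for layer in layers
}

Because each worker selects only within the layers assigned to it, the partitions are non-intersecting, which is how the abstract describes eliminating gradient build-up as the number of workers grows.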

Related Researcher

Oh, Sangyoon (오상윤)
Department of Software and Computer Engineering
