Ajou University repository

Design of Knowledge Distillation using Multiple Assistants for Large Gap between Teacher and Student
  • 손원철
Citations (SCOPUS): 0

Advisor
황원준
Affiliation
Graduate School, Ajou University
Department
Department of Artificial Intelligence, Graduate School
Publication Year
2021-02
Publisher
The Graduate School, Ajou University
Keyword
computer vision; deep learning; model compression; model optimization; object classification
Description
Thesis (Master's) -- Graduate School, Ajou University: Department of Artificial Intelligence, February 2021
Alternative Abstract
With the success of deep neural networks, knowledge distillation, which guides the learning of a small student network from a large teacher network, is being actively studied for model compression and transfer learning. However, few studies have addressed the poor learning of the student network when the student and teacher model sizes differ significantly. In this paper, we propose a densely guided knowledge distillation using multiple teacher assistants that gradually decrease in model size to efficiently bridge the gap between the teacher and student networks. To stimulate more efficient learning of the student network, we iteratively guide each teacher assistant to every smaller teacher assistant. Specifically, when teaching a smaller teacher assistant at the next step, the larger teacher assistants from the previous steps are used together with the teacher network. Moreover, we design stochastic teaching, in which the teacher or some teacher assistants are randomly dropped for each mini-batch. This acts as a regularizer that improves the teaching efficiency for the student network; thus, the student can always learn salient distilled knowledge from multiple sources. Additionally, there is demand for on-the-fly computational systems with low power requirements, such as systems-on-chip and embedded devices. We revise a parallel teacher assistant knowledge distillation as a way to use convolutional neural networks on such on-the-fly systems, where the student and teacher use 1 × N and N × N shaped filters, respectively. We verified the effectiveness of the proposed method on a classification task using CIFAR-10, CIFAR-100, SVHN, and Tiny ImageNet. We also achieved significant performance improvements with various backbone architectures such as ResNet, WideResNet, and VGG.
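The core idea described above -- distilling from the teacher plus all larger teacher assistants, while randomly dropping some of them per mini-batch -- can be sketched as a distillation loss. This is a minimal, framework-free illustration, not the thesis's actual implementation: the function names (`dgkd_loss`), the temperature `T = 4.0`, and the drop probability `drop_p` are assumptions for illustration, and real training would use tensor batches rather than single logit lists.

```python
import math
import random

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dgkd_loss(student_logits, guide_logits_list, T=4.0, drop_p=0.5, rng=random):
    """Densely guided distillation loss (sketch): the student matches the
    softened outputs of the teacher and every larger assistant in
    guide_logits_list; each guide is independently dropped with probability
    drop_p per mini-batch (stochastic teaching)."""
    kept = [g for g in guide_logits_list if rng.random() > drop_p]
    if not kept:  # always keep at least one guide so the student has a signal
        kept = [rng.choice(guide_logits_list)]
    p_student = softmax(student_logits, T)
    # Average the KL terms over the surviving guides.
    return sum(kl_div(softmax(g, T), p_student) for g in kept) / len(kept)

# Hypothetical usage: one teacher, two teacher assistants, one student.
teacher = [2.0, 0.5, -1.0]
assistants = [[1.8, 0.6, -0.9], [1.5, 0.7, -0.8]]
student = [1.0, 0.8, -0.5]
loss = dgkd_loss(student, [teacher] + assistants, rng=random.Random(0))
```

In practice this distillation term would be combined with the usual cross-entropy loss on ground-truth labels, and the random dropping is what regularizes the student against over-relying on any single guide.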
Language
eng
URI
https://dspace.ajou.ac.kr/handle/2018.oak/20013
Fulltext

Type
Thesis

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

File Download

  • There are no files associated with this item.