| DC Field | Value | Language |
|---|---|---|
| dc.contributor.advisor | Wonjun Hwang | - |
| dc.contributor.author | 이준민 | - |
| dc.date.issued | 2024-08 | - |
| dc.identifier.other | 33856 | - |
| dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/39281 | - |
| dc.description | Master's thesis -- Department of Artificial Intelligence, 2024. 8 | - |
| dc.description.abstract | In our research, we introduce a novel approach to knowledge distillation aimed at enhancing the computational efficiency of 3D object detection within a teacher-student framework. The essence of our method lies in enabling the student model to distill knowledge from the teacher model, thereby reducing computational complexity while minimizing the performance gap between the two models throughout the training process. Traditionally, knowledge distillation techniques have primarily focused on improving the performance of classifiers and have often proven inapplicable or less effective for 3D object detection tasks. <br><br>To address this problem, we propose a method that uses an autoencoder to distill the teacher's fused information into the student's BEV features through knowledge distillation. This enables the student model to learn important but difficult-to-capture feature representations from the teacher model, allowing it to learn effectively and efficiently. Moreover, we introduce a training strategy that not only reduces the parameter count of the student network but also improves its performance compared to existing models. This dual objective of parameter reduction and performance improvement is achieved through careful design choices and optimization techniques, ensuring that the student model achieves competitive results with fewer computational resources. To validate the efficacy of the proposed methodology, we conduct comprehensive experiments on the nuScenes dataset, a widely used benchmark for 3D object detection. Our experiments are based on the ResNet [16] architecture, which serves as the backbone for both the teacher and student networks. Through rigorous experimentation and evaluation, we demonstrate the effectiveness and practical applicability of our approach in real-world object detection tasks. (An illustrative sketch of the distillation idea follows this record.) | - |
| dc.description.tableofcontents | Ⅰ. Introduction 1 <br>Ⅱ. Related Work 5 <br>Ⅲ. Network Overview 8 <br> Ⅰ. Framework Overview 8 <br> Ⅱ. Proposed Method 9 <br>Ⅳ. Experimental Results and Discussion 14 <br> Ⅰ. Implementation Details 14 <br> Ⅱ. Datasets 15 <br> Ⅲ. Evaluation Metrics 16 <br> Ⅳ. Comparative Approaches 17 <br> Ⅴ. Fusion and BEV Knowledge Distillation Comparison 18 <br> Ⅵ. Ablation Study: Comparison of L1, L2, and KL-Divergence 19 <br>Ⅴ. Conclusion 21 <br>Ⅵ. References 24 | - |
| dc.language.iso | eng | - |
| dc.publisher | The Graduate School, Ajou University | - |
| dc.rights | Theses of Ajou University are protected by copyright. | - |
| dc.title | Sensor Fusion based AutoEncoder Feature Distillation for 3D Object Detection | - |
| dc.title.alternative | 센서퓨전 기반 3D 객체 검출을 위한 특징맵 지식 증류 연구 | - |
| dc.type | Thesis | - |
| dc.contributor.affiliation | The Graduate School, Ajou University | - |
| dc.contributor.alternativeName | 이준민 | - |
| dc.contributor.department | Department of Artificial Intelligence, Graduate School | - |
| dc.date.awarded | 2024-08 | - |
| dc.description.degree | Master | - |
| dc.identifier.url | https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000033856 | - |
| dc.subject.keyword | Capacity gap | - |
| dc.subject.keyword | Knowledge Distillation | - |
| dc.subject.keyword | Representation ability | - |
| dc.subject.keyword | Sensor Fusion | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
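
The abstract above describes using an autoencoder to distill the teacher's fused (camera-LiDAR) information into the student's BEV features, and the table of contents lists an ablation comparing L1, L2, and KL-divergence losses. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the thesis's actual implementation: the module names, channel sizes, BEV resolution, and the choice to compare features in the autoencoder's latent space are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BEVFeatureAutoencoder(nn.Module):
    """Hypothetical autoencoder that compresses fused teacher BEV features
    into a small latent map and reconstructs them (names/sizes are assumptions)."""

    def __init__(self, in_channels: int = 256, latent_channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, latent_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(latent_channels, in_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor):
        latent = self.encoder(x)
        recon = self.decoder(latent)
        return recon, latent


def bev_distillation_loss(student_bev, teacher_bev, autoencoder, mode="l2"):
    """Align student BEV features with the teacher's fused BEV features in the
    autoencoder's latent space; `mode` mirrors the L1 / L2 / KL-divergence ablation."""
    with torch.no_grad():
        _, teacher_latent = autoencoder(teacher_bev)  # teacher side carries no gradient
    _, student_latent = autoencoder(student_bev)      # gradients flow back to the student

    if mode == "l1":
        return F.l1_loss(student_latent, teacher_latent)
    if mode == "l2":
        return F.mse_loss(student_latent, teacher_latent)
    if mode == "kl":
        # Treat the channel activations at each BEV location as a distribution.
        s = F.log_softmax(student_latent.flatten(2), dim=1)
        t = F.softmax(teacher_latent.flatten(2), dim=1)
        return F.kl_div(s, t, reduction="batchmean")
    raise ValueError(f"unknown distillation mode: {mode}")


if __name__ == "__main__":
    # Toy shapes: batch of 2, 256-channel BEV maps at 128x128 resolution (assumed).
    autoencoder = BEVFeatureAutoencoder(in_channels=256, latent_channels=64)
    teacher_bev = torch.randn(2, 256, 128, 128)
    student_bev = torch.randn(2, 256, 128, 128)
    print(bev_distillation_loss(student_bev, teacher_bev, autoencoder, mode="l2"))
```

In a sketch like this, the autoencoder would typically be trained to reconstruct the teacher's fused BEV features, and the `mode` argument mirrors the L1/L2/KL comparison listed in the ablation section; the thesis's actual loss placement, weighting, and autoencoder design may differ.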