Ajou University repository

A Comparative Study on the Robustness of CNNs and SNNs against Adversarial Attacks via Image Data Manipulation
  • 김규형

dc.contributor.advisor: 조위덕
dc.contributor.author: 김규형
dc.date.issued: 2024-02
dc.identifier.other: 33680
dc.identifier.uri: https://aurora.ajou.ac.kr/handle/2018.oak/38915
dc.description: Master's thesis, Department of Knowledge Information Engineering, February 2024
dc.description.abstract: This study conducts a comparative analysis of the robustness of Convolutional Neural Networks (CNNs) and Spiking Neural Networks (SNNs) against adversarial attacks using the CIFAR-10 dataset. We employed various adversarial attack methods, including the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), Carlini & Wagner (C&W), and Deepfool. These attacks were tested while varying their hyperparameters. The study focuses on comparing the accuracy of SNNs and CNNs under different hyperparameter settings for each adversarial attack. Additionally, the results were visualized using Gradient-weighted Class Activation Mapping (Grad-CAM) for CNNs and the Spike Activation Map (SAM) for SNNs, providing a comprehensive comparison of their performance under adversarial conditions.
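For readers unfamiliar with the gradient-based attacks named in the abstract: FGSM perturbs an input x by epsilon * sign(grad_x J(theta, x, y)) in a single step, and PGD iterates smaller such steps while projecting back into an epsilon-ball around the clean input. The PyTorch sketch below is illustrative only; the function names, the cross-entropy loss, and the assumption that pixel values lie in [0, 1] are ours, not taken from the thesis.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Single-step FGSM: move x by epsilon in the direction of the sign
    of the loss gradient, then clamp back to the valid pixel range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def pgd_attack(model, x, y, epsilon, alpha=0.01, steps=10):
    """PGD: iterated FGSM steps of size alpha, each followed by a
    projection onto the L-infinity epsilon-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the epsilon-ball around the clean input
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Sweeping epsilon (and, for PGD, alpha and the step count) is exactly the kind of hyperparameter variation the abstract describes.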
dc.description.tableofcontents:
I. Introduction
  A. Research Background
  B. Research Objectives
II. Preliminaries
  A. Basic Concepts of Deep Learning and Neural Networks
  B. CNN (Convolutional Neural Network)
  C. SNN (Spiking Neural Network)
  D. Overview of Attack Scenarios (FGSM, PGD, C&W, Deepfool)
    1. FGSM (Fast Gradient Sign Method)
    2. PGD (Projected Gradient Descent)
    3. C&W (Carlini & Wagner)
    4. Deepfool
  E. Grad-CAM (Gradient-weighted Class Activation Mapping)
  F. SAM (Spike Activation Map)
III. Research Methodology
  A. Dataset
  B. Architecture Design of the CNN and SNN
    1. CNN (Convolutional Neural Network)
    2. SNN (Spiking Neural Network)
  C. Leaky Integrate-and-Fire (LIF) Model
  D. Surrogate Gradient Descent Methodology
  E. Hyperparameter Tuning via Bayesian Optimization
IV. Experiments and Results
  A. Training Process and Optimization Strategy
    1. CNN Training and Optimization
    2. SNN Training and Optimization
  B. Classification Accuracy by Attack Scenario
    1. FGSM
    2. PGD
    3. C&W
    4. Deepfool
  C. Visual Analysis with Grad-CAM and SAM
    1. Grad-CAM with CNN: (A) FGSM, (B) PGD, (C) C&W, (D) Deepfool
    2. SAM with SNN: (A) FGSM, (B) PGD, (C) C&W, (D) Deepfool
V. Discussion
  A. Interpretation of Results
    1. FGSM Accuracy
    2. PGD Accuracy
    3. C&W Accuracy
    4. Deepfool Accuracy
    5. FGSM Grad-CAM (CNN)
    6. PGD Grad-CAM (CNN)
    7. C&W Grad-CAM (CNN)
    8. Deepfool Grad-CAM (CNN)
    9. FGSM SAM (SNN)
    10. PGD SAM (SNN)
    11. C&W SAM (SNN)
    12. Deepfool SAM (SNN)
  B. Strengths and Weaknesses of SNNs
  C. Comparison with Other Studies
  D. Limitations and Directions for Further Research
    1. Limitations of the Study
    2. Directions for Further Research
VI. Conclusion
  A. Key Conclusions
  B. Closing Remarks
VII. References
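Sections III.C and III.D of the outline above cover the LIF neuron model and surrogate-gradient training that make the SNN trainable by backpropagation. As a hedged sketch of what those typically involve (the decay constant beta, the threshold, the soft reset, and the fast-sigmoid surrogate below are common choices, assumed here rather than taken from the thesis):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step on the forward pass; on the backward pass the
    non-differentiable step's derivative is replaced by a fast-sigmoid
    derivative, 1 / (1 + |x|)^2 -- a common surrogate-gradient choice."""
    @staticmethod
    def forward(ctx, mem_minus_thr):
        ctx.save_for_backward(mem_minus_thr)
        return (mem_minus_thr >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output / (1.0 + x.abs()) ** 2

class LIFNeuron(torch.nn.Module):
    """Leaky integrate-and-fire: the membrane potential decays by `beta`
    each timestep, integrates the input current, and emits a spike
    (with a soft reset) when it crosses `threshold`."""
    def __init__(self, beta=0.9, threshold=1.0):
        super().__init__()
        self.beta = beta
        self.threshold = threshold

    def forward(self, current, mem):
        mem = self.beta * mem + current                    # leak + integrate
        spk = SurrogateSpike.apply(mem - self.threshold)   # fire
        mem = mem - spk * self.threshold                   # soft reset
        return spk, mem
```

Because the surrogate keeps the whole network differentiable, the same gradient-based attacks sketched earlier can, in principle, be applied to the SNN as well, which is what makes the accuracy comparison in Section IV possible.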
dc.language.iso: kor
dc.publisher: The Graduate School, Ajou University
dc.rights: Theses of Ajou University are protected by copyright.
dc.title: A Comparative Study on the Robustness of CNNs and SNNs against Adversarial Attacks via Image Data Manipulation
dc.type: Thesis
dc.contributor.affiliation: The Graduate School, Ajou University
dc.contributor.alternativeName: Kim Kyuhyeong
dc.contributor.department: Department of Knowledge Information Engineering, Graduate School
dc.date.awarded: 2024-02
dc.description.degree: Master
dc.identifier.url: https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000033680
dc.subject.keyword: Adversarial Attack
dc.subject.keyword: CIFAR-10 Dataset
dc.subject.keyword: Convolutional Neural Network
dc.subject.keyword: Grad-CAM
dc.subject.keyword: Spike Activation Map
dc.subject.keyword: Spiking Neural Network

