Ajou University repository

A Comparative Study on the Robustness of CNNs and SNNs against Adversarial Attacks Using Image Data Manipulation
  • 김규형
Citations (SCOPUS): 0

Advisor
조위덕
Affiliation
Graduate School, Ajou University
Department
Department of Knowledge Information Engineering, Graduate School
Publication Year
2024-02
Publisher
The Graduate School, Ajou University
Keyword
Adversarial Attack; CIFAR-10 Dataset; Convolutional Neural Network; Grad-CAM; Spike Activation Map; Spiking Neural Network
Description
Master's thesis -- Department of Knowledge Information Engineering, February 2024
Abstract
This study conducts a comparative analysis of the robustness of Convolutional Neural Networks (CNNs) and Spiking Neural Networks (SNNs) against adversarial attacks using the CIFAR-10 dataset. We employed various adversarial attack methods, including the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), Carlini & Wagner (C&W), and DeepFool. These attacks were tested by varying their hyperparameters. The study focuses on comparing the accuracy of SNNs and CNNs under different hyperparameter settings for each adversarial attack. Additionally, the results were visualized using Gradient-weighted Class Activation Mapping (Grad-CAM) for CNNs and Spike Activation Mapping (SAM) for SNNs, providing a comprehensive comparison of their performance under adversarial conditions.
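The thesis evaluates attacks such as FGSM on CNN and SNN image classifiers. To illustrate the core idea of FGSM (not the thesis's actual CIFAR-10 experiments), the sketch below applies the perturbation x_adv = x + ε · sign(∇_x L) to a toy, hypothetical linear classifier where the loss gradient can be written out analytically:

```python
import math

def sign(v):
    # Elementwise sign of a gradient vector.
    return [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in v]

def fgsm_perturb(x, grad, epsilon):
    # FGSM: x_adv = x + epsilon * sign(gradient of the loss w.r.t. x).
    return [xi + epsilon * s for xi, s in zip(x, sign(grad))]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Toy linear classifier with hypothetical weights, true label y = 1.
# Loss L = -log(sigmoid(w.x)), so dL/dx = (sigmoid(w.x) - 1) * w.
w = [1.0, -2.0, 0.5]
x = [0.2, 0.1, -0.3]
p = sigmoid(dot(w, x))                  # model's confidence in the true class
grad = [(p - 1.0) * wi for wi in w]     # analytic loss gradient w.r.t. x

x_adv = fgsm_perturb(x, grad, epsilon=0.1)
p_adv = sigmoid(dot(w, x_adv))          # confidence drops after the attack
```

Each input component moves by exactly ε in the direction that increases the loss, which is why FGSM's strength is controlled by the single hyperparameter ε that the thesis varies across attacks.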
Language
kor
URI
https://aurora.ajou.ac.kr/handle/2018.oak/38915
Journal URL
https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000033680

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

File Download

  • There are no files associated with this item.