Ajou University repository

Experimental Verification of AI Adversarial Attacks using Open Deep Learning Library
  • 정재한

Advisor
손태식
Affiliation
Graduate School, Ajou University
Department
Department of Computer Engineering, Graduate School
Publication Year
2019-08
Publisher
The Graduate School, Ajou University
Description
Master's thesis -- Graduate School, Ajou University: Department of Computer Engineering, August 2019
Alternative Abstract
Deep learning solves many problems by automatically learning from datasets. However, deep-learning models can be threatened by adversarial attacks. In this paper, we used image and network datasets with deep-learning classification models and experimentally verified that adversarial samples generated by a malicious attacker lower the models' classification accuracy. The common network dataset NSL-KDD and the common image dataset MNIST were used. Using the TensorFlow and PyTorch libraries, we built an autoencoder classification model and a convolutional neural network (CNN) classification model, then measured detection accuracy after injecting adversarial samples into them. Each deep-learning model was first constructed as a baseline, and the effect of the adversarial samples on its normal performance was then measured. The adversarial samples were generated with the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA). The adversarial samples reduced classification accuracy from 99% to 50%.
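
For reference, the FGSM technique named in the abstract can be sketched in a few lines of PyTorch, one of the two libraries the thesis uses. This is a minimal illustration under assumed conditions (a trained classifier `model`, MNIST inputs scaled to [0, 1], and an illustrative perturbation budget `epsilon`), not the author's actual experimental code:

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, images: torch.Tensor,
                labels: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
    """Craft FGSM adversarial samples: x_adv = x + epsilon * sign(grad_x loss)."""
    # Assumption: `model` is a trained classifier and `images` lie in [0, 1].
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the classification loss.
    adv_images = images + epsilon * images.grad.sign()
    # Clamp back to the valid [0, 1] pixel range for MNIST-style inputs.
    return adv_images.clamp(0.0, 1.0).detach()
```

Larger epsilon values make the perturbation more visible but degrade the classifier further; this accuracy drop (from 99% to 50% in the abstract's experiments) is the effect the thesis measures.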
Language
eng
URI
https://dspace.ajou.ac.kr/handle/2018.oak/15547
Type
Thesis


File Download

  • There are no files associated with this item.