Ajou University repository

Adversarial attack-based security vulnerability verification using deep learning library for multimedia video surveillance
SCOPUS Citations: 12


DC Field | Value | Language
dc.contributor.author | Jeong, Jae Han | -
dc.contributor.author | Kwon, Sungmoon | -
dc.contributor.author | Hong, Man Pyo | -
dc.contributor.author | Kwak, Jin | -
dc.contributor.author | Shon, Taeshik | -
dc.date.issued | 2020-06-01 | -
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/30569 | -
dc.description.abstract | Although deep learning has recently been employed in various fields, it is exposed to the risk of adversarial attacks. In this study, we experimentally verified that the classification accuracy of deep learning image classification models is lowered by adversarial samples generated by malicious attackers. We used the MNIST dataset, a representative image dataset, and the NSL-KDD dataset, a representative network dataset. We measured the detection accuracy by injecting adversarial samples into Autoencoder and Convolutional Neural Network (CNN) classification models built with the TensorFlow and PyTorch libraries. The adversarial samples were generated by transforming the MNIST and NSL-KDD test datasets with the Jacobian-based Saliency Map Attack (JSMA) and the Fast Gradient Sign Method (FGSM). By injecting these samples into the classification models and measuring the accuracy, we verified that the detection accuracy was reduced by a minimum of 21.82% and a maximum of 39.08%. (An illustrative FGSM sketch follows this record.) | -
dc.description.sponsorship | This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-2016-0-00304) supervised by the IITP (Institute for Information & communications Technology Promotion); this research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2018R1D1A1B07043349); this research was supported by an IITP grant funded by the Korean government (MSIT) (No. 2018-0-00336, Advanced Manufacturing Process Anomaly Detection to prevent the Smart Factory Operation Failure by Cyber Attacks); this work was supported by the Ajou University research fund | -
dc.language.iso | eng | -
dc.publisher | Springer | -
dc.subject.mesh | Adversarial attack | -
dc.subject.mesh | Auto encoders | -
dc.subject.mesh | MNIST | -
dc.subject.mesh | NSL-KDD | -
dc.subject.mesh | Security | -
dc.title | Adversarial attack-based security vulnerability verification using deep learning library for multimedia video surveillance | -
dc.type | Article | -
dc.citation.endPage | 16091 | -
dc.citation.startPage | 16077 | -
dc.citation.title | Multimedia Tools and Applications | -
dc.citation.volume | 79 | -
dc.identifier.bibliographicCitation | Multimedia Tools and Applications, Vol.79, pp.16077-16091 | -
dc.identifier.doi | 10.1007/s11042-019-7262-8 | -
dc.identifier.scopusid | 2-s2.0-85060797456 | -
dc.identifier.url | https://link.springer.com/journal/11042 | -
dc.subject.keyword | Adversarial attack | -
dc.subject.keyword | Autoencoder | -
dc.subject.keyword | CNN | -
dc.subject.keyword | Deep learning | -
dc.subject.keyword | MNIST | -
dc.subject.keyword | NSL-KDD | -
dc.subject.keyword | Security | -
dc.description.isoa | false | -
dc.subject.subarea | Software | -
dc.subject.subarea | Media Technology | -
dc.subject.subarea | Hardware and Architecture | -
dc.subject.subarea | Computer Networks and Communications | -
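
The abstract above outlines the evaluation procedure: build a CNN (or Autoencoder) classifier, craft adversarial samples with FGSM or JSMA, and compare detection accuracy on clean versus perturbed inputs. The sketch below is not the authors' code; it is a minimal, illustrative FGSM example in PyTorch under assumed details (the SimpleCNN architecture, the epsilon value of 0.2, and the random stand-in batch are hypothetical placeholders for a trained model and the real MNIST or NSL-KDD test data).

```python
# Minimal FGSM sketch in PyTorch (illustrative; not the paper's implementation).
# Assumptions: a toy CNN for 28x28 grayscale MNIST-style inputs, epsilon = 0.2,
# and a random stand-in batch in place of a real test set and trained weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCNN(nn.Module):
    """Toy CNN classifier for 28x28 grayscale images (10 classes)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 7x7
        return self.fc(x.flatten(1))

def fgsm_attack(model, images, labels, epsilon=0.2):
    """Return FGSM adversarial examples: x' = x + eps * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()        # keep pixels in [0, 1]

@torch.no_grad()
def accuracy(model, images, labels):
    """Fraction of samples classified correctly."""
    preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()

if __name__ == "__main__":
    model = SimpleCNN().eval()                       # assume weights are already trained
    images = torch.rand(64, 1, 28, 28)               # stand-in batch for MNIST test data
    labels = torch.randint(0, 10, (64,))
    adv_images = fgsm_attack(model, images, labels, epsilon=0.2)
    print("clean accuracy:", accuracy(model, images, labels))
    print("adversarial accuracy:", accuracy(model, adv_images, labels))
```

In practice the same comparison would be run over the full MNIST or NSL-KDD test set with a trained model, and a JSMA variant would replace the single-step sign perturbation with saliency-map-guided modification of individual features.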


Related Researcher

KWAK, JIN (곽진), Department of Cyber Security

File Download

  • There are no files associated with this item.