Ajou University repository

Inverse-Based Approach to Explaining and Visualizing Convolutional Neural Networks
Citations (SCOPUS): 11

DC Field: Value

dc.contributor.author: Kwon, Hyuk Jin
dc.contributor.author: Koo, Hyung Il
dc.contributor.author: Soh, Jae Woong
dc.contributor.author: Cho, Nam Ik
dc.date.issued: 2022-12-01
dc.identifier.uri: https://dspace.ajou.ac.kr/dev/handle/2018.oak/32184
dc.description.abstract: This article presents a new method for understanding and visualizing convolutional neural networks (CNNs). Most existing approaches to this problem focus on a global score and evaluate the pixelwise contribution of the inputs to that score. Although such approaches have succeeded on image classification tasks with well-defined global scores, the analysis of CNNs for multilabeled outputs or regression has not yet been considered in the literature. To address this problem, we propose a new inverse-based approach that computes the inverse of a feedforward pass to identify activations of interest in lower layers. We developed a layerwise inverse procedure based on two observations: 1) inverse results should have internal activations consistent with the original forward pass, and 2) a small amount of activation in inverse results is desirable for human interpretability. Experimental results show that the proposed method allows us to analyze CNNs for classification and regression in the same framework. We demonstrated that our method successfully finds attributions in the inputs for image classification, with performance comparable to state-of-the-art methods. To compare various methods, we developed a novel plot that shows the tradeoff between the amount of activation and the rate of class reidentification. In the case of regression, our method showed that conventional CNNs for single-image super-resolution overlook a portion of the frequency bands, which may result in performance degradation.
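Note: the sketch below is a minimal, hypothetical illustration (not the authors' released code) of the layerwise inverse idea summarized in the abstract: starting from an activation of interest in an upper layer, recover a lower-layer activation whose forward pass reproduces it (consistency with the original forward pass) while keeping the recovered activation small (sparsity, for human interpretability). The optimization-based formulation, function name, and hyperparameters are illustrative assumptions.

    import torch

    def layerwise_inverse(layer, target, lower_shape, sparsity_weight=1e-3,
                          steps=200, lr=0.1):
        # layer: a differentiable module (e.g., conv + ReLU) mapping the
        #        lower-layer activation to the upper-layer activation
        # target: the upper-layer activation of interest to be explained
        # lower_shape: shape of the lower-layer activation to be recovered
        x_hat = torch.zeros(lower_shape, requires_grad=True)
        optimizer = torch.optim.Adam([x_hat], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            # consistency: the inverse result should reproduce the forward pass
            consistency = torch.nn.functional.mse_loss(layer(x_hat), target)
            # sparsity: prefer a small amount of activation for interpretability
            sparsity = x_hat.abs().mean()
            (consistency + sparsity_weight * sparsity).backward()
            optimizer.step()
        return x_hat.detach()

Applied layer by layer from the output back toward the input, such a procedure yields the lower-layer activations (and ultimately input attributions) that explain the activation of interest.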
dc.description.sponsorship: This work was supported in part by the National Research Foundation of Korea (NRF) Grant through the Korean Government [Ministry of Science and ICT (MSIT)] under Grant 2021R1A2C2007220 and in part by Samsung Electronics Company Ltd.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject.mesh: Convolutional neural network
dc.subject.mesh: Image super resolutions
dc.subject.mesh: Image super-resolution
dc.subject.mesh: Images classification
dc.subject.mesh: Interpretable machine learning
dc.subject.mesh: Inverse approach
dc.subject.mesh: Machine-learning
dc.subject.mesh: Perturbation method
dc.subject.mesh: Superresolution
dc.subject.mesh: Task analysis
dc.subject.mesh: Humans
dc.subject.mesh: Neural Networks, Computer
dc.title: Inverse-Based Approach to Explaining and Visualizing Convolutional Neural Networks
dc.type: Article
dc.citation.endPage: 7329
dc.citation.startPage: 7318
dc.citation.title: IEEE Transactions on Neural Networks and Learning Systems
dc.citation.volume: 33
dc.identifier.bibliographicCitation: IEEE Transactions on Neural Networks and Learning Systems, Vol.33, pp.7318-7329
dc.identifier.doi: 10.1109/tnnls.2021.3084757
dc.identifier.pmid: 34138716
dc.identifier.scopusid: 2-s2.0-85112208470
dc.identifier.url: http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5962385
dc.subject.keyword: Convolutional neural networks (CNNs)
dc.subject.keyword: image classification
dc.subject.keyword: image super-resolution (SR)
dc.subject.keyword: interpretable machine learning
dc.subject.keyword: inverse approach
dc.description.isoa: false
dc.subject.subarea: Software
dc.subject.subarea: Computer Science Applications
dc.subject.subarea: Computer Networks and Communications
dc.subject.subarea: Artificial Intelligence


Related Researcher

Koo, Hyung Il (구형일)
Department of Electrical and Computer Engineering

File Download

• There are no files associated with this item.