DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Jaewon | - |
dc.contributor.author | Kim, Myung Jun | - |
dc.contributor.author | Shin, Hyunjung | - |
dc.date.issued | 2023-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36926 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85151511094&origin=inward | - |
dc.description.abstract | Autoencoders are widely used for nonlinear dimensionality reduction. However, determining the number of nodes in the autoencoder embedding space remains a challenging task. The number of nodes in the bottleneck layer, which holds the encoded representation, is typically estimated and set by the user. Therefore, to maintain embedding performance while reducing model complexity, an indicator that automatically selects the number of bottleneck nodes is needed. This study proposes a method for automatically estimating an adequate number of nodes in the bottleneck layer while training the model. The basic idea of the proposed method is to eliminate lazy nodes, which rarely affect model performance, based on the weight distribution of the bottleneck layer. Because the proposed method operates within the autoencoder's learning process, it also has the advantage of accelerating training. The proposed method showed similar or better classification accuracy. | - |
dc.description.sponsorship | ACKNOWLEDGMENT This research was supported by the BK21 FOUR program of the National Research Foundation of Korea funded by the Ministry of Education (NRF5199991014091), an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. S2022A068600023), a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C2003474), and the Ajou University research fund. | - |
dc.language.iso | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.subject.mesh | Auto encoders | - |
dc.subject.mesh | Dimension estimation | - |
dc.subject.mesh | Dimensionality reduction | - |
dc.subject.mesh | Embeddings | - |
dc.subject.mesh | Learning process | - |
dc.subject.mesh | Modeling performance | - |
dc.subject.mesh | Neural-networks | - |
dc.subject.mesh | Performance | - |
dc.subject.mesh | Performance based | - |
dc.subject.mesh | Weight distributions | - |
dc.title | Lazy Node-Dropping Autoencoder | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2023.2.13. ~ 2023.2.16. | - |
dc.citation.conferenceName | 2023 IEEE International Conference on Big Data and Smart Computing, BigComp 2023 | - |
dc.citation.edition | Proceedings - 2023 IEEE International Conference on Big Data and Smart Computing, BigComp 2023 | - |
dc.citation.endPage | 68 | - |
dc.citation.startPage | 64 | - |
dc.citation.title | Proceedings - 2023 IEEE International Conference on Big Data and Smart Computing, BigComp 2023 | - |
dc.identifier.bibliographicCitation | Proceedings - 2023 IEEE International Conference on Big Data and Smart Computing, BigComp 2023, pp.64-68 | - |
dc.identifier.doi | 10.1109/bigcomp57234.2023.00018 | - |
dc.identifier.scopusid | 2-s2.0-85151511094 | - |
dc.identifier.url | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=10066534 | - |
dc.subject.keyword | Autoencoder | - |
dc.subject.keyword | Dimension Estimation | - |
dc.subject.keyword | Dimensionality Reduction | - |
dc.subject.keyword | Neural Networks | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | false | - |
dc.subject.subarea | Artificial Intelligence | - |
dc.subject.subarea | Computer Science Applications | - |
dc.subject.subarea | Computer Vision and Pattern Recognition | - |
dc.subject.subarea | Information Systems | - |
dc.subject.subarea | Information Systems and Management | - |
dc.subject.subarea | Statistics, Probability and Uncertainty | - |
dc.subject.subarea | Health Informatics | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
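The abstract above describes a training-time rule that drops "lazy" bottleneck nodes based on the weight distribution of the bottleneck layer. The sketch below is a minimal illustration of that idea, not the authors' implementation: the architecture, the L2-norm node score, the mean-minus-two-standard-deviations threshold, and the pruning schedule are all assumptions made here for demonstration.

```python
# Minimal sketch (assumed details, not the paper's code): score each bottleneck
# node by the magnitude of its incoming weights and mask out nodes that fall
# far below the layer's weight distribution while training proceeds.
import torch
import torch.nn as nn

class MaskedAutoencoder(nn.Module):
    def __init__(self, in_dim=64, bottleneck=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, bottleneck)
        self.dec = nn.Linear(bottleneck, in_dim)
        # 1 = active node, 0 = dropped ("lazy") node
        self.register_buffer("mask", torch.ones(bottleneck))

    def forward(self, x):
        z = torch.relu(self.enc(x)) * self.mask  # lazy nodes contribute nothing
        return self.dec(z)

    def drop_lazy_nodes(self, n_std=2.0):
        # Score each bottleneck node by the L2 norm of its incoming weights;
        # nodes far below the mean of the weight-norm distribution are masked.
        # The threshold rule here is an illustrative assumption.
        with torch.no_grad():
            scores = self.enc.weight.norm(dim=1)          # one score per node
            thresh = scores.mean() - n_std * scores.std()
            self.mask *= (scores > thresh).float()

model = MaskedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 64)                                  # toy data

for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()
    if epoch % 10 == 9:                                   # prune periodically
        model.drop_lazy_nodes()

print("active bottleneck nodes:", int(model.mask.sum()))
```

Masking rather than physically removing nodes keeps the sketch short; an actual implementation could rebuild the bottleneck layer once the mask stabilizes, which is where the training-speed benefit mentioned in the abstract would come from.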