Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Soohyun | - |
dc.contributor.author | Son, Seok Bin | - |
dc.contributor.author | Lee, Youn Kyu | - |
dc.contributor.author | Jung, Soyi | - |
dc.contributor.author | Kim, Joongheon | - |
dc.date.issued | 2023-12-01 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/33853 | - |
dc.description.abstract | In many deep neural network (DNN) applications, the difficulty of gathering high-quality data in industrial settings hinders the practical use of DNNs. Thus, the concept of transfer learning (TL) has emerged, which leverages the pretrained knowledge of a DNN built on large-scale datasets. Toward this TL objective, this paper proposes two-stage architectural fine-tuning, inspired by neural architecture search (NAS), to reduce cost and time while exploring an efficient DNN model. The first stage is mutation, which reduces search costs by exploiting a priori architectural information. The second stage is early-stopping, which further reduces NAS costs by terminating the search process partway through. Data-intensive experimental results verify that the proposed method outperforms the benchmarks. | - |
dc.description.sponsorship | This work was supported in part by the National Research Foundation of Korea (NRF-Korea) under Grant and in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) Grant through the Korea Government [Ministry of Science and Information and Communications Technology (MSIT)], Intelligent 6G Wireless Access System, under Grant. | - |
dc.language.iso | eng | - |
dc.publisher | John Wiley and Sons Inc | - |
dc.subject.mesh | Fine tuning | - |
dc.subject.mesh | High quality data | - |
dc.subject.mesh | Image processing | - |
dc.subject.mesh | NET architecture | - |
dc.subject.mesh | Neural architectures | - |
dc.subject.mesh | Neural net architecture | - |
dc.subject.mesh | Neural network application | - |
dc.subject.mesh | Practical use | - |
dc.subject.mesh | Search costs | - |
dc.subject.mesh | Transfer learning | - |
dc.title | Two-stage architectural fine-tuning for neural architecture search in efficient transfer learning | - |
dc.type | Article | - |
dc.citation.title | Electronics Letters | - |
dc.citation.volume | 59 | - |
dc.identifier.bibliographicCitation | Electronics Letters, Vol.59 | - |
dc.identifier.doi | 10.1049/ell2.13066 | - |
dc.identifier.scopusid | 2-s2.0-85180133284 | - |
dc.identifier.url | https://ietresearch.onlinelibrary.wiley.com/loi/1350911x | - |
dc.subject.keyword | image processing | - |
dc.subject.keyword | neural net architecture | - |
dc.subject.keyword | neural nets | - |
dc.description.isoa | true | - |
dc.subject.subarea | Electrical and Electronic Engineering | - |
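The abstract describes a two-stage search: a mutation stage that narrows the search space using a priori architectural information, followed by an early-stopping stage that terminates the search mid-computation. A minimal toy sketch of that control flow is below. It is not the paper's actual NAS implementation: the architecture encoding (a list of layer widths), the `mutable_idx` restriction standing in for a priori information, and the proxy `score` function are all illustrative assumptions.

```python
import random

def mutate(arch, mutable_idx, widths, rng):
    """Stage 1: mutate only the layers flagged by a priori info (assumed encoding)."""
    child = list(arch)
    i = rng.choice(mutable_idx)      # restrict search to mutable positions
    child[i] = rng.choice(widths)
    return child

def score(arch):
    # Toy proxy objective standing in for validation accuracy vs. model cost:
    # prefer architectures whose total width is close to 512 units.
    return -abs(sum(arch) - 512)

def two_stage_search(base_arch, mutable_idx, widths,
                     budget=200, patience=20, seed=0):
    """Stage 2: early-stop once `patience` candidates in a row fail to improve."""
    rng = random.Random(seed)
    best, best_s, stale = list(base_arch), score(base_arch), 0
    for _ in range(budget):
        cand = mutate(best, mutable_idx, widths, rng)
        s = score(cand)
        if s > best_s:
            best, best_s, stale = cand, s, 0
        else:
            stale += 1
            if stale >= patience:    # terminate the search mid-computation
                break
    return best, best_s

# Hypothetical base architecture; only layers 1 and 2 may be mutated.
best, s = two_stage_search([64, 128, 256], mutable_idx=[1, 2],
                           widths=[64, 128, 256, 512])
print(best, s)
```

The restriction to `mutable_idx` is what makes the first stage cheap: the search never evaluates mutations of layers the a priori information rules out, and the patience counter bounds how long an unproductive search is allowed to run.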
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.