Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jung, Soo Hyun | - |
dc.contributor.author | Lee, Tae Bok | - |
dc.contributor.author | Heo, Yong Seok | - |
dc.date.issued | 2022-01-01 | - |
dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/36841 | - |
dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85126151674&origin=inward | - |
dc.description.abstract | Most recent face deblurring methods have focused on utilizing facial shape priors such as face landmarks and parsing maps. While these priors effectively provide facial geometric cues, they are insufficient to capture the local texture details that serve as important clues for solving the face deblurring problem. To address this, we focus on estimating the deep features of pre-trained face recognition networks (e.g., the VGGFace network), which contain rich information about sharp faces, as a prior, and adopt a generative adversarial network (GAN) to learn it. To this end, we propose a deep feature prior guided network (DFPGnet) that restores facial details using the deep feature prior estimated from a blurred image. In our DFPGnet, the generator is divided into two streams: a prior estimation stream and a deblurring stream. Since the estimated deep features of the prior estimation stream are learned from the VGGFace network, which is trained for face recognition rather than for deblurring, we need to alleviate the discrepancy between the feature distributions of the two streams. Therefore, we present feature transform modules at the connecting points of the two streams. In addition, we propose a channel-attention feature discriminator and a prior loss, which encourage the generator to focus on the channels of the deep feature prior that matter most for deblurring during training. Experimental results show that our method achieves state-of-the-art performance both qualitatively and quantitatively. | - |
dc.description.sponsorship | This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1C1C1007446), and in part by the BK21 FOUR program of the National Research Foundation of Korea funded by the Ministry of Education (NRF5199991014091). | - |
dc.language.iso | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.subject.mesh | Deblurring | - |
dc.subject.mesh | Deblurring problems | - |
dc.subject.mesh | Face landmarks | - |
dc.subject.mesh | Facial shape | - |
dc.subject.mesh | Images synthesis | - |
dc.subject.mesh | Learn+ | - |
dc.subject.mesh | Local Texture | - |
dc.subject.mesh | Shape priors | - |
dc.subject.mesh | Two-stream | - |
dc.subject.mesh | Video synthesis | - |
dc.title | Deep Feature Prior Guided Face Deblurring | - |
dc.type | Conference | - |
dc.citation.conferenceDate | 2022.1.4. ~ 2022.1.8. | - |
dc.citation.conferenceName | 22nd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022 | - |
dc.citation.edition | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022 | - |
dc.citation.endPage | 893 | - |
dc.citation.startPage | 884 | - |
dc.citation.title | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022 | - |
dc.identifier.bibliographicCitation | Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, pp.884-893 | - |
dc.identifier.doi | 10.1109/wacv51458.2022.00096 | - |
dc.identifier.scopusid | 2-s2.0-85126151674 | - |
dc.identifier.url | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9706406 | - |
dc.subject.keyword | Computational Photography | - |
dc.subject.keyword | Image and Video Synthesis | - |
dc.type.other | Conference Paper | - |
dc.description.isoa | false | - |
dc.subject.subarea | Computer Vision and Pattern Recognition | - |
dc.subject.subarea | Computer Science Applications | - |
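The abstract describes feature transform modules that bridge the prior-estimation and deblurring streams by aligning their feature distributions. As a rough illustration only, the sketch below models such a connection as a channel-wise scale-and-shift (SFT-style) modulation; the function names, the affine form, and the fusion by addition are assumptions for this sketch, not the paper's actual modules.

```python
# Hypothetical sketch of a feature-transform connection between a
# prior-estimation stream and a deblurring stream. The transform is
# modeled here as simple channel-wise affine modulation (an assumed
# simplification, not DFPGnet's actual architecture).

def feature_transform(prior_feat, scale, shift):
    """Affine modulation: adapt prior features channel by channel."""
    return [p * s + b for p, s, b in zip(prior_feat, scale, shift)]

def fuse(deblur_feat, prior_feat, scale, shift):
    """Inject the transformed prior into the deblurring stream."""
    transformed = feature_transform(prior_feat, scale, shift)
    return [d + t for d, t in zip(deblur_feat, transformed)]

# Toy example with 4-channel feature vectors.
deblur = [0.5, 1.0, -0.2, 0.3]
prior = [1.0, 0.0, 2.0, -1.0]
out = fuse(deblur, prior, scale=[0.5] * 4, shift=[0.1] * 4)
```

The point of the modulation is that prior features learned for recognition need not share the deblurring stream's feature statistics, so a learned per-channel rescaling can reconcile the two before fusion.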
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.