Deep Feature Prior Guided Face Deblurring
Citations (SCOPUS)
0

Publication Year
2022
Journal
Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, pp.884-893
Keyword
Computational Photography; Image and Video Synthesis
Mesh Keyword
Deblurring; Deblurring problems; Face landmarks; Facial shape; Images synthesis; Learn+; Local Texture; Shape priors; Two-stream; Video synthesis
All Science Classification Codes (ASJC)
Computer Vision and Pattern Recognition; Computer Science Applications
Abstract
Most recent face deblurring methods have focused on utilizing facial shape priors such as face landmarks and parsing maps. While these priors can effectively provide facial geometric cues, they lack the local texture details that serve as important clues for solving the face deblurring problem. To address this, we focus on estimating the deep features of a pre-trained face recognition network (e.g., the VGGFace network), which contain rich information about sharp faces, as a prior, and adopt a generative adversarial network (GAN) to learn it. To this end, we propose a deep feature prior guided network (DFPGnet) that restores facial details using the deep feature prior estimated from a blurred image. In our DFPGnet, the generator is divided into two streams: a prior estimation stream and a deblurring stream. Since the estimated deep features of the prior estimation stream are learned from the VGGFace network, which is trained for face recognition rather than for deblurring, we need to alleviate the discrepancy between the feature distributions of the two streams. Therefore, we present feature transform modules at the connecting points of the two streams. In addition, we propose a channel-attention feature discriminator and a prior loss, which encourage the generator to focus on the channels of the deep feature prior that are more important for deblurring during training. Experimental results show that our method achieves state-of-the-art performance both qualitatively and quantitatively.
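The abstract describes a two-stream generator in which an estimated deep feature prior guides the deblurring stream through feature transform modules. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the module names, channel sizes, network depths, and the scale/shift-style transform are all illustrative assumptions, and the prior stream would be supervised against VGGFace features of the sharp image during training.

import torch
import torch.nn as nn

class FeatureTransform(nn.Module):
    # Hypothetical feature transform module: modulates deblurring-stream
    # features with scale/shift maps predicted from the estimated prior,
    # to bridge the distribution gap between the two streams.
    def __init__(self, channels):
        super().__init__()
        self.scale = nn.Conv2d(channels, channels, 3, padding=1)
        self.shift = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat, prior):
        return feat * torch.sigmoid(self.scale(prior)) + self.shift(prior)

class TwoStreamGeneratorSketch(nn.Module):
    # Illustrative two-stream generator: the prior estimation stream
    # regresses a deep feature prior from the blurred input, and the
    # deblurring stream restores the image, fused with the prior at a
    # connecting point via the feature transform module.
    def __init__(self, channels=64):
        super().__init__()
        self.prior_stream = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.deblur_head = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.transform = FeatureTransform(channels)
        self.deblur_tail = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, blurred):
        prior = self.prior_stream(blurred)    # estimated deep feature prior
        feat = self.deblur_head(blurred)      # deblurring-stream features
        feat = self.transform(feat, prior)    # fuse the two streams
        return self.deblur_tail(feat), prior  # restored image and prior

if __name__ == "__main__":
    x = torch.randn(1, 3, 128, 128)           # dummy blurred face crop
    out, prior = TwoStreamGeneratorSketch()(x)
    print(out.shape, prior.shape)             # (1, 3, 128, 128), (1, 64, 128, 128)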
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/36841
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85126151674&origin=inward
DOI
https://doi.org/10.1109/wacv51458.2022.00096
Journal URL
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9706406
Type
Conference
Funding
This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1C1C1007446), and in part by the BK21 FOUR program of the National Research Foundation of Korea funded by the Ministry of Education (NRF5199991014091).

Related Researcher

Heo, Yong Seok (허용석)
Department of Electrical and Computer Engineering

File Download

  • There are no files associated with this item.