Ajou University repository

Continuous Facial Motion Deblurring
Citations (SCOPUS): 1

DC Field: Value

dc.contributor.author: Lee, Tae Bok
dc.contributor.author: Han, Sujy
dc.contributor.author: Heo, Yong Seok
dc.date.issued: 2022-01-01
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://dspace.ajou.ac.kr/dev/handle/2018.oak/32808
dc.description.abstract: We introduce a novel framework for continuous facial motion deblurring that restores the continuous sharp moments latent in a single motion-blurred face image via a moment control factor. Although a motion-blurred image is the accumulated signal of continuous sharp moments during the exposure time, most existing single-image deblurring approaches aim to restore a fixed number of frames using multiple networks and training stages. To address this problem, we propose a continuous facial motion deblurring network based on GAN (CFMD-GAN), a novel framework for restoring the continuous moments latent in a single motion-blurred face image with a single network and a single training stage. To stabilize network training, we train the generator to restore continuous moments in the order determined by our facial motion-based reordering (FMR) process, which utilizes domain-specific knowledge of the face. Moreover, we propose an auxiliary regressor that helps our generator produce more accurate images by estimating continuous sharp moments. Furthermore, we introduce a control-adaptive (ContAda) block that performs spatially deformable convolution and channel-wise attention as a function of the control factor. Extensive experiments on the 300VW dataset demonstrate that the proposed framework generates a varying number of continuous output frames by varying the moment control factor. Compared with recent single-to-single image deblurring networks trained on the same 300VW training set, the proposed method shows superior performance in restoring the central sharp frame in terms of perceptual metrics, including LPIPS, FID, and ArcFace identity distance. The proposed method also outperforms the existing single-to-video deblurring method in both qualitative and quantitative comparisons. In our experiments on the 300VW test set, the proposed framework reached 33.14 dB PSNR and 0.93 SSIM for the recovery of 7 sharp frames.
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.subject.mesh: AC-GAN
dc.subject.mesh: Continuous facial motion deblurring
dc.subject.mesh: Control factors
dc.subject.mesh: Control-adaptive block
dc.subject.mesh: Decoding
dc.subject.mesh: Face
dc.subject.mesh: Facial motions
dc.subject.mesh: Features extraction
dc.subject.mesh: Motion deblurring
dc.subject.mesh: Motion-blurred
dc.title: Continuous Facial Motion Deblurring
dc.type: Article
dc.citation.endPage: 76094
dc.citation.startPage: 76079
dc.citation.title: IEEE Access
dc.citation.volume: 10
dc.identifier.bibliographicCitation: IEEE Access, Vol.10, pp.76079-76094
dc.identifier.doi: 10.1109/access.2022.3190089
dc.identifier.scopusid: 2-s2.0-85134244800
dc.identifier.url: http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6287639
dc.subject.keyword: AC-GAN
dc.subject.keyword: Continuous facial motion deblurring
dc.subject.keyword: control-adaptive block
dc.description.isoa: true
dc.subject.subarea: Computer Science (all)
dc.subject.subarea: Materials Science (all)
dc.subject.subarea: Engineering (all)
dc.subject.subarea: Electrical and Electronic Engineering
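The abstract models a motion-blurred image as the accumulated signal of continuous sharp moments during the exposure, indexed by a moment control factor. The sketch below illustrates only that blur-formation model and the control-factor grid for recovering 7 frames (as in the 300VW experiments); the function names and NumPy setup are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def blur_from_moments(moments: np.ndarray) -> np.ndarray:
    """Model a blurred image as the temporal average of N sharp
    moments (shape (N, H, W)) captured during the exposure."""
    return moments.mean(axis=0)

def control_factors(n_frames: int) -> np.ndarray:
    """Evenly spaced moment control factors in [0, 1]; t = 0.5
    corresponds to the central sharp frame."""
    return np.linspace(0.0, 1.0, n_frames)

# Toy example: 7 sharp moments of a 4x4 "image".
rng = np.random.default_rng(0)
moments = rng.random((7, 4, 4))
blurred = blur_from_moments(moments)   # single blurred observation
ts = control_factors(7)                # factors fed to the generator
```

In the paper's setting, a single generator conditioned on such a factor t restores the corresponding sharp moment from the one blurred input, so varying t yields a varying number of output frames.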


Related Researcher

Heo, Yong Seok (허용석)
Department of Electrical and Computer Engineering

File Download

  • There are no files associated with this item.