Ajou University repository

ABDGAN: Arbitrary Time Blur Decomposition Using Critic-Guided TripleGAN
Citations (SCOPUS): 0

DC Field: Value
dc.contributor.author: Lee, Tae Bok
dc.contributor.author: Heo, Yong Seok
dc.date.issued: 2024-08-01
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://dspace.ajou.ac.kr/dev/handle/2018.oak/34378
dc.description.abstract: Recent studies have proposed methods for extracting latent sharp frames from a single blurred image. However, these methods still fall short of restoring satisfactory images. In addition, most existing methods are limited to decomposing a blurred image into sharp frames at a fixed frame rate. To address these problems, we present an Arbitrary Time Blur Decomposition Triple Generative Adversarial Network (ABDGAN) that restores sharp frames at flexible frame rates. Our framework plays a min–max game among a generator, a discriminator, and a time-code predictor. The generator serves as a time-conditional deblurring network, while the discriminator and the time-code predictor give the generator feedback on producing realistic, sharp images conditioned on the given time code. To provide adequate feedback to the generator, we propose a critic-guided (CG) loss computed through the collaboration of the discriminator and the time-code predictor. We also propose a pairwise order-consistency (POC) loss to ensure that each pixel in a predicted image consistently corresponds to the same ground-truth frame. Extensive experiments show that our method outperforms previously reported methods in both qualitative and quantitative evaluations. Compared to the best competitor, the proposed ABDGAN improves PSNR, SSIM, and LPIPS on the GoPro test set by (Formula presented.), (Formula presented.), and (Formula presented.), respectively. On the B-Aist++ test set, our method improves PSNR, SSIM, and LPIPS by (Formula presented.), (Formula presented.), and (Formula presented.), respectively, over the best competing method.
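The three-player min–max structure described in the abstract (a time-conditional generator, a discriminator, and a time-code predictor) can be sketched at a toy scale. The linear "networks", loss forms, and weights below are illustrative assumptions only, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three players; the paper uses deep
# networks, while these are toy linear maps for illustration only.
def generator(blurred, t, W):
    """Time-conditional 'deblurring': map a blurred vector plus a
    time code t in [0, 1] to an estimate of the sharp frame at t."""
    return W @ np.concatenate([blurred, [t]])

def discriminator(frame, v):
    """Real/fake score in (0, 1) via a logistic unit."""
    return 1.0 / (1.0 + np.exp(-v @ frame))

def time_predictor(frame, u):
    """Predict the time code of a frame, squashed to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-u @ frame))

def generator_loss(blurred, t, W, v, u):
    """Adversarial term plus a time-code regression term, echoing the
    discriminator/predictor feedback described in the abstract
    (weights and exact forms here are assumptions)."""
    fake = generator(blurred, t, W)
    adv = -np.log(discriminator(fake, v) + 1e-8)   # fool the discriminator
    tc = (time_predictor(fake, u) - t) ** 2        # match the requested time code
    return adv + tc

d = 8
blurred = rng.normal(size=d)
W = rng.normal(size=(d, d + 1)) * 0.1
v = rng.normal(size=d) * 0.1
u = rng.normal(size=d) * 0.1

loss = generator_loss(blurred, 0.5, W, v, u)
print(loss)
```

In this sketch the generator is trained to minimize the combined loss while the discriminator and time-code predictor are trained against it, which is the min–max game the abstract refers to; the critic-guided and pairwise order-consistency losses of the paper are not reproduced here.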
dc.description.sponsorship: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant 2022R1F1A1065702.
dc.language.iso: eng
dc.publisher: Multidisciplinary Digital Publishing Institute (MDPI)
dc.subject.mesh: Arbitrary time
dc.subject.mesh: Arbitrary time blur decomposition
dc.subject.mesh: Continuous motion
dc.subject.mesh: Continuous motion deblurring
dc.subject.mesh: Critic-guided loss
dc.subject.mesh: Image deblurring
dc.subject.mesh: Motion deblurring
dc.subject.mesh: Pairwise order-consistency loss
dc.subject.mesh: Single image deblurring
dc.subject.mesh: Single images
dc.subject.mesh: Triple generative adversarial network
dc.title: ABDGAN: Arbitrary Time Blur Decomposition Using Critic-Guided TripleGAN
dc.type: Article
dc.citation.title: Sensors
dc.citation.volume: 24
dc.identifier.bibliographicCitation: Sensors, Vol.24
dc.identifier.doi: 10.3390/s24154801
dc.identifier.pmid: 39123847
dc.identifier.scopusid: 2-s2.0-85200775910
dc.identifier.url: http://www.mdpi.com/journal/sensors
dc.subject.keyword: arbitrary time blur decomposition
dc.subject.keyword: continuous motion deblurring
dc.subject.keyword: critic-guided loss
dc.subject.keyword: pairwise order-consistency loss
dc.subject.keyword: single image deblurring
dc.subject.keyword: Triple Generative Adversarial Networks
dc.description.isoa: true
dc.subject.subarea: Analytical Chemistry
dc.subject.subarea: Information Systems
dc.subject.subarea: Atomic and Molecular Physics, and Optics
dc.subject.subarea: Biochemistry
dc.subject.subarea: Instrumentation
dc.subject.subarea: Electrical and Electronic Engineering

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Heo, Yong Seok (허용석)
Department of Electrical and Computer Engineering

File Download

  • There are no files associated with this item.