Citation Export
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Tae Bok | - |
dc.contributor.author | Heo, Yong Seok | - |
dc.date.issued | 2024-08-01 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.uri | https://dspace.ajou.ac.kr/dev/handle/2018.oak/34378 | - |
dc.description.abstract | Recent studies have proposed methods for extracting latent sharp frames from a single blurred image. However, these methods still suffer from limitations in restoring satisfactory images. In addition, most existing methods are limited to decomposing a blurred image into sharp frames with a fixed frame rate. To address these problems, we present an Arbitrary Time Blur Decomposition Triple Generative Adversarial Network (ABDGAN) that restores sharp frames with flexible frame rates. Our framework plays a min–max game consisting of a generator, a discriminator, and a time-code predictor. The generator serves as a time-conditional deblurring network, while the discriminator and the time-code predictor provide feedback to the generator on producing a realistic and sharp image depending on the given time code. To provide adequate feedback for the generator, we propose a critic-guided (CG) loss through the collaboration of the discriminator and the time-code predictor. We also propose a pairwise order-consistency (POC) loss to ensure that each pixel in a predicted image consistently corresponds to the same ground-truth frame. Extensive experiments show that our method outperforms previously reported methods in both qualitative and quantitative evaluations. Compared to the best competitor, the proposed ABDGAN improves PSNR, SSIM, and LPIPS on the GoPro test set by (Formula presented.), (Formula presented.), and (Formula presented.), respectively. For the B-Aist++ test set, our method shows improvements of (Formula presented.), (Formula presented.), and (Formula presented.) in PSNR, SSIM, and LPIPS, respectively, compared to the best competitive method. | - |
dc.description.sponsorship | This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant 2022R1F1A1065702. | - |
dc.language.iso | eng | - |
dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | - |
dc.subject.mesh | Arbitrary time | - |
dc.subject.mesh | Arbitrary time blur decomposition | - |
dc.subject.mesh | Continuous motion | - |
dc.subject.mesh | Continuous motion deblurring | - |
dc.subject.mesh | Critic-guided loss | - |
dc.subject.mesh | Image deblurring | - |
dc.subject.mesh | Motion deblurring | - |
dc.subject.mesh | Pairwise order-consistency loss | - |
dc.subject.mesh | Single image deblurring | - |
dc.subject.mesh | Single images | - |
dc.subject.mesh | Triple generative adversarial network | - |
dc.title | ABDGAN: Arbitrary Time Blur Decomposition Using Critic-Guided TripleGAN | - |
dc.type | Article | - |
dc.citation.title | Sensors | - |
dc.citation.volume | 24 | - |
dc.identifier.bibliographicCitation | Sensors, Vol.24 | - |
dc.identifier.doi | 10.3390/s24154801 | - |
dc.identifier.pmid | 39123847 | - |
dc.identifier.scopusid | 2-s2.0-85200775910 | - |
dc.identifier.url | http://www.mdpi.com/journal/sensors | - |
dc.subject.keyword | arbitrary time blur decomposition | - |
dc.subject.keyword | continuous motion deblurring | - |
dc.subject.keyword | critic-guided loss | - |
dc.subject.keyword | pairwise order-consistency loss | - |
dc.subject.keyword | single image deblurring | - |
dc.subject.keyword | Triple Generative Adversarial Networks | - |
dc.description.isoa | true | - |
dc.subject.subarea | Analytical Chemistry | - |
dc.subject.subarea | Information Systems | - |
dc.subject.subarea | Atomic and Molecular Physics, and Optics | - |
dc.subject.subarea | Biochemistry | - |
dc.subject.subarea | Instrumentation | - |
dc.subject.subarea | Electrical and Electronic Engineering | - |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.