Citation Export
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jung, Sunghoon | - |
| dc.contributor.author | Heo, Yong Seok | - |
| dc.date.issued | 2025-02-01 | - |
| dc.identifier.issn | 2079-9292 | - |
| dc.identifier.uri | https://aurora.ajou.ac.kr/handle/2018.oak/38485 | - |
| dc.identifier.uri | https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85217625232&origin=inward | - |
| dc.description.abstract | Text-to-image generation aims to create visually compelling images aligned with input prompts, but challenges such as subject mixing and subject neglect, often caused by semantic leakage during the generation process, persist, particularly in multi-subject scenarios. To mitigate these issues, existing methods optimize attention maps in diffusion models using static loss functions at each time step, which often leads to suboptimal results because the varying characteristics of different diffusion stages are not sufficiently considered. To address this problem, we propose a novel framework that adaptively guides the attention maps by dividing the diffusion process into four intervals: initial, layout, shape, and refinement. We adaptively optimize attention maps using interval-specific strategies and a dynamic loss function. Additionally, we introduce a seed filtering method based on self-attention map analysis that detects semantic leakage and restarts the generation process with new noise seeds when necessary. Extensive experiments on various datasets demonstrate that our method achieves significant improvements in generating images aligned with input prompts, outperforming previous approaches both quantitatively and qualitatively. | - |
| dc.description.sponsorship | This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant 2022R1F1A1065702; and in part by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Artificial Intelligence Convergence Innovation Human Resources Development (IITP-2025-RS-2023-00255968) grant funded by the Korea government (MSIT). | - |
| dc.language.iso | eng | - |
| dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | - |
| dc.title | Temporal Adaptive Attention Map Guidance for Text-to-Image Diffusion Models | - |
| dc.type | Article | - |
| dc.citation.number | 3 | - |
| dc.citation.title | Electronics (Switzerland) | - |
| dc.citation.volume | 14 | - |
| dc.identifier.bibliographicCitation | Electronics (Switzerland), Vol.14 No.3 | - |
| dc.identifier.doi | 10.3390/electronics14030412 | - |
| dc.identifier.scopusid | 2-s2.0-85217625232 | - |
| dc.identifier.url | www.mdpi.com/journal/electronics | - |
| dc.subject.keyword | attention map-based diffusion optimization | - |
| dc.subject.keyword | semantic leakage | - |
| dc.subject.keyword | text-to-image generation | - |
| dc.type.other | Article | - |
| dc.identifier.pissn | 20799292 | - |
| dc.description.isoa | true | - |
| dc.subject.subarea | Control and Systems Engineering | - |
| dc.subject.subarea | Signal Processing | - |
| dc.subject.subarea | Hardware and Architecture | - |
| dc.subject.subarea | Computer Networks and Communications | - |
| dc.subject.subarea | Electrical and Electronic Engineering | - |
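
The dc.description.abstract field above outlines the method at a high level: the denoising trajectory is split into initial, layout, shape, and refinement intervals, and cross-attention maps are optimized with interval-specific strategies and a dynamic loss. The snippet below is a minimal, hypothetical PyTorch sketch of that idea only; the interval boundaries, the weights, and every function name (`interval_of`, `attention_loss`, `guided_step_loss`) are illustrative assumptions and do not come from the paper or its released code.

```python
# Conceptual sketch of interval-dependent attention-map guidance.
# NOT the authors' implementation; all names, boundaries, and weights
# below are hypothetical placeholders chosen for illustration.
import torch

# Assumed split of a 50-step sampler into the four intervals named in
# the abstract: initial, layout, shape, refinement.
INTERVALS = {
    "initial":    range(0, 5),
    "layout":     range(5, 20),
    "shape":      range(20, 35),
    "refinement": range(35, 50),
}

# Assumed per-interval weights standing in for a dynamic loss schedule.
WEIGHTS = {"initial": 0.0, "layout": 1.0, "shape": 0.5, "refinement": 0.1}


def interval_of(step: int) -> str:
    """Return the name of the interval a denoising step falls into."""
    for name, rng in INTERVALS.items():
        if step in rng:
            return name
    raise ValueError(f"step {step} is outside the sampling schedule")


def attention_loss(cross_attn: torch.Tensor, subject_token_ids: list) -> torch.Tensor:
    """Penalize subject tokens whose attention response is weak.

    cross_attn: (heads, pixels, tokens) cross-attention probabilities.
    A simplified stand-in for subject-neglect losses in the literature.
    """
    maps = cross_attn.mean(dim=0)[:, subject_token_ids]   # (pixels, n_subjects)
    return (1.0 - maps.max(dim=0).values).mean()


def guided_step_loss(step: int, cross_attn: torch.Tensor,
                     subject_token_ids: list) -> torch.Tensor:
    """Scale the attention loss by the weight of the current interval."""
    return WEIGHTS[interval_of(step)] * attention_loss(cross_attn, subject_token_ids)


# Toy usage: random attention maps for 8 heads, a 16x16 latent, 77 tokens.
attn = torch.rand(8, 16 * 16, 77).softmax(dim=-1).requires_grad_(True)
loss = guided_step_loss(step=10, cross_attn=attn, subject_token_ids=[2, 5])
loss.backward()  # here the gradient lands on the toy attention maps only
```

In an actual sampler, the gradient of such a loss with respect to the noisy latent would be used to update the latent at each step, and a separate self-attention-based check, as described in the abstract, would decide whether to discard the current seed and restart generation with a new one.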