Ajou University repository

Temporal Adaptive Attention Map Guidance for Text-to-Image Diffusion Models
Citations (SCOPUS): 1


DC Field: Value

dc.contributor.author: Jung, Sunghoon
dc.contributor.author: Heo, Yong Seok
dc.date.issued: 2025-02-01
dc.identifier.issn: 2079-9292
dc.identifier.uri: https://aurora.ajou.ac.kr/handle/2018.oak/38485
dc.identifier.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85217625232&origin=inward
dc.description.abstract: Text-to-image generation aims to create visually compelling images aligned with input prompts, but challenges such as subject mixing and subject neglect, often caused by semantic leakage during generation, persist, particularly in multi-subject scenarios. To mitigate these issues, existing methods optimize attention maps in diffusion models using static loss functions at each time step, which often yields suboptimal results because the varying characteristics of the diffusion stages are insufficiently considered. To address this problem, we propose a novel framework that adaptively guides attention maps by dividing the diffusion process into four intervals: initial, layout, shape, and refinement. We optimize attention maps using interval-specific strategies and a dynamic loss function. Additionally, we introduce a seed-filtering method based on self-attention map analysis that detects semantic leakage and, when necessary, restarts the generation process with a new noise seed. Extensive experiments on various datasets demonstrate that our method achieves significant improvements in generating images aligned with input prompts, outperforming previous approaches both quantitatively and qualitatively.
dc.description.sponsorship: This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant 2022R1F1A1065702; and in part by the Institute of Information & communications Technology Planning & Evaluation (IITP) under the Artificial Intelligence Convergence Innovation Human Resources Development (IITP-2025-RS-2023-00255968) grant funded by the Korea government (MSIT).
dc.language.iso: eng
dc.publisher: Multidisciplinary Digital Publishing Institute (MDPI)
dc.title: Temporal Adaptive Attention Map Guidance for Text-to-Image Diffusion Models
dc.type: Article
dc.citation.number: 3
dc.citation.title: Electronics (Switzerland)
dc.citation.volume: 14
dc.identifier.bibliographicCitation: Electronics (Switzerland), Vol.14 No.3
dc.identifier.doi: 10.3390/electronics14030412
dc.identifier.scopusid: 2-s2.0-85217625232
dc.identifier.url: www.mdpi.com/journal/electronics
dc.subject.keyword: attention map-based diffusion optimization
dc.subject.keyword: semantic leakage
dc.subject.keyword: text-to-image generation
dc.type.other: Article
dc.identifier.pissn: 20799292
dc.description.isoa: true
dc.subject.subarea: Control and Systems Engineering
dc.subject.subarea: Signal Processing
dc.subject.subarea: Hardware and Architecture
dc.subject.subarea: Computer Networks and Communications
dc.subject.subarea: Electrical and Electronic Engineering
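The abstract above describes guiding attention maps with interval-specific strategies across four diffusion phases. Below is a minimal, hypothetical sketch of that idea: mapping a denoising step to one of the four named intervals and looking up an interval-specific guidance weight. The boundaries, weights, and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of interval-adaptive guidance scheduling.
# Interval boundaries and weights below are illustrative only.

def diffusion_interval(t, total_steps=50):
    """Map denoising step t to one of the four intervals named in
    the abstract: initial, layout, shape, refinement."""
    frac = t / total_steps
    if frac < 0.1:
        return "initial"
    elif frac < 0.3:
        return "layout"
    elif frac < 0.7:
        return "shape"
    return "refinement"

# Assumed per-interval weights for an attention-map guidance loss.
INTERVAL_WEIGHTS = {
    "initial": 0.0,     # no guidance before any structure exists
    "layout": 1.0,      # strong guidance while subject layout forms
    "shape": 0.5,       # moderate guidance while shapes emerge
    "refinement": 0.1,  # light guidance during detail refinement
}

def guidance_weight(t, total_steps=50):
    """Interval-specific weight applied to the attention-map loss."""
    return INTERVAL_WEIGHTS[diffusion_interval(t, total_steps)]
```

In this sketch, the weight would scale a loss computed on cross-attention maps at each step, so the optimization pressure changes as denoising progresses rather than staying static.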

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Heo, Yong Seok (허용석)
Department of Electrical and Computer Engineering


File Download

There are no files associated with this item.