Ajou University repository

Counting Guidance for High Fidelity Text-to-Image Synthesis
Citations (SCOPUS)
1

Publication Year
2025-01-01
Journal
Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025, pp.899-908
Keyword
diffusion models; generative models; text-to-image generation
Mesh Keyword
Counting networks; Diffusion model; Generative model; High quality; High-fidelity; Image diffusion; Image generations; Images synthesis; Performance; Text-to-image generation
All Science Classification Codes (ASJC)
Artificial Intelligence; Computer Science Applications; Computer Vision and Pattern Recognition; Human-Computer Interaction; Modeling and Simulation; Radiology, Nuclear Medicine and Imaging
Abstract
Recently, there have been significant improvements in the quality and performance of text-to-image generation, largely due to the impressive results attained by diffusion models. However, text-to-image diffusion models sometimes struggle to create high-fidelity content for the given input prompt. One specific issue is their difficulty in generating the precise number of objects specified in the text prompt. For example, when provided with the prompt 'five apples and ten lemons on a table,' images generated by diffusion models often contain an incorrect number of objects. In this paper, we present a method to improve diffusion models so that they accurately produce the correct object count based on the input prompt. We adopt a counting network that performs reference-less class-agnostic counting for any given image. We calculate the gradients of the counting network and refine the predicted noise for each step. To address the presence of multiple types of objects in the prompt, we utilize novel attention map guidance to obtain high-quality masks for each object. Finally, we guide the denoising process using the calculated gradients for each object. Through extensive experiments and evaluation, we demonstrate that the proposed method significantly enhances the fidelity of diffusion models with respect to object count.
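The guidance loop the abstract describes (compute the counting network's gradient, then refine the predicted noise at each denoising step) can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's method: `soft_count` is a toy differentiable counter (a linear patch scorer with sigmoids, whose sum acts as a soft object count), and `scale` is a hypothetical guidance weight; the actual paper uses a trained reference-less class-agnostic counting network and per-object attention-map masks.

```python
import numpy as np

def soft_count(x, w):
    # Toy stand-in for a differentiable counting network: each row of x
    # is a "patch", w scores it, and the sigmoid gives a soft presence
    # probability. The sum of probabilities is a soft object count.
    z = x @ w
    return 1.0 / (1.0 + np.exp(-z))

def count_loss_and_grad(x, w, target):
    # Squared error between the soft count and the prompt's target count,
    # plus its analytic gradient with respect to x.
    p = soft_count(x, w)
    err = p.sum() - target
    # d(sum_i p_i)/dx[i, j] = p_i * (1 - p_i) * w_j
    grad = err * (p * (1.0 - p))[:, None] * w[None, :]
    return 0.5 * err ** 2, grad

def counting_guided_noise(eps_pred, x, w, target, scale=0.1):
    # Classifier-guidance-style refinement: shift the predicted noise
    # along the counting-loss gradient so the denoising trajectory is
    # steered toward images whose count matches the prompt.
    _, grad = count_loss_and_grad(x, w, target)
    return eps_pred + scale * grad
```

As a sanity check, descending the counting loss drives the soft count toward the target, which is the behavior the guidance term relies on; in the paper this refinement is applied per object, restricted to that object's attention-derived mask.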
Language
eng
URI
https://aurora.ajou.ac.kr/handle/2018.oak/38564
https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105003642863&origin=inward
DOI
https://doi.org/10.1109/wacv61041.2025.00097
Journal URL
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=10943266
Type
Conference Paper
Funding
This work was supported in part by Institute of Information & communications Technology Planning & Evaluation (IITP) under the Artificial Intelligence Convergence Innovation Human Resources Development (IITP-2024-RS-2023-00255968) grant funded by the Korea government (MSIT) ITRC and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2024-2020-0-01461) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation).

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

KOO, HYUNG IL (구형일)
Department of Electrical and Computer Engineering

File Download

  • There are no files associated with this item.