Ajou University repository

Personalizing Diffusion Inpainting Model with Text-Free Finetuning
  • 김범조 (Beomjo Kim)
Citations (SCOPUS): 0


dc.contributor.advisor: Kyung-Ah Sohn
dc.contributor.author: 김범조
dc.date.issued: 2024-02
dc.identifier.other: 33534
dc.identifier.uri: https://aurora.ajou.ac.kr/handle/2018.oak/38947
dc.description: Thesis (Master's)--Department of Artificial Intelligence, 2024. 2
dc.description.abstract: This thesis introduces a novel approach to subject-driven image generation, advancing the field by overcoming the limitations of traditional text-to-image diffusion models. Our method employs a model that generates images from reference images without the need for language-based prompts. By integrating our proposed visual detail preserving module, the model captures intricate visual details and textures of subjects, addressing the common challenge of overfitting associated with a limited number of training samples. We further refine the model's performance through a modified classifier-free guidance technique and feature concatenation, enabling the generation of images where subjects are naturally positioned and harmonized within diverse scene contexts. Quantitative assessments using CLIP and DINO scores, complemented by a user study, demonstrate our model's superiority in fidelity, editability, and overall quality of generated images. Our contributions not only show the potential of leveraging pre-trained models and visual patch embeddings in subject-driven editing but also highlight the balance between diversity and fidelity in image generation tasks. Keywords: Diffusion Model, Image Generation, Image Inpainting, Subject-Driven Generation, Image Manipulation
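The abstract names two standard building blocks that only the full text details: classifier-free guidance at inference and CLIP/DINO-based fidelity scoring. The sketch below illustrates the generic forms of both under assumed names (`unet` as an image-conditioned noise predictor, `encoder` as a frozen DINO- or CLIP-style image encoder, `ref_embedding` as the reference-image conditioning); it is a minimal illustration, not the thesis's modified guidance or its visual detail preserving module.

```python
import torch
import torch.nn.functional as F

def cfg_denoise_step(unet, x_t, t, ref_embedding, guidance_scale=7.5):
    # Unconditional branch: the reference conditioning is dropped (zeroed here,
    # an assumption; implementations differ in how they null the condition).
    eps_uncond = unet(x_t, t, torch.zeros_like(ref_embedding))
    # Conditional branch: noise prediction guided by the reference-image embedding.
    eps_cond = unet(x_t, t, ref_embedding)
    # Classifier-free guidance: extrapolate toward the conditional prediction.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def subject_fidelity_score(encoder, generated, reference):
    # CLIP/DINO-style score: cosine similarity between global image embeddings
    # of generated and reference images (higher means closer to the subject).
    with torch.no_grad():
        emb_gen = encoder(generated)   # shape: (batch, dim)
        emb_ref = encoder(reference)
    return F.cosine_similarity(emb_gen, emb_ref, dim=-1).mean().item()
```

Raising `guidance_scale` typically trades diversity for fidelity to the reference, the balance the abstract highlights.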
dc.description.tableofcontents:
1. Introduction
2. Related Works
2.1 Diffusion Model
2.2 Subject-Driven Generation
2.3 Controlling Pre-trained Diffusion Models
2.4 Diffusion Models Inference with Guidance
3. Method
3.1 Preliminaries
3.2 Training Phase
3.2.1 Feature Extraction
3.2.2 Feature Injection
3.3 Inference Phase
4. Experiments
4.1 Experiments Details
4.2 Comparable Results
4.2.1 Qualitative Result
4.2.2 Quantitative Result
4.3 Ablation Study
5. Conclusion
Discussions and Future works
Reference
dc.language.iso: eng
dc.publisher: The Graduate School, Ajou University
dc.rights: Ajou University theses are protected by copyright.
dc.title: Personalizing Diffusion Inpainting Model with Text-Free Finetuning
dc.type: Thesis
dc.contributor.affiliation: Graduate School, Ajou University
dc.contributor.alternativeName: BEOMJO KIM
dc.contributor.department: Department of Artificial Intelligence, Graduate School
dc.date.awarded: 2024-02
dc.description.degree: Master
dc.identifier.url: https://dcoll.ajou.ac.kr/dcollection/common/orgView/000000033534
dc.subject.keyword: Diffusion Model
dc.subject.keyword: Image Generation
dc.subject.keyword: Image Inpainting
dc.subject.keyword: Image Manipulation
dc.subject.keyword: Subject-Driven Generation

File Download

  • There are no files associated with this item.