lzyhha committed on
Commit 22bcee2 · verified · 1 Parent(s): 8bfd206

Update README.md

Files changed (1):
1. README.md +201 -3

README.md CHANGED
---
license: apache-2.0
datasets:
- VisualCloze/Graph200K
pipeline_tag: image-to-image
library_name: diffusers
base_model:
- black-forest-labs/FLUX.1-Fill-dev
tags:
- text-to-image
- image-to-image
- flux
- lora
- in-context-learning
- universal-image-generation
- ai-tools
- VisualCloze
- VisualClozePipeline
---

# VisualCloze: A Universal Image Generation Framework via Visual In-Context Learning (LoRA weights for <strong><span style="color:red">Diffusers</span></strong>)

<div align="center">

[[Paper](https://arxiv.org/abs/2504.07960)] &emsp; [[Project Page](https://visualcloze.github.io/)] &emsp; [[GitHub](https://github.com/lzyhha/VisualCloze)]

</div>

<div align="center">

[[🤗 <strong><span style="color:hotpink">Diffusers</span></strong> Implementation](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/visualcloze)]

</div>

<div align="center">

[[🤗 Online Demo](https://huggingface.co/spaces/VisualCloze/VisualCloze)] &emsp; [[🤗 Full Model Card](https://huggingface.co/VisualCloze/VisualClozePipeline-512)] &emsp; [[🤗 Dataset Card](https://huggingface.co/datasets/VisualCloze/Graph200K)]

</div>

![Examples](https://github.com/lzyhha/VisualCloze/raw/main/figures/seen.jpg)

If you find VisualCloze helpful, please consider starring ⭐ the [<strong><span style="color:hotpink">GitHub Repo</span></strong>](https://github.com/lzyhha/VisualCloze). Thanks!

## 📰 News
- [2025-5-15] 🤗🤗🤗 VisualCloze has been merged into the [<strong><span style="color:hotpink">official pipelines of diffusers</span></strong>](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/visualcloze).

## 🌠 Key Features

A universal image generation framework based on in-context learning.

1. Supports various in-domain tasks.
2. Generalizes to <strong><span style="color:hotpink">unseen tasks</span></strong> through in-context learning.
3. Unifies multiple tasks into one step, generating both the target image and intermediate results.
4. Supports reverse-engineering a set of conditions from a target image.

🔥 Examples are shown on the [project page](https://visualcloze.github.io/).

## 🔧 Installation

<strong><span style="color:hotpink">You can install the official</span></strong> [diffusers](https://github.com/huggingface/diffusers.git):

```bash
pip install git+https://github.com/huggingface/diffusers.git
```
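
As a quick sanity check (a minimal sketch; it assumes the installed diffusers build already ships the VisualCloze integration), you can verify that the pipeline class is importable:

```python
# If this import fails, the installed diffusers version predates the
# VisualCloze integration; reinstall from the main branch as shown above.
from diffusers import VisualClozePipeline

print(VisualClozePipeline.__name__)
```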

### 💻 Diffusers Usage

[![Huggingface VisualCloze](https://img.shields.io/static/v1?label=Demo&message=Huggingface%20Gradio&color=orange)](https://huggingface.co/spaces/VisualCloze/VisualCloze)

This model card contains the LoRA weights for use with diffusers, while the
[Full Model Card](https://huggingface.co/VisualCloze/VisualClozePipeline-512) provides the full weights.

While this model uses a `resolution` of 512, a model trained with a `resolution` of 384 is
released at [Full Model Card 384](https://huggingface.co/VisualCloze/VisualClozePipeline-384) and [LoRA Model Card 384](https://huggingface.co/VisualCloze/VisualClozePipeline-LoRA-384).
The `resolution` means that each image is resized to it before being
concatenated, to avoid out-of-memory errors. To generate high-resolution images, the SDEdit technique is used to upsample the generated results.
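
If you prefer the 384-resolution variant mentioned above, loading it is expected to follow the same pattern as the 512 examples below. This is only a sketch: the `weight_name` of the 384 LoRA is an assumption, so check the LoRA Model Card 384 for the actual filename.

```python
import torch
from diffusers import VisualClozePipeline

# Sketch: load the base FLUX.1-Fill-dev weights at the 384 training resolution.
pipe = VisualClozePipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", resolution=384, torch_dtype=torch.bfloat16
)
# Assumption: the 384 LoRA card follows the same naming scheme as the 512 one;
# verify the safetensors filename on the model page before using it.
pipe.load_lora_weights(
    "VisualCloze/VisualClozePipeline-LoRA-384",
    weight_name="visualcloze-lora-384.safetensors",
)
pipe.to("cuda")
```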

#### Example with Depth-to-Image:

<img src="./visualcloze_diffusers_example_depthtoimage.jpg" width="60%" height="50%" alt="Example with Depth-to-Image"/>

```python
import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image


# Load in-context images (make sure the paths are correct and accessible)
image_paths = [
    # in-context examples
    [
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/93bc1c43af2d6c91ac2fc966bf7725a2/93bc1c43af2d6c91ac2fc966bf7725a2_depth-anything-v2_Large.jpg'),
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/93bc1c43af2d6c91ac2fc966bf7725a2/93bc1c43af2d6c91ac2fc966bf7725a2.jpg'),
    ],
    # query with the target image
    [
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/79f2ee632f1be3ad64210a641c4e201b/79f2ee632f1be3ad64210a641c4e201b_depth-anything-v2_Large.jpg'),
        None,  # No image needed for the query in this case
    ],
]

# Task and content prompt
task_prompt = "Each row outlines a logical process, starting from [IMAGE1] gray-based depth map with detailed object contours, to achieve [IMAGE2] an image with flawless clarity."
content_prompt = """A serene portrait of a young woman with long dark hair, wearing a beige dress with intricate
gold embroidery, standing in a softly lit room. She holds a large bouquet of pale pink roses in a black box,
positioned in the center of the frame. The background features a tall green plant to the left and a framed artwork
on the wall to the right. A window on the left allows natural light to gently illuminate the scene.
The woman gazes down at the bouquet with a calm expression. Soft natural lighting, warm color palette,
high contrast, photorealistic, intimate, elegant, visually balanced, serene atmosphere."""

# Load the VisualClozePipeline
pipe = VisualClozePipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", resolution=512, torch_dtype=torch.bfloat16)
pipe.load_lora_weights('VisualCloze/VisualClozePipeline-LoRA-512', weight_name='visualcloze-lora-512.safetensors')
pipe.to("cuda")

# Run the pipeline
image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=1024,
    upsampling_height=1024,
    upsampling_strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

# Save the resulting image
image_result.save("visualcloze.png")
```
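
On GPUs where the full pipeline does not fit in memory, a standard diffusers option is model CPU offloading. The sketch below is a variant of the loading step above, not part of the original example; it requires the `accelerate` package and replaces the `pipe.to("cuda")` call.

```python
import torch
from diffusers import VisualClozePipeline

pipe = VisualClozePipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", resolution=512, torch_dtype=torch.bfloat16
)
pipe.load_lora_weights('VisualCloze/VisualClozePipeline-LoRA-512', weight_name='visualcloze-lora-512.safetensors')

# Keep sub-models on the CPU and move each one to the GPU only while it runs,
# instead of pipe.to("cuda"); requires the `accelerate` package.
pipe.enable_model_cpu_offload()
```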

#### Example with Virtual Try-On:

<img src="./visualcloze_diffusers_example_tryon.jpg" width="60%" height="50%" alt="Example with Virtual Try-On"/>

```python
import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image


# Load in-context images (make sure the paths are correct and accessible)
# The images are from the VITON-HD dataset at https://github.com/shadow2496/VITON-HD
image_paths = [
    # in-context examples
    [
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/tryon/00700_00.jpg'),
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/tryon/03673_00.jpg'),
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/tryon/00700_00_tryon_catvton_0.jpg'),
    ],
    # query with the target image
    [
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/tryon/00555_00.jpg'),
        load_image('https://github.com/lzyhha/VisualCloze/raw/main/examples/examples/tryon/12265_00.jpg'),
        None
    ],
]

# Task and content prompt
task_prompt = "Each row shows a virtual try-on process that aims to put [IMAGE2] the clothing onto [IMAGE1] the person, producing [IMAGE3] the person wearing the new clothing."
content_prompt = None

# Load the VisualClozePipeline
pipe = VisualClozePipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", resolution=512, torch_dtype=torch.bfloat16)
pipe.load_lora_weights('VisualCloze/VisualClozePipeline-LoRA-512', weight_name='visualcloze-lora-512.safetensors')
pipe.to("cuda")

# Run the pipeline
image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_height=1632,
    upsampling_width=1232,
    upsampling_strength=0.3,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

# Save the resulting image
image_result.save("visualcloze.png")
```
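
When the same LoRA is reused across many runs, the standard diffusers LoRA helpers can fuse the adapter into the base weights so it no longer has to be applied on the fly at every step. This is only a sketch and assumes `VisualClozePipeline` exposes the usual `fuse_lora`/`unfuse_lora` methods of the diffusers LoRA loader mixin.

```python
import torch
from diffusers import VisualClozePipeline

pipe = VisualClozePipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", resolution=512, torch_dtype=torch.bfloat16
)
pipe.load_lora_weights('VisualCloze/VisualClozePipeline-LoRA-512', weight_name='visualcloze-lora-512.safetensors')

# Assumption: the pipeline inherits the standard diffusers LoRA helpers.
pipe.fuse_lora()    # fold the adapter into the base weights once
pipe.to("cuda")

# ... call pipe(...) as in the examples above ...

pipe.unfuse_lora()  # restore the unmodified base weights when done
```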

### Citation

If you find VisualCloze useful for your research and applications, please cite using this BibTeX:

```bibtex
@article{li2025visualcloze,
  title={VisualCloze: A Universal Image Generation Framework via Visual In-Context Learning},
  author={Li, Zhong-Yu and Du, Ruoyi and Yan, Juncheng and Zhuo, Le and Li, Zhen and Gao, Peng and Ma, Zhanyu and Cheng, Ming-Ming},
  journal={arXiv preprint arXiv:2504.07960},
  year={2025}
}
```