Update README.md

README.md CHANGED

@@ -26,20 +26,156 @@ The model is created by Dongxu Li, Junnan Li, Steven C.H. Hoi.

## Uses

### Zero-Shot Subject-Driven Generation

```python
from diffusers.pipelines import BlipDiffusionPipeline
from diffusers.utils import load_image

blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained("ayushtues/blipdiffusion")
blip_diffusion_pipe.to("cuda")

cond_subject = ["dog"]
tgt_subject = ["dog"]
text_prompt_input = ["swimming underwater"]

cond_image = load_image("https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg")
num_output = 1

iter_seed = 88888
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"

output = blip_diffusion_pipe(
    text_prompt_input,
    cond_image,
    cond_subject,
    tgt_subject,
    guidance_scale=guidance_scale,
    num_inference_steps=num_inference_steps,
    neg_prompt=negative_prompt,
    height=512,
    width=512,
)
output[0][0].save("image.png")
```
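
Note that `iter_seed` above is not wired into the call. If reproducible outputs are needed, a seeded `torch.Generator` can be passed in the usual diffusers way. A minimal sketch reusing the variables from the example above; treating the `generator` argument as supported by this pipeline version is an assumption:

```python
import torch

# Assumption: the pipeline accepts `generator`, as other diffusers pipelines do.
generator = torch.Generator(device="cpu").manual_seed(iter_seed)

output = blip_diffusion_pipe(
    text_prompt_input,
    cond_image,
    cond_subject,
    tgt_subject,
    guidance_scale=guidance_scale,
    num_inference_steps=num_inference_steps,
    neg_prompt=negative_prompt,
    generator=generator,
    height=512,
    width=512,
)
```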

Input Image: <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" style="width:500px;"/>

Generated Image: <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog_underwater.png" style="width:500px;"/>

### Controlled Subject-Driven Generation (Canny Edge)
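
The ControlNet examples below additionally rely on the `controlnet_aux` package for the edge and scribble detectors (typically installed with `pip install controlnet-aux`, though the exact command may differ for your environment).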

```python
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import CannyDetector

blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained("ayushtues/blipdiffusion-controlnet")
blip_diffusion_pipe.to("cuda")

style_subject = ["flower"]  # subject that defines the style
tgt_subject = ["teapot"]  # subject to generate
text_prompt = ["on a marble table"]

# Structure condition: extract Canny edges (low/high thresholds 30 and 70) from the source image.
cldm_cond_image = load_image("https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg").resize((512, 512))
canny = CannyDetector()
cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil")
cldm_cond_image = [cldm_cond_image]

# Style reference image providing the subject appearance.
style_image = load_image("https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg")

num_output = 1
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"

output = blip_diffusion_pipe(
    text_prompt,
    style_image,
    cldm_cond_image,
    style_subject,
    tgt_subject,
    guidance_scale=guidance_scale,
    num_inference_steps=num_inference_steps,
    neg_prompt=negative_prompt,
    height=512,
    width=512,
)
output[0][0].save("image.png")
```

Input Style Image: <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>

Canny Edge Input (edges are extracted from this image): <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" style="width:500px;"/>

Generated Image: <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/canny_generated.png" style="width:500px;"/>

### Controlled Subject-Driven Generation (Scribble)

```python
from diffusers import ControlNetModel
from diffusers.pipelines import BlipDiffusionControlNetPipeline
from diffusers.utils import load_image
from controlnet_aux import HEDdetector

blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained("ayushtues/blipdiffusion-controlnet")
# Swap in the scribble ControlNet in place of the default one.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-scribble")
blip_diffusion_pipe.controlnet = controlnet
blip_diffusion_pipe.to("cuda")

style_subject = ["flower"]  # subject that defines the style
tgt_subject = ["bag"]  # subject to generate
text_prompt = ["on a table"]

# Structure condition: turn the reference photo into a scribble-like HED boundary map.
cldm_cond_image = load_image("https://huggingface.co/lllyasviel/sd-controlnet-scribble/resolve/main/images/bag.png").resize((512, 512))
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
cldm_cond_image = hed(cldm_cond_image)
cldm_cond_image = [cldm_cond_image]

# Style reference image providing the subject appearance.
style_image = load_image("https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg")

num_output = 1
iter_seed = 88888
guidance_scale = 7.5
num_inference_steps = 50
negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate"

output = blip_diffusion_pipe(
    text_prompt,
    style_image,
    cldm_cond_image,
    style_subject,
    tgt_subject,
    guidance_scale=guidance_scale,
    num_inference_steps=num_inference_steps,
    neg_prompt=negative_prompt,
    height=512,
    width=512,
)
output[0][0].save("image.png")
```

Input Style Image: <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" style="width:500px;"/>

Scribble Input: <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble.png" style="width:500px;"/>

Generated Image: <img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/scribble_output.png" style="width:500px;"/>

## Model Architecture

BLIP-Diffusion learns a **pre-trained subject representation**. Such a representation aligns with text embeddings while also encoding the subject appearance. This allows efficient fine-tuning of the model for high-fidelity subject-driven applications, such as text-to-image generation, editing, and style transfer.

To this end, the authors design a two-stage pre-training strategy to learn a generic subject representation. In the first pre-training stage, they perform multimodal representation learning, which forces BLIP-2 to produce text-aligned visual features based on the input image. In the second pre-training stage, they design a subject representation learning task, called prompted context generation, where the diffusion model learns to generate novel subject renditions based on the input visual features.

To achieve this, they curate pairs of input-target images with the same subject appearing in different contexts. Specifically, they synthesize input images by composing the subject with a random background. During pre-training, they feed the synthetic input image and the subject class label through BLIP-2 to obtain the multimodal embeddings as the subject representation. The subject representation is then combined with a text prompt to guide the generation of the target image.
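
As an illustration only (not the authors' actual data pipeline), the composition step described above amounts to pasting a segmented subject crop onto an unrelated background image; a minimal PIL sketch, where `subject_rgba` is assumed to be a subject crop with a transparent background:

```python
from PIL import Image

def compose_subject_on_background(subject_rgba: Image.Image, background: Image.Image) -> Image.Image:
    """Paste a segmented subject (RGBA) onto a random background to build a synthetic input image."""
    canvas = background.convert("RGB").resize(subject_rgba.size)
    # The alpha channel of the subject crop serves as the paste mask.
    canvas.paste(subject_rgba, (0, 0), mask=subject_rgba.split()[-1])
    return canvas
```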

|
| 170 |
+
|
| 171 |
+
The architecture is also compatible with established techniques built on top of the diffusion model, such as ControlNet.

They attach the U-Net of a pre-trained ControlNet to that of BLIP-Diffusion via residual connections. In this way, the model takes the input structure condition, such as edge maps and depth maps, into account in addition to the subject cues. Since the model inherits the architecture of the original latent diffusion model, they observe satisfying generations using off-the-shelf integration with a pre-trained ControlNet without further training.
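
Because the ControlNet branch is attached off the shelf, other pre-trained ControlNets can in principle be swapped in exactly as in the Scribble example above. A minimal sketch; the depth checkpoint name is a real repository, but treating it as a drop-in replacement here is an assumption:

```python
from diffusers import ControlNetModel
from diffusers.pipelines import BlipDiffusionControlNetPipeline

pipe = BlipDiffusionControlNetPipeline.from_pretrained("ayushtues/blipdiffusion-controlnet")
# Assumption: a depth ControlNet can be dropped in the same way as the scribble one above.
pipe.controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth")
pipe.to("cuda")
```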

<img src="https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/arch_controlnet.png" style="width:50%;"/>

## Citation
**BibTeX:**