IF |
Overview |
DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. |
The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules:
Stage 1: a base model that generates a 64x64 px image from the text prompt,
Stage 2: a 64x64 px => 256x256 px super-resolution model, and
Stage 3: a 256x256 px => 1024x1024 px super-resolution model
Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, |
which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. |
Stage 3 is Stability’s x4 Upscaling model. |
The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. |
Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. |
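As a quick sanity check of the cascade described above, each super-resolution stage scales the output by a factor of 4 (64 -> 256 -> 1024). A minimal sketch (the helper function is illustrative, not part of the model code):

```python
# Each super-resolution stage upscales by 4x: 64 -> 256 -> 1024.
def cascade_resolutions(base=64, stages=2, factor=4):
    """Return the output resolution after the base model and each super-resolution stage."""
    sizes = [base]
    for _ in range(stages):
        sizes.append(sizes[-1] * factor)
    return sizes

print(cascade_resolutions())  # [64, 256, 1024]
```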
Usage |
Before you can use IF, you need to accept its usage conditions. To do so: |
Make sure to have a Hugging Face account and be logged in
Accept the license on the model card of DeepFloyd/IF-I-IF-v1.0 and DeepFloyd/IF-II-L-v1.0 |
Make sure to log in locally. Install huggingface_hub:
pip install huggingface_hub --upgrade |
Run the login function in a Python shell:
from huggingface_hub import login |
login() |
and enter your Hugging Face Hub access token. |
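For non-interactive environments (CI jobs, scripts), the token can instead be read from an environment variable and passed to login() directly. A minimal sketch; the variable name HF_TOKEN here is just a convention of this example, not an official requirement:

```python
import os

def get_token(env_var="HF_TOKEN"):
    """Fetch a Hugging Face access token from the environment, if set."""
    return os.environ.get(env_var)

token = get_token()
if token is not None:
    from huggingface_hub import login
    login(token=token)  # skips the interactive prompt
```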
Next we install diffusers and dependencies: |
pip install diffusers accelerate transformers safetensors |
The following sections give more detailed examples of how to use IF. Specifically:
Text-to-Image Generation |
Image-to-Image Generation |
Inpainting |
Reusing model weights |
Speed optimization |
Memory optimization |
Available checkpoints |
Stage-1 |
DeepFloyd/IF-I-IF-v1.0 |
DeepFloyd/IF-I-L-v1.0 |
DeepFloyd/IF-I-M-v1.0 |
Stage-2 |
DeepFloyd/IF-II-L-v1.0 |
DeepFloyd/IF-II-M-v1.0 |
Stage-3 |
stabilityai/stable-diffusion-x4-upscaler |
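For scripts that switch between model sizes, the checkpoint names above can be kept in a small lookup table. The dict layout below is just an illustrative convention, not part of diffusers:

```python
# Checkpoint IDs per stage, as listed above; the M and L variants are
# smaller than the IF (XL) variant and need less memory.
IF_CHECKPOINTS = {
    "stage_1": ["DeepFloyd/IF-I-IF-v1.0", "DeepFloyd/IF-I-L-v1.0", "DeepFloyd/IF-I-M-v1.0"],
    "stage_2": ["DeepFloyd/IF-II-L-v1.0", "DeepFloyd/IF-II-M-v1.0"],
    "stage_3": ["stabilityai/stable-diffusion-x4-upscaler"],
}

# e.g. pick the largest stage-1 checkpoint:
print(IF_CHECKPOINTS["stage_1"][0])  # DeepFloyd/IF-I-IF-v1.0
```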
Demo |
Google Colab |
Text-to-Image Generation |
By default, diffusers makes use of model CPU offloading
to run the whole IF pipeline with as little as 14 GB of VRAM.
from diffusers import DiffusionPipeline |
from diffusers.utils import pt_to_pil |
import torch |
# stage 1 |
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) |
stage_1.enable_model_cpu_offload() |
# stage 2 |
stage_2 = DiffusionPipeline.from_pretrained( |
"DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 |
) |
stage_2.enable_model_cpu_offload() |
# stage 3 |
safety_modules = { |
"feature_extractor": stage_1.feature_extractor, |
"safety_checker": stage_1.safety_checker, |
"watermarker": stage_1.watermarker, |
} |
stage_3 = DiffusionPipeline.from_pretrained( |
"stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 |
) |
stage_3.enable_model_cpu_offload() |
prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' |
generator = torch.manual_seed(1) |
# text embeds |
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) |
# stage 1
image = stage_1(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
pt_to_pil(image)[0].save("./if_stage_I.png")
# stage 2
image = stage_2(
    image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
pt_to_pil(image)[0].save("./if_stage_II.png")
# stage 3
image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images
image[0].save("./if_stage_III.png")