Encodes the prompt into text encoder hidden states.

Parameters

prompt (str or List[str], optional):
    prompt to be encoded
device (torch.device, optional):
    torch device to place the resulting embeddings on
num_images_per_prompt (int, optional, defaults to 1):
    number of images that should be generated per prompt
do_classifier_free_guidance (bool, optional, defaults to True):
    whether to use classifier-free guidance or not
negative_prompt (str or List[str], optional):
    The prompt or prompts not to guide the image generation. If not defined, one has to pass
    negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if
    guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional):
    Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
    provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional):
    Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
    weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
    argument.
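For example, a minimal call sketch (assuming an IF stage-1 pipeline named stage_1 is already loaded; the prompt strings are illustrative):

prompt_embeds, negative_embeds = stage_1.encode_prompt(
    "a photo of an astronaut riding a horse",
    do_classifier_free_guidance=True,
    num_images_per_prompt=1,
    negative_prompt="blurry, low quality",
)
# pass the precomputed embeddings to the pipeline call via
# prompt_embeds=... and negative_prompt_embeds=...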
DeepFloyd IF

Overview

DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding.
The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules:

Stage 1: a base model that generates a 64x64 px image based on the text prompt,
Stage 2: a 64x64 px => 256x256 px super-resolution model, and
Stage 3: a 256x256 px => 1024x1024 px super-resolution model.
Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling.
Stage 3 is Stability AI’s x4 Upscaling model.
The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset.
Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis.

Usage

Before you can use IF, you need to accept its usage conditions. To do so:

Make sure to have a Hugging Face account and be logged in.
Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0.
Make sure to log in locally: run the huggingface_hub login() function and enter your Hugging Face Hub access token.

Next we install diffusers and dependencies:

pip install -q diffusers accelerate transformers

The following sections give more in-detail examples of how to use IF. Specifically:

Text-to-Image Generation
Image-to-Image Generation
Inpainting
Reusing model weights
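For example, a minimal login sketch (run once per environment; you will be prompted for your token):

from huggingface_hub import login

login()

Text-to-Image Generation

The example below chains all three stages and uses model cpu offloading to keep GPU memory usage low: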
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil, make_image_grid
import torch
# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()
# stage 2
stage_2 = DiffusionPipeline.from_pretrained(
"DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()
# stage 3
safety_modules = {
"feature_extractor": stage_1.feature_extractor,
"safety_checker": stage_1.safety_checker,
"watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()
prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
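# fixed seed shared across all three stages for reproducible results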
generator = torch.manual_seed(1)
# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)
# stage 1
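# output_type="pt" returns torch tensors, which can be passed straight to stage 2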
stage_1_output = stage_1(
prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
# pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")
# stage 2
stage_2_output = stage_2(
image=stage_1_output,
prompt_embeds=prompt_embeds,
negative_prompt_embeds=negative_embeds,
generator=generator,
output_type="pt",
).images
# pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")
# stage 3
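# noise_level sets how much noise is added to the low-resolution conditioning image before upscaling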
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images
# stage_3_output[0].save("./if_stage_III.png")
make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3)

Text Guided Image-to-Image Generation

The same IF model weights can be used for text-guided image-to-image translation or image variation.
In this case just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines.

Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines without loading them twice by making use of the components argument, as explained here (see the sketch after the following example).

from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline
from diffusers.utils import pt_to_pil, load_image, make_image_grid
import torch
# download image
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
original_image = load_image(url)
original_image = original_image.resize((768, 512))
# stage 1
stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()
# stage 2
stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()
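As noted above, instead of loading each checkpoint twice you can reuse the weights of already-loaded text-to-image pipelines through the components argument. A minimal sketch, assuming hypothetical text-to-image pipelines pipe_1 and pipe_2 loaded as in the first example:

from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

# pipe_1 / pipe_2 are assumed to be the stage 1 / stage 2 text-to-image
# pipelines loaded earlier; components hands their modules over without
# re-downloading or duplicating the weights in memory
stage_1 = IFImg2ImgPipeline(**pipe_1.components)
stage_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components)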