text_encoder_2 (CLIPTextModelWithProjection) — Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k variant.
tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
tokenizer_2 (CLIPTokenizer) — Second tokenizer of class CLIPTokenizer.
unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
force_zeros_for_empty_prompt (bool, optional, defaults to True) — Whether the negative prompt embeddings should always be forced to 0. Also see the config of stabilityai/stable-diffusion-xl-base-1.0.

Pipeline for text-to-image generation using Stable Diffusion XL and k-diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, or running on a particular device).

The pipeline also inherits the following loading methods:
load_textual_inversion() for loading textual inversion embeddings
from_single_file() for loading .ckpt files
load_lora_weights() for loading LoRA weights
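As a hedged usage sketch (it assumes the k-diffusion package is installed; the sampler name below is one of the names k-diffusion exposes, and the model ID is illustrative):

import torch
from diffusers import StableDiffusionXLKDiffusionPipeline

# load the SDXL base weights into the k-diffusion pipeline
pipe = StableDiffusionXLKDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# select a k-diffusion sampler by name
pipe.set_scheduler("sample_dpmpp_2m")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]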
disable_vae_slicing()
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling()
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_freeu( s1: float, s2: float, b1: float, b2: float )
Parameters
s1 (float) — Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
s2 (float) — Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process.
b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features.
b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features.
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. Please refer to the official repository for combinations of values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL.
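For illustration, a minimal sketch of toggling FreeU on this pipeline; the values below are the SDXL combination suggested in the FreeU repository, so treat them as a starting point rather than a fixed recommendation:

# attenuate skip features (s1, s2) and amplify backbone features (b1, b2)
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)
image = pipe("a majestic lion").images[0]
pipe.disable_freeu()  # restore the default (non-FreeU) behavior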
enable_vae_slicing()
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling()
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
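A short sketch combining the two memory-saving switches documented above (the prompt is illustrative):

# slice the VAE batch dimension and tile large latents before decoding
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
images = pipe("a detailed cityscape", num_images_per_prompt=4).images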
encode_prompt( prompt: str, prompt_2: Optional = None, device: Optional = None, num_images_per_prompt: int = 1, do_classifier_free_guidance: bool = True, negative_prompt: Optional = None, negative_prompt_2: Optional = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, pooled_prompt_embeds: Optional = None, negative_pooled_prompt_embeds: Optional = None, lora_scale: Optional = None, clip_skip: Optional = None )
Parameters
prompt (str or List[str]) — The prompt to be encoded.
prompt_2 (str or List[str], optional) — The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is used in both text-encoders.
device (torch.device) — The torch device on which to run the encoding.
num_images_per_prompt (int) — Number of images that should be generated per prompt.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders.
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument.
negative_pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
Encodes the prompt into text encoder hidden states.
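A sketch of pre-computing embeddings once and reusing them across calls (the prompt and variable names are illustrative; the four returned tensors correspond to the parameters documented above):

(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(prompt="a watercolor painting of a fox", do_classifier_free_guidance=True)

# pass the cached embeddings instead of raw text
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
).images[0]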
fuse_qkv_projections( unet: bool = True, vae: bool = True )
Parameters
unet (bool, defaults to True) — To apply fusion on the UNet.
vae (bool, defaults to True) — To apply fusion on the VAE.
Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.
This API is 🧪 experimental.

unfuse_qkv_projections( unet: bool = True, vae: bool = True )
Parameters
unet (bool, defaults to True) — To apply fusion on the UNet.
vae (bool, defaults to True) — To apply fusion on the VAE.
Disables QKV projection fusion if it was enabled.
This API is 🧪 experimental.
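A brief sketch of the fusion toggle pair (both methods are marked experimental above; the prompt is illustrative):

pipe.fuse_qkv_projections()    # fuse q/k/v in self-attention and k/v in cross-attention
image = pipe("a red bicycle").images[0]
pipe.unfuse_qkv_projections()  # undo the fusion and restore the original projections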
DeepFloyd IF

Overview

DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules:
Stage 1: a base model that generates 64x64 px images based on a text prompt,
Stage 2: a 64x64 px => 256x256 px super-resolution model, and
Stage 3: a 256x256 px => 1024x1024 px super-resolution model.
Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling.
Stage 3 is Stability AI’s x4 Upscaling model.
The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset.
Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis.

Usage

Before you can use IF, you need to accept its usage conditions. To do so:
Make sure to have a Hugging Face account and be logged in.
Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0.
Log in locally: run the login() function from huggingface_hub in a Python shell and enter your Hugging Face Hub access token.
Next we install diffusers and dependencies:
pip install -q diffusers accelerate transformers
The following sections give more in-detail examples of how to use IF. Specifically:
Text-to-Image Generation
Image-to-Image Generation
Inpainting
Reusing model weights

Text-to-Image Generation
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil, make_image_grid
import torch

# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_model_cpu_offload()

# stage 3: reuse the safety modules already loaded with stage 1
safety_modules = {
    "feature_extractor": stage_1.feature_extractor,
    "safety_checker": stage_1.safety_checker,
    "watermarker": stage_1.watermarker,
}
stage_3 = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
)
stage_3.enable_model_cpu_offload()

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
generator = torch.manual_seed(1)

# text embeds (computed once by stage 1's T5 encoder, reused by stage 2)
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

# stage 1
stage_1_output = stage_1(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
# pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# stage 2
stage_2_output = stage_2(
    image=stage_1_output,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    generator=generator,
    output_type="pt",
).images
# pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")
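The excerpt stops after stage 2; for completeness, a sketch of the remaining stage 3 upscaling call in the same pattern (the noise_level value here is illustrative):

# stage 3: upscale the 256x256 stage 2 output to 1024x1024
stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images
# stage_3_output[0].save("./if_stage_III.png")
make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3)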