Image Variation

Image variation can be performed with an image-to-text generation followed by a text-to-image generation:

import torch
from diffusers import UniDiffuserPipeline
from diffusers.utils import load_image

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Image variation can be performed with an image-to-text generation followed by a text-to-image generation:
# 1. Image-to-text generation
image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
init_image = load_image(image_url).resize((512, 512))
sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
i2t_text = sample.text[0]
print(i2t_text)
# 2. Text-to-image generation
sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0)
final_image = sample.images[0]
final_image.save("unidiffuser_image_variation_sample.png")

Text Variation

Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by an image-to-text generation:

import torch
from diffusers import UniDiffuserPipeline
device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)
# Text variation can be performed with a text-to-image generation followed by an image-to-text generation:
# 1. Text-to-image generation
prompt = "an elephant under the sea"
sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
t2i_image.save("unidiffuser_text2img_sample_image.png")
# 2. Image-to-text generation
sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0)
final_prompt = sample.text[0]
print(final_prompt)

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.

UniDiffuserPipeline

class diffusers.UniDiffuserP...
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This is part of the UniDiffuser image representation along with the CLIP vision encoding.
text_encoder (CLIPTextModel) —
Frozen text encoder (clip-vit-large-patch14).
image_encoder (CLIPVisionModel) —
A CLIPVisionModel to encode images as part of its image representation along with the VAE latent representation.
image_processor (CLIPImageProcessor) —
CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder.
clip_tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder.
text_decoder (UniDiffuserTextDecoder) —
Frozen text decoder. This is a GPT-style model used to generate text from the UniDiffuser embedding.
text_tokenizer (GPT2Tokenizer) —
A GPT2Tokenizer to decode text for text generation; used along with the text_decoder.
unet (UniDiffuserModel) —
A U-ViT model with UNet-style skip connections between transformer layers to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler.

Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned image generation, image-conditioned text generation, and joint image-text generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_i...
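The pipeline chooses its operating mode from which inputs it receives. Below is a hypothetical, simplified sketch of that dispatch (the names `infer_mode`, `"text2img"`, `"img2text"`, and `"joint"` are illustrative; the real UniDiffuserPipeline also accounts for pre-supplied latents and embeddings and for an explicitly set mode):

```python
# Hypothetical sketch of mode inference for a UniDiffuser-style pipeline.
# Not the library's actual implementation.

def infer_mode(prompt=None, image=None):
    if prompt is not None:
        return "text2img"   # text-conditioned image generation
    if image is not None:
        return "img2text"   # image-conditioned text generation
    return "joint"          # unconditional joint image-text generation

print(infer_mode(prompt="an elephant under the sea"))  # text2img
print(infer_mode(image=object()))                      # img2text
print(infer_mode())                                    # joint
```

This mirrors the behavior described in the parameter reference below: prompt is required for text2img mode, image is required for img2text mode, and supplying neither yields joint generation.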
prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. Required for text-conditioned image generation (text2img) mode.
image (torch.FloatTensor or PIL.Image.Image, optional) —
Image or tensor representing an image batch. Required for image-conditioned text generation (img2text) mode.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
data_type (int, optional, defaults to 1) —
The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type embedding; this is added for compatibility with the UniDiffuser-v1 checkpoint.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 8.0) —
A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in text-conditioned image generation (text2img) mode.
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated.
num_prompts_per_image (int, optional, defaults to 1) —
The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint image-text generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. This parameter assumes a full set of VAE, CLIP, and text latents; if supplied, it overrides the values of prompt_latents, vae_latents, and clip_latents.
prompt_latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
vae_latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
clip_latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument. Used in text-conditioned image generation (text2img) mode.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument. Used