You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. You can also generate only an image or only text (which the UniDiffuser paper calls "marginal" generation, since we sample from the marginal distributions of images and text, respectively):

```python
# Image-only generation
pipe.set_image_mode()
sample_image = pipe(num_inference_steps=20).images[0]

# Text-only generation
pipe.set_text_mode()
sample_text = pipe(num_inference_steps=20).text[0]
```

## Text-to-Image Generation

UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt, or the distribution of texts conditioned on an image.
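The guidance_scale argument used in the conditional examples that follow controls classifier-free guidance. As a rough illustrative sketch (not the pipeline's internal implementation), the guided prediction mixes the unconditional and conditional model outputs:

```python
def guided_prediction(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance (illustrative sketch): push the prediction
    away from the unconditional output and toward the conditional one."""
    return [u + guidance_scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]


# guidance_scale = 1.0 reproduces the conditional prediction exactly;
# larger values (such as the 8.0 used in the examples) strengthen conditioning.
print(guided_prediction([0.0, 0.5], [1.0, 0.5], 8.0))  # → [8.0, 0.5]
```

Here the predictions are plain lists for readability; in the pipeline they are tensors, but the mixing rule is the same.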
Here is an example of sampling from the conditional image distribution (text-to-image generation, or text-conditioned image generation):

```python
import torch

from diffusers import UniDiffuserPipeline

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Text-to-image generation
prompt = "an elephant under the sea"

sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
t2i_image
```

The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode().

## Image-to-Text Generation

Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation):
```python
import torch

from diffusers import UniDiffuserPipeline
from diffusers.utils import load_image

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Image-to-text generation
image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
init_image = load_image(image_url).resize((512, 512))

sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
i2t_text = sample.text[0]
print(i2t_text)
```

The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode().

## Image Variation

The UniDiffuser authors suggest performing image variation through a "round-trip" generation method: given an input image, first perform an image-to-text generation, then perform a text-to-image generation on the resulting text.
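The round trip can be wrapped in a small helper. This is an illustrative sketch, not a Diffusers API: `round_trip_image_variation` is a hypothetical name, and any pipeline object accepting the call signature shown in the examples will work.

```python
def round_trip_image_variation(pipe, init_image, num_inference_steps=20, guidance_scale=8.0):
    """Image variation via an image-to-text then text-to-image round trip (sketch)."""
    # 1. Image-to-text: caption the input image.
    caption = pipe(
        image=init_image,
        num_inference_steps=num_inference_steps,
        guidance_scale=guidance_scale,
    ).text[0]
    # 2. Text-to-image: regenerate a semantically similar image from the caption.
    return pipe(
        prompt=caption,
        num_inference_steps=num_inference_steps,
        guidance_scale=guidance_scale,
    ).images[0]
```

Because the second step only sees the intermediate caption, details of the input image not captured by the text are not preserved; the result is semantically, not pixel-wise, similar.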
This produces a new image which is semantically similar to the input image:
```python
import torch

from diffusers import UniDiffuserPipeline
from diffusers.utils import load_image

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Image variation: an image-to-text generation followed by a text-to-image generation.
# 1. Image-to-text generation
image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg"
init_image = load_image(image_url).resize((512, 512))

sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0)
i2t_text = sample.text[0]
print(i2t_text)

# 2. Text-to-image generation
sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0)
final_image = sample.images[0]
final_image.save("unidiffuser_image_variation_sample.png")
```

## Text Variation

Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by an image-to-text generation:
```python
import torch

from diffusers import UniDiffuserPipeline

device = "cuda"
model_id_or_path = "thu-ml/unidiffuser-v1"
pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
pipe.to(device)

# Text variation: a text-to-image generation followed by an image-to-text generation.
# 1. Text-to-image generation
prompt = "an elephant under the sea"

sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]
t2i_image.save("unidiffuser_text2img_sample_image.png")

# 2. Image-to-text generation
sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0)
final_prompt = sample.text[0]
print(final_prompt)
```

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.

## UniDiffuserPipeline

class diffusers.UniDiffuserPipeline
Parameters:

- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This is part of the UniDiffuser image representation, along with the CLIP vision encoding.
- text_encoder (CLIPTextModel) — Frozen text encoder (clip-vit-large-patch14).
- image_encoder (CLIPVisionModel) — A CLIPVisionModel to encode images as part of the image representation, along with the VAE latent representation.
- image_processor (CLIPImageProcessor) — A CLIPImageProcessor to preprocess an image before it is CLIP-encoded with image_encoder.
- clip_tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder.
- text_decoder (UniDiffuserTextDecoder) — Frozen text decoder. This is a GPT-style model used to generate text from the UniDiffuser embedding.
- text_tokenizer (GPT2Tokenizer) — A GPT2Tokenizer to decode text for text generation; used along with the text_decoder.
- unet (UniDiffuserModel) — A U-ViT model with UNet-style skip connections between transformer layers to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The original UniDiffuser paper uses the DPMSolverMultistepScheduler.

Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned image generation, image-conditioned text generation, and joint image-text generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

### __call__

( prompt: Optional[Union[str, List[str]]] = None, image: Optional[Union[torch.FloatTensor, PIL.Image.Image]] = None, height: Optional[int] = None, width: Optional[int] = None, data_type: Optional[int] = 1, num_inference_steps: int = 50, guidance_scale: float = 8.0, negative_prompt: Optional[Union[str, List[str]]] = None, num_i… )

Parameters:

- prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. Required for text-conditioned image generation (text2img) mode.
- image (torch.FloatTensor or PIL.Image.Image, optional) — Image or tensor representing an image batch. Required for image-conditioned text generation (img2text) mode.
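How the supplied arguments map to an inferred mode can be sketched as follows. This is an illustrative reimplementation of the documented behavior above, not the pipeline's actual code, and `infer_mode` is a hypothetical helper name:

```python
def infer_mode(prompt=None, prompt_embeds=None, image=None):
    """Sketch of input-based mode inference, following the documented rules:
    text inputs alone -> text2img, an image alone -> img2text, neither -> joint."""
    has_text = prompt is not None or prompt_embeds is not None
    if has_text and image is None:
        return "text2img"  # text-conditioned image generation
    if image is not None and not has_text:
        return "img2text"  # image-conditioned text generation
    if not has_text and image is None:
        return "joint"     # joint image-text generation
    # Both supplied: resolve the ambiguity by setting a mode explicitly, e.g.
    # with set_text_to_image_mode() or set_image_to_text_mode().
    raise ValueError("Ambiguous inputs; set a mode explicitly.")
```

In the real pipeline the mode, once inferred or set, sticks until it is changed or reset_mode() is called.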