import gc

import torch
from transformers import T5EncoderModel

from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline
from diffusers.utils import make_image_grid, pt_to_pil

# Load the T5 text encoder in 8-bit to reduce memory usage
text_encoder = T5EncoderModel.from_pretrained(
"DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit"
)
# text to image
pipe = DiffusionPipeline.from_pretrained(
"DeepFloyd/IF-I-XL-v1.0",
text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder
unet=None,
device_map="auto",
)
prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
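# The prompt embeddings computed here are reused by both the stage 1 and super-resolution pipelines below.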
# Remove the pipeline so we can re-load the pipeline with the unet
del text_encoder
del pipe
gc.collect()
torch.cuda.empty_cache()
pipe = IFPipeline.from_pretrained(
"DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto"
)
generator = torch.Generator().manual_seed(0)
stage_1_output = pipe(
prompt_embeds=prompt_embeds,
negative_prompt_embeds=negative_embeds,
output_type="pt",
generator=generator,
).images
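# stage_1_output is a torch tensor batch (output_type="pt"), so it can be passed directly to the super-resolution stage.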
#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")
# Remove the pipeline so we can load the super-resolution pipeline
del pipe
gc.collect()
torch.cuda.empty_cache()
# First super resolution
pipe = IFSuperResolutionPipeline.from_pretrained(
"DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto"
)
generator = torch.Generator().manual_seed(0)
stage_2_output = pipe(
image=stage_1_output,
prompt_embeds=prompt_embeds,
negative_prompt_embeds=negative_embeds,
output_type="pt",
generator=generator,
).images
#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")
make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2)

Available Pipelines:

Pipeline                                   | Tasks                     | Colab
pipeline_if.py                             | Text-to-Image Generation  | -
pipeline_if_superresolution.py             | Text-to-Image Generation  | -
pipeline_if_img2img.py                     | Image-to-Image Generation | -
pipeline_if_img2img_superresolution.py     | Image-to-Image Generation | -
pipeline_if_inpainting.py                  | Image-to-Image Generation | -
pipeline_if_inpainting_superresolution.py  | Image-to-Image Generation | -

IFPipeline

class diffusers.IFPipeline(
    tokenizer: T5Tokenizer,
    text_encoder: T5EncoderModel,
    unet: UNet2DConditionModel,
    scheduler: DDPMScheduler,
    safety_checker: Optional,
    feature_extractor: Optional,
    watermarker: Optional,
    requires_safety_checker: bool = True,
)

__call__(
    prompt: Union = None,
    num_inference_steps: int = 100,
    timesteps: List = None,
    guidance_scale: float = 7.0,
    negative_prompt: Union = None,
    num_images_per_prompt: Optional = 1,
    height: Optional = None,
    width: Optional = None,
    eta: float = 0.0,
    generator: Union = None,
    prompt_embeds: Optional = None,
    negative_prompt_embeds: Optional = None,
    output_type: Optional = 'pil',
    return_dict: bool = True,
    callback: Optional = None,
    callback_steps: int = 1,
    clean_caption: bool = True,
    cross_attention_kwargs: Optional = None,
) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple
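As the constructor signature above shows, the pipeline is assembled from individual components, and components can be overridden at load time in the same way text_encoder and unet are overridden in the example above. A minimal sketch, with the caveat that passing None for the safety checker and watermarker is an assumption shown for illustration only, not the documented default:

# Hedged sketch: override components from the class signature above at load time.
# safety_checker=None / watermarker=None are assumptions, for illustration only.
pipe = IFPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    safety_checker=None,
    watermarker=None,
    variant="fp16",
    torch_dtype=torch.float16,
)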
Parameters

prompt (str or List[str], optional):
    The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
num_inference_steps (int, optional, defaults to 100):
    The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
timesteps (List[int], optional):
    Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps timesteps are used. Must be in descending order.
guidance_scale (float, optional, defaults to 7.0):
    Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality (see the sketch after this parameter list).
negative_prompt (str or List[str], optional):
    The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1):
    The number of images to generate per prompt.
height (int, optional, defaults to self.unet.config.sample_size):
    The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size):
    The width in pixels of the generated image.
eta (float, optional, defaults to 0.0):
    Corresponds to parameter eta (η) in the DDIM paper (https://arxiv.org/abs/2010.02502). Only applies to schedulers.DDIMScheduler; ignored for other schedulers.
generator (torch.Generator or List[torch.Generator], optional):
    One or a list of torch generator(s) to make generation deterministic.
prompt_embeds (torch.FloatTensor, optional):
    Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional):
    Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil"):
    The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True):
    Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple.
callback (Callable, optional):
    A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). See the sketch after this parameter list.
callback_steps (int, optional, defaults to 1):
    The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
clean_caption (bool, optional, defaults to True):
    Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to be installed. If the dependencies are not installed, the embeddings will be created from the raw prompt.
cross_attention_kwargs (dict, optional):
    A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
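To make guidance_scale concrete, here is a minimal, illustrative sketch of how classifier-free guidance combines the unconditional and text-conditional noise predictions. The variable names are hypothetical and this is not the pipeline's exact code:

import torch

# Illustrative only: classifier-free guidance blends the unconditional and
# text-conditional noise predictions; guidance_scale > 1 pushes the result
# toward the text-conditional prediction.
noise_pred_uncond = torch.randn(1, 3, 64, 64)  # placeholder unconditional prediction
noise_pred_text = torch.randn(1, 3, 64, 64)    # placeholder text-conditional prediction
guidance_scale = 7.0

noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

With guidance_scale equal to 1.0 this reduces to the text-conditional prediction alone; larger values weight the text condition more strongly, matching the quality/fidelity trade-off described above.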
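The callback and callback_steps arguments can be used for lightweight progress reporting. A minimal sketch, assuming a pipeline instance pipe and the prompt embeddings from the example above; the callback signature matches the one documented for the callback parameter:

import torch

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # Called every `callback_steps` denoising steps with the current latents.
    print(f"step {step} (timestep {timestep}): latents shape {tuple(latents.shape)}")

# Hypothetical usage: report progress every 10 steps.
stage_1_output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    callback=log_progress,
    callback_steps=10,
    output_type="pt",
).images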
Returns

~pipelines.stable_diffusion.IFPipelineOutput or tuple:
    ~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker.
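For completeness, a hedged sketch of consuming the tuple form returned when return_dict=False, again assuming a pipeline instance pipe and the embeddings from the example above. Per the description above, the first element is the list of generated images; anything beyond that is left to the IFPipelineOutput documentation:

# return_dict=False makes the pipeline return a plain tuple instead of an IFPipelineOutput.
output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    return_dict=False,
)
images = output[0]  # first element: list of generated images (see the Returns description above)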
Function invoked when calling the pipeline for generation.

Examples:

>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline