```py
# pt_to_pil(stage_1_output)[0].save("./if_stage_I.png")

# Remove the stage 1 pipeline so the super-resolution pipeline fits in memory
del pipe
gc.collect()
torch.cuda.empty_cache()

# First super-resolution stage
pipe = IFSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto"
)

generator = torch.Generator().manual_seed(0)
stage_2_output = pipe(
    image=stage_1_output,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
    generator=generator,
).images

# pt_to_pil(stage_2_output)[0].save("./if_stage_II.png")

make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2)
```

Available Pipelines:

| Pipeline | Tasks | Colab |
|---|---|---|
| pipeline_if.py | Text-to-Image Generation | - |
| pipeline_if_superresolution.py | Text-to-Image Generation | - |
| pipeline_if_img2img.py | Image-to-Image Generation | - |
| pipeline_if_img2img_superresolut... | … | … |
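The release sequence between stages (`del pipe`, `gc.collect()`, `torch.cuda.empty_cache()`) can be wrapped in a small helper. This is an illustrative sketch, not part of the diffusers API; it degrades gracefully when torch or CUDA is absent:

```python
import gc


def reclaim_memory() -> None:
    """Run after del-ing a pipeline: collect garbage, then release
    cached CUDA blocks back to the driver if torch and a GPU are present."""
    gc.collect()
    try:
        import torch  # optional dependency in this sketch
    except ImportError:
        return
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```

After `del pipe` followed by `reclaim_memory()`, the stage 1 weights no longer occupy GPU memory, so the super-resolution pipeline can be loaded in their place.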
Parameters:

- **prompt** (`str` or `List[str]`, *optional*): The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- **num_inference_steps** (`int`, *optional*, defaults to 100): The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
- **timesteps** (`List[int]`, *optional*): Custom timesteps to use for the denoising process. If not defined, `num_inference_steps` equally spaced timesteps are used. Must be in descending order.
- **guidance_scale** (`float`, *optional*, defaults to 7.0): Guidance scale as defined in Classifier-Free Diffusion Guidance. `guidance_scale` is defined as `w` of equation 2 of the Imagen paper. Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages images that are closely linked to the text prompt, usually at the expense of lower image quality.
- **negative_prompt** (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- **num_images_per_prompt** (`int`, *optional*, defaults to 1): The number of images to generate per prompt.
- **height** (`int`, *optional*, defaults to `self.unet.config.sample_size`): The height in pixels of the generated image.
- **width** (`int`, *optional*, defaults to `self.unet.config.sample_size`): The width in pixels of the generated image.
- **eta** (`float`, *optional*, defaults to 0.0): Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to `schedulers.DDIMScheduler`; ignored for others.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*): One or a list of torch generator(s) to make generation deterministic.
- **prompt_embeds** (`torch.FloatTensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- **output_type** (`str`, *optional*, defaults to `"pil"`): The output format of the generated image. Choose between PIL (`PIL.Image.Image`) and `np.array`.
- **return_dict** (`bool`, *optional*, defaults to `True`): Whether or not to return a `~pipelines.stable_diffusion.IFPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*): A function that will be called every `callback_steps` steps during inference. The function will be called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1): The frequency at which the `callback` function is called. If not specified, the callback is called at every step.
- **clean_caption** (`bool`, *optional*, defaults to `True`): Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to be installed. If the dependencies are not installed, the embeddings will be created from the raw prompt.
- **cross_attention_kwargs** (`dict`, *optional*): A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in `diffusers.models.attention_processor`.

Returns:

`~pipelines.stable_diffusion.IFPipelineOutput` or `tuple`: `~pipelines.stable_diffusion.IFPipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content.
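The `guidance_scale` described above is the `w` of the classifier-free guidance update: the unconditional noise prediction is pushed toward the text-conditioned one by a factor of `w`. A minimal sketch of that arithmetic on plain Python lists (hypothetical toy values, not real model outputs):

```python
def apply_cfg(noise_uncond, noise_cond, guidance_scale):
    """Classifier-free guidance: u + w * (c - u), element-wise."""
    return [u + guidance_scale * (c - u) for u, c in zip(noise_uncond, noise_cond)]


# w == 1.0 reproduces the conditional prediction exactly,
# which is why guidance only takes effect for guidance_scale > 1
print(apply_cfg([0.0, 1.0], [1.0, 3.0], 1.0))  # [1.0, 3.0]
# larger w extrapolates past the conditional prediction,
# binding the image more tightly to the prompt
print(apply_cfg([0.0, 1.0], [1.0, 3.0], 7.0))  # [7.0, 15.0]
```

In the real pipeline this update runs on `torch` tensors inside each denoising step; the list version only illustrates the formula.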
Function invoked when calling the pipeline for generation.

Examples:

```py
>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch

>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
>>> pipe.enable_model_cpu_offload()

>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt"
... ).images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_II.png")

>>> # third stage: upscale with the Stable Diffusion x4 upscaler,
>>> # reusing the stage 1 safety modules
>>> safety_modules = {
...     "feature_extractor": pipe.feature_extractor,
...     "safety_checker": pipe.safety_checker,
...     "watermarker": pipe.watermarker,
... }
>>> super_res_2_pipe = DiffusionPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
... )
```
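The `timesteps` argument documented above must be a descending list. A hypothetical helper that produces an evenly spaced schedule (illustrative only; it does not replicate any particular scheduler's exact spacing):

```python
def make_timesteps(num_steps: int, num_train_timesteps: int = 1000) -> list[int]:
    """Evenly spaced integer timesteps in descending order,
    shaped like the `timesteps` argument expects."""
    stride = num_train_timesteps // num_steps
    return [num_train_timesteps - 1 - i * stride for i in range(num_steps)]


print(make_timesteps(4))  # [999, 749, 499, 249]
```

A schedule like this would be passed as `pipe(..., timesteps=make_timesteps(25))` in place of `num_inference_steps`.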