cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under
self.processor in
diffusers.models.attention_processor.

Returns

~pipelines.stable_diffusion.IFPipelineOutput or tuple

~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker.
Function invoked when calling the pipeline for generation.

Examples:

>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil
>>> import torch

>>> from PIL import Image
>>> import requests
>>> from io import BytesIO

>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png"
>>> response = requests.get(url)
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB")

>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png"
>>> response = requests.get(url)
>>> mask_image = Image.open(BytesIO(response.content))
>>> pipe = IFInpaintingPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt = "blue sunglasses"
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)

>>> image = pipe(
...     image=original_image,
...     mask_image=mask_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
...     output_type="pt",
... ).images

>>> # save intermediate image
>>> pil_image = pt_to_pil(image)
>>> pil_image[0].save("./if_stage_I.png")

>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained(
...     "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
... )
>>> super_res_1_pipe.enable_model_cpu_offload()

>>> image = super_res_1_pipe(
...     image=image,
...     mask_image=mask_image,
...     original_image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
... ).images
>>> image[0].save("./if_stage_II.png")

encode_prompt

< source >

( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False )

Parameters

prompt (str or List[str], optional) —
The prompt to be encoded.
do_classifier_free_guidance (bool, optional, defaults to True) —
Whether to use classifier-free guidance or not.
num_images_per_prompt (int, optional, defaults to 1) —
The number of images that should be generated per prompt.
device (torch.device, optional) —
The torch device to place the resulting embeddings on.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead.
Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
clean_caption (bool, defaults to False) —
If True, the function will preprocess and clean the provided caption before encoding.

Encodes the prompt into text encoder hidden states.

IFInpaintingSuperResolutionPipeline

class diffusers.IFInpaintingSuperResolutionPipeline

< source >

( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True )

__call__

< source >

( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple

Parameters

image (torch.FloatTensor or PIL.Image.Image) —
Image, or tensor representing an image batch, that will be used as the starting point for the
process.
original_image (torch.FloatTensor or PIL.Image.Image) —
The original image that image was varied from.
mask_image (PIL.Image.Image) —
Image, or tensor representing an image batch, to mask image. White pixels in the mask will be
repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted
to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
instead of 3, so the expected shape would be (B, H, W, 1).
strength (float, optional, defaults to 0.8) —
Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image
will be used as a starting point, adding more noise to it the larger the strength. The number of
denoising steps depends on the amount of noise initially added. When strength is 1, added noise will
be maximum and the denoising process will run for the full number of iterations specified in
num_inference_steps. A value of 1, therefore, essentially ignores image.
prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds
instead.
num_inference_steps (int, optional, defaults to 100) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference.
timesteps (List[int], optional) —
Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps
timesteps are used. Must be in descending order.
guidance_scale (float, optional, defaults to 4.0) —
Guidance scale as defined in Classifier-Free Diffusion Guidance.
guidance_scale is defined as w of equation 2. of the Imagen
Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt,
usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
less than 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
schedulers.DDIMScheduler, will be ignored for others.
generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s) to make generation deterministic.
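The guidance_scale parameter above applies classifier-free guidance: the model is evaluated on both the prompt embeddings and the negative (unconditional) embeddings, and the two predictions are combined with weight w = guidance_scale. A minimal numeric sketch of that combination rule, in plain Python with no diffusers dependency (the list-based "predictions" and the helper name are illustrative assumptions, not pipeline internals):

```python
# Classifier-free guidance combination (w = guidance_scale, Imagen paper eq. 2):
#   pred = pred_uncond + w * (pred_text - pred_uncond)
# With w = 1 the result equals the text-conditional prediction (guidance
# effectively off); w > 1 extrapolates further along the text direction.

def apply_cfg(pred_uncond, pred_text, guidance_scale):
    """Combine unconditional and text-conditional predictions element-wise."""
    return [
        u + guidance_scale * (t - u)
        for u, t in zip(pred_uncond, pred_text)
    ]

# Toy per-element noise predictions standing in for the UNet outputs:
uncond = [0.0, 1.0, -2.0]
text = [1.0, 1.0, 0.0]

print(apply_cfg(uncond, text, 1.0))  # w = 1: identical to the text prediction
print(apply_cfg(uncond, text, 4.0))  # w = 4 (the IF default scale)
```

This also shows why guidance_scale values at or below 1 are documented as "ignored": at w = 1 the combination collapses to the conditional prediction alone, so the negative prompt no longer influences the output.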