to make generation deterministic.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
noise_level (int, optional, defaults to 0) —
The amount of noise to add to the upscaled image. Must be in the range [0, 1000).
clean_caption (bool, optional, defaults to True) —
Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to be installed. If the dependencies are not installed, the embeddings will be created from the raw prompt.

Returns
~pipelines.stable_diffusion.IFPipelineOutput or tuple
~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker.
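The callback contract described above can be sketched with a toy driver in plain Python. The function name `run_denoising_loop` and the fake latents are illustrative stand-ins, not diffusers API; only the calling convention `callback(step, timestep, latents)` and the `callback_steps` gating mirror the documented behavior.

```python
# Toy sketch of the callback contract: a hypothetical driver loop that
# invokes callback(step, timestep, latents) every `callback_steps` steps.
# `run_denoising_loop` and the fake latents are stand-ins, not diffusers API.

def run_denoising_loop(num_steps, callback=None, callback_steps=1):
    """Simulate an inference loop that reports progress via a callback."""
    timesteps = list(range(num_steps - 1, -1, -1))  # e.g. 4 -> [3, 2, 1, 0]
    latents = [0.0]  # stand-in for a torch.FloatTensor of latents
    fired_on = []
    for step, timestep in enumerate(timesteps):
        # ... denoising work would happen here ...
        if callback is not None and step % callback_steps == 0:
            callback(step, timestep, latents)
            fired_on.append(step)
    return fired_on

# With callback_steps=2 the callback fires on steps 0 and 2 only.
fired = run_denoising_loop(4, callback=lambda s, t, l: None, callback_steps=2)
```

With the default `callback_steps=1` the callback fires on every step, matching the documented default.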
Function invoked when calling the pipeline for generation.

Examples:

>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil |
>>> import torch |
>>> from PIL import Image |
>>> import requests |
>>> from io import BytesIO |
>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" |
>>> response = requests.get(url) |
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") |
>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" |
>>> response = requests.get(url) |
>>> mask_image = Image.open(BytesIO(response.content)) |
>>> pipe = IFInpaintingPipeline.from_pretrained( |
... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 |
... ) |
>>> pipe.enable_model_cpu_offload() |
>>> prompt = "blue sunglasses" |
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) |
>>> image = pipe( |
... image=original_image, |
... mask_image=mask_image, |
... prompt_embeds=prompt_embeds, |
... negative_prompt_embeds=negative_embeds, |
... output_type="pt", |
... ).images |
>>> # save intermediate image |
>>> pil_image = pt_to_pil(image) |
>>> pil_image[0].save("./if_stage_I.png") |
>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( |
... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 |
... ) |
>>> super_res_1_pipe.enable_model_cpu_offload() |
>>> image = super_res_1_pipe( |
... image=image, |
... mask_image=mask_image, |
... original_image=original_image, |
... prompt_embeds=prompt_embeds, |
... negative_prompt_embeds=negative_embeds, |
... ).images |
>>> image[0].save("./if_stage_II.png")

encode_prompt

< source >

( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False )

Parameters

prompt (str or List[str], optional) —
The prompt to be encoded.
do_classifier_free_guidance (bool, optional, defaults to True) —
Whether or not to use classifier-free guidance.
num_images_per_prompt (int, optional, defaults to 1) —
The number of images that should be generated per prompt.
device (torch.device, optional) —
The torch device to place the resulting embeddings on.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
clean_caption (bool, defaults to False) —
If True, the function will preprocess and clean the provided caption before encoding.

Encodes the prompt into text encoder hidden states.
ConsistencyDecoderScheduler

This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models.

ConsistencyDecoderScheduler

class diffusers.schedulers.ConsistencyDecoderScheduler

< source >

( num_train_timesteps: int = 1024 sigma_data: float = 0.5 )

scale_model_input

< source >

( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor

Parameters

sample (torch.FloatTensor) —
The input sample.
timestep (int, optional) —
The current timestep in the diffusion chain.

Returns
torch.FloatTensor
A scaled input sample.

Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.

step

< source >

( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple

Parameters

model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
timestep (float) —
The current timestep in the diffusion chain.
sample (torch.FloatTensor) —
A current instance of a sample created by the diffusion process.
generator (torch.Generator, optional) —
A random number generator.
return_dict (bool, optional, defaults to True) —
Whether or not to return a ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput instead of a plain tuple.
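The `scale_model_input`/`step` pair exists so that a single sampling loop works with any scheduler: the loop scales the current sample, runs the model, then asks the scheduler for the next sample via `.prev_sample`. A minimal plain-Python sketch of that pattern, using a hypothetical `ToyScheduler` (identity input scaling, each step halving the distance to the model output) rather than the real ConsistencyDecoderScheduler:

```python
# Toy sketch (stand-in classes, not the real diffusers API) of the sampling
# pattern scale_model_input and step support: scale the sample, run the
# model, then take the scheduler's prev_sample as the next iterate.

class ToySchedulerOutput:
    def __init__(self, prev_sample):
        self.prev_sample = prev_sample

class ToyScheduler:
    """Hypothetical scheduler: identity input scaling; each step moves the
    sample halfway toward the model output."""
    timesteps = [2, 1, 0]

    def scale_model_input(self, sample, timestep=None):
        return sample  # this toy needs no scaling

    def step(self, model_output, timestep, sample, return_dict=True):
        prev_sample = [(s + m) / 2 for s, m in zip(sample, model_output)]
        if not return_dict:
            return (prev_sample,)  # plain tuple, mirroring return_dict=False
        return ToySchedulerOutput(prev_sample)

def sample_loop(scheduler, model, sample):
    for t in scheduler.timesteps:
        model_input = scheduler.scale_model_input(sample, t)
        model_output = model(model_input, t)
        sample = scheduler.step(model_output, t, sample).prev_sample
    return sample

# A "model" that always predicts zeros drives the sample toward zero:
# 8.0 -> 4.0 -> 2.0 -> 1.0 over the three timesteps.
result = sample_loop(ToyScheduler(), lambda x, t: [0.0] * len(x), [8.0])
```

Because the loop only touches `scale_model_input`, `step`, and `.prev_sample`, swapping in a scheduler that does scale its inputs (as ConsistencyDecoderScheduler does) requires no change to the loop itself.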