current timestep.

set_timesteps

( num_inference_steps: int, device: Union[str, torch.device] = None, num_train_timesteps: Optional[int] = None )

Parameters

num_inference_steps (int) —
The number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) —
The device to which the timesteps should be moved. If None, the timesteps are not moved.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).

step

( model_output: torch.FloatTensor, timestep: float, sample: torch.FloatTensor, generator: Optional[torch.Generator] = None, return_dict: bool = True ) → SchedulerOutput or tuple

Parameters

model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
timestep (float) —
The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) —
A current instance of a sample created by the diffusion process.
generator (torch.Generator, optional) —
A random number generator.
return_dict (bool) —
Whether or not to return a SchedulerOutput or tuple.

Returns

SchedulerOutput or tuple

If return_dict is True, ~schedulers.scheduling_utils.SchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).

SchedulerOutput

class diffusers.schedulers.scheduling_utils.SchedulerOutput

( prev_sample: torch.FloatTensor )

Parameters

prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the denoising loop.

Base class for the output of a scheduler's step function.
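Taken together, set_timesteps and step drive a denoising loop: set the timestep schedule once, then call step on each model output. The sketch below mirrors the basic usage pattern; the DDPMScheduler and the google/ddpm-cat-256 checkpoint are assumptions chosen for illustration, not fixed by the documentation above:

>>> import torch
>>> from diffusers import DDPMScheduler, UNet2DModel

>>> # Assumed checkpoint, used here only to illustrate the scheduler API
>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256")

>>> # Run before inference: choose how many discrete timesteps to use
>>> scheduler.set_timesteps(num_inference_steps=50)

>>> # Start from pure Gaussian noise and iteratively denoise
>>> sample = torch.randn(1, 3, model.config.sample_size, model.config.sample_size)
>>> for t in scheduler.timesteps:
...     with torch.no_grad():
...         noise_pred = model(sample, t).sample  # predicted noise
...     # step() returns a SchedulerOutput; prev_sample is the next model input
...     sample = scheduler.step(noise_pred, t, sample).prev_sample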
Diffusers

🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions.
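For the simple-inference path, loading a full pipeline is a few lines of code. A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA device (both assumptions, not named in the text above):

>>> import torch
>>> from diffusers import DiffusionPipeline

>>> # Assumed checkpoint for illustration
>>> pipe = DiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> # A seeded generator makes the generation reproducible
>>> generator = torch.Generator("cuda").manual_seed(0)
>>> image = pipe("an astronaut riding a horse on mars", generator=generator).images[0]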
GLIGEN (Grounded Language-to-Image Generation)

The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs.

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model's potential harms.
feature_extractor (CLIPImageProcessor) —
A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN).

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).

__call__

( prompt: Union[str, List[str]] = None, height: Optional[int] = None, width: Optional[int] = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, gligen_scheduled_sampling_beta: float = 0.3, gligen_phrases: List[str] = None, gligen_boxes: List[List[float]] = None, gligen_inpaint_image: Optional[PIL.Image.Image] = None, negative_prompt: Optional[Union[str, List[str]]] = None, num_images_per_prompt: Optional[int] = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.FloatTensor] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, output_type: Optional[str] = "pil", return_dict: bool = True, callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, callback_steps: int = 1, cross_attention_kwargs: Optional[Dict[str, Any]] = None, guidance_rescale: float = 0.0, clip_skip: Optional[int] = None ) → StableDiffusionPipelineOutput or tuple

Parameters

prompt (str or List[str], optional) —
The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
gligen_phrases (List[str]) —
The phrases to guide what to include in each of the regions defined by the corresponding gligen_boxes. There should only be one phrase per bounding box.
gligen_boxes (List[List[float]]) —
The bounding boxes that identify rectangular regions of the image that are going to be filled with the content described by the corresponding gligen_phrases. Each rectangular box is defined as a List[float] of 4 elements [xmin, ymin, xmax, ymax], where each value is normalized to [0, 1] (see the conversion sketch after the example below).
gligen_inpaint_image (PIL.Image.Image, optional) —
The input image. If provided, it is inpainted with the objects described by gligen_boxes and gligen_phrases. Otherwise, the call is treated as a generation task on a blank input image.
gligen_scheduled_sampling_beta (float, defaults to 0.3) —
Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image Generation. The Scheduled Sampling factor is only varied for scheduled sampling during inference, for improved quality and controllability.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image and np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference, with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). A usage sketch follows the example below.
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
guidance_rescale (float, optional, defaults to 0.0) —
Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are Flawed. The guidance rescale factor should fix overexposure when using zero terminal SNR.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.
The call function to the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import StableDiffusionGLIGENPipeline
>>> from diffusers.utils import load_image
>>> # Insert objects described by text at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained(
... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> input_image = load_image(
... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png"
... )
>>> prompt = "a birthday cake"
>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]]
>>> phrases = ["a birthday cake"]
>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_inpaint_image=input_image,
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images

>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg")
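Since gligen_boxes expects coordinates normalized to [0, 1], pixel-space boxes have to be divided by the image width and height. A minimal sketch; normalize_box is a hypothetical helper, not part of the pipeline API:

>>> def normalize_box(box_px, width, height):
...     # Hypothetical helper: convert a pixel-space [xmin, ymin, xmax, ymax]
...     # box to the normalized [0, 1] coordinates expected by gligen_boxes
...     xmin, ymin, xmax, ymax = box_px
...     return [xmin / width, ymin / height, xmax / width, ymax / height]

>>> # e.g. a box drawn on a 512x512 image
>>> normalize_box([137, 312, 244, 368], width=512, height=512)
[0.267578125, 0.609375, 0.4765625, 0.71875]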
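The callback and callback_steps arguments documented above can be used to observe the latents while sampling runs. A short sketch reusing the pipeline from the example; log_progress is a hypothetical function name:

>>> def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
...     # Matches the documented signature: callback(step, timestep, latents)
...     print(f"step={step} timestep={timestep} latents_norm={latents.norm():.2f}")

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_boxes=boxes,
...     gligen_inpaint_image=input_image,
...     callback=log_progress,
...     callback_steps=10,
... ).images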