eta (float, optional) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) —
Optional image input to work with IP Adapters.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
clip_skip (int, optional) —
Number of layers to skip in CLIP when computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.

Returns
StableDiffusionPipelineOutput or tuple
If return_dict is True, StableDiffusionPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
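To illustrate the callback, callback_steps, and return_dict parameters described above, here is a minimal sketch; it assumes pipe is a StableDiffusionSAGPipeline loaded as in the example below:

# Sketch: assumes `pipe` is an already-loaded StableDiffusionSAGPipeline.
def log_progress(step: int, timestep: int, latents):
    # Called every `callback_steps` denoising steps with the current latents.
    print(f"step={step} timestep={timestep} latents={tuple(latents.shape)}")

images, nsfw_flags = pipe(
    "a photo of an astronaut riding a horse on mars",
    callback=log_progress,
    callback_steps=5,   # run the callback every 5 steps instead of every step
    return_dict=False,  # return a plain (images, nsfw_content_detected) tuple
)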
The call function to the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import StableDiffusionSAGPipeline

>>> pipe = StableDiffusionSAGPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> image = pipe(prompt, sag_scale=0.75).images[0]

disable_vae_slicing
( )
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

enable_vae_slicing
( )
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
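As a quick usage sketch (reusing pipe and prompt from the example above), sliced VAE decoding is toggled with a single call:

# Sketch: sliced decoding trades a bit of speed for lower peak memory.
pipe.enable_vae_slicing()
images = pipe([prompt] * 8, sag_scale=0.75).images  # a larger batch now fits in memory
pipe.disable_vae_slicing()  # revert to decoding in one step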
encode_prompt
( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )
prompt (str or List[str], optional) —
Prompt to be encoded.
device (torch.device) —
Torch device.
num_images_per_prompt (int) —
Number of images that should be generated per prompt.
do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to skip in CLIP when computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
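As a hedged sketch of how this method pairs with the prompt_embeds arguments above (it assumes the pipe instance from earlier, and the exact return convention should be verified against the installed diffusers version):

# Sketch: precompute embeddings once and reuse them across calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a photo of an astronaut riding a horse on mars",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
).images[0]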
StableDiffusionOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
( images: Union nsfw_content_detected: Optional )

Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
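A small usage sketch of this output class (again assuming the SAG pipeline from earlier):

# Sketch: inspect the fields documented above.
output = pipe("a photo of an astronaut riding a horse on mars")
flags = output.nsfw_content_detected or [False] * len(output.images)
for i, (img, is_nsfw) in enumerate(zip(output.images, flags)):
    if not is_nsfw:
        img.save(f"image_{i}.png")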
Latent Consistency Model

Latent Consistency Models (LCMs) enable quality image generation in typically 2-4 steps, making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU ...
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]

Notice that we use only 4 steps for generation, which is far fewer than what's typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not need to be doubled.
The same LCM approach also works for image-to-image generation:

import torch
from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler
from diffusers.utils import make_image_grid, load_image

unet = UNet2DConditionModel.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    subfolder="unet",
    torch_dtype=torch.float16,
)
pipe = AutoPipelineForImage2Image.from_pretrained(
    "Lykon/dreamshaper-7",
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
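From here, a hedged sketch of the actual image-to-image call; the prompt text, strength, and step count are illustrative assumptions:

# Sketch: run the image-to-image pass with only a few LCM steps.
prompt = "Astronauts in a jungle, cold color palette, detailed, 8k"
generator = torch.manual_seed(0)
image = pipe(
    prompt,
    image=init_image,
    num_inference_steps=4,  # LCM needs far fewer steps than standard SD
    guidance_scale=7.5,
    strength=0.5,
    generator=generator,
).images[0]
make_image_grid([init_image, image], rows=1, cols=2)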