Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

from_single_file() for loading .ckpt files

Parameters

vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (CLIPTextModel) —
Frozen text-encoder (clip-vit-large-patch14).

tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.

unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded image latents.

scheduler (SchedulerMixin) —
An EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents.

__call__

( prompt: Union[str, List[str]] image: Union[torch.FloatTensor, PIL.Image.Image] = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Optional[Union[str, List[str]]] = None generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None latents: Optional[torch.FloatTensor] = None output_type: Optional[str] = 'pil' return_dict: bool = True callback: Optional[Callable] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple

Parameters

prompt (str or List[str]) —
The prompt or prompts to guide image upscaling.

image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) —
Image or tensor representing an image batch to be upscaled. If it's a tensor, it can be either a latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and encoded using this pipeline's vae encoder.

num_inference_steps (int, optional, defaults to 75) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

guidance_scale (float, optional, defaults to 9.0) —
A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.

negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).

eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler and is ignored in other schedulers.

generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.

latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.

return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.

callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).

callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.

Returns
StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned; otherwise a tuple is returned, where the first element is a list with the generated images.
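The latent-versus-image rule documented for the image argument (a tensor with image.shape[1] == 4 is treated as latents, anything else as a pixel image to be VAE-encoded) can be sketched with a small hypothetical helper; this illustrates the documented rule only and is not the pipeline's actual code:

```python
def classify_image_input(shape):
    """Illustrate the documented dispatch rule for the `image` argument:
    a tensor whose channel dimension (shape[1]) is 4 is assumed to be
    Stable Diffusion latents; anything else is treated as a pixel-space
    image that would first be encoded with the pipeline's VAE.
    Hypothetical helper for illustration, not the pipeline's real code."""
    if len(shape) >= 2 and shape[1] == 4:
        return "latents"  # used directly as low-resolution latents
    return "image"        # would be encoded by pipeline.vae first

# A latent batch from output_type="latent": (batch, 4, height/8, width/8)
print(classify_image_input((1, 4, 64, 64)))    # latents
# An RGB image tensor in [-1, 1]: (batch, 3, height, width)
print(classify_image_input((1, 3, 512, 512)))  # image
```

This is why the example below can pass low_res_latents straight to the upscaler without decoding first: its channel dimension is 4.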
The call function to the pipeline for generation.

Examples:

>>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline
>>> import torch

>>> pipeline = StableDiffusionPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
... )
>>> pipeline.to("cuda")

>>> model_id = "stabilityai/sd-x2-latent-upscaler"
>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
>>> upscaler.to("cuda")

>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic"
>>> generator = torch.manual_seed(33)

>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images

>>> with torch.no_grad():
...     image = pipeline.decode_latents(low_res_latents)
>>> image = pipeline.numpy_to_pil(image)[0]

>>> image.save("../images/a1.png")

>>> upscaled_image = upscaler(
...     prompt=prompt,
...     image=low_res_latents,
...     num_inference_steps=20,
...     guidance_scale=0,
...     generator=generator,
... ).images[0]
>>> upscaled_image.save("../images/a2.png")

enable_sequential_cpu_offload

( gpu_id: Optional[int] = None device: Union[torch.device, str] = 'cuda' )

Parameters

gpu_id (int, optional) —
The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.

device (torch.device or str, optional, defaults to "cuda") —
The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will default to "cuda".

Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU, then moved to torch.device('meta'), and loaded to the GPU only when their specific submodule has its forward method called. Offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.

enable_attention_slicing

( slice_size: Union[str, int] = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") —
When "auto", the input to the attention heads is halved, so attention is computed in two steps. If "max", the maximum amount of memory is saved by running only one slice at a time. If a number is provided, as many slices as attention_head_dim // slice_size are used; in this case, attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor into slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful for saving some memory in exchange for a small speed decrease.

⚠️ Don't enable attention slicing if you're already using scaled_dot_product_attention (SDPA) from PyTorch
2.0 or xFormers. These attention computations are already very memory efficient, so you won't need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!

Examples:

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5",
...     torch_dtype=torch.float16,
...     use_safetensors=True,
... )

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]

disable_attention_slicing

( )

Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.

enable_xformers_memory_efficient_attention

( attention_op: Optional[Callable] = None )

Parameters

attention_op (Callable, optional) —
Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.

Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.

⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence.
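The slice_size options documented for enable_attention_slicing() above reduce to simple arithmetic. The following hypothetical helper (an illustration of the parameter description, not diffusers' internal code) shows what each setting means for the size of each attention slice:

```python
def resolve_slice_size(slice_size, attention_head_dim):
    """Map enable_attention_slicing's slice_size argument to the size of
    each attention slice, following the parameter description above.
    Hypothetical helper for illustration, not diffusers' internal code."""
    if slice_size == "auto":
        # Halve the input to the attention heads: attention runs in two steps.
        return attention_head_dim // 2
    if slice_size == "max":
        # Maximum memory savings: process only one slice at a time.
        return 1
    # An integer slice_size must evenly divide attention_head_dim.
    if attention_head_dim % slice_size != 0:
        raise ValueError(
            f"attention_head_dim ({attention_head_dim}) must be a "
            f"multiple of slice_size ({slice_size})"
        )
    return slice_size

# e.g. with attention_head_dim = 8:
print(resolve_slice_size("auto", 8))  # 4 -> attention computed in 2 steps
print(resolve_slice_size("max", 8))   # 1 -> 8 steps, one slice each
print(resolve_slice_size(2, 8))       # 2 -> 8 // 2 = 4 slices
```

Smaller slices trade more sequential steps (slower) for a smaller peak activation footprint, which is why "max" saves the most memory.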