```python
from diffusers import StableDiffusionPipeline
import torch

original = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
```

Given a prompt, get the inference time for the original model:

```python
import time

seed = 2023
generator = torch.manual_seed(seed)

NUM_ITERS_TO_RUN = 3
NUM_INFERENCE_STEPS = 25
NUM_IMAGES_PER_PROMPT = 4

prompt = "a golden vase with different flowers"

start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = original(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT,
    ).images
end = time.time_ns()

original_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {original_sd} ms\n")
```
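Note that `time.time_ns()` returns integer nanoseconds, so dividing the elapsed value by `1e6` yields milliseconds. A quick sanity check of the conversion (the elapsed value here is illustrative):

```python
elapsed_ns = 45_781_500_000  # illustrative elapsed time in nanoseconds
elapsed_ms = elapsed_ns / 1e6  # 1 ms = 1e6 ns
print(f"Execution time -- {elapsed_ms:.1f} ms")
```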
`"Execution time -- 45781.5 ms"`

Time the distilled model inference:

```python
start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = distilled(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT,
    ).images
end = time.time_ns()

distilled_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {distilled_sd} ms\n")
```
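As a quick sanity check, the speedup implied by the two wall-clock measurements can be computed directly (times taken from the runs reported here):

```python
original_ms = 45781.5   # original Stable Diffusion, 3 runs of 4 images
distilled_ms = 29884.2  # distilled Stable Diffusion, same workload
speedup = original_ms / distilled_ms
print(f"Speedup: {speedup:.2f}x")
```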
`"Execution time -- 29884.2 ms"`

original Stable Diffusion (45781.5 ms) | distilled Stable Diffusion (29884.2 ms)

## Tiny AutoEncoder

To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE:

```python
from diffusers import AutoencoderTiny

distilled.vae = AutoencoderTiny.from_pretrained(
    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
).to("cuda")
```

Time the distilled model and distilled VAE inference:

```python
start = time.time_ns()
for _ in range(NUM_ITERS_TO_RUN):
    images = distilled(
        prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        generator=generator,
        num_images_per_prompt=NUM_IMAGES_PER_PROMPT,
    ).images
end = time.time_ns()

distilled_tiny_sd = f"{(end - start) / 1e6:.1f}"
print(f"Execution time -- {distilled_tiny_sd} ms\n")
```
`"Execution time -- 27165.7 ms"`

distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms)
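Since each timed run generates `NUM_ITERS_TO_RUN × NUM_IMAGES_PER_PROMPT` images, the totals above can also be read as per-image latencies (a rough sketch using the numbers reported above):

```python
NUM_ITERS_TO_RUN = 3
NUM_IMAGES_PER_PROMPT = 4

# Total wall-clock times (ms) measured above for 3 runs of 4 images each.
for label, total_ms in [
    ("original", 45781.5),
    ("distilled", 29884.2),
    ("distilled + tiny VAE", 27165.7),
]:
    per_image = total_ms / (NUM_ITERS_TO_RUN * NUM_IMAGES_PER_PROMPT)
    print(f"{label}: {per_image:.1f} ms/image")
```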
## 🧨 Diffusers’ Ethical Guidelines

### Preamble

Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real-world applications and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide…

We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback.

### Scope

The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concern…
## Text-to-image

When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”), which is also known as a prompt. From a very hi…
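As a purely illustrative sketch (not the actual Diffusers internals, which use a UNet noise predictor and a scheduler), the iterative "start from noise, repeatedly remove some of it" idea looks like this:

```python
# Toy stand-in for a denoising loop: each step removes a fraction of the
# remaining "noise" in a small latent vector. Real pipelines instead
# predict the noise with a UNet and let a scheduler compute the update.
def denoise_step(latent, step, total_steps):
    keep = 1 - 1 / (total_steps - step + 1)  # remove a bit more noise each step
    return [x * keep for x in latent]

latent = [1.0, -0.5, 0.25, 2.0]  # pretend this is the random initial noise
total_steps = 25
for step in range(total_steps):
    latent = denoise_step(latent, step, total_steps)

# The telescoping product shrinks each component by a factor of total_steps + 1.
print(latent)
```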
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
```

Pass a prompt to the pipeline to generate an image:

```python
image = pipeline(
    "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k"
).images[0]
image
```

## Popular models

The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different b…
### Stable Diffusion v1.5

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0]
image
```

### Stable Diffusion XL

SDXL is a much larger version of the previous Stable Diffusion models and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images of centered subjects. Take a look at the more comprehensive …
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0]
image
```

### Kandinsky 2.2

The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is:

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0]
image
```
### ControlNet

ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add…

```python
from diffusers import ControlNetModel, AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png")
```

Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image:

```python
pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
generator = torch.Generator("cuda").manual_seed(31)
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0]
image
```