The number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) —
The device to which the timesteps should be moved. If None, the timesteps are not moved.

Sets the discrete timesteps used for the diffusion chain (to be run before inference).

step
( model_output: FloatTensor, timestep: int, sample: FloatTensor, generator = None, return_dict: bool = True ) → SchedulerOutput or tuple

Parameters:
model_output (torch.FloatTensor) —
The direct output from the learned diffusion model.
timestep (int) —
The current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) —
A current instance of a sample created by the diffusion process.
generator (torch.Generator, optional) —
A random number generator.
return_dict (bool) —
Whether or not to return a SchedulerOutput instead of a plain tuple.

Returns:
SchedulerOutput or tuple —
If return_dict is True, a SchedulerOutput is returned; otherwise a tuple is returned whose first element is the sample tensor.

Predicts the sample from the previous timestep by reversing the SDE. This function propagates the sample with the multistep DPMSolver.

SchedulerOutput

class diffusers.schedulers.scheduling_utils.SchedulerOutput
( prev_sample: FloatTensor )

Parameters:
prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
The computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.

Base class for the output of a scheduler's step function.
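The step/SchedulerOutput contract documented above can be illustrated with a toy scheduler. This is a pure-Python sketch, not the actual diffusers implementation: the update rule inside step() is a made-up stand-in for the real SDE reversal, and plain lists stand in for torch tensors.

```python
# Illustrative sketch of the scheduler contract: set_timesteps() fixes the
# discrete timesteps, and step() maps (model_output, timestep, sample) to the
# previous sample, returned as SchedulerOutput or a tuple per `return_dict`.
from dataclasses import dataclass


@dataclass
class SchedulerOutput:
    prev_sample: list  # stands in for a torch.FloatTensor


class ToyScheduler:
    def set_timesteps(self, num_inference_steps):
        # Discrete timesteps for the diffusion chain, highest first.
        self.timesteps = list(range(num_inference_steps - 1, -1, -1))

    def step(self, model_output, timestep, sample, generator=None, return_dict=True):
        # Made-up update rule; a real scheduler reverses the SDE here.
        prev_sample = [s - m / (timestep + 1) for s, m in zip(sample, model_output)]
        if not return_dict:
            return (prev_sample,)
        return SchedulerOutput(prev_sample=prev_sample)


# A minimal denoising loop using the contract.
scheduler = ToyScheduler()
scheduler.set_timesteps(4)
sample = [1.0, 2.0]
for t in scheduler.timesteps:
    model_output = [0.1 * x for x in sample]  # stands in for the model call
    sample = scheduler.step(model_output, t, sample).prev_sample
```

The point of the sketch is the data flow: prev_sample from each step() call becomes the sample fed to the next iteration, exactly as the prev_sample docstring above prescribes.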
Evaluating Diffusion Models

Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose o...
However, quantitative metrics don't necessarily correspond to image quality. So, usually, a combination of both qualitative and quantitative evaluations provides a stronger signal when choosing one model over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise sched...
DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure ...
# prompts = load_dataset("nateraw/parti-prompts", split="train")
# prompts = prompts.shuffle()
# sample_prompts = [prompts[i]["Prompt"] for i in range(5)]

# Fixing these sample prompts in the interest of reproducibility.
sample_prompts = [
    "a corgi",
    "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky",
    "a car with no windows",
    "a cube made of porcupine",
    'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.',
]

Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint):

import torch

seed = 0
generator = torch.manual_seed(seed)
images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images

We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5) yields:
Once several images are generated from all the prompts ...
For more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the training progress. In our training scripts, we support this utility with additional support for logging to TensorBoard and Weights & Biases.

Quantitative Evaluation

In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score, CLIP directional similarity, and FID.

Text-guided image generation

CLIP score measures the compatibility of image-caption pairs. Higher CLIP sc...
import torch
from diffusers import StableDiffusionPipeline

model_ckpt = "CompVis/stable-diffusion-v1-4"
sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda")

Generate some images with multiple prompts:

prompts = [
    "a photo of an astronaut riding a horse on mars",
    "A high tech solarpunk utopia in the Amazon rainforest",
    "A pikachu fine dining with a view to the Eiffel Tower",
    "A mecha robot in a favela in expressionist style",
    "an insect robot preparing a delicious meal",
    "A small cabin on top of a snowy mountain in the style of Disney, artstation",
]

images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images
print(images.shape)
# (6, 512, 512, 3)

And then, we calculate the CLIP score:

from torchmetrics.functional.multimodal import clip_score
from functools import partial

clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16")

def calculate_clip_score(images, prompts):
    images_int = (images * 255).astype("uint8")
    score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach()
    return round(float(score), 4)

sd_clip_score = calculate_clip_score(images, prompts)
print(f"CLIP score: {sd_clip_score}")
# CLIP score: 35.7038

In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score over the images generated for each prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline, we should pass a generator whi...
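The per-prompt averaging mentioned above could be sketched like this. The numbers below are made-up placeholders; in practice each score would come from clip_score_fn applied to one generated image.

```python
# Hypothetical per-prompt scores: 3 images generated for each of 2 prompts.
# In practice each number would come from clip_score_fn on a single image.
scores_per_prompt = {
    "a corgi": [34.1, 35.6, 33.9],
    "a car with no windows": [36.2, 35.8, 36.0],
}

def average_clip_score(scores_per_prompt):
    # Average over the images of each prompt, then over the prompts.
    prompt_means = [sum(s) / len(s) for s in scores_per_prompt.values()]
    return round(sum(prompt_means) / len(prompt_means), 4)

overall = average_clip_score(scores_per_prompt)
# overall == 35.2667
```

Averaging per prompt first (rather than pooling all images together) keeps a prompt with many samples from dominating the comparison.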
fixed seed with the v1-4 Stable Diffusion checkpoint:

seed = 0
generator = torch.manual_seed(seed)
images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images

Then we load the v1-5 checkpoint to generate images:

model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5"
sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=torch.float16).to("cuda")

# Re-seed so both checkpoints start from the same random state.
generator = torch.manual_seed(seed)
images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images

And finally, we compare their CLIP scores:

sd_clip_score_1_4 = calculate_clip_score(images, prompts)
print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}")
# CLIP Score with v-1-4: 34.9102

sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts)
print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}")
# CLIP Score with v-1-5: 36.2137

It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. By construction, there... were crawled from the web and extracted from alt and similar tags associated with an image on the internet. They are not necessarily representative of what a human being would use to describe an image. Hence we had to “engineer” some prompts here.

Image-conditioned text-to-image generation

In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline as an example. It takes an edit instruction as an input prompt and an input image to be...
from datasets import load_dataset

dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train")
dataset.features

{'input': Value(dtype='string', id=None),
 'edit': Value(dtype='string', id=None),
 'output': Value(dtype='string', id=None),
 'image': Image(decode=True, id=None)}

Here we have:
input is a caption corresponding to the image.
edit denotes the edit instruction.
output denotes the modified caption reflecting the edit instruction.

Let’s take a look at a sample.

idx = 0
print(f"Original caption: {dataset[idx]['input']}")
print(f"Edit instruction: {dataset[idx]['edit']}")
print(f"Modified caption: {dataset[idx]['output']}")

Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cl...
Edit instruction: make the isles all white marble
Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowher...
from diffusers import StableDiffusionInstructPix2PixPipeline

instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

Now, we perform the edits:

import numpy as np