```python
import requests
from zipfile import ZipFile


def download(url, local_filepath):
    r = requests.get(url)
    with open(local_filepath, "wb") as f:
        f.write(r.content)
    return local_filepath


dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip"
local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1])

with ZipFile(local_filepath, "r") as zipper:
    zipper.extractall(".")
```

```python
import os

import numpy as np
from PIL import Image

dataset_path = "sample-imagenet-images"
image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)])
real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths]
```

These are 10 images from the following ImageNet-1k classes: "cassette_player", "chain_saw" (x2), "church", "gas_pump" (x3), "parachute" (x2), and "tench".

*Real images.*

Now that the images are loaded, let's apply some lightweight preprocessing on them so we can use them for FID computation.
```python
import torch
import torchvision.transforms.functional as F


def preprocess_image(image):
    image = torch.tensor(image).unsqueeze(0)
    image = image.permute(0, 3, 1, 2) / 255.0
    return F.center_crop(image, (256, 256))


real_images = torch.cat([preprocess_image(image) for image in real_images])
print(real_images.shape)
# torch.Size([10, 3, 256, 256])
```

We now load the DiTPipeline to generate images conditioned on the above-mentioned classes.

```python
from diffusers import DiTPipeline, DPMSolverMultistepScheduler

dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config)
dit_pipeline = dit_pipeline.to("cuda")
```
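Under the hood, `get_label_ids` is roughly a lookup from human-readable class names into the ImageNet class indices stored in the model's label mapping. A minimal sketch of that idea, using a hypothetical two-entry mapping (the real config holds all 1,000 ImageNet-1k classes):

```python
# Hypothetical two-entry id -> label mapping; the real DiT config
# contains all 1000 ImageNet-1k classes.
id2label = {482: "cassette player", 491: "chainsaw"}
label2id = {label: idx for idx, label in id2label.items()}


def get_label_ids(words):
    # Fail loudly on unknown class names instead of raising a bare KeyError.
    missing = [w for w in words if w not in label2id]
    if missing:
        raise ValueError(f"Unknown class names: {missing}")
    return [label2id[w] for w in words]


print(get_label_ids(["chainsaw", "cassette player"]))
# [491, 482]
```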
```python
words = [
    "cassette player",
    "chainsaw",
    "chainsaw",
    "church",
    "gas pump",
    "gas pump",
    "gas pump",
    "parachute",
    "parachute",
    "tench",
]

class_ids = dit_pipeline.get_label_ids(words)

# A seeded generator makes the generation reproducible.
generator = torch.manual_seed(0)
output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np")
fake_images = output.images

fake_images = torch.tensor(fake_images)
fake_images = fake_images.permute(0, 3, 1, 2)
print(fake_images.shape)
# torch.Size([10, 3, 256, 256])
```

Now we can compute the FID using torchmetrics.
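The `permute` above reorders the pipeline's channels-last (NHWC) numpy output into the channels-first (NCHW) layout that torchmetrics expects. The same axis reordering, sketched with a dummy numpy batch:

```python
import numpy as np

# Dummy batch in NHWC layout, as returned with output_type="np":
# (batch, height, width, channels)
batch_nhwc = np.zeros((10, 256, 256, 3), dtype=np.float32)

# Reorder the axes to NCHW (batch, channels, height, width),
# matching what FrechetInceptionDistance.update consumes.
batch_nchw = np.transpose(batch_nhwc, (0, 3, 1, 2))
print(batch_nchw.shape)
# (10, 3, 256, 256)
```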
```python
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(normalize=True)
fid.update(real_images, real=True)
fid.update(fake_images, real=False)

print(f"FID: {float(fid.compute())}")
# FID: 177.7147216796875
```

The lower the FID, the better. Several things can influence the FID here:

- Number of images (both real and fake)
- Randomness induced in the diffusion process
- Number of inference steps in the diffusion process
- The scheduler being used in the diffusion process

For the last two points, it is, therefore, hard to reproduce paper results unless the authors carefully disclose the FID measurement code. These points apply to other related metrics too, such as KID and IS. As a final step, let's visually inspect the `fake_images`.

*Fake images.*
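For intuition: FID is the Fréchet distance between two Gaussians fitted to the Inception features of the real and fake sets, $\lVert \mu_r - \mu_g \rVert^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big)$. In the univariate case this collapses to a closed form that is easy to sanity-check. A numpy sketch on toy samples (not Inception features, so the numbers are only illustrative):

```python
import numpy as np


def frechet_distance_1d(x, y):
    # Univariate Fréchet distance between Gaussians fitted to x and y:
    # (mu_x - mu_y)^2 + var_x + var_y - 2 * sqrt(var_x * var_y)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    return (mu_x - mu_y) ** 2 + var_x + var_y - 2 * np.sqrt(var_x * var_y)


rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10_000)
b = rng.normal(0.5, 1.0, 10_000)

# Identical samples give a distance of ~0; a shifted mean raises it by
# roughly the squared mean difference (~0.25 here).
print(frechet_distance_1d(a, a))
print(frechet_distance_1d(a, b))
```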
## K-Diffusion

k-diffusion is a popular library created by Katherine Crowson. We provide `StableDiffusionKDiffusionPipeline` and `StableDiffusionXLKDiffusionPipeline` that allow you to run Stable Diffusion with samplers from k-diffusion. Note that most of the samplers from k-diffusion are implemented in Diffusers and we recommend using the existing schedulers instead.
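As a sketch of the usage (not run here, since it downloads model weights and needs a GPU), a k-diffusion sampler is selected by its k-diffusion name via `set_scheduler`. The checkpoint below is one example choice, not the only option:

```python
import torch
from diffusers import StableDiffusionKDiffusionPipeline

# Load a Stable Diffusion checkpoint and pick a k-diffusion sampler by name.
pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.set_scheduler("sample_euler")

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("astronaut.png")
```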
### StableDiffusionKDiffusionPipeline

- **vae** (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** (CLIPTextModel) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- **tokenizer** (CLIPTokenizer) — Tokenizer of class CLIPTokenizer.
- **unet** (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
- **scheduler** (SchedulerMixin) — A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- **safety_checker** (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
- **feature_extractor** (CLIPImageProcessor) — Model that extracts features from generated images to be used as inputs for the `safety_checker`.

Pipeline for text-to-image generation using Stable Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, or running on a particular device).

The pipeline also inherits the following loading methods:

- `load_textual_inversion()` for loading textual inversion embeddings
- `load_lora_weights()` for loading LoRA weights
- `save_lora_weights()` for saving LoRA weights
- **prompt** (str or List[str]) — prompt to be encoded
- **device** (torch.device) — torch device
- **num_images_per_prompt** (int) — number of images that should be generated per prompt
- **do_classifier_free_guidance** (bool) — whether to use classifier-free guidance or not
- **negative_prompt** (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- **prompt_embeds** (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- **negative_prompt_embeds** (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be generated from the `negative_prompt` input argument.
- **lora_scale** (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- **clip_skip** (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

### StableDiffusionXLKDiffusionPipeline

class diffusers.StableDiffusionXLKDiffusionPipeline( vae: AutoencoderKL, text_encoder: CLIPTextModel, text_encoder_2: CLIPTextModelWithProjection, … )
- **vae** (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- **text_encoder** (CLIPTextModel) — Frozen text-encoder. Stable Diffusion XL uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- **text_encoder_2** (CLIPTextModelWithProjection) — Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the …