Returns: ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. If return_dict is True, a ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned; otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
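The return_dict convention described above is easy to sketch with a stand-in output class. The names below are illustrative, not the real diffusers classes; the real ConsistencyDecoderSchedulerOutput carries the sample as an attribute in exactly this fashion:

```python
from dataclasses import dataclass


@dataclass
class SchedulerOutput:
    # Stand-in for ConsistencyDecoderSchedulerOutput: holds the denoised sample.
    prev_sample: list


def step(model_output, return_dict=True):
    # Toy "step": pretend the denoised sample is just the model output.
    prev_sample = model_output
    if return_dict:
        return SchedulerOutput(prev_sample=prev_sample)
    # Otherwise a plain tuple is returned, sample tensor first.
    return (prev_sample,)


out = step([0.1, 0.2], return_dict=True)
print(out.prev_sample)  # [0.1, 0.2]

(sample,) = step([0.1, 0.2], return_dict=False)
print(sample)  # [0.1, 0.2]
```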
What is safetensors?
safetensors is a different format from the classic .bin PyTorch format, which relies on pickle. It contains the exact same data: just the model weights (tensors). Pickle is notoriously unsafe, allowing a malicious file to execute arbitrary code. The Hub itself tries to prevent issues from it, but it's not a silver bullet. The first and foremost goal of safetensors is to make loading machine learning models safe, in the sense that no takeover of your computer can be done. Hence the name.
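The on-disk layout that makes this possible is essentially a small JSON header describing each tensor, followed by its raw bytes: loading is just parsing sizes and offsets, with no code execution. Here is a toy stdlib sketch of that idea (an illustration only, not the real format or library):

```python
import json
import struct


def toy_save(tensors: dict) -> bytes:
    """Serialize {name: list of floats} as: header length, JSON header, raw bytes."""
    header, body = {}, b""
    for name, values in tensors.items():
        header[name] = {"offset": len(body), "count": len(values)}
        body += struct.pack(f"{len(values)}f", *values)
    head = json.dumps(header).encode()
    return struct.pack("<Q", len(head)) + head + body


def toy_load(blob: bytes) -> dict:
    """Inverse of toy_save: only parses offsets, never executes anything."""
    (head_len,) = struct.unpack_from("<Q", blob, 0)
    header = json.loads(blob[8 : 8 + head_len])
    body = blob[8 + head_len :]
    return {
        name: list(struct.unpack_from(f'{meta["count"]}f', body, meta["offset"]))
        for name, meta in header.items()
    }


blob = toy_save({"w": [1.0, 2.0], "b": [0.5]})
print(toy_load(blob))  # {'w': [1.0, 2.0], 'b': [0.5]}
```

Contrast this with pickle, where "loading" means running a small bytecode program embedded in the file.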
Why use safetensors?
Safety is one reason, if you're attempting to use a little-known model and you're not sure about the source of the file. A secondary reason is loading speed: safetensors can load models much faster than regular pickle files. If you spend a lot of time switching models, this can be a huge time saver.
Numbers taken on an AMD EPYC 7742 64-Core Processor:
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
# Loaded in safetensors 0:00:02.033658
# Loaded in Pytorch 0:00:02.663379
This is the entire loading time; the actual weight-loading time for 500MB of weights:
Safetensors: 3.4873ms
PyTorch: 172.7537ms
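The gap has a simple cause: unpickling parses a stream and rebuilds Python objects, while a raw-bytes format can reinterpret memory that is already there. A rough stdlib analogy (not the real libraries, and the figures depend entirely on your machine):

```python
import pickle
import timeit
from array import array

# ~4MB of fake float32 weights.
weights = array("f", [0.0] * 1_000_000)
pickled = pickle.dumps(weights)
raw = weights.tobytes()

# Unpickling walks the pickle stream and reconstructs the object;
# a zero-copy view just reinterprets the existing bytes as floats.
t_pickle = timeit.timeit(lambda: pickle.loads(pickled), number=20)
t_view = timeit.timeit(lambda: memoryview(raw).cast("f"), number=20)

print(f"pickle.loads: {t_pickle:.4f}s   zero-copy view: {t_view:.4f}s")
```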
Performance in general is a tricky business, and there are a few things to understand:
If you're using the model for the first time from the Hub, you will have to download the weights. The download is extremely likely to be much slower than any loading method, so you will not see any difference.
If you're loading the model for the first time after a reboot, your machine will have to actually read from disk. That is likely to be equally slow in both cases, so again the speed difference may not be as visible (this depends on hardware and the actual model).
The best performance benefit comes when the model was already loaded previously on your computer and you're switching from one model to another. Your OS tries hard not to read from disk, since that is slow, so it keeps the files around in RAM, making reloading much faster. Since safetensors does zero-copy of the tensors, reloading will be faster than PyTorch, which has at least one extra copy to do.
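"Zero-copy" here means handing out views backed by the OS page cache instead of copying bytes into fresh buffers. The stdlib mmap module shows the idea (a sketch of the mechanism, not of how the safetensors library is implemented internally):

```python
import mmap
import os
import tempfile

# Write some fake "weights" to disk.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 4096)

# Memory-map the file: the view below is backed by the OS page cache,
# so no per-byte copy into a new buffer is made.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm)[:1024]  # a zero-copy slice of the file
    length, first = len(view), view[0]
    view.release()
    mm.close()

os.remove(path)
print(length, first)  # 1024 0
```

On a warm cache, those pages are already in RAM, which is why reloading a previously used model is so much faster.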
How to use safetensors?
If you have safetensors installed, and all the weights are available in safetensors format, then by default they will be used instead of the PyTorch weights.
If you are really paranoid about this, the ultimate weapon is disabling torch.load:
import torch

def _raise():
    raise RuntimeError("I don't want to use pickle")

torch.load = lambda *args, **kwargs: _raise()
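The same trick generalizes beyond torch.load: the stdlib pickle entry points can be disabled wholesale. A small sketch of the pattern (using plain pickle so it needs no extra dependencies):

```python
import pickle


def _raise(*args, **kwargs):
    raise RuntimeError("I don't want to use pickle")


# Block the stdlib entry points as well.
pickle.load = _raise
pickle.loads = _raise

try:
    pickle.loads(b"\x80\x04N.")  # a pickled None; never actually parsed
except RuntimeError as exc:
    blocked = str(exc)

print(blocked)  # I don't want to use pickle
```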
I want to use model X but it doesn't have safetensors weights.
Just go to this space. The space will download the pickled version, convert it, and upload it to the Hub as a PR, creating a new PR with the weights, let's say refs/pr/22. If anything bad is contained in the file, it's the Hugging Face Hub that will get issues, not your own computer. And we're equipped to deal with it.
Then, in order to use the model even before the branch gets accepted by the original author, you can do:
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22")
Or you can test it directly online with this space.
And that's it!
Anything unclear, concerns, or found a bug? Open an issue
Paint by Example

Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen.

The abstract from the paper is:

Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.

The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo.

Tips

Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images.

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
PaintByExamplePipeline

class diffusers.PaintByExamplePipeline
( vae: AutoencoderKL, image_encoder: PaintByExampleImageEncoder, unet: UNet2DConditionModel, scheduler: Union, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = False )

Parameters

vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
image_encoder (PaintByExampleImageEncoder) — Encodes the example input image. The unet is conditioned on the example image instead of a text prompt.
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model's potential harms.
feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

🧪 This is an experimental feature!

Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__
( example_image: Union, image: Union, mask_image: Union, height: Optional = None, width: Optional = None, num_inference_steps: int = 50, guidance_scale: float = 5.0, negative_prompt: Union = None, num_images_per_prompt: Optional = 1, eta: float = 0.0, generator: Union = None, latents: Optional = None, output_type: Optional = 'pil', return_dict: bool = True, callback: Optional = None, callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple

Parameters

example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — An example image to guide image generation.
image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — Image or tensor representing an image batch to be inpainted (parts of the image are masked out with mask_image and repainted according to prompt).
mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel