| Pipeline | Paper | Tasks |
|---|---|---|
| … | … | Dual Image and Text Guided Generation |
| vq_diffusion | Vector Quantized Diffusion Model for Text-to-Image Synthesis | Text-to-Image Generation |
Note: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers. |
However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the Examples below. |
Pipelines API |
Diffusion models often consist of multiple independently-trained models or other previously existing components. |
Each model has been trained independently on a different task and the scheduler can easily be swapped out and replaced with a different one. |
During inference, however, we want to be able to easily load all components and use them together, even if one component, e.g. CLIP’s text encoder, originates from a different library, such as Transformers. To that end, all pipelines provide the following functionality (a short usage sketch follows the list):
from_pretrained method that accepts a Hugging Face Hub repository id, e.g. runwayml/stable-diffusion-v1-5, or a path to a local directory, e.g. "./stable-diffusion". To correctly retrieve which models and components should be loaded, one has to provide a model_index.json file, e.g. runwayml/stable-diffusion-v1-5/model_index.json, which defines all components that should be loaded into the pipeline. More specifically, each model/component is defined in the format <name>: ["<library>", "<class name>"], where <name> is the attribute name given to the loaded instance of <class name>, which can be found in the library or pipeline folder called "<library>".
save_pretrained method that accepts a local path, e.g. ./stable-diffusion, under which all models/components of the pipeline will be saved. For each component/model, a folder named after its attribute name is created inside the local path, e.g. ./stable-diffusion/unet. In addition, a model_index.json file is created at the root of the local path, e.g. ./stable-diffusion/model_index.json, so that the complete pipeline can again be instantiated from the local path.
to method that accepts a string or torch.device and moves all models of type torch.nn.Module to the passed device. The behavior is fully analogous to PyTorch’s to method.
__call__ method to use the pipeline in inference. __call__ defines the inference logic of the pipeline and should ideally encompass all aspects of it, from pre-processing to forwarding tensors through the different models and schedulers, as well as post-processing. The API of the __call__ method can strongly vary from pipeline to pipeline, so to understand which inputs can be passed to each pipeline, one should look directly into the respective pipeline.
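A minimal usage sketch tying these four methods together, reusing the runwayml/stable-diffusion-v1-5 checkpoint mentioned above (the local save path is illustrative):
from diffusers import DiffusionPipeline

# from_pretrained reads model_index.json, which maps each attribute name to
# ["<library>", "<class name>"], e.g. "unet": ["diffusers", "UNet2DConditionModel"],
# and loads every component accordingly
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# save_pretrained writes one folder per component plus a model_index.json at the root
pipe.save_pretrained("./stable-diffusion")

# the complete pipeline can then be re-instantiated from the local path
pipe = DiffusionPipeline.from_pretrained("./stable-diffusion")

# to moves all torch.nn.Module components to the given device
pipe = pipe.to("cuda")

# __call__ runs the full inference loop; its exact signature varies per pipeline
image = pipe("a photo of an astronaut riding a horse on mars").images[0]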
Note: All pipelines have PyTorch’s autograd disabled by decorating the __call__ method with a torch.no_grad decorator, because pipelines should not be used for training. If you want to store gradients during the forward pass, we recommend writing your own pipeline; see also our community examples.
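A minimal sketch of such a custom pipeline (the class and its diffusion loop are hypothetical): it simply omits the torch.no_grad decorator on __call__ so that gradients are kept.
import torch
from diffusers import DiffusionPipeline

class MyGradientPipeline(DiffusionPipeline):
    # hypothetical pipeline whose __call__ keeps autograd enabled
    def __init__(self, unet, scheduler):
        super().__init__()
        self.register_modules(unet=unet, scheduler=scheduler)

    def __call__(self, sample, num_inference_steps=50):
        # no @torch.no_grad() here, so gradients flow through the loop
        self.scheduler.set_timesteps(num_inference_steps)
        for t in self.scheduler.timesteps:
            noise_pred = self.unet(sample, t).sample
            sample = self.scheduler.step(noise_pred, t, sample).prev_sample
        return sample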
Contribution |
We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire for all of our pipelines to be self-contained, easy to tweak, beginner-friendly, and one-purpose-only.
Self-contained: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should either be directly defined in the pipeline file itself, be inherited from (and only from) the DiffusionPipeline class, or be directly attached to the model and scheduler components of the pipeline.
Easy-to-use: Pipelines should be extremely easy to use - one should be able to load the pipeline and |
use it for its designated task, e.g. text-to-image generation, in just a couple of lines of code. Most logic, including pre-processing, an unrolled diffusion loop, and post-processing, should happen inside the __call__ method.
Easy-to-tweak: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part can easily be adapted and tweaked.
One-purpose-only: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, e.g. image2image translation and in-painting, pipelines shall be used for one task only to keep them easy-to-tweak and readable. |
Examples |
Text-to-Image generation with Stable Diffusion |
# make sure you're logged in with `huggingface-cli login` |
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler |
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") |
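# the LMSDiscreteScheduler imported above is unused in the original snippet;
# swapping it in for the default scheduler (an illustrative addition, not part
# of the original example) shows how scheduler components can be replaced
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)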
pipe = pipe.to("cuda") |
prompt = "a photo of an astronaut riding a horse on mars" |
image = pipe(prompt).images[0] |
image.save("astronaut_rides_horse.png") |
Image-to-Image text-guided generation with Stable Diffusion |
The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. |
import torch
import requests
from PIL import Image
from io import BytesIO
from diffusers import StableDiffusionImg2ImgPipeline
# load the pipeline |
device = "cuda" |
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to( |
device |
) |
# let's download an initial image |
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" |
response = requests.get(url) |
init_image = Image.open(BytesIO(response.content)).convert("RGB") |
init_image = init_image.resize((768, 512)) |
prompt = "A fantasy landscape, trending on artstation" |
images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images |
images[0].save("fantasy_landscape.png") |
You can also run this example on Colab.
Tweak prompts reusing seeds and latents |
You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. This notebook shows how to do it step by step. You can also run it in Google Colab.
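A minimal sketch of the idea, assuming the runwayml/stable-diffusion-v1-5 checkpoint used above (the seed and latent shape are illustrative):
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# fix the seed so the exact same latents (and thus the same result) can be re-created
generator = torch.Generator(device="cuda").manual_seed(42)
latents = torch.randn(
    (1, pipe.unet.config.in_channels, 64, 64),  # 64x64 latents decode to 512x512 images
    generator=generator,
    device="cuda",
)

# reuse the identical latents while tweaking only the prompt
image = pipe("a photo of an astronaut riding a horse on mars", latents=latents).images[0]
image.save("reproduced_astronaut.png")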
In-painting using Stable Diffusion |
The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and text prompt. |
import PIL |
import requests |
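# the original example is truncated after the two imports above; the rest of
# this block is a reconstruction sketch, and the checkpoint name and image
# URLs below are assumptions
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
image.save("yellow_cat.png")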