>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
The output is wrapped in a PIL Image object by default. You can save the image by calling:
>>> image.save("image_of_squirrel_painting.png")
Note: You can also use the pipeline locally by downloading the weights via:
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
and then loading the saved weights into the pipeline.
>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
Running the pipeline is then identical to the code above, since it's the same model architecture.
>>> pipeline.to("cuda")
>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
>>> image.save("image_of_squirrel_painting.png")
Diffusion systems can be used with several different schedulers, each with its own pros and cons. By default, Stable Diffusion runs with the PNDMScheduler, but it's very simple to use a different scheduler. For example, if you would like to use the EulerDiscreteScheduler instead, you could use it as follows:
>>> from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline
>>> pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # change scheduler to Euler
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
For more in-depth information on how to switch between schedulers, please refer to the Using Schedulers guide.
Stability AI's Stable Diffusion model is an impressive image generation model that can do much more than just generate images from text. We have a whole documentation page dedicated to Stable Diffusion here.
If you want to know how to optimize Stable Diffusion to run on less memory, at higher inference speeds, on specific hardware such as Mac, or with ONNX Runtime, please have a look at our optimization pages:
Optimized PyTorch on GPU
Mac OS with PyTorch
ONNX
OpenVINO
If you want to fine-tune or train your diffusion model, please have a look at the training section.
Finally, please be considerate when distributing generated images publicly 🤗.
Using Diffusers with other modalities

Diffusers is in the process of expanding to modalities other than images.

Example type                        Colab    Pipeline
Molecule conformation generation             ❌

More coming soon!
VQDiffusionScheduler
Overview
The original paper, Vector Quantized Diffusion Model for Text-to-Image Synthesis, can be found here.
VQDiffusionScheduler
class diffusers.VQDiffusionScheduler
( num_vec_classes: int, num_train_timesteps: int = 100, alpha_cum_start: float = 0.99999, alpha_cum_end: float = 9e-06, gamma_cum_start: float = 9e-06, gamma_cum_end: float = 0.99999 )
Parameters
num_vec_classes (int) —
The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked
latent pixel.
num_train_timesteps (int) —
Number of diffusion steps used to train the model.
alpha_cum_start (float) —
The starting cumulative alpha value.
alpha_cum_end (float) —
The ending cumulative alpha value.
gamma_cum_start (float) —
The starting cumulative gamma value.
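The alpha and gamma parameters above define cumulative schedules that run in opposite directions over the training timesteps: alpha decays from alpha_cum_start toward alpha_cum_end while gamma rises from gamma_cum_start toward gamma_cum_end. As a rough illustration only (a simplified linear sketch, not the exact interpolation used inside diffusers' VQDiffusionScheduler), the defaults can be pictured as two opposing ramps:

```python
# Illustrative sketch of the cumulative schedules implied by the defaults above.
# NOTE: linear interpolation is an assumption for illustration; it is NOT the
# exact formula used internally by diffusers' VQDiffusionScheduler.
def linear_cumulative_schedule(start, end, num_steps):
    """Interpolate linearly from `start` to `end` over `num_steps` values."""
    step = (end - start) / (num_steps - 1)
    return [start + i * step for i in range(num_steps)]

# With the defaults: alpha decays from ~1 to ~0, while gamma (associated with
# the masked latent-pixel class) grows from ~0 to ~1 over 100 train timesteps.
alphas = linear_cumulative_schedule(0.99999, 9e-06, 100)
gammas = linear_cumulative_schedule(9e-06, 0.99999, 100)
```

Note also that num_vec_classes counts the masked latent-pixel class, so for a VQ codebook of size N it would be N + 1.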