"beta_end": 0.012, |
"beta_schedule": "scaled_linear", |
"beta_start": 0.00085, |
"clip_sample": false, |
"num_train_timesteps": 1000, |
"set_alpha_to_one": false, |
"skip_prk_steps": true, |
"steps_offset": 1, |
"timestep_spacing": "leading", |
"trained_betas": null |
} We can see that the scheduler is of type PNDMScheduler. |
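As a quick illustration of what the "scaled_linear" beta schedule in the config means, it can be reproduced in a few lines. This is a NumPy sketch (diffusers builds the equivalent schedule with torch); to the best of our knowledge, "scaled_linear" interpolates linearly in sqrt(beta)-space and then squares:

```python
import numpy as np

# Values taken from the scheduler config above
beta_start, beta_end, num_train_timesteps = 0.00085, 0.012, 1000

# "scaled_linear": linear in sqrt(beta)-space, then squared
betas = np.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps) ** 2

print(betas[0], betas[-1])  # endpoints recover beta_start and beta_end
```

The resulting 1000 betas increase monotonically from beta_start to beta_end, rising more slowly at the start of the schedule than a plain linear interpolation would.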
Cool, now let’s compare its performance to that of other schedulers.

First we define a prompt on which to test all of the different schedulers:

prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition."

Next, we create a generator from a fixed random seed to ensure reproducible results, and run the pipeline:

generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator).images[0]
image

Changing the scheduler

Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles
which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows:

pipeline.scheduler.compatibles

Output:

[diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler,
 diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler,
 diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler,
 diffusers.schedulers.scheduling_ddim.DDIMScheduler,
 diffusers.schedulers.scheduling_ddpm.DDPMScheduler,
 diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler,
 diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler,
 diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler,
 diffusers.schedulers.scheduling_pndm.PNDMScheduler,
 diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler,
 diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler,
 diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler,
 diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler,
 diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler]

Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler...
convenient config property in combination with the from_config() function:

pipeline.scheduler.config

returns a dictionary of the configuration of the scheduler:

Output:

FrozenDict([('num_train_timesteps', 1000),
            ('beta_start', 0.00085),
            ('beta_end', 0.012),
            ('beta_schedule', 'scaled_linear'),
            ('trained_betas', None),
            ('skip_prk_steps', True),
            ('set_alpha_to_one', False),
            ('prediction_type', 'epsilon'),
            ('timestep_spacing', 'leading'),
            ('steps_offset', 1),
            ('_use_default_values', ['timestep_spacing', 'prediction_type']),
            ('_class_name', 'PNDMScheduler'),
            ('_diffusers_version', '0.21.4'),
            ('clip_sample', False)])

This configuration can then be used to instantiate a scheduler of a different class that is compatible with the pipeline. Here, we change the scheduler to the DDIMScheduler:

from diffusers import DDIMScheduler

pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)

Cool, now we can run the pipeline again to compare the generation quality:

generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator).images[0]
image

If you are a JAX/Flax user, please check the Changing the Scheduler in Flax section below instead.

Compare schedulers

So far we have tried running the Stable Diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler.
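The scheduler swap above works because a scheduler config is essentially a plain dictionary of hyperparameters: from_config() lets a compatible class pick out the keys it understands and ignore the rest (for example, DDIMScheduler has no use for PNDM's skip_prk_steps). The following toy sketch illustrates that pattern; it is a simplified stand-in with made-up classes, not the actual diffusers implementation:

```python
import inspect

class SchedulerBase:
    @classmethod
    def from_config(cls, config):
        # Keep only the hyperparameters this class actually accepts;
        # keys belonging to another scheduler's config are dropped.
        accepted = set(inspect.signature(cls.__init__).parameters) - {"self"}
        return cls(**{k: v for k, v in config.items() if k in accepted})

class ToyPNDM(SchedulerBase):
    def __init__(self, num_train_timesteps=1000, skip_prk_steps=False):
        self.config = {"num_train_timesteps": num_train_timesteps,
                       "skip_prk_steps": skip_prk_steps}

class ToyDDIM(SchedulerBase):
    def __init__(self, num_train_timesteps=1000, clip_sample=True):
        self.config = {"num_train_timesteps": num_train_timesteps,
                       "clip_sample": clip_sample}

pndm = ToyPNDM(num_train_timesteps=500, skip_prk_steps=True)
ddim = ToyDDIM.from_config(pndm.config)  # shared keys transfer, others are ignored
```

Because unrecognized keys are simply dropped, any scheduler in the compatibles list can be instantiated from any other's config, which is exactly what makes the one-line swap possible.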
A number of better schedulers have since been released that can run with far fewer steps; let’s compare them here.

LMSDiscreteScheduler usually leads to better results:

from diffusers import LMSDiscreteScheduler

pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator).images[0]
image

EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high-quality results with as few as 30 steps:

from diffusers import EulerDiscreteScheduler

pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0]
image

and:

from diffusers import EulerAncestralDiscreteScheduler

pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0]
image

DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as few as 20 steps:

from diffusers import DPMSolverMultistepScheduler

pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
generator = torch.Generator(device="cuda").manual_seed(8)
image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0]
image

As you can see, most images look very similar and are arguably of very similar quality. Which scheduler to choose often depends on the specific use case; a good approach is to run several different schedulers and compare the results.

Changing the Scheduler in Flax

If you are a JAX/Flax user, you can also change the default pipeline scheduler. This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler:

import jax
import numpy as np |
from flax.jax_utils import replicate |
from flax.training.common_utils import shard |
from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler |
model_id = "runwayml/stable-diffusion-v1-5" |
scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( |
model_id, |
subfolder="scheduler" |
) |
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( |
model_id, |
scheduler=scheduler, |
revision="bf16", |
dtype=jax.numpy.bfloat16, |
) |
params["scheduler"] = scheduler_state |
# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) |
prompt = "a photo of an astronaut riding a horse on mars" |
num_samples = jax.device_count() |
prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) |
prng_seed = jax.random.PRNGKey(0) |