# ConsistencyDecoderScheduler
This scheduler is a part of the `ConsistencyDecoderPipeline` and was introduced in [DALL-E 3](https://openai.com/dall-e-3).
The original codebase can be found at [openai/consistency_models](https://github.com/openai/consistency_models).
## ConsistencyDecoderScheduler[[diffusers.schedulers.ConsistencyDecoderScheduler]]
#### diffusers.schedulers.ConsistencyDecoderScheduler[[diffusers.schedulers.ConsistencyDecoderScheduler]]
[Source](https://github.com/huggingface/diffusers/blob/vr_12249/src/diffusers/schedulers/scheduling_consistency_decoder.py#L80)
A scheduler for the consistency decoder used in Stable Diffusion pipelines.
This scheduler implements a two-step denoising process using consistency models for decoding latent representations
into images.
This model inherits from [SchedulerMixin](/docs/diffusers/pr_12249/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/pr_12249/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
**Parameters:**
num_train_timesteps (`int`, *optional*, defaults to `1024`) : The number of diffusion steps to train the model.
sigma_data (`float`, *optional*, defaults to `0.5`) : The standard deviation of the data distribution. Used for computing the skip and output scaling factors.
#### scale_model_input[[diffusers.schedulers.ConsistencyDecoderScheduler.scale_model_input]]
[Source](https://github.com/huggingface/diffusers/blob/vr_12249/src/diffusers/schedulers/scheduling_consistency_decoder.py#L158)
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
**Parameters:**
sample (`torch.Tensor`) : The input sample.
timestep (`int`, *optional*) : The current timestep in the diffusion chain.
**Returns:**
`torch.Tensor`
A scaled input sample.
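For intuition, the skip and output scaling factors that `sigma_data` feeds into can be sketched using the preconditioning from the consistency-models paper. This is an illustration only; the scheduler's exact internal formulas may differ:

```python
import math

def consistency_scalings(sigma: float, sigma_data: float = 0.5):
    """Skip/output scaling factors from consistency-models preconditioning.

    Illustrative sketch only; not the scheduler's actual implementation.
    """
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / math.sqrt(sigma**2 + sigma_data**2)
    return c_skip, c_out

# At low noise (small sigma), c_skip dominates and the input passes through;
# at high noise, c_out dominates and the network output takes over.
c_skip, c_out = consistency_scalings(sigma=1.0)
```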
#### set_timesteps[[diffusers.schedulers.ConsistencyDecoderScheduler.set_timesteps]]
[Source](https://github.com/huggingface/diffusers/blob/vr_12249/src/diffusers/schedulers/scheduling_consistency_decoder.py#L121)
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
**Parameters:**
num_inference_steps (`int`, *optional*) : The number of diffusion steps used when generating samples with a pre-trained model. Currently, only `2` inference steps are supported.
device (`str` or `torch.device`, *optional*) : The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
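Since only two inference steps are supported, the schedule degenerates to a single pair of timesteps. As a rough illustration of its shape (the actual indices are defined in the scheduler source and may differ), a hypothetical two-step schedule over `num_train_timesteps = 1024` could look like this:

```python
def make_two_step_schedule(num_train_timesteps: int = 1024):
    """Hypothetical helper: pick two timesteps, highest noise first.

    The real scheduler derives its own indices; this only illustrates
    the high-to-low ordering of a two-step schedule.
    """
    return [num_train_timesteps - 1, num_train_timesteps // 2]

timesteps = make_two_step_schedule()  # [1023, 512]
```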
#### step[[diffusers.schedulers.ConsistencyDecoderScheduler.step]]
[Source](https://github.com/huggingface/diffusers/blob/vr_12249/src/diffusers/schedulers/scheduling_consistency_decoder.py#L175)
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).
**Parameters:**
model_output (`torch.Tensor`) : The direct output from the learned diffusion model.
timestep (`float` or `torch.Tensor`) : The current timestep in the diffusion chain.
sample (`torch.Tensor`) : A current instance of a sample created by the diffusion process.
generator (`torch.Generator`, *optional*) : A random number generator for reproducibility.
return_dict (`bool`, *optional*, defaults to `True`) : Whether or not to return a `ConsistencyDecoderSchedulerOutput` or `tuple`.
**Returns:**
`ConsistencyDecoderSchedulerOutput` or `tuple`
If `return_dict` is `True`, `ConsistencyDecoderSchedulerOutput` is returned, otherwise a tuple is returned where the first element is the sample tensor.
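The two-step decoding loop can be sketched in plain Python with a stand-in model. The `c_skip`/`c_out` mixing follows the consistency-models preconditioning, and the sigma values here are arbitrary placeholders; the real scheduler derives its steps from its trained noise schedule:

```python
import math

def consistency_step(model, sample, sigma, sigma_next,
                     sigma_data=0.5, noise=0.0):
    """One reverse step, sketched after consistency-model sampling:
    mix the network output with the input via c_skip/c_out, then inject
    fresh noise at the next level. Illustrative only."""
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / math.sqrt(sigma**2 + sigma_data**2)
    denoised = c_skip * sample + c_out * model(sample, sigma)
    return denoised + sigma_next * noise

# Two-step loop with a stand-in identity "model" and placeholder sigmas.
model = lambda x, sigma: x
x = 1.0
for sigma, sigma_next in [(80.0, 1.0), (1.0, 0.0)]:
    x = consistency_step(model, x, sigma, sigma_next)
```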
