thresholding (bool, default False) —
Whether to use the "dynamic thresholding" method introduced by Imagen (https://arxiv.org/abs/2205.11487).
Note that dynamic thresholding is unsuitable for latent-space diffusion models such as
Stable Diffusion.
dynamic_thresholding_ratio (float, default 0.995) —
The ratio for the dynamic thresholding method. The default of 0.995 matches Imagen
(https://arxiv.org/abs/2205.11487). Valid only when thresholding=True.
sample_max_value (float, default 1.0) —
The threshold value for dynamic thresholding. Valid only when thresholding=True.
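The interplay of these three parameters can be sketched in plain Python. This is an illustrative re-implementation of the Imagen dynamic thresholding idea, not the diffusers code: the flat-list input and the nearest-rank quantile helper are assumptions made for the sketch.

```python
def dynamic_threshold(sample, ratio=0.995, sample_max_value=1.0):
    """Illustrative sketch of dynamic thresholding (Imagen, arXiv:2205.11487).

    Take the `ratio` quantile of the absolute values, floor it at
    `sample_max_value`, clip the sample to [-s, s], then divide by s so
    values stay within [-1, 1].
    """
    abs_vals = sorted(abs(x) for x in sample)
    # Nearest-rank quantile (a simplification of torch.quantile).
    idx = min(int(ratio * len(abs_vals)), len(abs_vals) - 1)
    s = max(abs_vals[idx], sample_max_value)
    return [max(-s, min(s, x)) / s for x in sample]
```

With `sample_max_value=1.0`, samples already inside [-1, 1] pass through unchanged; only samples with outliers beyond the quantile get squashed.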
Denoising Diffusion Implicit Models (DDIM) is a scheduler that extends the denoising procedure introduced in
Denoising Diffusion Probabilistic Models (DDPMs) with non-Markovian guidance.
ConfigMixin takes care of storing all config attributes that are passed to the scheduler's __init__
function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the save_pretrained() and
from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/abs/2010.02502 |
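The config-storing behavior described above can be illustrated with a minimal stand-in. This is a toy class sketching the idea, not the actual ConfigMixin implementation:

```python
import types

class ToyScheduler:
    """Toy stand-in for a ConfigMixin-style scheduler: the arguments passed
    to __init__ become attributes on a .config namespace."""

    def __init__(self, num_train_timesteps=1000, beta_start=0.0001):
        # ConfigMixin records every __init__ argument; this sketch simply
        # copies them into a SimpleNamespace.
        self.config = types.SimpleNamespace(
            num_train_timesteps=num_train_timesteps,
            beta_start=beta_start,
        )

scheduler = ToyScheduler(num_train_timesteps=1000)
print(scheduler.config.num_train_timesteps)  # prints 1000
```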
scale_model_input(sample: FloatTensor, timestep: typing.Optional[int] = None) → torch.FloatTensor
Parameters |
sample (torch.FloatTensor) — input sample |
timestep (int, optional) — current timestep |
Returns |
torch.FloatTensor |
scaled input sample |
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the |
current timestep. |
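For DDIM no input scaling is needed, so this method can be thought of as an interchangeability shim that passes the sample through. A minimal sketch of that idea (illustrative, not the diffusers source):

```python
def scale_model_input(sample, timestep=None):
    """Sketch: DDIM requires no timestep-dependent input scaling, so the
    method returns the sample unchanged; it exists so that all schedulers
    expose the same interface to the denoising loop."""
    return sample
```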
set_timesteps(num_inference_steps: int, device: typing.Union[str, torch.device] = None)
Parameters |
num_inference_steps (int) — |
the number of diffusion steps used when generating samples with a pre-trained model. |
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. |
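The discrete timestep schedule can be sketched as evenly spaced steps over the training range, ordered from high noise to low noise. This is an illustrative computation; the exact rounding and offset behavior in diffusers may differ:

```python
def set_timesteps(num_inference_steps, num_train_timesteps=1000):
    """Sketch: pick num_inference_steps evenly spaced training timesteps,
    returned in descending order (most noisy first)."""
    step_ratio = num_train_timesteps // num_inference_steps
    # e.g. 1000 train steps with 50 inference steps -> 980, 960, ..., 20, 0
    return [i * step_ratio for i in reversed(range(num_inference_steps))]
```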
step(model_output: FloatTensor, timestep: int, sample: FloatTensor, eta: float = 0.0, use_clipped_model_output: bool = False, generator = None, variance_noise: typing.Optional[torch.FloatTensor] = None, return_dict: bool = True) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple
Parameters |
model_output (torch.FloatTensor) — direct output from learned diffusion model. |
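The update that step performs follows Eq. (12) of the DDIM paper. A scalar sketch with plain floats instead of tensors; the cumulative alpha products are assumed given, and this is an illustration rather than the diffusers implementation:

```python
import math

def ddim_step(model_output, alpha_prod_t, alpha_prod_t_prev, sample,
              eta=0.0, noise=0.0):
    """Scalar sketch of one DDIM update (Eq. 12, arXiv:2010.02502).

    model_output: the predicted noise eps_theta(x_t, t)
    alpha_prod_t / alpha_prod_t_prev: cumulative alphas at t and the previous step
    eta=0.0 gives the deterministic DDIM update; eta > 0 adds fresh noise.
    """
    # 1. Predict x_0 from the current sample and the noise prediction.
    pred_x0 = (sample - math.sqrt(1 - alpha_prod_t) * model_output) \
              / math.sqrt(alpha_prod_t)
    # 2. Variance term controlled by eta.
    sigma = eta * math.sqrt((1 - alpha_prod_t_prev) / (1 - alpha_prod_t)) \
                * math.sqrt(1 - alpha_prod_t / alpha_prod_t_prev)
    # 3. "Direction pointing to x_t" plus optional stochastic noise.
    direction = math.sqrt(1 - alpha_prod_t_prev - sigma**2) * model_output
    return math.sqrt(alpha_prod_t_prev) * pred_x0 + direction + sigma * noise
```

With eta=0 the update is fully deterministic, which is what allows DDIM to sample in far fewer steps than the training chain length.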