A fast scheduler that often generates good outputs in 20-30 steps.
EulerAncestralDiscreteScheduler |
class diffusers.EulerAncestralDiscreteScheduler |
( |
num_train_timesteps: int = 1000 |
beta_start: float = 0.0001 |
beta_end: float = 0.02 |
beta_schedule: str = 'linear' |
trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None |
prediction_type: str = 'epsilon' |
) |
Parameters |
num_train_timesteps (int) — number of diffusion steps used to train the model. |
beta_start (float) — the starting beta value of inference. |
beta_end (float) — the final beta value. |
beta_schedule (str) — |
the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from |
linear or scaled_linear. |
trained_betas (np.ndarray, optional) — |
option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. |
prediction_type (str, default epsilon, optional) —
prediction type of the scheduler function; one of epsilon (predicting the noise of the diffusion
process), sample (directly predicting the noisy sample) or v_prediction (see section 2.4 of
https://imagen.research.google/video/paper.pdf)
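The linear and scaled_linear beta schedules named above can be sketched in plain Python. This is an illustrative sketch of how such schedules are commonly computed, not the library's code; the helper name make_beta_schedule is made up for this example:

```python
def make_beta_schedule(schedule: str, beta_start: float, beta_end: float, n: int):
    """Illustrative beta schedules for n training timesteps."""
    if schedule == "linear":
        # betas evenly spaced between beta_start and beta_end
        step = (beta_end - beta_start) / (n - 1)
        return [beta_start + i * step for i in range(n)]
    if schedule == "scaled_linear":
        # evenly spaced in sqrt(beta) space, then squared
        s, e = beta_start ** 0.5, beta_end ** 0.5
        step = (e - s) / (n - 1)
        return [(s + i * step) ** 2 for i in range(n)]
    raise ValueError(f"unknown schedule: {schedule}")

betas = make_beta_schedule("linear", 0.0001, 0.02, 1000)
```

Both schedules start at beta_start and end at beta_end; scaled_linear simply spaces the values linearly in the square-root domain, which front-loads smaller betas.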
Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: |
https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72 |
ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__
function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and |
from_pretrained() functions. |
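The config-storing behavior described above can be sketched with a minimal stand-in. This is an illustrative mock of the pattern, not the diffusers implementation; TinyConfigMixin and TinyScheduler are invented names:

```python
from types import SimpleNamespace

class TinyConfigMixin:
    """Illustrative stand-in: records __init__ kwargs on a .config namespace."""
    def register_to_config(self, **kwargs):
        self.config = SimpleNamespace(**kwargs)

class TinyScheduler(TinyConfigMixin):
    def __init__(self, num_train_timesteps: int = 1000, beta_start: float = 0.0001):
        self.register_to_config(
            num_train_timesteps=num_train_timesteps, beta_start=beta_start
        )

scheduler = TinyScheduler()
# constructor arguments are now readable as scheduler.config.num_train_timesteps
```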
scale_model_input |
( |
sample: FloatTensor |
timestep: typing.Union[float, torch.FloatTensor] |
) |
→ torch.FloatTensor
Parameters |
sample (torch.FloatTensor) — input sample |
timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain |
Returns |
torch.FloatTensor |
scaled input sample |
Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. |
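The scaling can be sketched in plain Python. Note this divides the sample by (sigma**2 + 1) ** 0.5, which is how k-diffusion-style Euler samplers normalize the input; scale_input is an illustrative helper name, and sigma stands for the noise level at the given timestep:

```python
def scale_input(sample: list, sigma: float) -> list:
    """Divide each element by sqrt(sigma**2 + 1), the Euler input scaling."""
    factor = (sigma ** 2 + 1) ** 0.5
    return [x / factor for x in sample]
```

At sigma = 0 the factor is 1 and the sample passes through unchanged; as sigma grows, the input is shrunk toward zero to keep its variance in the range the model was trained on.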
set_timesteps |
( |
num_inference_steps: int |
device: typing.Union[str, torch.device] = None |
) |
Parameters
num_inference_steps (int) — the number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) — the device to which the timesteps should be moved. If None, the timesteps are not moved.
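The timestep spacing that a k-diffusion-style set_timesteps typically produces can be sketched as follows. This is an illustrative sketch under the assumption of evenly spaced training timesteps in descending order (the matching sigmas would then be looked up from the noise schedule); set_timesteps_sketch is an invented name, not the library function:

```python
def set_timesteps_sketch(num_inference_steps: int, num_train_timesteps: int = 1000):
    """Illustrative: evenly spaced timesteps over [0, num_train_timesteps - 1], descending."""
    if num_inference_steps == 1:
        return [float(num_train_timesteps - 1)]
    step = (num_train_timesteps - 1) / (num_inference_steps - 1)
    return [i * step for i in range(num_inference_steps)][::-1]
```

Sampling then walks these timesteps from the largest (noisiest) down to zero.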