# EulerDiscreteScheduler
The Euler scheduler (Algorithm 2) is from the [Elucidating the Design Space of Diffusion-Based Generative Models](https://huggingface.co/papers/2206.00364) paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original [k-diffusion](https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51) implementation by [Katherine Crowson](https://github.com/crowsonkb/).
## EulerDiscreteScheduler[[diffusers.EulerDiscreteScheduler]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.EulerDiscreteScheduler</name><anchor>diffusers.EulerDiscreteScheduler</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/schedulers/scheduling_euler_discrete.py#L135</source><parameters>[{"name": "num_train_timesteps", "val": ": int = 1000"}, {"name": "beta_start", "val": ": float = 0.0001"}, {"name": "beta_end", "val": ": float = 0.02"}, {"name": "beta_schedule", "val": ": str = 'linear'"}, {"name": "trained_betas", "val": ": typing.Union[numpy.ndarray, typing.List[float], NoneType] = None"}, {"name": "prediction_type", "val": ": str = 'epsilon'"}, {"name": "interpolation_type", "val": ": str = 'linear'"}, {"name": "use_karras_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_exponential_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "use_beta_sigmas", "val": ": typing.Optional[bool] = False"}, {"name": "sigma_min", "val": ": typing.Optional[float] = None"}, {"name": "sigma_max", "val": ": typing.Optional[float] = None"}, {"name": "timestep_spacing", "val": ": str = 'linspace'"}, {"name": "timestep_type", "val": ": str = 'discrete'"}, {"name": "steps_offset", "val": ": int = 0"}, {"name": "rescale_betas_zero_snr", "val": ": bool = False"}, {"name": "final_sigmas_type", "val": ": str = 'zero'"}]</parameters><paramsdesc>- **num_train_timesteps** (`int`, defaults to 1000) --
The number of diffusion steps to train the model.
- **beta_start** (`float`, defaults to 0.0001) --
The starting `beta` value of inference.
- **beta_end** (`float`, defaults to 0.02) --
The final `beta` value.
- **beta_schedule** (`str`, defaults to `"linear"`) --
The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
`linear` or `scaled_linear`.
- **trained_betas** (`np.ndarray`, *optional*) --
Pass an array of betas directly to the constructor to bypass `beta_start` and `beta_end`.
- **prediction_type** (`str`, *optional*, defaults to `"epsilon"`) --
Prediction type of the scheduler function; can be `epsilon` (predicts the noise of the diffusion process),
`sample` (directly predicts the noisy sample) or `v_prediction` (see section 2.4 of the [Imagen
Video](https://imagen.research.google/video/paper.pdf) paper).
- **interpolation_type** (`str`, *optional*, defaults to `"linear"`) --
The interpolation type used to compute intermediate sigmas for the scheduler denoising steps. Should be one of
`"linear"` or `"log_linear"`.
- **use_karras_sigmas** (`bool`, *optional*, defaults to `False`) --
Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If `True`,
the sigmas are determined according to a sequence of noise levels {σi}.
- **use_exponential_sigmas** (`bool`, *optional*, defaults to `False`) --
Whether to use exponential sigmas for step sizes in the noise schedule during the sampling process.
- **use_beta_sigmas** (`bool`, *optional*, defaults to `False`) --
Whether to use beta sigmas for step sizes in the noise schedule during the sampling process. Refer to [Beta
Sampling is All You Need](https://huggingface.co/papers/2407.12173) for more information.
- **timestep_spacing** (`str`, defaults to `"linspace"`) --
The way the timesteps should be scaled. Refer to Table 2 of the [Common Diffusion Noise Schedules and
Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) paper for more information.
- **steps_offset** (`int`, defaults to 0) --
An offset added to the inference steps, as required by some model families.
- **rescale_betas_zero_snr** (`bool`, defaults to `False`) --
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and
dark samples instead of limiting it to samples with medium brightness. Loosely related to
[`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506).
- **final_sigmas_type** (`str`, defaults to `"zero"`) --
The final `sigma` value for the noise schedule during the sampling process. If `"sigma_min"`, the final
sigma is the same as the last sigma in the training schedule. If `"zero"`, the final sigma is set to 0.</paramsdesc><paramgroups>0</paramgroups></docstring>
Euler scheduler.
This model inherits from [SchedulerMixin](/docs/diffusers/pr_12595/en/api/schedulers/overview#diffusers.SchedulerMixin) and [ConfigMixin](/docs/diffusers/pr_12595/en/api/configuration#diffusers.ConfigMixin). Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>scale_model_input</name><anchor>diffusers.EulerDiscreteScheduler.scale_model_input</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/schedulers/scheduling_euler_discrete.py#L295</source><parameters>[{"name": "sample", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}]</parameters><paramsdesc>- **sample** (`torch.Tensor`) --
The input sample.
- **timestep** (`float` or `torch.Tensor`) --
The current timestep in the diffusion chain.</paramsdesc><paramgroups>0</paramgroups><rettype>`torch.Tensor`</rettype><retdesc>A scaled input sample.</retdesc></docstring>
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep. Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_begin_index</name><anchor>diffusers.EulerDiscreteScheduler.set_begin_index</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/schedulers/scheduling_euler_discrete.py#L285</source><parameters>[{"name": "begin_index", "val": ": int = 0"}]</parameters><paramsdesc>- **begin_index** (`int`) --
The begin index for the scheduler.</paramsdesc><paramgroups>0</paramgroups></docstring>
Sets the begin index for the scheduler. This function should be run from the pipeline before inference.
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>set_timesteps</name><anchor>diffusers.EulerDiscreteScheduler.set_timesteps</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/schedulers/scheduling_euler_discrete.py#L319</source><parameters>[{"name": "num_inference_steps", "val": ": int = None"}, {"name": "device", "val": ": typing.Union[str, torch.device] = None"}, {"name": "timesteps", "val": ": typing.Optional[typing.List[int]] = None"}, {"name": "sigmas", "val": ": typing.Optional[typing.List[float]] = None"}]</parameters><paramsdesc>- **num_inference_steps** (`int`) --
The number of diffusion steps used when generating samples with a pre-trained model.
- **device** (`str` or `torch.device`, *optional*) --
The device to which the timesteps should be moved. If `None`, the timesteps are not moved.
- **timesteps** (`List[int]`, *optional*) --
Custom timesteps used to support an arbitrary timestep schedule. If `None`, timesteps are generated
based on the `timestep_spacing` attribute. If `timesteps` is passed, `num_inference_steps` and `sigmas`
must be `None`, and the `timestep_spacing` attribute is ignored.
- **sigmas** (`List[float]`, *optional*) --
Custom sigmas used to support an arbitrary timestep schedule. If `None`, timesteps and sigmas
are generated based on the relevant scheduler attributes. If `sigmas` is passed,
`num_inference_steps` and `timesteps` must be `None`, and the timesteps are generated based on the
custom sigmas schedule.</paramsdesc><paramgroups>0</paramgroups></docstring>
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
</div>
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>step</name><anchor>diffusers.EulerDiscreteScheduler.step</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/schedulers/scheduling_euler_discrete.py#L576</source><parameters>[{"name": "model_output", "val": ": Tensor"}, {"name": "timestep", "val": ": typing.Union[float, torch.Tensor]"}, {"name": "sample", "val": ": Tensor"}, {"name": "s_churn", "val": ": float = 0.0"}, {"name": "s_tmin", "val": ": float = 0.0"}, {"name": "s_tmax", "val": ": float = inf"}, {"name": "s_noise", "val": ": float = 1.0"}, {"name": "generator", "val": ": typing.Optional[torch._C.Generator] = None"}, {"name": "return_dict", "val": ": bool = True"}]</parameters><paramsdesc>- **model_output** (`torch.Tensor`) --
The direct output from the learned diffusion model.
- **timestep** (`float`) --
The current discrete timestep in the diffusion chain.
- **sample** (`torch.Tensor`) --
A current instance of a sample created by the diffusion process.
- **s_churn** (`float`, defaults to 0.0) --
The amount of stochasticity ("churn") injected at each step; `0.0` yields a deterministic Euler step.
- **s_tmin** (`float`, defaults to 0.0) --
The lower bound of the sigma range in which churn noise is added.
- **s_tmax** (`float`, defaults to `inf`) --
The upper bound of the sigma range in which churn noise is added.
- **s_noise** (`float`, defaults to 1.0) --
Scaling factor for noise added to the sample.
- **generator** (`torch.Generator`, *optional*) --
A random number generator.
- **return_dict** (`bool`) --
Whether or not to return a [EulerDiscreteSchedulerOutput](/docs/diffusers/pr_12595/en/api/schedulers/euler#diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput) or
tuple.</paramsdesc><paramgroups>0</paramgroups><rettype>[EulerDiscreteSchedulerOutput](/docs/diffusers/pr_12595/en/api/schedulers/euler#diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput) or `tuple`</rettype><retdesc>If return_dict is `True`, [EulerDiscreteSchedulerOutput](/docs/diffusers/pr_12595/en/api/schedulers/euler#diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput) is
returned, otherwise a tuple is returned where the first element is the sample tensor.</retdesc></docstring>
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion
process from the learned model outputs (most often the predicted noise).
</div></div>
## EulerDiscreteSchedulerOutput[[diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput]]
<div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8">
<docstring><name>class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput</name><anchor>diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput</anchor><source>https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/schedulers/scheduling_euler_discrete.py#L36</source><parameters>[{"name": "prev_sample", "val": ": Tensor"}, {"name": "pred_original_sample", "val": ": typing.Optional[torch.Tensor] = None"}]</parameters><paramsdesc>- **prev_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
Computed sample `(x_{t-1})` of the previous timestep. `prev_sample` should be used as the next model input in the
denoising loop.
- **pred_original_sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` for images) --
The predicted denoised sample `(x_{0})` based on the model output from the current timestep.
`pred_original_sample` can be used to preview progress or for guidance.</paramsdesc><paramgroups>0</paramgroups></docstring>
Output class for the scheduler's `step` function.
</div>