set_alpha_to_one (bool, defaults to True) —
Each diffusion step uses the alphas product value at that step and at the previous one. For the final step
there is no previous alpha. When this option is True the previous alpha product is fixed to 1,
otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — |
An offset added to the inference steps. You can use a combination of offset=1 and |
set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable |
Diffusion. prediction_type (str, defaults to epsilon, optional) — |
Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), |
sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen
Video paper). thresholding (bool, defaults to False) — |
Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such |
as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — |
The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — |
The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — |
The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and |
Sample Steps are Flawed paper for more information. timestep_scaling (float, defaults to 10.0) —
The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions |
c_skip and c_out. Increasing this will decrease the approximation error (although the approximation |
error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — |
Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and |
dark samples instead of limiting it to samples with medium brightness. Loosely related to |
--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with |
non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ConfigMixin takes care of storing all config
attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be |
accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving |
functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — |
The input sample. timestep (int, optional) — |
The current timestep in the diffusion chain. Returns |
torch.FloatTensor |
A scaled input sample. |
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the |
current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — |
The number of diffusion steps used when generating samples with a pre-trained model. If used, |
timesteps must be None. device (str or torch.device, optional) — |
The device to which the timesteps should be moved. If None, the timesteps are not moved. original_inference_steps (int, optional) —
The original number of inference steps, which will be used to generate a linearly-spaced timestep |
schedule (which is different from the standard diffusers implementation). We will then take |
num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as |
our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — |
Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default |
timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep |
schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters
model_output (torch.FloatTensor) —
The direct output from the learned diffusion model. timestep (int) —
The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — |
A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — |
A random number generator. return_dict (bool, optional, defaults to True) — |
Whether or not to return a LCMSchedulerOutput or tuple. Returns |
~schedulers.scheduling_utils.LCMSchedulerOutput or tuple |
If return_dict is True, LCMSchedulerOutput is returned, otherwise a |
tuple is returned where the first element is the sample tensor. |
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion |
process from the learned model outputs (most often the predicted noise). |
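The set_timesteps() / scale_model_input() / step() calling convention described above can be sketched with a toy stand-in. The class below is a hypothetical mock that only illustrates the interface shape: the "model" is a placeholder function, the update rule is not the real LCM math, and plain floats stand in for tensors.

```python
from dataclasses import dataclass


@dataclass
class MockSchedulerOutput:
    prev_sample: float  # stands in for the denoised torch.FloatTensor


class MockScheduler:
    """Toy scheduler mocking the diffusers calling convention only."""

    def __init__(self, num_train_timesteps: int = 1000):
        self.num_train_timesteps = num_train_timesteps
        self.timesteps: list = []

    def set_timesteps(self, num_inference_steps: int) -> None:
        # Evenly spaced, descending timesteps (loosely mirroring
        # timestep_spacing="leading"); run before inference.
        stride = self.num_train_timesteps // num_inference_steps
        self.timesteps = list(range(self.num_train_timesteps - 1, -1, -stride))[:num_inference_steps]

    def scale_model_input(self, sample: float, timestep: int) -> float:
        # Kept for interchangeability with schedulers that do scale the input.
        return sample

    def step(self, model_output: float, timestep: int, sample: float) -> MockSchedulerOutput:
        # Placeholder update: nudge the sample toward the prediction.
        return MockSchedulerOutput(prev_sample=sample - 0.1 * model_output)


# Typical denoising-loop shape:
scheduler = MockScheduler()
scheduler.set_timesteps(num_inference_steps=4)
sample = 1.0  # stands in for the initial noise tensor
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = model_input * 0.5  # stands in for the UNet forward pass
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```

The loop illustrates why step() returns prev_sample: each iteration feeds the previous output back in as the next sample.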
Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. All schedulers are built from the base SchedulerMixin class, which implements low-level utilities shared by all schedulers, such as general loading and saving
functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to |
the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler |
class. Use from_config() to load a different compatible scheduler class (should be overridden |
by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — |
Can be either: |
A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on |
the Hub. |
A path to a directory (for example ./my_model_directory) containing the scheduler |
configuration saved with save_pretrained(). |
subfolder (str, optional) — |
The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — |
Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — |
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache |
is not used. force_download (bool, optional, defaults to False) — |
Whether or not to force the (re-)download of the model weights and configuration files, overriding the |
cached versions if they exist. resume_download (bool, optional, defaults to False) — |
Whether or not to resume downloading the model weights and configuration files. If set to False, any |
incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — |
A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) —
Whether to only load local model weights and configuration files or not. If set to True, the model |
won’t be downloaded from the Hub. token (str or bool, optional) — |
The token to use as HTTP bearer authorization for remote files. If True, the token generated from |
diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — |
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier |
allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log in with
huggingface-cli login. You can also activate the special |
“offline-mode” to use this method in a |
firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — |
Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — |
Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the |
repository you want to push to with repo_id (will default to the name of save_directory in your |
namespace). kwargs (Dict[str, Any], optional) — |
Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the |
from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — |
Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the |
denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy used, and how the loss is weighed.
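The return_dict convention for step() and the role of SchedulerOutput.prev_sample described above can be illustrated with a small stub. LCMSchedulerOutputStub and step_stub below are hypothetical stand-ins (the real classes live in diffusers), and floats stand in for tensors; the real LCMSchedulerOutput also carries a denoised field, mirrored here.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, Union


@dataclass
class LCMSchedulerOutputStub:
    # prev_sample is the computed x_{t-1}: it is used as the next model
    # input in the denoising loop.
    prev_sample: float
    denoised: Optional[float] = None


def step_stub(prev_sample: float, return_dict: bool = True) -> Union[LCMSchedulerOutputStub, Tuple[float]]:
    # return_dict=True  -> structured output object;
    # return_dict=False -> plain tuple whose first element is the sample,
    # matching the contract described in the docs above.
    if return_dict:
        return LCMSchedulerOutputStub(prev_sample=prev_sample)
    return (prev_sample,)


out = step_stub(0.5)                     # structured output
tup = step_stub(0.5, return_dict=False)  # tuple fallback
```

Either way, callers retrieve the same sample: out.prev_sample and tup[0] are interchangeable.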
push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) —
The name of the repository you want to push your model, scheduler, or pipeline files to. It should
contain your organization name when pushing to an organization. repo_id can also be a path to a local |
directory. commit_message (str, optional) — |
Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) —
Whether or not the repository created should be private. token (str, optional) — |
The token to use as HTTP bearer authorization for remote files. The token generated when running |
huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — |
Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — |
Whether or not to convert the model weights to the safetensors format. variant (str, optional) —
If specified, weights are saved in the format pytorch_model.<variant>.bin.
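For schedulers, save_pretrained() / from_pretrained() round-trip a plain JSON configuration file rather than weights. The sketch below illustrates just that idea; it assumes the scheduler_config.json file name used by diffusers, and the config keys shown are an illustrative subset of the parameters documented above, not the full set.

```python
import json
import os
import tempfile

# Illustrative subset of scheduler config attributes (see the parameter
# list above); the real file is produced by ConfigMixin.
config = {
    "num_train_timesteps": 1000,
    "prediction_type": "epsilon",
    "timestep_spacing": "leading",
}

save_directory = tempfile.mkdtemp()
config_path = os.path.join(save_directory, "scheduler_config.json")

# What save_pretrained() boils down to: write the config JSON into
# save_directory (creating it if needed).
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

# What from_pretrained() boils down to for a local directory: read the
# JSON back and rebuild the scheduler from those attributes.
with open(config_path) as f:
    reloaded = json.load(f)
```

Because only a small JSON file is exchanged, any compatible scheduler class can be re-instantiated from the same saved configuration.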