add_noise_to_input
<
source
>
(
sample: FloatTensor |
sigma: float |
generator: typing.Optional[torch._C.Generator] = None |
) |
Explicit Langevin-like "churn" step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a
higher noise level sigma_hat = sigma_i + gamma_i * sigma_i.
Parameters
sample (torch.FloatTensor) — input sample
sigma (float) — current noise level
generator (torch.Generator, optional) — random number generator
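This churn step corresponds to the noise-injection half of Algorithm 2 in Karras et al. (2022): pick gamma_i, raise the noise level to sigma_hat, and add the matching amount of Gaussian noise. A minimal sketch for illustration (the function body and the default values for s_churn, s_min, s_max, and num_steps are assumptions, not taken from this page):

```python
import torch

def add_noise_to_input(sample, sigma, s_churn=80.0, s_min=0.05, s_max=50.0,
                       num_steps=50, generator=None):
    # Pick gamma_i >= 0; churn is only applied inside the [s_min, s_max] band
    # (assumed defaults, following Karras et al. 2022, Algorithm 2).
    if s_min <= sigma <= s_max:
        gamma = min(s_churn / num_steps, 2 ** 0.5 - 1)
    else:
        gamma = 0.0
    # Raise the noise level: sigma_hat = sigma + gamma * sigma.
    sigma_hat = sigma + gamma * sigma
    # Add Gaussian noise so the sample sits at the higher level sigma_hat.
    eps = torch.randn(sample.shape, generator=generator)
    sample_hat = sample + (sigma_hat ** 2 - sigma ** 2) ** 0.5 * eps
    return sample_hat, sigma_hat

x = torch.zeros(4)
x_hat, s_hat = add_noise_to_input(x, sigma=1.0)
```

With sigma = 1.0 inside the band, gamma is capped at sqrt(2) - 1, so sigma_hat = sqrt(2).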
scale_model_input |
< |
source |
> |
( |
sample: FloatTensor |
timestep: typing.Optional[int] = None |
) |
→
torch.FloatTensor |
Parameters |
sample (torch.FloatTensor) — input sample
timestep (int, optional) — current timestep
Returns |
torch.FloatTensor |
scaled input sample |
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the |
current timestep. |
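Because of this interchangeability, a scheduler-agnostic sampling loop can always route the sample through scale_model_input before calling the model. A minimal sketch with a hypothetical stand-in scheduler whose scaling is the identity (DummyScheduler and denoise_step are illustrative names, not part of the library):

```python
import torch

class DummyScheduler:
    """Hypothetical stand-in for a scheduler that needs no input scaling."""
    def scale_model_input(self, sample, timestep=None):
        # Schedulers without input scaling simply return the sample unchanged.
        return sample

def denoise_step(scheduler, model, sample, timestep):
    # Always call scale_model_input so the same loop works with schedulers
    # that do and do not rescale the denoising model's input.
    scaled = scheduler.scale_model_input(sample, timestep)
    return model(scaled)

model = lambda x: x * 0.5  # toy stand-in for a denoising model
out = denoise_step(DummyScheduler(), model, torch.ones(3), timestep=0)
```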
set_timesteps |
< |
source |
> |
( |
num_inference_steps: int |
device: typing.Union[str, torch.device] = None |
) |
Parameters |
num_inference_steps (int) —
the number of diffusion steps used when generating samples with a pre-trained model.
device (str or torch.device, optional) —
the device to which the timesteps should be moved.
Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. |
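One common way to construct such a continuous noise schedule is the rho-spaced schedule of Karras et al. (2022); a self-contained sketch (the sigma_min, sigma_max, and rho defaults are assumptions, and the scheduler's actual schedule may differ):

```python
def karras_sigma_schedule(num_inference_steps, sigma_min=0.002,
                          sigma_max=80.0, rho=7.0):
    # rho-spaced noise levels, Karras et al. (2022), Eq. (5):
    # sigma_i = (sigma_max^(1/rho)
    #            + i/(N-1) * (sigma_min^(1/rho) - sigma_max^(1/rho)))^rho
    ramp = [i / (num_inference_steps - 1) for i in range(num_inference_steps)]
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return [(max_inv_rho + r * (min_inv_rho - max_inv_rho)) ** rho
            for r in ramp]

sigmas = karras_sigma_schedule(10)
```

The schedule runs from sigma_max down to sigma_min, so sampling starts at the highest noise level and ends near clean data.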
step |
< |
source |
> |
( |
model_output: FloatTensor |
sigma_hat: float |
sigma_prev: float |
sample_hat: FloatTensor |
return_dict: bool = True |
) |
→
KarrasVeOutput or tuple |
Parameters |
model_output (torch.FloatTensor) — direct output from learned diffusion model.
sigma_hat (float) — noise level after the churn step, i.e. sigma_hat = sigma + gamma * sigma.
sigma_prev (float) — noise level of the next, lower timestep in the schedule.
sample_hat (torch.FloatTensor) — sample after noise has been added at level sigma_hat.
return_dict (bool, optional, defaults to True) — whether or not to return a KarrasVeOutput instead of a plain tuple.
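The update itself is essentially an Euler step from sigma_hat down to sigma_prev. A minimal sketch, assuming model_output is the model's denoised estimate x_0 (the scheduler may expect a different model parameterization, in which case the derivative term changes):

```python
import torch

def karras_step(model_output, sigma_hat, sigma_prev, sample_hat):
    # Euler step from sigma_hat to sigma_prev (cf. Karras et al. 2022, Alg. 2).
    # Assumes model_output is the denoised prediction x_0 at level sigma_hat.
    derivative = (sample_hat - model_output) / sigma_hat
    sample_prev = sample_hat + (sigma_prev - sigma_hat) * derivative
    return sample_prev, derivative

x_hat = torch.ones(3)          # noisy sample at sigma_hat
x0 = torch.zeros(3)            # toy denoised estimate
x_prev, d = karras_step(x0, sigma_hat=2.0, sigma_prev=1.0, sample_hat=x_hat)
```

Since (sigma_prev - sigma_hat) is negative, the step moves the sample toward the denoised estimate as the noise level drops.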