half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel

>>> # load model
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # By default, the model parameters will be in fp32 precision; to cast these to bfloat16 precision
>>> params = model.to_bf16(params)
>>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
>>> # then pass the mask as follows
>>> from flax import traverse_util

>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> flat_params = traverse_util.flatten_dict(params)
>>> # mask out LayerNorm bias and scale so they stay in fp32
>>> mask = {
...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
...     for path in flat_params
... }
>>> mask = traverse_util.unflatten_dict(mask)
>>> params = model.to_bf16(params, mask)

to_fp16 < source > ( params: Union mask: Any = None )

Parameters

params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.

mask (Union[Dict, FrozenDict]) —
A PyTree with the same structure as the params tree. The leaves should be booleans: True
for params you want to cast, and False for those you want to skip.

Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the
params in place.

This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full
half-precision training or to save weights in float16 for inference in order to save memory and improve speed.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel
>>> # load model
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # By default, the model params will be in fp32; to cast these to float16
>>> params = model.to_fp16(params)
>>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
>>> # then pass the mask as follows
>>> from flax import traverse_util

>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> flat_params = traverse_util.flatten_dict(params)
>>> # mask out LayerNorm bias and scale so they stay in fp32
>>> mask = {
...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
...     for path in flat_params
... }
>>> mask = traverse_util.unflatten_dict(mask)
>>> params = model.to_fp16(params, mask)

to_fp32 < source > ( params: Union mask: Any = None )

Parameters

params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.

mask (Union[Dict, FrozenDict]) —
A PyTree with the same structure as the params tree. The leaves should be booleans: True
for params you want to cast, and False for those you want to skip.

Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the
model parameters to fp32 precision. This returns a new params tree and does not cast the params in place.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel

>>> # Download model and configuration from huggingface.co
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # By default, the model params will be in fp32; to illustrate the use of this method,
>>> # we'll first cast to fp16 and back to fp32
>>> params = model.to_fp16(params)
>>> # now cast back to fp32
>>> params = model.to_fp32(params)

PushToHubMixin

class diffusers.utils.PushToHubMixin < source > ( )

A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub.

push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = F... )

Parameters

repo_id (str) —
The name of the repository you want to push your model, scheduler, or pipeline files to. It should
contain your organization name when pushing to an organization. repo_id can also be a path to a local
directory.

commit_message (str, optional) —
Message to commit while pushing. Defaults to "Upload {object}".

private (bool, optional) —
Whether or not the repository created should be private.

token (str, optional) —
The token to use as HTTP bearer authorization for remote files. The token generated when running
huggingface-cli login (stored in ~/.huggingface).

create_pr (bool, optional, defaults to False) —
Whether or not to create a PR with the uploaded files or directly commit.

safe_serialization (bool, optional, defaults to True) —
Whether or not to convert the model weights to the safetensors format.

variant (str, optional) —
If specified, weights are saved in the format pytorch_model.<variant>.bin.

Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub.

Examples:

from diffusers import UNet2DConditionModel
unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet")

# Push the `unet` to your namespace with the name "my-finetuned-unet".
unet.push_to_hub("my-finetuned-unet")

# Push the `unet` to an organization with the name "my-finetuned-unet".
unet.push_to_hub("your-org/my-finetuned-unet")
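The variant argument described above only changes the weight filename that is written. A minimal sketch of the documented naming scheme (variant_weights_name is a hypothetical helper for illustration, not part of diffusers):

```python
def variant_weights_name(variant=None):
    # Mirrors the documented scheme: pytorch_model.<variant>.bin when a
    # variant is given, plain pytorch_model.bin otherwise.
    if variant is None:
        return "pytorch_model.bin"
    return f"pytorch_model.{variant}.bin"

print(variant_weights_name())        # pytorch_model.bin
print(variant_weights_name("fp16"))  # pytorch_model.fp16.bin
```

This is why a repository can host several precisions of the same model side by side without the filenames colliding.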
AudioLDM

AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM
is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP
latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional
sound effects, human speech and music.

The abstract from the paper is:

Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propos...
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (ClapTextModelWithProjection) —
Frozen text-encoder (ClapTextModelWithProjection, specifically the
laion/clap-htsat-unfused variant).

tokenizer (PreTrainedTokenizer) —
A RobertaTokenizer to tokenize text.

unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded audio latents.

scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.

vocoder (SpeechT5HifiGan) —
Vocoder of class SpeechT5HifiGan.

Pipeline for text-to-audio generation using AudioLDM.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods
implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 gener... )

Parameters

prompt (str or List[str], optional) —
The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds.

audio_length_in_s (float, optional, defaults to 5.12) —
The length of the generated audio sample in seconds.

num_inference_steps (int, optional, defaults to 10) —
The number of denoising steps. More denoising steps usually lead to higher quality audio at the
expense of slower inference.

guidance_scale (float, optional, defaults to 2.5) —
A higher guidance scale value encourages the model to generate audio that is closely linked to the text
prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1.

negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in audio generation. If not defined, you need to
pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).

num_waveforms_per_prompt (int, optional, defaults to 1) —
The number of waveforms to generate per prompt.

eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies
to the DDIMScheduler, and is ignored in other schedulers.

generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.

latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for audio
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor is generated by sampling using the supplied random generator.

prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not