If a configuration is provided with config, kwargs are directly passed to the underlying
model’s __init__ method (we assume all relevant updates to the configuration have already been
done).
If a configuration is not provided, kwargs are first passed to the configuration class
initialization function from_config(). Each key of the kwargs that corresponds
to a configuration attribute is used to override said attribute with the supplied kwargs value.
Remaining keys that do not correspond to any configuration attribute are passed to the underlying
model’s __init__ function.
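The kwargs routing described above can be sketched in plain Python. This is an illustrative toy, not the actual diffusers internals; split_kwargs is a hypothetical helper name:

```python
# Illustrative sketch of the kwargs routing described above: keys that
# match a configuration attribute override the config, and the remaining
# keys are forwarded to the model's __init__. This is a toy helper, not
# the real diffusers implementation.
def split_kwargs(config, kwargs):
    config = dict(config)  # copy so the caller's config is left untouched
    model_kwargs = {}
    for key, value in kwargs.items():
        if key in config:
            config[key] = value        # overrides a configuration attribute
        else:
            model_kwargs[key] = value  # passed to the model's __init__
    return config, model_kwargs

# "in_channels" matches a config attribute, so it overrides the config;
# "dtype" does not, so it is routed to the model's __init__.
config, model_kwargs = split_kwargs(
    {"sample_size": 32, "in_channels": 4},
    {"in_channels": 9, "dtype": "bfloat16"},
)
```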
Instantiate a pretrained Flax model from a pretrained model configuration.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")

>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable).
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/")

If you get the error message below, you need to finetune the weights for your downstream task:

Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

save_pretrained

( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs )

Parameters

save_directory (str or os.PathLike) —
Directory to save a model and its configuration file to. Will be created if it doesn’t exist.

params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.

is_main_process (bool, optional, defaults to True) —
Whether the process calling this is the main process or not. Useful during distributed training when you
need to call this function on all processes. In this case, set is_main_process=True only on the main
process to avoid race conditions.

push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace).

kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.

Save a model and its configuration file to a directory so that it can be reloaded using the
from_pretrained() class method.

to_bf16

( params: Union mask: Any = None )

Parameters

params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.

mask (Union[Dict, FrozenDict]) —
A PyTree with the same structure as the params tree. The leaves should be booleans: True
for params you want to cast, and False for those you want to skip.

Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast
the params in place.

This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full
half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel
>>> # load model
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # By default, the model parameters will be in fp32 precision; to cast these to bfloat16 precision:
>>> params = model.to_bf16(params)
>>> # If you don't want to cast certain parameters (for example layer norm bias and scale),
>>> # then pass the mask as follows:
>>> from flax import traverse_util
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> flat_params = traverse_util.flatten_dict(params)
>>> mask = {
...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
...     for path in flat_params
... }
>>> mask = traverse_util.unflatten_dict(mask)
>>> params = model.to_bf16(params, mask)

to_fp16

( params: Union mask: Any = None )

Parameters

params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.

mask (Union[Dict, FrozenDict]) —
A PyTree with the same structure as the params tree. The leaves should be booleans: True
for params you want to cast, and False for those you want to skip.

Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the
params in place.

This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full
half-precision training or to save weights in float16 for inference in order to save memory and improve speed.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel
>>> # load model
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # By default, the model params will be in fp32; to cast these to float16:
>>> params = model.to_fp16(params)
>>> # If you don't want to cast certain parameters (for example layer norm bias and scale),
>>> # then pass the mask as follows:
>>> from flax import traverse_util
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> flat_params = traverse_util.flatten_dict(params)
>>> mask = {
...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
...     for path in flat_params
... }
>>> mask = traverse_util.unflatten_dict(mask)
>>> params = model.to_fp16(params, mask)

to_fp32

( params: Union mask: Any = None )

Parameters

params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.

mask (Union[Dict, FrozenDict]) —
A PyTree with the same structure as the params tree. The leaves should be booleans: True
for params you want to cast, and False for those you want to skip.

Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the
model parameters to fp32 precision. This returns a new params tree and does not cast the params in place.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel

>>> # Download model and configuration from huggingface.co
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # By default, the model params will be in fp32; to illustrate the use of this method,
>>> # we'll first cast to fp16 and back to fp32
>>> params = model.to_fp16(params)
>>> # now cast back to fp32
>>> params = model.to_fp32(params)

PushToHubMixin

class diffusers.utils.PushToHubMixin

( )

A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub.

push_to_hub

( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None )

Parameters

repo_id (str) —
The name of the repository you want to push your model, scheduler, or pipeline files to. It should
contain your organization name when pushing to an organization. repo_id can also be a path to a local
directory.

commit_message (str, optional) —
Message to commit while pushing. Defaults to "Upload {object}".

private (bool, optional) —
Whether or not the repository created should be private.

token (str, optional) —
The token to use as HTTP bearer authorization for remote files. The token generated when running
huggingface-cli login (stored in ~/.huggingface).

create_pr (bool, optional, defaults to False) —
Whether or not to create a PR with the uploaded files or directly commit.

safe_serialization (bool, optional, defaults to True) —
Whether or not to convert the model weights to the safetensors format.

variant (str, optional) —
If specified, weights are saved in the format pytorch_model.<variant>.bin.

Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub.

Examples:

from diffusers import UNet2DConditionModel
unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet")

# Push the `unet` to your namespace with the name "my-finetuned-unet".
unet.push_to_hub("my-finetuned-unet")

# Push the `unet` to an organization with the name "my-finetuned-unet".
unet.push_to_hub("your-org/my-finetuned-unet")
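As a small illustration of how the variant parameter affects saved weight filenames (the pytorch_model.<variant>.bin pattern from the parameter description above), here is a sketch; weights_filename is a hypothetical helper, not part of the diffusers API:

```python
# Sketch of the weight filename pattern described for `variant`:
# weights are saved as pytorch_model.<variant>.bin when a variant is
# given, and pytorch_model.bin otherwise. `weights_filename` is a
# hypothetical helper, not part of the diffusers API.
def weights_filename(variant=None):
    if variant is not None:
        return f"pytorch_model.{variant}.bin"
    return "pytorch_model.bin"

print(weights_filename("fp16"))  # pytorch_model.fp16.bin
print(weights_filename())        # pytorch_model.bin
```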