train the model, set it back in training mode with model.train(). To use private or gated models, log in with
huggingface-cli login. You can also activate the special
“offline-mode” to use this method in a firewalled environment.

Example:

from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

If you get the error message below, you need to finetune the weights for your downstream task:

Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 a...
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int

Parameters

only_trainable (bool, optional, defaults to False) —
Whether or not to return only the number of trainable parameters.
exclude_embeddings (bool, optional, defaults to False) —
Whether or not to return only the number of non-embedding parameters.

Returns

int

The number of parameters.
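Conceptually, the count is a filtered sum over parameter sizes, honoring the two flags above. A minimal pure-Python sketch of that idea (the model stand-in and the name-based embedding filter are hypothetical, purely for illustration — this is not the diffusers implementation):

```python
from math import prod

# Hypothetical stand-in for a model: (name, shape, requires_grad) tuples.
params = [
    ("conv_in.weight", (320, 4, 3, 3), True),
    ("time_embedding.weight", (1280, 320), True),
    ("pos_embedding.weight", (77, 768), False),  # frozen parameter
]

def num_parameters(params, only_trainable=False, exclude_embeddings=False):
    total = 0
    for name, shape, requires_grad in params:
        if only_trainable and not requires_grad:
            continue  # skip frozen parameters
        if exclude_embeddings and "embedding" in name:
            continue  # crude name-based filter, illustration only
        total += prod(shape)
    return total

all_params = num_parameters(params)
trainable = num_parameters(params, only_trainable=True)
```

The real method walks the module tree rather than a flat list, but the filtering logic is the same.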
Get number of (trainable or non-embedding) parameters in the module.

Example:

from diffusers import UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
unet.num_parameters(only_trainable=True)
859520964

save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs )

Parameters

save_directory (str or os.PathLike) —
Directory to save a model and its configuration file to. Will be created if it doesn’t exist.
is_main_process (bool, optional, defaults to True) —
Whether the process calling this is the main process or not. Useful during distributed training when you
need to call this function on all processes. In this case, set is_main_process=True only on the main
process to avoid race conditions.
save_function (Callable) —
The function to use to save the state dictionary. Useful during distributed training when you need to
replace torch.save with another method. Can be configured with the environment variable
DIFFUSERS_SAVE_MODE.
safe_serialization (bool, optional, defaults to True) —
Whether to save the model using safetensors or the traditional PyTorch way with pickle.
variant (str, optional) —
If specified, weights are saved in the format pytorch_model.<variant>.bin.
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the
repository you want to push to with repo_id (it will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.

Save a model and its configuration file to a directory so that it can be reloaded using the
from_pretrained() class method.

set_adapter < source > ( adapter_name: Union )

Parameters

adapter_name (Union[str, List[str]]) —
The list of adapters to set, or the adapter name in the case of a single adapter.

Sets a specific adapter by forcing the model to use only that adapter and disabling the other adapters.

If you are not familiar with adapters and PEFT methods, read more about them in the official PEFT documentation: https://huggingface.co/docs/peft

FlaxModelMixin

class diffusers.FlaxModelMixin < source > ( )

Base class for all Flax models.

FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and
saving models.

config_name (str) — Filename to save a model to when calling save_pretrained().

from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = <class 'jax.numpy.float32'> *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either: |
A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model |
hosted on the Hub. |
A path to a directory (for example ./my_model_directory) containing the model weights saved |
using save_pretrained(). |
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — |
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and |
jax.numpy.bfloat16 (on TPUs). |
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If |
specified, all the computation will be performed with the given dtype. |
This only specifies the dtype of the computation and does not influence the dtype of model |
parameters. |
If you wish to change the dtype of the model parameters, see to_fp16() and |
to_bf16(). |
model_args (sequence of positional arguments, optional) — |
All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — |
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache |
is not used. force_download (bool, optional, defaults to False) — |
Whether or not to force the (re-)download of the model weights and configuration files, overriding the |
cached versions if they exist. resume_download (bool, optional, defaults to False) — |
Whether or not to resume downloading the model weights and configuration files. If set to False, any |
incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — |
A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) —
Whether to only load local model weights and configuration files or not. If set to True, the model |
won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — |
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier |
allowed by Git. from_pt (bool, optional, defaults to False) — |
Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — |
Can be used to update the configuration object (after it is loaded) and initiate the model (for |
example, output_attentions=True). Behaves differently depending on whether a config is provided or |
automatically loaded: |
If a configuration is provided with config, kwargs are directly passed to the underlying |
model’s __init__ method (we assume all relevant updates to the configuration have already been |
done). |
If a configuration is not provided, kwargs are first passed to the configuration class |
initialization function from_config(). Each key of the kwargs that corresponds |
to a configuration attribute is used to override said attribute with the supplied kwargs value. |
Remaining keys that do not correspond to any configuration attribute are passed to the underlying |
model’s __init__ function. |
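The kwargs-splitting behavior described above can be sketched in plain Python (a hypothetical helper with assumed configuration attribute names, not the actual diffusers code): keys that match configuration attributes override the config, and the rest are forwarded to the model's __init__.

```python
# Assumed configuration attributes, for illustration only.
config_defaults = {"sample_size": 64, "in_channels": 4}

def split_kwargs(config_defaults, **kwargs):
    """Split kwargs into config overrides and model __init__ kwargs."""
    config = dict(config_defaults)
    model_kwargs = {}
    for key, value in kwargs.items():
        if key in config:
            config[key] = value        # overrides a configuration attribute
        else:
            model_kwargs[key] = value  # passed on to the model's __init__
    return config, model_kwargs

config, model_kwargs = split_kwargs(
    config_defaults, in_channels=9, output_attentions=True
)
```

When a config object is passed explicitly, this split is skipped and all kwargs go straight to __init__, as noted above.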
Instantiate a pretrained Flax model from a pretrained model configuration.

Examples:

>>> from diffusers import FlaxUNet2DConditionModel

>>> # Download model and configuration from huggingface.co and cache.
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")

>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable).
>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/")

If you get the error message below, you need to finetune the weights for your downstream task:

Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly...
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs )

Parameters

save_directory (str or os.PathLike) —
Directory to save a model and its configuration file to. Will be created if it doesn’t exist.
params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.
is_main_process (bool, optional, defaults to True) —
Whether the process calling this is the main process or not. Useful during distributed training when you
need to call this function on all processes. In this case, set is_main_process=True only on the main
process to avoid race conditions.
push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the
repository you want to push to with repo_id (it will default to the name of save_directory in your
namespace).
kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method.

Save a model and its configuration file to a directory so that it can be reloaded using the
from_pretrained() class method.

to_bf16 < source > ( params: Union mask: Any = None )

Parameters

params (Union[Dict, FrozenDict]) —
A PyTree of model parameters.
mask (Union[Dict, FrozenDict]) —
A PyTree with the same structure as the params tree. The leaves should be booleans: True
for params you want to cast, and False for those you want to skip.

Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast
the params in place.

This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full