Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.

token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.

revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.

from_flax (bool, optional, defaults to False) — Load the model weights from a Flax checkpoint save file.

subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.

mirror (str, optional) — Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

device_map (str or Dict[str, Union[int, str, torch.device]], optional) — A map that specifies where each submodule should go. It doesn't need to be defined for each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device.
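As an illustration, an explicit device_map could look like the following sketch (the module names here are hypothetical placeholders; the real keys must match the submodule names of the model you are loading):

```python
# Hypothetical device_map for a UNet-style model: submodules under
# "conv_in", "down_blocks", and "mid_block" go to GPU 0, "up_blocks"
# to GPU 1, and the final convolution stays on the CPU.
device_map = {
    "conv_in": 0,
    "down_blocks": 0,
    "mid_block": 0,
    "up_blocks": 1,
    "conv_out": "cpu",
}
```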
Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For more information about each option, see designing a device map.

max_memory (Dict, optional) — A dictionary mapping device identifiers to the maximum memory each device may use. Defaults to the maximum memory available for each GPU and the available CPU RAM if unset.

offload_folder (str or os.PathLike, optional) — The path to offload weights to if device_map contains the value "disk".

offload_state_dict (bool, optional) — If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if the weight of the CPU state dict plus the biggest shard of the checkpoint does not fit. Defaults to True when there is some disk offload.

low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — Speed up model loading by only loading the pretrained weights and not initializing the weights. This also tries to use no more than 1x the model size in CPU memory (including peak memory) while loading the model. Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this argument to True will raise an error.

variant (str, optional) — Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when loading from_flax.

use_safetensors (bool, optional, defaults to None) — If set to None, the safetensors weights are downloaded if they're available and the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.

Instantiate a pretrained PyTorch model from a pretrained model configuration.

The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To
train the model, set it back in training mode with model.train().

To use private or gated models, log in with huggingface-cli login. You can also activate the special “offline-mode” to use this method in a firewalled environment.

Example:

from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

If you get the error message below, you need to finetune the weights for your downstream task:

Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match:
- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

num_parameters

( only_trainable: bool = False exclude_embeddings: bool = False ) → int

Parameters

only_trainable (bool, optional, defaults to False) —
Whether or not to return only the number of trainable parameters.

exclude_embeddings (bool, optional, defaults to False) — Whether or not to return only the number of non-embedding parameters.

Returns

int

The number of parameters.

Get the number of (trainable or non-embedding) parameters in the module.

Example:

from diffusers import UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
unet.num_parameters(only_trainable=True)
859520964

save_pretrained

( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs )

Parameters

save_directory (str or os.PathLike) —
Directory to save a model and its configuration file to. Will be created if it doesn't exist.

is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.

save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.

safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.

variant (str, optional) — If specified, weights are saved in the format pytorch_model.<variant>.bin.

push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).

kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method.

Save a model and its configuration file to a directory so that it can be reloaded using the from_pretrained() class method.

FlaxModelMixin

class diffusers.FlaxModelMixin

( )

Base class for all Flax models.

FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and saving models.

config_name (str) — Filename to save a model to when calling save_pretrained().

from_pretrained

( pretrained_model_name_or_path: Union dtype: dtype = <class 'jax.numpy.float32'> *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model hosted on the Hub.
- A path to a directory (for example ./my_model_directory) containing the model weights saved using save_pretrained().
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) —
The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and
jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If
specified, all the computation will be performed with the given dtype.
This only specifies the dtype of the computation and does not influence the dtype of model
parameters.
If you wish to change the dtype of the model parameters, see to_fp16() and
to_bf16().
model_args (sequence of positional arguments, optional) — All remaining positional arguments are passed to the underlying model's __init__ method.

cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.

force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) — Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.

proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.

revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.

from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file.

kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (for example, output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
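As a rough sketch of that last behavior, the extra keyword arguments can be thought of as being split into configuration overrides and model-init arguments. The helper below is hypothetical and only illustrates the idea; the real diffusers logic is more involved:

```python
# Hypothetical sketch: kwargs whose names match configuration attributes
# update the config, while the remainder are forwarded to the model's
# __init__ (not the actual diffusers implementation).
def split_kwargs(config_attrs, kwargs):
    config_updates = {k: v for k, v in kwargs.items() if k in config_attrs}
    model_kwargs = {k: v for k, v in kwargs.items() if k not in config_attrs}
    return config_updates, model_kwargs

cfg, mdl = split_kwargs({"sample_size", "in_channels"},
                        {"sample_size": 64, "output_attentions": True})
# cfg == {"sample_size": 64}; mdl == {"output_attentions": True}
```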