Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).

Example:

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
```

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first
(for example from civitAI) and then load the vector locally:

```py
from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
```

from_single_file( pretrained_model_link_or_path, **kwargs )

Parameters

pretrained_model_link_or_path (str or os.PathLike, optional) –
Can be either:

- A link to the .ckpt file (for example "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
- A path to a file containing all pipeline weights.

torch_dtype (str or torch.dtype, optional) –
Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the dtype is automatically derived from the model's weights.

force_download (bool, optional, defaults to False) –
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

cache_dir (Union[str, os.PathLike], optional) –
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.

resume_download (bool, optional, defaults to False) –
Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.

proxies (Dict[str, str], optional) –
A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

local_files_only (bool, optional, defaults to False) –
Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.

token (str or bool, optional) –
The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.

revision (str, optional, defaults to "main") –
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.

use_safetensors (bool, optional, defaults to None) –
If set to None, the safetensors weights are downloaded if they're available and if the safetensors library is installed. If set to True, the model is forcibly loaded from safetensors weights. If set to False, safetensors weights are not loaded.

extract_ema (bool, optional, defaults to False) –
Whether to extract the EMA weights or not. Pass True to extract the EMA weights, which usually yield higher-quality images for inference. Non-EMA weights are usually better for continuing finetuning.

upcast_attention (bool, optional, defaults to None) –
Whether the attention computation should always be upcasted.

image_size (int, optional, defaults to 512) –
The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 768 for Stable Diffusion v2.

prediction_type (str, optional) –
The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2.

num_in_channels (int, optional, defaults to None) –
The number of input channels. If None, it is automatically inferred.

scheduler_type (str, optional, defaults to "pndm") –
Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"].

load_safety_checker (bool, optional, defaults to True) –
Whether to load the safety checker or not.

text_encoder (CLIPTextModel, optional, defaults to None) –
An instance of CLIPTextModel to use, specifically the clip-vit-large-patch14 variant. If this parameter is None, the function loads a new instance of CLIPTextModel by itself if needed.

vae (AutoencoderKL, optional, defaults to None) –
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. If this parameter is None, the function loads a new instance of AutoencoderKL by itself if needed.

tokenizer (CLIPTokenizer, optional, defaults to None) –
An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance of CLIPTokenizer by itself if needed.

original_config_file (str) –
Path to the .yaml config file corresponding to the original architecture. If None, it is automatically inferred by looking for a key that only exists in SD2.0 models.

kwargs (remaining dictionary of keyword arguments, optional) –
Can be used to overwrite load- and saveable variables (for example, the pipeline components of the specific pipeline class). The overwritten components are directly passed to the pipeline's __init__
method. See the example below for more information.

Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors format. The pipeline is set in evaluation mode (model.eval()) by default.

Examples:

```py
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )

>>> # Load pipeline from a local file
>>> # (assuming the file was downloaded to ./v1-5-pruned-emaonly.ckpt)
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")

>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
...     "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
...     torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
```

load_lora_weights( pretrained_model_name_or_path_or_dict: Union, adapter_name=None, **kwargs )

Parameters

pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) –
See lora_state_dict().

kwargs (dict, optional) –
See lora_state_dict().

adapter_name (str, optional) –
Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.

Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.

All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded, load_lora_into_unet() for how it is loaded into self.unet, and load_lora_into_text_encoder() for how it is loaded into self.text_encoder.

save_lora_weights( save_directory: Union, unet_lora_layers: Dict = None, text_encoder_lora_layers: Dict = None, transformer_lora_layers: Dict = None, is_main_process: bool = True, weight_name: str = None, save_function: Callable = None, safe_serialization: bool = True )

Parameters

save_directory (str or os.PathLike) –
Directory to save LoRA parameters to. Will be created if it doesn't exist.

unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) –
State dict of the LoRA layers corresponding to the unet.

text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) –
State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.

is_main_process (bool, optional, defaults to True) –
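As a rough illustration of the save flow documented above (save_directory created if missing; the unet and text encoder LoRA state dicts passed in explicitly), here is a minimal sketch using plain Python values. The function name save_lora_sketch, the JSON serialization, the key prefixes, and the default weight_name are all hypothetical simplifications, not the diffusers implementation, which serializes tensors (safetensors by default via safe_serialization=True):

```python
import json
import os


def save_lora_sketch(save_directory, unet_lora_layers=None,
                     text_encoder_lora_layers=None,
                     weight_name="lora_weights.json"):
    # The directory is created if it doesn't exist, as documented for
    # save_lora_weights.
    os.makedirs(save_directory, exist_ok=True)

    state = {}
    if unet_lora_layers:
        # Prefix keys so unet and text encoder entries cannot collide
        # (hypothetical naming, for illustration only).
        state.update({f"unet.{k}": v for k, v in unet_lora_layers.items()})
    if text_encoder_lora_layers:
        # The text encoder state dict must be passed in explicitly,
        # since it comes from Transformers rather than the pipeline.
        state.update({f"text_encoder.{k}": v
                      for k, v in text_encoder_lora_layers.items()})

    path = os.path.join(save_directory, weight_name)
    with open(path, "w") as f:
        json.dump(state, f)
    return path
```

The sketch only shows how the two state dicts are combined and written under save_directory; the real method additionally handles transformer_lora_layers, is_main_process gating, and a custom save_function.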