--dataset_name="huggan/flowers-102-categories" \
--output_dir="ddpm-ema-flowers-64" \
--mixed_precision="fp16" \
--push_to_hub
</hfoption>
<hfoption id="multi-GPU">
If you’re training with more than one GPU, add the `--multi_gpu` parameter to the training command:

```bash
accelerate launch --multi_gpu train_unconditional.py \
  --dataset_name="huggan/flowers-102-categories" \
  --output_dir="ddpm-ema-flowers-64" \
  --mixed_precision="fp16" \
  --push_to_hub
```
</hfoption>
</hfoptions>
The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference:

```py
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
image = pipeline().images[0]
```
Schedulers

🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is g...
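The contract just described — take the current sample and a timestep, return a slightly less noisy sample — can be illustrated with a toy stand-in. The `ToyScheduler` class below is invented for the sketch and only mimics the shape of a scheduler’s `step` call; it is not the 🤗 Diffusers API:

```python
# Toy illustration of the scheduler contract: step(model_output, timestep, sample)
# returns a less-noisy sample. This class is invented for illustration only.

class ToyScheduler:
    def __init__(self, num_train_timesteps=1000):
        self.num_train_timesteps = num_train_timesteps
        # Denoising iterates from the noisiest timestep down to 0.
        self.timesteps = list(range(num_train_timesteps - 1, -1, -1))

    def step(self, model_output, timestep, sample):
        # Remove a timestep-dependent fraction of the predicted noise.
        fraction = 1.0 / (timestep + 1)
        return sample - fraction * model_output


scheduler = ToyScheduler(num_train_timesteps=4)
sample = 1.0           # stand-in for a noisy image tensor
predicted_noise = 0.5  # stand-in for the model's output at each step

for t in scheduler.timesteps:
    sample = scheduler.step(predicted_noise, t, sample)

print(sample)
```

The real schedulers differ in how they compute that per-step update, which is exactly what the scheduler classes documented below encapsulate.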
functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps.

Class attributes:

- _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler class. Use from_config() to load a different compatible scheduler class (should be overridden by the parent class).
by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) —
Can be either:
A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on
the Hub.
A path to a directory (for example ./my_model_directory) containing the scheduler
configuration saved with save_pretrained().
subfolder (str, optional) —
The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) —
Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) —
Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
is not used. force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the
cached versions if they exist. resume_download (bool, optional, defaults to False) —
Whether or not to resume downloading the model weights and configuration files. If set to False, any
incompletely downloaded files are deleted. proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) —
Whether to only load local model weights and configuration files or not. If set to True, the model
won’t be downloaded from the Hub. token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, the token generated from
diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier
allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with
huggingface-cli login. You can also activate the special
“offline-mode” to use this method in a
firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) —
Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) —
Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the
repository you want to push to with repo_id (will default to the name of save_directory in your
namespace). kwargs (Dict[str, Any], optional) —
Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the
from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) —
Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the
denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, th...
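Conceptually, save_pretrained() serializes the scheduler’s configuration to a JSON file and from_pretrained() reads it back. A minimal stdlib sketch of that round trip (the file name scheduler_config.json and the keys shown are illustrative, not guaranteed to match the library’s exact layout):

```python
import json
import tempfile
from pathlib import Path

# An illustrative scheduler configuration, as a plain dict.
config = {"_class_name": "DDPMScheduler", "num_train_timesteps": 1000}

with tempfile.TemporaryDirectory() as save_directory:
    # save_pretrained: write the configuration to a JSON file in save_directory.
    path = Path(save_directory) / "scheduler_config.json"
    path.write_text(json.dumps(config, indent=2))

    # from_pretrained: read the JSON back and rebuild the configuration.
    loaded = json.loads(path.read_text())

print(loaded["num_train_timesteps"])
```

This is why a directory saved with save_pretrained() can be passed straight back to from_pretrained() as a local path.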
- repo_id (str) — The name of the repository you want to push your model, scheduler, or pipeline files to. It should contain your organization name when pushing to an organization. repo_id can also be a path to a local directory.
- commit_message (str, optional) — Message to commit while pushing. Defaults to "Upload {object}".
- private (bool, optional) — Whether or not the repository created should be private.
- token (str, optional) — The token to use as HTTP bearer authorization for remote files. The token generated when running huggingface-cli login (stored in ~/.huggingface).
- create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
- safe_serialization (bool, optional, defaults to True) — Whether or not to convert the model weights to the safetensors format.
- variant (str, optional) — If specified, weights are saved in the format pytorch_model.<variant>.bin.

Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub.

Examples:

```py
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet")

# Push the `unet` to your namespace with the name "my-finetuned-unet".
unet.push_to_hub("my-finetuned-unet")

# Push the `unet` to an organization with the name "my-finetuned-unet".
unet.push_to_hub("your-org/my-finetuned-unet")
```
AutoPipeline

🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For ex...
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune"
image = pipeline(prompt, num_inference_steps=25).images[0]
image
```

Under the hood, AutoPipelineForText2Image:

- automatically detects a "stable-diffusion" class from the model_index.json file
- loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name

Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion"...
```py
from diffusers import AutoPipelineForImage2Image
import torch
import requests
from PIL import Image
from io import BytesIO

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

prompt = "a portrait of a dog wearing a pearl earring"
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg"
response = requests.get(url)
```
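The detection step the AutoPipeline classes perform — read model_index.json, look at its _class_name entry, and map it to a pipeline class for the requested task — can be sketched with the stdlib. The registry dict below is illustrative, not the library’s actual mapping table:

```python
import json

# Illustrative registry: maps the class name recorded in model_index.json
# to the pipeline class an auto-pipeline would pick for text-to-image.
TEXT2IMAGE_REGISTRY = {
    "StableDiffusionPipeline": "StableDiffusionPipeline",
    "StableDiffusionXLPipeline": "StableDiffusionXLPipeline",
}


def detect_pipeline_class(model_index_json: str) -> str:
    """Pick the text-to-image pipeline class named in model_index.json."""
    class_name = json.loads(model_index_json)["_class_name"]
    return TEXT2IMAGE_REGISTRY[class_name]


# A minimal stand-in for a repository's model_index.json contents.
model_index = json.dumps({"_class_name": "StableDiffusionPipeline"})
print(detect_pipeline_class(model_index))
```

Because the decision is driven entirely by the checkpoint’s metadata, the same pretrained weights resolve to the right pipeline for each task without you naming the class yourself.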