```py
image = pipeline(
    prompt_embeds=prompt_embeds,  # generated from Compel
    negative_prompt_embeds=negative_prompt_embeds,  # generated from Compel
).images[0]
```

## ControlNet

As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, …
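The conditioning mechanism can be sketched in a few lines of plain Python: a ControlNet is a trainable copy of the UNet encoder that consumes the conditioning image and adds its residuals to the frozen UNet's skip connections. Everything below (the three "blocks" and their weights) is a toy stand-in, not the Diffusers API:

```python
# Toy sketch of ControlNet conditioning: a trainable copy of the UNet encoder
# processes the conditioning image, and its residuals are summed into the
# frozen UNet's skip connections. All numbers and "blocks" are illustrative.

def unet_encoder(latent):
    # stand-in for the frozen UNet encoder: three downsampling "blocks"
    return [latent * 4, latent * 2, latent * 1]

def controlnet_residuals(conditioning, scale=1):
    # stand-in for the trainable copy, driven by the extra image (e.g. a depth map)
    return [scale * conditioning * w for w in (1, 2, 3)]

def denoise_step(latent, conditioning, scale=1):
    skips = unet_encoder(latent)
    residuals = controlnet_residuals(conditioning, scale)
    # each ControlNet residual is added to the matching skip connection
    return [s + r for s, r in zip(skips, residuals)]

# with scale=0 the UNet behaves exactly like the base model
print(denoise_step(1, 2, scale=0))  # [4, 2, 1]
print(denoise_step(1, 2, scale=1))  # [6, 6, 7]
```

This is why a ControlNet can add spatial guidance without retraining the base model: the frozen UNet is untouched, and the conditioning only enters as additive residuals.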
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda")
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
```

For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides.
# Load adapters

There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights.
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda")
prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration"
image = pipeline(prompt).images[0]
image
```

## Textual inversion

Textual inversion is very similar to DreamBooth, and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt.
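Conceptually, textual inversion learns a single new embedding vector and registers it under a placeholder token; the rest of the model stays frozen. A toy pure-Python sketch (the dictionary "embedding table" and the `load_embedding` helper are illustrative stand-ins, not the Diffusers API):

```python
# Toy embedding table: token -> vector. Textual inversion adds ONE new row,
# keyed by a placeholder token like "<gta5-artwork>"; nothing else changes.
embedding_table = {
    "bear": [0.1, 0.9],
    "pizza": [0.8, 0.2],
}

def load_embedding(table, token, learned_vector):
    # illustrative stand-in for what load_textual_inversion() registers
    new_table = dict(table)
    new_table[token] = learned_vector
    return new_table

def embed_prompt(table, prompt):
    # prompts that use the placeholder token pick up the learned vector
    return [table[tok] for tok in prompt.split()]

table = load_embedding(embedding_table, "<gta5-artwork>", [0.3, 0.7])
print(embed_prompt(table, "bear pizza <gta5-artwork>"))
# [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
```

Because only one vector is learned, textual inversion files are tiny compared to full model checkpoints, and the placeholder token must appear in the prompt for the concept to take effect.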
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
```

Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. Let's load the sd-concepts-library/gta5-artwork embeddings; you'll need to include the special word `<gta5-artwork>` in your prompt to trigger it:

```py
pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork")

prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, <gta5-artwork> style"
image = pipeline(prompt).images[0]
image
```

Textual inversion can also be trained on undesirable things to create negative embeddings that discourage a model from generating images with those undesirable things, like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You'll also load the embeddings with load_textual_inversion(), but this time you'll need two more parameters, weight_name and token:
"sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" |
) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" |
negative_prompt = "EasyNegative" |
image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] |
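Why does putting the token in negative_prompt steer generation away from it? During classifier-free guidance, the negative-prompt prediction takes the place of the unconditional one, and the sampler moves away from it. A one-dimensional toy sketch of the standard guidance update (the formula is the usual one; the numbers are made up):

```python
# Classifier-free guidance, one-dimensional toy version: the final prediction
# moves FROM the negative-prompt prediction TOWARD the positive-prompt
# prediction, scaled by the guidance weight g.
def guided_prediction(pred_positive, pred_negative, g):
    return pred_negative + g * (pred_positive - pred_negative)

# g = 1 reproduces the positive prediction; larger g pushes further away
# from whatever the negative embedding encodes (e.g. blur, extra fingers).
print(guided_prediction(1.0, 0.0, 1.0))  # 1.0
print(guided_prediction(1.0, 0.0, 7.5))  # 7.5
```

A trained negative embedding like EasyNegative simply makes the "negative" direction a much better description of the artifacts you want to avoid than a handwritten negative prompt would be.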
## LoRA

Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MB). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then training only the new weights instead of the entire model.
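The small file size follows directly from the low-rank factorization: instead of storing a full d × d weight update, LoRA stores two thin factors of rank r. A quick back-of-the-envelope check (d and r are illustrative sizes, not values from any particular checkpoint):

```python
# Parameter count: full d x d update vs. rank-r LoRA factors B (d x r) and A (r x d).
def full_update_params(d):
    return d * d

def lora_params(d, r):
    return 2 * d * r

d, r = 4096, 8  # illustrative sizes for one attention weight matrix
print(full_update_params(d))                       # 16777216
print(lora_params(d, r))                           # 65536
print(full_update_params(d) // lora_params(d, r))  # 256x smaller
```

Summed over all the adapted layers, this factor-of-hundreds reduction is what keeps LoRA files in the hundreds-of-megabytes range rather than the multi-gigabyte size of a full checkpoint.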
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
```

Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository:

```py
pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", ...)

prompt = "bears, pizza bites"
image = pipeline(prompt).images[0]
image
```

The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way to load LoRAs because it can handle cases where:

- the LoRA weights don't have separate identifiers for the UNet and text encoder
- the LoRA weights do have separate identifiers for the UNet and text encoder

But if you only need to load LoRA weights into the UNet, you can use the load_attn_procs() method. Let's load the jbilcke-hf/sdxl-cinematic-1 LoRA:
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors")

# use cnmt in the prompt to trigger the LoRA
prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration"
image = pipeline(prompt).images[0]
image
```

For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. To unload the LoRA weights, use the unload_lora_weights() method to restore the base model weights.
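The scale parameter can be read as a linear blend between the base weights and the LoRA update, W_eff = W_base + scale · (B @ A). A pure-Python sketch with toy 2×2 matrices (the real implementation applies this inside the attention processors; the matrices here are made up):

```python
# Effective weight under cross_attention_kwargs={"scale": s}:
# W_eff = W_base + s * (B @ A). s = 0 -> base model, s = 1 -> full LoRA.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def effective_weight(W, B, A, scale):
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]

W = [[1.0, 0.0], [0.0, 1.0]]  # base weight (2x2)
B = [[1.0], [0.0]]            # d x r factor, with rank r = 1
A = [[0.0, 2.0]]              # r x d factor

print(effective_weight(W, B, A, 0.0))  # [[1.0, 0.0], [0.0, 1.0]]  (base model)
print(effective_weight(W, B, A, 1.0))  # [[1.0, 2.0], [0.0, 1.0]]  (full LoRA)
print(effective_weight(W, B, A, 0.5))  # [[1.0, 1.0], [0.0, 1.0]]  (halfway)
```

Intermediate scale values therefore interpolate smoothly between the base model's behavior and the fully finetuned LoRA's behavior.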
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors")
```

Generate an image:

```py
# use bl3uprint in the prompt to trigger the LoRA
prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop"
image = pipeline(prompt).images[0]
image
```

Some limitations of using Kohya LoRAs with 🤗 Diffusers include:

- Images may not look like those generated by UIs, like ComfyUI, for multiple reasons, which are explained here.
- LyCORIS checkpoints aren't fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Ha…
## IP-Adapter

Official IP-Adapter checkpoints are available from h94/IP-Adapter. To start, load a Stable Diffusion checkpoint:

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
```

Then load the IP-Adapter weights and add them to the pipeline with the load_ip_adapter() method:

```py
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15…")

# the conditioning image passed as ip_adapter_image below
# (the example URL is truncated in this copy)
image = load_image("...")
```
```py
generator = torch.Generator(device="cpu").manual_seed(33)
images = pipeline(
    prompt="best quality, high quality, wearing sunglasses",
    ip_adapter_image=image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    generator=generator,
).images[0]
images
```

## IP-Adapter Plus

IP-Adapter relies on an image encoder to generate image features. If the IP-Adapter repository contains an image_encoder subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you'll need to explicitly load the image encoder with a CLIPVisionModelWithProjection model and pass it to the pipeline:
```py
from diffusers import AutoPipelineForText2Image
from transformers import CLIPVisionModelWithProjection
import torch

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16
)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    torch_dtype=torch.float16
).to("cuda")
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.safetensors")
```
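Under the hood, IP-Adapter adds a second, "decoupled" cross-attention branch: the encoder's image features get their own key/value projections, and that branch's output is summed with the text cross-attention output. A scalar toy sketch of the composition (the functions and weights below are illustrative stand-ins, not the real layers):

```python
# Decoupled cross-attention, toy version: text and image features are attended
# to separately, then the image branch is added with an adapter scale.
def cross_attention(query, features, weight):
    # scalar stand-in for attn(Q, K(features), V(features))
    return query * features * weight

def ip_adapter_attention(query, text_features, image_features, scale=1.0):
    text_out = cross_attention(query, text_features, 0.5)
    image_out = cross_attention(query, image_features, 0.25)  # new K/V projections
    return text_out + scale * image_out

# scale = 0 disables the image branch -> plain text-conditioned attention
print(ip_adapter_attention(2.0, 1.0, 4.0, scale=0.0))  # 1.0
print(ip_adapter_attention(2.0, 1.0, 4.0, scale=1.0))  # 3.0
```

Because the image branch is purely additive, the base model's text conditioning keeps working unchanged, and the adapter's influence can be dialed up or down at inference time.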
# Stable Diffusion text-to-image fine-tuning

The train_text_to_image.py script shows how to fine-tune the Stable Diffusion model on your own dataset.

The text-to-image fine-tuning script is experimental. It's easy to overfit and run into issues like catastrophic forgetting. We recommend exploring different hyperparameters to get the best results on your dataset.
## Running locally

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

```bash
pip install git+https://github.com/huggingface/diffusers.git
pip install -U -r requirements.txt
```

And initialize an 🤗 Accelerate environment with:

```bash
accelerate config
```
You need to accept the model license before downloading or using the weights. In this example we'll use model version v1-4, so you'll need to visit its model card, read the license, and tick the checkbox if you agree.

You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need an access token for the code to work. For more information on access tokens, please refer to this section of the documentation.

Run the following command to authenticate your token:

```bash
huggingface-cli login
```
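Once authenticated, a typical launch looks like the sketch below. The flag names mirror common arguments of the example training script, but treat them as assumptions and check `python train_text_to_image.py --help` for the authoritative list; the dataset name and output path are placeholders.

```shell
# Hypothetical launch: verify flag names against the script's --help output
# before running. Dataset and output_dir are placeholders.
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --dataset_name="lambdalabs/pokemon-blip-captions" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-05 \
  --max_train_steps=15000 \
  --output_dir="sd-finetuned-model"
```

Training will download the base checkpoint on first run, so expect significant disk and GPU memory usage.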