SDXL LoRA DreamBooth - slhomme/test-style-seb

Example gallery prompts:

- a referee holding up a yellow card in the style of <s0><s1>
- a woman with glasses and a brush in her hand in the style of <s0><s1>
- a painting of a large building with trees and people in the style of <s0><s1>
- a painting of people sitting at a table in a restaurant in the style of <s0><s1>
- a group of people sitting around a table with beer in the style of <s0><s1>
- a waiter is serving food to people in a restaurant in the style of <s0><s1>
- a person riding skis down a snowy slope in the style of <s0><s1>
- a woman playing tennis on a court with a ball in the style of <s0><s1>
Model description
These are slhomme/test-style-seb LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
Download model
Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- LoRA: download `test-style-seb.safetensors` here 💾.
  - Place it in your `models/Lora` folder.
  - On AUTOMATIC1111, load the LoRA by adding `<lora:test-style-seb:1>` to your prompt. On ComfyUI, just load it as a regular LoRA.
- Embeddings: download `test-style-seb_emb.safetensors` here 💾.
  - Place it in your `embeddings` folder.
  - Use it by adding `test-style-seb_emb` to your prompt, for example `in the style of test-style-seb_emb`. You need both the LoRA and the embeddings, as they were trained together for this LoRA; a combined example prompt follows this list.
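As a hedged illustration of combining both pieces in AUTOMATIC1111, a full prompt could look like `a painting of a large building with trees and people in the style of test-style-seb_emb <lora:test-style-seb:1>` (the subject is arbitrary; only the embedding token and the LoRA tag come from the instructions above).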
Use it with the 🧨 diffusers library
```py
from diffusers import AutoPipelineForText2Image
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
import torch

# Load the SDXL base pipeline and apply the LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('slhomme/test-style-seb', weight_name='pytorch_lora_weights.safetensors')

# Download the pivotal-tuning embeddings and load them into both SDXL text encoders
embedding_path = hf_hub_download(repo_id='slhomme/test-style-seb', filename='test-style-seb_emb.safetensors', repo_type='model')
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict['clip_l'], token=['<s0>', '<s1>'], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict['clip_g'], token=['<s0>', '<s1>'], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

# Generate an image with the style tokens in the prompt
image = pipeline('in the style of <s0><s1>').images[0]
```
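A minimal usage sketch following on from the snippet above; the prompt is one of the gallery prompts, and the step count and guidance scale are illustrative values, not settings taken from the training run:

```py
# Illustrative generation settings (assumed, not taken from the training config)
image = pipeline(
    'a painting of a large building with trees and people in the style of <s0><s1>',
    num_inference_steps=30,  # assumed typical SDXL step count
    guidance_scale=7.5,      # assumed typical guidance value
).images[0]
image.save('test-style-seb_sample.png')
```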
For more details, including weighting, merging, and fusing LoRAs, check the documentation on loading LoRAs in diffusers.
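The weighting and fusing mentioned above can be sketched with the generic diffusers LoRA API; the 0.8 scale below is an arbitrary example, not a recommended value for this model:

```py
# Fuse the already-loaded LoRA into the base weights at a reduced strength,
# generate, then unfuse to restore the original weights.
pipeline.fuse_lora(lora_scale=0.8)  # arbitrary example scale
image = pipeline('in the style of <s0><s1>').images[0]
pipeline.unfuse_lora()
```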
Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept TOK → use <s0><s1> in your prompt
Details
All files & versions are available in the model repository.
The weights were trained using the 🧨 diffusers Advanced DreamBooth Training Script.
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
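Because training used the madebyollin/sdxl-vae-fp16-fix VAE, here is a hedged sketch of loading that same VAE for fp16 inference; this is optional and mirrors the pipeline setup from the diffusers example above:

```py
from diffusers import AutoencoderKL, AutoPipelineForText2Image
import torch

# Assumption: reuse the fp16-fix VAE at inference time to avoid fp16 decoding artifacts
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    vae=vae,
    torch_dtype=torch.float16,
).to('cuda')
```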