---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: a photo of sks dog
widget:
- text: A photo of sks dog in a bucket
  output:
    url: image_0.png
- text: A photo of sks dog in a bucket
  output:
    url: image_1.png
- text: A photo of sks dog in a bucket
  output:
    url: image_2.png
- text: A photo of sks dog in a bucket
  output:
    url: image_3.png
---



# SDXL LoRA DreamBooth - DKTech/dreambooth-test-1

<Gallery />

## Model description

These are DKTech/dreambooth-test-1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was not enabled.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of sks dog` to trigger the image generation.
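The trigger phrase comes from the `instance_prompt` field in this card's YAML frontmatter (the `---`-delimited block at the top of the README). As a minimal, dependency-free sketch of where that value lives, it can be read with plain string handling; the helper name and the truncated sample card below are illustrative, not part of any library:

```python
def read_frontmatter_field(card_text: str, field: str) -> str:
    """Return the value of a top-level `field: value` line in the YAML frontmatter."""
    in_frontmatter = False
    for line in card_text.splitlines():
        if line.strip() == "---":
            if in_frontmatter:
                break  # closing delimiter: stop scanning
            in_frontmatter = True
            continue
        if in_frontmatter and line.startswith(field + ":"):
            return line.split(":", 1)[1].strip()
    raise KeyError(field)

# Abbreviated copy of this card's frontmatter, for illustration
card = """---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
---
# SDXL LoRA DreamBooth
"""

print(read_frontmatter_field(card, "instance_prompt"))  # a photo of sks dog
```

In practice, `huggingface_hub.repocard.RepoCard` (used in the inference example below) parses the full frontmatter for you; this sketch only shows the data layout.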

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/DKTech/dreambooth-test-1/tree/main) them in the Files & versions tab.
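Individual files on the Hub can also be fetched by direct URL of the form `https://huggingface.co/{repo_id}/resolve/{revision}/{filename}`. A small sketch, assuming the default weight filename `pytorch_lora_weights.safetensors` written by the diffusers training script (verify the actual filename in the Files & versions tab):

```python
# Build the direct download URL for the LoRA weights on the Hugging Face Hub.
repo_id = "DKTech/dreambooth-test-1"
weight_name = "pytorch_lora_weights.safetensors"  # assumed default; check the repo's file list

url = f"https://huggingface.co/{repo_id}/resolve/main/{weight_name}"
print(url)  # https://huggingface.co/DKTech/dreambooth-test-1/resolve/main/pytorch_lora_weights.safetensors
```

For programmatic downloads with caching and authentication, `huggingface_hub.hf_hub_download(repo_id, filename)` is the usual choice.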



## Intended uses & limitations

#### How to use

Set up the environment in a terminal.
```bash
# Create and activate conda environment
conda create --name dreambooth python=3.10
conda activate dreambooth

# Install ipykernel (needed only if you want to run inference inside a Jupyter notebook)
conda install -c anaconda ipykernel
python -m ipykernel install --user --name=dreambooth

# Clone and install diffusers package
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .

# Browse to examples/dreambooth directory in the diffusers installation directory
cd examples/dreambooth

# Install dreambooth sdxl training dependencies
pip install -r requirements_sdxl.txt
```

Run inference in Python.
```python
from huggingface_hub.repocard import RepoCard
from diffusers import DiffusionPipeline
import torch

# Read the base model id from this card's frontmatter metadata
lora_model_id = "DKTech/dreambooth-test-1"
card = RepoCard.load(lora_model_id)
base_model_id = card.data.to_dict()["base_model"]

# Load the base pipeline in half precision and apply the LoRA weights
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.load_lora_weights(lora_model_id)

image = pipe("A picture of an elephant that looks like a dog.", num_inference_steps=25).images[0]
image.save("my_image.png")
```
#### Fine-tuning the original model

This model was created by fine-tuning the base Stable Diffusion XL model following the instructions in the [diffusers DreamBooth SDXL README](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sdxl.md).

Base models other than Stable Diffusion can also be fine-tuned with DreamBooth. For example, a discussion of fine-tuning the Playground v2.5 model can be found in [this pull request](https://github.com/huggingface/diffusers/pull/7126).


#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]