End of training

Files changed:
- .gitattributes +1 -0
- README.md +7 -7
- huggy_lora_v4.safetensors +1 -1
- huggy_lora_v4_emb.safetensors +1 -1
- image_0.png +0 -0
- image_1.png +2 -2
- image_2.png +2 -2
- image_3.png +2 -2
- pytorch_lora_weights.safetensors +1 -1
.gitattributes
CHANGED
@@ -36,3 +36,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 image_1.png filter=lfs diff=lfs merge=lfs -text
 image_2.png filter=lfs diff=lfs merge=lfs -text
 image_3.png filter=lfs diff=lfs merge=lfs -text
+image_0.png filter=lfs diff=lfs merge=lfs -text
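Each `.gitattributes` line pairs a path pattern with attributes; `filter=lfs` routes matching files through Git LFS instead of storing them directly in Git. As a rough illustration only (real Git matching is more involved: the last matching rule wins and patterns are path-aware), a glob-based check could look like this — the helper name is mine, not part of any tool:

```python
import fnmatch

def lfs_tracked(path: str, gitattributes_lines: list[str]) -> bool:
    """Approximate check: does any rule mark this path as filter=lfs?"""
    tracked = False
    for line in gitattributes_lines:
        parts = line.split()
        if not parts:
            continue
        pattern, attrs = parts[0], parts[1:]
        # Simplification: fnmatch globbing; Git's pattern rules are richer.
        if fnmatch.fnmatch(path, pattern) and "filter=lfs" in attrs:
            tracked = True
    return tracked

rules = [
    "image_1.png filter=lfs diff=lfs merge=lfs -text",
    "image_0.png filter=lfs diff=lfs merge=lfs -text",  # the line this commit adds
]
print(lfs_tracked("image_0.png", rules))  # True
```

With the new rule in place, `image_0.png` is stored as an LFS pointer like the `.safetensors` files below.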
README.md
CHANGED
@@ -9,28 +9,28 @@ tags:
 - template:sd-lora
 widget:
 
-- text: 'a <s0><s1>
+- text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
   output:
     url:
       "image_0.png"
 
-- text: 'a <s0><s1>
+- text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
   output:
     url:
       "image_1.png"
 
-- text: 'a <s0><s1>
+- text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
   output:
     url:
       "image_2.png"
 
-- text: 'a <s0><s1>
+- text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
   output:
     url:
       "image_3.png"
 
 base_model: stabilityai/stable-diffusion-xl-base-1.0
-instance_prompt: a <s0><s1>
+instance_prompt: a <s0><s1> woman
 license: openrail++
 ---
 
@@ -51,7 +51,7 @@ license: openrail++
 - On AUTOMATIC1111, load the LoRA by adding `<lora:huggy_lora_v4:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
 - *Embeddings*: download **[`huggy_lora_v4_emb.safetensors` here 💾](/linoyts/huggy_lora_v4/blob/main/huggy_lora_v4_emb.safetensors)**.
 - Place it on it on your `embeddings` folder
-- Use it by adding `huggy_lora_v4_emb` to your prompt. For example, `a huggy_lora_v4_emb
+- Use it by adding `huggy_lora_v4_emb` to your prompt. For example, `a huggy_lora_v4_emb woman`
 (you need both the LoRA and the embeddings as they were trained together for this LoRA)
@@ -70,7 +70,7 @@ state_dict = load_file(embedding_path)
 pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
 pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
 
-image = pipeline('a <s0><s1>
+image = pipeline('watercolor painting of a <s0><s1> woman with pink hair in New York').images[0]
 ```
 
 For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
huggy_lora_v4.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3b35b9bbf7781e4fef726facb8cced42b47e9b3f1a47f5026c01924c5dffc41f
 size 186046568
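The `oid sha256:<hash>` line in an LFS pointer is the SHA-256 digest of the object's raw bytes; that digest is what ties the small pointer file committed to Git to the actual 186 MB weights file uploaded to LFS storage. A minimal sketch of computing that identifier (the helper name is mine):

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    """Compute the Git LFS object id for a blob: SHA-256 of its raw bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Stand-in bytes for illustration; the real input would be the full
# .safetensors file contents read in binary mode.
blob = b"example weights"
print(lfs_oid(blob))
```

When the LoRA weights change, the digest changes, which is why each `.safetensors` diff in this commit is a one-line `oid` replacement.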
huggy_lora_v4_emb.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:e2240d34e0c14473dd0e8957a4cee3a60cc403377a253eb9654a4039a9f26b9e
 size 16536
image_0.png
CHANGED
(binary image, stored via Git LFS)

image_1.png
CHANGED
(binary image, stored via Git LFS)

image_2.png
CHANGED
(binary image, stored via Git LFS)

image_3.png
CHANGED
(binary image, stored via Git LFS)
pytorch_lora_weights.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:e13e9f895e2e8eb23aa54e65521155915e46779afc6c3b08f302abc7123799b8
 size 185963768
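The `.safetensors` entries in this commit are Git LFS pointer files: three `key value` lines (`version`, `oid`, `size`) standing in for the real binary. A small sketch parsing one into a dict, using the pointer contents shown above (the function name is mine, not part of the LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Split each 'key value' line of a Git LFS pointer file."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer for pytorch_lora_weights.safetensors after this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e13e9f895e2e8eb23aa54e65521155915e46779afc6c3b08f302abc7123799b8
size 185963768
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:e13e9f89...
print(int(info["size"]))  # 185963768
```

Comparing a local file's size and SHA-256 digest against these two fields is a quick way to verify a downloaded checkpoint matches what the commit recorded.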