linoyts (HF Staff) committed 824e60e (verified · 1 parent: 626dc82)

End of training
README.md CHANGED
@@ -1,4 +1,5 @@
  ---
+
  tags:
  - stable-diffusion-xl
  - stable-diffusion-xl-diffusers
@@ -7,30 +8,31 @@ tags:
  - diffusers
  - lora
  - template:sd-lora
+
  widget:
- - text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
+ - text: 'a TOK emoji dressed as an easter bunny'
    output:
      url:
        "image_0.png"
- - text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
+ - text: 'a TOK emoji dressed as an easter bunny'
    output:
      url:
        "image_1.png"
- - text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
+ - text: 'a TOK emoji dressed as an easter bunny'
    output:
      url:
        "image_2.png"
- - text: 'watercolor painting of a <s0><s1> woman with pink hair in New York'
+ - text: 'a TOK emoji dressed as an easter bunny'
    output:
      url:
        "image_3.png"
  base_model: stabilityai/stable-diffusion-xl-base-1.0
- instance_prompt: a <s0><s1> woman
+ instance_prompt: a TOK emoji
  license: openrail++
  ---
@@ -49,39 +51,25 @@ license: openrail++
  - **LoRA**: download **[`huggy_lora_v4.safetensors` here 💾](/linoyts/huggy_lora_v4/blob/main/huggy_lora_v4.safetensors)**.
      - Place it on your `models/Lora` folder.
      - On AUTOMATIC1111, load the LoRA by adding `<lora:huggy_lora_v4:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- - *Embeddings*: download **[`huggy_lora_v4_emb.safetensors` here 💾](/linoyts/huggy_lora_v4/blob/main/huggy_lora_v4_emb.safetensors)**.
-     - Place it on your `embeddings` folder
-     - Use it by adding `huggy_lora_v4_emb` to your prompt. For example, `a huggy_lora_v4_emb woman`
-       (you need both the LoRA and the embeddings as they were trained together for this LoRA)
+

  ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

  ```py
  from diffusers import AutoPipelineForText2Image
  import torch
- from huggingface_hub import hf_hub_download
- from safetensors.torch import load_file
+
  pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
  pipeline.load_lora_weights('linoyts/huggy_lora_v4', weight_name='pytorch_lora_weights.safetensors')
- embedding_path = hf_hub_download(repo_id='linoyts/huggy_lora_v4', filename='huggy_lora_v4_emb.safetensors', repo_type="model")
- state_dict = load_file(embedding_path)
- pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
- pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
- image = pipeline('watercolor painting of a <s0><s1> woman with pink hair in New York').images[0]
+
+ image = pipeline('a TOK emoji dressed as an easter bunny').images[0]
  ```

  For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

  ## Trigger words

- To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:
- to trigger concept `TOK` → use `<s0><s1>` in your prompt
+ You should use a TOK emoji to trigger the image generation.

  ## Details
  All [Files & versions](/linoyts/huggy_lora_v4/tree/main).
@@ -90,7 +78,7 @@ The weights were trained using [🧨 diffusers Advanced Dreambooth Training Scri

  LoRA for the text encoder was enabled. False.

- Pivotal tuning was enabled: True.
+ Pivotal tuning was enabled: False.

  Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
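The updated card's diffusers snippet needs a GPU to run end-to-end, but the trigger-word convention it documents can be illustrated standalone. A minimal sketch, assuming the `instance_prompt` from the new card metadata; `triggered_prompt` is a hypothetical helper, not part of the repo:

```python
# Hypothetical helper -- not part of the linoyts/huggy_lora_v4 repo.
TRIGGER = "a TOK emoji"  # instance_prompt from the updated model card

def triggered_prompt(description: str) -> str:
    """Splice the trigger phrase into a scene description, mirroring
    the card's example prompt 'a TOK emoji dressed as an easter bunny'."""
    return f"{TRIGGER} {description}"

print(triggered_prompt("dressed as an easter bunny"))
# → a TOK emoji dressed as an easter bunny
```

The resulting string is what you would pass to `pipeline(...)` in the snippet above.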
huggy_lora_v4.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3b35b9bbf7781e4fef726facb8cced42b47e9b3f1a47f5026c01924c5dffc41f
- size 186046568
+ oid sha256:0a35100f826cf351f6d9c3ec1a059b533780f8660aa0908ea114161706336eb9
+ size 46698072
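What changes above is a Git LFS pointer file, not the weights themselves: three `key value` lines recording the spec version, content hash, and byte size. A minimal sketch of reading such a pointer (`parse_lfs_pointer` is illustrative, not a real git-lfs API):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields
    (version, oid, size), one 'key value' pair per line."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the updated huggy_lora_v4.safetensors above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:0a35100f826cf351f6d9c3ec1a059b533780f8660aa0908ea114161706336eb9\n"
    "size 46698072\n"
)
info = parse_lfs_pointer(pointer)
print(int(info["size"]))  # → 46698072
```

The new pointer records a much smaller file than the old one (46,698,072 vs 186,046,568 bytes), matching the retrained LoRA committed here.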
image_0.png CHANGED

Git LFS Details (old)

  • SHA256: f49ca5b543709e109564cadc3f0f199aea7d4594a9eafff9a031332b9af6a9fe
  • Pointer size: 132 Bytes
  • Size of remote file: 1.56 MB

Git LFS Details (new)

  • SHA256: ea0cd333fc50f6d8d033fdb638dc3752454f2526191432d3c15e5c1db9e884e5
  • Pointer size: 132 Bytes
  • Size of remote file: 1.62 MB
image_1.png CHANGED

Git LFS Details (old)

  • SHA256: 6175cab0edbeb9eb504781f451f90209d64fd87e3a0edaf8f038e99cea73d6d2
  • Pointer size: 132 Bytes
  • Size of remote file: 1.6 MB

Git LFS Details (new)

  • SHA256: 750e4be041ae89ee4c56696aa7ae50ff0c7b94949d0f3d2abd3db3dad7e605c5
  • Pointer size: 132 Bytes
  • Size of remote file: 1.58 MB
image_2.png CHANGED

Git LFS Details (old)

  • SHA256: b91a819721996fb4ca82464d99c3255bd78efe959a0a70a9e91d87da0b97d56d
  • Pointer size: 132 Bytes
  • Size of remote file: 1.62 MB

Git LFS Details (new)

  • SHA256: e5e83075009779be37a293b7a7b35f8f3393ee845619e20bc66ace57bca0f8c7
  • Pointer size: 132 Bytes
  • Size of remote file: 1.72 MB
image_3.png CHANGED

Git LFS Details (old)

  • SHA256: 555d4ef2c317c08236adb645240c7eff8844aa71e8e4ec67f69175a508aba80f
  • Pointer size: 132 Bytes
  • Size of remote file: 1.53 MB

Git LFS Details (new)

  • SHA256: 4c1f077712a54cb9e48fa7a8ef0d4cf3d0c765e0a0af422c9ca23aa24f7c0825
  • Pointer size: 132 Bytes
  • Size of remote file: 1.66 MB
pytorch_lora_weights.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e13e9f895e2e8eb23aa54e65521155915e46779afc6c3b08f302abc7123799b8
- size 185963768
+ oid sha256:825e9ded2d46541eb39522fed28b21a123d28bbc24098a1c4271f9231ceb3f89
+ size 46615272