Commit bfa2433 · verified · committed by 0x-alx · 1 parent: 0507a68

End of training

Files changed (9)
  1. .gitattributes +3 -0
  2. README.md +62 -0
  3. alex1.jpeg +3 -0
  4. alex2.jpeg +0 -0
  5. alex3.jpeg +0 -0
  6. alex4.jpeg +3 -0
  7. alex5.jpeg +3 -0
  8. alex6.jpeg +0 -0
  9. metadata.jsonl +6 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ alex1.jpeg filter=lfs diff=lfs merge=lfs -text
+ alex4.jpeg filter=lfs diff=lfs merge=lfs -text
+ alex5.jpeg filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ base_model: stabilityai/stable-diffusion-xl-base-1.0
+ library_name: diffusers
+ license: openrail++
+ instance_prompt: a photo of TOK person
+ widget: []
+ tags:
+ - text-to-image
+ - diffusers-training
+ - diffusers
+ - lora
+ - template:sd-lora
+ - stable-diffusion-xl
+ - stable-diffusion-xl-diffusers
+ ---
+
+ <!-- This model card has been generated automatically according to the information the training script had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+
+ # SDXL LoRA DreamBooth - 0x-alx/dataset-alex
+
+ <Gallery />
+
+ ## Model description
+
+ These are 0x-alx/dataset-alex LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
+
+ The weights were trained using [DreamBooth](https://dreambooth.github.io/).
+
+ LoRA for the text encoder was enabled: False.
+
+ Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
+
+ ## Trigger words
+
+ You should use `a photo of TOK person` to trigger the image generation.
+
+ ## Download model
+
+ Weights for this model are available in Safetensors format.
+
+ [Download](0x-alx/dataset-alex/tree/main) them in the Files & versions tab.
+
+
+ ## Intended uses & limitations
+
+ #### How to use
+
+ ```python
+ # TODO: add an example code snippet for running this diffusion pipeline
+ ```
+
+ #### Limitations and bias
+
+ [TODO: provide examples of latent issues and potential remediations]
+
+ ## Training details
+
+ [TODO: describe the data used to train the model]
alex1.jpeg ADDED

Git LFS Details

  • SHA256: e688c93ad2f3aa170694df704ea5a52fc9add7583beabefa45db6d141e9a2353
  • Pointer size: 131 Bytes
  • Size of remote file: 102 kB
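The "Git LFS Details" entries above describe the small pointer files stored in Git in place of the image blobs. A pointer is a three-line text stanza defined by the LFS spec; a sketch of rendering one (the `lfs_pointer` helper is hypothetical, and the byte size below is illustrative since the page only shows the rounded "102 kB"):

```python
def lfs_pointer(sha256: str, size: int) -> str:
    """Render a Git LFS pointer file (spec v1) for a blob of the given size."""
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{sha256}\n"
        f"size {size}\n"
    )


ptr = lfs_pointer(
    "e688c93ad2f3aa170694df704ea5a52fc9add7583beabefa45db6d141e9a2353",
    102000,  # illustrative: exact byte count is not shown on the page
)
```

Any six-digit byte size yields a 131-byte pointer, matching the "Pointer size: 131 Bytes" reported for these files.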
alex2.jpeg ADDED
alex3.jpeg ADDED
alex4.jpeg ADDED

Git LFS Details

  • SHA256: 1bbe397d6f5c401f58add7b1ebe1668960c31d89d9f0f60e401928592fb58bec
  • Pointer size: 131 Bytes
  • Size of remote file: 221 kB
alex5.jpeg ADDED

Git LFS Details

  • SHA256: 766afb549f0c647677af8da7a5dd81a28498f853df1d47300dc3427533d119a7
  • Pointer size: 131 Bytes
  • Size of remote file: 286 kB
alex6.jpeg ADDED
metadata.jsonl ADDED
@@ -0,0 +1,6 @@
+ {"file_name": "alex3.jpeg", "prompt": "a photo of TOK person, a man and woman are smiling while they hold avocados"}
+ {"file_name": "alex6.jpeg", "prompt": "a photo of TOK person, two men sitting at a table with food"}
+ {"file_name": "alex4.jpeg", "prompt": "a photo of TOK person, a man holding a bunch of flowers in front of a building"}
+ {"file_name": "alex2.jpeg", "prompt": "a photo of TOK person, a man and woman are sitting on a beach chair"}
+ {"file_name": "alex1.jpeg", "prompt": "a photo of TOK person, a man sitting at a table with a bottle of wine"}
+ {"file_name": "alex5.jpeg", "prompt": "a photo of TOK person, a man sitting on a bench with a plant in his lap"}