Mujeeb603 committed on
Commit 85ad7dd · verified · 1 Parent(s): a798e35

Model card auto-generated by SimpleTuner

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: other
-base_model: "black-forest-labs/FLUX.1-schnell"
+base_model: "black-forest-labs/FLUX.1-dev"
 tags:
 - flux
 - flux-diffusers
@@ -25,7 +25,7 @@ widget:
 
 # lora-training
 
-This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell).
+This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
 
 
 The main validation prompt used during training was:
@@ -58,7 +58,7 @@ You may reuse the base model text encoder for inference.
 ## Training settings
 
 - Training epochs: 0
-- Training steps: 300
+- Training steps: 100
 - Learning rate: 0.0008
 - Effective batch size: 1
 - Micro-batch size: 1
@@ -70,7 +70,7 @@ You may reuse the base model text encoder for inference.
 - Precision: bf16
 - Quantised: Yes: int8-quanto
 - Xformers: Not used
-- LoRA Rank: 16
+- LoRA Rank: 8
 - LoRA Alpha: None
 - LoRA Dropout: 0.1
 - LoRA initialisation style: default
@@ -80,7 +80,7 @@ You may reuse the base model text encoder for inference.
 
 ### right-triangles
 - Repeats: 0
-- Total number of images: 348
+- Total number of images: 380
 - Total number of aspect buckets: 1
 - Resolution: 512 px
 - Cropped: True
@@ -95,7 +95,7 @@ You may reuse the base model text encoder for inference.
 import torch
 from diffusers import DiffusionPipeline
 
-model_id = 'black-forest-labs/FLUX.1-schnell'
+model_id = 'black-forest-labs/FLUX.1-dev'
 adapter_id = 'Mujeeb603/lora-training'
 pipeline = DiffusionPipeline.from_pretrained(model_id)
 pipeline.load_lora_weights(adapter_id)
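
The training-settings hunks change the adapter's shape (steps 300 → 100, rank 16 → 8). Read purely as a sketch, and assuming SimpleTuner's "LoRA Alpha: None" falls back to the rank, the post-commit settings map roughly onto a PEFT `LoraConfig` like the one below; the `target_modules` list is illustrative only, not taken from the training config in this diff.

```python
from peft import LoraConfig

# Rough PEFT equivalent of the card's post-commit LoRA settings:
# rank 8, dropout 0.1, "default" initialisation.
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,            # assumption: alpha falls back to the rank when listed as "None"
    lora_dropout=0.1,
    init_lora_weights=True,  # PEFT's default initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed attention projections, for illustration
)
```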
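
The last hunk only shows the top of the model card's inference snippet; the prompt and the sampling call sit outside the diff context. A minimal end-to-end sketch of how the updated snippet might be used, assuming a CUDA device, bf16 weights, and a placeholder prompt (the card's real validation prompt is not visible in this diff):

```python
import torch
from diffusers import DiffusionPipeline

# Base model updated by this commit; the adapter repo is unchanged.
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'Mujeeb603/lora-training'

# bf16 matches the card's "Precision: bf16"; the CUDA device is an assumption.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)
pipeline.to('cuda')

# Placeholder prompt -- the card's actual validation prompt is not shown in this diff.
prompt = 'an example validation prompt'
image = pipeline(
    prompt=prompt,
    num_inference_steps=28,  # assumed step count for FLUX.1-dev; not from the diff
    guidance_scale=3.5,      # assumed guidance value; not from the diff
).images[0]
image.save('output.png')
```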