mjbuehler committed
Commit bc610c2 · verified · 1 Parent(s): 98c2324

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -31,6 +31,8 @@ pip install -U diffusers
 
 These are LoRA adaptation weights for the FLUX.1 [dev] model (```black-forest-labs/FLUX.1-dev```). The base model is gated, and you must first get access to it before loading this LoRA adapter.
 
+This LoRA adapter has rank=64 and alpha=64, trained for 4,000 steps. Earlier checkpoints are available in this repository as well (you can load these via the ```adapter``` parameter, see example below).
+
 ## Trigger keywords
 
 The following images were used during fine-tuning using the keyword \<leaf microstructure\>:
@@ -109,10 +111,8 @@ adapter='leaf-flux.safetensors' #Step 4000, final step
 #adapter='leaf-flux-step-3000.safetensors' #Step 3000
 #adapter='leaf-flux-step-3500.safetensors' #Step 3500
 
-pipeline.load_lora_weights(repo_id, weight_name=adapter)
+pipeline.load_lora_weights(repo_id, weight_name=adapter) #You need to use the weight_name parameter since the repo includes multiple checkpoints
 
-pipeline=pipeline.to('cuda')
-)
 pipeline=pipeline.to('cuda')
 ```
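The patched snippet picks a checkpoint file and passes it to `load_lora_weights` via `weight_name`. A minimal sketch of that selection logic, assuming the checkpoint filenames listed in the README; the `adapter_for_step` helper is hypothetical (not part of the repo), and the pipeline calls are left commented out since they require a GPU and access to the gated base model:

```python
def adapter_for_step(step: int) -> str:
    """Map a training step to its checkpoint filename.

    Per the README, the final step (4000) uses the unsuffixed file;
    earlier checkpoints carry a '-step-N' suffix.
    """
    if step == 4000:
        return "leaf-flux.safetensors"
    return f"leaf-flux-step-{step}.safetensors"


# Pick, e.g., the step-3500 checkpoint:
adapter = adapter_for_step(3500)  # 'leaf-flux-step-3500.safetensors'

# Actual loading, mirroring the README's snippet (not run here;
# repo_id is defined earlier in the README):
# import torch
# from diffusers import FluxPipeline
# pipeline = FluxPipeline.from_pretrained(
#     "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
# )
# # weight_name is required because the repo holds several checkpoints
# pipeline.load_lora_weights(repo_id, weight_name=adapter)
# pipeline = pipeline.to("cuda")
```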