Text-to-Image · Diffusers · Safetensors · LibreFluxIPAdapterPipeline
neuralvfx committed · Commit 8c1e4dc · verified · 1 Parent(s): 6ccfd12

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -16,7 +16,7 @@ base_model:
 # LibreFLUX-IP-Adapter
 ![Example: Control image vs result](examples/matrix_edge.png)
 
-This model/pipeline is the product of my [LibreFlux IP-Adapter training repo](https://github.com/NeuralVFX/LibreFLUX-IP-Adapter), which uses [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX) as the underlying Transformer model. The Adapter design is roughly based on InstantX's IP Adapter.
+This model/pipeline is the product of my [LibreFlux IP-Adapter training repo](https://github.com/NeuralVFX/LibreFLUX-IP-Adapter), which uses [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX) as the underlying Transformer model. The IP Adapter and Attention Wrapper design is roughly based on the [InstantX IP Adapter](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/).
 
 I used transfer learning to fine-tune the InstantX weights until they worked with LibreFlux and attention masking. For the dataset, I trained on laion2b-squareish-1024px for 20,000 iterations.
 
@@ -25,8 +25,8 @@ I used transfer learning to fine-tune the InstantX weights until they worked wit
 - Trained in the same non-distilled fashion
 - Uses Attention Masking
 - Uses CFG during Inference
-
 # Fun Facts
+- Fine-tuned from these weights: [https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/)
 - Trained on the [laion2b-squareish-1024px Dataset](https://huggingface.co/datasets/opendiffusionai/laion2b-squareish-1024px/)
 - Trained using this repo: [https://github.com/NeuralVFX/LibreFLUX-IP-Adapter](https://github.com/NeuralVFX/LibreFLUX-IP-Adapter)
 - Transformer model used: [https://huggingface.co/jimmycarter/LibreFlux](https://huggingface.co/jimmycarter/LibreFlux)
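The "Uses CFG during Inference" point refers to classifier-free guidance, which non-distilled models like LibreFLUX need at sampling time. A minimal NumPy sketch of the standard CFG formula (generic illustration only, not this pipeline's actual code; the `guidance_scale` name and toy values are assumptions):

```python
import numpy as np

def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    """Standard classifier-free guidance: start from the unconditional
    prediction and push it toward (or past) the conditional one."""
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# Toy noise predictions for a two-element latent
uncond = np.array([0.0, 1.0])
cond = np.array([1.0, 1.0])

# guidance_scale = 1.0 reproduces the conditional prediction exactly
print(cfg_combine(uncond, cond, 1.0))  # -> [1. 1.]
# guidance_scale > 1.0 extrapolates beyond it, strengthening conditioning
print(cfg_combine(uncond, cond, 4.0))  # -> [4. 1.]
```

This doubles the transformer passes per step (one conditional, one unconditional), which is why distilled FLUX variants skip it while this pipeline keeps it.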