# LibreFLUX-IP-Adapter



This model/pipeline is the product of my [LibreFlux IP-Adapter training repo](https://github.com/NeuralVFX/LibreFLUX-IP-Adapter), which uses [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX) as the underlying Transformer model. The IP Adapter and Attention Wrapper design is roughly based on the [InstantX IP Adapter](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/).

I used transfer learning to fine-tune the InstantX weights until they worked with LibreFLUX and attention masking. For the dataset, I trained on laion2b-squareish-1024px for 20,000 iterations.
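Attention masking here means keeping padded text tokens from influencing the attention result. As a rough illustration only (not this repo's actual implementation; the function and shapes are hypothetical), the core idea can be sketched in NumPy:

```python
import numpy as np

def masked_attention(q, k, v, key_mask):
    """Scaled dot-product attention where padded key tokens are masked out.

    key_mask: boolean array over keys, True for real tokens, False for padding.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Set scores for padded keys to a large negative value so softmax
    # assigns them (effectively) zero weight.
    scores = np.where(key_mask[None, :], scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))
mask = np.array([True, True, False])  # last key token is padding
out = masked_attention(q, k, v, mask)  # padded token contributes nothing
```

With the mask applied, changing the value vector of a padded token has no effect on the output, which is the whole point of masking padded conditioning tokens.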

- Trained in the same non-distilled fashion as LibreFLUX
- Uses attention masking
- Uses CFG during inference
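Using CFG during inference means each denoising step runs both a conditional and an unconditional prediction and blends them. A minimal sketch of the standard classifier-free guidance formula (illustrative only, not this pipeline's code):

```python
import numpy as np

def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    # Classifier-free guidance: move the unconditional prediction
    # toward (and, for scale > 1, past) the conditional prediction.
    return uncond_pred + guidance_scale * (cond_pred - uncond_pred)

uncond = np.array([0.0, 2.0])
cond = np.array([1.0, 2.0])
guided = cfg_combine(uncond, cond, 3.0)  # scale > 1 amplifies the conditioning
```

A scale of 0 returns the unconditional prediction, a scale of 1 returns the conditional one, and larger scales extrapolate beyond it.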

# Fun Facts

- Fine-tuned from these weights: [https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/)
- Trained on the [laion2b-squareish-1024px Dataset](https://huggingface.co/datasets/opendiffusionai/laion2b-squareish-1024px/)
- Trained using this repo: [https://github.com/NeuralVFX/LibreFLUX-IP-Adapter](https://github.com/NeuralVFX/LibreFLUX-IP-Adapter)
- Transformer model used: [https://huggingface.co/jimmycarter/LibreFlux](https://huggingface.co/jimmycarter/LibreFlux)