Update README.md
README.md CHANGED
@@ -18,7 +18,7 @@ base_model:
This model/pipeline is the product of my [LibreFlux IP-Adapter training repo](https://github.com/NeuralVFX/LibreFLUX-IP-Adapter), which uses [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX) as the underlying Transformer model. The IP Adapter and Attention Wrapper design is roughly based on the [InstantX IP Adapter](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter/).

-I used transfer learning, to fintune the InstantX weights until they worked with LibreFlux and attention masking. For the dataset, I trained on laion2b-squareish-1024px for 20,000 iterations.
+I used transfer learning to fine-tune the InstantX weights until they worked with LibreFLUX and attention masking. For the dataset, I trained on [laion2b-squareish-1024px](https://huggingface.co/datasets/opendiffusionai/laion2b-squareish-1024px/) for roughly 20,000 iterations.

# How does this relate to LibreFLUX?
- Base model is [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX)
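Since the README names [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX) as the base model, here is a minimal sketch of how one might load it as a custom diffusers pipeline. The `custom_pipeline` argument, the generation parameters, and the output handling are assumptions based on common diffusers conventions, not an API documented in this commit; the IP-Adapter wiring from the training repo is not shown.

```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical sketch: LibreFLUX is distributed with its own pipeline code,
# so loading is assumed to go through DiffusionPipeline with remote code enabled.
pipe = DiffusionPipeline.from_pretrained(
    "jimmycarter/LibreFLUX",
    custom_pipeline="jimmycarter/LibreFLUX",  # assumed: pipeline code lives in the repo
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
pipe.to("cuda")

# Assumed call signature and defaults; the IP-Adapter build may expose
# extra arguments (e.g. a reference image) that are not shown here.
result = pipe(
    prompt="a photograph of a misty forest at dawn",
    num_inference_steps=28,
)
result.images[0].save("libreflux_sample.png")
```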