---
library_name: diffusers
pipeline_tag: text-to-image
license: other
license_name: flux-1-dev-non-commercial-license
base_model:
- ashen0209/Flux-Dev2Pro
---

Model converted to fp8_e4m3fn. Everything else is unchanged from the upstream model: https://huggingface.co/ashen0209/Flux-Dev2Pro

[Colab Flux-dev2pro fp8 script](https://colab.research.google.com/drive/1k3aIu7-iMNR2UrbWoxOswUsMepSk0Rxs?usp=sharing)

## Flux-Dev2Pro

Flux-Dev2Pro finetunes the transformer of Flux-Dev to make LoRA training better.

As discussed in this blog post (https://medium.com/@zhiwangshi28/why-flux-lora-so-hard-to-train-and-how-to-overcome-it-a0c70bc59eaf), LoRAs trained on Flux-Dev often yield poor results, because without guidance distillation the LoRA training diverges from the original training process. Flux-Dev2Pro recovers Flux-Pro from Flux-Dev by finetuning the model for many steps: two epochs over 3M high-quality images.

A LoRA trained on Flux-Dev2Pro yields much better results when applied to Flux-Dev, just like a LoRA trained on SDXL and then applied to SDXL-Turbo/Lightning.


To use this model, run:
```python
import torch
from diffusers import FluxTransformer2DModel

# Load the fp8 transformer weights; torch_dtype keeps them in fp8 storage.
transformer = FluxTransformer2DModel.from_pretrained(
    "rockerBOO/Flux-Dev2Pro-fp8_e4m3fn",
    torch_dtype=torch.float8_e4m3fn,
)
```
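To run inference, the loaded transformer can be swapped into a standard `FluxPipeline`. This is a sketch, not a verified recipe: it assumes access to the gated `black-forest-labs/FLUX.1-dev` repository, downloads several GB of weights, and upcasts the fp8 transformer to bf16 for compute:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load the fp8 transformer from this repo.
transformer = FluxTransformer2DModel.from_pretrained(
    "rockerBOO/Flux-Dev2Pro-fp8_e4m3fn",
    torch_dtype=torch.float8_e4m3fn,
)

# Plug it into the Flux-Dev pipeline (gated repo; requires accepting the license).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer.to(torch.bfloat16),  # upcast for compute
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe("a photo of a corgi wearing sunglasses").images[0]
image.save("corgi.png")
```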

“The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.”