Commit b2a5317 · bubbliiiing committed · 1 parent: 5f2e9bf

Update Flux.2 Control 2602
Changed files:
- FLUX.2-dev-Fun-Controlnet-Union-2602.safetensors +3 -0
- README.md +45 -18
- asset/gray.jpg +3 -0
- asset/hed.jpg +3 -0
- results/canny.png +2 -2
- results/depth.png +2 -2
- results/gray.png +3 -0
- results/hed.png +3 -0
- results/pose.png +2 -2
- results/pose2.png +2 -2
- results/pose_inpaint.png +3 -0
- results/pose_ref.png +2 -2
FLUX.2-dev-Fun-Controlnet-Union-2602.safetensors ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:516532a885d12ae84bb3c6b24ef4816ac05ffa1c9c7b93476f74652eb0a7a794
+size 8232506680
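The checkpoint is tracked with Git LFS, so the committed text is only the pointer above: the spec version, the sha256 oid, and the byte size. As a sanity check after downloading the real ~8.2 GB file, here is a minimal sketch (not part of the repo) that verifies a local copy against the pointer:

```python
import hashlib
from pathlib import Path

# Values copied from the LFS pointer committed above.
EXPECTED_OID = "516532a885d12ae84bb3c6b24ef4816ac05ffa1c9c7b93476f74652eb0a7a794"
EXPECTED_SIZE = 8232506680

def matches_lfs_pointer(path: str) -> bool:
    """Return True if the file's byte size and sha256 match the pointer."""
    p = Path(path)
    if p.stat().st_size != EXPECTED_SIZE:
        return False
    digest = hashlib.sha256()
    with p.open("rb") as f:
        # Stream in 1 MiB chunks rather than loading ~8 GB into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_OID

print(matches_lfs_pointer("FLUX.2-dev-Fun-Controlnet-Union-2602.safetensors"))
```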
README.md CHANGED
@@ -4,37 +4,30 @@ license: other
 license_name: flux-dev-non-commercial-license
 license_link: https://huggingface.co/black-forest-labs/FLUX.2-dev/blob/main/LICENSE.txt
 ---
+
 # Flux.2-dev-Fun-Controlnet-Union
 
 [VideoX-Fun](https://github.com/aigc-apps/VideoX-Fun)
 
+## Model Card
+
+| Name | Description |
+|--|--|
+| FLUX.2-dev-Fun-Controlnet-Union-2602.safetensors | Compared with the previous version, this release adds Scribble and Gray controls. As with Z-Image-Turbo, the Flux2 model loses its CFG-distillation capability after Control training, which is why the previous version performed poorly. Building on that version, we trained on a better dataset and performed CFG distillation after training, which yields clearly better results. |
+| FLUX.2-dev-Fun-Controlnet-Union.safetensors | ControlNet weights for Flux2. The model supports multiple control conditions such as Canny, Depth, Pose, MLSD, Scribble, HED, and Gray. This ControlNet is added on 15 layer blocks and 2 refiner layer blocks. |
+
 # Model features
 - This ControlNet is added on 4 double blocks.
-- The model was trained from scratch for 10,000 steps on a dataset of 1 million high-quality images covering both general and human-centric content. Training was performed at 1328 resolution using BFloat16 precision, with a batch size of 64, a learning rate of 2e-5, and a text dropout ratio of 0.10.
 - It supports multiple control conditions, including Canny, HED, depth maps, pose estimation, and MLSD, and it can be used like a standard ControlNet.
 - Inpainting mode is also supported.
 - You can adjust controlnet_conditioning_scale for stronger control and better detail preservation. For better stability, we highly recommend using a detailed prompt. The optimal range for controlnet_conditioning_scale is 0.65 to 0.80.
 - Although Flux.2-dev supports certain image-editing capabilities, its generation speed slows down when handling multiple images, and it sometimes produces similarity issues or fails to follow the control images. Compared with edit-based methods, ControlNet adheres more reliably to control instructions and makes it easier to apply multiple types of control.
 
-# TODO
-- [ ] Train more data and steps.
-
 # Results
 
 <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
   <tr>
-    <td>Pose</td>
-    <td>Output</td>
-  </tr>
-  <tr>
-    <td><img src="asset/ref.jpg" width="100%" /><img src="asset/mask.jpg" width="100%" /></td>
-    <td><img src="results/inpaint.png" width="100%" /></td>
-  </tr>
-</table>
-
-<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
-  <tr>
-    <td>Pose</td>
+    <td>Pose + Ref</td>
     <td>Output</td>
   </tr>
   <tr>
@@ -78,7 +71,18 @@ license_link: https://huggingface.co/black-forest-labs/FLUX.2-dev/blob/main/LICENSE.txt
 
 <table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
   <tr>
-    <td>Depth</td>
+    <td>HED</td>
+    <td>Output</td>
+  </tr>
+  <tr>
+    <td><img src="asset/hed.jpg" width="100%" /></td>
+    <td><img src="results/hed.png" width="100%" /></td>
+  </tr>
+</table>
+
+<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
+  <tr>
+    <td>Depth</td>
     <td>Output</td>
   </tr>
   <tr>
@@ -87,6 +91,28 @@ license_link: https://huggingface.co/black-forest-labs/FLUX.2-dev/blob/main/LICENSE.txt
   </tr>
 </table>
 
+<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
+  <tr>
+    <td>Gray</td>
+    <td>Output</td>
+  </tr>
+  <tr>
+    <td><img src="asset/gray.jpg" width="100%" /></td>
+    <td><img src="results/gray.png" width="100%" /></td>
+  </tr>
+</table>
+
+<table border="0" style="width: 100%; text-align: left; margin-top: 20px;">
+  <tr>
+    <td>Pose + Inpaint</td>
+    <td>Output</td>
+  </tr>
+  <tr>
+    <td><img src="asset/ref.jpg" width="100%" /><img src="asset/mask.jpg" width="100%" /><img src="asset/pose.jpg" width="100%" /></td>
+    <td><img src="results/pose_inpaint.png" width="100%" /></td>
+  </tr>
+</table>
+
 # Inference
 Go to the VideoX-Fun repository for more details.
 
@@ -110,7 +136,8 @@ Then download weights to models/Diffusion_Transformer and models/Personalized_Model
 ├── 📂 Diffusion_Transformer/
 │   └── 📂 FLUX.2-dev/
 ├── 📂 Personalized_Model/
-│   └── 📦 FLUX.2-dev-Fun-Controlnet-Union.safetensors
+│   ├── 📦 FLUX.2-dev-Fun-Controlnet-Union-2602.safetensors
+│   └── 📦 FLUX.2-dev-Fun-Controlnet-Union.safetensors
 ```
 
 Then run the file `examples/flux2_fun/predict_t2i_control.py`.
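The README recommends controlnet_conditioning_scale between 0.65 and 0.80 together with a detailed prompt. The real entry point is `examples/flux2_fun/predict_t2i_control.py` in VideoX-Fun; the call below is only a diffusers-style sketch of where that knob sits, and the pipeline object and argument names are assumptions, not VideoX-Fun's actual API:

```python
import torch

def generate(pipe, control_image):
    """Illustrative ControlNet call; `pipe` stands in for the VideoX-Fun
    Flux.2 control pipeline, whose real interface lives in
    examples/flux2_fun/predict_t2i_control.py."""
    return pipe(
        # A detailed prompt is recommended for stability.
        prompt=(
            "a young woman in a red coat standing on a rainy street, "
            "photorealistic, soft evening light, detailed fabric texture"
        ),
        control_image=control_image,        # Canny / HED / depth / pose / MLSD map
        controlnet_conditioning_scale=0.7,  # README's optimal range: 0.65 to 0.80
        num_inference_steps=28,
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]
```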
asset/gray.jpg ADDED (binary, Git LFS)
asset/hed.jpg ADDED (binary, Git LFS)
results/canny.png CHANGED (binary, Git LFS)
results/depth.png CHANGED (binary, Git LFS)
results/gray.png ADDED (binary, Git LFS)
results/hed.png ADDED (binary, Git LFS)
results/pose.png CHANGED (binary, Git LFS)
results/pose2.png CHANGED (binary, Git LFS)
results/pose_inpaint.png ADDED (binary, Git LFS)
results/pose_ref.png CHANGED (binary, Git LFS)