nielsr (HF Staff) committed
Commit 32cc71a · verified · 1 parent: db5348d

Add abstract and descriptive tags to pi-Flow model card


This PR enhances the pi-Flow model card by:

1. **Adding the paper abstract**: The full abstract of "pi-Flow: Policy-Based Few-Step Generation via Imitation Distillation" is included to provide a comprehensive overview of the model's methodology and results directly on the model page.
2. **Including descriptive tags**: New metadata tags (`flux`, `flow-matching`, `distillation`) have been added to improve model discoverability and categorization within the Hugging Face Hub, reflecting key aspects of the model as described in the paper.

All existing links (arXiv, GitHub code, Hugging Face Spaces demos) and sample usage sections are preserved as they are accurate and well-documented.
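For reference, the merged YAML front matter produced by this change can be sketched and sanity-checked with PyYAML (assumed available; the field values are taken from this PR's diff):

```python
import yaml  # PyYAML, assumed installed

# README.md front matter after this PR (values reconstructed from the diff).
FRONT_MATTER = """\
base_model:
- black-forest-labs/FLUX.1-dev
datasets:
- Lakonik/t2i-prompts-3m
library_name: diffusers
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
pipeline_tag: text-to-image
tags:
- flux
- flow-matching
- distillation
"""

meta = yaml.safe_load(FRONT_MATTER)
print(meta["tags"])          # ['flux', 'flow-matching', 'distillation']
print(meta["pipeline_tag"])  # text-to-image
```

Parsing the block confirms the new `tags` list is valid YAML and sits alongside the existing `pipeline_tag`, so the Hub will index the model under all four facets.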

Files changed (1): README.md (+13 −6)
```diff
@@ -1,13 +1,17 @@
 ---
+base_model:
+- black-forest-labs/FLUX.1-dev
+datasets:
+- Lakonik/t2i-prompts-3m
+library_name: diffusers
 license: other
 license_name: flux-1-dev-non-commercial-license
 license_link: LICENSE.md
-datasets:
-- Lakonik/t2i-prompts-3m
-base_model:
-- black-forest-labs/FLUX.1-dev
 pipeline_tag: text-to-image
-library_name: diffusers
+tags:
+- flux
+- flow-matching
+- distillation
 ---
 
 # pi-Flow: Policy-Based Flow Models
@@ -26,6 +30,9 @@ Distilled 4-step and 8-step FLUX.1 models proposed in the paper:
 <br>
 [[arXiv](https://arxiv.org/abs/2510.14974)] [[Code](https://github.com/Lakonik/piFlow)] [[pi-Qwen Demo🤗](https://huggingface.co/spaces/Lakonik/pi-Qwen)] [[pi-FLUX Demo🤗](https://huggingface.co/spaces/Lakonik/pi-FLUX.1)]
 
+## Abstract
+Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models ($\pi$-Flow). $\pi$-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard $\ell_2$ flow matching loss. By simply mimicking the teacher's behavior, $\pi$-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet 256$^2$, it attains a 1-NFE FID of 2.85, outperforming MeanFlow of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, $\pi$-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality.
+
 ![teaser](https://cdn-uploads.huggingface.co/production/uploads/638067fcb334960c987fbeda/H0J1LYUcSS5YqOwZqQ0Jb.jpeg)
 
 ## Usage
@@ -108,4 +115,4 @@ out.save('dxflux_4nfe.png')
 primaryClass={cs.LG},
 url={https://arxiv.org/abs/2510.14974},
 }
-```
+```
```