---
license: apache-2.0
datasets:
- CSU-JPG/VisPrompt5M
- CSU-JPG/VPBench
language:
- en
metrics:
- code_eval
pipeline_tag: image-to-image
---
<div align="center">
<h2 align="center" style="margin-top: 0; margin-bottom: 15px;">
<span style="color:#0052CC">F</span><span style="color:#135FD0">l</span><span style="color:#266CD4">o</span><span style="color:#3979D7">w</span><span style="color:#4C86DB">I</span><span style="color:#6093DF">n</span><span style="color:#73A0E3">O</span><span style="color:#86ADE7">n</span><span style="color:#99BAEB">e</span>: Unifying Multimodal Generation as
<span style="color:#0052CC">I</span><span style="color:#0958CE">m</span><span style="color:#125ED0">a</span><span style="color:#1B64D2">g</span><span style="color:#246AD4">e</span><span style="color:#2D70D6">-</span><span style="color:#3676D8">i</span><span style="color:#3F7CDA">n</span><span style="color:#4882DC">,</span>&nbsp;<span style="color:#5188DE">I</span><span style="color:#5A8EE0">m</span><span style="color:#6394E2">a</span><span style="color:#6C9AE4">g</span><span style="color:#75A0E6">e</span><span style="color:#7EA6E8">-</span><span style="color:#87ACEA">o</span><span style="color:#90B2EC">u</span><span style="color:#99B8EE">t</span> Flow Matching
</h2>
<p align="center" style="font-size: 15px;">
<span style="color:#E74C3C; font-weight: bold;">TL;DR:</span> <strong>The first vision-centric image-in, image-out image generation model.</strong>
</p>
<p align="center" style="font-size: 16px;">
<a href="https://csu-jpg.github.io/FlowInOne.github.io/" style="text-decoration: none;">🌐 Homepage</a> |
<a href="https://github.com/CSU-JPG/FlowInOne" style="text-decoration: none;">πŸ’» Code</a> |
<a href="https://arxiv.org/pdf/2604.06757" style="text-decoration: none;">πŸ“„ Paper</a> |
<a href="https://huggingface.co/datasets/CSU-JPG/VisPrompt5M" style="text-decoration: none;">πŸ“ Dataset</a> |
<a href="https://huggingface.co/datasets/CSU-JPG/VPBench" style="text-decoration: none;">🌏 Benchmark</a> |
<a href="https://huggingface.co/CSU-JPG/FlowInOne" style="text-decoration: none;">πŸ€— Model</a>
</p>
</div>
## About
We present FlowInOne, a framework that reformulates multimodal generation as a **purely visual flow**, converting all inputs into visual prompts and enabling a clean **image-in, image-out** pipeline governed by a single flow matching model.
This vision-centric formulation naturally eliminates cross-modal alignment bottlenecks, noise scheduling, and task-specific architectural branches, **unifying text-to-image generation, layout-guided editing, and visual instruction following under one coherent paradigm**.
Extensive experiments demonstrate that FlowInOne achieves **state-of-the-art performance across all unified generation tasks**, surpassing both open-source models and competitive commercial systems. These results establish a new foundation for fully vision-centric generative modeling, where perception and creation coexist within a single continuous visual space.
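The full method lives in the paper and the GitHub repo; as background, the sketch below illustrates the generic rectified flow matching objective that an image-in, image-out model of this kind trains against. Everything in it (`ToyVelocityNet`, the single conv layer, the tensor shapes) is a hypothetical stand-in for illustration, not the actual FlowInOne architecture.

```python
import torch
import torch.nn.functional as F

class ToyVelocityNet(torch.nn.Module):
    """Hypothetical stand-in for the backbone: predicts a velocity field
    from the noisy image and the visual prompt alone (purely image-in)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        # One conv keeps the sketch runnable; the real model is a large
        # flow matching network (see the paper and repo).
        self.net = torch.nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, x_t, visual_prompt, t):
        # Condition on the visual prompt by channel concatenation.
        # (A real model would also embed the timestep t.)
        return self.net(torch.cat([x_t, visual_prompt], dim=1))

def flow_matching_loss(model, x1, visual_prompt):
    """Standard rectified-flow objective: regress the constant velocity
    x1 - x0 along the straight path from noise x0 to data x1."""
    x0 = torch.randn_like(x1)            # noise endpoint
    t = torch.rand(x1.size(0), 1, 1, 1)  # per-sample time in [0, 1]
    x_t = (1 - t) * x0 + t * x1          # point on the linear path
    v_pred = model(x_t, visual_prompt, t)
    return F.mse_loss(v_pred, x1 - x0)

# Smoke test with random tensors standing in for images.
model = ToyVelocityNet()
target = torch.randn(2, 3, 64, 64)
prompt = torch.randn(2, 3, 64, 64)
print(flow_matching_loss(model, target, prompt).item())
```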
## πŸ§ͺ Usage
Download the model weights and the preparation files:
```bash
# model weights
wget -O /path/to/flowinone_256px.pth https://huggingface.co/CSU-JPG/FlowInOne/resolve/main/flowinone_256px.pth
# model preparation files
wget -O /path/to/preparation.tar.gz https://huggingface.co/CSU-JPG/FlowInOne/resolve/main/preparation.tar.gz
# extract (create the target directory first)
mkdir -p /path/to/preparation
tar -xzvf /path/to/preparation.tar.gz -C /path/to/preparation
```
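To sanity-check the download, you can inspect the checkpoint. This assumes the `.pth` file is an ordinary PyTorch checkpoint (a state dict, possibly nested under a `"state_dict"` key); the official loading logic lives in the repo's inference scripts.

```python
import torch

# Load on CPU and peek at a few entries. Depending on your torch version,
# you may need to pass weights_only=False for non-tensor checkpoint contents.
ckpt = torch.load("/path/to/flowinone_256px.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
if isinstance(state, dict):
    print(f"{len(state)} entries")
    for name, value in list(state.items())[:5]:
        print(name, getattr(value, "shape", type(value)))
```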
To try the demo data, download the dataset examples:
```bash
wget -O /path/to/flowinone_demo_dataset.tar.gz https://huggingface.co/CSU-JPG/FlowInOne/resolve/main/flowinone_demo_dataset.tar.gz
# extract (create the target directory first)
mkdir -p /path/to/flowinone_demo_dataset
tar -xzvf /path/to/flowinone_demo_dataset.tar.gz -C /path/to/flowinone_demo_dataset
```
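A quick way to see what the archive contains (the path is whatever extraction directory you chose above; the layout itself is not documented here):

```python
from pathlib import Path

# List the first few files in the extracted demo dataset.
root = Path("/path/to/flowinone_demo_dataset")
for path in sorted(root.rglob("*"))[:20]:
    print(path.relative_to(root))
```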
Our training and inference scripts are now available on [GitHub](https://github.com/CSU-JPG/FlowInOne)!
## Citation
If you find our work useful, please consider citing:
```bibtex
@article{yi2026flowinoneunifyingmultimodalgenerationimagein,
title={FlowInOne: Unifying Multimodal Generation as Image-in, Image-out Flow Matching},
author={Junchao Yi and Rui Zhao and Jiahao Tang and Weixian Lei and Linjie Li and Qisheng Su and Zhengyuan Yang and Lijuan Wang and Xiaofeng Zhu and Alex Jinpeng Wang},
journal={arXiv preprint arXiv:2604.06757},
year={2026}
}
```