---
datasets:
- CSU-JPG/VisPrompt5M
- CSU-JPG/VPBench
language:
- en
license: apache-2.0
pipeline_tag: image-to-image
tags:
- flow-matching
- image-generation
- image-editing
- vision-centric
---
# FlowInOne: Unifying Multimodal Generation as Image-in, Image-out Flow Matching

TL;DR: The first vision-centric image-in, image-out image generation model.

Homepage | Code | Paper | Dataset | Benchmark | Model
## Authors

Junchao Yi, Rui Zhao, Jiahao Tang, Weixian Lei, Linjie Li, Qisheng Su, Zhengyuan Yang, Lijuan Wang, Xiaofeng Zhu, Alex Jinpeng Wang
## About
FlowInOne is a framework that reformulates multimodal generation as a purely visual flow, converting all inputs into visual prompts and enabling a clean image-in, image-out pipeline governed by a single flow matching model.
This vision-centric formulation naturally eliminates cross-modal alignment bottlenecks, noise scheduling, and task-specific architectural branches, unifying text-to-image generation, layout-guided editing, and visual instruction following under one coherent paradigm.
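To make the "single flow matching model" idea concrete, below is a minimal, generic flow matching training step in PyTorch (rectified-flow style linear interpolation between an input canvas and a target image). It is only a sketch of the objective under common assumptions; the `velocity_net` module, tensor shapes, and pairing of inputs and targets are illustrative and not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(velocity_net, x0, x1):
    """Generic flow matching objective on an (image-in, image-out) pair.

    x0: the image-in side (e.g. a rendered visual prompt), shape (B, C, H, W)
    x1: the image-out target, same shape
    velocity_net: any module mapping (x_t, t) -> predicted velocity (hypothetical here)
    """
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device).view(b, 1, 1, 1)   # random time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1                           # linear interpolation path
    target_velocity = x1 - x0                               # constant velocity along that path
    pred_velocity = velocity_net(x_t, t.view(b))            # model predicts the velocity field
    return F.mse_loss(pred_velocity, target_velocity)
```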
## Setup
```bash
# Create conda environment
conda create -n flowinone python=3.10 -y
conda activate flowinone

# Clone the repository and install required packages
git clone https://github.com/CSU-JPG/FlowInOne.git
cd FlowInOne/scripts
sh setup.sh
```
## Usage
### 1. Download Weights
You can download the model weights and model preparation files using the following commands:
```bash
# Model weights (create the target directory first)
mkdir -p checkpoints
wget -O checkpoints/flowinone_256px.pth https://huggingface.co/CSU-JPG/FlowInOne/resolve/main/flowinone_256px.pth

# Model preparation files
wget https://huggingface.co/CSU-JPG/FlowInOne/resolve/main/preparation.tar.gz
tar -xzvf preparation.tar.gz
```
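Alternatively, if you prefer the Hugging Face Hub client over `wget`, the same files can be fetched with the standard `huggingface_hub` API; the repo ID and filenames below are taken directly from the URLs above.

```python
from huggingface_hub import hf_hub_download

# Download the 256px checkpoint from the CSU-JPG/FlowInOne repo on the Hub.
ckpt_path = hf_hub_download(
    repo_id="CSU-JPG/FlowInOne",
    filename="flowinone_256px.pth",
    local_dir="checkpoints",
)

# Download the preparation archive; extract it with tar as shown above.
prep_path = hf_hub_download(
    repo_id="CSU-JPG/FlowInOne",
    filename="preparation.tar.gz",
)
print(ckpt_path, prep_path)
```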
### 2. Inference
Run inference with the provided script in the repository:
```bash
sh scripts/inference.sh
```
Our training and inference scripts are fully available on GitHub.
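For orientation, the sketch below shows what image-in, image-out sampling with a flow matching model typically looks like: Euler integration of the learned velocity field from the visual-prompt canvas to the output image. The `FlowInOneModel` class and the starting-point choice are placeholder assumptions, not this repository's API; use `scripts/inference.sh` for the real entry point.

```python
import torch

class FlowInOneModel(torch.nn.Module):
    """Hypothetical stand-in; the real model loading code lives in the GitHub repo."""
    def forward(self, x_t, t):
        # Predict the velocity field v(x_t, t); zero here purely for illustration.
        return torch.zeros_like(x_t)

@torch.no_grad()
def sample(model, visual_prompt, num_steps=50):
    """Integrate the flow from an input canvas toward the output image (Euler steps)."""
    x = visual_prompt.clone()          # image in: start from the rendered visual prompt
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        v = model(x, t)                # predicted velocity at time t
        x = x + dt * v                 # Euler step along the flow
    return x                           # image out

model = FlowInOneModel()
prompt_canvas = torch.randn(1, 3, 256, 256)   # stand-in for an encoded visual prompt
output_image = sample(model, prompt_canvas)
```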
## Citation

If you find our work useful, please consider citing:
```bibtex
@article{yi2026flowinoneunifyingmultimodalgenerationimagein,
  title={FlowInOne: Unifying Multimodal Generation as Image-in, Image-out Flow Matching},
  author={Junchao Yi and Rui Zhao and Jiahao Tang and Weixian Lei and Linjie Li and Qisheng Su and Zhengyuan Yang and Lijuan Wang and Xiaofeng Zhu and Alex Jinpeng Wang},
  journal={arXiv preprint arXiv:2604.06757},
  year={2026}
}
```