Commit 674551e (verified) by Junc1i · 1 parent: 07ddb27

Update README.md
Files changed (1): README.md (+4 −3)
README.md CHANGED
@@ -27,8 +27,9 @@ pipeline_tag: image-to-image
 </div>
 
 ## About
-<div align="center">
-We present FlowInOne, a framework that reformulates multimodal generation as a **purely visual flow**, converting all inputs into visual prompts and enabling a clean **image-in, image-out** pipeline governed by a single flow matching model. This vision-centric formulation naturally eliminates cross-modal alignment bottlenecks, noise scheduling, and task-specific architectural branches, **unifying text-to-image generation, layout-guided editing, and visual instruction following under one coherent paradigm**.Extensive experiments demonstrate that FlowInOne achieves **state-of-the-art performance across all unified generation tasks**, surpassing both open-source models and competitive commercial systems, establishing a new foundation for fully vision-centric generative modeling where perception and creation coexist within a single continuous visual space.
-</div>
+We present FlowInOne, a framework that reformulates multimodal generation as a **purely visual flow**, converting all inputs into visual prompts and enabling a clean **image-in, image-out** pipeline governed by a single flow matching model.
+This vision-centric formulation naturally eliminates cross-modal alignment bottlenecks, noise scheduling, and task-specific architectural branches, **unifying text-to-image generation, layout-guided editing, and visual instruction following under one coherent paradigm**.
+Extensive experiments demonstrate that FlowInOne achieves **state-of-the-art performance across all unified generation tasks**, surpassing both open-source models and competitive commercial systems, establishing a new foundation for fully vision-centric generative modeling where perception and creation coexist within a single continuous visual space.
+
 ## 🧪 Usage
 Our training and inference scripts are now available on [GitHub](https://github.com/CSU-JPG/FlowInOne)!
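For readers unfamiliar with the flow matching the card refers to, the sampling idea can be sketched in a few lines. This is a hypothetical toy illustration, not FlowInOne's actual API or model: it replaces the learned velocity network with the closed-form optimal-transport velocity toward a known target, and Euler-integrates the flow from Gaussian noise.

```python
import numpy as np

# Toy flow matching sketch (hypothetical; not FlowInOne's real interface):
# a velocity field v(x, t) transports noise x_0 toward a sample x_1 by
# integrating dx/dt = v(x, t) over t in [0, 1].

def velocity(x, t, target):
    # Stand-in for the learned model: for a known target, the
    # optimal-transport flow-matching velocity is (target - x) / (1 - t).
    return (target - x) / max(1.0 - t, 1e-6)

def sample(target, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)  # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity(x, t, target)  # Euler step along the flow
    return x

target = np.ones((4, 4))  # stand-in "image"
out = sample(target)
print(np.abs(out - target).max())  # near 0: the flow reaches the target
```

In the real model, `velocity` would be a neural network conditioned on the visual prompt, and `target` is of course unknown at inference time; the sketch only shows why a single ODE integration suffices for the image-in, image-out pipeline.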