Add comprehensive dataset card for RealUnify benchmark
#2
by nielsr (HF Staff) - opened

README.md ADDED
@@ -0,0 +1,103 @@
---
task_categories:
- image-text-to-text
- text-to-image
license: other
language:
- en
tags:
- multimodal
- benchmark
- unified-models
---

# RealUnify: Do Unified Models Truly Benefit from Unification? A Comprehensive Benchmark

**Paper:** [https://huggingface.co/papers/2509.24897](https://huggingface.co/papers/2509.24897) | **Code:** [https://github.com/FrankYang-17/RealUnify](https://github.com/FrankYang-17/RealUnify)

---

## News
- **[2025/09/29]** We are proud to introduce **RealUnify**, a comprehensive benchmark designed to evaluate bidirectional capability synergy.
- Through RealUnify, we aim to raise a key question: **Do Unified Models Truly Benefit from Unification?**

## Abstract

The integration of visual understanding and generation into unified multimodal models represents a significant stride toward general-purpose AI. However, a fundamental question remains unanswered by existing benchmarks: does this architectural unification actually enable synergetic interaction between the constituent capabilities? Existing evaluation paradigms, which primarily assess understanding and generation in isolation, are insufficient for determining whether a unified model can leverage its understanding to enhance its generation, or use generative simulation to facilitate deeper comprehension. To address this critical gap, we introduce RealUnify, a benchmark specifically designed to evaluate bidirectional capability synergy. RealUnify comprises 1,000 meticulously human-annotated instances spanning 10 categories and 32 subtasks. It is structured around two core axes: 1) Understanding Enhances Generation, which requires reasoning (e.g., commonsense, logic) to guide image generation, and 2) Generation Enhances Understanding, which necessitates mental simulation or reconstruction (e.g., of transformed or disordered visual inputs) to solve reasoning tasks. A key contribution is our dual-evaluation protocol, which combines direct end-to-end assessment with a diagnostic stepwise evaluation that decomposes tasks into distinct understanding and generation phases. This protocol allows us to precisely discern whether performance bottlenecks stem from deficiencies in core abilities or from a failure to integrate them. Through large-scale evaluations of 12 leading unified models and 6 specialized baselines, we find that current unified models still struggle to achieve effective synergy, indicating that architectural unification alone is insufficient. These results highlight the need for new training strategies and inductive biases to fully unlock the potential of unified modeling.

## Introduction

- The integration of visual understanding and generation into unified multimodal models represents a significant stride toward general-purpose AI. However, a fundamental question remains unanswered by existing benchmarks: **does this architectural unification actually enable synergetic interaction between the constituent capabilities?**
- Existing evaluation paradigms, which primarily assess understanding and generation in isolation, are insufficient for determining whether a unified model can leverage its understanding to enhance its generation, or use generative simulation to facilitate deeper comprehension.
- To address this critical gap, we introduce **RealUnify**, a benchmark specifically designed to evaluate bidirectional capability synergy. RealUnify comprises **1,000** meticulously human-annotated instances spanning 10 categories and 32 subtasks.
- It is structured around two core axes: **1) Understanding Enhances Generation (UEG)**, which requires reasoning (e.g., commonsense, logic) to guide image generation, and **2) Generation Enhances Understanding (GEU)**, which necessitates mental simulation or reconstruction (e.g., of transformed or disordered visual inputs) to solve reasoning tasks.
- A key contribution is our **dual-evaluation protocol**, which combines direct end-to-end assessment with a diagnostic stepwise evaluation that decomposes tasks into distinct understanding and generation phases. This protocol allows us to precisely discern whether performance bottlenecks stem from deficiencies in core abilities or from a failure to integrate them.

## Benchmark Overview

![overview](https://github.com/FrankYang-17/RealUnify/blob/main/images/main_figure.png?raw=true)

![statistics](https://github.com/FrankYang-17/RealUnify/blob/main/images/data_statistic.png?raw=true)

## Sample Usage (Evaluation Pipeline)

We support two evaluation methods: **direct evaluation** and **stepwise evaluation**.

Before evaluation, please download the dataset files from our [Hugging Face repository](https://huggingface.co/datasets/DogNeverSleep/RealUnify) to your local path.

### Direct Evaluation
- **Understanding Enhances Generation (UEG) Tasks**
  - For the UEG tasks, please use `UEG_direct.json` as the dataset for evaluation.
  - The prompts for image generation are stored in the `prompt` field. Please save the path to each generated image in the `generated_image` field.
  - After obtaining all the generated images and saving the JSON file, please use `eval/eval_generation.py` for evaluation.
  - Please add the model names and their corresponding result JSON files to `task_json_list` in `eval/eval_generation.py`, and set the directory for saving the evaluation results via `RES_JSON_DIR`.

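The direct UEG fill-in loop above can be sketched as follows. This is a minimal sketch, not the repo's own code: `generate_image` is a hypothetical stand-in for your model's generation call, and the output file naming is illustrative.

```python
import json

def generate_image(prompt: str, out_path: str) -> str:
    """Hypothetical stand-in: call your unified model's image-generation
    API here, save the result to out_path, and return the saved path."""
    return out_path

def fill_ueg_direct(records: list, image_dir: str = "outputs/ueg_direct") -> list:
    """Fill the `generated_image` field of each UEG_direct.json record."""
    for i, rec in enumerate(records):
        # The prompt for generation is stored in the `prompt` field.
        rec["generated_image"] = generate_image(rec["prompt"], f"{image_dir}/{i:04d}.png")
    return records

# Real usage: records = json.load(open("UEG_direct.json")), fill the records,
# dump them to a per-model JSON, and list that file in `task_json_list`.
demo = fill_ueg_direct([{"prompt": "a clock showing the time 3 hours after 11:00"}])
```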
- **Generation Enhances Understanding (GEU) Tasks**
  - For the GEU tasks, please use `GEU_direct.json` as the dataset for evaluation.
  - The prompts for visual understanding are stored in the `evaluation_prompt` field. Please save the model's response in the `response` field.
  - After obtaining all the responses and saving the JSON file, please use `eval/eval_understanding.py` for evaluation.
  - Please add the model names and their corresponding result JSON files to `task_json_list` in `eval/eval_understanding.py`.

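The direct GEU side is symmetric: answer each `evaluation_prompt` and store the answer in `response`. In this sketch, `answer` is a hypothetical stand-in for your model's understanding call, and `image` as the input-image field name is an assumption for illustration; check the actual keys in `GEU_direct.json`.

```python
import json

def answer(evaluation_prompt: str, image_path) -> str:
    """Hypothetical stand-in: query the model's understanding side with the
    question and the task's input image; return its answer string."""
    return "model answer"

def fill_geu_direct(records: list) -> list:
    """Fill the `response` field of each GEU_direct.json record."""
    for rec in records:
        # `image` as the field holding the input image path is an assumption.
        rec["response"] = answer(rec["evaluation_prompt"], rec.get("image"))
    return records

# Real usage: load GEU_direct.json, fill the records, dump them to a
# per-model JSON, and list that file in `task_json_list`.
demo = fill_geu_direct([{"evaluation_prompt": "Which option matches the rotated shape?"}])
```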
### Stepwise Evaluation
- **Understanding Enhances Generation (UEG) Tasks**
  - For the UEG tasks, please use `UEG_step.json` as the dataset for evaluation.
  - The prompts for prompt refinement (understanding) are stored in the `new_prompt` field. Please save the model's response in the `response` field.
  - After obtaining all the responses and saving the JSON file, please use each `response` as the prompt for image generation, and save the path to the generated image in the `generated_image` field.
  - Please add the model names and their corresponding result JSON files to `task_json_list` in `eval/eval_generation.py`, and set the directory for saving the evaluation results via `RES_JSON_DIR`.
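The two UEG phases above can be chained in one pass. A minimal sketch, assuming `refine_prompt` and `generate_image` as hypothetical stand-ins for the understanding and generation calls of your model:

```python
def refine_prompt(new_prompt: str) -> str:
    """Phase 1, understanding (hypothetical stand-in): have the model reason
    over `new_prompt` and return an explicit, resolved generation prompt."""
    return new_prompt

def generate_image(prompt: str, out_path: str) -> str:
    """Phase 2, generation (hypothetical stand-in)."""
    return out_path

def run_ueg_stepwise(records: list, out_dir: str = "outputs/ueg_step") -> list:
    for i, rec in enumerate(records):
        # Phase 1: the refined prompt goes into `response`.
        rec["response"] = refine_prompt(rec["new_prompt"])
        # Phase 2: the image is generated from `response`.
        rec["generated_image"] = generate_image(rec["response"], f"{out_dir}/{i:04d}.png")
    return records

demo = run_ueg_stepwise([{"new_prompt": "Draw the animal that says 'moo'."}])
```

The filled JSON is then scored by `eval/eval_generation.py`, exactly as in the direct setting.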
- **Generation Enhances Understanding (GEU) Tasks**
  - For the GEU tasks, please use `GEU_step.json` as the dataset for evaluation.
  - The prompts for image manipulation (editing) are stored in the `edit_prompt` field. Please save the path to each edited image in the `edit_image` field.
  - After obtaining all the edited images and saving the JSON file, please use each `edit_image` as the input image for visual understanding, and save the model's response in the `response` field.
  - Please add the model names and their corresponding result JSON files to `task_json_list` in `eval/eval_understanding.py`.

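The stepwise GEU flow runs the phases in the opposite order: edit first, then answer on the edited image. A sketch under the same caveats: `edit_image` and `answer` are hypothetical stand-ins, and `image` / `evaluation_prompt` as record field names are assumptions for illustration.

```python
def edit_image(image_path, edit_prompt: str, out_path: str) -> str:
    """Phase 1, generation (hypothetical stand-in): apply `edit_prompt` to
    the input image and save the edited result to out_path."""
    return out_path

def answer(question: str, image_path: str) -> str:
    """Phase 2, understanding (hypothetical stand-in)."""
    return "model answer"

def run_geu_stepwise(records: list, out_dir: str = "outputs/geu_step") -> list:
    for i, rec in enumerate(records):
        # Phase 1: the edited image path goes into `edit_image`.
        rec["edit_image"] = edit_image(rec.get("image"), rec["edit_prompt"], f"{out_dir}/{i:04d}.png")
        # Phase 2: answer the question using the edited image.
        rec["response"] = answer(rec.get("evaluation_prompt", ""), rec["edit_image"])
    return records

demo = run_geu_stepwise([{"edit_prompt": "Rotate the image 90 degrees clockwise."}])
```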
## Representative Examples of Each Task
#### Examples of Understanding Enhances Generation (UEG) tasks in RealUnify
![UEG examples](https://github.com/FrankYang-17/RealUnify/blob/main/images/UEG_samples.png?raw=true)
#### Examples of Generation Enhances Understanding (GEU) tasks in RealUnify
![GEU examples](https://github.com/FrankYang-17/RealUnify/blob/main/images/GEU_samples.png?raw=true)


## Dataset License
**License:**
```
RealUnify is only used for academic research. Commercial use in any form is prohibited.
The copyright of all (generated) images belongs to the image/model owners.
If there is any infringement in RealUnify, please email frankyang1517@gmail.com and we will remove it immediately.
Without prior approval, you cannot distribute, publish, copy, disseminate, or modify RealUnify in whole or in part.
You must strictly comply with the above restrictions.
```
For approval, please send an email to <u>frankyang1517@gmail.com</u>.

## Citation
```bibtex
@misc{shi2025realunifyunifiedmodelstruly,
  title={RealUnify: Do Unified Models Truly Benefit from Unification? A Comprehensive Benchmark},
  author={Yang Shi and Yuhao Dong and Yue Ding and Yuran Wang and Xuanyu Zhu and Sheng Zhou and Wenting Liu and Haochen Tian and Rundong Wang and Huanqian Wang and Zuyan Liu and Bohan Zeng and Ruizhe Chen and Qixun Wang and Zhuoran Zhang and Xinlong Chen and Chengzhuo Tong and Bozhou Li and Chaoyou Fu and Qiang Liu and Haotian Wang and Wenjing Yang and Yuanxing Zhang and Pengfei Wan and Yi-Fan Zhang and Ziwei Liu},
  year={2025},
  eprint={2509.24897},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2509.24897},
}
```