---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: task
      dtype: string
    - name: prompt
      dtype: string
    - name: image_rubrics
      sequence: string
    - name: text_rubrics
      sequence: string
    - name: image_ref
      sequence: image
    - name: text_ref
      dtype: string
  splits:
    - name: test
      num_bytes: 301995195
      num_examples: 1000
  download_size: 292190640
  dataset_size: 301995195
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

UEval: A Benchmark for Unified Multimodal Generation

Bo Li, Yida Yin, Wenhao Chai, Xingyu Fu*, Zhuang Liu* (* indicates co-advising)
Princeton University
[Paper] [Project page] [Code]


We introduce UEval, a benchmark for evaluating unified models, i.e., models capable of generating both images and text. UEval comprises 1,000 expert-curated prompts sourced from 8 diverse real-world domains, each requiring both images and text in the model output.

Results

We evaluate recent unified models on all 8 tasks in our benchmark. Frontier proprietary models consistently outperform open-source ones across all tasks: GPT-5-Thinking achieves the highest average score of 66.4, while the best open-source model, Emu3.5, obtains only 49.1, a gap of over 17 points.

| Model | Space | Textbook | Diagram | Paper | Art | Life | Tech | Exercise | Avg |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Reference | 96.2 | 94.4 | 93.1 | 96.2 | 90.6 | 87.7 | 90.6 | 89.2 | 92.2 |
| Janus-Pro | 21.0 | 31.0 | 37.4 | 15.2 | 26.4 | 23.0 | 17.6 | 11.5 | 22.9 |
| Show-o2 | 25.4 | 33.1 | 33.2 | 17.4 | 25.6 | 15.6 | 17.4 | 13.1 | 22.6 |
| MMaDA | 10.8 | 20.0 | 14.2 | 13.3 | 15.7 | 15.8 | 12.4 | 12.6 | 14.4 |
| BAGEL | 29.8 | 42.5 | 37.2 | 20.0 | 39.0 | 33.6 | 24.8 | 21.4 | 31.0 |
| Emu3.5 | 59.1 | 57.4 | 41.1 | 31.6 | 59.3 | 62.0 | 37.0 | 45.4 | 49.1 |
| Gemini-2.0-Flash | 65.2 | 55.2 | 47.6 | 45.8 | 70.4 | 58.0 | 50.2 | 48.0 | 55.1 |
| Gemini-2.5-Flash | 78.0 | 74.0 | 66.4 | 71.6 | 66.6 | 63.0 | 58.2 | 50.0 | 66.0 |
| GPT-5-Instant | 77.3 | 77.9 | 62.3 | 55.1 | 71.2 | 69.7 | 50.7 | 57.6 | 65.2 |
| GPT-5-Thinking | 84.0 | 78.0 | 67.8 | 51.9 | 67.8 | 63.8 | 57.0 | 61.4 | 66.4 |

Citation

If you find this repository helpful, please consider citing:

```bibtex
@article{li2026ueval,
    title     = {UEval: A Benchmark for Unified Multimodal Generation},
    author    = {Li, Bo and Yin, Yida and Chai, Wenhao and Fu, Xingyu and Liu, Zhuang},
    journal   = {arXiv preprint arXiv:2601.22155},
    year      = {2026}
}
```