---
language:
- en
license: cc-by-sa-4.0
size_categories:
- n<1K
task_categories:
- image-text-to-text
pretty_name: ROME
tags:
- benchmark
- reasoning
- vlm
dataset_info:
  features:
  - name: task_category
    dtype: string
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: img_paths
    dtype: string
  - name: reference
    dtype: string
  - name: question_type
    dtype: string
  - name: evaluator
    dtype: string
  - name: evaluator_kwargs
    dtype: string
  - name: meta_info
    dtype: string
  - name: image_0
    dtype: image
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  splits:
  - name: train
    num_bytes: 108098652
    num_examples: 281
  download_size: 107332725
  dataset_size: 108098652
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

![LRM-Eval](src/LRM-Eval.png)

🏠[Project Page & Leaderboard](https://flageval-baai.github.io/LRM-Eval/) | 💻[Code](https://github.com/flageval-baai/ROME-evaluation) | 📄[Paper](https://huggingface.co/papers/2509.17177) | 🤗[Data](https://huggingface.co/datasets/FlagEval/ROME) | 🤗[Evaluation Response](https://huggingface.co/datasets/)

This repository contains a visual reasoning benchmark named ROME from the paper [FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions](https://huggingface.co/papers/2509.17177).

ROME includes 8 subtasks (281 high-quality questions in total). Each sample has been verified to ensure that the images are necessary to answer correctly:
* Academic
    * questions from college courses
* Diagrams
    * charts and figures collected from recent scientific papers, reports, or blog posts
* Puzzles and games
    * Raven's Progressive Matrices, rebus puzzles, and gameplay
* Memes
    * recreated memes
* Geo
    * geolocation inference
* Recognition
    * fine-grained recognition
* Multi-image
    * find-the-difference tasks and video frame reordering
* Spatial
    * relative positions, depths/distances, heights, etc.
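The per-row fields listed in the schema above (`task_category`, `question_id`, `question_type`, ...) can be inspected with the Hugging Face `datasets` library. Below is a minimal sketch: the actual `load_dataset` call is shown commented out (it needs network access), and the inline rows are hypothetical placeholders that only mirror the card's field names, not real benchmark items.

```python
# Sketch of working with ROME rows by task category.
# The real dataset would be loaded like this (requires network access):
#   from datasets import load_dataset
#   ds = load_dataset("FlagEval/ROME", split="train")  # 281 examples
from collections import Counter

# Hypothetical stand-in rows mirroring the schema's string fields.
rows = [
    {"task_category": "spatial", "question_id": "spatial-0001",
     "question_type": "open-ended"},
    {"task_category": "memes", "question_id": "memes-0001",
     "question_type": "open-ended"},
    {"task_category": "spatial", "question_id": "spatial-0002",
     "question_type": "multiple-choice"},
]

# Count questions per subtask.
per_task = Counter(r["task_category"] for r in rows)
print(dict(per_task))  # {'spatial': 2, 'memes': 1}
```

With the real dataset, the same `Counter` over `ds["task_category"]` would recover the per-subtask question counts.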

We plot overall accuracy against token consumption on the visual problems:

![Comparison of model performance on visual tasks](src/VLM-overall_scatter.png)

## 📰 News
**[09/10/2025]** 🚀 First release of ROME.
We released our [leaderboard](https://github.com/flageval-baai/LRM-Eval) covering the **30+ LLMs and MLLMs** we have tested so far.
We also released all model responses across 4 evaluation runs ([Model responses]()).


## 👋 Evaluation Findings
We conduct a moderate-scale, (hopefully) contamination-free evaluation of current LRMs, with some preliminary findings. To highlight a few:

* At the cost of a few thousand more thinking tokens, LRMs consistently outperform their non-thinking counterparts on challenging problems and puzzles.
* LRMs that achieve high metrics on previous benchmarks also show within-task generalization, so benchmark saturation should not always be attributed to contamination or memorization.
* Many recent findings about LRMs may be model-specific or data-specific. For instance, we observe slight degradation in instruction following only for Claude Sonnet 4 and the DeepSeek series, and for Qwen 3 and DeepSeek LRMs in multi-turn settings.
* Some LRMs degrade in multi-turn settings relative to their non-thinking counterparts, even when they show superior or on-par metrics on single-turn instruction following.
* Current open-weight LRMs tend to be more vulnerable to harmful-content prompts and jailbreaking, implying the need for careful deployment.
* Current-generation text-based inference-time scaling has not yet brought notable gains in visual reasoning for most VLMs.
* Performance varies widely on the generally difficult subsets, which makes statistically reliable evaluation at moderate cost very hard.
* Many top-tier LRMs may pretend to conduct tool use or web search even when they have no real access, which raises questions about reliability. We appeal for more transparency in revealing reasoning details to enable more awareness during usage, especially for multimodal content.
* Signals of misaligned thinking and answers: models are being optimized to be stronger but also harder to monitor or interpret, with inconsistency between thinking and answers non-trivially prevalent in many of the LRMs we investigated.
* Different model developers seem to prioritize different things: on visual questions (our ROME benchmark), Gemini 2.5 Pro tops overall accuracy, o4-mini and GPT-5 strike a better balance between performance and token consumption, and Claude Sonnet 4 shows the best-controlled thinking behavior.

## Licensing Information
The ROME benchmark is licensed under the [CC BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).

## 🥺 Citation Information
```bibtex
@misc{qin2025flageval,
    title={FlagEval Findings Report: A Preliminary Evaluation of Large Reasoning Models on Automatically Verifiable Textual and Visual Questions},
    author={Bowen Qin and Chen Yue and Fang Yin and Hui Wang and JG Yao and Jiakang Liu and Jing-Shu Zheng and Miguel Hu Chen and Richeng Xuan and Shibei Meng and Shiqi Zhou and Teng Dai and Tong-Shuai Ren and Wei Cui and Xi Yang and Xialin Du and Xiaojing Xu and Xue Sun and Xuejing Li and Yaming Liu and Yesheng Liu and Ying Liu and Yonghua Lin and Yu Zhao and Yunduo Zhang and Yuwen Luo and Zheqi He and Zhiyuan He and Zhongyuan Wang},
    year={2025},
    eprint={2509.17177},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```