nielsr HF Staff committed on
Commit
aa05f3e
·
verified ·
1 Parent(s): 1b41b8a

Improve dataset card: Add description, links, task categories, and evaluation usage


This PR significantly enhances the dataset card for `cheryyunl/ROVER-Gen` by:
- Adding `text-to-image` and `image-text-to-text` to the `task_categories` metadata, reflecting the reciprocal cross-modal reasoning tasks.
- Providing a descriptive introduction to the ROVER benchmark, based on the paper abstract.
- Including direct links to the associated paper, project page, and GitHub repository.
- Adding sections for "Evaluation Metrics" and "Dataset Categorization" (Reasoning Types and Dimensions) from the GitHub README, providing crucial context for the benchmark.
- Incorporating a "Sample Usage" section directly from the GitHub README's "Quick Start" guide, detailing how to set up and run evaluations using the ROVER evaluation system, which automatically handles dataset loading.

These updates improve the discoverability, clarity, and completeness of the dataset card on the Hugging Face Hub.

Files changed (1)
  1. README.md +179 -69
README.md CHANGED
@@ -1,69 +1,179 @@
- ---
- license: mit
- dataset_info:
- - config_name: ROVER-IG
-   features:
-   - name: id
-     dtype: string
-   - name: image
-     dtype: image
-   - name: image2
-     dtype: image
-   - name: target_image
-     dtype: image
-   - name: prompt
-     dtype: string
-   - name: dimension
-     dtype:
-       class_label:
-         names:
-           '0': logic
-           '1': science
-           '2': culture
-           '3': common_sense
-   - name: reasoning_type
-     dtype: string
-   - name: keywords
-     dtype: string
-   - name: target_description
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 738665916
-     num_examples: 908
-   download_size: 735038053
-   dataset_size: 738665916
- - config_name: ROVER-TG
-   features:
-   - name: id
-     dtype: string
-   - name: image
-     dtype: image
-   - name: reasoning_image
-     dtype: image
-   - name: example_image
-     dtype: image
-   - name: image2
-     dtype: image
-   - name: prompt
-     dtype: string
-   - name: problem_type
-     dtype: string
-   - name: answer
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 894933512.0
-     num_examples: 404
-   download_size: 765965957
-   dataset_size: 894933512.0
- configs:
- - config_name: ROVER-IG
-   data_files:
-   - split: train
-     path: ROVER-IG/train-*
- - config_name: ROVER-TG
-   data_files:
-   - split: train
-     path: ROVER-TG/train-*
- ---
 
+ ---
+ license: mit
+ task_categories:
+ - text-to-image
+ - image-text-to-text
+ dataset_info:
+ - config_name: ROVER-IG
+   features:
+   - name: id
+     dtype: string
+   - name: image
+     dtype: image
+   - name: image2
+     dtype: image
+   - name: target_image
+     dtype: image
+   - name: prompt
+     dtype: string
+   - name: dimension
+     dtype:
+       class_label:
+         names:
+           '0': logic
+           '1': science
+           '2': culture
+           '3': common_sense
+   - name: reasoning_type
+     dtype: string
+   - name: keywords
+     dtype: string
+   - name: target_description
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 738665916
+     num_examples: 908
+   download_size: 735038053
+   dataset_size: 738665916
+ - config_name: ROVER-TG
+   features:
+   - name: id
+     dtype: string
+   - name: image
+     dtype: image
+   - name: reasoning_image
+     dtype: image
+   - name: example_image
+     dtype: image
+   - name: image2
+     dtype: image
+   - name: prompt
+     dtype: string
+   - name: problem_type
+     dtype: string
+   - name: answer
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 894933512.0
+     num_examples: 404
+   download_size: 765965957
+   dataset_size: 894933512.0
+ configs:
+ - config_name: ROVER-IG
+   data_files:
+   - split: train
+     path: ROVER-IG/train-*
+ - config_name: ROVER-TG
+   data_files:
+   - split: train
+     path: ROVER-TG/train-*
+ ---
+
+ # ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation
+
+ This repository hosts the ROVER (Reciprocal Cross-Modal Reasoning for Omnimodal Generation) benchmark dataset. ROVER is a human-annotated benchmark that explicitly targets reciprocal cross-modal reasoning, an ability central to the vision of unified multimodal intelligence. It comprises 1,312 tasks grounded in 1,876 images, spanning two complementary settings:
+
+ * **Verbally-augmented reasoning for visual generation (ROVER-IG)**: evaluates whether models can use verbal prompts and reasoning chains to guide faithful image synthesis.
+ * **Visually-augmented reasoning for verbal generation (ROVER-TG)**: evaluates whether models can generate intermediate visualizations that strengthen their own reasoning for question answering.
+
+ For more details, refer to the:
+ - **Paper**: [ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation](https://huggingface.co/papers/2511.01163)
+ - **Project Page**: [https://roverbench.github.io/](https://roverbench.github.io/)
+ - **Code (Evaluation System)**: [https://github.com/cheryyunl/ROVER](https://github.com/cheryyunl/ROVER)
+
+ ## Evaluation Metrics
+
+ The ROVER evaluation system uses five core metrics to assess model performance:
+
+ | Metric | Code | Description | Requires Text |
+ |--------|------|-------------|---------------|
+ | **Reasoning Process** | RP | Quality of written reasoning steps | ✅ Yes |
+ | **Reasoning Visual** | RV | Visual result matches the target description | ❌ No |
+ | **Reasoning Alignment** | RA | Consistency between text and visual result | ✅ Yes |
+ | **Visual Consistency** | VC | Non-target elements remain unchanged | ❌ No |
+ | **Image Quality** | IQ | Technical quality of the generated image | ❌ No |
+
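For scripting around the evaluation output, it can be convenient to have these metric codes in a lookup table. The sketch below simply mirrors the table above; it is illustrative and not part of the ROVER codebase:

```python
# Metric codes used by the ROVER evaluation system, mirroring the table above.
# This mapping is illustrative, not part of the official codebase.
METRICS = {
    "RP": {"name": "Reasoning Process", "requires_text": True},
    "RV": {"name": "Reasoning Visual", "requires_text": False},
    "RA": {"name": "Reasoning Alignment", "requires_text": True},
    "VC": {"name": "Visual Consistency", "requires_text": False},
    "IQ": {"name": "Image Quality", "requires_text": False},
}

# Metrics that can be computed from images alone, without reasoning text:
image_only = [code for code, m in METRICS.items() if not m["requires_text"]]
print(image_only)  # ['RV', 'VC', 'IQ']
```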
+ ## Dataset Categorization
+
+ The ROVER dataset is structured to test various reasoning types and dimensions:
+
+ ### Reasoning Types
+ - **Temporal**: Changes over time (growth, aging, weather transitions)
+ - **Spatial**: Geometric transformations (rotation, perspective, positioning)
+ - **Quantitative**: Numerical changes (counting, scaling, proportions)
+ - **Causal**: Cause-effect relationships (interventions, reactions)
+ - **Synthetic**: Creative additions/modifications (style transfer, object addition)
+
+ ### Dimensions
+ - **Science**: Physics, chemistry, biology principles
+ - **Humanity**: Cultural, historical, social contexts
+ - **Common Sense**: Everyday knowledge and practical understanding
+ - **Logic**: Mathematical and formal reasoning
+
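In the ROVER-IG config, the `dimension` field is stored as a class label whose integer-to-name mapping appears in the YAML metadata above (`logic`, `science`, `culture`, `common_sense`). A minimal sketch of decoding it:

```python
# Mapping copied from the ROVER-IG `dimension` class_label metadata above.
DIMENSION_NAMES = {0: "logic", 1: "science", 2: "culture", 3: "common_sense"}

def decode_dimension(label: int) -> str:
    """Return the dimension name for an integer class label."""
    return DIMENSION_NAMES[label]

print(decode_dimension(3))  # common_sense
```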
+ ## Sample Usage
+
+ This dataset is designed to be used with the ROVER evaluation system, which handles data loading automatically. The following steps, adapted from the official GitHub repository, outline how to set up the environment and run an evaluation.
+
+ ### 1. Setup
+
+ Install the packages required by the evaluation system:
+ ```bash
+ pip3 install -r requirements.txt
+ ```
+
+ ### 2. Configure OpenAI Credentials
+
+ Configure your OpenAI API key and model (e.g., `gpt-4o`) using environment variables (recommended):
+ ```bash
+ export OPENAI_API_KEY="your-api-key"
+ export OPENAI_MODEL="gpt-4o"
+ ```
+ Alternatively, you can edit the `config.py` file directly.
+
+ ### 3. Configure Data Path for Generated Results
+
+ Set the directory where your model's generated images and optional reasoning texts are stored:
+ ```bash
+ export ROVER_GEN_DIR="/path/to/your/generated/results"
+ export MAX_RETRIES="3"  # Optional: number of retries for failed evaluations
+ ```
+
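These variables might be consumed along the following lines. This is a hypothetical sketch of what a `config.py` could look like, not the actual contents of the file in the repository; the variable names follow the README and the defaults are illustrative:

```python
import os

# Hypothetical mirror of how config.py might read these environment
# variables; names follow the README, defaults are illustrative.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
OPENAI_MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o")
ROVER_GEN_DIR = os.environ.get("ROVER_GEN_DIR", "./generated_results")
MAX_RETRIES = int(os.environ.get("MAX_RETRIES", "3"))

print(OPENAI_MODEL, MAX_RETRIES)
```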
+ Your generation directory should contain files in this format, with task IDs following `{dimension}_{reasoning_type}_{number}`:
+ ```
+ your_gen_dir/
+ ├── gen_{task_id}.png   # Generated image (required)
+ ├── gen_{task_id}.txt   # Reasoning text (optional)
+ ├── gen_science_temporal_1.png
+ ├── gen_science_temporal_1.txt
+ ├── gen_science_causal_3.png
+ ├── gen_science_causal_3.txt
+ └── ...
+ ```
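Filenames in this layout can be parsed back into their components. A small sketch; the regex is an assumption based on the `{dimension}_{reasoning_type}_{number}` convention above, with the reasoning types taken from the categorization section:

```python
import re

# Pattern assumed from the naming convention above:
# gen_{dimension}_{reasoning_type}_{number}.png / .txt
PATTERN = re.compile(
    r"^gen_([a-z_]+?)_(temporal|spatial|quantitative|causal|synthetic)_(\d+)\.(png|txt)$"
)

def parse_result_filename(name):
    """Split a generated-result filename into (dimension, reasoning_type, number)."""
    m = PATTERN.match(name)
    if m is None:
        return None
    dimension, rtype, number, _ext = m.groups()
    return dimension, rtype, int(number)

print(parse_result_filename("gen_science_temporal_1.png"))
# ('science', 'temporal', 1)
```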
+
+ ### 4. Run Evaluation
+
+ Execute the evaluation script. The ROVER dataset (`cheryyunl/ROVER-Gen`) will be automatically downloaded from Hugging Face by the evaluation system for comparison.
+
+ ```bash
+ # Evaluate all available results
+ python evaluate_rover.py --output_dir results
+
+ # Filter by reasoning type
+ python evaluate_rover.py --output_dir results --reasoning_type temporal
+
+ # Filter by dimension
+ python evaluate_rover.py --output_dir results --dimension science
+ ```
+
+ ### 5. View Results
+
+ After evaluation, generate a summary report and view detailed results:
+ ```bash
+ # Generate summary report
+ python summarize.py --results_file results/rover_metrics.jsonl --output_dir results
+
+ # View detailed results
+ cat results/rover_summary.json
+ ```
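If you want to inspect `rover_metrics.jsonl` yourself rather than through `summarize.py`, per-metric averages can be computed along these lines. The field names and sample records below are illustrative, not real benchmark output; check the actual file for the exact schema:

```python
import json
from collections import defaultdict

def average_metrics(jsonl_lines, metric_keys=("RP", "RV", "RA", "VC", "IQ")):
    """Average each metric over all JSONL records that contain it.

    `jsonl_lines` is an iterable of JSON strings, one record per line.
    The metric field names are assumptions; adjust to the real schema.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for line in jsonl_lines:
        record = json.loads(line)
        for key in metric_keys:
            if key in record:
                totals[key] += record[key]
                counts[key] += 1
    return {k: totals[k] / counts[k] for k in totals}

# Illustrative records, not real benchmark output:
sample = [
    '{"id": "science_temporal_1", "RV": 4, "IQ": 5}',
    '{"id": "science_causal_3", "RV": 2, "IQ": 3}',
]
print(average_metrics(sample))  # {'RV': 3.0, 'IQ': 4.0}
```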