Improve dataset card: Add description, links, task categories, and evaluation usage

This PR significantly enhances the dataset card for `cheryyunl/ROVER-Gen` by:
- Adding `text-to-image` and `image-text-to-text` to the `task_categories` metadata, reflecting the reciprocal cross-modal reasoning tasks.
- Providing a descriptive introduction to the ROVER benchmark, based on the paper abstract.
- Including direct links to the associated paper, project page, and GitHub repository.
- Adding sections for "Evaluation Metrics" and "Dataset Categorization" (Reasoning Types and Dimensions) from the GitHub README, providing crucial context for the benchmark.
- Incorporating a "Sample Usage" section directly from the GitHub README's "Quick Start" guide, detailing how to set up and run evaluations using the ROVER evaluation system, which automatically handles dataset loading.
These updates improve the discoverability, clarity, and completeness of the dataset card on the Hugging Face Hub.
---
license: mit
task_categories:
- text-to-image
- image-text-to-text
dataset_info:
- config_name: ROVER-IG
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: image2
    dtype: image
  - name: target_image
    dtype: image
  - name: prompt
    dtype: string
  - name: dimension
    dtype:
      class_label:
        names:
          '0': logic
          '1': science
          '2': culture
          '3': common_sense
  - name: reasoning_type
    dtype: string
  - name: keywords
    dtype: string
  - name: target_description
    dtype: string
  splits:
  - name: train
    num_bytes: 738665916
    num_examples: 908
  download_size: 735038053
  dataset_size: 738665916
- config_name: ROVER-TG
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: reasoning_image
    dtype: image
  - name: example_image
    dtype: image
  - name: image2
    dtype: image
  - name: prompt
    dtype: string
  - name: problem_type
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 894933512.0
    num_examples: 404
  download_size: 765965957
  dataset_size: 894933512.0
configs:
- config_name: ROVER-IG
  data_files:
  - split: train
    path: ROVER-IG/train-*
- config_name: ROVER-TG
  data_files:
  - split: train
    path: ROVER-TG/train-*
---

# ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation

This repository hosts the ROVER (Reciprocal Cross-Modal Reasoning for Omnimodal Generation) benchmark dataset. ROVER is a human-annotated benchmark designed to explicitly target reciprocal cross-modal reasoning, an ability central to the vision of unified multimodal intelligence. It comprises 1312 tasks grounded in 1876 images, spanning two complementary settings:

* **Verbally-augmented reasoning for visual generation (ROVER-IG)**: evaluates whether models can use verbal prompts and reasoning chains to guide faithful image synthesis.
* **Visually-augmented reasoning for verbal generation (ROVER-TG)**: evaluates whether models can generate intermediate visualizations that strengthen their own reasoning processes for question answering.

For more details, see:

- **Paper**: [ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation](https://huggingface.co/papers/2511.01163)
- **Project Page**: [https://roverbench.github.io/](https://roverbench.github.io/)
- **Code (Evaluation System)**: [https://github.com/cheryyunl/ROVER](https://github.com/cheryyunl/ROVER)

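Although the ROVER evaluation system (see Sample Usage below) downloads the data automatically, the two configs can also be loaded directly with the `datasets` library. A minimal sketch; the config names come from this card's YAML header, and the first call downloads several hundred MB:

```python
def load_rover(config: str = "ROVER-IG"):
    """Load one ROVER config from the Hugging Face Hub.

    Requires `pip install datasets`; the import is deferred so that an
    invalid config name fails fast without touching the network.
    """
    if config not in {"ROVER-IG", "ROVER-TG"}:
        raise ValueError("Available configs: ROVER-IG, ROVER-TG")
    from datasets import load_dataset
    return load_dataset("cheryyunl/ROVER-Gen", config, split="train")

# ds = load_rover("ROVER-IG")  # then e.g. ds[0]["prompt"], ds[0]["image"]
```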
## Evaluation Metrics

The ROVER evaluation system uses 5 core metrics to assess model performance:

| Metric | Code | Description | Requires Text |
|--------|------|-------------|---------------|
| **Reasoning Process** | RP | Quality of written reasoning steps | ✅ Yes |
| **Reasoning Visual** | RV | Visual result matches target description | ❌ No |
| **Reasoning Alignment** | RA | Consistency between text and visual result | ✅ Yes |
| **Visual Consistency** | VC | Non-target elements remain unchanged | ❌ No |
| **Image Quality** | IQ | Technical quality of generated image | ❌ No |

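Per-task scores for these metrics can be averaged into a per-metric summary. The sketch below is illustrative only; the record layout is an assumption for the example, not the actual schema of the evaluation system's `rover_metrics.jsonl`:

```python
from statistics import mean

METRICS = ("RP", "RV", "RA", "VC", "IQ")

def average_metrics(records):
    """Mean of each metric over a list of per-task score dicts.

    Text-dependent metrics (RP, RA) may be absent when no reasoning
    text was submitted; missing or None values are simply skipped.
    """
    summary = {}
    for metric in METRICS:
        values = [r[metric] for r in records if r.get(metric) is not None]
        summary[metric] = mean(values) if values else None
    return summary
```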
## Dataset Categorization

The ROVER dataset is structured to test various reasoning types and dimensions:

### Reasoning Types

- **Temporal**: Changes over time (growth, aging, weather transitions)
- **Spatial**: Geometric transformations (rotation, perspective, positioning)
- **Quantitative**: Numerical changes (counting, scaling, proportions)
- **Causal**: Cause-effect relationships (interventions, reactions)
- **Synthetic**: Creative additions/modifications (style transfer, object addition)

### Dimensions

- **Science**: Physics, chemistry, biology principles
- **Humanity**: Cultural, historical, social contexts
- **Common Sense**: Everyday knowledge and practical understanding
- **Logic**: Mathematical and formal reasoning

## Sample Usage

This dataset is designed to be used with the ROVER evaluation system, which handles data loading automatically. The following steps, adapted from the official GitHub repository's "Quick Start" guide, outline how to set up the environment and run an evaluation.

### 1. Setup

Install the packages required by the evaluation system:

```bash
pip3 install -r requirements.txt
```

### 2. Configure OpenAI Credentials

Configure your OpenAI API key and model (e.g., `gpt-4o`) using environment variables (recommended):

```bash
export OPENAI_API_KEY="your-api-key"
export OPENAI_MODEL="gpt-4o"
```

Alternatively, you can edit the `config.py` file directly.

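If you wire these variables into your own scripts, the usual lookup-with-fallback pattern applies. This is an illustrative sketch, not the actual contents of `config.py`:

```python
import os

def load_openai_config(env=None):
    """Read OpenAI settings from environment variables.

    Illustrative only: config.py in the ROVER repo defines the real
    defaults. Falls back to "gpt-4o" when OPENAI_MODEL is unset, and
    fails loudly when the required API key is missing.
    """
    env = os.environ if env is None else env
    api_key = env.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is required to run the evaluation")
    return {"api_key": api_key, "model": env.get("OPENAI_MODEL", "gpt-4o")}
```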
### 3. Configure Data Path for Generated Results

Set the directory path where your model's generated images and optional reasoning texts are stored:

```bash
export ROVER_GEN_DIR="/path/to/your/generated/results"
export MAX_RETRIES="3"  # Optional: number of retries for failed evaluations
```

Your generation directory should contain files in this format, with task IDs following `{dimension}_{reasoning_type}_{number}`:

```
your_gen_dir/
├── gen_{task_id}.png   # Generated image (required)
├── gen_{task_id}.txt   # Reasoning text (optional)
├── gen_science_temporal_1.png
├── gen_science_temporal_1.txt
├── gen_science_causal_3.png
├── gen_science_causal_3.txt
└── ...
```

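Because dimensions such as `common_sense` themselves contain underscores, the safe way to split a task ID back into its parts is from the right. An illustrative helper, not part of the evaluation system:

```python
def parse_task_id(task_id: str):
    """Split a '{dimension}_{reasoning_type}_{number}' task ID.

    Splits from the right, since a dimension like 'common_sense'
    contains an underscore while the reasoning type and number do not.
    """
    rest, number = task_id.rsplit("_", 1)
    dimension, reasoning_type = rest.rsplit("_", 1)
    return dimension, reasoning_type, int(number)

# parse_task_id("common_sense_temporal_1") -> ("common_sense", "temporal", 1)
```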
### 4. Run Evaluation

Execute the evaluation script. The ROVER dataset (`cheryyunl/ROVER-Gen`) will be downloaded from Hugging Face automatically by the evaluation system for comparison.

```bash
# Evaluate all available results
python evaluate_rover.py --output_dir results

# Filter by reasoning type
python evaluate_rover.py --output_dir results --reasoning_type temporal

# Filter by dimension
python evaluate_rover.py --output_dir results --dimension science
```

### 5. View Results

After evaluation, generate a summary report and view the detailed results:

```bash
# Generate summary report
python summarize.py --results_file results/rover_metrics.jsonl --output_dir results

# View detailed results
cat results/rover_summary.json
```