Add task category and dataset description
#1
by nielsr (HF Staff) - opened

README.md CHANGED

@@ -337,4 +337,132 @@ configs:
  data_files:
  - split: test
    path: temporal/test-*
task_categories:
- image-text-to-text
---

<p align="center">
  <img src="./images/stare_name.png" alt="STARE" width="300" />
</p>
<h1 align="center">
  Evaluating Multimodal Models on Visual Simulations
</h1>

<div align="center" style="font-family: Arial, sans-serif;">
  <p>
    <a href="https://arxiv.org/abs/2506.04633" style="text-decoration: none; font-weight: bold;">[🔗 ArXiv]</a> •
    <a href="https://arxiv.org/pdf/2506.04633" style="text-decoration: none; font-weight: bold;">[📑 Paper]</a> •
    <a href="https://huggingface.co/datasets/kuvvi/STARE" style="text-decoration: none; font-weight: bold;">[🤗 Data]</a>
  </p>
</div>

<p align="center" width="80%">
  <img src="./images/overview.png" width="80%" height="70%">
</p>
<p align="center" style="font-size: 14px; color: gray;">
  <em>An overview of <b>STARE</b>.</em>
</p>

## 😳 STARE: Unfolding Spatial Cognition

STARE is structured to comprehensively cover spatial reasoning at multiple complexity levels, from basic geometric transformations (2D and 3D) to more integrated tasks (cube net folding and tangram puzzles) and real-world spatial reasoning scenarios (temporal frame and perspective reasoning). Each task is presented as a multiple-choice or yes/no question using carefully designed visual and textual prompts. In total, the dataset contains about 4K instances across different evaluation setups.

<p align="center">
  <img src="images/vissim.png" width="90%"> <br>
  <b>Visual simulation of a cube net folding task reveals the challenges of spatial reasoning.</b>
</p>

Models exhibit significant variation in spatial reasoning performance across STARE tasks. Accuracy is highest on simple 2D transformations (up to 87.7%) but drops substantially for 3D tasks and multi-step reasoning (e.g., cube nets, tangrams), often nearing chance. Visual simulations generally improve performance, though inconsistently across models. The reasoning-optimized o1 model performs best overall with VisSim, yet still lags behind humans. Human participants consistently outperform models, confirming the complexity of STARE tasks.

## 📖 Dataset Usage

You can download the dataset with the following command (here, taking the `folding_nets` subset as an example):

```python
from datasets import load_dataset

dataset = load_dataset("kuvvi/STARE", "folding_nets", split="test")
```

### Data Format

The dataset is provided in JSONL format; each record contains the following attributes:

```
{
    "pid": [string] Problem ID, e.g., "2d_va_vsim_001",
    "question": [string] The question text,
    "answer": [string] The correct answer for the problem,
    "images": [list] The images the problem needs,
    "other_info": [string] Additional information about this question,
    "category": [string] The category of the problem, e.g., "2D_text_instruction",
}
```
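
As a quick sanity check, a record in this schema can be parsed with the standard `json` module. The record below is a made-up illustration of the field layout, not actual dataset content:

```python
import json

# Hypothetical JSONL record following the schema above
# (illustration only, not real dataset content).
line = (
    '{"pid": "2d_va_vsim_001", "question": "Which option shows the result?", '
    '"answer": "A", "images": [], "other_info": "", '
    '"category": "2D_text_instruction"}'
)
record = json.loads(line)
print(record["pid"], record["category"])
```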

## Requirements

```bash
git clone https://github.com/STARE-bench/STARE.git
cd STARE
pip install -e .
```

## 📈 Evaluation

### Response Generation

Our repository supports evaluating open-source models such as Qwen2-VL, InternVL, and LLaVA, as well as closed-source models such as GPT, Gemini, and Claude.
You can generate responses from these models with the following commands:

Open-source model:

```bash
python generate_response.py \
    --dataset_name 'kuvvi/STARE' \
    --split 'test' \
    --category '2D_text_instruct_VSim' \
    --strategy 'CoT' \
    --config_path 'configs/gpt.yaml' \
    --model_path 'path_to_your_local_model' \
    --output_path 'path_to_output_json_file' \
    --max_tokens 4096 \
    --temperature 0.7 \
    --save_every 20
```

Closed-source model:

```bash
python generate_response.py \
    --dataset_name 'kuvvi/STARE' \
    --split 'test' \
    --category '2D_text_instruct_VSim' \
    --config_path 'configs/gpt.yaml' \
    --model 'remote-model-name' \
    --api_key '' \
    --output_path 'path_to_output_file_name.json' \
    --max_tokens 4096 \
    --temperature 0 \
    --save_every 20
```

### Score Calculation

Finally, run `python evaluation/calculate_acc.py` to calculate the final score from the generated responses.
This step computes overall accuracy as well as accuracy for each subject, category, and task.
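
The kind of scoring this step performs can be sketched as follows. This is a hypothetical illustration, not the repository's actual implementation; the result-record fields (`category`, `answer`, `prediction`) and the category names in the toy data are assumptions:

```python
from collections import defaultdict

# Hypothetical sketch of per-category accuracy scoring; the result-record
# format ({"category", "answer", "prediction"}) is assumed for illustration.
def accuracy_by_category(results):
    total = correct = 0
    per_cat = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for r in results:
        hit = int(r["prediction"] == r["answer"])
        total += 1
        correct += hit
        per_cat[r["category"]][0] += hit
        per_cat[r["category"]][1] += 1
    overall = correct / total
    return overall, {k: c / t for k, (c, t) in per_cat.items()}

# Toy results for demonstration (category names are made up).
results = [
    {"category": "2D_text_instruction", "answer": "A", "prediction": "A"},
    {"category": "2D_text_instruction", "answer": "B", "prediction": "A"},
    {"category": "3D_rotation", "answer": "yes", "prediction": "yes"},
]
overall, per_cat = accuracy_by_category(results)
print(f"overall={overall:.3f}", per_cat)
```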

## 📝 Citation

If you find our benchmark useful in your research, please consider citing:

```bibtex
@misc{li2025unfoldingspatialcognitionevaluating,
  title={Unfolding Spatial Cognition: Evaluating Multimodal Models on Visual Simulations},
  author={Linjie Li and Mahtab Bigverdi and Jiawei Gu and Zixian Ma and Yinuo Yang and Ziang Li and Yejin Choi and Ranjay Krishna},
  year={2025},
  eprint={2506.04633},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.04633},
}
```
|