---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- video
- text
- Robotics
- Autonomous Driving
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: Video
    dtype: string
  - name: Source
    dtype: string
  - name: Task
    dtype: string
  - name: QType
    dtype: string
  - name: Question
    dtype: string
  - name: Prompt
    dtype: string
  - name: time_start
    dtype: float64
  - name: time_end
    dtype: float64
  - name: Candidates
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
    - name: E
      dtype: string
  - name: Answer
    dtype: string
  - name: Answer Detail
    dtype: string
  - name: ID
    dtype: int64
  - name: scene
    dtype: string
  splits:
  - name: test
    num_bytes: 1299057
    num_examples: 2064
  download_size: 392237
  dataset_size: 1299057
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---


# [ICCV 2025] Spatial-Temporal Intelligence Benchmark (STI-Bench)

<div style="text-align: center">
  <a href="https://arxiv.org/abs/2503.23765"><img src="https://img.shields.io/badge/arXiv-2503.23765-b31b1b.svg" alt="arXiv"></a>
  <a href="https://huggingface.co/datasets/MINT-SJTU/STI-Bench"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-blue" alt="Hugging Face Datasets"></a>
  <a href="https://github.com/MINT-SJTU/STI-Bench"><img src="https://img.shields.io/badge/GitHub-Code-lightgrey" alt="GitHub Repo"></a>
  <a href="https://mint-sjtu.github.io/STI-Bench.io/"><img src="https://img.shields.io/badge/Homepage-STI--Bench-brightgreen" alt="Homepage"></a>
</div>
<div style="text-align: center">
  <a href="https://mp.weixin.qq.com/s/yIRoyI1HbChLZv4GuvI7BQ"><img src="https://img.shields.io/badge/量子位-red" alt="量子位"></a>
  <a href="https://app.xinhuanet.com/news/article.html?articleId=8af447763b11efc491455eb93a27eac0"><img src="https://img.shields.io/badge/新华网-red" alt="新华网"></a>
    <a href="https://mp.weixin.qq.com/s/pVytCfXmcG-Wkg-sOHk_BA"><img src="https://img.shields.io/badge/PaperWeekly-red" alt="PaperWeekly"></a>
</div>


This repository contains the Spatial-Temporal Intelligence Benchmark (STI-Bench), introduced in the paper [“STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding?”](https://arxiv.org/abs/2503.23765), which evaluates the ability of Multimodal Large Language Models (MLLMs) to understand spatial-temporal concepts through real-world video data.

## Files

To download the raw files, make sure `git-lfs` is installed, then clone the repository:

```bash
# Make sure git-lfs is installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/MINT-SJTU/STI-Bench
```


## Dataset Description

STI-Bench evaluates MLLMs’ spatial-temporal understanding by testing their ability to estimate, predict, and understand object appearance, pose, displacement, and motion from video data. The benchmark contains 2,064 question-answer pairs across 300 videos covering real-world desktop, indoor, and outdoor environments, drawn from datasets such as Omni6DPose, ScanNet, and Waymo.

STI-Bench is designed to challenge models on both static and dynamic spatial-temporal tasks, including:

| Task Name | Description |
| :-------- | :---------- |
| **3D Video Grounding** | Locate the 3D bounding box of objects in the video |
| **Ego-Centric Orientation** | Estimate the camera's rotation angle |
| **Pose Estimation** | Determine the camera pose |
| **Dimensional Measurement** | Measure the length of objects |
| **Displacement & Path Length** | Estimate the distance traveled by objects or camera |
| **Speed & Acceleration** | Predict the speed and acceleration of moving objects or camera |
| **Spatial Relation** | Identify the relative positions of objects |
| **Trajectory Description** | Summarize the trajectory of moving objects or camera |

### Dataset Fields Explanation

The dataset contains the following fields, each with its respective description:

| Field Name        | Description |
| :---------------- | :---------- |
| **Video**         | The string corresponding to the video file. |
| **Source**        | The string corresponding to the video source, which can be "ScanNet," "Waymo," or "Omni6DPose." |
| **Task**          | The string representing the task type, e.g., "3D Video Grounding," "Ego-Centric Orientation," etc. |
| **QType**         | The string specifying the question type, typically a multiple-choice question. |
| **Question**      | The string containing the question presented to the model. |
| **Prompt**        | Additional information that might be helpful for answering the question, such as object descriptions. |
| **time_start**    | A float64 value indicating the start time of the question in the video (in seconds). |
| **time_end**      | A float64 value indicating the end time of the question in the video (in seconds). |
| **Candidates**    | A dictionary containing answer choices in the format `{"A": "value", "B": "value", ...}`. |
| **Answer**        | The string corresponding to the correct answer, represented by the choice label (e.g., "A", "B", etc.). |
| **Answer Detail** | A string representing the precise value or description of the correct answer. |
| **ID**            | A sequential ID for each question, unique within that video. |
| **scene**         | The string describing the scene type of the video, such as "indoor," "outdoor," or "desktop." |
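
To illustrate how these fields fit together, here is a minimal sketch that assembles one record into a multiple-choice prompt. The `build_prompt` helper and the sample record are hypothetical illustrations, not part of the official tooling:

```python
def build_prompt(record):
    """Assemble a multiple-choice prompt from one STI-Bench-style record.

    `record` is a dict shaped like the fields described above; this is a
    hypothetical helper for illustration only.
    """
    lines = [record["Question"]]
    if record.get("Prompt"):
        lines.append(record["Prompt"])
    # Candidates is a struct of choice labels; some labels may be unset.
    for label in "ABCDE":
        choice = record["Candidates"].get(label)
        if choice:
            lines.append(f"{label}. {choice}")
    lines.append("Answer with the letter of the best choice.")
    return "\n".join(lines)


# A made-up record, for demonstration only.
sample = {
    "Question": "What is the speed of the ego vehicle?",
    "Prompt": "Times are in seconds.",
    "Candidates": {"A": "5 m/s", "B": "10 m/s", "C": "15 m/s",
                   "D": "20 m/s", "E": None},
}
print(build_prompt(sample))
```

Unset choice labels (here `"E"`) are simply skipped, so the same helper works for four- and five-option questions.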

## Evaluation

STI-Bench evaluates performance using accuracy, calculated from exact matches on multiple-choice questions.

We provide an out-of-the-box evaluation of STI-Bench in our [GitHub repository](https://github.com/MINT-SJTU/STI-Bench).
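
Under this protocol, scoring reduces to exact matching of predicted choice labels against the `Answer` field. A minimal sketch (not the official evaluation script):

```python
def accuracy(predictions, answers):
    """Exact-match accuracy over choice labels such as "A".."E".

    Hypothetical helper for illustration; the official evaluation lives
    in the GitHub repository linked above. Labels are compared
    case-insensitively after stripping whitespace.
    """
    if not answers:
        return 0.0
    correct = sum(
        p.strip().upper() == a.strip().upper()
        for p, a in zip(predictions, answers)
    )
    return correct / len(answers)


print(accuracy(["A", "c", "B"], ["A", "C", "D"]))  # 2 of 3 match
```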

## Citation

```bibtex
@article{li2025sti,
    title={STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding?}, 
    author={Yun Li and Yiming Zhang and Tao Lin and XiangRui Liu and Wenxiao Cai and Zheng Liu and Bo Zhao},
    year={2025},
    journal={arXiv preprint arXiv:2503.23765},
}
```