---
license: apache-2.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - video
  - text
  - Robotics
  - Autonomous Driving
size_categories:
  - 1K<n<10K
dataset_info:
  features:
    - name: Video
      dtype: string
    - name: Source
      dtype: string
    - name: Task
      dtype: string
    - name: QType
      dtype: string
    - name: Question
      dtype: string
    - name: Prompt
      dtype: string
    - name: time_start
      dtype: float64
    - name: time_end
      dtype: float64
    - name: Candidates
      struct:
        - name: A
          dtype: string
        - name: B
          dtype: string
        - name: C
          dtype: string
        - name: D
          dtype: string
        - name: E
          dtype: string
    - name: Answer
      dtype: string
    - name: Answer Detail
      dtype: string
    - name: ID
      dtype: int64
    - name: scene
      dtype: string
  splits:
    - name: test
      num_bytes: 1299057
      num_examples: 2064
  download_size: 392237
  dataset_size: 1299057
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# [ICCV 2025] Spatial-Temporal Intelligence Benchmark (STI-Bench)


This repository contains the Spatial-Temporal Intelligence Benchmark (STI-Bench), introduced in the paper “STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding?”, which evaluates the ability of Multimodal Large Language Models (MLLMs) to understand spatial-temporal concepts through real-world video data.

## Files

```bash
# Make sure git-lfs is installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/MIRA-SJTU/STI-Bench
```
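
The annotations can also be loaded directly with the Hugging Face `datasets` library. A minimal sketch (note that the `Video` field stores only a video file name as a string, so the video files themselves come from the cloned repository above):

```python
from datasets import load_dataset

# Load the question-answer annotations; the benchmark has a single test split.
ds = load_dataset("MIRA-SJTU/STI-Bench", split="test")

print(ds)                  # 2,064 examples
print(ds[0]["Question"])   # first question text
```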

## Dataset Description

STI-Bench evaluates MLLMs’ spatial-temporal understanding by testing their ability to estimate, predict, and understand object appearance, pose, displacement, and motion from video data. The benchmark contains more than 2,000 question-answer pairs across 300 videos, sourced from real-world environments such as desktop settings, indoor scenes, and outdoor scenarios. These videos are taken from datasets like Omni6DPose, ScanNet, and Waymo.

STI-Bench is designed to challenge models on both static and dynamic spatial-temporal tasks, including:

| Task Name | Description |
| --- | --- |
| 3D Video Grounding | Locate the 3D bounding box of an object in the video |
| Ego-Centric Orientation | Estimate the camera's rotation angle |
| Pose Estimation | Determine the camera pose |
| Dimensional Measurement | Measure the length of objects |
| Displacement & Path Length | Estimate the distance traveled by an object or the camera |
| Speed & Acceleration | Predict the speed and acceleration of a moving object or the camera |
| Spatial Relation | Identify the relative positions of objects |
| Trajectory Description | Summarize the trajectory of a moving object or the camera |

## Dataset Fields Explanation

Each example in the dataset contains the following fields:

| Field Name | Description |
| --- | --- |
| Video | The string corresponding to the video file. |
| Source | The string corresponding to the video source: "ScanNet", "Waymo", or "Omni6DPose". |
| Task | The string representing the task type, e.g., "3D Video Grounding" or "Ego-Centric Orientation". |
| QType | The string specifying the question type, typically multiple-choice. |
| Question | The string containing the question presented to the model. |
| Prompt | Additional information that may help answer the question, such as object descriptions. |
| time_start | A float64 value indicating the start time of the question in the video (in seconds). |
| time_end | A float64 value indicating the end time of the question in the video (in seconds). |
| Candidates | A dictionary containing answer choices in the format `{"A": "value", "B": "value", ...}`. |
| Answer | The string corresponding to the correct answer, given as the choice label (e.g., "A", "B"). |
| Answer Detail | A string giving the precise value or description of the correct answer. |
| ID | A sequential ID for each question, unique within its video. |
| scene | The string describing the scene type of the video: "indoor", "outdoor", or "desktop". |
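
As an illustration of how these fields fit together, the sketch below assembles one record into a multiple-choice prompt. The template here is a hypothetical example for demonstration, not the official prompt used in the paper:

```python
def build_prompt(example: dict) -> str:
    """Turn one STI-Bench record into a multiple-choice prompt.

    Illustrative template only; the official evaluation prompt may differ.
    """
    # Candidates is a dict like {"A": "...", ..., "E": "..."}; skip empty slots.
    choices = "\n".join(
        f"{label}. {text}"
        for label, text in example["Candidates"].items()
        if text
    )
    return (
        f"{example['Question']}\n"
        f"{example['Prompt']}\n"
        f"Options:\n{choices}\n"
        "Answer with the letter of the correct option."
    )
```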

## Evaluation

STI-Bench evaluates performance using accuracy: a prediction counts as correct only if the predicted choice label exactly matches the ground-truth Answer.
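
A rough sketch of such exact-match scoring (the `extract_choice` helper is an illustrative assumption, not the official answer-parsing logic):

```python
import re

def extract_choice(response: str) -> str | None:
    """Pull the first standalone choice letter (A-E) out of a model response."""
    match = re.search(r"\b([A-E])\b", response)
    return match.group(1) if match else None

def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Exact-match accuracy between predicted and ground-truth choice labels."""
    correct = sum(
        extract_choice(pred) == ans for pred, ans in zip(predictions, answers)
    )
    return correct / len(answers)
```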

We provide an out-of-the-box evaluation of STI-Bench in our GitHub repository.

## Citation

```bibtex
@article{li2025sti,
    title={STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding?},
    author={Yun Li and Yiming Zhang and Tao Lin and XiangRui Liu and Wenxiao Cai and Zheng Liu and Bo Zhao},
    year={2025},
    journal={arXiv preprint arXiv:2503.23765},
}
```