Add task category and update paper link

#2 opened by nielsr (HF Staff)
Files changed (1)
README.md CHANGED (+12 -2)
@@ -1,3 +1,10 @@
+ ---
+ language:
+ - en
+ task_categories:
+ - video-text-to-text
+ ---
+
  <!-- # TimeBlind Benchmark -->

  <!-- TimeBlind: A video VQA benchmark for evaluating temporal understanding in vision-language models -->
@@ -12,10 +19,13 @@

  <div align="center">

- [🏠**Home Page**](https://baiqi-li.github.io/timeblind_project/) | [🤗**HuggingFace**](https://huggingface.co/datasets/BaiqiL/TimeBlind) | [**📖Paper**(coming soon)](https://arxiv.org/abs/2602.00288) | [🖥️ **Code**](https://github.com/Baiqi-Li/TimeBlind)
+ [🏠**Home Page**](https://baiqi-li.github.io/timeblind_project/) | [🤗**HuggingFace**](https://huggingface.co/datasets/BaiqiL/TimeBlind) | [**📖Paper**](https://huggingface.co/papers/2602.00288) | [🖥️ **Code**](https://github.com/Baiqi-Li/TimeBlind)

  </div>

+ ## Introduction
+ Fine-grained spatio-temporal understanding is essential for video reasoning and embodied AI. TimeBlind is a diagnostic benchmark for compositional spatio-temporal understanding. Inspired by cognitive science, TimeBlind categorizes fine-grained temporal understanding into three levels: recognizing atomic events, characterizing event properties, and reasoning about event interdependencies. It leverages a minimal-pairs paradigm where video pairs share identical static visual content but differ solely in temporal structure, utilizing complementary questions to neutralize language priors.
+
  ## Setup

  ```bash
@@ -71,7 +81,7 @@ I-Acc serves as our primary metric.
  - **Acc**: Binary VQA accuracy
  - **Q_Acc**: Question accuracy
  - **V_Acc**: Video accuracy
- - **I_Acc**: Instance accuracy (the primary metric in our pape)
+ - **I_Acc**: Instance accuracy (the primary metric in our paper)

  # Copyright & Infringement Notice
  The data provided in this benchmark is intended for academic research purposes only. We respect the intellectual property rights of the content creators.
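
For context on the metric names touched by this diff: below is a minimal sketch of how the four accuracies could be aggregated under the minimal-pairs setup described in the new Introduction, assuming each instance pairs two videos with two complementary questions (four binary judgments per instance). The `Judgment` record, its field names, and the grouping scheme are illustrative assumptions, not the benchmark's actual evaluation code.

```python
from dataclasses import dataclass

@dataclass
class Judgment:
    instance_id: str   # one minimal pair: two videos sharing static content
    video_id: str      # which of the pair's two videos was shown
    question_id: str   # which of the two complementary questions was asked
    correct: bool      # whether the model's binary answer matched the label

def compute_metrics(judgments: list[Judgment]) -> dict[str, float]:
    # Acc: plain accuracy over all binary VQA judgments.
    acc = sum(j.correct for j in judgments) / len(judgments)

    # Group the judgments three ways for the stricter scores (assumed grouping).
    by_question: dict = {}   # (instance, question) -> answers across both videos
    by_video: dict = {}      # (instance, video)    -> answers to both questions
    by_instance: dict = {}   # instance             -> all four answers
    for j in judgments:
        by_question.setdefault((j.instance_id, j.question_id), []).append(j.correct)
        by_video.setdefault((j.instance_id, j.video_id), []).append(j.correct)
        by_instance.setdefault(j.instance_id, []).append(j.correct)

    def strict(groups: dict) -> float:
        # A group scores only if every answer inside it is correct.
        return sum(all(flags) for flags in groups.values()) / len(groups)

    return {
        "Acc": acc,
        "Q_Acc": strict(by_question),  # question answered right on both videos
        "V_Acc": strict(by_video),     # both questions right on one video
        "I_Acc": strict(by_instance),  # all four right (the primary metric)
    }
```

Under this reading, I_Acc is the strictest score: a model leaning on language priors can still do well on plain Acc while I_Acc stays near chance, which matches the README's statement that I-Acc serves as the primary metric.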