dcores committed · Commit 1d933b2 (verified) · 1 parent: afbb8cd

Update README.md

Files changed (1): README.md (+7, -8)

README.md CHANGED
@@ -84,18 +84,17 @@ Question and answers are provided as a json file for each task.
 
 Videos in TVBench are sourced from Perception Test, CLEVRER, STAR, MoVQA, Charades-STA, NTU RGB+D, FunQA and CSV. All videos are included in this repository, except for those from NTU RGB+D, which can be downloaded from the official [website](https://rose1.ntu.edu.sg/dataset/actionRecognition/). It is not necessary to download the full dataset, as NTU RGB+D provides a subset specifically for TVBench with the required videos. These videos are required by the Action Antonym task and should be stored in the `video/action_antonym` folder.
 
-## Leaderboard
-https://paperswithcode.com/sota/video-question-answering-on-tvbench
-
 # Citation
 If you find this benchmark useful, please consider citing:
 ```
 
-@misc{cores2024tvbench,
-  author = {Daniel Cores and Michael Dorkenwald and Manuel Mucientes and Cees G. M. Snoek and Yuki M. Asano},
-  title = {Lost in Time: A New Temporal Benchmark for Video LLMs},
-  year = {2024},
-  eprint = {arXiv:2410.07752},
+@inproceedings{Cores_2025_BMVC,
+  author = {Daniel Cores and Michael Dorkenwald and Manuel Mucientes and Cees G. M. Snoek and Yuki M Asano},
+  title = {Lost in Time: A New Temporal Benchmark for VideoLLMs},
+  booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025, Sheffield, UK, November 24-27, 2025},
+  publisher = {BMVA},
+  year = {2025},
+  url = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_857/paper.pdf}
 }
 
 ```
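The context lines above note that each task ships its questions and answers as a single JSON file. A minimal sketch of reading one such file follows; the filename `action_antonym.json` and the record fields are illustrative assumptions, not confirmed by this commit.

```python
import json
import tempfile
from pathlib import Path

def load_task(json_path):
    """Load one task's question/answer records from its JSON file."""
    with open(json_path) as f:
        return json.load(f)

# Self-contained demo with a fabricated record (the real schema may differ):
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "action_antonym.json"
    path.write_text(json.dumps([
        {"video": "video/action_antonym/clip_0001.mp4",
         "question": "Which action is shown?",
         "candidates": ["sitting down", "standing up"],
         "answer": "standing up"}
    ]))
    records = load_task(path)
    print(len(records), records[0]["answer"])
```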