EdwinHuang committed on
Commit 1857114 · verified · 1 Parent(s): 5348228

Create README.md

Files changed (1)
  1. README.md +88 -0
README.md ADDED
@@ -0,0 +1,88 @@
---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- video
- spatial-intelligence
- recall
- benchmark
language:
- en
---

# VSI-SUPER-Recall

**[Website](https://vision-x-nyu.github.io/cambrian-s.github.io/)** | **[Paper](https://arxiv.org/abs/2025)** | **[GitHub](https://github.com/cambrian-mllm/cambrian-s)** | **[Models](https://huggingface.co/collections/nyu-visionx/cambrian-s-models)**

**Authors**: [Shusheng Yang*](https://github.com/vealocia), [Jihan Yang*](https://jihanyang.github.io/), [Pinzhi Huang†](https://pinzhihuang.github.io/), [Ellis Brown†](https://ellisbrown.github.io/), et al.

VSI-SUPER-Recall is a benchmark for testing long-horizon spatial observation and recall in arbitrarily long videos. It evaluates whether models can remember and recall the order in which unusual objects appeared across extended video sequences.

## Overview

VSI-SUPER-Recall challenges models to:
- Track object appearances across long videos (10-240 minutes)
- Recall the temporal order of inserted objects
- Maintain spatial memory over extended periods

This benchmark is part of [VSI-Super](https://huggingface.co/collections/nyu-visionx/vsi-super), which also includes [VSI-SUPER-Count](https://huggingface.co/datasets/nyu-visionx/VSI-SUPER-Count).

## Quick Start

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("nyu-visionx/VSI-SUPER-Recall", split="test")

# Access a sample
sample = dataset[0]
print(sample)
```
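
Each sample also carries a `type` field giving the video duration (see Dataset Structure below), which makes it easy to evaluate one duration bucket at a time. A minimal sketch, assuming the `type` values match the duration names used in `video_path` (e.g. `"10mins"`):

```python
from datasets import load_dataset

dataset = load_dataset("nyu-visionx/VSI-SUPER-Recall", split="test")

# Keep only the 10-minute videos; the other buckets are assumed to be
# "30mins", "60mins", "120mins", and "240mins".
subset_10min = dataset.filter(lambda ex: ex["type"] == "10mins")
print(len(subset_10min))  # 60 samples per duration bucket
```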

## Dataset Structure

Each sample contains:

```python
{
    "video_path": "10mins/00000000.mp4",
    "question": "These are frames of a video.\nWhich of the following correctly represents the order in which the Pikachu appeared in the video?",
    "options": [
        "A. Trash can, Bed, Chair, Basket",
        "B. Trash can, Bed, Basket, Chair",
        "C. Bed, Chair, Basket, Trash can",
        "D. Bed, Chair, Trash can, Basket"
    ],
    "answer": "A",     # Correct option letter
    "type": "10mins"   # Video duration
}
```

**Key points:**
- 300 samples total (60 per video duration)
- Video durations: 10, 30, 60, 120, and 240 minutes
- Videos downsampled to 1 frame per second
- Multiple-choice format with 4 options (a simple scoring sketch follows this list)
- Questions ask about the order of appearance of inserted objects
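
Model outputs are typically free-form text while `answer` is a single option letter, so a little parsing is needed before scoring. The following is a minimal scoring sketch under that assumption, not the official evaluation script: `extract_choice` grabs the first standalone A-D letter in a response, and accuracy is reported per duration bucket via the `type` field.

```python
import re
from collections import defaultdict

def extract_choice(response):
    """Return the first standalone option letter (A-D) in a model response, or None."""
    match = re.search(r"\b([A-D])\b", response)
    return match.group(1) if match else None

def score(samples, responses):
    """samples: dataset rows; responses: model output strings in the same order."""
    correct, total = defaultdict(int), defaultdict(int)
    for sample, response in zip(samples, responses):
        bucket = sample["type"]            # e.g. "10mins"
        total[bucket] += 1
        if extract_choice(response) == sample["answer"]:
            correct[bucket] += 1
    return {bucket: correct[bucket] / total[bucket] for bucket in total}

# Hypothetical usage: accuracies = score(dataset, model_outputs)
```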

## Dataset Details

- **Total samples**: 300
- **Video durations**: 10mins (60), 30mins (60), 60mins (60), 120mins (60), 240mins (60)
- **Question format**: Multiple choice about object appearance order
- **Frame rate**: 1 FPS (downsampled)
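
Since the videos are prepared at 1 FPS, feeding a model one frame per second preserves the full content. The snippet below is an illustrative loader, not part of the dataset tooling: it assumes `video_root` points at a local copy of the video files laid out as in `video_path` (e.g. `10mins/00000000.mp4`) and uses OpenCV to sample one frame per second, which also stays correct if a source video has a higher frame rate.

```python
import os
import cv2  # pip install opencv-python

def load_frames_1fps(video_root, video_path):
    """Read approximately one frame per second from a video file."""
    cap = cv2.VideoCapture(os.path.join(video_root, video_path))
    fps = cap.get(cv2.CAP_PROP_FPS) or 1.0   # fall back to 1.0 if FPS is unreported
    step = max(1, round(fps))                # keep every `step`-th frame
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)             # BGR numpy array of shape (H, W, 3)
        index += 1
    cap.release()
    return frames

# Hypothetical usage with a locally downloaded copy of the videos:
# frames = load_frames_1fps("/path/to/VSI-SUPER-Recall-videos", sample["video_path"])
```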

## Citation

```bibtex
@article{yang2025cambrian,
  title={Cambrian-S: Towards Spatial Supersensing in Video},
  author={Yang, Shusheng and Yang, Jihan and Huang, Pinzhi and Brown, Ellis and others},
  journal={arXiv preprint arXiv:2025},
  year={2025}
}
```