---
license: mit
title: 'Trokens: Semantic-Aware Relational Trajectory Tokens Dataset'
tags:
- computer-vision
- action-recognition
- few-shot-learning
- video-understanding
- point-tracking
size_categories:
- 100K<n<1M
task_categories:
- video-classification
---

# Trokens Dataset: Semantic-Aware Relational Trajectory Tokens for Few-shot Action Recognition

This dataset contains the preprocessed data for "Trokens: Semantic-Aware Relational Trajectory Tokens for Few-Shot Action Recognition" (ICCV 2025).

[**Paper**]() | [**Project Page**](https://www.cs.umd.edu/~pulkit/trokens/) | [**Code**](https://github.com/pulkitkumar95/trokens)

## Dataset Overview

This dataset provides semantic-aware relational trajectory tokens (Trokens) extracted from multiple action recognition datasets, designed specifically for few-shot action recognition tasks. The dataset includes semantically meaningful point trajectories extracted using CoTracker3 and DINOv2 features, along with few-shot episode split information.

## Dataset Structure

The dataset contains two main components:

### 1. Point Tracking Data (`cotracker3_bip_fr_32/`)

Semantic point trajectories extracted using CoTracker3 with bipartite clustering on DINOv2 features:

```
cotracker3_bip_fr_32/
└── {dataset_name}/
    └── feat_dump/
        └── {video_name}.pkl
```

Each pickle file contains:
- **`pred_tracks`**: tracked point coordinates across frames, shape `[T, N, 2]`
- **`pred_visibility`**: visibility mask for each point, shape `[T, N]`
- **`obj_ids`**: object/cluster IDs for each point, shape `[N]`
- **`point_queries`**: original query point indices, shape `[N]`
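
These arrays compose naturally once loaded. As a hedged sketch (synthetic arrays stand in for a real pickle, and the 50% visibility cutoff is an arbitrary choice, not a value from the dataset), one might keep only points that are visible in at least half the frames:

```python
import numpy as np

# Synthetic stand-ins with the shapes documented above.
T, N = 32, 6                                        # frames, tracked points
rng = np.random.default_rng(0)
pred_tracks = rng.uniform(0, 224, size=(T, N, 2))   # [T, N, 2] point coordinates
pred_visibility = rng.random((T, N)) > 0.3          # [T, N] visibility mask
obj_ids = rng.integers(0, 3, size=N)                # [N] cluster/object IDs

# Keep points visible in at least half the frames (0.5 is an arbitrary cutoff).
keep = pred_visibility.mean(axis=0) >= 0.5          # [N] boolean mask over points
kept_tracks = pred_tracks[:, keep]                  # [T, N_kept, 2]
kept_ids = obj_ids[keep]                            # [N_kept]

print(kept_tracks.shape, kept_ids.shape)
```

The same boolean-mask pattern works for restricting analysis to a single semantic cluster via `obj_ids`.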

### 2. Few-shot Split Information (`few_shot_info/`)

Data splits for few-shot learning evaluation across multiple datasets.

## Point Extraction Details

### Semantic Point Tracking
- **Method**: CoTracker3 with semantic clustering on DINOv2 features
- **Clustering**: bipartite clustering for semantic entity detection
- **Parameters**:
  - Clustering method: `bipartite`
  - Number of frames for clustering: 32
  - Points filtered based on spatial proximity to remove redundancy

### Video Processing
- **Frame Rate**:
  - Most datasets: 10 fps
  - Something Something V2 (SSV2): 12 fps (the original video fps)
- **Point Filtering**: redundant points removed based on spatial proximity
- **GPU Acceleration**: CUDA support for efficient processing
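
To make the frame-rate bullet concrete: resampling a clip to a target rate amounts to picking evenly spaced source-frame indices. The helper below is a pure-arithmetic sketch, not the dataset's own extraction code, which may sample differently:

```python
def resample_frame_indices(num_frames: int, src_fps: float, dst_fps: float) -> list:
    """Indices of source frames approximating a dst_fps sampling of the clip.

    Illustrative arithmetic only; the actual preprocessing pipeline may differ.
    """
    duration = num_frames / src_fps               # clip length in seconds
    n_out = max(1, round(duration * dst_fps))     # frames at the target rate
    step = src_fps / dst_fps                      # source frames per output frame
    return [min(num_frames - 1, int(i * step)) for i in range(n_out)]

# A 30 fps clip of 90 frames resampled to 10 fps keeps every 3rd frame.
print(resample_frame_indices(90, 30, 10))
```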

### Key Features
- Robust point tracking across video frames using CoTracker3
- Semantic point extraction through clustering on DINOv2 features
- Point filtering to remove redundant tracks
- Support for different clustering strategies and parameters

## Supported Datasets

The point tracking data is available for multiple action recognition datasets:
- **Something Something V2 (SSV2)**
- **Kinetics**
- **UCF-101**
- **HMDB-51**
- and others

## Usage

### Loading Point Tracking Data

```python
import pickle

# Load point tracking data for a video (note the feat_dump/ subdirectory,
# matching the directory tree shown above)
with open('cotracker3_bip_fr_32/{dataset_name}/feat_dump/{video_name}.pkl', 'rb') as f:
    data = pickle.load(f)

pred_tracks = data['pred_tracks']          # [T, N, 2] - point coordinates
pred_visibility = data['pred_visibility']  # [T, N]    - visibility mask
obj_ids = data['obj_ids']                  # [N]       - cluster/object IDs
point_queries = data['point_queries']      # [N]       - query point indices
```

### Loading Few-shot Splits

```python
# Load few-shot episode information
# (structure depends on the specific dataset format)
```
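
Because the split format varies per dataset, a reasonable first step is simply to inspect what files `few_shot_info/` actually contains. The helper below assumes only that the directory has been downloaded locally; it makes no claims about file names or formats:

```python
from pathlib import Path

def list_split_files(root: str) -> list:
    """Recursively list all files under a few-shot split directory,
    returned as paths relative to that directory."""
    base = Path(root)
    if not base.is_dir():
        return []
    return sorted(str(p.relative_to(base)) for p in base.rglob('*') if p.is_file())

# Print whatever split files are present for inspection.
for name in list_split_files('few_shot_info'):
    print(name)
```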

## Applications

This dataset is designed for:
- **Few-shot Action Recognition**: training models with limited labeled examples
- **Video Understanding**: learning from semantic-aware relational trajectory tokens (Trokens)
- **Point Tracking Research**: semantic point trajectory analysis
- **Action Recognition**: general video classification tasks

## Technical Details

### Dependencies
- PyTorch
- NumPy
- Pandas
- Einops
- CoTracker3 (for point tracking)
- DINOv2 (for feature extraction)

### Point Extraction Pipeline
1. **Feature Extraction**: DINOv2 features are computed for video frames
2. **Semantic Clustering**: bipartite clustering identifies semantic entities
3. **Point Sampling**: points are sampled from cluster centers
4. **Trajectory Tracking**: CoTracker3 tracks the points across frames
5. **Post-processing**: redundant points are filtered based on spatial proximity
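
Step 5's proximity-based filtering can be sketched as a greedy deduplication. Note the distance threshold and the use of a single reference position per track are illustrative assumptions, not values from the paper:

```python
import numpy as np

def filter_redundant_points(points: np.ndarray, min_dist: float = 8.0) -> np.ndarray:
    """Greedily keep points that are at least min_dist pixels apart.

    points: [N, 2] reference coordinates (e.g. each track's first position).
    Returns a boolean keep-mask of shape [N]. The greedy rule and the
    8-pixel default are assumptions for illustration.
    """
    keep = []
    for i, p in enumerate(points):
        # Keep this point only if it is far enough from every kept point.
        if all(np.linalg.norm(p - points[j]) >= min_dist for j in keep):
            keep.append(i)
    mask = np.zeros(len(points), dtype=bool)
    mask[keep] = True
    return mask

pts = np.array([[0.0, 0.0], [3.0, 4.0], [100.0, 100.0]])
print(filter_redundant_points(pts, min_dist=8.0))  # [ True False  True]
```

The middle point lies 5 pixels from the first and is dropped; the distant point survives.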

## Citation

If you use this dataset in your research, please cite our papers:

```bibtex
@inproceedings{kumar2025trokens,
  title={Trokens: Semantic-Aware Relational Trajectory Tokens for Few-Shot Action Recognition},
  author={Kumar, Pulkit and Huang, Shuaiyi and Walmer, Matthew and Rambhatla, Sai Saketh and Shrivastava, Abhinav},
  booktitle={International Conference on Computer Vision},
  year={2025}
}

@inproceedings{kumar2024trajectory,
  title={Trajectory-aligned Space-time Tokens for Few-shot Action Recognition},
  author={Kumar, Pulkit and Padmanabhan, Namitha and Luo, Luke and Rambhatla, Sai Saketh and Shrivastava, Abhinav},
  booktitle={European Conference on Computer Vision},
  pages={474--493},
  year={2024},
  organization={Springer}
}
```

## Authors

[**Pulkit Kumar***](https://www.cs.umd.edu/~pulkit/)<sup>1</sup> · [**Shuaiyi Huang***](https://shuaiyihuang.github.io/)<sup>1</sup> · [**Matthew Walmer**](https://www.cs.umd.edu/~mwalmer/)<sup>1</sup> · [**Sai Saketh Rambhatla**](https://rssaketh.github.io)<sup>1,2</sup> · [**Abhinav Shrivastava**](http://www.cs.umd.edu/~abhinav/)<sup>1</sup>

<sup>1</sup>University of Maryland, College Park&emsp;&emsp;&emsp;&emsp;<sup>2</sup>GenAI, Meta<br>
<sup>*Equal contribution</sup>

## License

This dataset is released under the MIT License.

## Acknowledgments

This dataset is built upon:
- [CoTracker](https://github.com/facebookresearch/co-tracker): for robust point tracking
- [TATs](https://github.com/pulkitkumar95/tats): Trajectory-aligned Space-time Tokens for Few-shot Action Recognition
- [DINOv2](https://github.com/facebookresearch/dinov2): for semantic feature extraction

We thank the authors for making their code publicly available.