---
size_categories:
- 100K<n<1M
task_categories:
- video-classification
title: 'Trokens: Semantic-Aware Relational Trajectory Tokens Dataset'
tags:
- computer-vision
- action-recognition
- few-shot-learning
- video-understanding
- point-tracking
viewer: false
license: cc-by-nc-4.0
---
# Trokens: Semantic-Aware Relational Trajectory Tokens for Few-shot Action Recognition
This repository contains the preprocessed data for "Trokens: Semantic-Aware Relational Trajectory Tokens for Few-Shot Action Recognition" (ICCV 2025).
[**Paper**](https://arxiv.org/abs/2508.03695) | [**Project Page**](https://trokens-iccv25.github.io/) | [**Code**](https://github.com/pulkitkumar95/trokens)
## Dataset Overview
This dataset provides semantic-aware relational trajectory tokens (Trokens) extracted from multiple action recognition datasets, specifically for few-shot action recognition tasks. The dataset includes semantically meaningful point trajectories extracted using CoTracker3 and DINOv2 features, along with few-shot episode split information.
## Dataset Structure
The dataset contains two main components:
### 1. Point Tracking Data (`cotracker3_bip_fr_32/`)
Each dataset is packaged as a zip file. To extract all of them, run:
```bash
cd cotracker3_bip_fr_32
unzip '*.zip'  # quote the pattern so unzip expands it itself, one archive at a time
```
Semantic point trajectories extracted using CoTracker3 with bipartite clustering on DINOv2 features:
```
cotracker3_bip_fr_32/
└── {dataset_name}/
└── feat_dump/
└── {video_name}.pkl
```
Each pickle file contains:
- **`pred_tracks`**: Tracked point coordinates across frames [T, N, 2]
- **`pred_visibility`**: Visibility mask for each point [T, N]
- **`obj_ids`**: Object/cluster IDs for each point [N]
- **`point_queries`**: Original query point indices [N]
Each file also contains **`vid_info`**, metadata about the video from which the points were extracted:
- **`fps`**: FPS at which the video was processed for point tracking.
- **`height`**: Height of the video.
- **`width`**: Width of the video.
### 2. Few-shot Split Information (`few_shot_info/`)
Data splits for few-shot learning evaluation across multiple datasets.
## Point Extraction Details
Code for extraction can be found on the GitHub repo [here](https://github.com/pulkitkumar95/trokens/tree/main/point_tracking). Some details are provided below.
### Semantic Point Tracking
- **Method**: CoTracker3 with semantic clustering on DINOv2 features
- **Clustering**: Bipartite clustering for semantic entity detection
- **Parameters**:
- Clustering method: `bipartite`
- Number of frames for clustering: 32
- Points filtered based on spatial proximity to remove redundancy
### Video Processing
- **Frame Rate**:
- Most datasets: 10 fps
- Something Something V2 (SSV2): 12 fps (original video fps)
- **Point Filtering**: Redundant points removed based on spatial proximity
- **GPU Acceleration**: CUDA support for efficient processing
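Because most datasets were processed at 10 fps (SSV2 at its native 12 fps), aligning your own frame sampling with the tracking data requires mapping between frame rates. A minimal sketch of that mapping, assuming the original fps of the source video is known (the function name is illustrative, not from the codebase):

```python
import numpy as np

def resample_frame_indices(num_frames: int, src_fps: float, dst_fps: float) -> np.ndarray:
    """Indices of source frames that approximate uniform sampling at dst_fps."""
    duration = num_frames / src_fps                 # clip length in seconds
    n_out = max(1, int(round(duration * dst_fps)))  # number of frames at the target rate
    times = np.arange(n_out) / dst_fps              # timestamp of each output frame
    idx = np.round(times * src_fps).astype(int)     # nearest source frame per timestamp
    return np.clip(idx, 0, num_frames - 1)

# e.g. a 90-frame clip recorded at 30 fps, resampled to 10 fps -> every 3rd frame
print(resample_frame_indices(90, 30, 10))
```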
### Key Features
- Robust point tracking across video frames using CoTracker3
- Semantic point extraction through clustering on DINOv2 features
- Point filtering to remove redundant tracks
- Support for different clustering strategies and parameters
## Supported Datasets
The point tracking data is available for the few-shot splits of multiple action recognition datasets:
- **Something Something V2 (SSV2)**
- **Kinetics**
- **UCF-101**
- **HMDB-51**
- **Finegym**
## Usage
### Loading Point Tracking Data
```python
import pickle
import numpy as np
# Load point tracking data for a video
with open('cotracker3_bip_fr_32/{dataset_name}/feat_dump/{video_name}.pkl', 'rb') as f:
data = pickle.load(f)
pred_tracks = data['pred_tracks'] # [T, N, 2] - point coordinates
pred_visibility = data['pred_visibility'] # [T, N] - visibility mask
obj_ids = data['obj_ids'] # [N] - cluster/object IDs
point_queries = data['point_queries'] # [N] - query point indices
```
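The coordinates in `pred_tracks` are in the pixel space recorded in `vid_info`. A small sketch of a common preprocessing step, normalizing tracks to `[0, 1]` and zeroing invisible points (dummy arrays stand in for a loaded pickle):

```python
import numpy as np

# Dummy stand-ins for data['pred_tracks'], data['pred_visibility'], data['vid_info']
T, N = 4, 8
pred_tracks = np.random.rand(T, N, 2) * [640, 360]   # [T, N, 2] pixel (x, y)
pred_visibility = np.random.rand(T, N) > 0.2         # [T, N] boolean mask
vid_info = {'fps': 10, 'width': 640, 'height': 360}

# Normalize coordinates to [0, 1] using the processing resolution
norm_tracks = pred_tracks / np.array([vid_info['width'], vid_info['height']])

# Zero out coordinates wherever the point is not visible
norm_tracks = norm_tracks * pred_visibility[..., None]

print(norm_tracks.shape)  # (T, N, 2)
```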
### Loading Few-shot Splits
```python
# Load few-shot episode information
# (Structure depends on specific dataset format)
```
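The on-disk split format varies per dataset, but an N-way K-shot episode is constructed the same way regardless: sample N classes, then K support and Q query videos per class. A hedged sketch with hypothetical toy labels (the `sample_episode` helper is illustrative, not part of this dataset):

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=1, n_query=1, seed=0):
    """Sample an N-way K-shot episode from {video_id: class_label} pairs."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for vid, cls in labels.items():
        by_class[cls].append(vid)
    classes = rng.sample(sorted(by_class), n_way)        # pick N classes
    support, query = [], []
    for cls in classes:
        vids = rng.sample(by_class[cls], k_shot + n_query)
        support += [(v, cls) for v in vids[:k_shot]]     # K support videos
        query += [(v, cls) for v in vids[k_shot:]]       # Q query videos
    return support, query

# Hypothetical toy labels: 6 classes x 3 videos each
labels = {f'vid_{c}_{i}': f'class_{c}' for c in range(6) for i in range(3)}
support, query = sample_episode(labels, n_way=5, k_shot=1, n_query=1)
```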
## Applications
This dataset is designed for:
- **Few-shot Action Recognition**: Training models with limited labeled examples
- **Video Understanding**: Learning from semantic-aware relational trajectory tokens (Trokens)
- **Point Tracking Research**: Semantic point trajectory analysis
- **Action Recognition**: General video classification tasks
## Technical Details
### Dependencies
- PyTorch
- NumPy
- Pandas
- Einops
- CoTracker3 (for point tracking)
- DINOv2 (for feature extraction)
### Point Extraction Pipeline
1. **Feature Extraction**: DINOv2 features computed for video frames
2. **Semantic Clustering**: Bipartite clustering to identify semantic entities
3. **Point Sampling**: Points sampled from cluster centers
4. **Trajectory Tracking**: CoTracker3 used to track points across frames
5. **Post-processing**: Redundant points filtered based on spatial proximity
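Step 5 above can be sketched as a greedy distance filter: keep a point only if it is at least `min_dist` pixels from every point already kept. This is a minimal illustration of proximity-based deduplication, not necessarily the exact implementation in the repo:

```python
import numpy as np

def filter_redundant_points(points: np.ndarray, min_dist: float) -> np.ndarray:
    """Greedily keep indices of points >= min_dist from all previously kept points."""
    kept_idx = []
    for i, p in enumerate(points):
        if all(np.linalg.norm(p - points[j]) >= min_dist for j in kept_idx):
            kept_idx.append(i)
    return np.array(kept_idx)

# Two near-duplicate points and one distant point
pts = np.array([[0.0, 0.0], [1.0, 0.5], [50.0, 50.0]])
print(filter_redundant_points(pts, min_dist=5.0))  # -> [0 2]
```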
## Citation
If you use this dataset in your research, please cite our papers:
```bibtex
@inproceedings{kumar2025trokens,
title={Trokens: Semantic-Aware Relational Trajectory Tokens for Few-Shot Action Recognition},
author={Kumar, Pulkit and Huang, Shuaiyi and Walmer, Matthew and Rambhatla, Sai Saketh and Shrivastava, Abhinav},
booktitle={International Conference on Computer Vision},
year={2025}
}
@inproceedings{kumar2024trajectory,
title={Trajectory-aligned Space-time Tokens for Few-shot Action Recognition},
author={Kumar, Pulkit and Padmanabhan, Namitha and Luo, Luke and Rambhatla, Sai Saketh and Shrivastava, Abhinav},
booktitle={European Conference on Computer Vision},
pages={474--493},
year={2024},
organization={Springer}
}
```
## Authors
[**Pulkit Kumar***](https://www.cs.umd.edu/~pulkit/)<sup>1</sup> · [**Shuaiyi Huang***](https://shuaiyihuang.github.io/)<sup>1</sup> · [**Matthew Walmer**](https://www.cs.umd.edu/~mwalmer/)<sup>1</sup> · [**Sai Saketh Rambhatla**](https://rssaketh.github.io)<sup>1,2</sup> · [**Abhinav Shrivastava**](http://www.cs.umd.edu/~abhinav/)<sup>1</sup>
<sup>1</sup>University of Maryland, College Park    <sup>2</sup>GenAI, Meta<br>
<sup>*Equal contribution</sup>
## License
This dataset is licensed under the [CC-BY-NC-4.0 License](https://creativecommons.org/licenses/by-nc/4.0/).
## Acknowledgments
This dataset is built upon:
- [CoTracker](https://github.com/facebookresearch/co-tracker): For robust point tracking
- [TATs](https://github.com/pulkitkumar95/tats): Trajectory-aligned Space-time Tokens for Few-shot Action Recognition
- [DINOv2](https://github.com/facebookresearch/dinov2): For semantic feature extraction
We thank the authors for making their code publicly available.