---
configs:
- config_name: track
default: true
data_files:
- split: train
path: track/train-*
license: apache-2.0
task_categories:
- video-classification
- object-detection
tags:
- video-object-tracking
- video-segmentation
- synthetic
---
# MolmoPoint-TrackSyn Dataset
Synthetic point-tracking annotations for procedurally generated videos rendered with Blender.
Each example contains an expression describing an object, per-frame point trajectories, and video metadata. All videos are encoded at **6 FPS**, and points are sampled at **2 FPS**.
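Because videos run at 6 FPS while annotations are sampled at 2 FPS, each annotated timestep corresponds to every third video frame. A minimal sketch of that mapping (the function name is illustrative, not part of the dataset tooling):

```python
def sampled_frame_indices(n_frames: int, fps: int = 6, sampling_fps: int = 2) -> list[int]:
    """Frame indices that carry point annotations when sampling at `sampling_fps`."""
    step = fps // sampling_fps  # 6 // 2 == 3: every third frame is annotated
    return list(range(0, n_frames, step))

# e.g. a 12-frame clip is annotated at frames 0, 3, 6, 9
```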
## Dataset Statistics
| Video Source | Unique Annotations | Unique Videos |
|-------------|-------------------|-----------------|
| static-camera | 34,324 | 11,629 |
| dyna-camera | 41,841 | 14,158 |
| **Total** | **76,165** | **25,787** |
## Schema
| Column | Type | Description |
|--------|------|-------------|
| `id` | `string` | Unique example identifier |
| `video` | `string` | Relative video path (without extension), e.g. `static-camera/{run_dir}/{video_file}`. We support static camera (`static-camera`) and dynamic camera (`dyna-camera`) setups. |
| `expression` | `string` | Natural-language description of the tracked object |
| `fps` | `int64` | Original video FPS |
| `sampling_fps` | `int64` | Sampling FPS used for annotation (always 2) |
| `height` | `int64` | Video height in pixels |
| `width` | `int64` | Video width in pixels |
| `n_frames` | `int64` | Number of frames in the sampled clip |
| `task` | `string` | Task type (always `"track"`) |
| `frame_trajectories` | `list[object]` | Per-frame point tracks (frame index, timestamp, point coords + occlusion) |
| `mask_id` | `list[string]` | Optional mask identifiers |
| `obj_id` | `list[int64]` | Optional object identifiers |
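To make the schema concrete, here is a hypothetical record shaped like the columns above. All values, and the field names inside `frame_trajectories`, are illustrative assumptions; inspect a real example with `load_dataset` for the exact nested structure.

```python
# Hypothetical record matching the schema table; field names inside
# `frame_trajectories` are assumed, not confirmed by this card.
example = {
    "id": "example_0000",
    "video": "static-camera/run_0001",
    "expression": "the red cube sliding to the left",
    "fps": 6,
    "sampling_fps": 2,
    "height": 480,
    "width": 640,
    "n_frames": 12,
    "task": "track",
    "frame_trajectories": [
        {"frame": 0, "timestamp": 0.0, "points": [[320.0, 240.0]], "occluded": [False]},
        {"frame": 3, "timestamp": 0.5, "points": [[300.0, 238.0]], "occluded": [True]},
    ],
    "mask_id": [],
    "obj_id": [1],
}

# Collect the visible (non-occluded) points across sampled frames
visible = [
    pt
    for traj in example["frame_trajectories"]
    for pt, occ in zip(traj["points"], traj["occluded"])
    if not occ
]
```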
## Video Download
Videos are bundled in this repository as `synthetic_tracks.tar`.
### Automatic download
```python
from olmo.data.molmo2_video_track_datasets import MolmoPointTrackSyn
# Downloads the tar from HF, extracts, and verifies
MolmoPointTrackSyn.download()
```
### Manual download
```bash
# Download the tar from HuggingFace
huggingface-cli download allenai/MolmoPoint-TrackSyn synthetic_tracks.tar --repo-type dataset --local-dir ./MolmoPoint-TrackSyn
# Extract
tar -xf ./MolmoPoint-TrackSyn/synthetic_tracks.tar -C ./MolmoPoint-TrackSyn/
```
After extraction the directory structure is:
```
MolmoPoint-TrackSyn/
├── static-camera/
│ ├── {run_dir}/
│ │ ├── video.mp4
│ │ └── metadata.json
│ └── ...
└── dyna-camera/
├── {run_dir}/
│ ├── video.mp4
│ └── metadata.json
└── ...
```
The `video` column maps directly to the file path: `{VIDEO_HOME}/{video}/video.mp4`.
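A small helper for that mapping might look like this (`VIDEO_HOME` here is assumed to be the extraction directory; adjust to wherever you unpacked the tar):

```python
import os

# Assumption: the tar was extracted into this directory (see "Manual download")
VIDEO_HOME = "./MolmoPoint-TrackSyn"

def resolve_video_path(video: str) -> str:
    """Map the dataset's `video` column to the on-disk mp4 path."""
    return os.path.join(VIDEO_HOME, video, "video.mp4")
```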
## Usage
```python
from datasets import load_dataset
# Load the dataset
ds = load_dataset("allenai/MolmoPoint-TrackSyn", split="train")
# Inspect an example
print(ds[0])
```
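Since the camera setup is encoded as the first path component of `video`, examples can be split by source with a small predicate (a sketch; `camera_setup` is not part of the dataset tooling):

```python
def camera_setup(video: str) -> str:
    """Return the camera setup ('static-camera' or 'dyna-camera') from a `video` path."""
    return video.split("/", 1)[0]

# With the dataset loaded as `ds` above, this could be used as:
# static_ds = ds.filter(lambda ex: camera_setup(ex["video"]) == "static-camera")
```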
## Citation
If you use this dataset, please cite the MolmoPoint paper.
## License
This dataset is licensed under ODC-BY.
It is intended for research and educational use in accordance with [Ai2’s Responsible Use Guidelines](https://allenai.org/responsible-use).