---
configs:
- config_name: default
  data_files:
  - split: test
    path: data/**/*.parquet
- config_name: person
  data_files:
  - split: test
    path: data/person/*.parquet
- config_name: sports
  data_files:
  - split: test
    path: data/sports/*.parquet
- config_name: animal
  data_files:
  - split: test
    path: data/animal/*.parquet
- config_name: misc
  data_files:
  - split: test
    path: data/misc/*.parquet
- config_name: dance
  data_files:
  - split: test
    path: data/dance/*.parquet
license: odc-by
---

# Molmo2-VideoTrackEval

Molmo2-VideoTrackEval is an evaluation benchmark for video point tracking, containing human-annotated ground truth expressions. It includes segmentation masks for evaluating whether predicted points fall within the correct object regions. Currently, there are five categories for evaluation:
- animal
- dance
- sports
- person
- misc

This benchmark is part of the [Molmo2 dataset collection](https://huggingface.co/collections/allenai/molmo2-data) and is used to evaluate the [Molmo2 family of models](https://huggingface.co/collections/allenai/molmo2) on video object tracking via point trajectories.

Quick links:
- πŸ“ƒ [Paper](https://allenai.org/papers/molmo2)
- πŸŽ₯ [Blog with Videos](https://allenai.org/blog/molmo2)

## Usage

```python
from datasets import load_dataset

# Load entire evaluation dataset
ds = load_dataset("allenai/Molmo2-VideoTrackEval", split="test")

# Load a specific benchmark subset by config name
animal = load_dataset("allenai/Molmo2-VideoTrackEval", "animal", split="test")
dance = load_dataset("allenai/Molmo2-VideoTrackEval", "dance", split="test")
sports = load_dataset("allenai/Molmo2-VideoTrackEval", "sports", split="test")
person = load_dataset("allenai/Molmo2-VideoTrackEval", "person", split="test")
misc = load_dataset("allenai/Molmo2-VideoTrackEval", "misc", split="test")
```

## Available Configs

| Config | Dataset | Description |
|--------|---------|-------------|
| `default` | All | All evaluation data combined |
| `animal` | APTv2 | Animal tracking benchmark |
| `dance` | dancetrack | Dancer tracking benchmark |
| `sports` | sportsmot | Sports player tracking benchmark |
| `person` | personpath22 | Person tracking benchmark |
| `misc` | SA-V | Miscellaneous video benchmark |

## Data Format

Each row contains tracking annotations for one or more objects in a video clip:

| Field | Description |
|-------|-------------|
| `id` | Unique identifier for this annotation |
| `video` | Video filename |
| `clip` | Trimmed clip ID |
| `video_dataset` | Source dataset name (e.g., 'dancetrack', 'sportsmot') |
| `video_source` | Video directory path (can be ignored) |
| `exp` | Text expression describing the tracked object(s) |
| `obj_id` | List of object IDs per video |
| `mask_id` | List of mask IDs corresponding to the tracked objects, indexed from 0 |
| `masks` | List of segmentation masks per object for evaluation. Each entry contains `object_id` and `masks` (used to verify if predicted points fall within the ground truth object region) |
| `points` | List of point trajectories per object. Each entry contains `object_id` and `points` (list of [x, y] coordinates per frame) |
| `segments` | List of segment annotations per object. Each entry contains `object_id` and `segments` |
| `start_frame` | Starting frame index for this clip |
| `end_frame` | Ending frame index for this clip |
| `w` | Video width |
| `h` | Video height |
| `n_frames` | Number of frames in the clip |
| `fps` | Frames per second |

**Important:** `start_frame` and `end_frame` indicate which portion of the source video to use. You need to trim the video to this range β€” the annotations correspond to frames within `[start_frame, end_frame]`, not the entire video.
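As a minimal sketch, trimming could look like the following (this assumes `end_frame` is inclusive, consistent with the `[start_frame, end_frame]` notation above; verify against your decoded frame count):

```python
def trim_frames(frames, start_frame, end_frame):
    """Keep only the frames covered by the annotations.

    Assumes end_frame is inclusive, matching the [start_frame, end_frame]
    range described above.
    """
    return frames[start_frame : end_frame + 1]

# Hypothetical example: a 10-frame clip annotated on frames 2..5
frames = list(range(10))
print(trim_frames(frames, 2, 5))  # [2, 3, 4, 5]
```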

### Evaluation with Masks

The `masks` field contains ground truth segmentation masks that can be used to evaluate tracking predictions. A predicted point is considered correct if it falls within the segmentation mask of the target object for that frame.
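A point-in-mask check could be sketched as follows. The exact mask encoding stored in the parquet files is not specified here, so this example assumes each mask has already been decoded to a binary NumPy array of shape `(h, w)`:

```python
import numpy as np

def point_in_mask(point, mask):
    """Return True if a predicted (x, y) point lands inside the
    ground-truth binary mask (shape: height x width)."""
    x, y = point
    col, row = int(round(x)), int(round(y))
    h, w = mask.shape
    if not (0 <= row < h and 0 <= col < w):
        return False  # out-of-frame predictions count as misses
    return bool(mask[row, col])

# Hypothetical example: a 4x4 mask with the object in the top-left 2x2 block
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
print(point_in_mask((1.2, 0.8), mask))  # True
print(point_in_mask((3.0, 3.0), mask))  # False
```

Note the row/column swap: image masks are indexed `[y, x]`, while the `points` field stores `[x, y]` coordinates.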

## Folder Structure

```
Molmo2-VideoTrackEval/
β”œβ”€β”€ README.md
└── data/
    β”œβ”€β”€ animal/
    β”‚   └── APTv2_point_tracks_with_masks.parquet
    β”œβ”€β”€ dance/
    β”‚   └── dancetrack_point_tracks_with_masks.parquet
    β”œβ”€β”€ sports/
    β”‚   └── sportsmot_point_tracks_with_masks.parquet
    β”œβ”€β”€ person/
    β”‚   └── personpath22_point_tracks_with_masks.parquet
    └── misc/
        └── sav_point_tracks_with_masks.parquet
```

## Video Sources

The table below lists the sources of the third-party datasets used or referenced in curating the benchmark data for Molmo2-VideoTrackEval. We do not provide video files or redistribute the original raw data from datasets whose source licenses restrict use and distribution.

| Dataset | Category | Download | Dataset License |
|---------|----------|----------|-----------------|
| APTv2 | Animals | [APTv2](https://github.com/ViTAE-Transformer/APTv2) | Apache 2.0
| dancetrack | Dancers | [DanceTrack](https://github.com/DanceTrack/DanceTrack?tab=readme-ov-file#dataset) | Non-commercial research use only
| sportsmot | Sports | [SportsMOT](https://codalab.lisn.upsaclay.fr/competitions/12424#participate) | CC BY-NC 4.0
| personpath22 | Person | [PersonPath22](https://amazon-science.github.io/tracking-dataset/personpath22.html) | CC BY-NC 4.0
| sav | Misc | [SA-V](https://ai.meta.com/datasets/segment-anything-video/) (Frames sampled at 6 fps from 24 fps video) | CC BY 4.0

## License

This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). Please refer to the Video Sources section for the original datasets that provide the videos used to generate the segmentations and point tracks for this dataset. All use of the videos and original data from these datasets is subject to the licenses and terms of use provided by the sources. Please check the sources to determine whether they are appropriate for your use case.