jamepark3922 committed on
Commit 45c9ea4 · 1 Parent(s): d0f4dcf

rename dataset
README.md CHANGED
@@ -7,33 +7,33 @@ configs:
   - config_name: person
     data_files:
     - split: test
-      path: "data/personpath22/*.parquet"
+      path: "data/person/*.parquet"
   - config_name: sports
     data_files:
     - split: test
-      path: "data/sportsmot/*.parquet"
+      path: "data/sports/*.parquet"
   - config_name: animal
     data_files:
     - split: test
-      path: "data/APTv2/*.parquet"
+      path: "data/animal/*.parquet"
   - config_name: misc
     data_files:
     - split: test
-      path: "data/sav/*.parquet"
+      path: "data/misc/*.parquet"
   - config_name: dance
     data_files:
     - split: test
-      path: "data/dancetrack/*.parquet"
+      path: "data/dance/*.parquet"
 ---
 
 # Molmo2-VideoTrackEval
 
 Molmo2-VideoTrackEval is an evaluation benchmark for video point tracking, containing human-annotated ground truth expressions. It includes segmentation masks for evaluating whether predicted points fall within the correct object regions. Currently, there are five categories for evaluation:
+- animal
 - dance
 - sports
 - person
 - misc
-- animal
 
 This benchmark is part of the [Molmo2 dataset collection](https://huggingface.co/collections/allenai/molmo2) and is used to evaluate the Molmo2 family of models on video object tracking via point trajectories.
 
@@ -50,11 +50,11 @@ from datasets import load_dataset
 ds = load_dataset("allenai/Molmo2-VideoTrackEval", split="test")
 
 # Load a specific benchmark subset by config name
+animal = load_dataset("allenai/Molmo2-VideoTrackEval", "animal", split="test")
 dance = load_dataset("allenai/Molmo2-VideoTrackEval", "dance", split="test")
 sports = load_dataset("allenai/Molmo2-VideoTrackEval", "sports", split="test")
 person = load_dataset("allenai/Molmo2-VideoTrackEval", "person", split="test")
 misc = load_dataset("allenai/Molmo2-VideoTrackEval", "misc", split="test")
-animal = load_dataset("allenai/Molmo2-VideoTrackEval", "animal", split="test")
 ```
 
 ## Available Configs
@@ -62,11 +62,11 @@ animal = load_dataset("allenai/Molmo2-VideoTrackEval", "animal", split="test")
 | Config | Dataset | Description |
 |--------|---------|-------------|
 | `default` | All | All evaluation data combined |
-| `person` | personpath22 | Pedestrian tracking benchmark |
-| `sports` | sportsmot | Sports multi-object tracking benchmark |
-| `animal` | APTv2 | Animal pose tracking benchmark |
-| `misc` | sav | Segment Anything Video benchmark |
-| `dance` | dancetrack | Dance tracking benchmark |
+| `animal` | APTv2 | Animal tracking benchmark |
+| `dance` | dancetrack | Dancer tracking benchmark |
+| `sports` | sportsmot | Sports player tracking benchmark |
+| `person` | personpath22 | Person tracking benchmark |
+| `misc` | sav | Misc Video benchmark |
 
 ## Data Format
 
@@ -81,9 +81,9 @@ Each row contains tracking annotations for one or more objects in a video clip:
 | `exp` | Text expression describing the tracked object(s) |
 | `obj_id` | List of object IDs per video |
 | `mask_id` | List of mask IDs corresponding to tracked objects starting from '0' |
+| `masks` | List of segmentation masks per object for evaluation. Each entry contains `object_id` and `masks` (used to verify if predicted points fall within the ground truth object region) |
 | `points` | List of point trajectories per object. Each entry contains `object_id` and `points` (list of [x, y] coordinates per frame) |
 | `segments` | List of segment annotations per object. Each entry contains `object_id` and `segments` |
-| `masks` | List of segmentation masks per object for evaluation. Each entry contains `object_id` and `masks` (used to verify if predicted points fall within the ground truth object region) |
 | `start_frame` | Starting frame index for this clip |
 | `end_frame` | Ending frame index for this clip |
 | `w` | Video width |
@@ -103,27 +103,27 @@ The `masks` field contains ground truth segmentation masks that can be used to e
 Molmo2-VideoTrackEval/
 ├── README.md
 └── data/
-    ├── APTv2/
-    │   └── APTv2.parquet
-    ├── dancetrack/
-    │   └── dancetrack.parquet
-    ├── personpath22/
-    │   └── personpath22.parquet
-    ├── sav/
-    │   └── sav.parquet
-    └── sportsmot/
-        └── sportsmot.parquet
+    ├── animal/
+    │   └── APTv2_point_tracks_with_masks.parquet
+    ├── dance/
+    │   └── dancetrack_point_tracks_with_masks.parquet
+    ├── person/
+    │   └── personpath22_point_tracks_with_masks.parquet
+    ├── misc/
+    │   └── sav_point_tracks_with_masks.parquet
+    └── sports/
+        └── sportsmot_point_tracks_with_masks.parquet
 ```
 
 ## Video Sources
 
 | Dataset | Category | Download |
 |---------|----------|----------|
-| personpath22 | People | [PersonPath22](https://amazon-science.github.io/tracking-dataset/personpath22.html) |
-| sportsmot | Sports | [SportsMOT](https://codalab.lisn.upsaclay.fr/competitions/12424#participate) |
 | APTv2 | Animals | [APTv2](https://github.com/ViTAE-Transformer/APTv2) |
-| sav | Misc | [SA-V](https://ai.meta.com/datasets/segment-anything-video/) (Videos at 6 fps) |
 | dancetrack | Dancers | [DanceTrack](https://github.com/DanceTrack/DanceTrack?tab=readme-ov-file#dataset) |
+| sportsmot | Sports | [SportsMOT](https://codalab.lisn.upsaclay.fr/competitions/12424#participate) |
+| personpath22 | Person | [PersonPath22](https://amazon-science.github.io/tracking-dataset/personpath22.html) |
+| sav | Misc | [SA-V](https://ai.meta.com/datasets/segment-anything-video/) (Videos at 6 fps) |
 
 ## License
 
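The README text in the diff describes the evaluation: a predicted point is judged by whether it falls inside the ground-truth segmentation mask for that object. A minimal sketch of that check, assuming a dense 0/1 mask per frame (hypothetical helper; the dataset's actual mask encoding may differ, e.g. RLE):

```python
def point_in_mask(x, y, mask):
    """Return True if point (x, y) lands inside the 0/1 mask (H x W rows of ints)."""
    h, w = len(mask), len(mask[0])
    xi, yi = int(round(x)), int(round(y))
    # Points outside the frame count as misses
    if not (0 <= xi < w and 0 <= yi < h):
        return False
    return mask[yi][xi] == 1

# Toy 4x4 mask with a 2x2 object in the top-left corner
mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(point_in_mask(1.2, 0.8, mask))  # True: rounds to (1, 1), inside the object
print(point_in_mask(3.0, 3.0, mask))  # False: background pixel
```

In the dataset's terms, `x` and `y` would come from a `points` trajectory entry and `mask` from the matching `masks` entry for the same `object_id` and frame.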
data/APTv2/APTv2_point_tracks_with_masks.parquet DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:98d05a831f9c887c65ce0640823b076049461e3bc3a4db5f0ccffe074ece44fc
-size 9620166

data/dancetrack/dancetrack_point_tracks_with_masks.parquet DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8a414742eda58f88adcde7fd55cd6742ca834945ca4736a61cb92db51f856407
-size 283401821

data/personpath22/personpath22_point_tracks_with_masks.parquet DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4544da450c1bcc1c9ebac39c04d48be9c54ccac4067c28add87fad59b66550a8
-size 71408255

data/sav/sav_point_tracks_with_masks.parquet DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7203db09c61249d3acf6a6355ad85db93bd71278fa8595f219e4fc9b058d7d27
-size 1620831

data/sportsmot/sportsmot_point_tracks_with_masks.parquet DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5d98b0d9b42e792e3ae1f60a35c7688df3263ce9f47a78623304482e67f67799
-size 119239621
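The deleted files above are Git LFS pointer stubs, not the parquet payloads themselves: three "key value" lines giving the spec version, the content hash, and the size of the real file. A small sketch of parsing such a pointer (hypothetical helper, using the first deleted pointer from the diff):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into a dict of its "key value" lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The first deleted pointer shown in the diff above
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:98d05a831f9c887c65ce0640823b076049461e3bc3a4db5f0ccffe074ece44fc\n"
    "size 9620166\n"
)

info = parse_lfs_pointer(pointer)
print(info["size"])  # 9620166 -- size in bytes of the real parquet file
print(info["oid"])   # sha256:98d05a... -- content hash used to fetch it from LFS storage
```

This is why the diff for each deleted parquet is only three lines long regardless of how large the underlying file is.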