---
configs:
- config_name: default
  data_files:
  - path: data/**/*.parquet
    split: train
license: odc-by
---

# Molmo2-VideoTrack

Molmo2-VideoTrack is a dataset of video point tracking annotations collected from human annotators across 16 video datasets. It can be used to fine-tune vision-language models for video object tracking via point trajectories.

Molmo2-VideoTrack is part of the [Molmo2 dataset collection](https://huggingface.co/collections/allenai/molmo2-data) and was used to train the [Molmo2 family of models](https://huggingface.co/collections/allenai/molmo2).

Quick links:
- 📃 [Paper](https://allenai.org/papers/molmo2)
- 🎥 [Blog with Videos](https://allenai.org/blog/molmo2)

## Usage

```python
from datasets import load_dataset

# Load the entire dataset
ds = load_dataset("allenai/Molmo2-VideoTrack", split="train")

# Filter by source video dataset
dancetrack = ds.filter(lambda x: x == 'dancetrack', input_columns='video_dataset')
```

## Data Format

Each row contains tracking annotations for one or more objects in a video clip:

| Field | Description |
|-------|-------------|
| `id` | Unique identifier for this annotation |
| `video` | Video filename |
| `clip` | Trimmed clip ID |
| `video_dataset` | Source dataset name (e.g., 'dancetrack', 'mose') |
| `video_source` | Video directory used in training (can be ignored) |
| `exp` | Text expression describing the tracked object(s) |
| `obj_id` | List of object IDs per video |
| `mask_id` | List of mask IDs corresponding to tracked objects, starting from '0' |
| `points` | List of point trajectories per object. Each entry contains `object_id` (corresponding to an ID in `mask_id`) and `points` (list of [x, y] coordinates per frame). Example: `[{'object_id': '0', 'points': [[x1, y1], [x2, y2], ...]}, ...]` |
| `segments` | List of segment annotations per object. Each entry contains `object_id` (corresponding to an ID in `mask_id`) and `segments`. Example: `[{'object_id': '0', 'segments': [...]}, ...]` |
| `start_frame` | Starting frame index for this clip (use to trim the source video) |
| `end_frame` | Ending frame index for this clip (use to trim the source video) |
| `w` | Video width |
| `h` | Video height |
| `n_frames` | Number of frames in the clip |
| `fps` | Frame rate used in training |

**Important:** `start_frame` and `end_frame` indicate which portion of the source video to use. You must trim the video to this range: the annotations correspond to frames within `[start_frame, end_frame]`, not to the entire video.

## Folder Structure

```
Molmo2-VideoTrack/
├── README.md
└── data/
    ├── animaltrack/
    │   └── point_tracks.parquet
    ├── APTv2/
    │   └── point_tracks.parquet
    ├── ...
    └── {video_dataset}/
        └── point_tracks.parquet
```

## Video Sources

The table below lists the sources of the third-party datasets used or referenced in curating the data for Molmo2-VideoTrack. We do not provide video files or share the original raw data from datasets whose source licenses restrict use and distribution. Instead, we provide the links, license information, and notes for downloading videos from the original datasets, for transparency and reproducibility. Please verify the licenses and use requirements that apply to each dataset before downloading, as they may change or be updated by the dataset providers.
| Dataset | Category | Annotation Source | Download | Dataset License | Note |
|---------|----------|-------------------|----------|-----------------|------|
| mose | General | Segmentation | MOSE | CC BY-NC-SA 4.0 | |
| mosev2 | General | Segmentation | MOSEv2 | CC BY-NC-SA 4.0 | |
| sav | General | Segmentation | SA-V | CC BY 4.0 | Sampled at 6 fps from the original 24 fps video to match the segmentation annotations |
| vipseg | General | Segmentation | VIPSeg | Non-commercial research use only | Converted to 720p format |
| animaltrack | Animals | Bounding Box | AnimalTrack | Non-commercial research use only | Train and val videos are used due to data scarcity |
| APTv2 | Animals | Bounding Box | APTv2 | Apache 2.0 | |
| bft | Bird Flocks | Bounding Box | BFT | Apache 2.0 | |
| soccernet | Sports | Bounding Box | SoccerNet | Non-commercial research use only | Fill in the NDA form to access the videos |
| sportsmot | Sports | Bounding Box | SportsMOT | CC BY-NC 4.0 | |
| teamtrack | Sports | Bounding Box | TeamTrack | MIT | |
| mot2020 | Pedestrians | Bounding Box | MOT20 | CC BY-NC-SA 3.0 | |
| personpath22 | Pedestrians | Bounding Box | PersonPath22 | CC BY-NC 4.0 | |
| dancetrack | Dancers | Bounding Box | DanceTrack | Non-commercial research use only | |
| bdd100k | Autonomous Driving | Bounding Box | BDD100K | BSD-3 | Download only bdd100k_videos_train_00.zip |
| uavdt | UAV | Bounding Box | UAVDT | Research use only | |
| seadrones | UAV | Bounding Box | SeaDronesSee | CC0 / Unknown | Use the 'Multi-Object Tracking' data |

## License

This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). Please refer to the Video Sources section for the original datasets that provide the videos used to generate the segmentations and point tracks for this dataset.
All use of the videos and original data from these datasets is subject to the licenses and terms of use provided by the sources. Please check the sources to determine whether they are appropriate for your use case.
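
## Appendix: Mapping Annotations to Source Frames

The trimming described in the Data Format section can be sketched in plain Python. This is a minimal, illustrative helper (the function name is ours, not part of any released tooling); it assumes that each entry in a track's `points` list holds one [x, y] coordinate per consecutive frame of the trimmed clip, so local point index `i` corresponds to source frame `start_frame + i`:

```python
def clip_points_to_source_frames(row):
    """Map each per-clip point to its absolute frame index in the source video.

    Returns {object_id: [(source_frame_idx, (x, y)), ...]}.
    Assumes one point per consecutive frame starting at row['start_frame'].
    """
    out = {}
    for track in row["points"]:
        out[track["object_id"]] = [
            (row["start_frame"] + i, (x, y))
            for i, (x, y) in enumerate(track["points"])
        ]
    return out


# Example row shaped like the dataset schema (all values are made up):
row = {
    "start_frame": 30,
    "end_frame": 31,
    "points": [{"object_id": "0", "points": [[10.0, 20.0], [11.0, 21.0]]}],
}
print(clip_points_to_source_frames(row))
# → {'0': [(30, (10.0, 20.0)), (31, (11.0, 21.0))]}
```

With this mapping in hand, you can seek directly to the annotated frames of the source video instead of decoding it in full.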