---
license: apache-2.0
task_categories:
- object-detection
language:
- en
tags:
- Multi-Object-Tracking
pretty_name: HardTracksDataset
size_categories:
- 100K<n<1M
---
# HardTracksDataset: A Benchmark for Robust Object Tracking under Heavy Occlusion and Challenging Conditions

[Computer Vision Lab, ETH Zurich](https://vision.ee.ethz.ch/)

![HTD teaser figure](figures/teaser_figure.png)

## Introduction
We introduce the HardTracksDataset (HTD), a novel multi-object tracking (MOT) benchmark specifically designed to address two critical limitations prevalent in existing tracking datasets. First, most current MOT benchmarks narrowly focus on restricted scenarios, such as pedestrian movements, dance sequences, or autonomous driving environments, and thus lack the object diversity and scenario complexity representative of real-world conditions. Second, datasets featuring broader vocabularies, such as OVT-B and TAO, typically do not sufficiently emphasize challenging scenarios involving long-term occlusions, abrupt appearance changes, and significant position variations. As a consequence, the majority of tracking instances evaluated are relatively easy, obscuring trackers' limitations on truly challenging cases. HTD addresses these gaps by curating a challenging subset of scenarios from existing datasets, explicitly combining large vocabulary diversity with severe visual challenges. By emphasizing difficult tracking scenarios, particularly long-term occlusions and substantial appearance shifts, HTD provides a focused benchmark aimed at fostering the development of more robust and reliable tracking algorithms for complex real-world situations.
## Results of state-of-the-art trackers on HTD
<table>
<thead>
<tr>
<th rowspan="2">Method</th>
<th colspan="4">Validation</th>
<th colspan="4">Test</th>
</tr>
<tr>
<th>TETA</th>
<th>LocA</th>
<th>AssocA</th>
<th>ClsA</th>
<th>TETA</th>
<th>LocA</th>
<th>AssocA</th>
<th>ClsA</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="9"><em>Motion-based</em></td>
</tr>
<tr>
<td>ByteTrack</td>
<td>34.877</td>
<td>54.624</td>
<td>19.085</td>
<td>30.922</td>
<td>37.875</td>
<td>56.135</td>
<td>19.464</td>
<td>38.025</td>
</tr>
<tr>
<td>DeepSORT</td>
<td>33.782</td>
<td>57.350</td>
<td>15.009</td>
<td>28.987</td>
<td>37.099</td>
<td>58.766</td>
<td>15.729</td>
<td>36.803</td>
</tr>
<tr>
<td>OCSORT</td>
<td>33.012</td>
<td>57.599</td>
<td>12.558</td>
<td>28.880</td>
<td>35.164</td>
<td>59.117</td>
<td>11.549</td>
<td>34.825</td>
</tr>
<tr>
<td colspan="9"><em>Appearance-based</em></td>
</tr>
<tr>
<td>MASA</td>
<td>42.246</td>
<td>60.260</td>
<td>34.241</td>
<td>32.237</td>
<td>43.656</td>
<td>60.125</td>
<td>31.454</td>
<td><strong>39.390</strong></td>
</tr>
<tr>
<td>OV-Track</td>
<td>29.179</td>
<td>47.393</td>
<td>25.758</td>
<td>14.385</td>
<td>33.586</td>
<td>51.310</td>
<td>26.507</td>
<td>22.941</td>
</tr>
<tr>
<td colspan="9"><em>Transformer-based</em></td>
</tr>
<tr>
<td>OVTR</td>
<td>26.585</td>
<td>44.031</td>
<td>23.724</td>
<td>14.138</td>
<td>29.771</td>
<td>46.338</td>
<td>24.974</td>
<td>21.643</td>
</tr>
<tr>
<td colspan="9"></td>
</tr>
<tr>
<td><strong>MASA+</strong></td>
<td><strong>42.716</strong></td>
<td><strong>60.364</strong></td>
<td><strong>35.252</strong></td>
<td><strong>32.532</strong></td>
<td><strong>44.063</strong></td>
<td><strong>60.319</strong></td>
<td><strong>32.735</strong></td>
<td>39.135</td>
</tr>
</tbody>
</table>
## Download Instructions

To download the dataset, you can use the HuggingFace CLI. First, install the HuggingFace CLI according to the official [HuggingFace documentation](https://huggingface.co/docs/huggingface_hub/main/guides/cli) and log in with your HuggingFace account. Then, you can download the dataset using the following command:
```bash
huggingface-cli download mscheidl/htd --repo-type dataset --local-dir htd
```
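If you prefer Python over the CLI, the same download should also work through the `huggingface_hub` library (a minimal sketch; `snapshot_download` mirrors the CLI command above):

```python
from huggingface_hub import snapshot_download

# Download the whole dataset repository into ./htd,
# equivalent to the huggingface-cli command above.
snapshot_download(
    repo_id="mscheidl/htd",
    repo_type="dataset",
    local_dir="htd",
)
```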
The video folders are provided as zip files. Before usage, please unzip the files. You can use the following command to unzip all files in the `data` folder. Please note that the unzipping process can take a while (especially for _TAO.zip_).
```bash
cd htd
for z in data/*.zip; do (unzip -o -q "$z" -d data && echo "Unzipped: $z") & done; wait; echo "✅ Done"
mkdir -p data/zips        # create a folder for the zip files
mv data/*.zip data/zips/  # move the zip files to the zips folder
```
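If `unzip` is not available (e.g., on Windows), a rough Python equivalent using only the standard library could look like this (sequential rather than parallel, so it may be slower than the shell loop above):

```python
import zipfile
from pathlib import Path

data_dir = Path("htd/data")
zips_dir = data_dir / "zips"
zips_dir.mkdir(parents=True, exist_ok=True)

for zip_path in sorted(data_dir.glob("*.zip")):
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(data_dir)                 # extract next to the archives
    zip_path.rename(zips_dir / zip_path.name)   # keep the zips, as in the mv above
    print(f"Unzipped: {zip_path.name}")
```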
The dataset is organized in the following structure:

```
├── htd
    ├── data
        ├── AnimalTrack
        ├── BDD
        ├── ...
    ├── annotations
        ├── classes.txt
        ├── hard_tracks_dataset_coco_test.json
        ├── hard_tracks_dataset_coco_val.json
        ├── ...
    ├── metadata
        ├── lvis_v1_clip_a+cname.npy
        ├── lvis_v1_train_cat_info.json
```
The `data` folder contains the videos, the `annotations` folder contains the annotations in COCO (TAO) format, and the `metadata` folder contains the metadata files for running MASA+. If you use HTD independently of MASA+, you can ignore the `metadata` folder.
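Since the annotations are plain JSON in COCO (TAO) format, a quick sanity check after unzipping needs nothing beyond the standard library (a minimal sketch using the validation split listed above):

```python
import json

with open("htd/annotations/hard_tracks_dataset_coco_val.json") as f:
    data = json.load(f)

# Each top-level key holds a list of records (see the annotation format below).
print({key: len(value) for key, value in data.items()})
print(data["videos"][0]["name"])  # name of the first video
```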
## Annotation format for HTD dataset

The annotations folder is structured as follows:

```
├── annotations
    ├── classes.txt
    ├── hard_tracks_dataset_coco_test.json
    ├── hard_tracks_dataset_coco_val.json
    ├── hard_tracks_dataset_coco.json
    ├── hard_tracks_dataset_coco_class_agnostic.json
```
Details about the annotations:

- `classes.txt`: Contains the list of classes in the dataset. Useful for open-vocabulary tracking (see the sketch after this list).
- `hard_tracks_dataset_coco_test.json`: Contains the annotations for the test set.
- `hard_tracks_dataset_coco_val.json`: Contains the annotations for the validation set.
- `hard_tracks_dataset_coco.json`: Contains the annotations for the entire dataset.
- `hard_tracks_dataset_coco_class_agnostic.json`: Contains the annotations for the entire dataset in a class-agnostic format: there is only one category, namely "object", and every object in the dataset is assigned to it.
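For open-vocabulary trackers that consume the class list, `classes.txt` can be read line by line (a small sketch, assuming the usual one-class-name-per-line layout):

```python
# Assumes one class name per line in classes.txt.
with open("htd/annotations/classes.txt") as f:
    class_names = [line.strip() for line in f if line.strip()]

print(f"{len(class_names)} classes, e.g. {class_names[:5]}")
```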
The HTD dataset is annotated in COCO format. The annotations are stored in JSON files, which contain information about the images, videos, tracks, annotations, categories, and other metadata. The format of the annotations is as follows:
````python
{
    "images": [image],
    "videos": [video],
    "tracks": [track],
    "annotations": [annotation],
    "categories": [category]
}

image: {
    "id": int,                            # Unique ID of the image
    "video_id": int,                      # Reference to the parent video
    "file_name": str,                     # Path to the image file
    "width": int,                         # Image width in pixels
    "height": int,                        # Image height in pixels
    "frame_index": int,                   # Index of the frame within the video (starting from 0)
    "frame_id": int,                      # Redundant or external frame ID (optional alignment)
    "video": str,                         # Name of the video
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int]  # Categories not exhaustively labeled in this image (optional)
}

video: {
    "id": int,                            # Unique video ID
    "name": str,                          # Human-readable or path-based name
    "width": int,                         # Frame width
    "height": int,                        # Frame height
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int], # Categories not exhaustively labeled in this video (optional)
    "frame_range": int,                   # Number of frames between annotated frames
    "metadata": dict,                     # Metadata for the video
}

track: {
    "id": int,           # Unique track ID
    "category_id": int,  # Object category
    "video_id": int      # Associated video
}

category: {
    "id": int,           # Unique category ID
    "name": str,         # Human-readable name of the category
}

annotation: {
    "id": int,             # Unique annotation ID
    "image_id": int,       # Image/frame ID
    "video_id": int,       # Video ID
    "track_id": int,       # Associated track ID
    "bbox": [x, y, w, h],  # Bounding box in absolute pixel coordinates
    "area": float,         # Area of the bounding box
    "category_id": int,    # Category of the object
    "iscrowd": int,        # Crowd flag (from COCO)
    "segmentation": [],    # Polygon-based segmentation (if available)
    "instance_id": int,    # Instance index within a video
    "scale_category": str  # Scale type (e.g., 'moving-object')
}
````
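As a usage example, per-object trajectories can be reconstructed by grouping annotations by `track_id` and ordering each group by the `frame_index` of its image (a minimal sketch based on the schema above):

```python
import json
from collections import defaultdict

with open("htd/annotations/hard_tracks_dataset_coco_val.json") as f:
    data = json.load(f)

# Map image ID -> frame index so annotations can be ordered in time.
frame_index = {img["id"]: img["frame_index"] for img in data["images"]}

# Group annotations into trajectories, one list of records per track.
trajectories = defaultdict(list)
for ann in data["annotations"]:
    trajectories[ann["track_id"]].append(ann)
for anns in trajectories.values():
    anns.sort(key=lambda a: frame_index[a["image_id"]])

track_id, anns = next(iter(trajectories.items()))
print(track_id, [a["bbox"] for a in anns[:3]])  # first boxes of one trajectory
```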