---
license: apache-2.0
task_categories:
- object-detection
language:
- en
tags:
- Multi-Object-Tracking
pretty_name: HardTracksDataset
size_categories:
- 100K<n<1M
---

# HardTracksDataset: A Benchmark for Robust Object Tracking under Heavy Occlusion and Challenging Conditions

[Computer Vision Lab, ETH Zurich](https://vision.ee.ethz.ch/)

![](figures/teaser_figure.png)

## Introduction

We introduce the HardTracksDataset (HTD), a novel multi-object tracking (MOT) benchmark specifically designed to address two critical limitations prevalent in existing tracking datasets. First, most current MOT benchmarks narrowly focus on restricted scenarios, such as pedestrian movements, dance sequences, or autonomous driving environments, and thus lack the object diversity and scenario complexity representative of real-world conditions. Second, datasets featuring broader vocabularies, such as OVT-B and TAO, typically do not sufficiently emphasize challenging scenarios involving long-term occlusions, abrupt appearance changes, and significant position variations. As a consequence, the majority of tracking instances evaluated are relatively easy, obscuring trackers' limitations on truly challenging cases. HTD addresses these gaps by curating a challenging subset of scenarios from existing datasets, explicitly combining large vocabulary diversity with severe visual challenges. By emphasizing difficult tracking scenarios, particularly long-term occlusions and substantial appearance shifts, HTD provides a focused benchmark aimed at fostering the development of more robust and reliable tracking algorithms for complex real-world situations.

## Results of state-of-the-art trackers on HTD

<table>
  <thead>
    <tr>
      <th rowspan="2">Method</th>
      <th colspan="4">Validation</th>
      <th colspan="4">Test</th>
    </tr>
    <tr>
      <th>TETA</th>
      <th>LocA</th>
      <th>AssocA</th>
      <th>ClsA</th>
      <th>TETA</th>
      <th>LocA</th>
      <th>AssocA</th>
      <th>ClsA</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="9"><em>Motion-based</em></td>
    </tr>
    <tr>
      <td>ByteTrack</td>
      <td>34.877</td>
      <td>54.624</td>
      <td>19.085</td>
      <td>30.922</td>
      <td>37.875</td>
      <td>56.135</td>
      <td>19.464</td>
      <td>38.025</td>
    </tr>
    <tr>
      <td>DeepSORT</td>
      <td>33.782</td>
      <td>57.350</td>
      <td>15.009</td>
      <td>28.987</td>
      <td>37.099</td>
      <td>58.766</td>
      <td>15.729</td>
      <td>36.803</td>
    </tr>
    <tr>
      <td>OCSORT</td>
      <td>33.012</td>
      <td>57.599</td>
      <td>12.558</td>
      <td>28.880</td>
      <td>35.164</td>
      <td>59.117</td>
      <td>11.549</td>
      <td>34.825</td>
    </tr>
    <tr>
      <td colspan="9"><em>Appearance-based</em></td>
    </tr>
    <tr>
      <td>MASA</td>
      <td>42.246</td>
      <td>60.260</td>
      <td>34.241</td>
      <td>32.237</td>
      <td>43.656</td>
      <td>60.125</td>
      <td>31.454</td>
      <td><strong>39.390</strong></td>
    </tr>
    <tr>
      <td>OV-Track</td>
      <td>29.179</td>
      <td>47.393</td>
      <td>25.758</td>
      <td>14.385</td>
      <td>33.586</td>
      <td>51.310</td>
      <td>26.507</td>
      <td>22.941</td>
    </tr>
    <tr>
      <td colspan="9"><em>Transformer-based</em></td>
    </tr>
    <tr>
      <td>OVTR</td>
      <td>26.585</td>
      <td>44.031</td>
      <td>23.724</td>
      <td>14.138</td>
      <td>29.771</td>
      <td>46.338</td>
      <td>24.974</td>
      <td>21.643</td>
    </tr>
    <tr>
      <td colspan="9"></td>
    </tr>
    <tr>
      <td><strong>MASA+</strong></td>
      <td><strong>42.716</strong></td>
      <td><strong>60.364</strong></td>
      <td><strong>35.252</strong></td>
      <td><strong>32.532</strong></td>
      <td><strong>44.063</strong></td>
      <td><strong>60.319</strong></td>
      <td><strong>32.735</strong></td>
      <td>39.135</td>
    </tr>
  </tbody>
</table>

## Download Instructions

To download the dataset, you can use the HuggingFace CLI. First, install the CLI according to the official [HuggingFace documentation](https://huggingface.co/docs/huggingface_hub/main/guides/cli) and log in with your HuggingFace account. Then you can download the dataset using the following command:

```bash
huggingface-cli download mscheidl/htd --repo-type dataset
```
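
If you prefer Python over the CLI, the same download can be done with the `huggingface_hub` library. This is a minimal sketch; the `local_dir` target is just an illustrative choice, not something the dataset requires:

```python
from huggingface_hub import snapshot_download

# Download the full dataset snapshot into ./htd (local_dir is an example path).
snapshot_download(repo_id="mscheidl/htd", repo_type="dataset", local_dir="htd")
```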

The dataset is organized in the following structure:

```
├── htd
    ├── data
        ├── AnimalTrack
        ├── BDD
        ├── ...
    ├── annotations
        ├── classes.txt
        ├── hard_tracks_dataset_coco_test.json
        ├── hard_tracks_dataset_coco_val.json
        ├── ...
    ├── metadata
        ├── lvis_v1_clip_a+cname.npy
        ├── lvis_v1_train_cat_info.json
```

The `data` folder contains the videos, the `annotations` folder contains the annotations in COCO (TAO) format, and the `metadata` folder contains the metadata files for running MASA+. If you use HTD independently, you can ignore the `metadata` folder.

## Annotation format for the HTD dataset

The annotations folder is structured as follows:

```
├── annotations
    ├── classes.txt
    ├── hard_tracks_dataset_coco_test.json
    ├── hard_tracks_dataset_coco_val.json
    ├── hard_tracks_dataset_coco.json
    ├── hard_tracks_dataset_coco_class_agnostic.json
```
|
| 186 |
+
|
| 187 |
+
Details about the annotations:
|
| 188 |
+
- `classes.txt`: Contains the list of classes in the dataset. Useful for Open-Vocabulary tracking.
|
| 189 |
+
- `hard_tracks_dataset_coco_test.json`: Contains the annotations for the test set.
|
| 190 |
+
- `hard_tracks_dataset_coco_val.json`: Contains the annotations for the validation set.
|
| 191 |
+
- `hard_tracks_dataset_coco.json`: Contains the annotations for the entire dataset.
|
| 192 |
+
- `hard_tracks_dataset_coco_class_agnostic.json`: Contains the annotations for the entire dataset in a class-agnostic format. This means that there is only one category namely "object" and all the objects in the dataset are assigned to this category.
|
| 193 |
+
|
| 194 |
+
|
| 195 |
+
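For a quick sanity check after downloading, the files above can be loaded directly. A small sketch, assuming the download layout shown earlier and that `classes.txt` lists one class name per line:

```python
import json

# Paths assume the ./htd layout from the download step above.
classes = open("htd/annotations/classes.txt").read().splitlines()
with open("htd/annotations/hard_tracks_dataset_coco_val.json") as f:
    val = json.load(f)

print(f"{len(classes)} classes, {len(val['videos'])} videos, {len(val['annotations'])} boxes")
```
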
The HTD dataset is annotated in COCO format. The annotations are stored in JSON files, which contain information about the images, videos, tracks, annotations, categories, and other metadata. The format of the annotations is as follows:

```python
{
    "images": [image],
    "videos": [video],
    "tracks": [track],
    "annotations": [annotation],
    "categories": [category]
}

image: {
    "id": int,                            # Unique ID of the image
    "video_id": int,                      # Reference to the parent video
    "file_name": str,                     # Path to the image file
    "width": int,                         # Image width in pixels
    "height": int,                        # Image height in pixels
    "frame_index": int,                   # Index of the frame within the video (starting from 0)
    "frame_id": int,                      # Redundant or external frame ID (optional alignment)
    "video": str,                         # Name of the video
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int]  # Categories not exhaustively labeled in this image (optional)
}

video: {
    "id": int,                            # Unique video ID
    "name": str,                          # Human-readable or path-based name
    "width": int,                         # Frame width
    "height": int,                        # Frame height
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int], # Categories not exhaustively labeled in this video (optional)
    "frame_range": int,                   # Number of frames between annotated frames
    "metadata": dict                      # Metadata for the video
}

track: {
    "id": int,                            # Unique track ID
    "category_id": int,                   # Object category
    "video_id": int                       # Associated video
}

category: {
    "id": int,                            # Unique category ID
    "name": str                           # Human-readable name of the category
}

annotation: {
    "id": int,                            # Unique annotation ID
    "image_id": int,                      # Image/frame ID
    "video_id": int,                      # Video ID
    "track_id": int,                      # Associated track ID
    "bbox": [x, y, w, h],                 # Bounding box in absolute pixel coordinates
    "area": float,                        # Area of the bounding box
    "category_id": int,                   # Category of the object
    "iscrowd": int,                       # Crowd flag (from COCO)
    "segmentation": [],                   # Polygon-based segmentation (if available)
    "instance_id": int,                   # Instance index within the video
    "scale_category": str                 # Scale type (e.g., 'moving-object')
}
```
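
To make the relations between `images`, `tracks`, and `annotations` concrete, here is a minimal sketch that rebuilds per-track trajectories from one of the annotation files, assuming the schema documented above (the validation file path is just an example):

```python
import json
from collections import defaultdict

with open("htd/annotations/hard_tracks_dataset_coco_val.json") as f:
    coco = json.load(f)

# Map image_id -> frame_index so each trajectory can be ordered temporally.
frame_of = {img["id"]: img["frame_index"] for img in coco["images"]}

# Group boxes by track_id: track_id -> [(frame_index, [x, y, w, h]), ...]
trajectories = defaultdict(list)
for ann in coco["annotations"]:
    trajectories[ann["track_id"]].append((frame_of[ann["image_id"]], ann["bbox"]))

# Sort each trajectory by frame index.
for boxes in trajectories.values():
    boxes.sort(key=lambda fb: fb[0])

print(f"Reconstructed {len(trajectories)} trajectories")
```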