---
task_categories:
- object-detection
language:
- en
size_categories:
- 10K<n<100K
---

[Computer Vision Lab, ETH Zurich](https://vision.ee.ethz.ch/)

<p align="center">
  <img src="./docs/imgs/main.png" alt="Image" width="100%"/>
</p>

## Introduction

We introduce the HardTracksDataset (HTD), a novel multi-object tracking (MOT) benchmark specifically designed to address two critical limitations prevalent in existing tracking datasets. First, most current MOT benchmarks narrowly focus on restricted scenarios, such as pedestrian movements, dance sequences, or autonomous driving environments, and thus lack the object diversity and scenario complexity representative of real-world conditions. Second, datasets featuring broader vocabularies, such as OVT-B and TAO, typically do not sufficiently emphasize challenging scenarios involving long-term occlusions, abrupt appearance changes, and significant position variations. As a consequence, the majority of tracking instances evaluated are relatively easy, obscuring trackers' limitations on truly challenging cases. HTD addresses these gaps by curating a challenging subset of scenarios from existing datasets, explicitly combining large vocabulary diversity with severe visual challenges. By emphasizing difficult tracking scenarios, particularly long-term occlusions and substantial appearance shifts, HTD provides a focused benchmark aimed at fostering the development of more robust and reliable tracking algorithms for complex real-world situations.

## Results of state-of-the-art trackers on HTD

<table>
  <caption>TETA evaluation of state-of-the-art trackers on the HTD validation and test sets, grouped by tracking approach.</caption>
  <thead>
    <tr>
      <th rowspan="2">Method</th>
    </tr>
  </thead>
</table>

## Download Instructions

We provide the full dataset with annotations and metadata on HuggingFace:

- [HTD Dataset 🤗](https://huggingface.co/datasets/mscheidl/htd)

To download the dataset you can use the HuggingFace CLI. First install the HuggingFace CLI according to the official [HuggingFace documentation](https://huggingface.co/docs/huggingface_hub/main/guides/cli) and log in with your HuggingFace account. Then you can download the dataset using the following command:
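The original command is not reproduced in this excerpt; a minimal sketch using the current `huggingface-cli` interface, assuming the repository id `mscheidl/htd` linked above (the local target folder `htd` is an illustrative choice):

```shell
# Download the full HTD dataset repository into a local folder named `htd`.
# Assumes huggingface_hub is installed and you are logged in (`huggingface-cli login`).
huggingface-cli download mscheidl/htd --repo-type dataset --local-dir htd
```

Note that `--repo-type dataset` is required because the id refers to a dataset repository rather than a model.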

The dataset is organized into `data`, `annotations`, and `metadata` folders. The `data` folder contains the videos, the `annotations` folder contains the annotations in COCO (TAO) format, and the `metadata` folder contains the metadata files for running MASA+. If you use HTD independently, you can ignore the `metadata` folder.

## Annotation format for HTD dataset

The annotations folder is structured as follows:

```
├── annotations
    ├── classes.txt
    ├── hard_tracks_dataset_coco_test.json
    ├── hard_tracks_dataset_coco_val.json
    ├── hard_tracks_dataset_coco.json
    ├── hard_tracks_dataset_coco_class_agnostic.json
```

Details about the annotations:

- `classes.txt`: Contains the list of classes in the dataset. Useful for open-vocabulary tracking.
- `hard_tracks_dataset_coco_test.json`: Contains the annotations for the test set.
- `hard_tracks_dataset_coco_val.json`: Contains the annotations for the validation set.
- `hard_tracks_dataset_coco.json`: Contains the annotations for the entire dataset.
- `hard_tracks_dataset_coco_class_agnostic.json`: Contains the annotations for the entire dataset in a class-agnostic format, i.e. there is only one category, named "object", and every object in the dataset is assigned to it.
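The class-agnostic file can also be reproduced from the regular annotations. A minimal sketch (the helper below is hypothetical, not part of the dataset tooling) that remaps every annotation and track to a single "object" category:

```python
def to_class_agnostic(coco):
    """Remap a COCO (TAO)-style annotation dict to a single 'object' category."""
    agnostic = dict(coco)  # shallow copy; only the touched fields are replaced
    agnostic["categories"] = [{"id": 1, "name": "object"}]
    agnostic["annotations"] = [{**ann, "category_id": 1} for ann in coco.get("annotations", [])]
    agnostic["tracks"] = [{**trk, "category_id": 1} for trk in coco.get("tracks", [])]
    return agnostic

# Tiny example with only the fields that matter here
sample = {
    "categories": [{"id": 7, "name": "dog"}, {"id": 9, "name": "car"}],
    "annotations": [{"id": 1, "category_id": 7}, {"id": 2, "category_id": 9}],
    "tracks": [{"id": 1, "category_id": 7, "video_id": 1}],
}
agnostic = to_class_agnostic(sample)
```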

The HTD dataset is annotated in COCO format. The annotations are stored in JSON files, which contain information about the images, annotations, categories, and other metadata. The format of the annotations is as follows:
````python
{
    "images": [image],
    "videos": [video],
    "tracks": [track],
    "annotations": [annotation],
    "categories": [category]
}

image: {
    "id": int,                            # Unique ID of the image
    "video_id": int,                      # Reference to the parent video
    "file_name": str,                     # Path to the image file
    "width": int,                         # Image width in pixels
    "height": int,                        # Image height in pixels
    "frame_index": int,                   # Index of the frame within the video (starting from 0)
    "frame_id": int,                      # Redundant or external frame ID (optional alignment)
    "video": str,                         # Name of the video
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int]  # Categories not exhaustively labeled in this image (optional)
}

video: {
    "id": int,                            # Unique video ID
    "name": str,                          # Human-readable or path-based name
    "width": int,                         # Frame width
    "height": int,                        # Frame height
    "neg_category_ids": [int],            # List of category IDs explicitly not present (optional)
    "not_exhaustive_category_ids": [int], # Categories not exhaustively labeled in this video (optional)
    "frame_range": int,                   # Number of frames between annotated frames
    "metadata": dict                      # Metadata for the video
}

track: {
    "id": int,           # Unique track ID
    "category_id": int,  # Object category
    "video_id": int      # Associated video
}

category: {
    "id": int,    # Unique category ID
    "name": str   # Human-readable name of the category
}

annotation: {
    "id": int,             # Unique annotation ID
    "image_id": int,       # Image/frame ID
    "video_id": int,       # Video ID
    "track_id": int,       # Associated track ID
    "bbox": [x, y, w, h],  # Bounding box in absolute pixel coordinates
    "area": float,         # Area of the bounding box
    "category_id": int,    # Category of the object
    "iscrowd": int,        # Crowd flag (from COCO)
    "segmentation": [],    # Polygon-based segmentation (if available)
    "instance_id": int,    # Instance index within a video
    "scale_category": str  # Scale type (e.g., 'moving-object')
}
````
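
To illustrate how the pieces fit together, here is a minimal sketch (the file path is assumed from the annotations listing above) that loads a split and groups its annotations into per-track trajectories ordered by frame:

```python
import json
from collections import defaultdict

def load_tracks(path):
    """Group annotations by track_id and sort each trajectory by frame_index."""
    with open(path) as f:
        coco = json.load(f)
    # Map image id -> temporal position within its video
    frame_index = {img["id"]: img["frame_index"] for img in coco["images"]}
    trajectories = defaultdict(list)
    for ann in coco["annotations"]:
        trajectories[ann["track_id"]].append(ann)
    # Order each trajectory by time
    for anns in trajectories.values():
        anns.sort(key=lambda a: frame_index[a["image_id"]])
    return trajectories

# tracks = load_tracks("annotations/hard_tracks_dataset_coco_val.json")
# each value is then the time-ordered list of boxes for one object
```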