---
license: bigscience-openrail-m
task_categories:
- object-detection
language:
- en
tags:
- Multi-object-tracking
pretty_name: FastTracker-Benchmark
size_categories:
- 100K<n<1M
---
# FastTracker Benchmark
### A benchmark dataset of diverse vehicle classes with frame-level tracking annotations, introduced in the paper *FastTracker: Real-Time and Accurate Visual Tracking*
_[Hamidreza Hashempoor](https://hamidreza-hashempoor.github.io/), Yu Dong Hwang_.
## Updates
| Date | Update |
|------|---------|
| **2026-03-01** | Special thanks to [Mikhail Kozak](https://huggingface.co/datasets/Fleyderer/FastTracker-Benchmark-MOT), who helped prepare the current revised version of the benchmark. |
## Resources
| Github | Paper |
|:-----------------:|:-------:|
|[![github](https://img.shields.io/badge/Github-Code-blue)](https://github.com/Hamidreza-Hashempoor/FastTracker)|[![arXiv](https://img.shields.io/badge/arXiv-2508.14370-blue)](https://arxiv.org/abs/2508.14370)|
<div align="center">
<img src="./fig/fasttrack_benchmark.jpg" width="40%" alt="FastTracker benchmark" />
</div>
----
## Dataset Overview
Brief statistics of the FastTracker benchmark and a comparison with other benchmarks.
| Attribute | UrbanTracker | CityFlow | FastTracker |
|----------------|--------------|----------|-----------|
| **Year** | 2014 | 2022 | 2025 |
| **Detections** | 12.5K | 890K | 800K |
| **#Videos** | 5 | 40 | 12 |
| **Obj/Frame** | 5.4 | 8.2 | 43.5 |
| **#Classes** | 3 | 1 | 9 |
| **#Scenarios** | 1 | 4 | 12 |
---
## Dataset Summary
- **What is it?**
FastTracker is a large-scale benchmark dataset for evaluating multi-object tracking in complex, high-density traffic environments. It includes 800K annotated object detections across 12 videos, with an average of 43.5 objects per frame. The dataset covers 9 traffic-related classes and diverse real-world traffic scenarios, such as multilane intersections, tunnels, crosswalks, and merging roads, captured under varying lighting conditions (daytime, nighttime, shadows).
- **Why was it created?**
FastTracker was created to address limitations of existing benchmarks such as UrbanTracker and CityFlow, which lack diversity in scene types and have lower object density. The benchmark introduces challenging conditions, including extreme crowding, long-term occlusions, and diverse motion patterns, to push the boundaries of modern multi-object tracking algorithms, particularly those optimized for real-world urban traffic settings.
- **What can it be used for?**
Multi-object tracking, re-identification, online tracking evaluation, urban scene understanding, and benchmarking tracking algorithms under occlusion and crowding.
- **Who are the intended users?**
Researchers and practitioners in computer vision and intelligent transportation systems, especially those focusing on real-time tracking, urban mobility, autonomous driving, and edge deployment. Also valuable for students and developers working on lightweight or environment-aware tracking models.
---
## Dataset Structure
### Data Format
Each sequence is provided as a compressed archive inside the `train/` directory:
```bash
train/
task_day_left_turn.zip
task_day_occlusion.zip
...
```
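As an informal illustration (not part of the official tooling), the archives can be unpacked with a short Python sketch; the `train` path below is a hypothetical local location of the downloaded benchmark:

```python
import zipfile
from pathlib import Path

def extract_sequences(train_dir: Path) -> list[Path]:
    """Extract every task_*.zip archive into a folder named after the sequence."""
    extracted = []
    for archive in sorted(train_dir.glob("task_*.zip")):
        out_dir = train_dir / archive.stem  # e.g. train/task_day_left_turn
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out_dir)
        extracted.append(out_dir)
    return extracted

# "train" is a hypothetical local path to the downloaded benchmark.
if Path("train").is_dir():
    extract_sequences(Path("train"))
```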
After extracting a sequence archive, the structure is:
```bash
task_xxx/
img1/ # extracted frames (.jpg)
gt/ # MOT-format ground truth (gt.txt)
video/ # reconstructed video (.mp4) and metadata
seqinfo.ini # sequence metadata
```
Each line of `gt/gt.txt` follows the MOT ground-truth format:
`frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility`.
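A minimal Python sketch (ours, not part of the benchmark code) for parsing one such row into typed fields could look like:

```python
from dataclasses import dataclass

@dataclass
class GTRow:
    frame: int
    track_id: int
    bb_left: float
    bb_top: float
    bb_width: float
    bb_height: float
    conf: float
    cls: int
    visibility: float

def parse_gt_line(line: str) -> GTRow:
    """Parse one comma-separated MOT ground-truth row from gt.txt."""
    f, i, x, y, w, h, c, k, v = line.strip().split(",")
    return GTRow(int(f), int(i), float(x), float(y),
                 float(w), float(h), float(c), int(k), float(v))
```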
Each sequence includes `seqinfo.ini`:
```ini
[Sequence]
name=task_day_left_turn
imDir=img1
frameRate=30
seqLength=1962
imWidth=1920
imHeight=1080
imExt=.jpg
```
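Since `seqinfo.ini` is standard INI syntax, it can be read with Python's stdlib `configparser`; a small sketch (the `read_seqinfo` helper is our own, hypothetical name):

```python
import configparser
from pathlib import Path

def read_seqinfo(path: Path) -> dict:
    """Read a MOT-style seqinfo.ini and return its fields with native types."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    s = cfg["Sequence"]
    return {
        "name": s["name"],
        "frame_rate": int(s["frameRate"]),
        "seq_length": int(s["seqLength"]),
        "im_width": int(s["imWidth"]),
        "im_height": int(s["imHeight"]),
        "im_ext": s["imExt"],
    }
```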
## Citation
If you use our code or benchmark, please cite our work.
```
@misc{hashempoor2025fasttrackerrealtimeaccuratevisual,
title={FastTracker: Real-Time and Accurate Visual Tracking},
author={Hamidreza Hashempoor and Yu Dong Hwang},
year={2025},
eprint={2508.14370},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.14370},
}
```