---
license: bigscience-openrail-m
task_categories:
  - object-detection
tags:
  - tracking
  - multi-object-tracking
  - vehicle-tracking
  - traffic
  - computer-vision
---

# FastTracker Benchmark

A new benchmark dataset comprising diverse vehicle classes with frame-level tracking annotations, introduced in the paper [FastTracker: Real-Time and Accurate Visual Tracking](https://arxiv.org/abs/2508.14370) (arXiv:2508.14370).

Code: https://github.com/Hamidreza-Hashempoor/FastTracker

Authors: Hamidreza Hashempoor, Yu Dong Hwang.


## Dataset Overview

Brief statistics of the FastTracker benchmark and a comparison with other benchmarks:

| Attribute   | UrbanTracker | CityFlow | FastTracker |
|-------------|--------------|----------|-------------|
| Year        | 2014         | 2022     | 2025        |
| Detections  | 12.5K        | 890K     | 800K        |
| #Videos     | 5            | 40       | 12          |
| Obj/Frame   | 5.4          | 8.2      | 43.5        |
| #Classes    | 3            | 1        | 9           |
| #Scenarios  | 1            | 4        | 12          |

## Dataset Summary

- **What is it?**
  FastTrack is a large-scale benchmark dataset for evaluating multi-object tracking in complex and high-density traffic environments. It includes 800K annotated object detections across 12 videos, with an average of 43.5 objects per frame. The dataset features 9 traffic-related classes and covers diverse real-world traffic scenarios, such as multilane intersections, tunnels, crosswalks, and merging roads, captured under varying lighting conditions (daytime, nighttime, shadows).

- **Why was it created?**
  FastTrack was created to address limitations of existing benchmarks like UrbanTracker and CityFlow, which lack diversity in scene types and have lower object density. This benchmark introduces challenging conditions, including extreme crowding, long-term occlusions, and diverse motion patterns, to push the boundaries of modern multi-object tracking algorithms, particularly those optimized for real-world urban traffic settings.

- **What can it be used for?**
  Multi-object tracking, re-identification, online tracking evaluation, urban scene understanding, and benchmarking tracking algorithms under occlusion and crowding.

- **Who are the intended users?**
  Researchers and practitioners in computer vision and intelligent transportation systems, especially those focusing on real-time tracking, urban mobility, autonomous driving, and edge deployment. It is also valuable for students and developers working on lightweight or environment-aware tracking models.


## Sample Usage

For detailed instructions on installation, data preparation, running tracking, evaluation, and demos, please refer to the [FastTracker GitHub repository](https://github.com/Hamidreza-Hashempoor/FastTracker).

Here's a quick start for setting up the environment:

```shell
cd <home>
conda create --name FastTracker python=3.9
conda activate FastTracker
pip3 install -r requirements.txt  # Ignore the errors
python setup.py develop
pip3 install cython
conda install -c conda-forge pycocotools
pip3 install cython_bbox
```

And an example of running the tracker on the MOT17 benchmark:

```shell
bash run_mot17.sh
```

## Dataset Structure

### Data Format

The FastTrack benchmark follows the MOTChallenge standard annotation format. Each ground truth file (gt/gt.txt) contains a list of object annotations per frame in CSV format with the following 10 columns:
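Since the card states the annotations follow the MOTChallenge standard, a minimal parser can be sketched as below. The column names here are an assumption based on the public MOTChallenge convention (frame, track id, bounding box, confidence, plus three trailing columns), not taken from this card, and the sample line is hypothetical.

```python
# Minimal sketch of a parser for one MOTChallenge-style gt.txt line.
# Column semantics assume the standard MOTChallenge layout; verify against
# the FastTracker repository before relying on them.
from dataclasses import dataclass


@dataclass
class GtRow:
    frame: int        # frame index (typically 1-based)
    track_id: int     # unique track/object id
    bb_left: float    # bounding-box left coordinate (pixels)
    bb_top: float     # bounding-box top coordinate (pixels)
    bb_width: float   # bounding-box width (pixels)
    bb_height: float  # bounding-box height (pixels)
    conf: float       # confidence / consider-entry flag
    x: float          # trailing column (meaning depends on the release)
    y: float
    z: float


def parse_gt_line(line: str) -> GtRow:
    vals = line.strip().split(",")
    if len(vals) != 10:
        raise ValueError(f"expected 10 comma-separated columns, got {len(vals)}")
    return GtRow(int(vals[0]), int(vals[1]), *(float(v) for v in vals[2:]))


# Hypothetical example line in the 10-column format:
row = parse_gt_line("1,1,912.0,484.0,97.0,109.0,1,-1,-1,-1")
```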

## Citation

If you use our code or benchmark, please cite our work:

```bibtex
@misc{hashempoor2025fasttrackerrealtimeaccuratevisual,
      title={FastTracker: Real-Time and Accurate Visual Tracking},
      author={Hamidreza Hashempoor and Yu Dong Hwang},
      year={2025},
      eprint={2508.14370},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.14370},
}
```