---
license: apache-2.0
---

# QuadTrack

<!-- Provide a quick summary of the dataset. -->

QuadTrack is a dataset designed for multi-object tracking (MOT) research, with a focus on panoramic and long-span scenarios. It provides challenging tracking sequences that include drastic appearance variations, prolonged occlusions, and wide field-of-view distortions, enabling the development and evaluation of robust MOT algorithms.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** HNU CVPU
- **Funded by:** National Natural Science Foundation of China (Nos. 62473139 and 12174341), Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ24F050003), and Shanghai SUPREMIND Technology Co., Ltd.
- **Shared by:** HNU CVPU
- **License:** CC BY-NC 4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/xifen523/OmniTrack
- **Paper:** https://arxiv.org/abs/2503.04565
- **Demo:** https://www.youtube.com/watch?v=Q3mvzBtkkeU

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

QuadTrack is designed for multi-object tracking (MOT) research, particularly in panoramic and wide field-of-view scenarios, such as training and benchmarking trackers that must maintain identities through prolonged occlusions and severe appearance changes.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The dataset is organized into two main splits: train and test.

```bash
QuadTrack/
├── train/             # Training set
│   ├── img1/          # Training images (video frames)
│   └── gt/            # Ground-truth annotations (bounding boxes, IDs, etc.)
│
└── test/              # Test set
    └── img1/          # Test images (no ground truth provided)
```
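
For illustration, a split laid out as above can be enumerated with a short Python sketch. The `.jpg` frame extension and the `gt/gt.txt` file name are assumptions based on common MOT dataset conventions, not guarantees of this card:

```python
from pathlib import Path

def list_split(root: str, split: str):
    """Collect frame images and the ground-truth file (if any) for one split.

    Assumes the layout shown above: <root>/<split>/img1/ for frames and
    <root>/<split>/gt/gt.txt for annotations (train split only).
    """
    split_dir = Path(root) / split
    frames = sorted((split_dir / "img1").glob("*.jpg"))  # frames in temporal order
    gt_file = split_dir / "gt" / "gt.txt"                # absent for the test split
    return frames, (gt_file if gt_file.exists() else None)
```

For the test split, the second return value is simply `None`, matching the absence of ground truth noted in the tree.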

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

QuadTrack was created to address the limitations of existing multi-object tracking (MOT) datasets, which often focus on narrow field-of-view scenarios and short-term associations. In contrast, panoramic and long-span tracking poses unique challenges such as:

+ Prolonged occlusions leading to identity switches.
+ Wide field-of-view distortions caused by panoramic cameras.
+ Dramatic appearance variations across long sequences.

The dataset aims to provide a benchmark for developing algorithms that achieve long-term identity stability and robust re-identification in real-world panoramic environments.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

+ Collection: The video sequences were captured using panoramic and wide-angle cameras in complex real-world environments (e.g., urban traffic, crowded public areas).
+ Annotation:
  + Bounding boxes and unique object IDs were assigned frame-by-frame.
  + Annotations follow the standard MOTChallenge format for compatibility.
+ Processing:
  + Frames were extracted at fixed intervals to balance temporal resolution and storage.
  + Quality checks ensured consistency in ID assignment across long occlusions.
+ Tools used: [CVAT](https://www.cvat.ai/)
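
Because the annotations follow the MOTChallenge convention, each `gt.txt` line can be read as nine comma-separated values. The sketch below assumes the standard MOTChallenge column order (`frame, id, x, y, w, h, conf, class, visibility`); verify against the released files before relying on it:

```python
from typing import NamedTuple

class GTEntry(NamedTuple):
    frame: int         # 1-based frame index
    track_id: int      # object identity, stable across the sequence
    x: float           # bounding-box top-left x (pixels)
    y: float           # bounding-box top-left y (pixels)
    w: float           # bounding-box width
    h: float           # bounding-box height
    conf: int          # 1 = counted in evaluation, 0 = ignored
    cls: int           # object class id
    visibility: float  # visible fraction of the object in [0, 1]

def parse_gt_line(line: str) -> GTEntry:
    """Parse one MOTChallenge-style ground-truth line."""
    f, tid, x, y, w, h, conf, cls, vis = line.strip().split(",")
    return GTEntry(int(f), int(tid), float(x), float(y),
                   float(w), float(h), int(conf), int(cls), float(vis))
```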

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

The source videos were collected and annotated by the QuadTrack research team.

+ Producers: Internal annotation team trained for MOT labeling tasks.
+ Demographics: Not applicable, as the dataset focuses on object trajectories rather than personal or sensitive identity information.
+ Note: No personally identifiable information (PII) is included. The dataset is curated strictly for research purposes.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

While QuadTrack provides challenging panoramic multi-object tracking scenarios, several limitations and risks should be noted:

+ Domain bias: The dataset primarily consists of panoramic and wide field-of-view sequences. Models trained on QuadTrack may not generalize well to conventional narrow-angle tracking datasets.
+ Scene diversity: Although collected across different environments, the dataset may not cover all possible real-world scenarios (e.g., extreme weather, night-time, or thermal imagery).
+ Annotation errors: Despite quality control, occasional inaccuracies in bounding boxes or identity switches may exist, especially under heavy occlusion.
+ Ethical risks: As a vision dataset, improper use in surveillance or privacy-intrusive applications could raise ethical concerns.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@inproceedings{luo2025omnidirectional,
  title={Omnidirectional Multi-Object Tracking},
  author={Luo, Kai and Shi, Hao and Wu, Sheng and Teng, Fei and Duan, Mengfei and Huang, Chang and Wang, Yuhang and Wang, Kaiwei and Yang, Kailun},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={21959--21969},
  year={2025}
}
```

**APA:**

Luo, K., Shi, H., Wu, S., Teng, F., Duan, M., Huang, C., Wang, Y., Wang, K., & Yang, K. (2025). Omnidirectional multi-object tracking. *Proceedings of the Computer Vision and Pattern Recognition Conference*, 21959–21969.

## Dataset Card Authors

xifen527

## Dataset Card Contact

kailun.yang@hnu.edu.cn, luokai@hnu.edu.cn