# Contents
## Multi-Object Tracking Dataset Preparation
- [MOT Dataset](#mot-dataset)
- [Dataset Directory](#dataset-directory)
- [Data Format](#data-format)
- [Custom Dataset Preparation](#custom-dataset-preparation)
- [Citation](#citation)
### MOT Dataset
PaddleDetection implements JDE and FairMOT and uses the same training data as they do, named 'MIX', which includes Caltech Pedestrian, CityPersons, CUHK-SYSU, PRW, ETHZ, MOT17 and MOT16. The former six are mixed together as the training dataset, and MOT16 is used as the evaluation dataset. If you want to use these datasets, please follow their licenses.
Notes:
- Multi-Object Tracking (MOT) datasets are typically used for single-category tracking. DeepSORT, JDE and FairMOT are single-category MOT models, and the 'MIX' dataset and its sub-datasets are also single-category pedestrian tracking datasets. They can be regarded as detection datasets with additional ID ground truth.
- In order to train models for more scenes, more datasets have also been processed into the same format as the MIX dataset. The PaddleDetection team also provides feature datasets and models for vehicle tracking, head tracking and more general pedestrian tracking. User-defined datasets can be prepared by referring to this data preparation doc.
- The multi-class MOT model is [MCFairMOT](../../configs/mot/mcfairmot/readme_cn.md), and the multi-class dataset is the integrated version of the VisDrone dataset. Please refer to the MCFairMOT doc.
- The Multi-Target Multi-Camera Tracking (MTMCT) model uses the AIC21 MTMCT (CityFlow) Multi-Camera Vehicle Tracking dataset. For the dataset and model, please refer to the MTMCT doc.
### Dataset Directory
First, download `image_lists.zip` using the following command, and unzip it into `PaddleDetection/dataset/mot`:
```bash
wget https://bj.bcebos.com/v1/paddledet/data/mot/image_lists.zip
```
Then, download the MIX dataset using the following commands, and unzip the archives into `PaddleDetection/dataset/mot`:
```bash
wget https://bj.bcebos.com/v1/paddledet/data/mot/MOT17.zip
wget https://bj.bcebos.com/v1/paddledet/data/mot/Caltech.zip
wget https://bj.bcebos.com/v1/paddledet/data/mot/CUHKSYSU.zip
wget https://bj.bcebos.com/v1/paddledet/data/mot/PRW.zip
wget https://bj.bcebos.com/v1/paddledet/data/mot/Cityscapes.zip
wget https://bj.bcebos.com/v1/paddledet/data/mot/ETHZ.zip
wget https://bj.bcebos.com/v1/paddledet/data/mot/MOT16.zip
```
The final directory is:
```
dataset/mot
  ├── image_lists
  │     ├── caltech.10k.val
  │     ├── caltech.all
  │     ├── caltech.train
  │     ├── caltech.val
  │     ├── citypersons.train
  │     ├── citypersons.val
  │     ├── cuhksysu.train
  │     ├── cuhksysu.val
  │     ├── eth.train
  │     ├── mot16.train
  │     ├── mot17.train
  │     ├── prw.train
  │     └── prw.val
  ├── Caltech
  ├── Cityscapes
  ├── CUHKSYSU
  ├── ETHZ
  ├── MOT16
  ├── MOT17
  └── PRW
```
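Optionally, before training, you can sanity-check that everything was unzipped into the right place. A minimal Python sketch (the expected directory names are taken from the tree above):

```python
import os

# Expected top-level entries under dataset/mot, per the tree above.
expected = ['image_lists', 'Caltech', 'Cityscapes', 'CUHKSYSU',
            'ETHZ', 'MOT16', 'MOT17', 'PRW']

root = 'dataset/mot'
missing = [name for name in expected
           if not os.path.isdir(os.path.join(root, name))]
print('missing:', missing or 'none')
```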
### Data Format
These relevant datasets all share the following structure:
```
MOT17
  ├── images
  │     ├── train
  │     └── test
  └── labels_with_ids
        └── train
```
Annotations of these datasets are provided in a unified format. Every image has a corresponding annotation text file. Given an image path, the annotation text path can be generated by replacing the string `images` with `labels_with_ids` and replacing `.jpg` with `.txt`.
In the annotation text file, each line describes a bounding box in the following format:

```
[class] [identity] [x_center] [y_center] [width] [height]
```
Notes:
- `class` is the class id. Both single-class and multi-class annotations are supported; ids start from `0`, and for single-class datasets the value is always `0`.
- `identity` is an integer from `1` to `num_identities` (`num_identities` is the total number of object instances in the dataset), or `-1` if the box has no identity annotation.
- `[x_center] [y_center] [width] [height]` are the center coordinates, width and height of the box. Note that they are normalized by the image width/height, so they are floating-point numbers ranging from 0 to 1.
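As an illustration of the two conventions above, here is a minimal Python sketch (the example path is hypothetical) that maps an image path to its annotation path and converts one annotation line back to pixel coordinates:

```python
def image_to_label_path(image_path):
    # labels_with_ids mirrors the images tree: swap the directory
    # name and the file extension, as described above.
    return image_path.replace('images', 'labels_with_ids', 1).replace('.jpg', '.txt')

def parse_annotation_line(line, img_w, img_h):
    # [class] [identity] [x_center] [y_center] [width] [height],
    # where the last four values are normalized to [0, 1].
    cls, identity, xc, yc, w, h = line.split()
    return (int(cls), int(identity),
            float(xc) * img_w, float(yc) * img_h,
            float(w) * img_w, float(h) * img_h)

# Hypothetical image path and a 1920x1080 frame:
print(image_to_label_path('dataset/mot/MOT17/images/train/MOT17-02/img1/000001.jpg'))
print(parse_annotation_line('0 1 0.5 0.5 0.1 0.2', 1920, 1080))
```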
### Custom Dataset Preparation
In order to standardize training and evaluation, custom data needs to be converted into the same directory structure and format as the MOT-16 dataset:
```
custom_data
  ├── images
  │     ├── test
  │     └── train
  │           ├── seq1
  │           │     ├── gt
  │           │     │     └── gt.txt
  │           │     ├── img1
  │           │     │     ├── 000001.jpg
  │           │     │     ├── 000002.jpg
  │           │     │     └── ...
  │           │     └── seqinfo.ini
  │           ├── seq2
  │           └── ...
  └── labels_with_ids
        └── train
              ├── seq1
              │     ├── 000001.txt
              │     ├── 000002.txt
              │     └── ...
              ├── seq2
              └── ...
```
#### images
- `gt.txt` is the original annotation file covering all images extracted from the video.
- `img1` is the folder of images extracted from the video at a certain frame rate.
- `seqinfo.ini` is a video information description file, which is required to have the following format:
```ini
[Sequence]
name=MOT16-02
imDir=img1
frameRate=30
seqLength=600
imWidth=1920
imHeight=1080
imExt=.jpg
```
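Since `seqinfo.ini` is a standard INI file, it can be read with Python's built-in `configparser`. A minimal sketch (the path is hypothetical):

```python
import configparser

# Read the sequence description file (hypothetical path).
info = configparser.ConfigParser()
info.read('custom_data/images/train/seq1/seqinfo.ini')

seq = info['Sequence']
img_w, img_h = int(seq['imWidth']), int(seq['imHeight'])
print(seq['name'], img_w, img_h, int(seq['seqLength']))
```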
Each line in `gt.txt` describes a bounding box, with the format as follows:

```
[frame_id],[identity],[bb_left],[bb_top],[width],[height],[score],[label],[vis_ratio]
```
Notes:
- `frame_id` is the current frame id.
- `identity` is an integer from `1` to `num_identities` (`num_identities` is the total number of object instances in this video or image sequence), or `-1` if the box has no identity annotation.
- `bb_left` is the x coordinate of the left boundary of the target box.
- `bb_top` is the y coordinate of the top boundary of the target box.
- `width`, `height` are the pixel width and height of the box.
- `score` acts as a flag for whether the entry is to be considered. A value of `0` means this particular instance is ignored in the evaluation, while `1` marks it as active. `1` by default.
- `label` is the class of the annotated object. Use `1` by default, because only single-class multi-object tracking is supported at present. MOT-16 contains other object classes, but they are treated as ignored.
- `vis_ratio` is the visibility ratio of the bounding box, which may be reduced by occlusion from another static or moving object or by image border cropping. `1` by default.
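The conversion from `gt.txt` to the `labels_with_ids` files described below is mostly arithmetic: pixel-space corner coordinates become normalized center coordinates, and rows are grouped by frame. A minimal sketch of that step (the function and variable names are hypothetical; the provided `gen_labels_MOT.py` script performs the actual conversion):

```python
from collections import defaultdict

def gt_to_labels_with_ids(gt_path, img_w, img_h):
    """Group gt.txt rows by frame and rewrite them as
    '[class] [identity] [x_center] [y_center] [width] [height]' lines."""
    per_frame = defaultdict(list)
    with open(gt_path) as f:
        for row in f:
            fid, identity, left, top, w, h, score, label, _vis = row.strip().split(',')[:9]
            if int(float(score)) == 0 or int(label) != 1:
                continue  # skip ignored entries and non-pedestrian classes
            left, top, w, h = map(float, (left, top, w, h))
            xc, yc = left + w / 2.0, top + h / 2.0  # corner -> center
            per_frame[int(fid)].append('0 {} {:.6f} {:.6f} {:.6f} {:.6f}'.format(
                identity, xc / img_w, yc / img_h, w / img_w, h / img_h))
    # per_frame[1] holds the lines for 000001.txt, per_frame[2] for 000002.txt, ...
    return per_frame
```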
#### labels_with_ids
Annotations of these datasets are provided in a unified format. Every image has a corresponding annotation text file. Given an image path, the annotation text path can be generated by replacing the string `images` with `labels_with_ids` and replacing `.jpg` with `.txt`.
In the annotation text file, each line describes a bounding box in the following format:

```
[class] [identity] [x_center] [y_center] [width] [height]
```
Notes:
- `class` is the class id. Both single-class and multi-class annotations are supported; ids start from `0`, and for single-class datasets the value is always `0`.
- `identity` is an integer from `1` to `num_identities` (`num_identities` is the total number of object instances across all videos or image sequences in the dataset), or `-1` if the box has no identity annotation.
- `[x_center] [y_center] [width] [height]` are the center coordinates, width and height of the box. Note that they are normalized by the image width/height, so they are floating-point numbers ranging from 0 to 1.
Generate the corresponding `labels_with_ids` files with the following command:
```bash
cd dataset/mot
python gen_labels_MOT.py
```
### Citation
Caltech:
```
@inproceedings{dollarCVPR09peds,
  author = "P. Doll\'ar and C. Wojek and B. Schiele and P. Perona",
  title = "Pedestrian Detection: A Benchmark",
  booktitle = "CVPR",
  month = "June",
  year = "2009",
  city = "Miami",
}
```
CityPersons:
```
@INPROCEEDINGS{Shanshan2017CVPR,
  author = {Shanshan Zhang and Rodrigo Benenson and Bernt Schiele},
  title = {CityPersons: A Diverse Dataset for Pedestrian Detection},
  booktitle = {CVPR},
  year = {2017}
}

@INPROCEEDINGS{Cordts2016Cityscapes,
  title = {The Cityscapes Dataset for Semantic Urban Scene Understanding},
  author = {Cordts, Marius and Omran, Mohamed and Ramos, Sebastian and Rehfeld, Timo and Enzweiler, Markus and Benenson, Rodrigo and Franke, Uwe and Roth, Stefan and Schiele, Bernt},
  booktitle = {Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2016}
}
```
CUHK-SYSU:
```
@inproceedings{xiaoli2017joint,
  title = {Joint Detection and Identification Feature Learning for Person Search},
  author = {Xiao, Tong and Li, Shuang and Wang, Bochao and Lin, Liang and Wang, Xiaogang},
  booktitle = {CVPR},
  year = {2017}
}
```
PRW:
```
@inproceedings{zheng2017person,
  title = {Person Re-identification in the Wild},
  author = {Zheng, Liang and Zhang, Hengheng and Sun, Shaoyan and Chandraker, Manmohan and Yang, Yi and Tian, Qi},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages = {1367--1376},
  year = {2017}
}
```
ETHZ:
```
@InProceedings{eth_biwi_00534,
  author = {A. Ess and B. Leibe and K. Schindler and L. van Gool},
  title = {A Mobile Vision System for Robust Multi-Person Tracking},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08)},
  year = {2008},
  month = {June},
  publisher = {IEEE Press}
}
```
MOT-16&17:
```
@article{milan2016mot16,
  title = {MOT16: A Benchmark for Multi-Object Tracking},
  author = {Milan, Anton and Leal-Taix{\'e}, Laura and Reid, Ian and Roth, Stefan and Schindler, Konrad},
  journal = {arXiv preprint arXiv:1603.00831},
  year = {2016}
}
```