Tasks: Object Detection
Languages: English
Size: 100K<n<1M
Tags: Multi-object-tracking
Update README.md

README.md CHANGED
@@ -33,20 +33,7 @@ _[Hamidreza Hashempoor](https://hamidreza-hashempoor.github.io/), Yu Dong Hwang

 ## Dataset Overview

-GT format is like (each line):
-`frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, 1.0`.
-
-To prepare the dataset, first run `extract_frames.py` to decode frames from each video.
-In **line 11** of the script, add the video filename and the number of frames you want to extract.
-```bash
-python extract_frames.py
-```
-
-Then, convert the ground truth into COCO format with:
-```bash
-python convert_to_coco.py
-```
-This will generate annotations/train.json ready for training your detector.

 Brief statistics and visualization of FastTracker benchmark and its comparison with other benchmarks.

@@ -84,7 +71,22 @@ Brief statistics and visualization of FastTracker benchmark and its comparison w

 ### Data Format

+Each line of the ground-truth (GT) file follows the format:
+`frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, 1.0`.
+
+To prepare the dataset, first run `extract_frames.py` to decode frames from each video.
+In **line 11** of the script, set the video filename and the number of frames to extract.
+```bash
+python extract_frames.py
+```
+
+Then convert the ground truth into COCO format with:
+```bash
+python convert_to_coco.py
+```
+This generates `annotations/train.json`, ready for training your detector.

 ## Citation
 If you use our code or Benchmark, please cite our work.
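As an illustration of the GT line format, here is a minimal parser sketch. The `GTBox`/`parse_gt_line` names and the interpretation of the trailing `1.0` field (kept here as `last_field`) are my own assumptions, not part of the dataset's tooling:

```python
from dataclasses import dataclass


@dataclass
class GTBox:
    frame: int
    track_id: int
    bb_left: float
    bb_top: float
    bb_width: float
    bb_height: float
    conf: float
    cls: int
    last_field: float  # trailing value, 1.0 in this dataset


def parse_gt_line(line: str) -> GTBox:
    # Split the comma-separated fields and cast each to its type.
    f = [v.strip() for v in line.split(",")]
    return GTBox(int(f[0]), int(f[1]), float(f[2]), float(f[3]),
                 float(f[4]), float(f[5]), float(f[6]), int(f[7]), float(f[8]))


box = parse_gt_line("1, 3, 100, 50, 40, 80, 1, 1, 1.0")
```

Note that `bb_left`/`bb_top` with `bb_width`/`bb_height` describe the box by its top-left corner plus size, which maps directly onto COCO's `[x, y, w, h]` bbox convention.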
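A rough sketch of the kind of conversion `convert_to_coco.py` performs. Everything concrete below is hypothetical (the `mot_to_coco` name, the image sizes, the `{frame:06d}.jpg` filename pattern, and the single-category list); the real script presumably derives these from the extracted frames:

```python
import json


def mot_to_coco(gt_lines, img_width=1920, img_height=1080):
    # Hypothetical fixed sizes; a real converter would read them per frame.
    images, annotations = {}, []
    for ann_id, line in enumerate(gt_lines, start=1):
        f = [v.strip() for v in line.split(",")]
        frame, cls = int(f[0]), int(f[7])
        x, y, w, h = (float(v) for v in f[2:6])
        # One COCO image record per distinct frame.
        if frame not in images:
            images[frame] = {"id": frame, "file_name": f"{frame:06d}.jpg",
                             "width": img_width, "height": img_height}
        # One COCO annotation per GT line; bbox is [x, y, w, h].
        annotations.append({"id": ann_id, "image_id": frame, "category_id": cls,
                            "bbox": [x, y, w, h], "area": w * h, "iscrowd": 0})
    return {"images": list(images.values()),
            "annotations": annotations,
            "categories": [{"id": 1, "name": "object"}]}


coco = mot_to_coco(["1,3,100,50,40,80,1,1,1.0", "2,3,102,51,40,80,1,1,1.0"])
print(json.dumps(coco)[:80])
```

The resulting dict has the three top-level keys (`images`, `annotations`, `categories`) that COCO-style detector training pipelines expect in `train.json`.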