Hamidreza-Hashemp committed on
Commit 1780a10 · verified · 1 Parent(s): 56caf2b

Update README.md

Files changed (1)
  1. README.md +16 -14
README.md CHANGED
@@ -33,20 +33,7 @@ _[Hamidreza Hashempoor](https://hamidreza-hashempoor.github.io/), Yu Dong Hwang
 
 ## Dataset Overview
 
-GT format is like (each line):
-`frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, 1.0`.
-
-To prepare the dataset, first run `extract_frames.py` to decode frames from each video.
-In **line 11** of the script, add the video filename and the number of frames you want to extract.
-```bash
-python extract_frames.py
-```
-
-Then, convert the ground truth into COCO format with:
-```bash
-python convert_to_coco.py
-```
-This will generate annotations/train.json ready for training your detector.
 
 Brief statistics and visualization of FastTracker benchmark and its comparison with other benchmarks.
 
@@ -84,7 +71,22 @@ Brief statistics and visualization of FastTracker benchmark and its comparison w
 
 ### Data Format
 
-The FastTrack benchmark follows the [MOTChallenge](https://motchallenge.net/) standard annotation format. Each ground truth file (`gt/gt.txt`) contains a list of object annotations per frame in CSV format with the following 10 columns:
+Each line of the ground-truth (GT) file has the format:
+`frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, 1.0`.
+
+To prepare the dataset, first run `extract_frames.py` to decode frames from each video.
+In **line 11** of the script, set the video filename and the number of frames you want to extract.
+```bash
+python extract_frames.py
+```
+
+Then convert the ground truth into COCO format with:
+```bash
+python convert_to_coco.py
+```
+This generates `annotations/train.json`, ready for training your detector.
+
 
 ## Citation
 If you use our code or Benchmark, please cite our work.
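The per-line GT format in the updated README can be read with a short sketch. This is not code from the repository; the helper name `parse_gt_line` is hypothetical, and the field names follow the comma-separated layout given above (the last field is treated as a visibility flag, as in MOTChallenge-style files).

```python
# Hedged sketch: parse one GT line of the form
# `frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, 1.0`.
# Helper name and the "visibility" label for the last field are assumptions.

def parse_gt_line(line: str) -> dict:
    """Split a comma-separated GT line into typed fields."""
    fields = [f.strip() for f in line.split(",")]
    frame, obj_id, left, top, width, height, conf, cls, vis = fields
    return {
        "frame": int(frame),
        "id": int(obj_id),
        "bb_left": float(left),
        "bb_top": float(top),
        "bb_width": float(width),
        "bb_height": float(height),
        "conf": float(conf),
        "class": int(cls),
        "visibility": float(vis),
    }

record = parse_gt_line("1, 3, 100, 200, 50, 80, 1, 1, 1.0")
print(record)
```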
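The frame-extraction step configures a video filename and a frame count on line 11 of `extract_frames.py`. The script itself is not shown here, so the sketch below only illustrates one plausible piece of such a script, the selection of evenly spaced frame indices to extract; the real script's sampling strategy may differ.

```python
# Hedged sketch: pick evenly spaced frame indices for extraction.
# This is an assumption about how frames might be sampled, not the
# repository's extract_frames.py logic.

def frame_indices(total_frames: int, num_to_extract: int) -> list:
    """Return up to `num_to_extract` evenly spaced indices in [0, total_frames)."""
    step = max(total_frames // num_to_extract, 1)
    return list(range(0, total_frames, step))[:num_to_extract]

print(frame_indices(100, 10))
```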
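The conversion step produces `annotations/train.json` in COCO format. A minimal sketch of that transformation, assuming GT rows already parsed into dicts as above, could look like the following; the real `convert_to_coco.py` may use different category ids, file-name patterns, and image metadata.

```python
# Hedged sketch of a GT-rows -> COCO-style dict conversion. The file-name
# pattern, single "object" category, and image size are illustrative
# assumptions, not the repository's actual convert_to_coco.py behavior.

def gt_to_coco(rows, img_width, img_height):
    """rows: list of dicts with frame/id/class/bb_* keys as in the GT format."""
    frames = sorted({r["frame"] for r in rows})
    images = [
        {"id": f, "file_name": f"{f:06d}.jpg",
         "width": img_width, "height": img_height}
        for f in frames
    ]
    annotations = [
        {
            "id": i + 1,
            "image_id": r["frame"],
            "category_id": r["class"],
            "bbox": [r["bb_left"], r["bb_top"], r["bb_width"], r["bb_height"]],
            "area": r["bb_width"] * r["bb_height"],
            "iscrowd": 0,
        }
        for i, r in enumerate(rows)
    ]
    return {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": 1, "name": "object"}],  # assumed single category
    }

rows = [{"frame": 1, "id": 3, "class": 1, "bb_left": 100.0,
         "bb_top": 200.0, "bb_width": 50.0, "bb_height": 80.0}]
coco = gt_to_coco(rows, 1920, 1080)
print(coco["annotations"][0]["bbox"])
```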