mehmetkeremturkcan committed (verified) · Commit 044fa2f · Parent(s): be7ceac

Update README.md

Files changed (1): README.md (+95 −3)
---
license: cc-by-nc-sa-3.0
---

# :stars: Constellation Dataset: Benchmarking High-Altitude Object Detection for an Urban Intersection

<img width="100%" src="https://keremturkcan.com/projects/constellation/constellation_a2.png" alt="Constellation dataset banner">

**Paper:** [arXiv](https://arxiv.org/abs/2404.16944)

**Website:** [Constellation Dataset](https://mkturkcan.github.io/constellation-web/)

## Abstract

We introduce Constellation, a dataset of 13K images suitable for research on high-altitude object detection in dense urban streetscapes observed from high-elevation cameras, collected across a variety of temporal conditions. The dataset addresses the need for curated data to explore problems in small object detection, exemplified by the limited pixel footprint of pedestrians observed from tens of meters above. It enables testing object detection models under variations in lighting, building shadows, weather, and scene dynamics. We evaluate contemporary object detection architectures on the dataset, observing that state-of-the-art methods have lower performance in detecting small pedestrians compared to vehicles, corresponding to a 10% difference in average precision (AP). Pretraining the models on structurally similar datasets increases mean AP (mAP) by 1.8%. We further find that incorporating domain-specific data augmentations improves model performance, and that training on pseudo-labeled data, obtained from the inference outputs of the best-performing models, improves it further. Finally, comparing models trained on data collected in two different time intervals, we find a performance drift due to changes in intersection conditions over time. The best-performing model achieves a pedestrian AP of 92.0% with 11.5 ms inference time on NVIDIA A100 GPUs, and an mAP of 95.4%.

## Updates

* :white_check_mark: Additional dataset download links
* :white_check_mark: Release of models trained on different datasets
* :white_check_mark: Release of pretrained models

## Setup

* (For training) Download the dataset using the link below.
* Install [ultralytics](https://github.com/ultralytics/ultralytics) with:

```bash
pip install ultralytics
```

Dataset configuration files are provided in the `configs/` folder.
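
A typical ultralytics dataset YAML looks like the sketch below; the root path and class names here are illustrative assumptions, not taken from the repository — check the shipped `configs/constellation.yaml` for the actual values:

```yaml
# Hypothetical sketch of an ultralytics dataset config (see configs/constellation.yaml)
path: /data/constellation   # root of the downloaded dataset; adjust to your path
train: images/train         # training images, relative to `path`
val: images/val             # validation images, relative to `path`

names:                      # class IDs and names are illustrative assumptions
  0: pedestrian
  1: vehicle
```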

## Dataset Download

The Constellation dataset is available in YOLO format from the links below:

**Google Drive:** https://drive.google.com/drive/folders/11k-EDDusIvvQB0Ss46c-_7GX3jvjWw4B?usp=sharing

**COSMOS:** :soon:
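
In YOLO format, each image has a companion `.txt` label file with one `class x_center y_center width height` line per object, coordinates normalized to [0, 1]. A minimal sketch for converting such a line to pixel coordinates (the class ID, box values, and image size below are illustrative, not taken from the dataset):

```python
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Convert one YOLO label line to (class_id, x_min, y_min, x_max, y_max) in pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    # Centers and sizes are normalized; scale by the image dimensions.
    x_min = (xc - w / 2) * img_w
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return int(cls), x_min, y_min, x_max, y_max

# Example: a small object centered in a 1920x1080 frame (illustrative values)
print(yolo_to_pixels("0 0.5 0.5 0.02 0.04", 1920, 1080))
```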

## Model Zoo

We provide a number of pretrained models for PyTorch and TensorRT.

### Model Table

| Model Link | Architecture | Augmentation | Pretraining Dataset | Finetuning Dataset | mAP@50 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| [Google Drive](https://drive.google.com/file/d/1eZITstx9uEbdARBlVOXblxs6KmafUFOb/view?usp=sharing) | YOLOv8x | :x: | COCO | Constellation | 93.0 |
| [Google Drive](https://drive.google.com/file/d/1iKIOzukvwBu-aSv2mCNpJqc3iIzW9ASj/view?usp=sharing) | YOLOv8x | :white_check_mark: | COCO | Constellation | 94.7 |
| [Google Drive](https://drive.google.com/file/d/1y552RLi7Hk_fqz70EgEaq58x0v7rfQmM/view?usp=sharing) | YOLOv8x | :white_check_mark: | VisDrone | Constellation | **95.4** |
| [Google Drive](https://drive.google.com/file/d/1wRgVRFU_ibL59VhH9zCMreiWi-CUaojq/view?usp=sharing) | YOLOv8n | :white_check_mark: | VisDrone | Constellation | 94.5 |
| [Google Drive](https://drive.google.com/file/d/1BFx9efEab7Nig7c7aOzK2y5KLBNukbzb/view?usp=sharing) | YOLOv8x (P2-P6) | :x: | COCO | Constellation | 94.3 |
| [Google Drive](https://drive.google.com/file/d/1RFy98nhgGz9jfKfvnN7JU8Ruer1-pIGw/view?usp=sharing) | DETR-x | :x: | COCO | Constellation | 92.3 |
| [Google Drive](https://drive.google.com/file/d/1Df5kwaOKd9iCR8o4b96C5ZO9TJON0R7c/view?usp=sharing) | CFINet | :x: | COCO | Constellation | 89.3 |
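
The mAP@50 column counts a prediction as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal IoU check in plain Python (the `(x_min, y_min, x_max, y_max)` boxes below are illustrative values, not dataset annotations):

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted by a few pixels can still match at the 0.5 threshold:
print(iou((100, 100, 120, 140), (104, 104, 124, 144)) >= 0.5)  # → True
```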

### Model Directories

All models can also be downloaded as a single .zip file from the link below:

**PyTorch Model Directory:** https://drive.google.com/drive/folders/1RLHkXApuIHzqgoH8CTOtXNt5yfp81sWn

## Training and Inference

### YOLOv8/DETR Models

We provide the training script, including the set of augmentations with all parameters, under `training/`.

#### Dataset Configuration

See `configs/constellation.yaml` and set the dataset path to your download location.

#### Training

See `training/ultralytics/train_script.py`. The script trains all models in the paper sequentially.

#### Evaluation

See `evaluation/ultralytics/evaluation.py`.

### CFINet

Please follow the instructions under `training/cfinet` for training and evaluation.

## Reference

```bibtex
@article{turkcan2024constellation,
  title={Constellation Dataset: Benchmarking High-Altitude Object Detection for an Urban Intersection},
  author={Turkcan, Mehmet Kerem and Narasimhan, Sanjeev and Zang, Chengbo and Je, Gyung Hyun and Yu, Bo and Ghasemi, Mahshid and Ghaderi, Javad and Zussman, Gil and Kostic, Zoran},
  journal={arXiv preprint arXiv:2404.16944},
  year={2024}
}
```