---
license: cc-by-nc-4.0
task_categories:
- object-detection
---

# HazyDet: Open-Source Benchmark for Drone-View Object Detection With Depth-Cues in Hazy Scenes

[(paper)](https://arxiv.org/abs/2409.19833)

**HazyDet** is the first benchmark for object detection in hazy drone imagery. It couples physics-driven synthetic data with real foggy drone photos, providing a controlled yet realistic testbed for designing haze-robust detectors.

---

## Abstract

Object detection from aerial platforms under adverse atmospheric conditions, particularly haze, is paramount for robust drone autonomy. Yet this domain remains largely underexplored, primarily hindered by the absence of specialized benchmarks. To bridge this gap, we present HazyDet, the first large-scale benchmark specifically designed for drone-view object detection in hazy conditions. Comprising 383,000 real-world instances derived from both naturally hazy captures and synthetically hazed scenes augmented from clear images, HazyDet provides a challenging and realistic testbed for advancing detection algorithms. To address the severe visual degradation induced by haze, we propose the Depth-Conditioned Detector (DeCoDet), a novel architecture that integrates a Depth-Conditioned Kernel to dynamically modulate feature representations based on depth cues. The practical efficacy and robustness of DeCoDet are further enhanced by training with a Progressive Domain Fine-Tuning (PDFT) strategy to navigate synthetic-to-real domain shifts, and a Scale-Invariant Refurbishment Loss (SIRLoss) to ensure resilient learning from potentially noisy depth annotations. Comprehensive empirical validation on HazyDet substantiates the superiority of our unified DeCoDet framework, which achieves state-of-the-art performance, surpassing the closest competitor by a notable +1.5% mAP on challenging real-world hazy test scenarios. Our dataset and toolkit are available on [GitHub](https://github.com/GrokCV/HazyDet).

## HazyDet

![HazyDet](./docs/dataset_pipeline_sample.jpg)

---

### πŸ“¦ Dataset at a Glance

_Target size buckets: Small < 0.1% of image area, Medium 0.1–1%, Large > 1%_ (a minimal sketch of this bucketing follows the download links below).

| Split | #Images | #Instances | Class | Small | Medium | Large |
|-------|:-------:|:----------:|-------|------:|-------:|------:|
| **Train** | 8,000 | 264,511 | Car | 159,491 | 77,527 | 5,177 |
| | | | Truck | 4,197 | 6,262 | 1,167 |
| | | | Bus | 1,990 | 7,879 | 861 |
| **Val** | 1,000 | 34,560 | Car | 21,051 | 9,881 | 630 |
| | | | Truck | 552 | 853 | 103 |
| | | | Bus | 243 | 1,122 | 125 |
| **Test** | 2,000 | 65,322 | Car | 38,910 | 19,860 | 1,256 |
| | | | Truck | 881 | 1,409 | 263 |
| | | | Bus | 473 | 1,991 | 279 |
| **Real-world Train** | 400 | 13,753 | Car | 5,816 | 6,487 | 695 |
| | | | Truck | 86 | 204 | 57 |
| | | | Bus | 52 | 256 | 100 |
| **Real-world Test** | 200 | 5,543 | Car | 2,351 | 2,506 | 365 |
| | | | Truck | 26 | 86 | 30 |
| | | | Bus | 17 | 107 | 55 |

---

You can also **download** our HazyDet dataset from [**Baidu Netdisk**](https://pan.baidu.com/s/1KKWqTbG1oBAdlIZrTzTceQ?pwd=grok) or [**OneDrive**](https://1drv.ms/f/s!AmElF7K4aY9p83CqLdm4N-JSo9rg?e=H06ghJ).
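The size buckets above depend only on the fraction of the image area that a bounding box covers. The snippet below is a minimal sketch of that bucketing, not part of the official toolkit; the function name and signature are illustrative.

```python
# Minimal sketch (not from the official toolkit): assign a box to the
# Small / Medium / Large bucket used in the "Dataset at a Glance" table.
def size_bucket(box_w: float, box_h: float, img_w: int, img_h: int) -> str:
    """Bucket by the fraction of image area the box covers."""
    ratio = (box_w * box_h) / (img_w * img_h)
    if ratio < 0.001:   # < 0.1 % of the image area
        return "Small"
    if ratio < 0.01:    # 0.1 % - 1 %
        return "Medium"
    return "Large"      # > 1 %

# e.g. a 30x30 px vehicle in a 1333x800 frame covers ~0.08 % of the image -> "Small"
print(size_bucket(30, 30, 1333, 800))
```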
For both training and inference, the following dataset structure is required:

![HazyDet](./docs/dataset_samples.jpg)

```
HazyDet/
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ clean images/
β”‚   β”œβ”€β”€ hazy images/
β”‚   └── lables/
β”œβ”€β”€ val/
β”‚   β”œβ”€β”€ clean images/
β”‚   β”œβ”€β”€ hazy images/
β”‚   └── lables/
β”œβ”€β”€ test/
β”‚   β”œβ”€β”€ clean images/
β”‚   β”œβ”€β”€ hazy images/
β”‚   └── lables/
β”œβ”€β”€ Real-world/
β”‚   β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ test/
β”‚   └── lables/
└── README.md   <-- you are here
```

**Note: the password for both Baidu Netdisk and OneDrive is `grok`.**

## Leaderboard and Model Zoo

All the weight files in the model zoo can be accessed on [Baidu Cloud](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) and [OneDrive](https:).

### Detectors

| Model | Backbone | #Params (M) | GFLOPs | mAP on Synthetic Test-set | mAP on Real-world Test-set | Weight |
|-------------------------|----------|-------------|--------|-----------------------------|-------------------------------|--------|
| **One Stage** | | | | | | |
| YOLOv3 | Darknet53 | 61.63 | 20.19 | 35.0 | 30.7 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| GFL | ResNet50 | 32.26 | 198.65 | 36.8 | 32.5 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| YOLOX | CSPDarkNet | 8.94 | 13.32 | 42.3 | 35.4 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| FCOS | ResNet50 | 32.11 | 191.48 | 45.9 | 32.7 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| VFNet | ResNet50 | 32.71 | 184.32 | 49.5 | 35.6 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| ATSS | ResNet50 | 32.12 | 195.58 | 50.4 | 36.4 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| DDOD | ResNet50 | 32.20 | 173.05 | 50.7 | 37.1 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| TOOD | ResNet50 | 32.02 | 192.51 | 51.4 | 36.7 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| **Two Stage** | | | | | | |
| Faster RCNN | ResNet50 | 41.35 | 201.72 | 48.7 | 33.4 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| Libra RCNN | ResNet50 | 41.62 | 209.92 | 49.0 | 34.5 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| Grid RCNN | ResNet50 | 64.46 | 317.44 | 50.5 | 35.2 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| Cascade RCNN | ResNet50 | 69.15 | 230.40 | 51.6 | 37.2 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| **End-to-End** | | | | | | |
| Conditional DETR | ResNet50 | 43.55 | 91.47 | 30.5 | 25.8 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| DAB DETR | ResNet50 | 43.7 | 91.02 | 31.3 | 27.2 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| Deform DETR | ResNet50 | 40.01 | 203.11 | 51.5 | 36.9 | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
| **DeCoDet** | | | | | | |
| **DeCoDet (Ours)** | ResNet50 | 34.62 | 225.37 | **52.0** | **38.7** | [weight](https://pan.baidu.com/s/1EEX_934Q421RkHCx53akJQ?pwd=grok) |
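All baselines in the table are standard detector families available in MMDetection, on which the HazyDet toolkit on GitHub appears to be built. The snippet below is a hedged sketch of running a downloaded checkpoint with MMDetection's high-level inference API; the config and checkpoint paths are placeholders rather than files shipped with this card, and the result format differs between MMDetection 2.x and 3.x.

```python
# Hedged sketch: run one of the leaderboard checkpoints on a hazy drone image
# using MMDetection's high-level API. All paths below are placeholders.
from mmdet.apis import init_detector, inference_detector

config_file = "configs/hazydet/tood_r50_hazydet.py"    # placeholder config path
checkpoint_file = "checkpoints/tood_r50_hazydet.pth"   # weight downloaded from the model zoo

model = init_detector(config_file, checkpoint_file, device="cuda:0")
result = inference_detector(model, "test/hazy images/000001.jpg")  # placeholder image name
print(result)  # per-class boxes and scores; exact structure depends on the MMDetection version
```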
### Dehazing

| Type | Method | PSNR | SSIM | mAP on Test-set | mAP on Real-world Test-set (RDDTS) | Weight |
|------|--------|:----:|:----:|:---------------:|:----------------------------------:|:------:|
| Baseline | Faster RCNN | - | - | 39.5 | 21.5 | weight |
| Dehaze | GridDehaze | 12.66 | 0.713 | 38.9 (-0.6) | 19.6 (-1.9) | weight |
| Dehaze | MixDehazeNet | 15.52 | 0.743 | 39.9 (+0.4) | 21.2 (-0.3) | weight |
| Dehaze | DSANet | 19.01 | 0.751 | 40.8 (+1.3) | 22.4 (+0.9) | weight |
| Dehaze | FFA | 19.25 | 0.798 | 41.2 (+1.7) | 22.0 (+0.5) | weight |
| Dehaze | DehazeFormer | 17.53 | 0.802 | 42.5 (+3.0) | 21.9 (+0.4) | weight |
| Dehaze | gUNet | 19.49 | 0.822 | 42.7 (+3.2) | 22.2 (+0.7) | weight |
| Dehaze | C2PNet | 21.31 | 0.832 | 42.9 (+3.4) | 22.4 (+0.9) | weight |
| Dehaze | DCP | 16.98 | 0.824 | 44.0 (+4.5) | 20.6 (-0.9) | weight |
| Dehaze | RIDCP | 16.15 | 0.718 | 44.8 (+5.3) | 24.2 (+2.7) | weight |
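The PSNR and SSIM columns compare each dehazed output against its paired clean image. Below is a minimal sketch of computing these two metrics for one image pair with OpenCV and scikit-image (β‰₯ 0.19); the file names are placeholders, and the official evaluation script lives in the GitHub toolkit.

```python
# Minimal sketch (assuming OpenCV and scikit-image >= 0.19 are installed):
# PSNR / SSIM between a dehazed output and its paired clean image.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = cv2.imread("test/clean images/000001.jpg")      # placeholder file name
dehazed = cv2.imread("dehazed_outputs/000001.jpg")       # placeholder file name

psnr = peak_signal_noise_ratio(clean, dehazed, data_range=255)
ssim = structural_similarity(clean, dehazed, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```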
## Citation If you use this toolbox or benchmark in your research, please cite this project. ```bibtex @article{feng2025HazyDet, title={HazyDet: Open-Source Benchmark for Drone-View Object Detection with Depth-Cues in Hazy Scenes}, author={Changfeng Feng and Zhenyuan Chen and Xiang Li and Chunping Wang and Jian Yang and Ming-Ming Cheng and Yimian Dai and Qiang Fu}, year={2025}, journal={arXiv preprint arXiv:2409.19833}, } @article{zhu2021detection, title={Detection and tracking meet drones challenge}, author={Zhu, Pengfei and Wen, Longyin and Du, Dawei and Bian, Xiao and Fan, Heng and Hu, Qinghua and Ling, Haibin}, journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, volume={44}, number={11}, pages={7380--7399}, year={2021}, publisher={IEEE} } ```