mishig HF Staff committed on
Commit
c80ad19
·
verified ·
1 Parent(s): 638ac15

Add 1 files

Files changed (1)
  1. 2206/2206.06427.md +579 -0
2206/2206.06427.md ADDED
@@ -0,0 +1,579 @@
+ Title: A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth
+
+ URL Source: https://arxiv.org/html/2206.06427
+
+ Markdown Content:
+
+ Priya Narayanan, Xin Hu*, Zhenyu Wu*, Matthew D Thielke, John G Rogers, Andre V Harrison, John A D’Agostino, James D Brown, Long P Quang, James R Uplinger, Heesung Kwon, and Zhangyang Wang. P. Narayanan, M. D. Thielke, J. G. Rogers, A. V. Harrison, L. P. Quang, J. R. Uplinger, and H. Kwon are with DEVCOM Army Research Laboratory, Adelphi, MD 20783 USA (e-mail: priya.narayanan.civ@mail.mil). X. Hu is with Tulane University, New Orleans, LA 70118. Z. Wu is with Texas A&M University, College Station, TX 77843. X. Hu and Z. Wu have contributed equally to this paper. Z. Wang is with The University of Texas at Austin, Austin, TX 78712. J. A. D’Agostino and J. D. Brown are with DEVCOM Chemical and Biological Center, Aberdeen Proving Ground, MD 21010. Manuscript received Sept XX, 2021; revised XX XX, XXX.
+
11
+ ###### Abstract
12
+
13
+ Imagery collected from outdoor visual environments is often degraded due to the presence of dense smoke or haze. A key challenge for research in scene understanding in these degraded visual environments (DVE) is the lack of representative benchmark datasets. These datasets are required to evaluate state-of-the-art vision algorithms (e.g., detection and tracking) in degraded settings. In this paper, we address some of these limitations by introducing the first realistic hazy image benchmark, from both aerial and ground view, with paired haze-free images, and in-situ haze density measurements. This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene, and consists of images captured from the perspective of both an unmanned aerial vehicle (UAV) and an unmanned ground vehicle (UGV). We also evaluate a set of representative state-of-the-art dehazing approaches as well as object detectors on the dataset. The full dataset presented in this paper, including the ground truth object classification bounding boxes and haze density measurements, is provided for the community to evaluate their algorithms at: [https://a2i2-archangel.vision](https://a2i2-archangel.vision/). A subset of this dataset has been used for the “Object Detection in Haze” Track of CVPR UG2 2022 challenge at [http://cvpr2022.ug2challenge.org/track1.html](http://cvpr2022.ug2challenge.org/track1.html).
14
+
15
+ ###### Index Terms:
16
+
17
+ Degraded Visual Environment, Dehazing, UAV, Object Detection, visual artefacts, benchmarking
18
+
19
+ ![Image 1: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/1.jpg)
20
+
21
+ (a)
22
+
23
+ ![Image 2: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/11.jpg)
24
+
25
+ (b)
26
+
27
+ ![Image 3: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/3.jpg)
28
+
29
+ (c)
30
+
31
+ ![Image 4: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/4.jpg)
32
+
33
+ (d)
34
+
35
+ ![Image 5: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/5.jpg)
36
+
37
+ (e)
38
+
39
+ ![Image 6: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/10.jpg)
40
+
41
+ (f)
42
+
43
+ ![Image 7: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/7.jpg)
44
+
45
+ (g)
46
+
47
+ ![Image 8: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/8.jpg)
48
+
49
+ (h)
50
+
51
+ ![Image 9: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/13.jpg)
52
+
53
+ (i)
54
+
55
+ ![Image 10: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/14.jpg)
56
+
57
+ (j)
58
+
59
+ ![Image 11: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/15.jpg)
60
+
61
+ (k)
62
+
63
+ ![Image 12: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/16.jpg)
64
+
65
+ (l)
66
+
67
+ ![Image 13: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/image00002.jpg)
68
+
69
+ (m)
70
+
71
+ ![Image 14: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/image00025.jpg)
72
+
73
+ (n)
74
+
75
+ ![Image 15: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/image00131.jpg)
76
+
77
+ (o)
78
+
79
+ ![Image 16: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/paired_images/image00223.jpg)
80
+
81
+ (p)
82
+
83
+ Figure 1: Examples of paired hazy and haze-free images in A2I2-UAV (top three rows) and A2I2-UGV (bottom one row) from A2I2-Haze.
84
+
85
+ I Introduction
86
+ --------------
87
+
88
+ Scene understanding in outdoor environments for applications such as intelligence, surveillance, and reconnaissance (ISR) and autonomous vehicle navigation is extremely challenging in the presence of smoke and other adverse atmospheric conditions such as haze, fog, and mist. These atmospheric phenomena, composed of smoke particles or microscopic water droplets, significantly interfere with the operation of onboard vision systems, often resulting in imagery with non-linear noise, blur, reduced contrast, and color dimming. These visual artifacts, produced by uncontrolled and potentially dynamic outdoor environments or other DVE effects, pose major challenges for many components of semantic scene understanding, including image enhancement, image restoration, object localization, and object classification.
89
+
90
+ To address these challenges, a key requirement is a benchmark that can accurately evaluate the performance of these algorithms at different, quantifiable haze levels. This is beyond the reach and scope of most existing curated haze datasets such as RESIDE[[1](https://arxiv.org/html/2206.06427#bib.bib1)], NH-Haze[[2](https://arxiv.org/html/2206.06427#bib.bib2)], and REVIDE[[3](https://arxiv.org/html/2206.06427#bib.bib3)]. Furthermore, these datasets are inadequate for quantitatively and fairly comparing computer vision algorithms on hazy vs. haze-free imagery of the same scene: they cannot isolate and measure the effect of haze, and they fall short in scene object diversity (discussed in Section II).
91
+
92
+ In this study, we leverage the US Army’s unique capability to produce and measure smoke/obscurant to generate haze in a controlled fashion. We collect imagery of target objects such as civilian vehicles, mannequins, and man-made obstacles from UAVs and UGVs. We also collect metadata such as altitude and local haze density from moving and stationary sensors. We then develop an image dataset with metadata for realistic, accurate, and fine-grained algorithm evaluation in hazy DVEs. In summary, the contributions of this paper are as follows:
93
+
94
+ a) We present A2I2-Haze, the first realistic haze dataset with in-situ smoke measurement aligned to aerial and ground imagery. This multi-purpose dataset has paired hazy and haze-free imagery to allow fine-grained evaluation of low-level vision (dehazing) and high-level vision (detection) tasks. Exemplar images are shown in Fig.[1](https://arxiv.org/html/2206.06427#S0.F1 "Figure 1 ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth").
95
+
96
+ b) We conduct a comprehensive study and evaluation on state-of-the-art single image dehazing and object detection algorithms using this non-synthetic benchmark dataset.
97
+
98
+ II Previous Work
99
+ ----------------
100
+
101
+ This section summarizes some of the key findings from previous DVE studies. In subsection A, we provide an overview of the hazy DVE datasets that are publicly available for benchmarking the performance of computer vision algorithms. We then survey current state-of-the-art single-image dehazing methods and object detection techniques used for evaluating DVE datasets in subsections B and C, respectively.
102
+
103
+ ### II-A Haze Datasets
104
+
105
+ Several of the currently available hazy DVE datasets (Table 1), such as 3R[[4](https://arxiv.org/html/2206.06427#bib.bib4)], HazeRD[[5](https://arxiv.org/html/2206.06427#bib.bib5)], 4K[[6](https://arxiv.org/html/2206.06427#bib.bib6)] and RESIDE[[1](https://arxiv.org/html/2206.06427#bib.bib1)] were synthetically generated using scattering models to simulate hazy conditions. However, while these simulation techniques are useful in generating large-scale training datasets comprising both hazy and reference haze-free images, they often use unrealistic parameters and assumptions such as homogeneity. Further, while synthetically simulating haze ensures all other conditions in the scene are preserved, the technique still poses challenges such as transferring the knowledge to real-world target domains.
106
+
107
+ More recently, a few real-world hazy datasets such as MRFID[[7](https://arxiv.org/html/2206.06427#bib.bib7)], BeDDE[[8](https://arxiv.org/html/2206.06427#bib.bib8)], Dense-Haze[[9](https://arxiv.org/html/2206.06427#bib.bib9)], O-Haze[[10](https://arxiv.org/html/2206.06427#bib.bib10)], I-Haze[[11](https://arxiv.org/html/2206.06427#bib.bib11)], REVIDE[[3](https://arxiv.org/html/2206.06427#bib.bib3)], and NH-Haze[[2](https://arxiv.org/html/2206.06427#bib.bib2)] have been published, with images captured in the presence of haze generated by professional-grade haze machines. Among these datasets, NH-Haze is the most representative of a realistic haze scene, with non-homogeneous hazy and haze-free pairs. MRFID, BeDDE, Dense-Haze, NH-Haze, and O-Haze contain outdoor scenes that make them relevant to ISR and autonomous UGV applications. However, these datasets also have several limitations. The smoke generators and the sensors used for data collection in MRFID, BeDDE, Dense-Haze, and the (NH, O, and I)-Haze datasets are typically mounted in a fixed location and do not capture the spatial variability of the scene. Furthermore, all of these datasets are limited to ground-view imagery. Moreover, they do not provide a measurement of haze or smoke transmissibility for training and evaluation. The main distinction between our A2I2-Haze dataset and previous efforts is that our dataset provides ground and aerial imagery, with haze-free reference images, taken from mobile sensors in an outdoor environment. We also provide highly synchronized qualitative and quantitative measurements of smoke transmissibility, using human assessments and in-situ measurements from ground sensors. This is discussed in detail in subsequent sections.
108
+
109
+ TABLE I: Properties of A2I2-Haze relative to other public dehazing benchmarks. A/G stands for aerial-/ground-view. In/Out stands for in/outdoor. NH stands for non-homogeneous. HM stands for haze measurement. Details of A2I2-Haze are in the first paragraph of Section IV.
+
+ | Datasets | A/G | In/Out | NH | Syn/Real | #Images | HM |
+ | --- | --- | --- | --- | --- | --- | --- |
+ | 3R[[4](https://arxiv.org/html/2206.06427#bib.bib4)] | G | O | | S | 2,750 | |
+ | HazeRD[[5](https://arxiv.org/html/2206.06427#bib.bib5)] | G | O | | S | 33 | |
+ | O-Haze[[10](https://arxiv.org/html/2206.06427#bib.bib10)] | G | O | | R | 45 | |
+ | I-Haze[[11](https://arxiv.org/html/2206.06427#bib.bib11)] | G | I | | R | 35 | |
+ | 4K[[6](https://arxiv.org/html/2206.06427#bib.bib6)] | G | O | | S | 10,000 | |
+ | REVIDE[[3](https://arxiv.org/html/2206.06427#bib.bib3)] | G | I | | R | 1,982 | |
+ | RESIDE[[1](https://arxiv.org/html/2206.06427#bib.bib1)] | G | I+O | | S+R | 100,076 | |
+ | BeDDE[[8](https://arxiv.org/html/2206.06427#bib.bib8)] | G | O | | R | 208 | |
+ | MRFID[[7](https://arxiv.org/html/2206.06427#bib.bib7)]¹ | G | O | | R | 800 | |
+ | Dense-Haze[[9](https://arxiv.org/html/2206.06427#bib.bib9)] | G | O | | R | 33 | |
+ | NH-Haze[[2](https://arxiv.org/html/2206.06427#bib.bib2)] | G | O | ✓ | R | 55 | |
+ | A2I2-Haze | A+G | O | ✓ | R | 1,033 | ✓ |
+
+ ¹ MRFID contains images from 200 clear outdoor scenes; for each clear image, there are 4 images of the same scene at different haze densities.
125
+
126
+ ### II-B Single Image Dehazing
127
+
128
+ Based on the atmospheric scattering model (ASM)[[12](https://arxiv.org/html/2206.06427#bib.bib12), [13](https://arxiv.org/html/2206.06427#bib.bib13), [14](https://arxiv.org/html/2206.06427#bib.bib14)], a hazy image $I$ can be represented as:
+
+ $$I(x) = t(x)\,J(x) + \bigl(1 - t(x)\bigr)A, \qquad (1)$$
+
+ where $J$, $t$, and $A$ denote the latent haze-free image, the transmission map, and the global atmospheric light, respectively. Dehazing with the ASM amounts to estimating the transmission map $t(x)$ and the global atmospheric light $A$. Existing approaches for dehazing can be broadly categorized into prior-based methods and learning-based methods.
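+
+ As a concrete illustration of Eq. (1), the following minimal sketch (an illustrative example, not code released with the dataset) synthesizes a hazy image from a clean image given a transmission map and atmospheric light, and inverts the model to recover the scene radiance when $t$ and $A$ are assumed known:
+
+ ```python
+ import numpy as np
+
+ def apply_asm(J, t, A):
+     """Synthesize a hazy image I(x) = t(x) J(x) + (1 - t(x)) A.
+
+     J: clean image, float array in [0, 1], shape (H, W, 3)
+     t: transmission map in (0, 1], shape (H, W)
+     A: global atmospheric light, scalar or length-3 array
+     """
+     t = t[..., None]                      # broadcast over color channels
+     return t * J + (1.0 - t) * np.asarray(A)
+
+ def invert_asm(I, t, A, t_min=0.1):
+     """Recover J(x) = (I(x) - A) / max(t(x), t_min) + A from estimates of t and A."""
+     t = np.maximum(t, t_min)[..., None]   # clamp t to avoid amplifying noise
+     J = (I - np.asarray(A)) / t + np.asarray(A)
+     return np.clip(J, 0.0, 1.0)
+
+ # Example with a homogeneous haze layer (constant t, A = 0.8):
+ if __name__ == "__main__":
+     J = np.random.rand(240, 320, 3)
+     t = np.full((240, 320), 0.6)
+     I = apply_asm(J, t, 0.8)
+     J_hat = invert_asm(I, t, 0.8)
+     print(np.abs(J - J_hat).max())        # ~0 when t and A are known exactly
+ ```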
133
+
134
+ #### II-B 1 Prior-based Methods
135
+
136
+ Dehazing methods based on priors[[15](https://arxiv.org/html/2206.06427#bib.bib15), [16](https://arxiv.org/html/2206.06427#bib.bib16), [17](https://arxiv.org/html/2206.06427#bib.bib17), [18](https://arxiv.org/html/2206.06427#bib.bib18), [19](https://arxiv.org/html/2206.06427#bib.bib19), [20](https://arxiv.org/html/2206.06427#bib.bib20), [21](https://arxiv.org/html/2206.06427#bib.bib21), [22](https://arxiv.org/html/2206.06427#bib.bib22)] first estimate transmission maps by exploiting the statistical properties of clean images and then obtain dehazed results using the scattering model. Tan _et al_.[[21](https://arxiv.org/html/2206.06427#bib.bib21)] proposed an adaptive contrast enhancement method for haze removal by maximizing the local contrast of hazy images. He _et al_.[[20](https://arxiv.org/html/2206.06427#bib.bib20)] put forward an approach with a dark channel prior (DCP) that assumes the existence of at least one channel for every pixel whose value is close to zero. Zhu _et al_.[[22](https://arxiv.org/html/2206.06427#bib.bib22)] proposed a color attenuation prior for haze removal by estimating the scene depth using a linear model. Fattal[[19](https://arxiv.org/html/2206.06427#bib.bib19)] proposed a color-line prior for the transmission map estimation by exploiting the regularity in natural images wherein small image patches lay in a one-dimensional distribution in the RGB color space. Berman _et al_.[[17](https://arxiv.org/html/2206.06427#bib.bib17)] proposed a non-local prior based on a key observation that pixels in a given cluster are often non-local. Thus, colors of a haze-free image could be approximated by a few hundred distinct colors. Despite some promising results, prior-based approaches have limitations in performance because some of the strong assumptions of these hand-crafted priors often do not hold in a real-world environment.
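+
+ For instance, the dark channel prior described above can be sketched in a few lines. The snippet below is a simplified illustration with assumed parameters (patch size 15, $\omega=0.95$), not the reference implementation of He _et al_.:
+
+ ```python
+ import numpy as np
+ from scipy.ndimage import minimum_filter
+
+ def dark_channel(I, patch=15):
+     """Dark channel: per-pixel minimum over color channels and a local patch."""
+     min_rgb = I.min(axis=2)
+     return minimum_filter(min_rgb, size=patch)
+
+ def estimate_atmospheric_light(I, top_frac=0.001):
+     """Pick A among the brightest pixels within the top fraction of the dark channel."""
+     dc = dark_channel(I)
+     n = max(1, int(top_frac * dc.size))
+     idx = np.argsort(dc.ravel())[-n:]                 # haziest / most opaque pixels
+     candidates = I.reshape(-1, 3)[idx]
+     return candidates[candidates.sum(axis=1).argmax()]
+
+ def estimate_transmission(I, A, omega=0.95, patch=15):
+     """DCP transmission estimate: t(x) = 1 - omega * dark_channel(I / A)."""
+     normalized = I / np.asarray(A, dtype=float)
+     return 1.0 - omega * dark_channel(normalized, patch)
+ ```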
137
+
138
+ #### II-B 2 Learning-based Methods
139
+
140
+ With the availability of large-scale paired data and powerful CNNs, learning-based dehazing methods have become popular in recent years. MSCNN[[23](https://arxiv.org/html/2206.06427#bib.bib23)] estimated the transmission map of hazy images in a coarse-to-fine manner, where a coarse-scale net produced a holistic map based on the whole image and a fine-scale net refined it locally. DehazeNet[[24](https://arxiv.org/html/2206.06427#bib.bib24)] proposed an end-to-end network that learned the mapping between hazy image patches and their corresponding medium transmissions. DCPDN[[25](https://arxiv.org/html/2206.06427#bib.bib25)] embedded the atmospheric scattering model into the network so that it could jointly learn to estimate the transmission map and atmospheric light and perform dehazing. Dehaze-cGAN[[26](https://arxiv.org/html/2206.06427#bib.bib26)] estimated the clean image with an encoder-decoder-based conditional generative adversarial network (cGAN); it also introduced VGG features and an $\ell_1$-regularized gradient prior to improve realism. AOD-Net[[27](https://arxiv.org/html/2206.06427#bib.bib27)] recovered hazy images by reformulating the physical scattering model and developing a lightweight CNN to extract clean images; this approach has been extended to an end-to-end video dehazing and detection network[[28](https://arxiv.org/html/2206.06427#bib.bib28)]. GFN[[29](https://arxiv.org/html/2206.06427#bib.bib29)] introduced a multi-scale gated-fusion end-to-end encoder-decoder network that obtains dehazed images by gating the important features of three inputs derived from the original hazy image via White Balance (WB), Contrast Enhancing (CE), and Gamma Correction (GC). EPDN[[30](https://arxiv.org/html/2206.06427#bib.bib30)] formulated dehazing as an image-to-image translation problem and proposed an enhanced pix2pix network to solve it; this work also introduced the Perceptual Index (PI) as a metric to evaluate dehazing quality from a perceptual perspective. MSBDN[[31](https://arxiv.org/html/2206.06427#bib.bib31)] used a boosting strategy and error feedback for progressive restoration. 4KDehazing[[32](https://arxiv.org/html/2206.06427#bib.bib32)] proposed an ultra-high-definition image dehazing method via multi-guided bilateral learning, in which CNNs build an affine bilateral grid that preserves detailed edges and textures. Domain-invariant single image dehazing[[33](https://arxiv.org/html/2206.06427#bib.bib33)] recovered images affected by unknown haze distributions using spatially-aware channel attention to ensure feature enhancement and consistency between recovered and neighboring pixels. MSRL-DehazeNet[[34](https://arxiv.org/html/2206.06427#bib.bib34)] embedded multi-scale deep residual learning into an image-decomposition-guided framework for single-image haze removal. DM²F-Net[[35](https://arxiv.org/html/2206.06427#bib.bib35)] presented a deep multi-model fusion network that attentively integrates multiple models in separate layers for single-image dehazing. LAP-Net[[36](https://arxiv.org/html/2206.06427#bib.bib36)] learned different levels of haze with different supervision so that each stage could focus on regions with a specific haze level. Despite the improved performance, learning-based methods also have limitations: they require a large amount of data (often paired hazy and haze-free images), and they offer little ability to diagnose failure cases.
141
+
142
+ ### II-C Object Detection
143
+
144
+ In the deep learning era, object detectors can be grouped into two genres: “two-stage detectors” and “one-stage detectors”. One-stage detectors classify and localize objects in a single-shot for dense prediction. Two-stage detectors have a region proposal module for sparse prediction. Compared to one-stage detectors, two-stage detectors usually achieve better accuracy but lower speed. RCNN series (RCNN[[37](https://arxiv.org/html/2206.06427#bib.bib37)], Fast-RCNN[[38](https://arxiv.org/html/2206.06427#bib.bib38)], Faster-RCNN[[39](https://arxiv.org/html/2206.06427#bib.bib39)], R-FCN[[40](https://arxiv.org/html/2206.06427#bib.bib40)], and Mask-RCNN[[41](https://arxiv.org/html/2206.06427#bib.bib41)]) are the most representative of two-stage detectors. YOLO[[42](https://arxiv.org/html/2206.06427#bib.bib42), [43](https://arxiv.org/html/2206.06427#bib.bib43), [44](https://arxiv.org/html/2206.06427#bib.bib44), [45](https://arxiv.org/html/2206.06427#bib.bib45), [46](https://arxiv.org/html/2206.06427#bib.bib46)], SSD[[47](https://arxiv.org/html/2206.06427#bib.bib47)], RetinaNet[[48](https://arxiv.org/html/2206.06427#bib.bib48)], CenterNet[[49](https://arxiv.org/html/2206.06427#bib.bib49)], and FCOS[[50](https://arxiv.org/html/2206.06427#bib.bib50)] are some of the state-of-the-art one-stage detectors.
145
+
146
+ ##### Architecture
147
+
148
+ The anatomy of an object detector includes a backbone, a neck, and a head. The backbone[[51](https://arxiv.org/html/2206.06427#bib.bib51), [52](https://arxiv.org/html/2206.06427#bib.bib52)] serves as a feature extractor that progresses from low-level structures to high-level semantics. The neck, composed of several bottom-up and top-down paths, collects feature maps from different stages (scales); the most representative necks include FPN[[53](https://arxiv.org/html/2206.06427#bib.bib53)], PAN[[54](https://arxiv.org/html/2206.06427#bib.bib54)], BiFPN[[55](https://arxiv.org/html/2206.06427#bib.bib55)], and NAS-FPN[[56](https://arxiv.org/html/2206.06427#bib.bib56)]. The head makes predictions in a multi-scale fashion: it either decouples object localization and classification (in two-stage detectors) or makes the localization and classification predictions simultaneously (in one-stage detectors). In the head, anchors typically provide priors on the objects' location, shape, and size. Recently, anchor-free one-stage detectors such as CenterNet[[49](https://arxiv.org/html/2206.06427#bib.bib49)], FCOS[[50](https://arxiv.org/html/2206.06427#bib.bib50)], and YOLOX[[46](https://arxiv.org/html/2206.06427#bib.bib46)] have gained popularity.
149
+
150
+ ##### Label Assignment
151
+
152
+ Label assignment defines classification and regression targets for each anchor/grid cell. Traditional assigning strategies utilize local-view information, such as Intersection-over-Union (IoU)[[39](https://arxiv.org/html/2206.06427#bib.bib39), [42](https://arxiv.org/html/2206.06427#bib.bib42)] or Centerness[[50](https://arxiv.org/html/2206.06427#bib.bib50)]. DeTR[[57](https://arxiv.org/html/2206.06427#bib.bib57)] is the first work that revisits label assignments from a global view, by considering one-to-one assignments using the Hungarian algorithm. To obtain the optimal global assignment under the one-to-many situation, OTA[[58](https://arxiv.org/html/2206.06427#bib.bib58)] formulates label assignment as an Optimal Transport problem that defines each ground truth (together with the background) as the supplier and each anchor/grid cell as the demander. Thus, the best assignment could be obtained by solving the optimal transport in a Linear Programming form.
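+
+ As a minimal illustration of the local-view, IoU-based assignment strategy mentioned above (a simplified sketch, not OTA or any particular detector's implementation), anchors whose IoU with a ground-truth box exceeds a threshold are treated as positives for that box:
+
+ ```python
+ import numpy as np
+
+ def iou_matrix(anchors, gts):
+     """Pairwise IoU between anchors and ground-truth boxes, both as (x1, y1, x2, y2)."""
+     ious = np.zeros((len(anchors), len(gts)))
+     for i, a in enumerate(anchors):
+         for j, g in enumerate(gts):
+             ix1, iy1 = max(a[0], g[0]), max(a[1], g[1])
+             ix2, iy2 = min(a[2], g[2]), min(a[3], g[3])
+             inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+             area_a = (a[2] - a[0]) * (a[3] - a[1])
+             area_g = (g[2] - g[0]) * (g[3] - g[1])
+             ious[i, j] = inter / (area_a + area_g - inter + 1e-9)
+     return ious
+
+ def assign_labels(anchors, gts, pos_thr=0.5, neg_thr=0.4):
+     """Per-anchor target index: >= 0 positive (gt index), -1 negative, -2 ignored."""
+     if len(gts) == 0:
+         return np.full(len(anchors), -1, dtype=int)
+     ious = iou_matrix(anchors, gts)
+     best_gt = ious.argmax(axis=1)
+     best_iou = ious.max(axis=1)
+     targets = np.full(len(anchors), -2, dtype=int)   # ignore by default
+     targets[best_iou < neg_thr] = -1                 # background
+     targets[best_iou >= pos_thr] = best_gt[best_iou >= pos_thr]
+     return targets
+ ```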
153
+
154
+ ![Image 17: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/smoke_generator.png)
155
+
156
+ Figure 2: M56E1 smoke-generating system for producing large scale obscuration.
157
+
158
+ ![Image 18: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/data_collection/layout.png)
159
+
160
+ (a)
161
+
162
+ ![Image 19: Refer to caption](https://arxiv.org/html/x1.jpg)
163
+
164
+ (b)
165
+
166
+ ![Image 20: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/data_collection/target_board.png)
167
+
168
+ (c)
169
+
170
+ Figure 3: Example images demonstrating the data collection process. The top figure shows a bird's-eye-view layout of the test site. The yellow circles mark target contrast boards, and the red circle marks an infrared and a visible camera. The bottom two figures show the target contrast boards (the black and white pylons) used for local haze density measurement.
171
+
172
+ III A2I2-Haze Dataset
173
+ ---------------------
174
+
175
+ The A2I2-Haze dataset consists of paired hazy and haze-free aerial and ground imagery acquired by a small UAS and a UGV, respectively. The dataset is synchronized with in-situ smoke measurements as well as altitude data acquired from the flight controller. A2I2-Haze was captured in two separate settings: a) a grass field as the background, and b) ground with concrete slabs as the background. For the aerial dataset, target objects were re-positioned after each trial to generate different configurations of objects, and the images with good pairing were then selected from the trials. However, the annotated ground dataset is currently available for only one configuration of target objects. This section provides a detailed description of the data collection procedure and the post-processing pipeline.
176
+
177
+ ### III-A Data Collection
178
+
179
+ Data collection was conducted at the DEVCOM Chemical and Biological Center’s (CBC) M-field test range utilizing the world’s most comprehensive obscurant generation facility and assessment technologies to measure the smoke concentration accurately. The facility has a platoon of six M56E1 smoke-generating systems (shown in Fig.[2](https://arxiv.org/html/2206.06427#S2.F2 "Figure 2 ‣ Label Assignment ‣ II-C Object Detection ‣ II Previous Work ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth")) that provide large area obscuration by disseminating obscurants either simultaneously or separately while stationary or mobile. The current onboard obscuration capabilities include visual obscuration using fog oil, infrared obscuration using graphite, and radar obscuration using carbon fiber. These obscurants thus can degrade the perception capabilities of threat weapons and Reconnaissance, Intelligence, Surveillance, Targeting, and Acquisition (RISTA) operating in the Visual, Infra-Red (IR), and Millimeter Wave (MMW) portions of the electromagnetic spectrum. Specifically, the obscurant materials are designed to effectively absorb and scatter energy, thus making target acquisition difficult.
180
+
181
+ As part of the Phase 1 DVE data collection, we obtained a wide range of imagery of targets in the DEVCOM CBC M-Field test area using visual obscurants generated with fog oil in the M56E1 Smoke-Generating System. The target objects included vehicles, mannequins, and man-made obstacles encountered during UGV maneuvers, such as traffic cones, barriers, and barricades. The ground truth for smoke concentration was measured using laser-based transmissometers. These instruments measured transmittance through the smoke cloud at a 625 nm wavelength and provided a quantitative measure of the cloud's effectiveness in attenuating visible light. The transmissometers' light sources were Z-Laser S3 series diode modules (Edmund Optics; Barrington, NJ) with an output wavelength of 625 nm and an output power of 5 mW.
182
+
183
+ The laser power received at the detector was measured before obscuration, providing a baseline against which to compare the effect of the fog oil obscurant. During the series of trials, the transmittance was reduced from 100% to below 1%, providing a range of obscurant concentrations to correlate with sensor performance. This variation in transmittance (and thus obscurant concentration) is critical, as real-world scenarios will have varying haze concentrations. All measurements were time-synchronized with the UAV and UGV sensors to relate the obscuration properties to the sensors' responses.
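+
+ To make the transmittance ground truth concrete, the following small sketch (an illustrative example with hypothetical variable names) shows how a transmissometer reading is typically converted into a relative transmittance and a Beer-Lambert optical depth against the pre-obscuration baseline:
+
+ ```python
+ import numpy as np
+
+ def transmittance(power_measured, power_baseline):
+     """Relative transmittance T in [0, 1] against the clear-air baseline reading."""
+     return np.clip(power_measured / power_baseline, 0.0, 1.0)
+
+ def optical_depth(T, eps=1e-6):
+     """Beer-Lambert optical depth tau = -ln(T); larger tau means denser smoke."""
+     return -np.log(np.maximum(T, eps))
+
+ # e.g. a 625 nm laser reading that drops from 5.0 mW (baseline) to 0.04 mW
+ T = transmittance(0.04, 5.0)          # 0.008, i.e. below 1% transmittance
+ tau = optical_depth(T)
+ print(f"T = {T:.3f}, tau = {tau:.2f}")
+ ```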
184
+
185
+ In addition to the transmissometer-based obscuration measurement, we used target boards, as shown in Fig.[3](https://arxiv.org/html/2206.06427#S2.F3 "Figure 3 ‣ Label Assignment ‣ II-C Object Detection ‣ II Previous Work ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth"), as the second technique for haze measurement. The black and white pylons in the bottom two figures show target contrast boards used for haze density local measurement.
186
+
187
+ #### III-A 1 UAV Dataset
188
+
189
+ Aerial video of the DVE releases was captured by Deep Purple 3 (DP3), a custom UAS developed and operated by DEVCOM CBC. DP3 was primarily designed to carry sensor payloads weighing up to 6 lbs using the Array Configured of Remote Networked Sensors (ACoRNS) interface system. In addition to sensor payloads, DP3 can also be equipped with a variety of camera systems. For the A2I2-Haze data collection, a pair of cameras was mounted in the nosecone to capture both the longwave infrared (LWIR) and visible spectra. DP3 was equipped with a FLIR Boson 640 (95° FOV) for LWIR data capture. Calibrated visible spectrum data was captured using an ELP-USBFHD01M-L21 with a 2.1 mm lens, an approximately 120° field of view, and an OV2710 sensor. Cameras mounted on the nosecone could be rotated to face 0°, 45°, or 90° relative to the body of the UAS, with 90° being a nadir-pointing camera and 0° in line with the nose of the UAS. For the A2I2 collection, cameras were clocked at either 90° or 45°. The UAS flew in a lawn-mowing pattern based on pre-programmed GPS coordinates (i.e., waypoints). The flight plan for the collection was defined using a survey grid in Mission Planner to generate a cross-hatch pattern over a predefined area. The area was selected using an arbitrary rectangle within Mission Planner, with the bounds of the rectangle chosen to keep the area of interest within view of the cameras as much as possible over a range of altitudes. The survey grid was cross-hatched with a 10 m lane spacing. In order to maintain consistency between altitudes, the survey grid was copied and the altitude was increased in 5 m increments from 15 to 50 m. The UAS was commanded to face toward the center of the grid at all points during its flight. The GPS coordinate for the center of the grid was determined by manually placing the UAS at the point of interest and recording its GPS position.
190
+
191
+ #### III-A 2 UGV Dataset
192
+
193
+ In addition to the aerial vehicle dataset collection described above, a ground vehicle was also used to capture a second viewpoint from inside the haze effect on the surface. The ground vehicle used to collect this dataset is a Clearpath Husky mobile robot which is equipped with a variety of cameras and lidar sensing modalities. The visual dataset for this paper was collected from the color image from the left camera of a Carnegie Robotics MultisenseSL stereo + lidar sensor, which was mounted in a forward-facing position at the front of the robot at the height of 0.6 meters. This sensor package also generated a frame-synchronized second monochrome image from the right camera, a depth image, and a lidar point cloud from the included Hokuyo 2D lidar scanner, which is actuated through rotation about the forward axis to cover a 3D hemispherical volume. A FLIR Boson infrared camera was also used to capture thermal images of the scene. The robot also collected point clouds from an Ouster OS1-64 lidar which is used with a robot mapping engine based on OmniMapper[[59](https://arxiv.org/html/2206.06427#bib.bib59)] to localize the UGV in a global frame of reference. The trajectory of the UGV was teleoperated from a remote location via live sensor feedback to collect these datasets. In this paper, only the visual imagery is analyzed; however, the analysis of additional sensor modalities is planned for future work and can be made available to other interested researchers.
194
+
195
+ ### III-B Data Augmentation and Synchronization
196
+
197
+ The raw data collected by sensors were subjected to a series of data augmentation and synchronization procedures in order to enhance their quality. The high mobility of UAV-mounted cameras has brought additional challenges compared to traditional datasets, such as variations in altitude, view angles, and weather conditions. NDFT[[60](https://arxiv.org/html/2206.06427#bib.bib60)] named these variations as UAV-specific nuisances, which constitute a large number of fine-grained domains. NDFT shows that these nuisances can be used to train a cross-domain object detector that stays robust to many fine-grained domains. As part of A2I2-Haze, we collected these UAV-specific nuisances as metadata that are synchronized with UAV’s clock.
198
+
199
+ ![Image 21: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/metadata_sync/uav_baro_transm.png)
200
+
201
+ (a)
202
+
203
+ ![Image 22: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/metadata_sync/uav_contrast.png)
204
+
205
+ (b)
206
+
207
+ Figure 4: UAV visual data synchronized with collected metadata. The left figure shows the visual data synchronized with the barometer and transmissometer metadata. The barometer records the height data of the UAV, and the transmissometer records the laser transmission rate, which measures the haze density. The right figure shows the synchronization of visual data with contrast on three target boards, the second technique that measures the haze density.
208
+
209
+ ![Image 23: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/metadata_sync/ugv_transm.png)
210
+
211
+ Figure 5: UGV visual data synchronized with contrast on target board, which is the second technique that measures the haze density.
212
+
213
+ Fig.[4](https://arxiv.org/html/2206.06427#S3.F4 "Figure 4 ‣ III-B Data Augmentation and Synchronization ‣ III A2I2-Haze Dataset ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth") shows the UAV visual data synchronized to the metadata, including the altitude measured by UAV’s barometer, and haze density measured by two independent techniques (section III A)- a) using laser transmission rate on a transmissometer and b) using visual contrasts on three target boards. Similar techniques were used for synchronizing the UGV dataset to haze density measurement, as shown in Fig.[5](https://arxiv.org/html/2206.06427#S3.F5 "Figure 5 ‣ III-B Data Augmentation and Synchronization ‣ III A2I2-Haze Dataset ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth").
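+
+ The sketch below illustrates the kind of nearest-timestamp alignment used to pair each video frame with a metadata sample (an illustrative example; field names such as `timestamp` and `transmittance` are assumed here, not the dataset's actual schema):
+
+ ```python
+ import bisect
+
+ def synchronize(frame_times, meta):
+     """Pair each frame timestamp with the temporally nearest metadata record.
+
+     frame_times: sorted list of frame timestamps (seconds)
+     meta: list of dicts like {"timestamp": t, "altitude_m": h, "transmittance": T},
+           sorted by timestamp
+     """
+     meta_times = [m["timestamp"] for m in meta]
+     paired = []
+     for ft in frame_times:
+         i = bisect.bisect_left(meta_times, ft)
+         # choose the closer of the two neighboring metadata samples
+         candidates = [j for j in (i - 1, i) if 0 <= j < len(meta)]
+         j = min(candidates, key=lambda k: abs(meta_times[k] - ft))
+         paired.append((ft, meta[j]))
+     return paired
+ ```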
214
+
215
+ ### III-C Labeling and pairing of UAV Dataset
216
+
217
+ Quantitative evaluation of low-level dehazing and high-level detection tasks requires paired (hazy and haze-free) images with annotated target objects.
218
+
219
+ The detection task requires accurate data curation using 2D bounding box annotation tools to define regions of interest. Annotation was performed by external annotators from Engineering and Computer Simulations (ECS) using their labeling platform. Ten object classes were provided as an ontology for labeling: Sedan, Van, Pickup Truck, Utility Task Vehicle (UTV), Mannequin, Unmanned Ground Vehicle (UGV), Barrel, Jersey Barrier, Aluminum Truss, and Red Backpack. The annotators provided rectangular bounding boxes for each object. They also provided a subjective haze level for each bounding box: light, medium, or heavy.
220
+
221
+ The dehazing task requires paired hazy and haze-free images. Due to the UAV's high flying altitude and rapid movement, we sought scene-level pairing rather than pixel-level pairing, and we propose a "Coarse-to-Fine Matching" strategy to achieve it. Given two video sequences (hazy and haze-free), we first cut each video into short clips of 2 seconds. We then manually select hazy and haze-free clips that are similar at the scene level. Lastly, for each selected clip pair ($V_s$ and $V_t$), we use Alg.[1](https://arxiv.org/html/2206.06427#alg1) to find the best pair-able frames, i.e., those with the least relative translation or rotation. Since $V_s$ and $V_t$ describe the same scene, we hypothesize that the homography matrix (computed from keypoint matching by LoFTR[[61](https://arxiv.org/html/2206.06427#bib.bib61)]) is mostly composed of rotational and translational transformations. The distance of the homography matrix from the identity matrix serves as a metric of scene-level similarity, and we assume that the best pair-able frames have the smallest distance. We manually check the paired hazy and haze-free images returned by the proposed "Coarse-to-Fine Matching" strategy; if it fails, we manually find the best pair-able images within the selected hazy and haze-free clips. This final human intervention guarantees correctness even in the presence of dense haze.
+
+ N, M ← Size(V_s), Size(V_t)
+ d_min ← ∞
+ for i ← 1 to N do
+     for j ← 1 to M do
+         I_s, I_t ← V_s[i], V_t[j]
+         // Keypoints Matching
+         p_s, p_t ← LoFTR(I_s, I_t)
+         H ← HomographyMatrix(p_s, p_t)
+         d ← sqrt(Trace((H − I)ᵀ (H − I)))
+         if d < d_min then
+             d_min ← d
+             I_s*, I_t* ← I_s, I_t
+ return I_s*, I_t*
+
+ Algorithm 1 Coarse-to-Fine Matching
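+
+ A runnable sketch of Algorithm 1 is given below. It is purely illustrative: LoFTR keypoint matching is replaced by ORB features with a brute-force matcher so that the example is self-contained, and `cv2.findHomography` stands in for HomographyMatrix.
+
+ ```python
+ import cv2
+ import numpy as np
+
+ def frame_distance(img_s, img_t):
+     """Distance of the estimated homography from the identity (Frobenius norm)."""
+     gray_s = cv2.cvtColor(img_s, cv2.COLOR_BGR2GRAY) if img_s.ndim == 3 else img_s
+     gray_t = cv2.cvtColor(img_t, cv2.COLOR_BGR2GRAY) if img_t.ndim == 3 else img_t
+     orb = cv2.ORB_create(2000)
+     k1, d1 = orb.detectAndCompute(gray_s, None)
+     k2, d2 = orb.detectAndCompute(gray_t, None)
+     if d1 is None or d2 is None:
+         return np.inf
+     matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
+     if len(matches) < 4:                              # a homography needs >= 4 correspondences
+         return np.inf
+     p_s = np.float32([k1[m.queryIdx].pt for m in matches])
+     p_t = np.float32([k2[m.trainIdx].pt for m in matches])
+     H, _ = cv2.findHomography(p_s, p_t, cv2.RANSAC, 5.0)
+     if H is None:
+         return np.inf
+     H = H / H[2, 2]                                   # normalize scale before comparing to I
+     diff = H - np.eye(3)
+     return float(np.sqrt(np.trace(diff.T @ diff)))
+
+ def coarse_to_fine_match(clip_s, clip_t):
+     """Return the (hazy, haze-free) frame pair with the smallest relative motion."""
+     best, d_min = None, np.inf
+     for img_s in clip_s:
+         for img_t in clip_t:
+             d = frame_distance(img_s, img_t)
+             if d < d_min:
+                 d_min, best = d, (img_s, img_t)
+     return best
+ ```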
264
+
265
+ ### III-D Labeling and Pairing for UGV Dataset
266
+
267
+ Labeling the UGV dataset presents unique challenges due to the difficulty of determining object bounding boxes in hazy images to a human labeler. To address these issues, a 3D object labeling scheme was developed to assist with the labeling process. Initial bounding boxes were generated by projecting the 3D bounding volumes of each object visible in each image, given the UGV’s known observation point. To determine the UGV’s global position and orientation for each captured image, the Monte-Carlo particle filter-based localization package _AMCL_[[62](https://arxiv.org/html/2206.06427#bib.bib62)] from ROS[[63](https://arxiv.org/html/2206.06427#bib.bib63)] was used to localize into a 2D map which was generated for each obstacle configuration using OmniMapper[[59](https://arxiv.org/html/2206.06427#bib.bib59)]. The lidar data from the Ouster OS1-64 was used with the UGV’s odometry and internal IMU to generate this map. The projections of the 3D object bounding volumes were then refined by hand with LabelMe[[64](https://arxiv.org/html/2206.06427#bib.bib64)].
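+
+ The initial boxes come from projecting known 3D object volumes into each image given the UGV's estimated pose. A simplified sketch of that projection step (illustrative only, assuming a pinhole intrinsic matrix and a world-to-camera transform) is shown below:
+
+ ```python
+ import numpy as np
+
+ def project_box(corners_world, T_world_to_cam, K):
+     """Project the 8 corners of a 3D bounding volume into a 2D image box.
+
+     corners_world: (8, 3) corner coordinates in the global map frame
+     T_world_to_cam: (4, 4) rigid transform derived from the localized UGV pose
+     K: (3, 3) pinhole camera intrinsic matrix
+     Returns (x_min, y_min, x_max, y_max), or None if the object is behind the camera.
+     """
+     pts = np.hstack([np.asarray(corners_world, float), np.ones((8, 1))])
+     cam = (T_world_to_cam @ pts.T)[:3]                # (3, 8) points in the camera frame
+     if np.any(cam[2] <= 0):                           # behind the image plane
+         return None
+     uv = K @ cam
+     uv = uv[:2] / uv[2]                               # perspective divide
+     x_min, y_min = uv.min(axis=1)
+     x_max, y_max = uv.max(axis=1)
+     return float(x_min), float(y_min), float(x_max), float(y_max)
+ ```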
268
+
269
+ Two UGV trajectories were collected for each object configuration: one with the haze effect and one without. A 2D map was generated for each obstacle configuration and used to localize the UGV with Monte-Carlo localization as described in Section [III-A2](https://arxiv.org/html/2206.06427#S3.SS1.SSS2). This global localization was used to tag each image with the UGV's position and orientation at capture time. The position and orientation information was then used with the Hungarian algorithm to find the optimal pairing between hazy and haze-free images, minimizing the error metric $E = 10\Delta_o + \Delta_p$, where $\Delta_o$ is the difference in orientation and $\Delta_p$ is the difference in position. This metric weighs orientation more heavily because changes in orientation have a more significant effect on the image viewpoint.
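+
+ A sketch of that optimal pairing using `scipy.optimize.linear_sum_assignment` (illustrative only; the pose fields are assumed, and the weight of 10 on orientation follows the error metric above):
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+
+ def pair_frames(hazy_poses, clear_poses, w_orient=10.0):
+     """Optimal hazy <-> haze-free pairing minimizing E = 10*d_orientation + d_position.
+
+     Each pose is (x, y, yaw) in the global map frame, with yaw in radians.
+     Returns a list of (hazy_index, clear_index) pairs.
+     """
+     hazy = np.asarray(hazy_poses, dtype=float)
+     clear = np.asarray(clear_poses, dtype=float)
+     d_pos = np.linalg.norm(hazy[:, None, :2] - clear[None, :, :2], axis=2)
+     d_yaw = np.abs(hazy[:, None, 2] - clear[None, :, 2])
+     d_yaw = np.minimum(d_yaw, 2 * np.pi - d_yaw)      # wrap the angular difference
+     cost = w_orient * d_yaw + d_pos
+     rows, cols = linear_sum_assignment(cost)
+     return list(zip(rows.tolist(), cols.tolist()))
+ ```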
270
+
271
+ ![Image 24: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/GT/009196.png)
272
+
273
+ (a)
274
+
275
+ ![Image 25: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/GT/018660.png)
276
+
277
+ (b)
278
+
279
+ ![Image 26: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/GT/020100.png)
280
+
281
+ (c)
282
+
283
+ ![Image 27: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/hazy/009196.png)
284
+
285
+ (d)
286
+
287
+ ![Image 28: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/hazy/018660.png)
288
+
289
+ (e)
290
+
291
+ ![Image 29: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/hazy/020100.png)
292
+
293
+ (f)
294
+
295
+ ![Image 30: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/FFA-Net/009196.png)
296
+
297
+ (g)
298
+
299
+ ![Image 31: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/FFA-Net/018660.png)
300
+
301
+ (h)
302
+
303
+ ![Image 32: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/FFA-Net/020100.png)
304
+
305
+ (i)
306
+
307
+ ![Image 33: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/GCANet/009196.png)
308
+
309
+ (j)
310
+
311
+ ![Image 34: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/GCANet/018660.png)
312
+
313
+ (k)
314
+
315
+ ![Image 35: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/GCANet/020100.png)
316
+
317
+ (l)
318
+
319
+ ![Image 36: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/MSBDN/009196.png)
320
+
321
+ (m)
322
+
323
+ ![Image 37: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/MSBDN/018660.png)
324
+
325
+ (n)
326
+
327
+ ![Image 38: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/MSBDN/020100.png)
328
+
329
+ (o)
330
+
331
+ ![Image 39: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/SRKT/009196.png)
332
+
333
+ (p)
334
+
335
+ ![Image 40: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/SRKT/018660.png)
336
+
337
+ (q)
338
+
339
+ ![Image 41: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/SRKT/020100.png)
340
+
341
+ (r)
342
+
343
+ ![Image 42: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/Trident/009196.png)
344
+
345
+ (s)
346
+
347
+ ![Image 43: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/Trident/018660.png)
348
+
349
+ (t)
350
+
351
+ ![Image 44: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/Trident/020100.png)
352
+
353
+ (u)
354
+
355
+ ![Image 45: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/DWDehazing/009196.png)
356
+
357
+ (v)
358
+
359
+ ![Image 46: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/DWDehazing/018660.png)
360
+
361
+ (w)
362
+
363
+ ![Image 47: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/DWDehazing/020100.png)
364
+
365
+ (x)
366
+
367
+ ![Image 48: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/Cycle-GAN/9690.png)
368
+
369
+ (y)
370
+
371
+ ![Image 49: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/Cycle-GAN/18660.png)
372
+
373
+ (z)
374
+
375
+ ![Image 50: Refer to caption](https://arxiv.org/html/extracted/2206.06427v3/figures/dehaze_images/YOLOX/Cycle-GAN/20100.png)
376
+
377
+ (aa)
378
+
379
+ Figure 6: Examples of detected vehicles on hazy images and dehazed images. All detections are produced from YOLOX-M pretrained on a merged set of VisDrone2019-DET and UAVDT-M. The first row in red frame shows ground-truth bounding-box annotations. The second row in blue frame shows baseline results w/o dehazing. The detected vehicles using different dehazing approaches are arranged from the third row to the last row: FFA-Net, GCANet, MSBDN, SRKT, Trident, DWDehazing, and our proposed Cycle-DehazeNet.
380
+
381
+ ![Image 51: Refer to caption](https://arxiv.org/html/x2.png)
382
+
383
+ Figure 7: Failed examples on dehazed images (detected by YOLOX). YOLOX consistently mistakes the small contrast boards for vehicles (marked in orange on the middle right).
384
+
385
+ TABLE II: Vehicle Detection on A2I2-Haze w/o Dehazing. The first row (A2I2-UAV) shows experimental results on A2I2-UAV w/o NDFT. The second row (A2I2-UAV⁺) shows experimental results on A2I2-UAV w/ NDFT. The third row (A2I2-UGV) shows experimental results on A2I2-UGV.
386
+
387
+ TABLE III: Vehicle Detection on A2I2-Haze w/ Dehazing. The upper and lower tables show experiments on A2I2-UAV and A2I2-UGV, respectively. The numbers in black show the detection scores (AR, AP$_{0.5}$, and AP$_{0.5:0.95}$) after applying dehazing to both training and testing images. The numbers in blue show the relative increase over detection w/o dehazing. The numbers in red show the relative decrease under detection w/o dehazing.
388
+
389
+ ![Image 52: Refer to caption](https://arxiv.org/html/x3.png)
390
+
391
+ (a)
392
+
393
+ ![Image 53: Refer to caption](https://arxiv.org/html/x4.png)
394
+
395
+ (b)
396
+
397
+ Figure 8: Comparison of different detection and dehazing approaches. The left plot shows detection results on A2I2-UAV, and the right plot shows detection results on A2I2-UGV. Scatter points in these two plots correspond to AP$_{0.5:0.95}$ in Table [III](https://arxiv.org/html/2206.06427#S3.T3). The figure gives two perspectives: object detectors (the red line) and dehazing approaches (the gray lines). In the left plot, the dashed gray line with triangles is clearly above the remaining three gray lines, which indicates that CenterNet is the best detector in non-homogeneous haze. On the dashed red line with stars, our proposed Cycle-DehazeNet shows the best dehazing performance, indicated by the largest improvement in the four detectors' performance.
398
+
399
+ ![Image 54: Refer to caption](https://arxiv.org/html/x5.png)
400
+
401
+ (a)
402
+
403
+ Figure 9: Overview of Cycle-DehazeNet. $\rm G_{H2C}$ and $\rm G_{C2H}$ are generators, while $\rm D_{clean}$ and $\rm D_{hazy}$ are discriminators. Note that $\rm G_{H2C}$ translates images from the hazy domain to the haze-free domain, while $\rm G_{C2H}$ translates images from the haze-free domain to the hazy domain. $\rm D_{clean}$ distinguishes real haze-free images from fake ones, and $\rm D_{hazy}$ distinguishes real hazy images from fake ones. A VGG network provides feature-level perceptual loss supervision.
404
+
405
+ IV Quantitative Evaluation
406
+ --------------------------
407
+
408
+ Within the A2I2-Haze dataset, we created two separate subsets, A2I2-UAV and A2I2-UGV, dedicated to the UAV and UGV collections, respectively. A2I2-UAV includes a training set (UAV-train) with 224 pairs of hazy and corresponding haze-free images, plus an additional 240 haze-free images; the testing set (UAV-test) has 119 hazy images. The training set of A2I2-UGV (UGV-train) has 50 pairs of hazy and corresponding haze-free images, plus an additional 200 haze-free images; UGV-test has 200 hazy images.
409
+
410
+ ### IV-A Object Detection
411
+
412
+ #### IV-A 1 Detection Approaches
413
+
414
+ We use four state-of-the-art detectors that are widely used both in industry and academia: (a) YOLOv5[[65](https://arxiv.org/html/2206.06427#bib.bib65)], (b) YOLOX[[46](https://arxiv.org/html/2206.06427#bib.bib46)], (c) Faster R-CNN[[39](https://arxiv.org/html/2206.06427#bib.bib39)], and (d) CenterNet[[49](https://arxiv.org/html/2206.06427#bib.bib49)] to evaluate the proposed A2I2-Haze dataset. For experiments on A2I2-UAV, all detectors are pretrained on a merged set of VisDrone2019-DET[[66](https://arxiv.org/html/2206.06427#bib.bib66)] and UAVDT-M[[67](https://arxiv.org/html/2206.06427#bib.bib67)] and adopt official COCO-based implementation. For experiments on A2I2-UGV, all detectors are pretrained on Cityscapes[[68](https://arxiv.org/html/2206.06427#bib.bib68)] and adopt official COCO-based implementation.
415
+
416
+ #### IV-A 2 Results and Analyses
417
+
418
+ Table [II](https://arxiv.org/html/2206.06427#S3.T2) shows the performance of the various object detectors without dehazing on both the A2I2-UAV and A2I2-UGV subsets of A2I2-Haze. From the first row, we infer that the object detectors can be ordered by their detection performance on A2I2-UAV as follows: CenterNet > YOLOv5 > YOLOX > Faster R-CNN. The second row shows that utilizing the collected altitude metadata to learn an altitude-robust detector via NDFT[[60](https://arxiv.org/html/2206.06427#bib.bib60)] improves detection performance for all four detectors. For A2I2-UGV (the third row), the detectors can be ordered by performance as CenterNet > Faster R-CNN > YOLOv5 > YOLOX. Among the four detectors, YOLOX consistently gives the worst performance on both A2I2-UAV and A2I2-UGV; it frequently mistakes small contrast boards for vehicles, as shown in Fig.[7](https://arxiv.org/html/2206.06427#S3.F7).
419
+
420
+ ### IV-B Dehazing
421
+
422
+ #### IV-B 1 Baseline Approaches
423
+
424
+ We benchmark three state-of-the-art homogeneous dehazing approaches: (I) GCANet[[69](https://arxiv.org/html/2206.06427#bib.bib69)], (II) FFA-Net[[70](https://arxiv.org/html/2206.06427#bib.bib70)], and (III) MSBDN[[31](https://arxiv.org/html/2206.06427#bib.bib31)]. In addition, we also benchmark another set of three state-of-the-art non-homogeneous dehazing approaches: (IV) SRKT[[71](https://arxiv.org/html/2206.06427#bib.bib71)], (V) DWDehaze[[72](https://arxiv.org/html/2206.06427#bib.bib72)], and (VI) Trident-Dehazing[[73](https://arxiv.org/html/2206.06427#bib.bib73)]. All dehazing models are pre-trained using official implementations.
425
+
426
+ #### IV-B 2 Proposed Cycle-DehazeNet
427
+
428
+ Inspired by CycleGAN[[74](https://arxiv.org/html/2206.06427#bib.bib74), [75](https://arxiv.org/html/2206.06427#bib.bib75), [76](https://arxiv.org/html/2206.06427#bib.bib76)], we propose a dehazing model that can be trained for image translation between the hazy and haze-free image domains without paired training samples. Cycle-DehazeNet is trained on 512×512 patches (CycleGAN's constraint): it takes 512×512 inputs and restores 512×512 outputs to minimize computational cost. At inference time, we first pad the input HR image from 1845×1500 to 2048×1536 so that the width and height are divisible by 512. We then extract overlapping 512×512 patches at a stride of 256. Each target pixel $p_t$ in the image may be covered by multiple patches. Finally, the target pixel value is computed as a weighted sum of all overlapping pixels $p_i$ from the different patches, where each weight depends on the distance between the pixel and its patch center and decays following a Gaussian with a controllable sigma parameter. This can be expressed as:
429
+
430
+ $$p_t = \frac{\sum_{i=1}^{N} w_i\, p_i}{\sum_{i=1}^{N} w_i},\qquad(2)$$
431
+
432
+ where $N$ is the number of patches covering the target pixel and $w_i$ follows a Gaussian distribution with respect to the geometric distance between the patch center and the target pixel. Following [[74](https://arxiv.org/html/2206.06427#bib.bib74)], Cycle-DehazeNet uses two losses: a cycle-consistency loss[[76](https://arxiv.org/html/2206.06427#bib.bib76)] and a perceptual loss[[77](https://arxiv.org/html/2206.06427#bib.bib77)]. The cycle-consistency loss removes the need for paired training samples. The perceptual loss recovers local texture information that is heavily corrupted by haze and preserves the original image structure by comparing high- and low-level features extracted from a VGG network. We observe that the hazy areas in our dataset are mostly distributed around the four corners rather than the central part, so we want the proposed method to focus on hazy regions rather than clean ones. The four 512×512 patches cropped from the corners of each image are therefore used as "focus" patches (red bounding boxes in Fig. [9](https://arxiv.org/html/2206.06427#S3.F9 "Figure 9 ‣ III-D Labeling and Pairing for UGV Dataset ‣ III A2I2-Haze Dataset ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth")), and they are assigned larger weights in the final loss so that hazy areas are emphasized over the image as a whole.
433
+
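+ To make the patch-stitching step concrete, the snippet below is a minimal NumPy sketch of the Gaussian-weighted blending in Eq. (2). The patch size, stride, and padding follow the description above, but the sigma value is an illustrative assumption and `dehaze_patch` is a hypothetical stand-in for the trained network applied to a single patch.
+
+ ```python
+ import numpy as np
+
+ PATCH, STRIDE, SIGMA = 512, 256, 128.0  # sigma is an illustrative choice
+
+ def gaussian_weight(patch=PATCH, sigma=SIGMA):
+     """2-D Gaussian weight map peaked at the patch center."""
+     ax = np.arange(patch) - (patch - 1) / 2.0
+     g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
+     return np.outer(g, g)  # shape (patch, patch)
+
+ def blend_patches(hazy, dehaze_patch):
+     """Dehaze an HR image with overlapping patches and Eq. (2) blending.
+
+     `hazy` is an (h, w, 3) array; `dehaze_patch` is a hypothetical callable
+     mapping a (512, 512, 3) patch to its dehazed version.
+     """
+     h, w, _ = hazy.shape
+     # Pad so height/width are divisible by the patch size
+     # (e.g., 1845x1500 -> 2048x1536 as in the paper).
+     pad_h = (PATCH - h % PATCH) % PATCH
+     pad_w = (PATCH - w % PATCH) % PATCH
+     padded = np.pad(hazy, ((0, pad_h), (0, pad_w), (0, 0)), mode="reflect")
+     H, W, _ = padded.shape
+
+     acc = np.zeros_like(padded, dtype=np.float64)   # sum of w_i * p_i
+     norm = np.zeros((H, W, 1), dtype=np.float64)    # sum of w_i
+     weight = gaussian_weight()[..., None]           # (512, 512, 1)
+
+     for top in range(0, H - PATCH + 1, STRIDE):
+         for left in range(0, W - PATCH + 1, STRIDE):
+             patch = padded[top:top + PATCH, left:left + PATCH]
+             out = dehaze_patch(patch)
+             acc[top:top + PATCH, left:left + PATCH] += weight * out
+             norm[top:top + PATCH, left:left + PATCH] += weight
+
+     blended = acc / np.maximum(norm, 1e-8)
+     return blended[:h, :w]  # crop the padding back off
+ ```
+
+ Under these assumptions, each interior pixel is covered by up to four patches and receives the largest weight from the patch whose center lies closest to it, which suppresses visible seams between patches.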
434
+ #### IV-B 3 Evaluation Metric
435
+
436
+ We use the object detection score on dehazed images as the metric to quantitatively measure how well a dehazing technique restores image semantics. We use the same four detectors as in Section [IV-A](https://arxiv.org/html/2206.06427#S4.SS1 "IV-A Object Detection ‣ IV Quantitative Evaluation ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth"): (a) YOLOv5, (b) YOLOX, (c) Faster R-CNN, and (d) CenterNet.
437
+
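+ As a concrete illustration of this metric, the snippet below is a minimal sketch of one common way to compute COCO-style AP on a detector's outputs over the dehazed images using pycocotools; the file names `annotations.json` and `detections_on_dehazed.json` are placeholders for illustration rather than files shipped with A2I2-Haze.
+
+ ```python
+ from pycocotools.coco import COCO
+ from pycocotools.cocoeval import COCOeval
+
+ # Ground-truth boxes (COCO format) and the detector's predictions produced
+ # on the *dehazed* images; both paths are placeholders.
+ coco_gt = COCO("annotations.json")
+ coco_dt = coco_gt.loadRes("detections_on_dehazed.json")
+
+ evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
+ evaluator.evaluate()
+ evaluator.accumulate()
+ evaluator.summarize()  # prints AP averaged over IoU 0.5:0.95, AP@0.5, etc.
+
+ ap_50_95 = evaluator.stats[0]  # the AP_0.5:0.95 value reported in the results
+ print(f"AP_0.5:0.95 = {ap_50_95:.3f}")
+ ```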
438
+ #### IV-B 4 Results and Analyses
439
+
440
+ Fig. [6](https://arxiv.org/html/2206.06427#S3.F6 "Figure 6 ‣ III-D Labeling and Pairing for UGV Dataset ‣ III A2I2-Haze Dataset ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth") shows detection results on the original hazy images and on the dehazed images. Fig. [8](https://arxiv.org/html/2206.06427#S3.F8 "Figure 8 ‣ III-D Labeling and Pairing for UGV Dataset ‣ III A2I2-Haze Dataset ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth") uses the AP$_{0.5:0.95}$ score to compare the various detectors and dehazing approaches, and offers two perspectives: object detectors (the red line) and dehazing approaches (the gray lines). The red line shows the AP$_{0.5:0.95}$ averaged across the four detectors for each dehazing approach; the gray lines show the AP$_{0.5:0.95}$ of each of the four detectors across the different dehazing approaches. Fig. [6](https://arxiv.org/html/2206.06427#S3.F6 "Figure 6 ‣ III-D Labeling and Pairing for UGV Dataset ‣ III A2I2-Haze Dataset ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth") also shows the visual effect of the various dehazing approaches on three randomly selected images. Among the seven dehazing approaches evaluated, FFA-Net leads to the worst detection performance, while our proposed Cycle-DehazeNet gives the best mAP with YOLOv5, Faster R-CNN, and CenterNet. In general, we observe that the non-homogeneous dehazing algorithms yield better mAP than the homogeneous ones.
441
+
442
+ V Discussions and Future Works
443
+ ------------------------------
444
+
445
+ As A2I2-Haze is the first attempt to develop a real-world hazy dataset with in-situ measurements, we acknowledge several limitations in the data collection procedure and propose the following research directions to address these gaps:
446
+
447
+ * •
448
+ Object Diversity: Both UAV and UGV datasets have a limited number of object categories and instances. Improving the diversity in objects will be a crucial step towards developing a more comprehensive dataset.
449
+
450
+ * •
451
+ Background Diversity: A2I2-Haze was primarily collected in two different scenarios: with concrete and grass backgrounds. Including additional scenes with diverse backgrounds will make the dataset a more challenging benchmark.
452
+
453
+ * •
454
+ Global Haze Measure: The smoke measurement for A2I2-Haze was made using two laser transmissometers. These measurements can be highly localized in a non-homogeneous setting. A novel sensor network for grid-based measurement is required to improve the accuracy of the in-situ data.
455
+
456
+ * •
457
+ Synchronized Aerial-Ground View: A2I2-Haze could be extended to a multi-view dataset using air-ground coordination and multi-source image matching.
458
+
459
+ * •
460
+ Jointly Optimized Dehazing and Detection Pipeline: If jointly optimized, low-level image processing could be made beneficial for high-level semantic tasks[[78](https://arxiv.org/html/2206.06427#bib.bib78), [27](https://arxiv.org/html/2206.06427#bib.bib27)]. In this study, dehazing is treated as a pre-processing step for the subsequent detection task. As a future research direction, detection and dehazing could instead be jointly designed and trained to optimize detection performance in the presence of haze (see the sketch after this list); this could be particularly significant for aerial-view imagery. Furthermore, since such an approach removes the constraint of maintaining the aesthetic quality of the image, dehazing could be used purely to restore image semantics for object detection.
461
+
462
+ * •
463
+ Metadata Utilization: The annotated UAV-specific nuisances, or metadata, in A2I2-Haze could be utilized to improve the robustness of the learned features via adversarial training, as discussed in NDFT[[60](https://arxiv.org/html/2206.06427#bib.bib60)]. In Tab. [II](https://arxiv.org/html/2206.06427#S3.T2 "TABLE II ‣ III-D Labeling and Pairing for UGV Dataset ‣ III A2I2-Haze Dataset ‣ A Multi-purpose Realistic Haze Benchmark with Quantifiable Haze Levels and Ground Truth"), we showed some preliminary success by learning a detector robust to altitude variation on A2I2-UAV. Utilizing the haze density metadata to better train a dehazing-aware detector could be another future research direction.
464
+
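+ For the joint dehazing-and-detection direction above, the following is a minimal PyTorch-style sketch of what such a pipeline could look like, assuming a differentiable dehazing network `dehazer` and a detector that returns its training loss. The module interfaces, the loss weight `lambda_rec`, and the optional haze-free supervision are illustrative assumptions, not part of the A2I2-Haze baselines.
+
+ ```python
+ import torch
+
+ def joint_training_step(dehazer, detector, optimizer, batch, lambda_rec=0.1):
+     """One sketch of a jointly optimized dehaze-then-detect training step.
+
+     `batch` is assumed to contain:
+       hazy    - (B, 3, H, W) hazy images
+       clean   - (B, 3, H, W) paired haze-free images (optional supervision)
+       targets - ground-truth boxes/labels in whatever format `detector` expects
+     `detector(images, targets)` is assumed to return a scalar detection loss
+     in training mode.
+     """
+     optimizer.zero_grad()
+
+     dehazed = dehazer(batch["hazy"])                # low-level restoration
+     det_loss = detector(dehazed, batch["targets"])  # high-level semantic task
+
+     # Optional restoration loss; dropping it (lambda_rec = 0) lets dehazing
+     # specialize purely toward restoring image semantics for detection.
+     rec_loss = torch.nn.functional.l1_loss(dehazed, batch["clean"])
+
+     loss = det_loss + lambda_rec * rec_loss
+     loss.backward()   # gradients reach the dehazer through the detector
+     optimizer.step()
+     return loss.item()
+ ```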
465
+ VI Conclusion
466
+ -------------
467
+
468
+ In this paper, we developed a challenging UAV and UGV dataset with realistic haze for foundational research in scene understanding under obscured conditions. Advanced smoke-generation and measurement techniques were used to build this dataset for object detection and dehazing tasks. All images were carefully curated using a combination of manual and partially automated techniques, and the images are synchronized with the haze density measurements and the altitude of the UAV. Quantitative evaluation was performed using state-of-the-art object detectors and dehazing models. Overall, the baseline approaches were found to perform poorly on A2I2-Haze, and we hope the dataset will serve as a good benchmark for future algorithmic advances. As discussed in the paper, we also plan to address the limitations of A2I2-Haze in future research efforts.
469
+
470
+ Acknowledgment
471
+ --------------
472
+
473
+ The authors would like to thank the US Army Artificial Intelligence Innovation Institute (A2I2) for funding the data collection and annotation and for financially supporting graduate students at Texas A&M University on this project. The authors also acknowledge the experimentation support from the DEVCOM Chemical & Biological Center (CBC) during the data collection process.
474
+
475
+ References
476
+ ----------
477
+
478
+ * [1] B.Li, W.Ren, D.Fu, D.Tao, D.Feng, W.Zeng, and Z.Wang, “Benchmarking single-image dehazing and beyond,” _IEEE Transactions on Image Processing_, vol.28, no.1, pp. 492–505, 2019.
479
+ * [2] C.O. Ancuti, C.Ancuti, and R.Timofte, “NH-HAZE: an image dehazing benchmark with non-homogeneous hazy and haze-free images,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, ser. IEEE CVPR 2020, 2020.
480
+ * [3] X.Zhang, H.Dong, J.Pan, C.Zhu, Y.Tai, C.Wang, J.Li, F.Huang, and F.Wang, “Learning to restore hazy video: A new real-world dataset and a new method,” in _CVPR_, 2021, pp. 9239–9248.
481
+ * [4] J.Zhang, Y.Cao, Z.-J. Zha, and D.Tao, “Nighttime dehazing with a synthetic benchmark,” in _Proceedings of the 28th ACM International Conference on Multimedia_, 2020, pp. 2355–2363.
482
+ * [5] Y.Zhang, L.Ding, and G.Sharma, “Hazerd: an outdoor scene dataset and benchmark for single image dehazing,” in _Proc. IEEE Intl. Conf. Image Proc._, 2017, pp. 3205–3209. [Online]. Available: paper: [http://www.ece.rochester.edu/~gsharma/papers/Zhang_ICIP2017_HazeRD.pdf](http://www.ece.rochester.edu/~gsharma/papers/Zhang_ICIP2017_HazeRD.pdf); project page and dataset: [https://labsites.rochester.edu/gsharma/research/computer-vision/hazerd/](https://labsites.rochester.edu/gsharma/research/computer-vision/hazerd/)
483
+ * [6] B.Xiao, Z.Zheng, X.Chen, C.Lv, Y.Zhuang, and T.Wang, “Single uhd image dehazing via interpretable pyramid network,” 2022. [Online]. Available: [https://arxiv.org/abs/2202.08589](https://arxiv.org/abs/2202.08589)
484
+ * [7] W.Liu, X.Hou, J.Duan, and G.Qiu, “End-to-end single image fog removal using enhanced cycle consistent adversarial networks,” _IEEE Transactions on Image Processing_, vol.29, pp. 7819–7833, 2020.
485
+ * [8] S.Zhao, L.Zhang, S.Huang, Y.Shen, and S.Zhao, “Dehazing evaluation: Real-world benchmark datasets, criteria, and baselines,” _IEEE Transactions on Image Processing_, vol.29, pp. 6947–6962, 2020.
486
+ * [9] C.O. Ancuti, C.Ancuti, M.Sbert, and R.Timofte, “Dense-haze: A benchmark for image dehazing with dense-haze and haze-free images,” in _2019 IEEE international conference on image processing (ICIP)_.IEEE, 2019, pp. 1014–1018.
487
+ * [10] C.O. Ancuti, C.Ancuti, R.Timofte, and C.D. Vleeschouwer, “O-haze: a dehazing benchmark with real hazy and haze-free outdoor images,” in _IEEE Conference on Computer Vision and Pattern Recognition, NTIRE Workshop_, ser. NTIRE CVPR’18, 2018.
488
+ * [11] ——, “I-haze: a dehazing benchmark with real hazy and haze-free indoor images,” in _arXiv:1804.05091v1_, 2018.
489
+ * [12] E.J. McCartney, “Optics of the atmosphere: scattering by molecules and particles,” _New York_, 1976.
490
+ * [13] S.K. Nayar and S.G. Narasimhan, “Vision in bad weather,” in _Proceedings of the seventh IEEE international conference on computer vision_, vol.2.IEEE, 1999, pp. 820–827.
491
+ * [14] S.G. Narasimhan and S.K. Nayar, “Contrast restoration of weather degraded images,” _IEEE transactions on pattern analysis and machine intelligence_, vol.25, no.6, pp. 713–724, 2003.
492
+ * [15] H.Peng and R.Rao, “Image enhancement of fog-impaired scenes with variable visibility,” in _2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07_, vol.2.IEEE, 2007, pp. II–389.
493
+ * [16] R.Rao and S.Lee, “Algorithms for scene restoration and visibility estimation from aerosol scatter impaired images,” in _IEEE International Conference on Image Processing 2005_, vol.1.IEEE, 2005, pp. I–929.
494
+ * [17] D.Berman, S.Avidan _et al._, “Non-local image dehazing,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 1674–1682.
495
+ * [18] R.Fattal, “Single image dehazing,” _ACM transactions on graphics (TOG)_, vol.27, no.3, pp. 1–9, 2008.
496
+ * [19] ——, “Dehazing using color-lines,” _ACM transactions on graphics (TOG)_, vol.34, no.1, pp. 1–14, 2014.
497
+ * [20] K.He, J.Sun, and X.Tang, “Single image haze removal using dark channel prior,” _IEEE transactions on pattern analysis and machine intelligence_, vol.33, no.12, pp. 2341–2353, 2010.
498
+ * [21] R.T. Tan, “Visibility in bad weather from a single image,” in _2008 IEEE conference on computer vision and pattern recognition_.IEEE, 2008, pp. 1–8.
499
+ * [22] Q.Zhu, J.Mai, and L.Shao, “A fast single image haze removal algorithm using color attenuation prior,” _IEEE transactions on image processing_, vol.24, no.11, pp. 3522–3533, 2015.
500
+ * [23] W.Ren, S.Liu, H.Zhang, J.Pan, X.Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in _European conference on computer vision_.Springer, 2016, pp. 154–169.
501
+ * [24] B.Cai, X.Xu, K.Jia, C.Qing, and D.Tao, “Dehazenet: An end-to-end system for single image haze removal,” _IEEE Transactions on Image Processing_, vol.25, no.11, pp. 5187–5198, 2016.
502
+ * [25] H.Zhang and V.M. Patel, “Densely connected pyramid dehazing network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2018, pp. 3194–3203.
503
+ * [26] R.Li, J.Pan, Z.Li, and J.Tang, “Single image dehazing via conditional generative adversarial network,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2018, pp. 8202–8211.
504
+ * [27] B.Li, X.Peng, Z.Wang, J.Xu, and D.Feng, “Aod-net: All-in-one dehazing network,” in _Proceedings of the IEEE international conference on computer vision_, 2017, pp. 4770–4778.
505
+ * [28] ——, “End-to-end united video dehazing and detection,” in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol.32, no.1, 2018.
506
+ * [29] W.Ren, L.Ma, J.Zhang, J.Pan, X.Cao, W.Liu, and M.-H. Yang, “Gated fusion network for single image dehazing,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2018, pp. 3253–3261.
507
+ * [30] Y.Qu, Y.Chen, J.Huang, and Y.Xie, “Enhanced pix2pix dehazing network,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 8160–8168.
508
+ * [31] H.Dong, J.Pan, L.Xiang, Z.Hu, X.Zhang, F.Wang, and M.-H. Yang, “Multi-scale boosted dehazing network with dense feature fusion,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 2157–2167.
509
+ * [32] Z.Zheng, W.Ren, X.Cao, X.Hu, T.Wang, F.Song, and X.Jia, “Ultra-high-definition image dehazing via multi-guided bilateral learning,” in _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.IEEE, 2021, pp. 16 180–16 189.
510
+ * [33] P.Shyam, K.-J. Yoon, and K.-S. Kim, “Towards domain invariant single image dehazing,” in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol.35, no.11, 2021, pp. 9657–9665.
511
+ * [34] C.-H. Yeh, C.-H. Huang, and L.-W. Kang, “Multi-scale deep residual learning-based single image haze removal via image decomposition,” _IEEE Transactions on Image Processing_, vol.29, pp. 3153–3167, 2019.
512
+ * [35] Z.Deng, L.Zhu, X.Hu, C.-W. Fu, X.Xu, Q.Zhang, J.Qin, and P.-A. Heng, “Deep multi-model fusion for single-image dehazing,” in _Proceedings of the IEEE/CVF international conference on computer vision_, 2019, pp. 2453–2462.
513
+ * [36] Y.Li, Q.Miao, W.Ouyang, Z.Ma, H.Fang, C.Dong, and Y.Quan, “Lap-net: Level-aware progressive network for image dehazing,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 3276–3285.
514
+ * [37] R.Girshick, J.Donahue, T.Darrell, and J.Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2014, pp. 580–587.
515
+ * [38] R.Girshick, “Fast r-cnn,” in _Proceedings of the IEEE international conference on computer vision_, 2015, pp. 1440–1448.
516
+ * [39] S.Ren, K.He, R.Girshick, and J.Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” _TPAMI_, 2016.
517
+ * [40] J.Dai, Y.Li, K.He, and J.Sun, “R-fcn: Object detection via region-based fully convolutional networks,” _Advances in neural information processing systems_, vol.29, 2016.
518
+ * [41] K.He, G.Gkioxari, P.Dollár, and R.Girshick, “Mask r-cnn,” in _Proceedings of the IEEE international conference on computer vision_, 2017, pp. 2961–2969.
519
+ * [42] J.Redmon, S.Divvala, R.Girshick, and A.Farhadi, “You only look once: Unified, real-time object detection,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 779–788.
520
+ * [43] J.Redmon and A.Farhadi, “Yolo9000: better, faster, stronger,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 7263–7271.
521
+ * [44] ——, “Yolov3: An incremental improvement,” _arXiv preprint arXiv:1804.02767_, 2018.
522
+ * [45] A.Bochkovskiy, C.-Y. Wang, and H.-Y.M. Liao, “Yolov4: Optimal speed and accuracy of object detection,” _arXiv preprint arXiv:2004.10934_, 2020.
523
+ * [46] Z.Ge, S.Liu, F.Wang, Z.Li, and J.Sun, “Yolox: Exceeding yolo series in 2021,” _arXiv preprint arXiv:2107.08430_, 2021.
524
+ * [47] W.Liu, D.Anguelov, D.Erhan, C.Szegedy, S.Reed, C.-Y. Fu, and A.C. Berg, “Ssd: Single shot multibox detector,” in _European conference on computer vision_.Springer, 2016, pp. 21–37.
525
+ * [48] T.-Y. Lin, P.Goyal, R.Girshick, K.He, and P.Dollár, “Focal loss for dense object detection,” in _Proceedings of the IEEE international conference on computer vision_, 2017, pp. 2980–2988.
526
+ * [49] X.Zhou, D.Wang, and P.Krähenbühl, “Objects as points,” _arXiv preprint arXiv:1904.07850_, 2019.
527
+ * [50] Z.Tian, C.Shen, H.Chen, and T.He, “Fcos: Fully convolutional one-stage object detection,” in _Proceedings of the IEEE/CVF international conference on computer vision_, 2019, pp. 9627–9636.
528
+ * [51] K.He, X.Zhang, S.Ren, and J.Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 770–778.
529
+ * [52] A.G. Howard, M.Zhu, B.Chen, D.Kalenichenko, W.Wang, T.Weyand, M.Andreetto, and H.Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” _arXiv preprint arXiv:1704.04861_, 2017.
530
+ * [53] T.-Y. Lin, P.Dollár, R.Girshick, K.He, B.Hariharan, and S.Belongie, “Feature pyramid networks for object detection,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 2117–2125.
531
+ * [54] S.Liu, L.Qi, H.Qin, J.Shi, and J.Jia, “Path aggregation network for instance segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2018, pp. 8759–8768.
532
+ * [55] M.Tan, R.Pang, and Q.V. Le, “Efficientdet: Scalable and efficient object detection,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2020, pp. 10 781–10 790.
533
+ * [56] G.Ghiasi, T.-Y. Lin, and Q.V. Le, “Nas-fpn: Learning scalable feature pyramid architecture for object detection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 7036–7045.
534
+ * [57] N.Carion, F.Massa, G.Synnaeve, N.Usunier, A.Kirillov, and S.Zagoruyko, “End-to-end object detection with transformers,” in _European conference on computer vision_.Springer, 2020, pp. 213–229.
535
+ * [58] Z.Ge, S.Liu, Z.Li, O.Yoshie, and J.Sun, “Ota: Optimal transport assignment for object detection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 303–312.
536
+ * [59] A.J. Trevor, J.G. Rogers, and H.I. Christensen, “Omnimapper: A modular multimodal mapping framework,” in _2014 IEEE international conference on robotics and automation (ICRA)_.IEEE, 2014, pp. 1983–1990.
537
+ * [60] Z.Wu, K.Suresh, P.Narayanan, H.Xu, H.Kwon, and Z.Wang, “Delving into robust object detection from unmanned aerial vehicles: A deep nuisance disentanglement approach,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 1201–1210.
538
+ * [61] J.Sun, Z.Shen, Y.Wang, H.Bao, and X.Zhou, “Loftr: Detector-free local feature matching with transformers,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2021, pp. 8922–8931.
539
+ * [62] “Ros navigation stack: A 2d navigation stack that takes in information from odometry, sensor streams, and a goal pose and outputs safe velocity commands that are sent to a mobile base.” [https://github.com/ros-planning/navigation](https://github.com/ros-planning/navigation).
540
+ * [63] M.Quigley, K.Conley, B.Gerkey, J.Faust, T.Foote, J.Leibs, R.Wheeler, A.Y. Ng _et al._, “Ros: an open-source robot operating system,” in _ICRA workshop on open source software_, vol.3, no. 3.2.Kobe, Japan, 2009, p.5.
541
+ * [64] “Image polygonal annotation with python,” [https://github.com/wkentaro/labelme](https://github.com/wkentaro/labelme).
542
+ * [65] ultralytics, “yolov5,” [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5), 2022.
543
+ * [66] Y.Cao, Z.He, L.Wang, W.Wang, Y.Yuan, D.Zhang, J.Zhang, P.Zhu, L.Van Gool, J.Han _et al._, “Visdrone-det2021: The vision meets drone object detection challenge results,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 2847–2854.
544
+ * [67] D.Du, Y.Qi, H.Yu, Y.Yang, K.Duan, G.Li, W.Zhang, Q.Huang, and Q.Tian, “The unmanned aerial vehicle benchmark: Object detection and tracking,” in _Proceedings of the European conference on computer vision (ECCV)_, 2018, pp. 370–386.
545
+ * [68] M.Cordts, M.Omran, S.Ramos, T.Rehfeld, M.Enzweiler, R.Benenson, U.Franke, S.Roth, and B.Schiele, “The cityscapes dataset for semantic urban scene understanding,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 3213–3223.
546
+ * [69] D.Chen, M.He, Q.Fan, J.Liao, L.Zhang, D.Hou, L.Yuan, and G.Hua, “Gated context aggregation network for image dehazing and deraining,” in _2019 IEEE winter conference on applications of computer vision (WACV)_.IEEE, 2019, pp. 1375–1383.
547
+ * [70] X.Qin, Z.Wang, Y.Bai, X.Xie, and H.Jia, “Ffa-net: Feature fusion attention network for single image dehazing,” in _Proceedings of the AAAI Conference on Artificial Intelligence_, vol.34, no.07, 2020, pp. 11 908–11 915.
548
+ * [71] T.Chen, J.Fu, W.Jiang, C.Gao, and S.Liu, “Srktdn: Applying super resolution method to dehazing task,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 487–496.
549
+ * [72] M.Fu, H.Liu, Y.Yu, J.Chen, and K.Wang, “Dw-gan: A discrete wavelet transform gan for nonhomogeneous dehazing,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 203–212.
550
+ * [73] J.Liu, H.Wu, Y.Xie, Y.Qu, and L.Ma, “Trident dehazing network,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_, 2020, pp. 430–431.
551
+ * [74] D.Engin, A.Genç, and H.Kemal Ekenel, “Cycle-dehaze: Enhanced cyclegan for single image dehazing,” in _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_, 2018, pp. 825–833.
552
+ * [75] C.You, G.Li, Y.Zhang, X.Zhang, H.Shan, M.Li, S.Ju, Z.Zhao, Z.Zhang, W.Cong _et al._, “Ct super-resolution gan constrained by the identical, residual, and cycle learning ensemble (gan-circle),” _IEEE transactions on medical imaging_, vol.39, no.1, pp. 188–203, 2019.
553
+ * [76] J.-Y. Zhu, T.Park, P.Isola, and A.A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in _Proceedings of the IEEE international conference on computer vision_, 2017, pp. 2223–2232.
554
+ * [77] J.Johnson, A.Alahi, and L.Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in _European conference on computer vision_.Springer, 2016, pp. 694–711.
555
+ * [78] D.Liu, B.Wen, J.Jiao, X.Liu, Z.Wang, and T.S. Huang, “Connecting image denoising and high-level vision tasks via deep learning,” _IEEE Transactions on Image Processing_, vol.29, pp. 3695–3706, 2020.
556
+
557
+ ![Image 55: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/pn.jpg)Priya Narayanan received her Ph.D. from the Mechanical Engineering Department at the University of Maryland Baltimore County (UMBC). She was a National Research Council (NRC) Fellow in the Navy Center for Applied Research in Artificial Intelligence (NCARAI) at the Naval Research Laboratory (NRL). Currently she is an Engineering Researcher at DEVCOM Army Research Laboratory. She has led key research programs in aerial and ground robotics, including perception integration activities for the final capstone experimentation of the Robotic Collaborative Technology Alliance (Robotic CTA) Program.
558
+
559
+ ![Image 56: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/Xin.jpg)Xin Hu received the B.Eng. from Huazhong University of Science and Technology in 2019 and M.S. from Texas A&M University in 2021. He is currently a Ph.D. student advised by Dr. Zhengming (Allan) Ding at Tulane University. His research interests include decision-making in autonomous vehicles, video understanding, and multi-modality.
560
+
561
+ ![Image 57: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/ZhenyuWu.jpg)Zhenyu Wu received the B.Eng. from Shanghai Jiao Tong University in 2015 and Ph.D. from Texas A&M University in 2021. His Ph.D. advisor is Dr. Zhangyang Wang. His research interests include object detection, video understanding, efficient vision on embedded systems, and adversarial machine learning. He is currently a researcher at Wormpex AI Research. He is closely working with Dr. Zhou Ren, Dr. Yi Wu and Dr. Gang Hua.
562
+
563
+ ![Image 58: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/MThielke-photo.jpg)Matthew Thielke received his B.S. in Physics from Virginia Tech. He worked at the DEVCOM C5ISR’s Night Vision and Electronic Sensors Directorate in Electro-Optic measurements and processing. Currently at DEVCOM U.S. Army Research Laboratory, he has extensive measurement experience with Electro-Optical sensors, including infrared imagers and hyperspectral imagers. He has led data collection efforts in thermal and visible face recognition, including a “Large-Scale, Time-Synchronized Visible and Thermal Face Dataset.”
564
+
565
+ ![Image 59: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/John_Rogers.jpg)John G Rogers received his Ph.D. in Robotics at the Georgia Institute of Technology, his master’s in Computer Science at Stanford, and his master’s and bachelor’s degrees in Electrical and Computer Engineering at Carnegie Mellon. He is currently a senior research scientist at the US DEVCOM Army Research Laboratory (ARL), where he is leading research efforts in coordinated tactical maneuver in complex and contested terrain for teams of ground robots.
566
+
567
+ ![Image 60: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/AndreHarrison_sp19.jpg)Andre V. Harrison received his Ph.D. in Electrical and Computer Engineering from Johns Hopkins University, his M.E. and B.E. in Electrical and Computer Engineering from Cornell University. He is currently a Computer Engineer with the US DEVCOM Army Research Laboratory (ARL). He has led key research efforts in visual perception for ground vehicles as part of the A.I. for Maneuver and Mobility Essential Research Program. He has also led research projects estimating visual salience and predicting user state.
568
+
569
+ ![Image 61: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/john_dagostino.jpg)John D’Agostino received his B.S. from The Pennsylvania State University in Mechanical Engineering. He is currently a Mechanical Engineer with the U.S. Army DEVCOM Chemical and Biological Center in the Research and Technology Directorate. He leads a variety of countermeasure activities, including the testing of sensors in various degraded battlefield conditions.
570
+
571
+ ![Image 62: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/jamesbrown.jpg)James D Brown is a Mechanical Engineer and contractor supporting the US Army DEVCOM-CBC Advanced Design and Manufacturing division. He has supported a variety of UAV and UGV projects from design to implementation, and served as the primary ground control station operator during DVE testing.
572
+
573
+ ![Image 63: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/Long_Quang.png)Long Quang acquired a Bachelor of Science in Electrical Engineering at the University of Texas at Dallas. He is currently an Electronics Engineer with the U.S. Army DEVCOM Computational and Information Sciences Directorate, where he actively facilitates robotics research and systems integration.
574
+
575
+ ![Image 64: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/Uplinger.jpg)James Uplinger received his doctorate and master’s from the Physics Department at the University of Arkansas, and his bachelor’s from the Physics Department at the Rochester Institute of Technology (RIT). Currently he is an Associate Principal Data Scientist at Huntington Ingalls Industries, supporting DEVCOM Army Research Laboratory. He has led research programs in synthetic infrared (IR) image generation, Radio Frequency (RF) side-channel analytics, and biotechnology development.
576
+
577
+ ![Image 65: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/HeesungKwon_new.jpg)Heesung Kwon is a Senior Researcher and Team Lead at the DEVCOM Army Research Laboratory (ARL). He received the B.Sc. degree in Electronic Engineering from Sogang University, Seoul, Korea, in 1984, and the M.S. and Ph.D. degrees in Electrical Engineering from the State University of New York at Buffalo in 1995 and 1999, respectively. Dr. Kwon rejoined ARL in August 2007 as Team Lead and has been leading various AI/ML efforts pertaining to semantic scene understanding in resource-constrained environments and unmanned aerial systems (UAS)-based perception and action/activity recognition at the edge, primarily leveraging machine learning-based approaches.
578
+
579
+ ![Image 66: [Uncaptioned image]](https://arxiv.org/html/extracted/2206.06427v3/figures/bio_images/atlaswang.jpeg)Zhangyang Wang is currently an Assistant Professor of ECE at UT Austin. He received his Ph.D. in ECE from UIUC in 2016, and his B.E. in EEIS from USTC in 2012. Prof. Wang is broadly interested in the fields of machine learning, computer vision, optimization, and their interdisciplinary applications. His latest interests focus on automated machine learning (AutoML), learning-based optimization, machine learning robustness, and efficient deep learning.