
AcroFOD: An Adaptive Method for Cross-domain Few-shot Object Detection

Yipeng Gao $^{1,3}$ , Lingxiao Yang $^{1}$ , Yunmu Huang $^{2}$ , Song Xie $^{2}$ , Shiyong Li $^{2}$ , and Wei-Shi Zheng $^{1,3,4*}$

$^{1}$ School of Computer Science and Engineering, Sun Yat-sen University, China
$^{2}$ Huawei Technologies Co., Ltd., China
3 Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China
4 Guangdong Province Key Laboratory of Information Security Technology, Sun Yat-sen University, Guangzhou gaoyp23@mail2.sysu.edu.cn, yanglx9@mail.sysu.edu.cn, {huangyunmu,xiesong5,lishiyong}@huawei.com, wszheng@ieee.org

Abstract. Under domain shift, cross-domain few-shot object detection aims to adapt object detectors to the target domain with only a few annotated target data. There exist two significant challenges: (1) highly insufficient target domain data; (2) potential over-adaptation and misleading caused by inappropriately amplified target samples without any restriction. To address these challenges, we propose an adaptive method consisting of two parts. First, we propose an adaptive optimization strategy to select augmented data similar to target samples rather than blindly increasing their amount. Specifically, we filter out the augmented candidates that significantly deviate from the target feature distribution at the very beginning. Second, to further relieve the data limitation, we propose multi-level domain-aware data augmentation to increase the diversity and rationality of the augmented data, which exploits cross-image foreground-background mixture. Experiments show that the proposed method achieves state-of-the-art performance on multiple benchmarks. The code is available at https://github.com/Hlings/AcroFOD.

Keywords: Domain Adaptation, Few-shot Learning, Object Detection

1 Introduction

Due to the domain discrepancy, an apparent performance drop is common when applying a trained detector to an unseen domain. Recently, many researchers have tried to address this as a domain adaptation task. As one of the domain adaptation sub-tasks, cross-domain few-shot object detection is proposed based on the observation that a few samples can still reflect the major characteristics of domain shifts [41], such as view variations [10, 17], weather diversification [39]


Fig. 1. We address the task of cross-domain few-shot object detection. Top: Existing feature-aligning based methods fail to extract discriminative features within limited labeled data in the target domain. Bottom: Our method filters (thick black dotted line) source data that is far away from the target domain.

and lighting differences [1,28,31]. Different from unsupervised domain adaptation (UDA), in the few-shot domain adaptation (FDA) setting only a few labeled target samples are available, together with a large amount of samples from the source domain.

Existing methods [42,52] in the FDA setting are feature-alignment based: they first pre-train the model in the source domain and then align it to the target domain. Besides, some methods mainly overcome domain gaps under the UDA setting [19,34,35,44]. However, UDA methods depend on labeled data from the source domain and sufficient unlabeled data from the target domain to fully describe the distributions of both domains before any explicit or implicit feature alignment operation.

However, a large amount of unlabeled data may not be available in the target domain [41]. Without sufficient data, most UDA methods perform disappointingly under the FDA setting [5-7,25,33,46,49,53]. An intuitive way to overcome this problem is to incorporate the limited target data with source data and augment them. However, we argue that not all augmented data are useful. Blind augmentation may even exacerbate the domain discrepancy due to samples that act as outliers of the target data distribution. To overcome this problem, the adaptive optimization of directive augmentation towards the target data distribution should be considered very carefully.

In this work, we present an Adaptive method for Cross-domain Few-shot Object Detection (AcroFOD), which is architecture-agnostic and generic. The AcroFOD mainly consists of two parts: an adaptive and iterative distribution optimization for filtering augmented data, and multi-level domain-aware augmentation. With a large amount of data available in the source domain, it could be intuitive to train the detector using the whole set of images from the source and target domains. However, we argue that such an ungrounded training method is likely to introduce much unsuitable and low-quality data relative to the target

domain, which will mislead the model during the training process. To deal with this issue, we design an adaptive distribution optimization strategy to eliminate the unsuitable introduced images, as shown in Fig. 1. Such a strategy allows the detector to fit the feature distribution of the target domain faster and more accurately. Moreover, as both background and foreground information can reflect the characteristics of the target domain, we propose multi-level domain-aware augmentation to make the fusion of source and target domain images more diverse and rational.

There are several advantages of the proposed AcroFOD for cross-domain few-shot object detection: 1) Wide application scenarios. In contrast to most previous methods [14, 35, 42, 44], our method requires neither complicated architecture changes nor generative models for creating additional synthetic data; 2) Fast convergence. The AcroFOD is almost $2 \times$ faster than existing methods [34] at reaching better performance in some established scenarios, because no pre-training phase is required; 3) Lower cost of collecting data. Compared with UDA methods [19, 34, 35, 44], we greatly reduce the cost of collecting massive amounts of data in the target domain, while introducing the cost of annotating a few target images.

We conduct a comprehensive and fair benchmark to demonstrate the effectiveness of AcroFOD to mitigate different kinds of domain shifts. Our method can achieve new state-of-the-art results on these benchmarks in the FDA setting. The main contributions of this work are summarized as follows:

  • The proposed adaptive optimization strategy attaches importance to the quality of augmentation. It also prevents the model from over-adaptation, which, similar to over-fitting, is caused by the lack of data.
  • To enhance the diversity of merged images from the source and target domains, we construct generalized formulations of multi-level domain-aware augmentation. We then provide several instantiations and discuss them.

2 Related Work

Existing UDA methods leverage a large number of unlabeled images from the target domain to explicitly mitigate the domain shift. They can be divided into domain-alignment [4,22,37,51], domain-mapping [8,16,29], and self-labeling techniques [34]. Zheng et al. [50] propose a hybrid framework to minimize the L2 distance between single-class-specific prototypes across domains at the instance level and use adversarial training at the image level. ViSGA [35] uses a similarity-based grouping scheme to aggregate information into multiple groups in a class-agnostic manner. To overcome pseudo-label noise in self-labeling, [34] proposed a three-step training method with domain-mixed data augmentation, gradual self-adaptation, and teacher-guided finetuning.

In the FDA scenario, we expect the model to overcome the domain discrepancy and performance drop due to domain shift in the target domain with only a few target domain data available. In [41], adversarial learning is used to learn an embedded subspace that simultaneously maximizes the confusion between

two domains while semantically aligning their embeddings. In cross-domain few-shot object detection, Wang et al. [42] first adopted a pairing alignment mechanism to overcome the issue of insufficient data. Different from this perspective of modifying the model structure, which fails to transfer to other architectures, we focus on adaptively optimizing the enlarged target data distribution together with the source data distribution.

Data augmentation is an effective technique for improving the performance of deep learning models. Such techniques are mainly divided into two aspects: image-level [2, 47, 48] and box-level [13] with pixel-level labels [12, 15, 18]. Some other methods consider combinations of multiple geometric and color transformations [23, 24], while search strategies can find appropriate collocations of them [11, 30]. Recent works [34, 36, 40] apply image-mixing techniques in cross-domain scenarios. We further propose formulations of both image-level and box-level domain-aware augmentation and apply them as a low-cost way to diversely generate data across domains.

Our FDA setting follows the prior work [42], which prompts the model to achieve stronger generalization ability with only a few samples from the target domain.

3 Approach

In this section, we present the details of the proposed adaptive method for cross-domain few-shot object detection (AcroFOD). First, we adaptively and iteratively optimize the distribution of candidates towards the target domain to train a robust detector. Then, we generate a large pool of candidates with the proposed multi-level domain-aware augmentation to address the insufficiency and homogeneity of augmented target samples.

The proposed method is motivated by the observation that limited data can still reflect the major characteristics of the target domain [42]. To deal with the lack of data in the target domain, the AcroFOD comprises an adaptive optimization strategy with cross-domain augmentation for reasonable data expansion to overcome domain shifts. Sec. 3.2 presents our adaptive optimization strategy, which ensures that the target data augmented with source data approximately follow the distribution of the target domain. Sec. 3.3 introduces the formulation of multi-level domain-aware augmentation. Finally, Sec. 3.4 summarizes the whole iterative training process of the AcroFOD.

3.1 Problem Statement

Suppose we have a large dataset $D_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ from the source domain and a few examples $D_t = \{(x_j^t, y_j^t)\}_{j=1}^{n_t}$ from the target domain, where $x_i^s, x_j^t \in \mathcal{X}$ are input images and $y_i^s, y_j^t \in \mathcal{Y}$ consist of bounding box coordinates and object categories for $x_i^s$ and $x_j^t$. We consider scenarios in which there exists a discrepancy between the source input distribution $\mathcal{P}_s: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^+$ of $D_s$ and the target distribution $\mathcal{P}_t: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^+$ of $D_t$.


Fig. 2. The AcroFOD includes the following three steps. (1) Using the proposed multilevel domain-aware augmentation, we can expand the data distribution of the target domain. (2) Then, the augmented and target domain data is fed into the backbone of the detector to obtain the extracted feature vector. Through the directive optimization strategy, the samples which are unsuitable for mitigating domain shifts can be filtered. (3) Finally, optimized samples will help the detector mitigate domain shifts.

Our goal is to train an adaptive detector $f: \mathcal{X} \to \mathcal{Y}$ which can alleviate the performance drop due to the domain gap. However, it is difficult for $f$ to capture a domain-invariant representation with only the few data $D_t$. To effectively exploit the limited annotations, we extend $D_t \sim \mathcal{P}_t$ with $D_s \sim \mathcal{P}_s$ to $\widetilde{D}_t$. Supposing $\widetilde{D}_t$ is sampled discretely from an assumed distribution $\mathcal{P}_{aug}: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^+$, we are able to approximate $\mathcal{P}_t(y|x)$ with $\mathcal{P}_{aug}(y|x)$. In fact, we assume $\mathcal{P}_s(y|x) = \mathcal{P}_t(y|x) = \mathcal{P}_{aug}(y|x)$ but $\mathcal{P}_s(x) \neq \mathcal{P}_t(x) \neq \mathcal{P}_{aug}(x)$. It is obvious that some noisy data in $\widetilde{D}_t$ that are dissimilar to $D_t$ may weaken the generalization ability of $f$. In the following subsections, we present the details of AcroFOD.

3.2 Adaptive Optimization for Directive Data Augmentation

As shown in Fig. 2, we are able to generate a bunch of data $\widetilde{D}_t = Aug(D_s, D_t)$ with the introduced domain-aware augmentation, which we will discuss later. We expect $\widetilde{D}_t = \{(x_i^{aug}, y_i^{aug})\}_{i=1}^{n_a} \sim \mathcal{P}_{aug}$ to approximate the distribution $\mathcal{P}_t$ as closely as possible. We assume that the detector $f_\theta = (g_\theta, h_\theta)$ is defined by a set of parameters $\theta$ and consists of a backbone $g_\theta$ and a head $h_\theta$. The AcroFOD uses $g_\theta$ as a feature extractor to output representations of $x_j^t$ in $D_t$ and $x_i^{aug}$ in $\widetilde{D}_t$ as follows:

$$ z_i^{aug} = g_\theta(x_i^{aug}), \quad z_j^t = g_\theta(x_j^t). \tag{1} $$

Then, the AcroFOD sorts the augmented candidates $x_i^{aug}$ according to the distance between the representations of $x_i^{aug}$ and $\{x_j^t\}_{j=1}^{n_t}$, measured by a metric function $dist_f$:

$$ d_i^{aug} = dist_f\left(z_i^{aug}, \{z_j^t\}_{j=1}^{n_t}\right). \tag{2} $$

In order to filter a certain amount of noisy samples in $\widetilde{D}_t$, we use a shrinkage ratio $k$ ($0 < k \leq 1$) to decrease the quantity of expanded candidates. Then, we define an optimization function $\phi_{opt}$ to optimize $\widetilde{D}_t$ with $d_i^{aug}$, resulting in the optimized extended domain $\widetilde{D}_t^{opt}$ defined as follows:

$$ \widetilde{D}_t^{opt} = \left\{(x_i^{opt}, y_i^{opt})\right\}_{i=1}^{n_b} = \phi_{opt}\left(\widetilde{D}_t, \{d_i^{aug}\}_{i=1}^{n_a}, k\right). \tag{3} $$

Through $\phi_{opt}$, the top $n_b$ ($n_b = \lfloor n_a \cdot k \rfloor$) candidates are chosen from $\widetilde{D}_t$ in increasing order of $d_i^{aug}$. With $\phi_{opt}$, we can obtain a $\widetilde{D}_t^{opt}$ that better reflects the target domain distribution $\mathcal{P}_t$. However, such a suitable $\widetilde{D}_t^{opt}$ is likely to change as $f_\theta$ converges. To tackle this problem, we optimize $\widetilde{D}_t^{opt}$ iteratively. Consider detectors $f_\theta^a$ and $f_\theta^b$ trained for $a$ and $b$ epochs ($a > b \geq 0$). The errors of $f_\theta^a$ in the source and target domains, $\epsilon_{D_s}(f_\theta^a)$ and $\epsilon_{D_t}(f_\theta^a)$, are expected to be smaller than $\epsilon_{D_s}(f_\theta^b)$ and $\epsilon_{D_t}(f_\theta^b)$ as $g_\theta$ and $h_\theta$ update. So $g_\theta^a$ is able to represent $x_i^{aug}$ and $x_j^t$ more accurately than $g_\theta^b$. Then, we can iteratively optimize $\widetilde{D}_t^{opt}$ by $dist_f$ with the updated feature representations $z_i^{aug}$ and $z_j^t$.
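As a concrete illustration, the filtering step of Eqs. 2-3 can be sketched in a few lines; `phi_opt` and the list-based data layout here are our own hypothetical simplifications, not the authors' implementation:

```python
import numpy as np

def phi_opt(candidates, distances, k):
    """Keep the top n_b = floor(n_a * k) augmented candidates with the
    smallest feature distance to the target domain (Eq. 3)."""
    n_b = int(np.floor(len(candidates) * k))
    order = np.argsort(distances)   # increasing d_i^aug
    return [candidates[i] for i in order[:n_b]]
```

With $k = 0.8$ (the value used in the experiments), the 20% of the augmented pool with the largest distances is discarded each epoch.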

At the $n$ th ( $n \geq 1$ ) epoch, $\widetilde{D}_{t^n}^{opt}$ can be obtained by filtering $\widetilde{D}_t$ as follows:

$$ \widetilde{D}_{t^n}^{opt} = \phi_{opt}\left(\widetilde{D}_t, \left\{dist_f\left(g_\theta^n(x_i^{aug}), \{g_\theta^n(x_j^t)\}_{j=1}^{n_t}\right)\right\}_{i=1}^{n_a}, k\right). \tag{4} $$

Finally, the adaptive detector $f_{\theta}^{n} = (g_{\theta}^{n}, h_{\theta}^{n})$ can also be optimized iteratively by $(x_{i}^{opt}, y_{i}^{opt}) \in \widetilde{D}_{t^{n}}^{opt}$ as follows:

$$ g_\theta^{n+1}, h_\theta^{n+1} \leftarrow optimizer\left((g_\theta^n, h_\theta^n), \nabla_\theta \mathcal{L}_\theta\left(f_\theta^n(x_i^{opt}), y_i^{opt}\right), \eta\right), \tag{5} $$

where $optimizer$ denotes the parameter update rule (e.g., SGD), $\eta$ is the learning rate for $g_\theta, h_\theta$, and $\mathcal{L}$ is the loss function.

We intend to measure the correlation between $z_i^{aug}$ and $\{z_j^t\}_{j=1}^{n_t}$. Therefore, we utilize two widely-used metric functions, MMD and CS, as $dist_f$.

First: Maximum Mean Discrepancy. The Maximum Mean Discrepancy (MMD) [20] distance is used to measure the distance between these two distributions in a Reproducing Kernel Hilbert Space (RKHS). For $z_i^{aug}$ and $\{z_j^t\}_{j=1}^{n_t}$ defined in Eq. 1, $dist_f$ is instantiated to $MMD^2$ as:

$$ MMD^2\left(z_i^{aug}, \{z_j^t\}_{j=1}^{n_t}\right) = \left\|\frac{1}{n_t}\sum_{j=1}^{n_t} z_j^t - z_i^{aug}\right\|_2^2. \tag{6} $$
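Under this instantiation, Eq. 6 reduces to the squared Euclidean distance between a candidate feature and the mean target feature. A minimal numpy sketch (function name ours, features assumed to be flat vectors):

```python
import numpy as np

def mmd_sq(z_aug, z_t):
    """Eq. 6: squared L2 distance between a candidate feature z_i^aug
    and the mean of the n_t target-domain features {z_j^t}."""
    z_aug = np.asarray(z_aug, dtype=float)   # shape (d,)
    z_t = np.asarray(z_t, dtype=float)       # shape (n_t, d)
    return float(np.sum((z_t.mean(axis=0) - z_aug) ** 2))
```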

Second: Cosine Distance. Cosine distance is an effective metric to measure the similarity of samples in the embedding space [3,21,32]. Eq. 2 can be rewritten as $CS$ in the following expression:

$$ CS\left(z_i^{aug}, \{z_j^t\}_{j=1}^{n_t}\right) = \sum_{j=1}^{n_t}\left(1 - \frac{z_j^t \cdot z_i^{aug}}{\|z_j^t\|_2 \cdot \|z_i^{aug}\|_2}\right). \tag{7} $$
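A matching sketch of Eq. 7 under the same assumptions (flat feature vectors with non-zero norms; function name ours):

```python
import numpy as np

def cosine_dist(z_aug, z_t):
    """Eq. 7: sum of cosine distances between z_i^aug and each z_j^t."""
    z_aug = np.asarray(z_aug, dtype=float)   # shape (d,)
    z_t = np.asarray(z_t, dtype=float)       # shape (n_t, d)
    cos = z_t @ z_aug / (np.linalg.norm(z_t, axis=1) * np.linalg.norm(z_aug))
    return float(np.sum(1.0 - cos))
```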

3.3 Multi-level Domain-aware Augmentation

From Sec. 3.2, the convergence of $f_\theta$ relies on $\widetilde{D}_t^{opt}$, optimized from $\widetilde{D}_t = Aug(D_s, D_t)$. A simple combination of $D_s$ and $D_t$ limits the variety of $\widetilde{D}_t$. To generate more adequate samples while controlling the training computation overhead, we propose domain-aware augmentation as $Aug$ at the image level and box level. Here, we give uniform formulations for each level of $Aug$ and then provide several specific instantiations.

Image-level Domain-aware Augmentation. Given a batch of data from the source domain $B_s = \{(x_i^s, y_i^s)\}_{i=1}^{n_{bs}}$ and the target domain $B_t = \{(x_j^t, y_j^t)\}_{j=1}^{n_{bt}}$, we sample $m \leq n_{bs}$ and $n \leq n_{bt}$ data from $B_s$ and $B_t$ respectively. Then, we randomly mix them into a single image $x^{aug}$ as follows:

$$ x^{aug} = x_0^{aug} + \sum_{i=1}^{m}\sum_{j=1}^{n} A_{(i,j)}\left(\lambda_i x_i^s + \lambda_j x_j^t\right), \tag{8} $$

where $x_0^{aug}$ is an initialized empty image whose size is different from both $x_i^s$ and $x_j^t$, and $A_{(i,j)}$ is the hand-crafted transformation matrix for the image pair $\{x_i^s, x_j^t\}$. $\lambda_i$ and $\lambda_j$ ($0 \leq \lambda_i + \lambda_j \leq 1$) are the corresponding weights of $x_i^s$ and $x_j^t$, respectively. Then, we can recompute the label set $y^{aug} = \{(y_{box}^{aug}, y_{cls}^{aug})\}$ of $x^{aug}$ as follows:

$$ y_{box}^{aug} = \underset{i=1\dots m,\, j=1\dots n}{Concat}\left(A_{(i,j)}^T y_{i(box)}^s,\ A_{(i,j)}^T y_{j(box)}^t\right), \quad y_{cls}^{aug} = \underset{i=1\dots m,\, j=1\dots n}{Concat}\left(\lambda_i^{cls} y_{i(cls)}^s + \lambda_j^{cls} y_{j(cls)}^t\right), \tag{9} $$

where $y_{i(box)}^s$ and $y_{j(box)}^t$ denote the bounding box coordinates of the instances of interest from the source and target domains, $y_{i(cls)}^s$ and $y_{j(cls)}^t$ represent the corresponding category confidence scores, and $\lambda_i^{cls}$ and $\lambda_j^{cls}$ are the weights of the corresponding confidence scores.

With Eq. 8 and Eq. 9, we describe two versions of image-level domain-aware augmentation. First, we define $m + n = 4$ ($m, n \in \mathbb{N}^+$), $\lambda_i = \lambda_i^{cls}$, and either $\lambda_i = 1$ or $\lambda_j = 1$ for each region as domain-splice. Second, to increase the degree of interaction at the image level, we choose $m + n = 2$ ($m, n \in \mathbb{N}^+$) and then weight the two images with $\lambda_i + \lambda_j = 1$, $0 \leq \lambda_i, \lambda_j \leq 1$, $\lambda_i \sim Beta(\alpha, \alpha)$, and $\lambda_i^{cls} = \lambda_j^{cls} = 1$ as domain-reallocation. The above two methods can also be combined to generate more diverse images.
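To make the second variant concrete, here is a minimal sketch of domain-reallocation for a single source/target pair ($m + n = 2$); the function name is ours, both images are assumed to be same-sized float arrays, and the label bookkeeping of Eq. 9 (both label sets kept) is omitted:

```python
import numpy as np

def domain_reallocation(x_s, x_t, alpha=1.0, rng=None):
    """Mix one source and one target image with weights
    lambda_s + lambda_t = 1, lambda_s ~ Beta(alpha, alpha) (Eq. 8)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_aug = lam * x_s + (1.0 - lam) * x_t
    return x_aug, lam   # lam is kept to reweight labels (Eq. 9)
```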

Box-level Domain-aware Augmentation. To effectively utilize the limited instance annotations, we can separate instances from the background and then paste them onto other regions. In order to improve the generality of the proposed augmentation, we focus on utilizing domain-aware box-level labels rather than the pixel-level labels often used in previous works [12, 15, 18]. Here, we propose the formulation of box-level domain-aware augmentation with bounding box labels.

For bounding boxes $b^s$ and $b^t$ from the source and target domains, resized to width $w$ and height $h$, we exchange them to combine the characteristics of each other. The formulation is as follows:

$$ b_{(p,q)}^{aug} = \beta_{(p,q)} b_{(p,q)}^s + \left(1 - \beta_{(p,q)}\right) b_{(p,q)}^t, \tag{10} $$


Fig. 3. Left: visualization of multi-level domain-aware augmentation. Right: three types of proposed box-level cross-domain augmentations.

where $(p, q)$, $p = 1, 2, \dots, w$, $q = 1, 2, \dots, h$, indexes the pixels in boxes $b^s$ and $b^t$, and $\beta_{(p,q)} \in [0,1]$ is the corresponding weight for each index. From Eq. 10, we denote $\beta_{(p,q)} = 0$ ($\forall p, q$) as direct exchange and define $\beta_{(p,q)} = \beta_{mix}$, $\beta_{mix} \sim Beta(\alpha_m, \alpha_m)$, as mixture exchange, where $\alpha_m$ is a hyper-parameter.
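Both exchanges are special cases of one per-pixel blend, which can be sketched as follows for two same-sized box crops (a scalar `beta` broadcasts over all pixels; function name ours):

```python
import numpy as np

def box_exchange(patch_s, patch_t, beta):
    """Eq. 10: per-pixel blend of source and target box crops.
    beta = 0 everywhere          -> direct exchange (pure target patch);
    beta = beta_mix ~ Beta(a, a) -> mixture exchange."""
    patch_s = np.asarray(patch_s, dtype=float)
    patch_t = np.asarray(patch_t, dtype=float)
    return beta * patch_s + (1.0 - beta) * patch_t
```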

Instances at different scales have different degrees of dependence on context information [9]. In order to obtain scale-aware weights for each pixel, we propose Gaussian exchange, which adopts the Gaussian map defined as follows:

$$ \beta_{(p,q)} = \exp\left(-\left(\frac{(p - \mu_x)^2}{\sigma_x^2} + \frac{(q - \mu_y)^2}{\sigma_y^2}\right)\right), \tag{11} $$

$$ \sigma_x = \frac{w}{W}\sqrt{\frac{hw}{2\pi}}, \quad \sigma_y = \frac{h}{H}\sqrt{\frac{hw}{2\pi}}, $$

where $H$ and $W$ are the height and width of the image, $\sigma_x, \sigma_y$ are the variances along the $x$ and $y$ axes in boxes $b^s$ and $b^t$, and $\mu_x, \mu_y$ are the corresponding means. The comparison of the three box-level augmentations is shown in Fig. 3.
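A sketch of the resulting weight map, assuming $(\mu_x, \mu_y)$ is the box centre and pixels are indexed from 1 (both assumptions on our part; function name ours):

```python
import numpy as np

def gaussian_weight_map(w, h, W, H):
    """Eq. 11: scale-aware per-pixel weights for a w x h box inside a
    W x H image; a larger relative box size gives a wider Gaussian."""
    s = np.sqrt(w * h / (2.0 * np.pi))
    sigma_x, sigma_y = (w / W) * s, (h / H) * s
    p, q = np.meshgrid(np.arange(1, w + 1), np.arange(1, h + 1))
    mu_x, mu_y = (w + 1) / 2.0, (h + 1) / 2.0   # box centre (assumed)
    return np.exp(-(((p - mu_x) ** 2) / sigma_x ** 2
                    + ((q - mu_y) ** 2) / sigma_y ** 2))
```

The map peaks at 1 in the box centre and decays toward the borders, so an exchanged instance keeps its central appearance while blending into the surrounding context.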

3.4 Cross-domain Training Framework

We expect the augmented samples to remain close to the distribution of the target domain. Therefore, these data are fed into the backbone network of the target detector to compute features, and suitable data are then selected according to their distance from the target domain. The feature extractor used for obtaining $\widetilde{D}_t^{opt}$ is updated after every epoch. Note that our model, including the backbone and the detector module, is trained from scratch. Details of our AcroFOD are presented in Algo. 1.

4 Experiments

We present the results of the proposed adaptive method for cross-domain few-shot object detection (AcroFOD) in various scenarios, such as adverse weather, synthetic-to-real, and cross-camera domain shifts, in Sec. 4.2. Then, we provide qualitative

Algorithm 1 Adaptive Method AcroFOD

Input: Initialized detector  $\theta^{ini}$ , the source domain  $D_{s} = \{(x_{i}^{s},y_{i}^{s})\}_{i = 1}^{n_{s}}$ , few data target domain  $D_{t} = \{(x_{j}^{t},y_{j}^{t})\}_{j = 1}^{n_{t}}$ , total epochs  $T$ , distance function  $dist_{f}$ , amount of steps for every epoch  $N$ , domain-aware augmentation function  $Aug$ , shrinkage ratio  $k$ , loss function  $\mathcal{L}$ .
Output: Adaptive detector $f_{\theta} = (g_{\theta},h_{\theta})$
Initialize $\theta \gets \theta^{ini}$; initialize feature extractor $g_{\theta}$
for epoch $\leftarrow 1,\dots,T$ do
  $\widetilde{D}_t = \{(x_i^{aug},y_i^{aug})\}_{i = 1}^{n_a} = Aug(D_s,D_t)$
  $\widetilde{D}_t^{opt} = \phi_{opt}(\widetilde{D}_t,\{dist_f(g_\theta (x_i^{aug}),\{g_\theta (x_j^t)\}_{j = 1}^{n_t})\}_{i = 1}^{n_a},k)$
  for step $\leftarrow 1,\dots,N$ do
    sample batch $B = \{(x_i^{opt},y_i^{opt})\}_{i = 1}^{bs}$ from $\widetilde{D}_t^{opt}$
    $pred = f_{\theta}(\{x_1^{opt},\dots,x_{bs}^{opt}\})$
    $loss = \mathcal{L}(pred,\{y_1^{opt},\dots,y_{bs}^{opt}\})$
    Update $\theta$ to minimize $loss$
  $g_{\theta},h_{\theta}\gets \theta$; $f_{\theta}\gets \theta$
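The loop of Algorithm 1 can be sketched as a plain Python skeleton; every callable here (`f_backbone`, `f_head_loss`, `opt_step`, `aug`, `dist_f`) is a hypothetical stand-in for the corresponding component, not the authors' implementation:

```python
import numpy as np

def acrofod_train(f_backbone, f_head_loss, opt_step, D_s, D_t,
                  aug, dist_f, k, epochs, steps, batch_size, rng):
    """Skeleton of Algorithm 1: per epoch, (1) generate augmented
    candidates, (2) keep the fraction k closest to the target features
    (phi_opt, Eq. 4), (3) train on the kept set. Assumes epochs >= 1."""
    for _ in range(epochs):
        D_aug = aug(D_s, D_t)                               # (x, y) pairs
        z_t = [f_backbone(x) for x, _ in D_t]
        d = [dist_f(f_backbone(x), z_t) for x, _ in D_aug]
        keep = np.argsort(d)[: int(np.floor(len(D_aug) * k))]
        D_opt = [D_aug[i] for i in keep]                    # phi_opt
        for _ in range(steps):
            idx = rng.integers(0, len(D_opt), batch_size)
            opt_step(f_head_loss, [D_opt[i] for i in idx])  # update theta
    return D_opt
```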

results of the adverse weather benchmark in Sec. 4.3. Furthermore, we analyze the effect of multiple parts of the AcroFOD in Sec. 4.4. Finally, we explore the performance of our method at different data magnitudes in Sec. 4.5.

4.1 Experimental Setup

  • Adverse Weather Benchmark $(\mathbf{C} \rightarrow \mathbf{F})$. In this scenario, we use Cityscapes [10] as the source dataset. It contains 3,475 real urban images, with 2,975 images used for training and 500 for validation. The foggy version of Cityscapes [39] is used as the target dataset. Images with the highest fog intensity (least visibility) across 8 different categories are used in our experiments, matching prior work [45]. Following [42], we use the tightest bounding box of an instance segmentation mask as the ground-truth box. This scenario is referred to as $\mathrm{C} \rightarrow \mathrm{F}$.

  • Synthetic-to-real Benchmark ( $\mathbf{S} \to \mathbf{C}$ , $\mathbf{V} \to \mathbf{O}$ ). SIM10k [28] is a simulated dataset that contains 10k synthetic images. From this dataset, we use all 58,701 available car bounding boxes as the source data during training. For the target data and evaluation, we use Cityscapes [10] and only consider car instances. This scenario is referred to as $\mathrm{S} \to \mathrm{C}$. ViPeD [1] contains 200K frames collected from a video game with bounding box annotations for the person class. We select one frame out of every 10, for a total of 20K frames, as the source dataset. We select COCO [31] as the target dataset and only consider the person class. We denote this scenario as $\mathrm{V} \to \mathrm{O}$.

  • Cross-camera Benchmark $(\mathbf{K} \rightarrow \mathbf{C})$ . In this scenario, we use the KITTI [17] as our source data. KITTI contains 7,481 images and we use all of them for training. Similar to the previous scenarios, we use Cityscapes [10] as the target

Table 1. Results in the $\mathrm{C} \rightarrow \mathrm{F}$ scenario. "V" and "R" stand for the VGG16 and ResNet50 backbones respectively; "X" stands for a type of YOLOv5 model. In the FDA setting, only 8 fully annotated images are used for domain adaptation per round. * and † represent only using optimization and augmentation, respectively. The last row combines both.

| Setting | Method | Arch. | person | rider | car | truck | bus | train | mcycle | bicycle | mAP50 | gain |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| source | - | V | 24.1 | 29.9 | 32.7 | 10.9 | 13.8 | 5.0 | 14.6 | 27.9 | 19.9 | - |
| source | - | R | 27.2 | 31.8 | 32.5 | 16.0 | 25.5 | 5.6 | 19.9 | 27.0 | 22.8 | - |
| source | - | X | 30.8 | 27.8 | 43.7 | 8.2 | 24.3 | 4.8 | 11.0 | 24.6 | 21.9 | - |
| - | Pre+FT | X | 31.2±0.3 | 28.1±0.2 | 44.1±0.5 | 8.3±0.2 | 24.3±0.1 | 5.9±0.6 | 11.1±0.3 | 24.6±0.2 | 22.2±0.2 | 0.3 |
| - | proportion | X | 30.0±0.4 | 28.7±0.7 | 41.8±1.2 | 13.1±0.5 | 22.3±0.7 | 9.6±1.5 | 19.3±1.8 | 24.7±0.7 | 23.7±0.5 | 1.8 |
| UDA | DA-Faster [8] | V | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.2 | 20.0 | 27.1 | 27.6 | 7.7 |
| UDA | FAFRCNN [42] | V | 29.1 | 39.7 | 42.9 | 20.8 | 37.4 | 24.1 | 26.5 | 29.9 | 31.3 | 11.4 |
| UDA | SWDA [38] | R | 31.8 | 44.3 | 48.9 | 21.0 | 43.8 | 28.0 | 28.9 | 35.8 | 35.3 | 12.5 |
| UDA | ViSGA [35] | R | 38.8 | 45.9 | 57.2 | 29.9 | 50.2 | 51.9 | 31.9 | 40.9 | 43.3 | 20.5 |
| FDA | ADDA [41] | V | 24.4±0.3 | 29.1±0.9 | 33.7±0.5 | 11.9±0.5 | 13.3±0.8 | 7.0±1.5 | 13.6±0.6 | 27.6±0.2 | 20.1±0.8 | 0.2 |
| FDA | DTf+FT [27] | V | 23.5±0.5 | 28.5±0.6 | 30.1±0.8 | 11.4±0.6 | 26.1±0.9 | 9.6±2.1 | 17.7±1.0 | 26.2±0.6 | 21.7±0.6 | 1.8 |
| FDA | DA-Faster [8] | V | 24.0±0.8 | 28.8±0.7 | 27.1±0.7 | 10.3±0.7 | 24.3±0.8 | 9.6±2.8 | 14.3±0.8 | 26.3±0.8 | 20.6±0.8 | 0.7 |
| FDA | SimRoD [34] | X | 34.3±1.3 | 35.8±0.3 | 55.9±0.8 | 9.6±1.8 | 18.0±0.6 | 5.9±0.3 | 10.6±0.2 | 29.2±0.8 | 24.9±0.2 | 4.7 |
| FDA | FsDet [43] | X | 32.3±1.2 | 29.8±1.2 | 44.0±1.7 | 14.1±2.2 | 24.2±1.4 | 8.4±1.2 | 22.9±1.6 | 26.2±2.2 | 25.2±1.1 | 3.3 |
| FDA | AcroFOD* | X | 31.8±0.9 | 30.9±1.2 | 43.9±2.3 | 15.3±2.1 | 27.8±1.8 | 8.8±1.3 | 26.2±1.9 | 26.3±0.8 | 26.4±1.0 | 4.5 |
| FDA | AcroFOD† | X | 36.5±1.4 | 37.4±1.3 | 51.6±0.9 | 17.9±1.1 | 33.0±0.7 | 26.4±1.2 | 27.5±1.1 | 31.5±1.5 | 32.7±0.6 | 10.8 |
| FDA | AcroFOD | X | 46.2±0.5 | 47.3±0.6 | 63.5±0.4 | 20.1±1.6 | 41.5±0.8 | 34.2±1.8 | 36.1±0.7 | 39.6±0.9 | 41.1±0.8 | 19.2 |

data. Following prior works [34, 35], only the car class is used. This scenario is abbreviated as $\mathrm{K} \rightarrow \mathrm{C}$ .

  • Implementation Details. We adopt the single-stage detector YOLOv5 [26] as the baseline and compare with unsupervised domain adaptation (UDA) and few-shot domain adaptation (FDA) methods at the same time. For the UDA setting [35], we report results based on the full amount of target domain data. For the FDA setting [42], we report the mean and deviation over 5 rounds using the same number of images. Meanwhile, we also compare our method with proportional sampling (denoted as "proportion"), which samples data from the source and target domains uniformly for training, and with the few-shot object detection method FsDet [43] in the FDA setting. In all experiments, we adopt the adaptive optimization strategy from scratch and set the shrinkage ratio $k = 0.8$. The effect of $k$ will be analyzed in Sec. 4.4. For a fair comparison, we resize input images to $640 \times 640$ in all experiments without any extra dataset (such as COCO [31]) for model pre-training. For evaluation metrics, we denote average precision with an IoU threshold of 0.5 as AP50 for a single class or mAP50 for multiple classes, and AP or mAP for the 10 averaged IoU thresholds 0.5:0.05:0.95 [31].

4.2 Main Results

In this section, we evaluate the proposed method by conducting extensive experiments on the established scenarios.

  • Results for Scenario $\mathbf{C} \rightarrow \mathbf{F}$. As summarized in Table 1, our proposed AcroFOD performs significantly better than the other compared FDA methods in all categories. Besides, the AcroFOD achieves an mAP50 of 41.1%, which is 19.2% higher than the baseline method trained solely on source data.

It is observable that the other baseline methods obtain only small improvements in both mAP50 and gain. The compared SimRoD [34] also employs domain-

Table 2. Results of AP50 for the S → C and K → C adaptation scenarios. In the FDA setting, we randomly choose 8 images from the target domain. "Source" refers to the model trained using source data only. "Adaptation" means the model adapted with target data. * and † represent only using optimization and augmentation, respectively. The last row combines both.
(a) $\mathrm{S}\to \mathrm{C}$

| Setting | Method | Source | Adaptation | gain |
|---|---|---|---|---|
| UDA | FAFRCNN [42] | 33.5 | 41.2 | 7.7 |
| UDA | DA-Faster [8] | 31.9 | 41.9 | 10.0 |
| UDA | SWDA [38] | 31.9 | 44.6 | 12.7 |
| UDA | ViSGA [35] | 31.9 | 49.3 | 17.4 |
| FDA | Pre+FT | 49.0 | 49.4±0.3 | 0.4 |
| FDA | proportion | 49.0 | 50.2±0.6 | 1.2 |
| FDA | FsDet [43] | 49.0 | 52.9±1.2 | 3.9 |
| FDA | SimRoD [34] | 49.0 | 54.2±0.5 | 5.2 |
| FDA | AcroFOD* | 49.0 | 55.6±2.6 | 6.6 |
| FDA | AcroFOD† | 49.0 | 57.4±2.1 | 8.4 |
| FDA | AcroFOD | 49.0 | 62.5±1.6 | 13.5 |

(b) $\mathrm{K}\to \mathrm{C}$

| Setting | Method | Source | Adaptation | gain |
|---|---|---|---|---|
| UDA | DA-Faster [8] | 32.5 | 41.8 | 9.3 |
| UDA | SWDA [38] | 32.5 | 43.2 | 10.7 |
| UDA | ViSGA [35] | 32.5 | 47.6 | 15.1 |
| UDA | GPA [45] | 32.5 | 47.9 | 15.3 |
| FDA | Pre+FT | 47.4 | 47.7±0.3 | 0.3 |
| FDA | proportion | 47.4 | 47.6±0.2 | 0.2 |
| FDA | FsDet [43] | 47.4 | 52.9±1.2 | 5.5 |
| FDA | SimRoD [34] | 47.4 | 55.8±0.6 | 8.4 |
| FDA | AcroFOD* | 47.4 | 51.9±2.9 | 4.5 |
| FDA | AcroFOD† | 47.4 | 53.9±1.6 | 6.5 |
| FDA | AcroFOD | 47.4 | 62.6±2.1 | 15.2 |

Table 3. Results for the $\mathrm{V} \rightarrow \mathrm{O}$ adaptation scenario. We randomly select 60 fully annotated images from the COCO person class for each round. * and † represent only using optimization and augmentation, respectively. The last row combines both.

| Method | AP50 | AP | gain-AP50 | gain-AP |
|---|---|---|---|---|
| Source | 30.4 | 13.0 | - | - |
| proportion | 31.8±1.5 | 13.5±0.3 | 1.4 | 0.5 |
| Pre+FT | 43.2±0.8 | 21.0±0.5 | 12.8 | 8.0 |
| FsDet [43] | 36.7±1.9 | 15.9±0.8 | 6.3 | 2.9 |
| SimRoD [34] | 42.8±1.0 | 19.5±0.7 | 12.4 | 6.5 |
| AcroFOD* | 42.0±1.2 | 19.2±1.3 | 11.6 | 6.2 |
| AcroFOD† | 41.4±0.7 | 18.7±0.6 | 11.0 | 5.7 |
| AcroFOD | 45.8±0.6 | 22.5±0.4 | 15.4 | 9.5 |

mix augmentation and improves the generalization ability of the used detector. Our AcroFOD achieves significantly better performance than SimRoD, which indicates that a simple augmentation is not sufficient to train a robust object detector that mitigates the domain gap.

  • Results for the Other Three Scenarios. As presented in Table 2(a) and Table 2(b), the results show trends similar to the previous evaluation on $\mathrm{C} \rightarrow \mathrm{F}$. In the FDA setting, our AcroFOD performs better than previous methods on single-class domain adaptation (e.g., car and person). Meanwhile, we obtain performance comparable to many UDA methods; methods in the UDA setting sometimes perform better than AcroFOD, which only uses 8 target images in the FDA setting. As shown in Table 3, our AcroFOD outperforms the pre-training + fine-tuning paradigm (denoted as Pre+FT) by about 2.6% AP50 and 1.5% AP in the $\mathrm{V} \rightarrow \mathrm{O}$ scenario, which suggests our framework can still handle the complex domain shift of the person class.


Fig. 4. Qualitative results. The results are sampled from the $\mathrm{C} \rightarrow \mathrm{F}$ scenario; we set the bounding box visualization threshold to 0.3. The first/second rows are outputs of the unadapted/adapted models, respectively.

4.3 Qualitative Results

Fig. 4 shows some qualitative results of $\mathrm{C} \rightarrow \mathrm{F}$ . It can be clearly observed that 1) AcroFOD encourages the detector to place higher confidence on detected objects, especially occluded ones; and 2) the adapted model detects more targets than the one trained with only source data.

4.4 Ablation Study

To evaluate the impact of various components of AcroFOD on detection performance, we use all four scenarios for evaluation. Following prior settings [42], in the $\mathrm{C} \to \mathrm{F}$ , $\mathrm{S} \to \mathrm{C}$ and $\mathrm{K} \to \mathrm{C}$ adaptation scenarios, we randomly sample 8 target-domain images for domain adaptation in each round. For the $\mathrm{V} \to \mathrm{O}$ scenario, we randomly choose 60 images from the target domain.

Instantiations of Domain-aware Augmentation. Table 4 shows the effects of different types of domain-aware augmentation in our AcroFOD. For a fair comparison, we choose Eq. 6 as the distance function in the optimization strategy for all experiments. The results show that both the introduced image-level and box-level augmentations bring significant performance improvements, and combining different types of multi-level domain-aware augmentation improves the detection results further.
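As a concrete illustration of the box-level exchange, the sketch below pastes (or blends) annotated target-domain foreground boxes into a source image. The function name, the blending rule, and the `alpha` parameter are illustrative simplifications, not the paper's exact implementation:

```python
import numpy as np

def box_level_mix(src_img, tgt_img, tgt_boxes, alpha=0.5):
    """Paste target-domain foreground boxes onto a source image.

    A minimal sketch of cross-domain box-level mixing; `tgt_boxes`
    holds [x1, y1, x2, y2] pixel coordinates. alpha=1.0 recovers a
    direct copy-paste exchange, alpha<1 is a soft blend.
    """
    out = src_img.copy()
    for x1, y1, x2, y2 in tgt_boxes.astype(int):
        patch = tgt_img[y1:y2, x1:x2]
        # Blend the target foreground into the source background.
        out[y1:y2, x1:x2] = (alpha * patch
                             + (1 - alpha) * out[y1:y2, x1:x2]).astype(out.dtype)
    return out

# Usage: copy a 64x64 target crop into a blank source image.
src = np.zeros((128, 128, 3), dtype=np.uint8)
tgt = np.full((128, 128, 3), 200, dtype=np.uint8)
boxes = np.array([[16, 16, 80, 80]])
mixed = box_level_mix(src, tgt, boxes, alpha=1.0)
```

The Gaussian variant in Table 4 would replace the constant `alpha` with a spatial weighting; the principle of exchanging annotated foregrounds across domains stays the same.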

Different Choices of $dist_{f}$ . Table 5 compares different types of distance metric function $dist_{f}$ . The simple proportional sampling strategy already achieves better performance than the baseline trained with source-only data, and optimizing augmented data with the MMD distance obtains the best results among all choices. In summary, MMD reflects the sample distance between the source and target domains better than the other metrics.
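For reference, the MMD used here can be estimated with the standard kernel two-sample statistic [20]. The sketch below is a minimal NumPy version with an illustrative RBF bandwidth, not the paper's exact configuration:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.1):
    """Squared MMD between feature sets X (n, d) and Y (m, d) with an
    RBF kernel -- the standard biased estimator of Gretton et al. [20].
    gamma is an illustrative bandwidth, not the paper's choice.
    """
    def kernel(A, B):
        # Pairwise squared Euclidean distances -> RBF kernel matrix.
        d2 = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * d2)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()

# Matched distributions give near-zero MMD; a shifted one does not,
# which is what lets the optimization rank augmented candidates.
rng = np.random.default_rng(0)
src_like = rng.normal(size=(100, 4))
tgt_like = rng.normal(size=(100, 4))
shifted = rng.normal(loc=2.0, size=(100, 4))
```

Calling `mmd_rbf(src_like, shifted)` yields a markedly larger value than `mmd_rbf(src_like, tgt_like)`, which is the property exploited when filtering augmented samples.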

Table 4. Instantiations of different types of multi-level domain-aware augmentation. "Source" and "Target" denote models trained only on the source and target datasets, respectively. "Oracle" means pre-training in the source domain and fine-tuning with the full amount of target-domain data. "Spl" and "Rea" denote the image-level domain-splice and domain-reallocation. "Dir", "Gau" and "Mix" denote the direct, Gaussian and mixture box-level exchange methods.

| Method | Spl | Rea | Dir | Gau | Mix | S→C AP50 | S→C AP | K→C AP50 | K→C AP | V→O AP50 | V→O AP | C→F mAP50 |
|--------|-----|-----|-----|-----|-----|----------|--------|----------|--------|----------|--------|-----------|
| Source | | | | | | 49.0 | 26.5 | 47.4 | 22.4 | 30.4 | 13.0 | 21.9 |
| Target | | | | | | 75.1 | 49.2 | 75.1 | 49.2 | 81.9 | 56.6 | 45.1 |
| Oracle | | | | | | 76.9 | 51.8 | 76.1 | 50.1 | 82.2 | 56.9 | 46.3 |
| AcroFOD | | | | | | 55.6±2.6 | 33.2±2.4 | 51.9±2.9 | 27.7±2.4 | 42.0±1.2 | 19.2±1.3 | 26.4±1.0 |
| | | | | | | 60.2±1.8 | 36.4±1.9 | 58.3±1.9 | 32.1±1.3 | 44.9±0.7 | 22.1±1.2 | 27.5±1.5 |
| | | | | | | 61.4±3.0 | 37.1±2.3 | 58.6±1.7 | 33.6±2.1 | 44.8±0.6 | 21.3±0.6 | 25.6±1.3 |
| | | | | | | 60.3±2.4 | 36.3±2.2 | 58.4±2.2 | 33.1±2.2 | 44.5±1.4 | 21.1±1.2 | 25.1±1.2 |
| | | | | | | 61.2±1.9 | 36.8±1.4 | 57.5±2.3 | 32.9±1.8 | 43.6±1.2 | 20.2±0.8 | 28.2±1.0 |
| | | | | | | 62.3±2.7 | 38.0±2.3 | 61.4±3.1 | 36.2±2.3 | 44.8±1.3 | 21.7±0.6 | 41.1±0.8 |
| | | | | | | 62.2±3.2 | 37.7±2.5 | 61.6±2.1 | 35.5±2.2 | 44.7±0.6 | 21.5±0.6 | 38.6±0.5 |
| | | | | | | 62.5±1.6 | 38.1±1.8 | 62.6±2.1 | 36.5±2.4 | 43.9±1.1 | 20.4±0.7 | 38.3±0.6 |
| | | | | | | 62.3±2.1 | 38.0±1.0 | 62.1±1.9 | 35.8±2.3 | 45.8±0.6 | 22.5±0.4 | 36.9±1.0 |

Table 5. Influence of different strategies for sample selection in the S → C and K → C adaptation scenarios.

| Method | S→C AP50 | S→C AP | K→C AP50 | K→C AP |
|--------|----------|--------|----------|--------|
| Source | 49.0 | 26.5 | 47.4 | 22.4 |
| Cosine distance | 55.2±2.7 | 32.7±2.6 | 51.4±2.1 | 27.3±2.3 |
| MMD distance | 55.6±2.6 | 33.2±2.4 | 51.9±2.9 | 27.7±1.8 |

Analysis of Shrinkage Ratio $k$ . Fig. 5 shows the effect of $k$ on the $\mathrm{S} \rightarrow \mathrm{C}$ and $\mathrm{C} \rightarrow \mathrm{F}$ scenarios, where $k = 1$ means training without the optimization process. All experiments use Eq. 6 as $dist_{f}$ . Both plots show that our proposed adaptive optimization strategy helps the model transfer better; meanwhile, the performance is relatively stable within a certain range of $k$ .
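The role of $k$ can be sketched as follows: among the augmented candidates, only the fraction $k$ closest to the target feature distribution is kept, and $k = 1$ disables the filtering. For simplicity, this sketch scores candidates by squared distance to the target feature mean rather than the paper's $dist_{f}$; the function and variable names are illustrative:

```python
import numpy as np

def select_augmented(cand_feats, tgt_feats, k=0.5):
    """Keep the fraction k of augmented candidates whose features lie
    closest to the target feature distribution (here approximated by
    the squared distance to the target feature mean). k=1 keeps all."""
    mu = tgt_feats.mean(axis=0)
    d = ((cand_feats - mu) ** 2).sum(axis=1)
    n_keep = max(1, int(round(k * len(cand_feats))))
    return np.argsort(d)[:n_keep]   # indices of the retained candidates

# 6 candidate features in 2-D; the target distribution sits at the origin.
cands = np.array([[0.1, 0.0], [5.0, 5.0], [0.2, 0.1],
                  [4.0, 4.0], [0.0, 0.3], [6.0, 6.0]])
tgt = np.zeros((4, 2))
kept = select_augmented(cands, tgt, k=0.5)   # retains the three near-origin candidates
```

With `k=0.5` the three candidates far from the target mean are discarded, mirroring how augmented samples that deviate strongly from the target distribution are filtered out.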

4.5 Experiment with More Data

We conduct a series of experiments to verify the effect of different amounts of target-domain data, choosing the challenging $\mathrm{V} \rightarrow \mathrm{O}$ scenario for verification. We use 0.1%, 1% and 10% of the COCO person data for model training, corresponding to 60, 600 and 6000 images respectively, and test on the COCO person validation set. Evaluations are conducted on the person category, and running time is measured on 4 Nvidia V100 GPUs. As shown in Table 6, AcroFOD outperforms the general Pre+FT paradigm by a large margin in all cases. Meanwhile, our method is faster than both previous work [34] and the Pre+FT paradigm.


Fig. 5. Experiments on different values of $k$ on $\mathrm{S} \rightarrow \mathrm{C}$ and $\mathrm{C} \rightarrow \mathrm{F}$ scenarios.

Table 6. Results with more data in the $\mathrm{V} \rightarrow \mathrm{O}$ adaptation scenario. "Source" and "Target" denote models trained only on the source and target datasets respectively. "Epochs" is the total number of training epochs. "Times" is the time required to reach the best performance.

| Proportion | Method | AP50 | AP | Epochs | Times |
|------------|--------|------|----|--------|-------|
| - | Source | 30.4 | 13.0 | 300 | 48h |
| 0.1% | Target | - | - | - | - |
| | Pre+FT | 43.2±1.3 | 21.0±0.9 | 500 | 56h |
| | AcroFOD | 45.8±0.6 | 22.5±0.4 | 300 | 32h |
| 1% | Target | 43.9±1.7 | 20.7±1.1 | 300 | 24h |
| | Pre+FT | 50.9±0.3 | 28.2±0.5 | 600 | 72h |
| | AcroFOD | 61.1±0.9 | 34.1±0.6 | 300 | 34h |
| 10% | Target | 71.5±0.5 | 44.3±0.4 | 300 | 42h |
| | Pre+FT | 69.7±0.3 | 44.5±0.3 | 600 | 90h |
| | AcroFOD | 75.1±0.6 | 50.2±0.4 | 300 | 38h |

5 Conclusion

We present AcroFOD for adapting a detector under domain shift with only a few samples in the target domain. Our adaptive optimization for directive data augmentation helps expand the limited target data to cover the data distribution of the target domain. Our method achieves significant gains in model robustness over existing baselines in the few-shot domain adaptation setting, indicating that AcroFOD can mitigate the effect of domain shifts caused by various changes. Through the ablation study, we gain insight into how adaptive optimization and data augmentation from a cross-domain perspective help the model perform better. We hope this adaptive method will benefit future progress in robust cross-domain few-shot object detection.

Acknowledgment. This work was supported partially by the NSFC (U21A20471, U1911401, U1811461), Guangdong NSF Project (No.2022A1515011254, 2020B1515120085, 2018B030312002), Guangzhou Research Project (201902010037), and the Key-Area Research and Development Program of Guangzhou (202007030004).

References

  1. Amato, G., Ciampi, L., Falchi, F., Gennaro, C., Messina, N.: Learning pedestrian detection from virtual worlds. In: International Conference on Image Analysis and Processing. pp. 302-312. Springer (2019) 2, 9

  2. Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M.: Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934 (2020) 4

  3. Chen, C., Xie, W., Huang, W., Rong, Y., Ding, X., Huang, Y., Xu, T., Huang, J.: Progressive feature alignment for unsupervised domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 627-636 (2019) 6

  4. Chen, C., Zheng, Z., Ding, X., Huang, Y., Dou, Q.: Harmonizing transferability and discriminability for adapting object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8869-8878 (2020) 3

  5. Chen, H., Wang, Y., Wang, G., Qiao, Y.: Lstd: A low-shot transfer detector for object detection. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 32 (2018) 2

  6. Chen, T.I., Liu, Y.C., Su, H.T., Chang, Y.C., Lin, Y.H., Yeh, J.F., Chen, W.C., Hsu, W.H.: Dual-awareness attention for few-shot object detection. arXiv preprint arXiv:2102.12152 (2021) 2

  7. Chen, W.Y., Liu, Y.C., Kira, Z., Wang, Y.C.F., Huang, J.B.: A closer look at few-shot classification. In: International Conference on Learning Representations (2018) 2

  8. Chen, Y., Li, W., Sakaridis, C., Dai, D., Van Gool, L.: Domain adaptive faster r-cnn for object detection in the wild. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3339-3348 (2018) 3, 10, 11

  9. Chen, Y., Li, Y., Kong, T., Qi, L., Chu, R., Li, L., Jia, J.: Scale-aware automatic augmentation for object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9563-9572 (2021) 8

  10. Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3213-3223 (2016) 1, 9

  11. Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., Le, Q.V.: Autoaugment: Learning augmentation strategies from data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 113-123 (2019) 4

  12. Dvornik, N., Mairal, J., Schmid, C.: Modeling visual context is key to augmenting object detection datasets. In: Proceedings of the European Conference on Computer Vision. pp. 364-380 (2018) 4, 7

  13. Dwibedi, D., Misra, I., Hebert, M.: Cut, paste and learn: Surprisingly easy synthesis for instance detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1301-1310 (2017) 4

  14. Fan, Q., Zhuo, W., Tang, C.K., Tai, Y.W.: Few-shot object detection with attention-rpn and multi-relation detector. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4013-4022 (2020) 3

  15. Fang, H.S., Sun, J., Wang, R., Gou, M., Li, Y.L., Lu, C.: Instaboost: Boosting instance segmentation via probability map guided copy-pasting. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 682-691 (2019) 4, 7

  16. Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: International Conference on Machine Learning. pp. 1180-1189. PMLR (2015) 3

  17. Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? the kitti vision benchmark suite. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3354-3361. IEEE (2012) 1, 9

  18. Ghiasi, G., Cui, Y., Srinivas, A., Qian, R., Lin, T.Y., Cubuk, E.D., Le, Q.V., Zoph, B.: Simple copy-paste is a strong data augmentation method for instance segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2918-2928 (2021) 4, 7

  19. Gopalan, R., Li, R., Chellappa, R.: Domain adaptation for object recognition: An unsupervised approach. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 999-1006. IEEE (2011) 2, 3

  20. Gretton, A., Borgwardt, K.M., Rasch, M.J., Schölkopf, B., Smola, A.: A kernel two-sample test. The Journal of Machine Learning Research 13, 723-773 (2012) 6

  21. Guo, H., Pasunuru, R., Bansal, M.: Multi-source domain adaptation for text classification via distancenet-bandits. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 34, pp. 7830-7838 (2020) 6

  22. He, Z., Zhang, L.: Multi-adversarial faster-rcnn for unrestricted object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 6668-6677 (2019) 3

  23. Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., et al.: The many faces of robustness: A critical analysis of out-of-distribution generalization. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 8340-8349 (2021) 4

  24. Hendrycks, D., Mu, N., Cubuk, E.D., Zoph, B., Gilmer, J., Lakshminarayanan, B.: Augmix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781 (2019) 4

  25. Hu, T., Yang, P., Zhang, C., Yu, G., Mu, Y., Snoek, C.G.: Attention-based multi-context guiding for few-shot semantic segmentation. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 8441-8448 (2019) 2

  26. Jocher, G.: ultralytics/yolov5: v3.0 - third release. Zenodo (2020) 10

  27. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Proceedings of the European Conference on Computer Vision. pp. 694-711. Springer (2016) 10

  28. Johnson-Roberson, M., Barto, C., Mehta, R., Sridhar, S.N., Rosaen, K., Vasudevan, R.: Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? In: 2017 IEEE International Conference on Robotics and Automation (ICRA). pp. 746-753. IEEE (2017) 2, 9

  29. Kim, T., Jeong, M., Kim, S., Choi, S., Kim, C.: Diversify and match: A domain adaptive representation learning paradigm for object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12456-12465 (2019) 3

  30. Lim, S., Kim, I., Kim, T., Kim, C., Kim, S.: Fast autoaugment. Advances in Neural Information Processing Systems 32, 6665-6675 (2019) 4

  31. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: Proceedings of the European Conference on Computer Vision. pp. 740-755. Springer (2014) 2, 9, 10

  32. Luo, Y., Zheng, L., Guan, T., Yu, J., Yang, Y.: Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2507-2516 (2019) 6

  33. Michaelis, C., Bethge, M., Ecker, A.: One-shot segmentation in clutter. In: International Conference on Machine Learning. pp. 3549-3558. PMLR (2018) 2

  34. Ramamonjison, R., Banitalebi-Dehkordi, A., Kang, X., Bai, X., Zhang, Y.: Simrod: A simple adaptation method for robust object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3570-3579 (2021) 2, 3, 4, 10, 11, 13

  35. Rezaeianaran, F., Shetty, R., Aljundi, R., Reino, D.O., Zhang, S., Schiele, B.: Seeking similarities over differences: Similarity-based domain alignment for adaptive object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 9204-9213 (2021) 2, 3, 10, 11

  36. Sahoo, A., Shah, R., Panda, R., Saenko, K., Das, A.: Contrast and mix: Temporal contrastive video domain adaptation with background mixing. Advances in Neural Information Processing Systems 34, 23386-23400 (2021) 4

  37. Saito, K., Ushiku, Y., Harada, T., Saenko, K.: Strong-weak distribution alignment for adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6956–6965 (2019) 3

  38. Saito, K., Ushiku, Y., Harada, T., Saenko, K.: Strong-weak distribution alignment for adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6956–6965 (2019) 10, 11

  39. Sakaridis, C., Dai, D., Van Gool, L.: Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision 126(9), 973-992 (2018) 1, 9

  40. Tranheden, W., Olsson, V., Pinto, J., Svensson, L.: Dacs: Domain adaptation via cross-domain mixed sampling. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 1379-1389 (2021) 4

  41. Tzeng, E., Hoffman, J., Saenko, K., Darrell, T.: Adversarial discriminative domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7167-7176 (2017) 1, 2, 3, 10

  42. Wang, T., Zhang, X., Yuan, L., Feng, J.: Few-shot adaptive faster r-cnn. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7173-7182 (2019) 2, 3, 4, 9, 10, 11, 12

  43. Wang, X., Huang, T.E., Darrell, T., Gonzalez, J.E., Yu, F.: Frustratingly simple few-shot object detection. arXiv preprint arXiv:2003.06957 (2020) 10, 11

  44. Xu, C.D., Zhao, X.R., Jin, X., Wei, X.S.: Exploring categorical regularization for domain adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11724-11733 (2020) 2, 3

  45. Xu, M., Wang, H., Ni, B., Tian, Q., Zhang, W.: Cross-domain detection via graph-induced prototype alignment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12355-12364 (2020) 9, 11

  46. Yue, X., Zheng, Z., Zhang, S., Gao, Y., Darrell, T., Keutzer, K., Vincentelli, A.S.: Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13834-13844 (2021) 2

  47. Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: Cutmix: Regularization strategy to train strong classifiers with localizable features. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 6023-6032 (2019) 4

  48. Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 (2017) 4

  49. Zhang, J., Chen, Z., Huang, J., Lin, L., Zhang, D.: Few-shot structured domain adaptation for virtual-to-real scene parsing. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops. pp. 0-0 (2019) 2

  50. Zheng, Y., Huang, D., Liu, S., Wang, Y.: Cross-domain object detection through coarse-to-fine feature adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13766-13775 (2020) 3

  51. Zhu, X., Pang, J., Yang, C., Shi, J., Lin, D.: Adapting object detectors via selective cross-domain alignment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 687-696 (2019) 3

  52. Zhuang, C., Han, X., Huang, W., Scott, M.: ifan: Image-instance full alignment networks for adaptive object detection. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 34, pp. 13122-13129 (2020) 2

  53. Zou, H., Zhou, Y., Yang, J., Liu, H., Das, H.P., Spanos, C.J.: Consensus adversarial domain adaptation. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 5997-6004 (2019) 2