
Adaptive Adversarial Network for Source-free Domain Adaptation

Haifeng Xia†, Handong Zhao‡, and Zhengming Ding†
†Department of Computer Science, Tulane University, USA
‡Adobe Research, San Jose, USA

hxia@tulane.edu, hazhao@adobe.com, zding1@tulane.edu

Abstract

Unsupervised Domain Adaptation tackles knowledge transfer given the coexistence of a well-annotated source domain and unlabeled target instances. However, the source domain is not always accessible in many practical applications due to data privacy or the insufficient memory of small devices. This scenario, defined as Source-free Domain Adaptation, only allows access to the well-trained source model for target learning. To address the challenge of source data unavailability, we develop an Adaptive Adversarial Network (A$^2$Net) including three components. Specifically, the first one, named Adaptive Adversarial Inference, seeks a target-specific classifier to advance the recognition of samples that the provided source-specific classifier struggles to identify. Then, the Contrastive Category-wise Matching module exploits the positive relation between every two target images to enforce the compactness of the subspace for each category. Thirdly, Self-Supervised Rotation facilitates the model to learn additional semantics from the target images themselves. Extensive experiments on popular cross-domain benchmarks verify the effectiveness of our proposed model in solving the adaptation task without any source data.

1. Introduction

Recent years have witnessed promising achievements from the exploration of deep neural networks (DNNs) in practical scenarios, e.g., image classification, segmentation and detection [20, 47, 24, 48]. However, a DNN model easily suffers severe performance degradation when evaluated on test data (target domain) drawn from a different distribution than the training instances (source domain). Such discrepancy, termed domain shift [9, 36], results from varying environments or devices [28] and various image styles [49]. To tackle this challenge, unsupervised domain adaptation (UDA) has attracted increasing attention and achieved encouraging results with deep learning architectures.

Conventional UDA assumes the cross-domain data is


Figure 1. Schematic diagram of high-level source (black) and target (red) feature distributions from the trained source model. The target samples can be divided into two subsets: source-similar and source-dissimilar sets. Square and circle represent two different categories. $\mathbf{A}^2\mathbf{Net}$ adaptively learns a new classifier (dashed) based on the frozen classifier (solid) trained in source domain.

available during model training, so that the domain discrepancy can be effectively measured and eliminated [7, 39, 42]. Based on this assumption, the mainstream solutions of UDA are roughly divided into two paradigms. One branch attempts to transform source and target data into a high-level feature space with consistent statistical moments to achieve the alignment of their feature distributions [14, 26, 41, 3, 16]. In the representative maximum mean discrepancy (MMD) approach, the learned cross-domain features are enforced to share identical first-order moments. The other branch devotes more effort to the deployment of adversarial frameworks [27, 33, 2]. The core strategy is to exploit a feature generator to deceive the domain discriminator so that it fails to recognize which domain a feature comes from. Despite the successes of these methods, it is not hard to observe that they heavily depend on the co-existence of source and target data. However, abundant application scenarios cannot always meet this basic assumption of UDA due to data privacy and the memory constraints of small devices. For instance, the training benchmark of ImageNet [6] contains 14 million images occupying hundreds of gigabytes of storage, a huge burden for small-storage equipment. Moreover, many industries such as hospitals are

restricted from sharing their sensitive data with external sites.

The conflict between practical demands and the UDA setting motivates a novel research direction named Source-free Domain Adaptation, where we are only provided with the well-trained source model instead of well-annotated source data to achieve adaptation to target data. Recently, a few research efforts [21, 19] have started exploring this new scenario on the cross-domain classification task by assuming that the source classifier contains sufficient knowledge. Thus, they both attempt to directly adjust target features to fit the source classifier. Among them, SHOT [21], a simple yet efficient method, freezes the source classifier and integrates pseudo-label supervision and entropy minimization [46] to shorten the distance between target features and the source classification boundary. Similarly, MA [19] first considers the source classifier as an anchor to guide the generation of new target samples closer to the source domain and then adopts an adversarial strategy to achieve domain alignment. In addition, SoFA [43] adopts self-supervised reconstruction to extract more discriminative knowledge from the target images themselves to improve the classification ability of the model. However, when the source data is imbalanced or insufficient, the above methods with a frozen classifier become vulnerable due to the limited generalization of the source classifier. It is difficult for these approaches to move abundant target features with large variance into the small source classification boundary. For example, as illustrated in Figure 1, the source classifier (solid line) is trained on imbalanced data where the circle class has only a few data points. Restricted by the frozen classifier, this unfortunately leads to poor classification performance on the source-dissimilar set. From another perspective, we pose a question: "Can we seek a novel target-specific classifier during model optimization and adapt it to the target features?"

Along with this question, we propose a novel Adaptive Adversarial Network ($\mathbf{A}^2\mathbf{Net}$) to address Source-free Domain Adaptation. To achieve flexible adjustment of the classifier while preserving the original source knowledge, our work first introduces a novel target classifier and then exploits a dual-classifier design to achieve adversarial domain-level alignment and contrastive category-wise matching (CCM). Concretely, according to the predictions of the source and target classifiers, we adaptively divide target samples into two subsets: a source-similar set and a source-dissimilar one. By building such an adversarial relation between the dual classifier and the feature generator, $\mathbf{A}^2\mathbf{Net}$ gradually eliminates the significant difference between the source-similar and source-dissimilar sets and remedies the defect of the frozen source classifier by updating the target classifier. To further learn discriminative features, our work labels the relation of a pair consisting of any two target images at three levels, positive, uncertain and negative, and develops contrastive category-wise matching over all

positive pairs to intensify their association. The main contributions of our work are summarized in three folds:

  • First, the proposed $\mathbf{A}^2\mathbf{Net}$ combines a new flexible classifier, available for optimization, with the frozen source classifier to form a dual-classifier architecture, which we use to adaptively distinguish source-similar target samples from source-dissimilar ones and achieve alignment across them.
  • Second, $\mathbf{A}^2\mathbf{Net}$ learns robust and discriminative features in a self-supervised learning manner. Specifically, the contrastive category-wise matching module relies on source knowledge to explore the association of the paired target features and enforce the positive relation to achieve category-wise alignment.
  • Finally, we further enhance the model to learn additional semantics through a self-supervised rotation. Experimental results on three benchmarks fully verify the effectiveness of $\mathbf{A}^2\mathbf{Net}$ for source-free scenario.

2. Related Works

Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a source domain to achieve classification of an unsupervised target domain [12]. The two domains share an identical label space yet have different distributions due to varying conditions [13, 35]. During the training stage, models have access to well-annotated source data and unlabeled target data. Thus, the intuition is to learn domain-invariant representations with all available data so that the well-trained source model can be directly applied to the target data. Recently, many efforts [46, 38] have made significant progress by designing various distribution alignment criteria. For domain-level alignment, several works [26, 31] attempt to match various-order moments across source and target distributions, while others [33, 46] introduce a discriminator to set up an adversarial game between the feature generator and the discriminator. As for category-wise alignment, CAN [14] exploits the pseudo-labels of target samples to reduce the intra-class distance and learn more compact category subspaces. In terms of sample-wise alignment, a few works [18, 5] extract and optimize sample-to-sample associations via optimal transport. Although these methods have indeed given models powerful adaptation ability, they heavily rely on the availability of both domains' data, which conflicts with real applications due to data privacy and the memory constraints of small devices. To surmount this bottleneck of UDA, we consider source-free domain adaptation and develop an adaptive adversarial network.

Source-free Domain Adaptation provides the well-trained source model and target data without any access to source data during the training stage. Under this condition, the


Figure 2. Overview of our Adaptive Adversarial Network ( $\mathbf{A}^2\mathbf{Net}$ ) on solving source-free domain adaptation. Given the trained source model including feature extractor $F(\cdot)$ and source classifier $C_s(\cdot)$ , we transfer it to identify the target images without source data. To address such a challenge, $\mathbf{A}^2\mathbf{Net}$ first adaptively distinguishes source-similar target samples from source-dissimilar ones, and adopts soft-adversarial manner with the introduced target classifier to eliminate their discrepancy. Second, our method explores the contrastive category-wise matching (CCM) to reinforce the relation of positive paired samples. Third, $\mathbf{A}^2\mathbf{Net}$ exploits self-supervised rotation to learn more robust and discriminative features.

conventional UDA methods become invalid since they fail to match source features with target ones. Most recently, [19, 17, 21] discovered that the trained source model encodes rich knowledge of the source feature distribution. Thus, with the supervision of the source classifier, [19, 17] attempt to produce novel target samples closer to the source domain and then align the novel and original target instances in the high-level feature space in an adversarial manner. Similarly, [21] freezes the source classifier and applies pseudo-labels to optimize the feature generator, which aims to move target features into the unseen source feature domain. However, significant domain discrepancy or an imbalanced source distribution negatively affects the generalization of the source model, which increases the difficulty of adapting target features to the source classifier. Differently, our work introduces a novel flexible target classifier in a dual-classifier architecture with the frozen source classifier. The dual classifier first helps to adaptively discover the source-similar and source-dissimilar target sets, and then plays against the feature generator to achieve alignment across the two subsets. In addition, we adopt self-supervised mechanisms, i.e., category-wise matching and rotation recognition, to learn more discriminative representations.

3. The Proposed Method

3.1. Preliminaries

Given the well-annotated source domain $\mathcal{D}_s = \{(x_i^s, y_i^s)\}$ and unlabeled target instances $\mathcal{D}_t = \{x_i^t\}$, conventional UDA methods [40, 23, 45, 34] attempt to eliminate the domain discrepancy by training a model with the available cross-domain data. However, practical applications are sometimes restricted from accessing the original raw source data due to data privacy and/or the memory constraints of small devices, which motivates the more challenging Source-free Domain Adaptation. When adapting to the target domain in this

novel scenario, we only deploy the well-trained source model, including the feature extractor $F(\cdot)$ and classifier $C_s(\cdot)$, to recognize target samples without any source instances for explicit cross-domain alignment. Since our main focus is how to adapt the model to the target classification task, we follow the protocol of [21] to train the source model by optimizing $F(\cdot)$ and $C_s(\cdot)$ under the supervision of source annotations, and omit the specific description of this part.

3.2. $\mathbf{A}^2\mathrm{Net}$ : Adaptive Adversarial Network

As investigated in Figure 1, considerable domain discrepancy results in a mismatch of the feature distributions across the source and target domains. Fortunately, for each category there exist some ready-to-recognize target instances similar to the source distribution. Thus, the target high-level features can be divided into two types: source-similar features and source-dissimilar ones. The source classifier $C_s(\cdot)$ confidently identifies source-similar samples. However, it struggles to provide accurate labels for the remainder, especially when trained on insufficient source data. Under this condition, the frozen source classifier in SHOT [21] becomes invalid for classifying source-dissimilar features, since it is difficult to adapt abundant target features with large variance to a $C_s(\cdot)$ with limited generalization. To avoid this defect, we propose a novel method named Adaptive Adversarial Network (A$^2$Net) in Fig. 2, which instead develops a learnable classifier $C_t(\cdot)$ adapted to the target feature distribution. The introduced target classifier should not only identify source-similar target features as accurately as $C_s$ but also improve recognition of source-dissimilar ones. Along with this expectation, the first challenge is to distinguish source-similar features from source-dissimilar ones. However, this decision is non-trivial due to the difficulty of measuring the distance between data points and the class boundary in a high-dimensional feature space.

3.2.1 Soft-Adversarial Inference

Motivated by the voting strategy, we compare the outputs of the two classifiers to adaptively determine the type of each feature. Specifically, passing a target sample through the two classifiers in Figure 2 yields its pre-softmax category predictions $p_i^s = C_s(F(x_i^t)) \in \mathbb{R}^K$ and $p_i^t = C_t(F(x_i^t)) \in \mathbb{R}^K$, where $K$ is the number of classes. Subsequently, we activate the concatenation of $p_i^s$ and $p_i^t$ with the Softmax function $\sigma(\cdot)$ to obtain $p_{(i)}^{st} = \sigma([p_i^s; p_i^t]^\top) \in \mathbb{R}^{2K}$ and consider $\alpha_i^s = \sum_{k=1}^{K} p_{(i)k}^{st}$ and $\alpha_i^t = \sum_{k=K+1}^{2K} p_{(i)k}^{st}$ as voting scores. When $\alpha_i^s$ is larger than $\alpha_i^t$, the corresponding feature belongs to the source-similar set; otherwise, it is assigned to the other subset. This definition gives us a manner to optimize the target classifier and feature extractor with the following formulation:

$$
\min_{F, C_t} \; -\sum_{i=1}^{n_t} \mathbb{I}\left(\alpha_i^s > \alpha_i^t\right) \sigma\left(p_i^s\right) \log\left(\sigma\left(p_i^s\right)\right) - \sum_{i=1}^{n_t} \mathbb{I}\left(\alpha_i^s \leq \alpha_i^t\right) \sigma\left(p_i^t\right) \log\left(\sigma\left(p_i^t\right)\right), \tag{1}
$$

where $\mathbb{I}(\cdot)$ is the indicator function. However, such a constraint easily gives rise to a necessary concern: "What happens if $C_t(\cdot)$ generates a wrong prediction when $\alpha_i^s \leq \alpha_i^t$?" In this situation, the prediction tends to be far from the ground truth. Thus, the trade-off between accepting novel target knowledge and preserving well-learned source knowledge becomes important, and we rewrite Eq. (1) as:

$$
\mathcal{L}_c = -\sum_{i=1}^{n_t} \left( \alpha_i^s \, \sigma\left(p_i^s\right) \log\left(\sigma\left(p_i^s\right)\right) + \alpha_i^t \, \sigma\left(p_i^t\right) \log\left(\sigma\left(p_i^t\right)\right) \right),
$$

where $\alpha_{i}^{s}$ and $\alpha_{i}^{t}$ are frozen during optimization.
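As a concrete illustration of the voting scores and the weighted entropy loss $\mathcal{L}_c$, the following NumPy sketch implements the computation above (function and variable names are our own; in the full model the logits come from $C_s(F(\cdot))$ and $C_t(F(\cdot))$):

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def voting_scores(p_s, p_t):
    """alpha_i^s / alpha_i^t: softmax over the concatenated 2K logits,
    then sum the first / last K probabilities."""
    K = p_s.shape[1]
    p_st = softmax(np.concatenate([p_s, p_t], axis=1))  # (n, 2K)
    return p_st[:, :K].sum(axis=1), p_st[:, K:].sum(axis=1)

def loss_c(p_s, p_t):
    """The rewritten Eq. (1): each classifier's prediction entropy,
    weighted by its (frozen) voting score."""
    a_s, a_t = voting_scores(p_s, p_t)  # treated as constants
    ent = lambda p: -(softmax(p) * np.log(softmax(p))).sum(axis=1)
    return float((a_s * ent(p_s) + a_t * ent(p_t)).sum())
```

By construction, the two voting scores of each sample sum to one, so the loss smoothly interpolates between trusting $C_s$ and trusting $C_t$.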

From another perspective, we can also view the source-similar and source-dissimilar high-level features as distributed in two independent domains. Aligning them further reduces their discrepancy and leads to more discriminative features. In addition, the introduced target classifier $C_t(\cdot)$ eventually attains classification ability on source-similar features equivalent to that of $C_s(\cdot)$. Based on the dual-classifier design, we propose a Soft-Adversarial mechanism to address these demands with the formal objective:

$$
\min_{C_t} \mathcal{L}_{c'} = -\sum_{i=1}^{n_t} \left( \alpha_i^s \log\left(\sum_{k=1}^{K} p_{(i)k}^{st}\right) + \alpha_i^t \log\left(\sum_{k=K+1}^{2K} p_{(i)k}^{st}\right) \right),
$$
$$
\min_{F} \mathcal{L}_{c''} = -\sum_{i=1}^{n_t} \left( \alpha_i^t \log\left(\sum_{k=1}^{K} p_{(i)k}^{st}\right) + \alpha_i^s \log\left(\sum_{k=K+1}^{2K} p_{(i)k}^{st}\right) \right).
$$

To explain the Soft-Adversarial loss, we first note that $\alpha_i^{s/t}$ denotes the probability of sample $x_i$ belonging to the source-similar or source-dissimilar subset, with $\alpha_i^s + \alpha_i^t = 1$. In the extreme condition $\alpha_i^s \approx 1$, minimizing $\mathcal{L}_{c'}$ further reduces the discriminability of the target classifier $C_t(\cdot)$ for $x_i$. Meanwhile, the feature generator performs the inverse operation, mapping $x_i$ into a high-level representation similar to the source-dissimilar part by minimizing $\mathcal{L}_{c''}$. Benefiting from this adversarial game between the feature generator and the classifiers, we gradually align the source-similar and source-dissimilar sets and eliminate the difference between the classifiers.
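The two soft-adversarial objectives can be sketched in NumPy as below (names are ours; in practice $\mathcal{L}_{c'}$ updates only $C_t$ and $\mathcal{L}_{c''}$ only $F$ through their respective optimizers):

```python
import numpy as np

def soft_adversarial_losses(p_s, p_t):
    """L_c' (for C_t) weights each subset's log-mass by its own voting
    score; L_c'' (for F) swaps the weights, so the generator and the
    target classifier pull the subset assignment in opposite directions."""
    K = p_s.shape[1]
    z = np.concatenate([p_s, p_t], axis=1)
    e = np.exp(z - z.max(axis=1, keepdims=True))
    p_st = e / e.sum(axis=1, keepdims=True)            # (n, 2K)
    m_s, m_t = p_st[:, :K].sum(axis=1), p_st[:, K:].sum(axis=1)
    a_s, a_t = m_s.copy(), m_t.copy()                  # frozen voting scores
    L_cp = -(a_s * np.log(m_s) + a_t * np.log(m_t)).sum()   # min over C_t
    L_cpp = -(a_t * np.log(m_s) + a_s * np.log(m_t)).sum()  # min over F
    return float(L_cp), float(L_cpp)
```

Because the swapped weighting in $\mathcal{L}_{c''}$ places the larger weight on the smaller log-mass, it always upper-bounds $\mathcal{L}_{c'}$ for the same batch, which is exactly the tension the adversarial game exploits.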

3.2.2 Contrastive Category-wise Matching

The core motivation of adaptive adversarial inference is to discover source-similar target samples and achieve domain-level alignment across the source-similar and source-dissimilar sets. However, domain-invariant features learned with adversarial learning do not guarantee category-level matching. In addition, without annotations over the target domain, it is difficult to identify the category relationships among samples. The intuitive solution, directly assigning each sample a pseudo-label, easily harms model training, especially during the initialization stage, due to pseudo-label uncertainty. Inspired by contrastive learning [1], we instead exploit the discriminative dual classifier to explore the association of paired samples and achieve class-wise alignment in an unsupervised manner.

Concretely, each visual instance within a batch is mapped into the label space via the source classifier, $\mathcal{G}_i = \sigma(p_i^s) \in \mathbb{R}^K$, which we use to capture the similarity of any paired samples through $s_{ij} = \mathcal{G}_i^\top \mathcal{G}_j$ in Figure 2. A larger $s_{ij}$ indicates that the $i$-th and $j$-th samples belong to the same category with higher probability, and vice versa. However, we cannot confidently judge the relationship of pairs whose $s_{ij}$ lies in the middle interval. Thus, all pairs in each mini-batch are divided into three subsets, positive, uncertain and negative, by comparing $s_{ij}$ with the upper bound $\mu(t)$ and lower bound $\ell(t)$ defined as:

$$
\left\{ \begin{array}{l} \mu(t) = \mu_0 - \lambda_\mu \cdot t \\ \ell(t) = \ell_0 + \lambda_\ell \cdot t \\ 0 \leq \ell(t) \leq \mu(t) \leq 1 \end{array} \right. \qquad \gamma_{ij} = \left\{ \begin{array}{rl} 1, & s_{ij} > \mu(t) \\ -1, & s_{ij} < \ell(t) \\ 0, & \text{otherwise} \end{array} \right.
$$

where $\mu(t)$ and $\ell(t)$ are linear functions of the epoch $t$ starting from zero, $\mu_0$ and $\ell_0$ are the initial upper and lower bounds, respectively, and $\lambda_\mu$ and $\lambda_\ell$ separately control their decreasing and increasing rates. Pairs are definitively classified as positive ($\gamma_{ij} = 1$) or negative ($\gamma_{ij} = -1$) when $s_{ij} > \mu(t)$ or $s_{ij} < \ell(t)$, respectively. Otherwise, we temporarily neglect the ambiguous associations with $\ell(t) \leq s_{ij} \leq \mu(t)$ by setting $\gamma_{ij} = 0$. As $\mu(t)$ and $\ell(t)$ gradually change, our method adaptively makes the judgement for more pairs.
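The threshold schedule and pair labeling can be sketched as follows (a NumPy illustration; the default hyper-parameter values are those reported in Sec. 4.1, and `G` stacks the softmaxed source predictions $\mathcal{G}_i$ row-wise):

```python
import numpy as np

def pair_labels(G, t, mu0=0.95, ell0=0.45, lam_mu=9.9e-3, lam_ell=9.9e-4):
    """Return gamma (b, b) with 1 / -1 / 0 for positive / negative /
    uncertain pairs, using the epoch-dependent bounds mu(t) and ell(t)."""
    mu = mu0 - lam_mu * t
    ell = ell0 + lam_ell * t
    s = G @ G.T                       # s_ij = G_i^T G_j
    gamma = np.zeros_like(s, dtype=int)
    gamma[s > mu] = 1                 # confidently same category
    gamma[s < ell] = -1               # confidently different category
    return gamma, mu, ell
```

As $t$ grows, $\mu(t)$ falls and $\ell(t)$ rises, so an increasing fraction of pairs leaves the uncertain band and receives a definite label.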

To achieve class-wise alignment, the relation of positive pairs must be further intensified so that they learn more similar feature representations. Similar to [1], we develop the contrastive loss for each positive pair $(i, j)$ formulated as:

$$
\xi(i, j) = -\log \frac{\exp\left(s_{ij}\right)}{\sum_{v=1}^{b} \mathbb{I}(v \neq i)\, \left|\gamma_{iv}\right| \exp\left(s_{iv}\right)}, \tag{2}
$$

where $b$ is the batch size and $|\gamma_{iv}|$ is the absolute value of $\gamma_{iv}$. By the monotonicity of the logarithm, minimizing Eq. (2) drives $s_{ij} \to 1$, meaning that $\sigma(p_i^s)$ and $\sigma(p_j^s)$ follow more similar probability distributions. This property is transmitted to the output of the feature generator through the frozen source classifier, so that samples from the same class lie closer to each other in the high-level feature space. Thus, we apply Eq. (2) to all positive pairs and formulate the final contrastive objective:

$$
\min_{F} \mathcal{L}_p = \mathbb{I}\left[\mu(t) > \ell(t)\right] \sum_{i=1}^{b} \sum_{j=1, j \neq i}^{b} \mathbb{I}\left(\gamma_{ij} = 1\right) \xi(i, j). \tag{3}
$$

Note that Eq. (3) is inactive once $\mu(t) \leq \ell(t)$, since no new positive pairs can then be found to optimize it.
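Given the pair labels $\gamma$, the per-pair loss of Eq. (2) and the batch objective of Eq. (3) can be written as the following NumPy sketch (names ours):

```python
import numpy as np

def contrastive_loss(G, gamma):
    """L_p: for every positive pair (i, j), -log of exp(s_ij) over the
    exp-similarities of all decided pairs (|gamma_iv| = 1, v != i)."""
    s = G @ G.T
    b = s.shape[0]
    total = 0.0
    for i in range(b):
        decided = np.abs(gamma[i]) == 1
        decided[i] = False                  # exclude v = i
        denom = np.exp(s[i][decided]).sum()
        for j in range(b):
            if j != i and gamma[i, j] == 1:
                total += -np.log(np.exp(s[i, j]) / denom)
    return total
```

Each positive term is strictly positive whenever the denominator also contains negative pairs, so minimizing the sum pushes $s_{ij}$ of positive pairs toward 1 relative to the negatives.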

3.2.3 Self-Supervised Rotation

So far, we have mainly considered how to transfer knowledge to the target domain with only the guidance of the well-trained source model. However, a pure classification model heavily relies on the given source data, which often follows an imbalanced distribution, further limiting the generalization ability of the target classifier. To solve this, we explore self-supervised rotation over the target domain to augment the sample space, which enhances the learning of the feature extractor and target classifier. In other words, the model easily sees more variance per high-confidence predicted target sample. Following [44], we set four rotation degrees $\theta \in \{0^\circ, 90^\circ, 180^\circ, 270^\circ\}$ with corresponding 4-class rotation labels $y_r$. Within one batch, we randomly select a rotation label $y_r$ and obtain the new image $\hat{x}_i^t$ by rotating the original image $x_i^t$ by $90^\circ \times y_r$. In addition, we introduce the rotation classifier $C_r(\cdot)$ in Figure 2, which takes $F(\hat{x}_i^t)$ as input and predicts the rotation label. Finally, the cross-entropy loss measures the difference between the prediction and the ground-truth rotation as follows:

$$
\min_{F, C_r} \mathcal{L}_r = -\sum_{i=1}^{b} y_i^r \log\left(\sigma\left(C_r\left(F\left(\hat{x}_i^t\right)\right)\right)\right). \tag{4}
$$

By identifying the rotation degree, the model effectively captures the important visual signals from original images for object classification.
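The rotation pretext task can be sketched in NumPy as below (names ours; in the full model the `logits` would come from $C_r(F(\hat{x}))$):

```python
import numpy as np

def rotate_batch(x, y_r):
    """Rotate image i by 90 * y_r[i] degrees; x is (n, H, W, C) and
    y_r holds labels in {0, 1, 2, 3}."""
    return np.stack([np.rot90(img, k) for img, k in zip(x, y_r)])

def rotation_loss(logits, y_r):
    """Cross-entropy of Eq. (4) on the 4-way rotation prediction."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_p[np.arange(len(y_r)), y_r].sum())
```

The rotated copies cost no extra annotation, which is precisely why the pretext task helps when source supervision is unavailable.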

3.3. Overall Objective and Optimization

The above description has illustrated how our method works for source-free domain adaptation. The training mainly involves updating three modules (i.e., the feature generator $F(\cdot)$, rotation classifier $C_r(\cdot)$, and target classifier $C_t(\cdot)$) with the following overall objectives:

$$
\min_{F, C_r} \mathcal{L}_c + \mathcal{L}_{c''} + \mathcal{L}_p + \eta \mathcal{L}_r, \tag{5}
$$

$$
\min_{C_t} \mathcal{L}_c + \mathcal{L}_{c'}, \tag{6}
$$

where $\eta$ is the trade-off parameter. To achieve the adaptive adversarial operation, we alternately optimize the three modules in an iterative manner. First, the source and target classifiers take features from the generator as input to obtain class predictions, which we use to update the feature generator and rotation classifier via Eq. (5) with the target classifier $C_t(\cdot)$ frozen. Second, we only optimize the target classifier via Eq. (6) while fixing $F(\cdot)$ and $C_r(\cdot)$. Third, the adversarial training repeats these two steps until convergence or the maximum number of epochs is reached.
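The alternating scheme can be sketched in PyTorch as below (a toy sketch with the CCM and rotation terms omitted; module and optimizer names are ours). Step 1 updates the generator with $\mathcal{L}_c + \mathcal{L}_{c''}$ while only the generator's optimizer steps, so $C_t$ stays effectively frozen; step 2 updates $C_t$ with $\mathcal{L}_c + \mathcal{L}_{c'}$ while $F$ is held fixed:

```python
import torch

def soft_losses(F, C_s, C_t, x):
    """Compute L_c, L_c', L_c'' for a batch x (Sec. 3.2.1)."""
    p_s, p_t = C_s(F(x)), C_t(F(x))
    K = p_s.shape[1]
    p_st = torch.softmax(torch.cat([p_s, p_t], dim=1), dim=1)
    m_s, m_t = p_st[:, :K].sum(1), p_st[:, K:].sum(1)
    a_s, a_t = m_s.detach(), m_t.detach()          # frozen voting scores
    ent = lambda p: -(torch.softmax(p, 1) * torch.log_softmax(p, 1)).sum(1)
    L_c = (a_s * ent(p_s) + a_t * ent(p_t)).sum()
    L_cp = -(a_s * m_s.log() + a_t * m_t.log()).sum()
    L_cpp = -(a_t * m_s.log() + a_s * m_t.log()).sum()
    return L_c, L_cp, L_cpp

def alternating_step(F, C_s, C_t, x, opt_f, opt_ct):
    # Step 1: update F (and C_r in the full model) via Eq. (5).
    L_c, _, L_cpp = soft_losses(F, C_s, C_t, x)
    opt_f.zero_grad(); (L_c + L_cpp).backward(); opt_f.step()
    # Step 2: update C_t via Eq. (6) with F fixed.
    L_c, L_cp, _ = soft_losses(F, C_s, C_t, x)
    opt_ct.zero_grad(); (L_c + L_cp).backward(); opt_ct.step()
```

Only the optimizer that is stepped in each phase changes parameters, which realizes the "frozen" alternation without explicitly toggling `requires_grad`.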

4. Experiments

4.1. Experimental Details

$\S$ Datasets. Office-31 [30], a popular benchmark, contains three domains: Amazon (A), Webcam (W) and DSLR (D), with 2,817 (A), 795 (W) and 498 (D) images, respectively; the three domains share an identical label space of 31 categories. Adapting a model from source to target on Office-31 not only requires eliminating the domain shift but also overcoming the additional challenge of data-scale imbalance, e.g., task $\mathbf{A} \rightarrow \mathbf{D}$. Office-Home [34] consists of 15,500 images collected from four domains, Realworld (Rw), Clipart (Cl), Art (Ar) and Product (Pr), with 65 categories per domain. The challenges of its 12 domain adaptation tasks mainly result from more object categories and considerable domain discrepancy. VisDA [29] is a challenging large-scale benchmark with 12 classes, typically used to evaluate adaptation from a synthetic domain to a real one. The source domain involves 152 thousand synthetic images produced by 3D rendering models under various conditions. The validation set is considered the target domain, containing 55 thousand real object images from MS-COCO [22].

§ Implementation. According to [21], for the source model, we separately consider Resnet-50 and Resnet-101 as backbones to extract high-level features from two object

Table 1. Comparison of object classification accuracy (%) for source-free domain adaptation on Office-31. The best accuracy among source-free methods is in bold, while the highest result among source-needed methods is underlined.

| Method | A→D | A→W | D→A | D→W | W→A | W→D | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| *Source-Needed* | | | | | | | |
| ResNet [10] | 68.9 | 68.4 | 62.5 | 96.7 | 60.7 | 99.3 | 76.1 |
| DANN [8] | 79.7 | 82.0 | 68.2 | 96.9 | 67.4 | 99.1 | 82.2 |
| SAFN [38] | 90.7 | 90.1 | 73.0 | 98.6 | 70.2 | 99.8 | 87.1 |
| CDAN [25] | 92.9 | 94.1 | 71.0 | 98.6 | 69.3 | 100.0 | 87.7 |
| BNM [4] | 90.3 | 91.5 | 70.9 | 98.5 | 71.6 | 100.0 | 87.1 |
| MCC [11] | 95.6 | 95.4 | 72.6 | 98.6 | 73.9 | 100.0 | 89.4 |
| SRDC [32] | 95.8 | 95.7 | 76.7 | 99.2 | 77.1 | 100.0 | 90.8 |
| *Source-Free* | | | | | | | |
| SFDA [15] | 92.2 | 91.1 | 71.0 | 98.2 | 71.2 | 99.5 | 87.2 |
| SDDA [17] | 85.3 | 82.5 | 66.4 | 99.0 | 67.7 | 99.8 | 83.5 |
| SoFA [43] | 73.9 | 71.7 | 53.7 | 96.7 | 54.6 | 98.2 | 74.8 |
| SHOT [21] | 94.0 | 90.1 | 74.7 | 98.4 | 74.3 | 99.9 | 88.6 |
| Ours | 94.5 | 94.0 | 76.7 | 99.2 | 76.1 | 100.0 | 90.1 |

datasets and VisDA, and replace the original last FC layer with a new bottleneck layer followed by Batch Normalization (BN). The source classifier $C_s$ consists of one FC layer and a weight normalization layer. During adaptation, we introduce the target classifier $C_t$ with the same architecture as $C_s$, and the rotation classifier $C_r$ with two FC layers. In addition, we initialize $C_t$ with the parameters of $C_s$ perturbed by Gaussian noise drawn from $N(0, I)$. As the optimizer, we adopt SGD with momentum 0.9 and weight decay $1e^{-3}$. The initial learning rates on Office-31/Office-Home are $1e^{-3}$ for the pre-trained backbone and $1e^{-2}$ for the newly added components; for VisDA they are $1e^{-4}$ and $1e^{-2}$, respectively. Moreover, we set identical bounds for all experiments: $\mu_0 = 0.95$, $\ell_0 = 0.45$, $\lambda_\mu = 9.9e^{-3}$ and $\lambda_\ell = 9.9e^{-4}$. The source code of this work is available online.
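The noisy-copy initialization of $C_t$ described above can be sketched in PyTorch as follows (the unit noise scale matches the stated $N(0, I)$; the `std` parameter is our own generalization):

```python
import copy
import torch

def init_target_classifier(C_s, std=1.0):
    """Return C_t: a deep copy of C_s with i.i.d. Gaussian noise
    (mean 0, the given std) added to every parameter."""
    C_t = copy.deepcopy(C_s)
    with torch.no_grad():
        for p in C_t.parameters():
            p.add_(torch.randn_like(p) * std)
    return C_t
```

Starting $C_t$ near $C_s$ preserves the source knowledge at initialization, while the noise breaks the symmetry so the two classifiers can disagree and drive the voting scores.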

§ Baselines. The comparisons include two categories of domain adaptation algorithms. One is vanilla domain adaptation, which requires source and target data simultaneously to solve the domain shift, e.g., ResNet [10], DANN [8], SAFN [38], CDAN [25], SRDC [32], BNM [4] and MCC [11]. Additionally, we compare with recent state-of-the-art source-free domain adaptation models, i.e., SFDA [15], SHOT [21], SDDA [17] and SoFA [43]. Note that since MA [19] needs to generate additional target samples to solve the source-free task, we do not compare with it.

4.2. Comparison Results

Tables 1-3 report the object classification results on Office-31, Office-Home and VisDA, respectively. Our proposed Adaptive Adversarial Network ($\mathbf{A}^2\mathbf{Net}$) achieves the highest average accuracy across the three benchmarks among source-free domain adaptation methods, which illustrates that the design of $\mathbf{A}^2\mathbf{Net}$ effectively transfers knowledge from the source model alone to assist target data recognition. In addition, three important observations emerge from careful comparisons with these competitors.

First of all, $\mathbf{A}^2\mathbf{Net}$ gives the well-trained source model more powerful adaptation ability on the unsupervised target domain, especially for small-scale source domains. For example, with $\mathbf{D}$ and $\mathbf{W}$ as source domains on Office-31, our approach outperforms the second-highest accuracy, from SHOT, by 2.0% and 1.8% when adapting to the target domain $\mathbf{A}$. There exists a serious data-scale imbalance between source and target domains here, i.e., $\mathbf{D}$ (498) vs. $\mathbf{A}$ (2,817) and $\mathbf{W}$ (795) vs. $\mathbf{A}$. A classifier trained on a small-scale source domain has insufficient generalization ability to be effectively applied to a large-scale target domain, so it is difficult for SHOT with its frozen classifier to accurately move abundant target features into the source classification boundary. In contrast, our $\mathbf{A}^2\mathbf{Net}$ adopts a flexible target classifier with adversarial training to adapt it to the target features, the main reason for our success on these two tasks. Moreover, $\mathbf{A}^2\mathbf{Net}$ beats several UDA-based methods such as SAFN and BNM by a large margin, which means that even without access to the source data, our method exploits the finite source knowledge to achieve better adaptation.

Second, our proposed method also effectively overcomes the negative influence of significant domain discrepancy. There exists a significant domain shift between Ar and Cl because of the considerable difference in image styles. However, $\mathbf{A}^2\mathbf{Net}$ surpasses SFDA by 10% on this adaptation task, since our method adaptively distinguishes source-similar target samples from source-dissimilar ones and uses the adversarial mechanism to gradually eliminate the domain discrepancy. Moreover, as the number of object categories increases, all methods suffer performance degradation on Office-Home compared with their Office-31 results. But the contrastive category-wise matching relies on the constraint over positive paired samples to learn an explicit classification boundary, so $\mathbf{A}^2\mathbf{Net}$ still achieves the best classification accuracy among the baselines.

Third, the experimental results in Table 3 fully demonstrate that our designed algorithm solves source-free domain adaptation on a large-scale benchmark. Specifically, $\mathbf{A}^2\mathbf{Net}$ obtains higher classification accuracy than the other state-of-the-art methods on most categories of VisDA and identifies several confusing objects, such as bus and car, more accurately.

4.3. Empirical Analysis

$\S$ Feature Visualization & Confusion Matrix. According to the experimental results in Table 1 and the working

Table 2. Comparisons of Object Classification Accuracy (%) of Source-free Domain Adaptation on Office-Home. The best accuracy among source-free methods is highlighted in bold, while underline emphasizes the highest result among source-needed methods.

| Setting | Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Source-Needed | ResNet [10] | 46.3 | 67.5 | 75.9 | 59.1 | 59.9 | 62.7 | 58.2 | 41.8 | 74.9 | 67.4 | 48.2 | 74.2 | 61.3 |
| | DANN [8] | 45.6 | 59.3 | 70.1 | 47.0 | 58.5 | 60.9 | 46.1 | 43.7 | 68.5 | 63.2 | 51.8 | 76.8 | 57.6 |
| | SAFN [37] | 52.0 | 71.7 | 76.3 | 64.2 | 69.9 | 71.9 | 63.7 | 51.4 | 77.1 | 70.9 | 57.1 | 81.5 | 67.3 |
| | CDAN [25] | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8 |
| | BNM [4] | 52.3 | 73.9 | 80.0 | 63.3 | 72.9 | 74.9 | 61.7 | 49.5 | 79.7 | 70.5 | 53.6 | 82.2 | 67.9 |
| | SRDC [32] | 52.3 | 76.3 | 81.0 | 69.5 | 76.2 | 78.0 | 68.7 | 53.8 | 81.7 | 76.3 | 57.1 | 85.0 | 71.3 |
| Source-Free | SFDA [15] | 48.4 | 73.4 | 76.9 | 64.3 | 69.8 | 71.7 | 62.7 | 45.3 | 76.6 | 69.8 | 50.5 | 79.0 | 65.7 |
| | SoFA [43] | - | 74.1 | 77.6 | - | 71.8 | 75.1 | - | - | - | - | - | - | - |
| | SHOT [21] | 57.1 | 78.1 | 81.5 | 68.0 | 78.2 | 78.1 | 67.4 | 54.9 | 82.2 | 73.3 | 58.8 | 84.3 | 71.8 |
| | Ours | 58.4 | 79.0 | 82.4 | 67.5 | 79.3 | 78.9 | 68.0 | 56.2 | 82.9 | 74.1 | 60.5 | 85.0 | 72.8 |

Table 3. Comparisons of Object Classification Accuracy (%) of Source-free Domain Adaptation on VisDA. The best accuracy among source-free methods is highlighted in bold, while underline emphasizes the highest result among source-needed methods.

| Setting | Method | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Per-Class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Source-Needed | ResNet [10] | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4 |
| | DANN [8] | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4 |
| | CDAN [25] | 85.2 | 66.9 | 83.0 | 50.8 | 84.2 | 74.9 | 88.1 | 74.5 | 83.4 | 76.0 | 81.9 | 28.0 | 73.9 |
| | SAFN [38] | 93.6 | 61.3 | 84.1 | 70.6 | 94.1 | 79.0 | 91.8 | 79.6 | 89.9 | 55.6 | 89.0 | 24.4 | 76.1 |
| | MCC [11] | 88.7 | 80.3 | 80.5 | 71.5 | 90.1 | 93.2 | 85.0 | 71.6 | 89.4 | 73.8 | 85.0 | 36.9 | 78.8 |
| Source-Free | SFDA [15] | 86.9 | 81.7 | 84.6 | 63.9 | 93.1 | 91.4 | 86.6 | 71.9 | 84.5 | 58.2 | 74.5 | 42.7 | 76.7 |
| | SoFA [43] | - | - | - | - | - | - | - | - | - | - | - | - | 64.6 |
| | SHOT [21] | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 91.5 | 89.1 | 86.3 | 58.2 | 82.9 |
| | Ours | 94.0 | 87.8 | 85.6 | 66.8 | 93.7 | 95.1 | 85.8 | 81.2 | 91.6 | 88.2 | 86.5 | 56.0 | 84.3 |

mechanism of the model, it is easy to notice, when compared with other state-of-the-art baselines, that $\mathbf{A}^2\mathbf{Net}$ is insensitive to the mismatch of cross-domain data scale, where the source domain contains far fewer instances than the target domain. To further explore how our method achieves this, we provide the visualization of embedding features and confusion matrices in Figure 3 for large- and small-scale source domains. Concretely, the well-trained target model of $\mathbf{A}^2\mathbf{Net}$ and the source-only ResNet are frozen to extract the high-level features before the classifier from the unseen source domain and the unlabeled target one. We carry out the experiments on Office-31 since it presents the imbalanced data-scale challenge, i.e., A (2,817) vs. D (498), and A (2,817) vs. W (795). The comparison between Fig. 3 (a) and (b) illustrates that the model trained on the large-scale source domain has stronger generalization ability than that trained on the smaller-scale one. With A as the source domain, $\mathbf{A}^2\mathbf{Net}$ easily distinguishes source-similar target features from source-dissimilar ones and gradually aligns these two parts via the soft-adversarial mechanism. Thus, after adaptation, the target features of each category (produced by $\mathbf{A}^2\mathbf{Net}$) almost all lie within the corresponding source classification boundary, under which condition our target classifier, being similar to the original source one, identifies them exactly. However, with the source model trained on D, even after adaptation, abundant target features remain far away from the corresponding source class, so the original source classifier can hardly make accurate decisions on them. The flexible target classifier of our method thus fully shows the importance of its optimization, which enables the model to adapt itself to the target features instead of only adjusting feature learning as SHOT [21] does. Benefiting from the dual-classifier design, our work achieves the highest classification accuracy on task $\mathbf{D} \rightarrow \mathbf{A}$ in Table 1. In addition, the confusion matrix derived from task $\mathbf{W} \rightarrow \mathbf{A}$ demonstrates that our method learns more compact category subspaces by intensifying the association of positive pairs with the contrastive loss to achieve category-wise matching across the source-similar and source-dissimilar sets.
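Confusion matrices like those in Figure 3 (e)-(f) can be reproduced with a few lines of NumPy once the frozen model's class predictions are available; the sketch below assumes integer ground-truth labels and predictions, and the helper name is ours.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Rows index the ground-truth class, columns the predicted class;
    entry (i, j) counts target samples of class i predicted as class j."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy check on 3 classes: one class-1 sample is confused as class 0.
y_true = np.array([0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 0, 2])
cm = confusion_matrix(y_true, y_pred, num_classes=3)
# Row-normalizing puts the per-class accuracy on the diagonal.
per_class_acc = cm.diagonal() / cm.sum(axis=1)
```

A more compact (diagonal-dominant) matrix indicates the compact per-category subspaces discussed above; off-diagonal mass shows which pairs of classes the model still confuses.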

§ Ablation Study, Parameter Analysis & Training Stability. Our $\mathbf{A}^2\mathbf{Net}$ mainly consists of three modules: soft-adversarial inference, contrastive category-wise matching and self-supervised rotation, which support the model adaptation from different perspectives. Therefore, we separately remove each module to investigate the change of classification accuracy on the two tasks $\mathbf{D} \rightarrow \mathbf{A}$ and $\mathbf{W} \rightarrow \mathbf{A}$. According to the experimental results in Fig. 4 (b), we conclude that the soft-adversarial mechanism has an important and positive influence on improving the generalization of the model. Without the adversarial operation, it becomes difficult to effectively promote adaptation of the target classifier, so that the model heavily relies on the


(Figure 3 panels: (a) ResNet $(A\to W)$; (b) $\mathbf{A}^2\mathbf{Net}(A\to W)$; (c) ResNet $(D\to A)$; (d) $\mathbf{A}^2\mathbf{Net}(D\to A)$; (e) ResNet $(W\to A)$; (f) $\mathbf{A}^2\mathbf{Net}(W\to A)$; images omitted.)

Figure 3. Result of Feature Visualization and Confusion Matrix. (a)-(d) show high-level source (red) and target (blue) features generated by the source-only model (ResNet-50) and our $\mathbf{A}^2\mathbf{Net}$. Note that we only exploit the source data to draw the t-SNE, without any use of it during the adaptation stage. (e) and (f) are the confusion matrices comparing the ground truth and the category predictions from ResNet and our model, respectively.
(Figure 4 panels: (a) Parameter Analysis; (b) Ablation Study; (c) Training Stability; images omitted.)

Figure 4. (a) Parameter Analysis records the object recognition accuracy as $\eta$ varies. (b) Ablation Study shows the influence of removing each constraint on the performance of our model. (c) Training Stability reports the object recognition ability of the target classifier as the number of training epochs increases.

performance of the frozen source classifier. Similarly, removing the contrastive category-wise matching also results in performance degradation, since this module mainly exploits the existing knowledge of the source model to explore the relation between any two target samples and controls the compactness of each target class subspace via the contrastive loss over all positive pairs. In terms of the rotation design, it makes a small contribution to the performance by learning additional semantics from the target images in a self-supervised manner. Moreover, we can further promote the adaptation ability of the model by adjusting the parameter $\eta$, which balances the rotation constraint against the others. Fig. 4 (a) reports the relation between the varying $\eta$ and the target classification accuracy: both tasks of Office-Home achieve their highest performance with $\eta = 0.3$. Finally, considering the adversarial game between the feature generator and the dual classifier, we show the change of object recognition accuracy over training epochs in Fig. 4 (c). With the adversarial training manner, the target classifier gradually and stably improves its classification ability.
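As a rough illustration of the self-supervised rotation task and of how $\eta$ weights it in the overall objective, the sketch below builds a four-way rotation classification batch with NumPy. The function names and the simple weighted sum are our assumptions about the general recipe, not the paper's exact objective.

```python
import numpy as np

def make_rotation_batch(images):
    """Given a batch of (N, H, W) images, return the four rotated
    copies (0, 90, 180, 270 degrees) and their pretext labels {0,1,2,3}."""
    rotated, labels = [], []
    for k in range(4):
        for img in images:
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

def total_loss(loss_adv, loss_con, loss_rot, eta=0.3):
    # eta trades the rotation pretext term off against the other two;
    # 0.3 is the best-performing value reported in Fig. 4 (a).
    return loss_adv + loss_con + eta * loss_rot

imgs = np.random.rand(8, 32, 32)
rot_imgs, rot_labels = make_rotation_batch(imgs)
```

A rotation head trained to predict `rot_labels` from `rot_imgs` supplies the extra self-supervised semantics; at $\eta = 0$ the term vanishes, recovering the ablated model in Fig. 4 (b).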

5. Conclusions

Unsupervised Domain Adaptation (UDA) assumes that a well-annotated source domain and unlabeled target images are both available for model training. However, many practical applications only allow access to the well-trained source model instead of the source data during the adaptation stage, a scenario defined as source-free domain adaptation. To address this novel scenario, this paper proposes the Adaptive Adversarial Network $(\mathbf{A}^2\mathbf{Net})$ consisting of three operations. First, $\mathbf{A}^2\mathbf{Net}$ develops a soft-adversarial mechanism to learn a flexible target classifier that promotes the recognition of samples which the frozen source classifier has difficulty identifying. Second, it explores a contrastive loss over all positive target sample pairs to intensify the compactness of each category subspace. Finally, self-supervised rotation is adopted to mine additional semantics from the target images and learn more discriminative features. Experiments on three popular benchmarks illustrate that our method effectively achieves domain adaptation without any source data.

References

[1] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR, 2020. 4, 5
[2] Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In International conference on machine learning, pages 1081-1090. PMLR, 2019. 1
[3] Yiming Chen, Shiji Song, Shuang Li, and Cheng Wu. A graph embedding framework for maximum mean discrepancy-based domain adaptation algorithms. IEEE Transactions on Image Processing, 29:199-213, 2019. 1
[4] Shuhao Cui, Shuhui Wang, Junbao Zhuo, Liang Li, Qingming Huang, and Qi Tian. Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3941-3950, 2020. 6, 7
[5] Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and Nicolas Courty. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 447-463, 2018. 2
[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 1
[7] Jiahua Dong, Yang Cong, Gan Sun, Yuyang Liu, and Xiaowei Xu. Ccsr: Critical semantic-consistent learning for unsupervised domain adaptation. In European Conference on Computer Vision, pages 745-762. Springer, 2020. 1
[8] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096-2030, 2016. 6, 7
[9] Xiang Gu, Jian Sun, and Zongben Xu. Spherical space domain adaptation with robust pseudo-label loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9101-9110, 2020. 1
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6, 7
[11] Ying Jin, Ximei Wang, Mingsheng Long, and Jianmin Wang. Minimum class confusion for versatile domain adaptation. In European Conference on Computer Vision, pages 464-480. Springer, 2020. 6, 7
[12] Taotao Jing and Zhengming Ding. Adversarial dual distinct classifiers for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 605-614, 2021. 2
[13] Taotao Jing, Haifeng Xia, and Zhengming Ding. Adaptively-accumulated knowledge transfer for partial domain adaptation. In Proceedings of the 28th ACM International Conference on Multimedia, pages 1606-1614, 2020. 2
[14] Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G Hauptmann. Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4893-4902, 2019. 1, 2
[15] Youngeun Kim, Sungeun Hong, Donghyeon Cho, Hyoungseob Park, and Priyadarshini Panda. Domain adaptation without source data. arXiv preprint arXiv:2007.01524, 2020. 6, 7
[16] Atsutoshi Kumagai and Tomoharu Iwata. Unsupervised domain adaptation by matching distributions based on the maximum mean discrepancy via unilateral transformations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4106-4113, 2019. 1
[17] Vinod K Kurmi, Venkatesh K Subramanian, and Vinay P Namboodiri. Domain impression: A source data free domain adaptation method. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 615-625, 2021. 3, 6
[18] Mengxue Li, Yi-Ming Zhai, You-Wei Luo, Peng-Fei Ge, and Chuan-Xian Ren. Enhanced transport distance for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13936-13944, 2020. 2
[19] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9641-9650, 2020. 2, 3, 6
[20] Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, and Raquel Urtasun. Polytransform: Deep polygon transformer for instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9131-9140, 2020. 1
[21] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In International Conference on Machine Learning, pages 6028-6039. PMLR, 2020. 2, 3, 5, 6, 7
[22] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 5
[23] Hongfu Liu, Ming Shao, Zhengming Ding, and Yun Fu. Structure-preserved unsupervised domain adaptation. IEEE Transactions on Knowledge and Data Engineering, 2018. 3
[24] Ziming Liu, Guangyu Gao, Lin Sun, and Li Fang. Ipgnet: Image pyramid guidance network for small object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 1026-1027, 2020. 1
[25] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. arXiv preprint arXiv:1705.10667, 2017. 6, 7

[26] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International conference on machine learning, pages 2208-2217. PMLR, 2017. 1, 2
[27] Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Multi-adversarial domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 1
[28] Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1406-1415, 2019. 1
[29] Xingchao Peng, Ben Usman, Neela Kaushik, Dequan Wang, Judy Hoffman, and Kate Saenko. Visda: A synthetic-to-real benchmark for visual domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2021-2026, 2018. 5
[30] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European conference on computer vision, pages 213-226. Springer, 2010. 5
[31] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3723-3732, 2018. 2
[32] Hui Tang, Ke Chen, and Kui Jia. Unsupervised domain adaptation via structurally regularized deep clustering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8725-8735, 2020. 6, 7
[33] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7167-7176, 2017. 1, 2
[34] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5018-5027, 2017. 3, 5
[35] Haifeng Xia and Zhengming Ding. Hgnet: Hybrid generative network for zero-shot domain adaptation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVII 16, pages 55-70. Springer, 2020. 2
[36] Haifeng Xia and Zhengming Ding. Structure preserving generative cross-domain learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4364-4373, 2020. 1
[37] Cai Xu, Ziyu Guan, Wei Zhao, Hongchang Wu, Yunfei Niu, and Beilei Ling. Adversarial incomplete multi-view clustering. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 3933-3939. AAAI Press, 2019. 7
[38] Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1426-1435, 2019. 2, 6, 7
[39] Renjun Xu, Pelen Liu, Liyan Wang, Chao Chen, and Jindong Wang. Reliable weighted optimal transport for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4394-4403, 2020. 1
[40] Zhe Xu, Shaoli Huang, Ya Zhang, and Dacheng Tao. Webly-supervised fine-grained visual categorization via deep domain adaptation. IEEE transactions on pattern analysis and machine intelligence, 40(5):1100-1113, 2018. 3
[41] Hongliang Yan, Yukang Ding, Peihua Li, Qilong Wang, Yong Xu, and Wangmeng Zuo. Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2272-2281, 2017. 1
[42] Guanglei Yang, Haifeng Xia, Mingli Ding, and Zhengming Ding. Bi-directional generation for unsupervised domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6615-6622, 2020. 1
[43] Hao-Wei Yeh, Baoyao Yang, Pong C Yuen, and Tatsuya Harada. Sofa: Source-data-free feature alignment for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 474-483, 2021. 2, 6, 7
[44] Manli Zhang, Jianhong Zhang, Zhiwu Lu, Tao Xiang, Mingyu Ding, and Songfang Huang. Iept: Instance-level and episode-level pre-text tasks for few-shot learning. 5
[45] Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and adversarial network for unsupervised domain adaptation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 3801-3809, 2018. 3
[46] Yabin Zhang, Hui Tang, Kui Jia, and Mingkui Tan. Domain-symmetric networks for adversarial domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5031-5040, 2019. 2
[47] Handong Zhao, Zhengming Ding, and Yun Fu. Multi-view clustering via deep matrix factorization. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. 1
[48] Handong Zhao, Hongfu Liu, and Yun Fu. Incomplete multimodal visual data grouping. In IJCAI, pages 2392-2398, 2016. 1
[49] Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Learning to generate novel domains for domain generalization. In European Conference on Computer Vision, pages 561-578. Springer, 2020. 1