SlowGuess's picture
Add Batch 374c2040-b05a-4e7c-802e-6c0e397626e3
7bd0580 verified

ADVERSARIAL SUPPORT ALIGNMENT

Shangyuan Tong*
MIT CSAIL

Timur Garipov*
MIT CSAIL

Yang Zhang
MIT-IBM Watson AI Lab

Shiyu Chang
UC Santa Barbara

Tommi Jaakkola
MIT CSAIL

ABSTRACT

We study the problem of aligning the supports of distributions. Compared to the existing work on distribution alignment, support alignment does not require the densities to be matched. We propose symmetric support difference as a divergence measure to quantify the mismatch between supports. We show that select discriminators (e.g. discriminator trained for Jensen-Shannon divergence) are able to map support differences as support differences in their one-dimensional output space. Following this result, our method aligns supports by minimizing a symmetrized relaxed optimal transport cost in the discriminator 1D space via an adversarial process. Furthermore, we show that our approach can be viewed as a limit of existing notions of alignment by increasing transportation assignment tolerance. We quantitatively evaluate the method across domain adaptation tasks with shifts in label distributions. Our experiments1 show that the proposed method is more robust against these shifts than other alignment-based baselines.

1 INTRODUCTION

Learning tasks often involve estimating properties of distributions from samples or aligning such characteristics across domains. We can align full distributions (adversarial domain alignment), certain statistics (canonical correlation analysis), or the support of distributions (this paper). Much of the recent work has focused on full distributional alignment, for good reasons. In domain adaptation, motivated by theoretical results (Ben-David et al., 2007; 2010), a series of papers (Ajakan et al., 2014; Ganin & Lempitsky, 2015; Ganin et al., 2016; Tzeng et al., 2017; Shen et al., 2018; Pei et al., 2018; Zhao et al., 2018; Li et al., 2018a; Wang et al., 2021; Kumar et al., 2018) seek to align distributions of representations between domains, and utilize a shared classifier on the aligned representation space.

Alignment in distributions implies alignment in supports. However, when there are additional objectives/constraints to satisfy, the minimizer for a distribution alignment objective does not necessarily minimize a support alignment objective. Example in Figure 1 demonstrates the qualitative distinction between two minimizers when distribution alignment is not achievable. The distribution alignment objective prefers to keep supports unaligned even if support alignment is achievable. Recent works (Zhao et al., 2019; Li et al., 2020; Tan et al., 2020; Wu et al., 2019b; Tachet des Combes et al., 2020) have demonstrated that a shift in label distributions between source and target leads to a characterizable performance drop when the representations are forced into a distribution alignment. The error bound in Johansson et al. (2019) suggests aligning the supports of representations instead.

In this paper, we focus on distribution support as the key characteristic to align. We introduce a support divergence to measure the support mismatch and algorithms to optimize such alignment. We also position our approach in the spectrum of other alignment methods. Our contributions are as follows (all proofs can be found in Appendix A):

  1. In Section 2.1, we measure the differences between supports of distributions. Building on the Hausdorff distance, we introduce a novel support divergence better suited for optimization, which we refer to as symmetric support difference (SSD) divergence.

  2. In Section 2.2, we identify an important property of the discriminator trained for Jensen-Shannon divergence: support differences in the original space of interest are "preserved" as support differences in the one-dimensional discriminator output space.

  3. In Section 3, we present our practical algorithm for support alignment, Adversarial Support Alignment (ASA). Essentially, based on the analysis presented in Section 2.2, our solution is to align supports in the discriminator 1D space, which is computationally efficient.

  4. In Section 4, we place different notions of alignment – distribution alignment, relaxed distribution alignment and support alignment – within a coherent spectrum from the point of view of optimal transport, characterizing their relationships, both theoretically in terms of their objectives and practically in terms of their algorithms.

  5. In Section 5, we demonstrate the effectiveness of support alignment in practice for domain adaptation setting. Compared to other alignment-based baselines, our proposed method is more robust against shifts in label distributions.


(a) Initialization

DW(p,qθ)=11.12 \mathcal {D} _ {W} (p, q ^ {\theta}) = 1 1. 1 2

D(p,qθ)=14.9 \mathcal {D} _ {\triangle} (p, q ^ {\theta}) = 1 4. 9


(b) Distribution alignment
Figure 1: Illustration of differences between the final configurations of distribution alignment and support alignment procedures. $p(x)$ is a fixed Beta distribution $p(x) = \mathrm{Beta}(x|4,2)$ with support [0,1]; $q^{\theta}(x)$ is a "shifted" Beta distribution $q^{\theta}(x) = \mathrm{Beta}(x - \theta|2,4)$ parameterized by $\theta$ with support $[\theta, \theta + 1]$ . Panel (a) shows the initial configuration with $\theta_{\mathrm{init}} = -3$ . Panel (b) shows the result by distribution alignment. Panel (c) shows the result by support alignment. We report Wasserstein distance $\mathcal{D}W(p,q^\theta)$ (7) and SSD divergence $\mathcal{D}{\triangle}(p,q^{\theta})$ (1).

DW(p,qθ)=2103 \mathcal {D} _ {W} (p, q ^ {\theta}) = 2 \cdot 1 0 ^ {- 3}

D(p,qθ)=6104 \mathcal {D} _ {\triangle} (p, q ^ {\theta}) = 6 \cdot 1 0 ^ {- 4}


(c) Support alignment

DW(p,q^θ)=5~102 \mathcal {D} _ {W} (p, \hat {q} ^ {\theta}) = \tilde {5} \cdot 1 0 ^ {- 2}

D(p,qθ)<1106 \mathcal {D} _ {\triangle} (p, q ^ {\theta}) < 1 \cdot 1 0 ^ {- 6}

2 SSD DIVERGENCE AND SUPPORT ALIGNMENT

Notation. We consider an Euclidean space $\mathcal{X} = \mathbb{R}^n$ equipped with Borel sigma algebra $\mathcal{B}$ and a metric $d: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ (e.g. Euclidean distance). Let $\mathcal{P}$ be the set of probability measures on $(\mathcal{X}, \mathcal{B})$ . For $p \in \mathcal{P}$ , the support of $p$ is denoted by $\operatorname{supp}(p)$ and is defined as the smallest closed set $X \subseteq \mathcal{X}$ such that $p(X) = 1$ . $f_{\sharp}p$ denotes the pushforward measure of $p$ induced by a measurable mapping $f$ . With a slight abuse of notation, we use $p(x)$ and $f_{\sharp}p$ to denote the densities of measures $p$ and $f_{\sharp}p$ evaluated at $x$ and $t$ respectively, implicitly assuming that the measures are absolutely continuous. The distance between a point $x \in \mathcal{X}$ and a subset $Y \subseteq \mathcal{X}$ is defined as $d(x, Y) = \inf_{y \in Y} d(x, y)$ . The symmetric difference of two sets $A$ and $B$ is defined as $A \triangle B = (A \setminus B) \cup (B \setminus A)$ .

2.1 DIFFERENCE BETWEEN SUPPORTS

To align the supports of distributions, we first need to evaluate how different they are. Similar to distribution divergences like Jensen-Shannon divergence, we introduce a notion of support divergence. A support divergence2 between two distributions in $\mathcal{P}$ is a function $\mathcal{D}_S(\cdot ,\cdot):\mathcal{P}\times \mathcal{P}\to \mathbb{R}$ satisfying: 1) $\mathcal{D}_S(p,q)\geq 0$ for all $p,q\in \mathcal{P}$ ; 2) $\mathcal{D}_S(p,q) = 0$ iff $\mathrm{supp}(p) = \mathrm{supp}(q)$ .

While a distribution divergence is sensitive to both density and support differences, a support divergence only needs to detect mismatches in supports, which are subsets of the metric space $\mathcal{X}$ .

An example of a distance between subsets of a metric space is the Hausdorff distance: $d_H(X, Y) = \max { \sup_{x \in X} d(x, Y), \sup_{y \in Y} d(y, X) }$ . Since it depends only on the greatest distance between a point and a set, minimizing this objective for alignment only provides signal to a single point. To make the optimization less sparse, we consider all points that violate the support alignment criterion and introduce symmetric support difference (SSD) divergence:

D(p,q)=Exp[d(x,supp(q))]+Exq[d(x,supp(p))].(1) \mathcal {D} _ {\triangle} (p, q) = \mathbb {E} _ {x \sim p} [ d (x, \operatorname {s u p p} (q)) ] + \mathbb {E} _ {x \sim q} [ d (x, \operatorname {s u p p} (p)) ]. \tag {1}

Proposition 2.1. SSD divergence $\mathcal{D}_{\triangle}(p,q)$ is a support divergence.

We note that our proposed SSD divergence is closely related to Chamfer distance/divergence (CD) (Fan et al., 2017; Nguyen et al., 2021) and Relaxed Word Mover's Distance (RWMD) (Kusner et al., 2015). While both CD and RWMD are stated for discrete points (see Section 6 for further comments), SSD divergence is a general difference measure between arbitrary (discrete or continuous) distributions. This distinction, albeit small, is important in our theoretical analysis (Sections 2.2, 4.1).

2.2 SUPPORT ALIGNMENT IN ONE-DIMENSIONAL SPACE

Goodfellow et al. (2014) showed that the log-loss discriminator $f: \mathcal{X} \to [0,1]$ , trained to distinguish samples from distributions $p$ and $q$ ( $\sup_f \mathbb{E}{x \sim p} [\log f(x)] + \mathbb{E}{x \sim q} [\log (1 - f(x))]$ ) can be used to estimate the Jensen-Shannon divergence between $p$ and $q$ . The closed form maximizer $f^*$ is

f(x)=p(x)p(x)+q(x),xsupp(p)supp(q).(2) f ^ {*} (x) = \frac {p (x)}{p (x) + q (x)}, \quad \forall x \in \operatorname {s u p p} (p) \cup \operatorname {s u p p} (q). \tag {2}

Note that for a point $x \notin \operatorname{supp}(p) \cup \operatorname{supp}(q)$ the value of $f^{*}(x)$ can be set to an arbitrary value in [0, 1], since the log-loss does not depend on $f(x)$ for such $x$ . The form of the optimal discriminator (2) gives rise to our main theorem below, which characterizes the ability of the log-loss discriminator to identify support misalignment.

Theorem 2.1. Let $p$ and $q$ be the distributions with densities satisfying

1C<p(x)<C,xsupp(p);1C<q(x)<C,xsupp(q).(3) \frac {1}{C} < p (x) < C, \quad \forall x \in \operatorname {s u p p} (p); \quad \frac {1}{C} < q (x) < C, \quad \forall x \in \operatorname {s u p p} (q). \tag {3}

Let $f^{}$ be the optimal discriminator (2). Then, $\mathcal{D}{\triangle}(p,q) = 0$ if and only if $\mathcal{D}{\triangle}(f_{\sharp}^{}p,f_{\sharp}^{*}q) = 0$ .

The idea of the proof is to show that the extreme values (0 and 1) of $f^{}(x)$ can only be attained in $x \in \operatorname{supp}(p) \triangleq \operatorname{supp}(q)$ . Assumption (3) guarantees that $f^{}(x)$ cannot approach neither 0 nor 1 in the intersection of the supports $\operatorname{supp}(p) \cap \operatorname{supp}(q)$ , i.e. the values ${f^{*}(x) \mid x \in \operatorname{supp}(p) \cap \operatorname{supp}(q)}$ are separated from the extreme values 0 and 1.

We conclude this section with two technical remarks on Theorem 2.1.

Remark 2.1.1. The result of Theorem 2.1 does not necessarily hold for other types of discriminators. For instance, the dual Wasserstein discriminator (Arjovsky et al., 2017; Gulrajani et al., 2017) does not always highlight the support difference in the original space as a support difference in the discriminator output space. This observation is formally stated in the following proposition.

Proposition 2.2. Let $f_{W}^{\star}$ be the maximizer of $\sup_{f: \mathrm{L}(f) \leq 1} \mathbb{E}{x \sim p}[f(x)] - \mathbb{E}{x \sim q}[f(x)]$ , where $\mathrm{L}(\cdot)$ is the Lipschitz constant. There exist $p$ and $q$ with $\operatorname{supp}(p) \neq \operatorname{supp}(q)$ but $\operatorname{supp}(f_{W\sharp}^{\star}p) = \operatorname{supp}(f_{W\sharp}^{\star}q)$ .

Remark 2.1.2. In practice the discriminator is typically parameterized as $f(x) = \sigma(g(x))$ , where $g: \mathcal{X} \to \mathbb{R}$ is realized by a deep neural network and $\sigma(x) = (1 + e^{-x})^{-1}$ is the sigmoid function. The optimization problem for $g$ is

infgExp[log(1+eg(x))]+Exq[log(1+eg(x))],(4) \inf _ {g} \mathbb {E} _ {x \sim p} \left[ \log \left(1 + e ^ {- g (x)}\right) \right] + \mathbb {E} _ {x \sim q} \left[ \log \left(1 + e ^ {g (x)}\right) \right], \tag {4}

and the optimal solution is $g^{}(x) = \log p(x) - \log q(x)$ . Naturally the result of Theorem 2.1 holds for $g^{}$ , since $g^{}(x) = \sigma^{-1}(f^{}(x))$ and $\sigma$ is a bijective mapping from $\mathbb{R} \cup {-\infty, \infty}$ to $[0,1]$ .

3 ADVERSARIAL SUPPORT ALIGNMENT

We consider distributions $p$ and $q$ parameterized by $\theta$ : $p^{\theta}$ , $q^{\theta}$ . The log-loss discriminator $g$ optimized for (4) is parameterized by $\psi$ : $g^{\psi}$ . Our analysis in Section 2.2 already suggests an algorithm. Namely, we can optimize $\theta$ by minimizing $\mathcal{D}{\triangle}(g{\sharp}^{\psi}p^{\theta},g_{\sharp}^{\psi}q^{\theta})$ while optimizing $\psi$ by (4). This adversarial game is analogous to the setup of the existing distribution alignment algorithms3.

In practice, rather than having direct access to $p^{\theta}, q^{\theta}$ , which is unavailable, we are often given i.i.d. samples ${x_{i}^{p}}{i = 1}^{N}, {x{i}^{q}}{i = 1}^{M}$ . They form discrete distributions $\hat{p}^{\theta}(x) = \frac{1}{N}\sum{i = 1}^{N}\delta (x - x_{i}^{p}), \hat{q}^{\theta}(x) = \frac{1}{M}\sum_{i = 1}^{M}\delta (x - x_{i}^{q})$ , and $g^{\psi}_{\sharp}\hat{p}^{\theta} = \frac{1}{N}\sum_{i = 1}^{N}\delta (t - g^{\psi}(x_{i}^{p}))$ , $g^{\psi}_{\sharp}\hat{q}^{\theta} = \frac{1}{M}\sum_{i = 1}^{M}\delta (t - g^{\psi}(x_{i}^{q}))$ . Since $g^{\psi}{\sharp}\hat{p}^{\theta}$ and $g^{\psi}{\sharp}\hat{q}^{\theta}$ are discrete distributions, they have supports ${g^{\psi}(x_i^p)}{i = 1}^N$ and ${g^{\psi}(x_i^q)}{i = 1}^M$ respectively. SSD divergence between discrete distributions $g^{\psi}{\sharp}\hat{p}^{\theta}$ and $g^{\psi}{\sharp}\hat{q}^{\theta}$ is

D(gψp^θ,gψq^θ)=1Ni=1Nd(gψ(xip),{gψ(xjq)}j=1M)+1Mi=1Md(gψ(xiq),{gψ(xjp)}j=1N).(5) \mathcal {D} _ {\triangle} \left(g _ {\sharp} ^ {\psi} \hat {p} ^ {\theta}, g _ {\sharp} ^ {\psi} \hat {q} ^ {\theta}\right) = \frac {1}{N} \sum_ {i = 1} ^ {N} d \left(g ^ {\psi} \left(x _ {i} ^ {p}\right), \left\{g ^ {\psi} \left(x _ {j} ^ {q}\right) \right\} _ {j = 1} ^ {M}\right) + \frac {1}{M} \sum_ {i = 1} ^ {M} d \left(g ^ {\psi} \left(x _ {i} ^ {q}\right), \left\{g ^ {\psi} \left(x _ {j} ^ {p}\right) \right\} _ {j = 1} ^ {N}\right). \tag {5}

Effect of mini-batch training. When training on large datasets, we need to rely on stochastic optimization with mini-batches. We denote the mini-batches (of same size, as in common practice) from $p^{\theta}$ and $q^{\theta}$ as $x^{p} = {x_{i}^{p}}{i=1}^{m}$ and $x^{q} = {x{i}^{q}}{i=1}^{m}$ respectively. By minimizing $\mathcal{D}{\triangle}(g^{\psi}(x^{p}), g^{\psi}(x^{q}))$ , we only consider the mini-batch support distance rather than the population support distance (5). We observe that in practice the described algorithm brings the distributions to a state closer to distribution alignment rather than support alignment (see Appendix D.5 for details). The problem is in the typically small batch size. The algorithm actually tries to enforce support alignment for all possible pairs of mini-batches, which is a much stricter constraint than population support alignment.

To address the issue mentioned above, without working with a much larger batch size, we create two "history buffers": $h^p$ , storing the previous 1D discriminator outputs of (at most) $n$ samples from $p^\theta$ , and a similar buffer $h^q$ for $q^\theta$ . Specifically, $h = {g^{\psi_{\mathrm{old},i}}(x_{\mathrm{old},i})}{i=1}^{n}$ stores the values of the previous $n$ samples $x{\mathrm{old},i}$ mapped by their corresponding past "versions" of the discriminator $g^{\psi_{\mathrm{old},i}}$ . We minimize $\mathcal{D}_{\triangle}(v^p, v^q)$ , where $v^p = \operatorname{concat}(h^p, g^{\psi}(x^p))$ , $v^q = \operatorname{concat}(h^q, g^{\psi}(x^q))$ :

D(vp,vq)=1n+m(i=1n+md(vip,vq)+j=1n+md(vjq,vp)).(6) \mathcal {D} _ {\triangle} \left(v ^ {p}, v ^ {q}\right) = \frac {1}{n + m} \left(\sum_ {i = 1} ^ {n + m} d \left(v _ {i} ^ {p}, v ^ {q}\right) + \sum_ {j = 1} ^ {n + m} d \left(v _ {j} ^ {q}, v ^ {p}\right)\right). \tag {6}

Note that $\mathcal{D}{\triangle}(\cdot, \cdot)$ between two sets of 1D samples can be efficiently calculated since $d(v{i}^{p}, v^{q})$ and $d(v_{j}^{q}, v^{p})$ are simply 1-nearest neighbor distances in 1D. Moreover the history buffers store only the scalar values from the previous batches. These values are only considered in nearest neighbor assignment but do not directly provide gradient signal for optimization. Thus, the computation overhead of including a long history buffer is very light. We present our full algorithm, Adversarial Support Alignment (ASA), in Algorithm 1.

4 SPECTRUM OF NOTIONS OF ALIGNMENT

In this section, we take a closer look into our work and different existing notions of alignment that have been proposed in the literature, especially their formulations from the optimal transport perspective. We show that our proposed support alignment framework is a limit of existing notions of alignment, both in terms of theory and algorithm, by increasing transportation assignment tolerance.

4.1 THEORETICAL CONNECTIONS

Distribution alignment. Wasserstein distance is a commonly used objective for distribution alignment. In our analysis, we focus on the Wasserstein-1 distance:

DW(p,q)=infγΓ(p,q)E(x,y)γ[d(x,y)],(7) \mathcal {D} _ {W} (p, q) = \inf _ {\gamma \in \Gamma (p, q)} \mathbb {E} _ {(x, y) \sim \gamma} [ d (x, y) ], \tag {7}

Algorithm 1 Our proposed ASA algorithm. $n$ (maximum history buffer size), we use $n = 1000$ .
1: for number of training steps do
2: Sample mini-batches ${x_i^p}{i=1}^m \sim p^\theta$ , ${x_i^q}{i=1}^m \sim q^\theta$ .
3: Perform optimization step on $\psi$ using stochastic gradient $\nabla_{\psi}\left(\frac{1}{m}\sum_{i=1}^{m}\left[\log(1+\exp(-g^{\psi}(x_i^p)))+\log(1+\exp(g^{\psi}(x_i^q)))\right]\right)$ .
4: $v^p \gets \text{concat}(h^p, {g^{\psi}(x_i^p)}{i=1}^m)$ , $v^q \gets \text{concat}(h^q, {g^{\psi}(x_i^q)}{i=1}^m)$ .
5: $\pi_{p \to q}^i \gets \arg \min_j d(v_i^p, v_j^q)$ , $\pi_{q \to p}^j \gets \arg \min_i d(v_i^p, v_j^q)$ .
6: Perform optimization step on $\theta$ using stochastic gradient $\nabla_{\theta}\left(\frac{1}{n+m}\sum_{i=1}^{n+m}\left[d(v_i^p, v_{\pi_{p \to q}}^q) + d(v_i^q, v_{\pi_{q \to p}}^p)\right]\right)$ .
7: UPDATEHISTORY $(h^p, {g^{\psi}(x_i^p)}{i=1}^m)$ , UPDATEHISTORY $(h^q, {g^{\psi}(x_i^q)}{i=1}^m)$ .
8: end for

where $\Gamma(p, q)$ is the set of all measures on $\mathcal{X} \times \mathcal{X}$ with marginals of $p$ and $q$ , respectively. The value of $\mathcal{D}_W(p, q)$ is the minimal transportation cost for transporting probability mass from $p$ to $q$ . The transportation cost is zero if and only if $p = q$ , meaning the distributions are aligned.

Relaxed distribution alignment. Wu et al. (2019b) proposed a modified Wasserstein distance to achieve asymmetrically-relaxed distribution alignment, namely $\beta$ -admissible Wasserstein distance:

DWβ(p,q)=infγΓβ(p,q)E(x,y)γ[d(x,y)],(8) \mathcal {D} _ {W} ^ {\beta} (p, q) = \inf _ {\gamma \in \Gamma_ {\beta} (p, q)} \mathbb {E} _ {(x, y) \sim \gamma} [ d (x, y) ], \tag {8}

where $\Gamma_{\beta}(p,q)$ is the set of all measures $\gamma$ on $\mathcal{X} \times \mathcal{X}$ such that $\int \gamma(x,y) dy = p(x), \forall x$ and $\int \gamma(x,y) dx \leq (1 + \beta) q(y), \forall y$ . With the relaxed marginal constraints, one could choose a transportation plan $\gamma$ which transports probability mass from $p$ to a modified distribution $q'$ rather than the original distribution $q$ as long as $q'$ satisfies the constraint $q'(x) \leq (1 + \beta) q(x), \forall x$ . Therefore, $\mathcal{D}_W^\beta(p,q)$ is zero if and only if $p(x) \leq (1 + \beta) q(x), \forall x$ . In (Wu et al., 2019b), $\beta$ is normally set to a positive finite number to achieve the asymmetric-relaxation of distribution alignment, and it is shown that $\mathcal{D}_W^0(p,q) = \mathcal{D}_W(p,q)$ . We can extend $\mathcal{D}_W^\beta(p,q)$ to a symmetric version, which we term $\beta_1, \beta_2$ -admissible Wasserstein distance:

DWβ1,β2(p,q)=DWβ1(p,q)+DWβ2(q,p).(9) \mathcal {D} _ {W} ^ {\beta_ {1}, \beta_ {2}} (p, q) = \mathcal {D} _ {W} ^ {\beta_ {1}} (p, q) + \mathcal {D} _ {W} ^ {\beta_ {2}} (q, p). \tag {9}

The aforementioned property of $\beta$ -admissible Wasserstein distance implies that $\mathcal{D}W^{\beta_1,\beta_2}(p,q) = 0$ if and only if $p(x)\leq (1 + \beta{1})q(x),\forall x$ and $q(x)\leq (1 + \beta_{2})p(x),\forall x$ , in which case we call $p$ and $q$ “ $(\beta_{1},\beta_{2})$ -aligned”, with $\beta_{1}$ and $\beta_{2}$ controlling the transportation assignment tolerances.

Support alignment. The term $\mathbb{E}_p[d(x,\mathrm{supp}(q))]$ in (1) represents the average distance from samples in $p$ to the support of $q$ . From the optimal transport perspective, this value is the minimal transportation cost of transporting the probability mass of $p$ into the support of $q$ . We show that SSD divergence can be considered as a transportation cost in the limit of infinite assignment tolerance.

Proposition 4.1. $\mathcal{D}W^{\infty,\infty}(p,q) := \lim{\beta_1,\beta_2\to \infty}\mathcal{D}W^{\beta_1,\beta_2}(p,q) = \mathcal{D}{\triangle}(p,q)$ .

We now have completed the spectrum of alignment objectives defined within the optimal transport framework. The following proposition establishes the relationship within the spectrum.

Proposition 4.2. Let $p$ and $q$ be two distributions in $\mathcal{P}$ . Then,

  1. $\mathcal{D}W(p,q) = 0$ implies $\mathcal{D}W^{\beta_1,\beta_2}(p,q) = 0$ for all finite $\beta{1},\beta{2} > 0$
  2. $\mathcal{D}W^{\beta_1,\beta_2}(p,q) = 0$ for some finite $\beta{1},\beta_{2} > 0$ implies $\mathcal{D}_{\triangle}(p,q) = 0$
  3. The converse of statements 1 and 2 are false.

In addition to the result presented in Theorem 2.1, we can show that the log-loss discriminator can also "preserve" the existing notions of alignment.

Proposition 4.3. Let $f^{*}$ be the optimal discriminator (2) for given distributions $p$ and $q$ . Then,

1.DW(p,q)=0iffDW(fp,fq)=0;2.DWβ1,β2(p,q)=0iffDWβ1,β2(fp,fq)=0. 1. \mathcal {D} _ {W} (p, q) = 0 i f f \mathcal {D} _ {W} \left(f _ {\sharp} ^ {*} p, f _ {\sharp} ^ {*} q\right) = 0; \quad 2. \mathcal {D} _ {W} ^ {\beta_ {1}, \beta_ {2}} (p, q) = 0 i f f \mathcal {D} _ {W} ^ {\beta_ {1}, \beta_ {2}} \left(f _ {\sharp} ^ {*} p, f _ {\sharp} ^ {*} q\right) = 0.

4.2 ALGORITHMIC CONNECTIONS

The result of Proposition 4.3 suggests methods similar to our ASA algorithm presented in Section 3 can achieve different notions of alignment by minimizing objectives discussed in Section 4.1 between the 1D pushforward distributions. We consider the setup used in Section 3 but without history buffers to simplify the analysis, as their usage is orthogonal to our discussion in this section.

Recall that we work with a mini-batch setting, where ${x_i^p}{i=1}^m$ and ${x_i^q}{i=1}^m$ are sampled from $p$ and $q$ respectively, and $g$ is the adversarial log-loss discriminator. We denote the corresponding 1D outputs from the log-loss discriminator by $o^p = {o_i^p}{i=1}^m = {g(x_i^p)}{i=1}^m$ and $o^q$ (defined similarly).

Distribution alignment. We adapt (7) for ${o_i^p}{i = 1}^m$ and ${o_i^q}{i = 1}^m$ :

DW(op,oq)=infγΓ(op,oq)1mi=1mj=1mγijd(oip,ojq),(10) \mathcal {D} _ {W} \left(o ^ {p}, o ^ {q}\right) = \inf _ {\gamma \in \Gamma \left(o ^ {p}, o ^ {q}\right)} \frac {1}{m} \sum_ {i = 1} ^ {m} \sum_ {j = 1} ^ {m} \gamma_ {i j} d \left(o _ {i} ^ {p}, o _ {j} ^ {q}\right), \tag {10}

where $\Gamma(o^p, o^q)$ is the set of $m \times m$ doubly stochastic matrices. Since $o^p$ and $o^q$ are sets of 1D samples with the same size, it can be shown (Rabin et al., 2011) that the optimal $\gamma^*$ corresponds to an assignment $\pi^*$ , which pairs points in the sorting order and can be computed efficiently by sorting both sets $o^p$ and $o^q$ . The transportation cost is zero if and only if there exists an invertible 1-to-1 assignment $\pi^*$ such that $o_i^p = o_{\pi^*(i)}^q$ . GAN training algorithms proposed in (Deshpande et al., 2018; 2019) utilize the above sorting procedure to estimate the maximum sliced Wasserstein distance.

Relaxed distribution alignment. Similarly, we can adapt (8):

DWβ(op,oq)=infγΓβ(op,oq)1mi=1mj=1mγijd(oip,ojq),(11) \mathcal {D} _ {W} ^ {\beta} \left(o ^ {p}, o ^ {q}\right) = \inf _ {\gamma \in \Gamma_ {\beta} \left(o ^ {p}, o ^ {q}\right)} \frac {1}{m} \sum_ {i = 1} ^ {m} \sum_ {j = 1} ^ {m} \gamma_ {i j} d \left(o _ {i} ^ {p}, o _ {j} ^ {q}\right), \tag {11}

where $\Gamma_{\beta}(o^p,o^q)$ is the set of $m\times m$ matrices with non-negative real entries, such that $\sum_{j = 1}^{m}\gamma_{ij} = 1,\forall i$ and $\sum_{i = 1}^{m}\gamma_{ij}\leq 1 + \beta ,\forall j.$ The optimization goal in (11) is to find a "soft-assignment" $\gamma$ which describes the transportation of probability mass from points $o_i^p$ in $o^p$ to points $o_i^q$ in $o^q$ . The parameter $\beta$ controls the set of admissible assignments $\Gamma_{\beta}$ , which is similar to its role discussed in Section 4.1: with transportation assignment tolerance $\beta$ , the total mass of points in $o^p$ transported to each of the points $o_i^q$ cannot exceed $1 + \beta$ . We refer to such assignments as $(\beta +1)$ -to-1 assignment. The transportation cost is zero if and only if there exists such an assignment between $o^p$ and $o^q$

It can be shown (see Appendix C) that for integer value of $\beta$ , the set of minimizers of (11) must contain a "hard-assignment" transportation plan, which assigns each point $o_{i}^{p}$ to exactly one point $o_{j}^{q}$ . Then $(1 + \beta)$ gives the upper bound on the number of points $o_{i}^{p}$ that can be transported to given point $o_{j}^{q}$ . This hard assignment problem can be solved quasi-linearly with worst case time complexity $\mathcal{O}\left((\beta + 1)m^{2}\right)$ (Bonneel & Coeurjolly, 2019), which, combined with Proposition 4.3, can lead to new algorithms for relaxed distribution alignment besides those proposed in Wu et al. (2019b).

Support alignment. When $\beta = \infty$ , the sum $\sum_{i=1}^{m} \gamma_{ij}$ is unconstrained for all $j$ , and each point $o_i^p$ can be assigned to any of the points $o_j^q$ . The optimal solution is simply 1-nearest neighbor assignment, or to follow the above terminology, $\underline{\infty}$ -to- $\underline{1}$ assignment.

5 EXPERIMENTS

Problem setting. We evaluate our proposed ASA method in the setting of unsupervised domain adaptation (UDA). The goal of UDA algorithms is to train and "adapt" a classification model $M: \mathcal{X} \to \mathcal{Y}$ from source domain distribution $p_{X,Y}$ to target domain distribution $q_{X,Y}$ given the access to a labeled source dataset ${x_i^p, y_i^p}{i=1}^{N^p} \sim p{X,Y}$ and an unlabeled target dataset ${x_i^q}_{i=1}^{N^q} \sim q_X$ .

A common approach for UDA is to represent $M$ as $C^\phi \circ F^\theta$ : a classifier $C^\phi : \mathcal{Z} \to \mathcal{V}$ and a feature extractor $F^\theta : \mathcal{X} \to \mathcal{Z}$ , and train $C^\phi$ and $F^\theta$ by minimizing: 1) classification loss $\ell_{\mathrm{cls}}$ on source examples; 2) alignment loss $\mathcal{D}_{\mathrm{align}}$ measuring discrepancy between $p_Z^\theta = F_\sharp^{\theta}p_X$ and $q_Z^\theta = F_\sharp^\theta q_X$ :

minϕ,θ1Npi=1Npcls(Cϕ(Fθ(xip)),yip)+λDa l i g n({Fθ(xip)}i=1Np,{Fθ(xiq)}i=1Nq),(12) \min _ {\phi , \theta} \frac {1}{N ^ {p}} \sum_ {i = 1} ^ {N ^ {p}} \ell_ {\mathrm {c l s}} \left(C ^ {\phi} \left(F ^ {\theta} \left(x _ {i} ^ {p}\right)\right), y _ {i} ^ {p}\right) + \lambda \cdot \mathcal {D} _ {\text {a l i g n}} \left(\left\{F ^ {\theta} \left(x _ {i} ^ {p}\right) \right\} _ {i = 1} ^ {N ^ {p}}, \left\{F ^ {\theta} \left(x _ {i} ^ {q}\right) \right\} _ {i = 1} ^ {N ^ {q}}\right), \tag {12}


(a) No DA (avg acc: $63%$

DW(pZθ,qZθ)=0.78 \mathcal {D} _ {W} \left(p _ {Z} ^ {\theta}, q _ {Z} ^ {\theta}\right) = 0. 7 8

D(pZθ,qZθ)=0.10 \mathcal {D} _ {\triangle} \left(p _ {Z} ^ {\theta}, q _ {Z} ^ {\theta}\right) = 0. 1 0


(b) DANN (avg acc: 75%)
Figure 2: Visualization of learned 2D embeddings on 3-class $\mathrm{USPS} \rightarrow \mathrm{MNIST}$ with label distribution shift. In source domain, all classes have equal probability $\frac{1}{3}$ . The target probabilities of classes '3', '5', '9' are $[23%, 65%, 12%]$ . Each panel shows 2 level sets (outer one approximates the support) of the kernel density estimates of embeddings in source (filled regions) and target domains (solid/dashed lines). We report the average class accuracy of the target domain, $\mathcal{D}W$ and $\mathcal{D}{\triangle}$ between embeddings.

DW(pZθ,qZθ)=0.07 \mathcal {D} _ {W} \left(p _ {Z} ^ {\theta}, q _ {Z} ^ {\theta}\right) = 0. 0 7

D(pZθ,qZθ)=0.02 \mathcal {D} _ {\triangle} (p _ {Z} ^ {\theta}, q _ {Z} ^ {\theta}) = 0. 0 2


(c) ASA-abs (avg acc: 94%)

DW(pZθ,qZθ)=0.59 \mathcal {D} _ {W} (p _ {Z} ^ {\theta}, q _ {Z} ^ {\theta}) = 0. 5 9

D(pZθ,qZθ)=0.03 \mathcal {D} _ {\triangle} (p _ {Z} ^ {\theta}, q _ {Z} ^ {\theta}) = 0. 0 3

In practice $\mathcal{D}{\mathrm{align}}$ is an estimate of a divergence measure via an adversarial discriminator $g^{\psi}$ . Choices of $\mathcal{D}{\mathrm{align}}$ include f-divergences (Ganin et al., 2016; Nowozin et al., 2016) and Wasserstein distance (Arjovsky et al., 2017) to enforce distribution alignment and versions of re-weighted/relaxed distribution divergences (Wu et al., 2019b; Tachet des Combes et al., 2020) to enforce relaxed distribution alignment. For support alignment, we apply the proposed ASA method as the alignment subroutine in (12) with log-loss discriminator $g^{\psi}$ (4) and $\mathcal{D}_{\mathrm{align}}$ computed as (6).

Task specifications. We consider 3 UDA tasks: USPS $\rightarrow$ MNIST, STL $\rightarrow$ CIFAR, and VisDA-2017, and 2 versions of ASA: ASA-sq, ASA-abs corresponding to squared and absolute distances respectively for $d(\cdot ,\cdot)$ in (6). We compare ASA with: No DA (no domain adaptation), DANN (Ganin et al., 2016) (distribution alignment with JS divergence), VADA (Shu et al., 2018) (distribution alignment with virtual adversarial training), IWDAN, IWCDAN (Tachet des Combes et al., 2020) (relaxed distribution alignment via importance weighting) sDANN- $\beta$ (Wu et al., 2019b) (relaxed/ $\beta$ -admissible JS divergence via re-weighting). Please refer to Appendix D for full experimental details.

To evaluate the robustness of the methods, we simulate label distribution shift by subsampling source and target dataset, so that source has balanced label distribution and target label distribution follows the power law $q_{Y}(y) \propto \sigma(y)^{-\alpha}$ , where $\sigma$ is a random permutation of class labels ${1, \ldots, K}$ and $\alpha$ controls the severity of the shift ( $\alpha = 0$ means balanced label distribution). For each task, we generate 5 random permutations $\sigma$ for 4 different shift levels $\alpha \in {0, 1, 1.5, 2}$ . Essentially we transform each (source, target) dataset pair to $5 \times 4 = 20$ tasks of different difficulty levels, since classes are not equally difficult and different permutations can give them different weights.

Evaluation metrics. We choose the average (per-)class accuracy and minimum (per-)class accuracy on the target test set as evaluation metrics. Under the average class accuracy metric, all classes are treated as equally important (despite the unequal representation during training for $\alpha > 0$ ), and the minimum class accuracy focuses on model's worst within-class performance. In order to account for the variability of task difficulties across random permutations of target labels, we report robust statistics, median and a 25-75 percentile interval, across 5 runs.

Illustrative example. First we consider a simplified setting to intuitively understand and directly analyze the behavior of our proposed support alignment method in domain adaptation under label distribution shift. We consider a 3-class $\mathrm{USPS} \rightarrow \mathrm{MNIST}$ problem by selecting a subset of examples corresponding to digits '3', '5', and '9', and use a feature extractor network with 2D output space. We introduce label distribution shift as described above with $\alpha = 1.5$ , i.e. the probabilities of classes in the target domain are $12%$ , $23%$ , and $65%$ . We compare No DA, DANN, ASA-abs by their average target classification accuracy, Wasserstein distance $\mathcal{D}W(p_Z^\theta, q_Z^\theta)$ and SSD divergence $\mathcal{D}{\triangle}(p_Z^\theta, q_Z^\theta)$ between the learned embeddings of source and target domain. We apply a global affine transformation to each embedding space in order to have comparable distances between different spaces: we center the embeddings so that their average is 0 and re-scale them so that their average norm is 1. The

Table 1: Average and minimum class accuracy (%) on USPS→MNIST with different levels of shifts in label distributions (higher $\alpha$ implies more severe imbalance). We report median (the main number), and 25 (subscript) and 75 (superscript) percentiles across 5 runs.

Algorithmα = 0.0α = 1.0α = 1.5α = 2.0
averageminaverageminaverageminaveragemin
No DA71.972.972.974.773.874.773.076.6
70.470.472.072.071.272.570.610.8
DANN97.897.896.194.695.896.197.896.1
97.696.095.895.896.196.197.896.1
VADA98.098.096.398.298.999.098.099.0
97.996.295.998.198.999.097.998.0
IWDAN97.597.595.995.892.692.397.596.3
97.496.795.796.797.597.497.596.3
IWCDAN98.098.196.696.997.597.498.096.3
97.996.696.496.497.597.498.096.3
sDANN-487.485.790.094.984.785.784.484.4
87.285.690.094.984.785.784.484.4
ASA-sq93.793.989.293.683.588.790.982.0
93.393.488.491.580.889.666.685.8
ASA-abs94.194.588.991.278.982.982.485.4
93.893.887.089.365.190.974.589.2

Table 2: Results on STL $\rightarrow$ CIFAR. Same setup and reporting metrics as Table 1.

Algorithmα = 0.0α = 1.0α = 1.5α = 2.0
averageminaverageminaverageminaveragemin
No DA69.970.049.850.668.869.347.248.266.867.246.047.0
69.869.845.368.368.368.345.366.466.466.445.845.8
DANN75.375.454.656.669.970.144.845.164.967.134.936.8
74.974.954.268.668.668.640.763.763.733.933.927.0
VADA76.776.756.958.370.671.047.748.866.166.535.739.3
76.676.653.570.070.070.044.044.066.166.535.733.3
IWDAN69.970.750.550.668.769.145.850.567.167.344.744.8
69.969.947.968.668.668.644.844.865.965.940.440.4
IWCDAN70.170.247.849.369.469.447.151.366.167.239.940.8
70.170.142.469.169.169.146.346.366.165.037.737.0
sDANN-471.872.152.152.871.171.749.951.869.470.048.649.0
71.771.752.152.171.171.748.148.169.468.743.533.6
ASA-sq71.771.952.953.470.771.051.652.769.269.345.652.0
71.771.746.746.770.470.446.846.869.269.243.333.3
ASA-abs71.671.749.053.570.971.049.250.069.669.943.249.0
71.671.748.448.470.870.847.347.369.669.642.135.4

results are shown in Figure 2 and Table D.5. Compared to No DA, both DANN and ASA achieve support alignment. DANN enforces distribution alignment, and thus places some target embeddings into regions corresponding to the wrong class. In comparison, ASA does not enforce distribution alignment and maintains good class correspondence across the source and target embeddings.

Main results. The results of the main experimental evaluations are shown in Tables 1, 2, 3. Without any alignment, source only training struggles relatively to adapt to the target domain. Nonetheless, its performance across the imbalance levels remains robust, since the training procedure is the same. Agreeing with the observation and theoretical results from previous work (Zhao et al., 2019; Li et al., 2020; Tan et al., 2020; Wu et al., 2019b; Tachet des Combes et al., 2020), distribution alignment methods (DANN and VADA) perform well when there is no shift but suffer otherwise, whereas relaxed distribution alignment methods (IWDAN, IWCDAN and sDANN- $\beta$ ) show more resilience to shifts. On all tasks with positive $\alpha$ , we observe that it is common for the existing methods to achieve good class average accuracies while suffering significantly on some individual classes. These results suggest that the often-ignored but important min-accuracy metric can be very challenging. Finally, our support alignment methods (ASA-sq and ASA-abs) are the most robust ones against the shifts, while still being competitive in the more balanced settings ( $\alpha = 0$ or 1). We achieve best results in the more imbalanced and difficult tasks ( $\alpha = 1.5$ or 2) for almost all categories on all datasets. Please refer to Appendix D for ablation studies and additional comparisons.

6 RELATED WORK

Distribution alignment. Apart from the works, e.g. (Ajakan et al., 2014; Ganin et al., 2016; Ganin & Lempitsky, 2015; Pei et al., 2018; Zhao et al., 2018; Long et al., 2018; Tachet des Combes et al.,

Table 3: Results on VisDA17. Same setup and reporting metrics as Table 1.

Algorithmα = 0.0α = 1.0α = 1.5α = 2.0
averageminaverageminaverageminaveragemin
No DA49.522.242.621.247.118.645.319.8
49.422.249.220.747.118.645.219.4
DANN75.436.740.925.052.111.543.114.3
74.435.665.324.852.111.439.103.6
VADA75.340.565.122.853.014.843.911.1
74.839.761.221.753.013.740.905.0
IWDAN73.231.734.812.151.304.645.113.6
72.922.861.105.051.002.145.101.2
IWCDAN71.627.628.001.149.702.238.301.7
70.622.860.200.745.600.237.300.3
sDANN-472.437.840.826.657.218.650.717.1
71.832.368.426.257.216.751.920.0
ASA-sq64.935.735.831.457.826.751.921.2
63.732.160.620.455.517.350.816.9
ASA-abs64.840.641.927.357.126.052.522.2
64.536.060.516.756.213.951.917.7

2020; Li et al., 2018b; Tzeng et al., 2017; Shen et al., 2018; Kumar et al., 2018; Li et al., 2018a; Wang et al., 2021; Goodfellow et al., 2014; Arjovsky et al., 2017; Gulrajani et al., 2017; Mao et al., 2017; Radford et al., 2015; Salimans et al., 2018; Genevay et al., 2018; Wu et al., 2019a; Deshpande et al., 2018; 2019), that do distribution alignment, there are also papers (Long et al., 2015; 2017; Peng et al., 2019; Sun et al., 2016; Sun & Saenko, 2016) focusing on aligning some characteristics of the distribution, such as first or second moments. Our work is concerned with a different problem, support alignment, which is a novel objective in this line of work. In terms of methodology, our use of the discriminator output space to work with easier optimization in 1D is inspired by a line of work (Salimans et al., 2018; Genevay et al., 2018; Wu et al., 2019a; Deshpande et al., 2018; 2019) on sliced Wasserstein distance based models. Our result in Proposition 4.3 also provides theoretical insight on the practical effectiveness of 1D OT in (Deshpande et al., 2019).

Relaxed distribution alignment. In Section 4, we have already covered in detail the connections between our work and (Wu et al., 2019b). Balaji et al. (2020) introduced relaxed distribution alignment with a different focus, aiming to be insensitive to outliers. Chamfer distance/divergence (CD) is used to compute similarity between images/3D point clouds (Fan et al., 2017; Nguyen et al., 2021). For text data, Kusner et al. (2015) presented Relaxed Word Mover's Distance (RWMD) to prune candidates of similar documents. CD and RWMD are essentially the same as (5) with $d(\cdot, \cdot)$ being the Euclidean distance. They are computed by finding the nearest neighbor assignments. Our subroutine of calculating the support distance in the 1D discriminator output space is done similarly by finding nearest neighbors within the current batch and history buffers.

Support estimation. There exists a series of work, e.g. (Schölkopf et al., 2001; Hoffmann, 2007; Tax & Duin, 2004; Knorr et al., 2000; Chalapathy et al., 2017; Ruff et al., 2018; Perera et al., 2019; Deecke et al., 2018; Zenati et al., 2018), on novelty/anomaly detection problem, which can be casted as support estimation. We consider a fundamentally different problem setting. Our goal is to align the supports and our approach does not directly estimate the supports. Instead, we implicitly learn the relationships between supports (density ratio to be specific) via a discriminator.

7 CONCLUSION AND FUTURE WORK

In this paper, we studied the problem of aligning the supports of distributions. We formalized its theoretical connections with existing alignment notions and demonstrated the effectiveness of the approach in domain adaptation. We believe that our methodology opens possibilities for the design of more nuanced and structured alignment constraints, suitable for various use cases. One natural extension is support containment, achievable with only one term in (1). This approach is fitting for partial domain adaptation, where some source domain classes do not appear in the target domain. Another interesting direction is unsupervised domain transfer, where support alignment is more desired than existing distribution alignment methods due to mode imbalance (Binkowski et al., 2019).

ACKNOWLEDGMENTS

The computational experiments presented in this paper were performed on "Satori" cluster developed as a collaboration between MIT and IBM. We used Weights & Biases (Biewald, 2020) for experiment tracking and visualizations to develop insights for this paper.

TJ acknowledges support from MIT-IBM Watson AI Lab, from Singapore DSO, and MIT-DSTA Singapore collaboration. We thank Xiang Fu and all anonymous reviewers for their helpful comments regarding the paper's writing and presentation.

REFERENCES

Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. arXiv preprint arXiv:1412.4446, 2014. 1, 8
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214-223. PMLR, 2017. 3, 7, 9
Yogesh Balaji, Rama Chellappa, and Soheil Feizi. Robust optimal transport with applications in generative modeling and domain adaptation. Advances in Neural Information Processing Systems Foundation (NeurIPS), 2020. 9
Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19:137, 2007. 1
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1):151-175, 2010. 1
Dimitris Bertsimas and John N Tsitsiklis. Introduction to linear optimization, volume 6. Athena Scientific Belmont, MA, 1997. 21
Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com. 10
Mikolaj Binkowski, Devon Hjelm, and Aaron Courville. Batch weight for domain adaptation with mass shift. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1844-1853, 2019. 9
Nicolas Bonneel and David Coeurjolly. Spot: sliced partial optimal transport. ACM Transactions on Graphics (TOG), 38(4):1-13, 2019. 6
Eric Budish, Yeon-Koo Che, Fuhito Kojima, and Paul Milgrom. Implementing random assignments: A generalization of the birkhoff-von neumann theorem. In 2009 Cowles Summer Conference, 2009. 21
Raghavendra Chalapathy, Aditya Krishna Menon, and Sanjay Chawla. Robust, deep and inductive anomaly detection. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 36-51. Springer, 2017. 9
Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 215-223. JMLR Workshop and Conference Proceedings, 2011. 22
Lucas Deecke, Robert Vandermeulen, Lukas Ruff, Stephan Mandt, and Marius Kloft. Image anomaly detection with generative adversarial networks. In Joint European conference on machine learning and knowledge discovery in databases, pp. 3-17. Springer, 2018. 9
Ishan Deshpande, Ziyu Zhang, and Alexander G Schwing. Generative modeling using the sliced wasserstein distance. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3483-3491, 2018. 6, 9, 20
Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, and Alexander G Schwing. Max-sliced wasserstein distance and its use for gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10648-10656, 2019. 6, 9, 20, 23

Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 605-613, 2017. 3, 9
Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International conference on machine learning, pp. 1180-1189. PMLR, 2015. 1, 8
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096-2030, 2016. 1, 7, 8
Aude Geneva, Gabriel Peyre, and Marco Cuturi. Learning generative models with sinkhorn divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608-1617. PMLR, 2018. 9
Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. arXiv preprint arXiv:1406.2661, 2014. 3, 4, 9
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 5769-5779, 2017. 3, 9, 23
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. 25
Heiko Hoffmann. Kernel pca for novelty detection. Pattern recognition, 40(3):863-874, 2007. 9
Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence, 16(5):550-554, 1994. 22
Fredrik D Johansson, David Sontag, and Rajesh Ranganath. Support and invertibility in domain-invariant representations. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 527-536. PMLR, 2019. 1
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 22
Edwin M Knorr, Raymond T Ng, and Vladimir Tucakov. Distance-based outliers: algorithms and applications. The VLDB Journal, 8(3):237-253, 2000. 9
Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. 22
Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, Bill Freeman, and Gregory Wornell. Co-regularized alignment for unsupervised domain adaptation. Advances in Neural Information Processing Systems, 31:9345-9356, 2018. 1, 9
Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. In International conference on machine learning, pp. 957-966. PMLR, 2015. 3, 9
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. 22
Bo Li, Yezhen Wang, Tong Che, Shanghang Zhang, Sicheng Zhao, Pengfei Xu, Wei Zhou, Yoshua Bengio, and Kurt Keutzer. Rethinking distributional matching based domain adaptation. arXiv preprint arXiv:2006.13352, 2020. 1, 8
Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5400-5409, 2018a. 1, 9
Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 624-639, 2018b. 9

Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pp. 97-105. PMLR, 2015. 9
Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International conference on machine learning, pp. 2208-2217. PMLR, 2017. 9
Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems, pp. 1640-1650, 2018. 8
Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2794-2802, 2017. 9
Trung Nguyen, Quang-Hieu Pham, Tam Le, Tung Pham, Nhat Ho, and Binh-Son Hua. Point-set distances for learning representations of 3d point clouds. arXiv preprint arXiv:2102.04014, 2021. 3, 9
Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 271-279, 2016. 7
Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Multi-adversarial domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 1, 8
Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge, 2017. 25
Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1406-1415, 2019. 9
Pramuditha Perera, Ramesh Nallapati, and Bing Xiang. Ocgan: One-class novelty detection using gans with constrained latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2898-2906, 2019. 9
Gabriel Peyre, Marco Cuturi, et al. Computational optimal transport: With applications to data science. Foundations and Trends® in Machine Learning, 11(5-6):355-607, 2019. 21
Julien Rabin, Gabriel Peyre, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, pp. 435-446. Springer, 2011. 6, 20
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. 9
Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In International conference on machine learning, pp. 4393-4402. PMLR, 2018. 9
Tim Salimans, Han Zhang, Alec Radford, and Dimitris Metaxas. Improving GANs using optimal transport. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rkQkBnJAb.9
Bernhard Schölkopf, John C Platt, John Shawe-Taylor, Alex J Smola, and Robert C Williamson. Estimating the support of a high-dimensional distribution. Neural computation, 13(7):1443-1471, 2001. 9
Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representation learning for domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018. 1, 9

Rui Shu, Hung Bui, Hirokazu Narui, and Stefano Ermon. A DIRT-t approach to unsupervised domain adaptation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1q-TM-AW.7,22,24
Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In European conference on computer vision, pp. 443-450. Springer, 2016. 9
Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. 9
Remi Tachet des Combes, Han Zhao, Yu-Xiang Wang, and Geoffrey J Gordon. Domain adaptation with conditional distribution matching and generalized label shift. Advances in Neural Information Processing Systems, 33, 2020. 1, 7, 8, 22
Shuhan Tan, Xingchao Peng, and Kate Saenko. Class-imbalanced domain adaptation: An empirical odyssey. In European Conference on Computer Vision, pp. 585-602. Springer, 2020. 1, 8
David MJ Tax and Robert PW Duin. Support vector data description. Machine learning, 54(1):45-66, 2004. 9
Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7167-7176, 2017. 1, 9
Jing Wang, Jiahong Chen, Jianzhe Lin, Leonid Sigal, and Clarence W de Silva. Discriminative feature alignment: Improving transferability of unsupervised domain adaptation by gaussian-guided latent alignment. Pattern Recognition, 116:107943, 2021. 1, 9
Jiqing Wu, Zhiwu Huang, Dinesh Acharya, Wen Li, Janine Thoma, Danda Pani Paudel, and Luc Van Gool. Sliced Wasserstein generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019a. 9, 20
Yifan Wu, Ezra Winston, Divyansh Kaushik, and Zachary Lipton. Domain adaptation with asymmetrically-relaxed distribution alignment. In International Conference on Machine Learning, pp. 6872-6881. PMLR, 2019b. 1, 5, 6, 7, 8, 9, 23
Houssam Zenati, Manon Romain, Chuan-Sheng Foo, Bruno Lecouat, and Vijay Chandrasekhar. Adversarily learned anomaly detection. In 2018 IEEE International conference on data mining (ICDM), pp. 727-736. IEEE, 2018. 9
Han Zhao, Shanghang Zhang, Guanhang Wu, José MF Moura, Joao P Costeira, and Geoffrey J Gordon. Adversarial multiple source domain adaptation. In Advances in Neural Information Processing Systems, pp. 8568-8579, 2018. 1, 8
Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In International Conference on Machine Learning, pp. 7523-7532. PMLR, 2019. 1, 8

A PROOFS OF THE THEORETICAL RESULTS

A.1 PROOF OF PROPOSITION 2.1

  1. $\mathcal{D}_{\triangle}(p,q)\geq 0$ for all $p,q\in \mathcal{P}$

Since $d(\cdot ,\cdot)\geq 0$ , for all $p,q$

SD(p,q):=Exp[d(x,supp(q))]=Exp[infysupp(q)d(x,y)]0,(13) \operatorname {S D} (p, q) := \mathbb {E} _ {x \sim p} [ d (x, \operatorname {s u p p} (q)) ] = \mathbb {E} _ {x \sim p} \left[ \inf _ {y \sim \operatorname {s u p p} (q)} d (x, y) \right] \geq 0, \tag {13}

which makes $\mathcal{D}_{\triangle}(p,q) = \mathrm{SD}(p,q) + \mathrm{SD}(q,p)\geq 0$

  1. $\mathcal{D}_{\triangle}(p,q) = 0$ if and only if $\operatorname {supp}(p) = \operatorname {supp}(q)$

With statement 1, $\mathcal{D}_{\triangle}(p,q) = 0$ if and only if $\mathrm{SD}(p,q) = 0$ and $\mathrm{SD}(q,p) = 0$ .

Then,

SD(p,q)=0Exp[d(x,supp(q))]=0p({xd(x,supp(q))>0})=0. \mathrm {S D} (p, q) = 0 \Longrightarrow \mathbb {E} _ {x \sim p} [ d (x, \operatorname {s u p p} (q)) ] = 0 \Longrightarrow p (\{x | d (x, \operatorname {s u p p} (q)) > 0 \}) = 0.

This is equivalent to

xsupp(p),d(x,supp(q))=0. \forall x \in \operatorname {s u p p} (p), d (x, \operatorname {s u p p} (q)) = 0.

Thus, $\operatorname{supp}(p) \subseteq \operatorname{supp}(q)$ , and similarly, $\operatorname{supp}(q) \subseteq \operatorname{supp}(p)$ , which makes $\operatorname{supp}(p) = \operatorname{supp}(q)$ .

A.2 ASSUMPTION AND PROOF OF THEOREM 2.1

A.2.1 COMMENTS ON ASSUMPTION (3)

Assumption (3) is not restrictive. Indeed, distributions satisfying Assumption (3) include:

uniform $p(x) = U(x;[a,b])$
truncated normal;

  • $p(x)$ of the form

p(x)={1ZpeEp(x),xsupp(p),0,x∉supp(p), p (x) = \left\{ \begin{array}{l l} \frac {1}{Z _ {p}} e ^ {- E _ {p} (x)}, & x \in \operatorname {s u p p} (p), \\ 0, & x \not \in \operatorname {s u p p} (p), \end{array} \right.

with non-negative energy (unnormized log-density) function $E_{p}:\mathcal{X}\rightarrow [0,\infty)$ ;

  • mixture of any distributions satisfying Assumption (3), for instance the distributions shown in Figure A.1 top-left are mixtures of truncated normal distributions on $[-2, 2]$ .

Starting from arbitrary density $p_0(x)$ with bounded support we can derive a density $p(x)$ satisfying Assumption (3) via density clipping and re-normalization

p(x)clip(p0(x),[1C,C]), p (x) \propto \operatorname {c l i p} \left(p _ {0} (x), \left[ \frac {1}{C ^ {\prime}}, C ^ {\prime} \right]\right),

for some $C' > 1$ .

A.2.2 PROOF OF THEOREM 2.1

First, we show that $\mathcal{D}{\triangle}(p,q) = 0$ implies $\mathcal{D}{\triangle}(f_{\sharp}^{}p,f_{\sharp}^{}q) = 0$

$\mathcal{D}{\triangle}(p,q) = 0$ implies $\mathrm{supp}(p) = \mathrm{supp}(q)$ . Then for any mapping $f:\mathcal{X}\to \mathbb{R}$ , we have $\mathrm{supp}(f{\sharp}p) = \mathrm{supp}(f_{\sharp}q)$ , which implies $\mathrm{supp}(f^{}_{\sharp}p) = \mathrm{supp}(f^{}{\sharp}q)$ . Thus, $\mathcal{D}{\triangle}(f^{}_{\sharp}p,f^{}_{\sharp}q) = 0$ .

$\mathcal{D}{\triangle}(f{\sharp}^{}p,f_{\sharp}^{}q) = 0$ implies the following:

Etfp[d(t,supp(fq)]=0,Etfq[d(t,supp(fp)]=0. \mathbb {E} _ {t \sim f ^ {*} _ {\sharp} p} \left[ d (t, \operatorname {s u p p} \left(f ^ {*} _ {\sharp} q\right) \right] = 0, \qquad \mathbb {E} _ {t \sim f ^ {*} _ {\sharp} q} \left[ d (t, \operatorname {s u p p} \left(f ^ {*} _ {\sharp} p\right) \right] = 0.

  1. Suppose $\mathbb{E}_{x\sim p}[d(x,\mathrm{supp}(q))] > 0$ . This is only possible if $p({x\mid x\in \mathrm{supp}(p)\setminus \mathrm{supp}(q)}) > 0$ . Since $x\in \operatorname {supp}(p)\setminus \operatorname {supp}(q)$ implies $p(x) > 0,q(x) = 0$ , and for any $x\in \operatorname {supp}(p)\cup \operatorname {supp}(q)$ , $p(x) > 0,q(x) = 0$ if and only if $f^{*}(x) = \frac{p(x)}{p(x) + q(x)} = 1$ , we have:

Pfp({1})=Pp({xxsupp(p)\supp(q)})>0, \mathbb {P} _ {f ^ {*} \sharp p} (\{1 \}) = \mathbb {P} _ {p} (\{x \mid x \in \operatorname {s u p p} (p) \backslash \operatorname {s u p p} (q) \}) > 0,

and therefore $1 \in \operatorname{supp}(f_{\sharp}^{*}p)$ .

For a real number $\alpha : 0 < \alpha < \frac{1}{C^2 + 1}$ , consider the probability of the event $(1 - \alpha, 1] \subset [0, 1]$ under distribution $f_{\sharp}^{*}q$ :

Pfq((1α,1])=Pq({xf(x)(1α,1]}). \mathbb {P} _ {f ^ {*} \sharp q} ((1 - \alpha , 1 ]) = \mathbb {P} _ {q} (\{x \mid f ^ {*} (x) \in (1 - \alpha , 1 ] \}).

By assumption (3), $p(x) < C$ and $q(x) > 0$ implies $q(x) > \frac{1}{C}$ , therefore for $x: q(x) > 0$ we have

f(x)=p(x)p(x)+q(x)<p(x)p(x)+1C<CC+1C=11C2+1<1α. f ^ {*} (x) = \frac {p (x)}{p (x) + q (x)} < \frac {p (x)}{p (x) + \frac {1}{C}} < \frac {C}{C + \frac {1}{C}} = 1 - \frac {1}{C ^ {2} + 1} < 1 - \alpha .

This means that $\mathbb{P}{f^{*}{\sharp}q}((1 - \alpha ,1]) = 0$ , i.e. $\mathrm{supp}(f^{*}_{\sharp}q)\cap (1 - \alpha ,1] = \varnothing$

To summarize, starting from the assumption that $\mathbb{E}_{x\sim p}[d(x,\mathrm{supp}(q))] > 0$ we showed that

  • $1 \in \operatorname{supp}(f_{\sharp p}^{}, \mathbb{P}{f{\sharp p}^{}}({1}) > 0$ ;
    $\operatorname {supp}(f^{*}_{\sharp}q)\cap (1 - \alpha ,1] = \varnothing .$

Because $\mathcal{D}{\triangle}(f^{*}{\sharp}p,f^{}{\sharp}q)\geq \mathbb{E}{t\sim f^{}{\sharp}p}\bigl [d(t,\mathrm{supp}(f^{*}{\sharp}q))\bigr ]\geq \mathbb{P}{f^{*}{\sharp}p}({1})\cdot d(1,\mathrm{supp}(f^{}{\sharp}q))\geq$ $\mathbb{P}{f^{}{\sharp}p}({1})\cdot \alpha >0$ , which contradicts with the given $\mathcal{D}{\triangle}(f^{}_{\sharp}p,f^{}{\sharp}q) = 0$ , we have $\mathbb{E}{x\sim p}[d(x,\mathrm{supp}(q))] = 0$

  1. Similarly, it can be shown $\mathbb{E}_{x\sim q}[d(x,\mathrm{supp}(p))] = 0$

Thus, $\mathcal{D}{\triangle}(f{\sharp}^{}p,f_{\sharp}^{}q) = 0$ implies $\mathcal{D}_{\triangle}(p,q) = 0$

A.3 PROOF OF PROPOSITION 2.2

Consider a 1-dimensional Euclidean space $\mathbb{R}$ . Let $\mathrm{supp}(p) = [-\frac{1}{2}, \frac{1}{2}] \cup [1, 2]$ with $p([- \frac{1}{2}, \frac{1}{2}]) = \frac{3}{4}$ and $p([1, 2]) = \frac{1}{4}$ . Let $\mathrm{supp}(q) = [-2, -1] \cup [-\frac{1}{2}, \frac{1}{2}] \cup [1, 2]$ with $q([-2, -1]) = \frac{1}{4}$ , $q([- \frac{1}{2}, \frac{1}{2}]) = \frac{1}{4}$ and $q([1, 2]) = \frac{1}{2}$ . The supports of $p$ and $q$ consist of disjoint closed intervals, and we assume uniform distribution within each of these intervals, i.e. $p$ has density $p(x) = \frac{3}{4}, \forall x \in [-\frac{1}{2}, \frac{1}{2}]$ ; $p(x) = \frac{1}{4}, \forall x \in [1, 2]$ and $q$ has density $q(x) = \frac{1}{4}, \forall x \in [-2, -1]$ ; $q(x) = \frac{1}{4}, \forall x \in [-\frac{1}{2}, \frac{1}{2}]$ ; $q(x) = \frac{1}{2}, \forall x \in [1, 2]$ . Clearly, $\mathrm{supp}(p) \neq \mathrm{supp}(q)$ .

The optimal dual Wasserstein discriminator $f_{W}^{*}$ is the maximizer of

supf:Lip(f)1Exp[f(x)]Eyq[f(y)]. \sup_{f:\operatorname {Lip}(f)\leq 1}\mathbb{E}_{x\sim p}[f(x)] - \mathbb{E}_{y\sim q}[f(y)].

Thus, $f_{W}^{*}$ is the maximizer of

supf:Lip(f)114(31212f(x)dx+12f(x)dx21f(x)dx1212f(x)dx212f(x)dx), \sup _ {f: \operatorname {L i p} (f) \leq 1} \frac {1}{4} \left(3 \int_ {- \frac {1}{2}} ^ {\frac {1}{2}} f (x) d x + \int_ {1} ^ {2} f (x) d x - \int_ {- 2} ^ {- 1} f (x) d x - \int_ {- \frac {1}{2}} ^ {\frac {1}{2}} f (x) d x - 2 \int_ {1} ^ {2} f (x) d x\right),

which simplifies to

supf:Lip(f)114(21f(x)dx+21212f(x)dx12f(x)dx). \sup _ {f: \operatorname {L i p} (f) \leq 1} \frac {1}{4} \left(- \int_ {- 2} ^ {- 1} f (x) d x + 2 \int_ {- \frac {1}{2}} ^ {\frac {1}{2}} f (x) d x - \int_ {1} ^ {2} f (x) d x\right).

Since the optimization objective and the constraint are invariant to replacing the function $f(x)$ with its symmetric reflection $g(x) = f(-x)$ , if $f'$ is a optimal solution, then there exists a symmetric

maximizer $f_W^*(x) = \frac{1}{2} f'(x) + \frac{1}{2} f'(-x)$ , since $f_W^*(x) = f_W^*(-x)$ and $\operatorname{Lip}(f_W^*) \leq \operatorname{Lip}(f') \leq 1$ . Thus, $\operatorname{supp}(f_{W^\sharp}^\star p) = \operatorname{supp}(f_{W^\sharp}^\star q)$ as $f_W^*(x) = f_W^*(-x)$ for $x \in [1,2]$ .

Note that one can easily "extend" the above proof to discrete distributions, by replacing the disjoint segments $[-2, -1]$ , $[- \frac{1}{2}, \frac{1}{2}]$ , $[1, 2]$ with points ${-1}$ , ${0}$ , ${1}$ .

A.4 PROOF OF PROPOSITION 4.1

From (8), we have

DW(p,q):=limβDWβ(p,q)=limβinfγΓβ(p,q)E(x,y)γ[d(x,y)], \mathcal {D} _ {W} ^ {\infty} (p, q) := \lim _ {\beta \to \infty} \mathcal {D} _ {W} ^ {\beta} (p, q) = \lim _ {\beta \to \infty} \inf _ {\gamma \in \Gamma_ {\beta} (p, q)} \mathbb {E} _ {(x, y) \sim \gamma} [ d (x, y) ],

where $\lim_{\beta \to \infty}\Gamma_{\beta}(p,q)$ is the set of all measures $\gamma$ on $\mathcal{X} \times \mathcal{X}$ such that $\int \gamma (x,y)dy = p(x),\forall x$ and $\int \gamma (x,y)dx\leq \lim_{\beta \to \infty}(1 + \beta)q(y),\forall y.$

The set of inequalities

γ(x,y)dxlimβ(1+β)q(y),y \int \gamma (x, y) d x \leq \lim _ {\beta \rightarrow \infty} (1 + \beta) q (y), \quad \forall y

can be simplified to

γ(x,y)dx=0,ys u c hq(y)=0. \int \gamma (x, y) d x = 0, \quad \forall y \text {s u c h} q (y) = 0.

To put it together, we have

DW(p,q)=infγΓ(p,q)E(x,y)γ[d(x,y)], \mathcal {D} _ {W} ^ {\infty} (p, q) = \inf _ {\gamma \in \Gamma_ {\infty} (p, q)} \mathbb {E} _ {(x, y) \sim \gamma} [ d (x, y) ],

where $\Gamma_{\infty}(p,q)$ is the set of all measures $\gamma$ on $\mathcal{X} \times \mathcal{X}$ such that $\int \gamma(x,y) dy = p(x), \forall x$ and $\int \gamma(x,y) dx = 0, \forall y$ such that $q(y) = 0$ . In other words, we seek the coupling $\gamma(x,y)$ which defines a transportation plan such that the total mass transported from given point $x$ is equal to $p(x)$ , and the only constraint on the destination points $y$ is that no probability mass can be transported to points $y$ where $q(y) = 0$ , i.e. $y \notin \operatorname{supp}(q)$ .

Let $y^{*}(x)$ denote a function such that

y(x)supp(q),x;d(x,y(x))=infysupp(q)d(x,y). y ^ {*} (x) \in \operatorname {s u p p} (q), \forall x; \quad d (x, y ^ {*} (x)) = \inf _ {y \in \operatorname {s u p p} (q)} d (x, y).

We can see that $\gamma^{*}$ given by

γ(x,y)=p(x)δ(yy(x)), \gamma^ {*} (x, y) = p (x) \delta (y - y ^ {*} (x)),

is the optimal coupling. Indeed, $\gamma^{}$ satisfies the constraints $\gamma^{} \in \Gamma_{\infty}$ , and the cost of any other transportation cost $\gamma \in \Gamma_{\infty}$ is at least that of $\gamma^{}$ (since $y^{}(x)$ is defined as a closest point $y$ in $\operatorname{supp}(q)$ to a given point $x$ ).

Thus,

DW(p,q)=infγΓ(p,q)E(x,y)γ[d(x,y)]=E(x,y)γ[d(x,y)]=Exp[infysupp(q)d(x,y)]. \mathcal {D} _ {W} ^ {\infty} (p, q) = \inf _ {\gamma \in \Gamma_ {\infty} (p, q)} \mathbb {E} _ {(x, y) \sim \gamma} [ d (x, y) ] = \mathbb {E} _ {(x, y) \sim \gamma^ {*}} [ d (x, y) ] = \mathbb {E} _ {x \sim p} \left[ \inf _ {y \in \operatorname {s u p p} (q)} d (x, y) \right].

The last equation implies $\mathcal{D}_W^\infty (p,q) = \mathrm{SD}(p,q)$ $(\mathrm{SD}(\cdot ,\cdot))$ is defined in (13)). Then,

DW,(p,q):=limβ1,β2DWβ1,β2(p,q)=DW(p,q)+DW(q,p)=SD(p,q)+SD(q,p)=D(p,q). \mathcal {D} _ {W} ^ {\infty , \infty} (p, q) := \lim _ {\beta_ {1}, \beta_ {2} \to \infty} \mathcal {D} _ {W} ^ {\beta_ {1}, \beta_ {2}} (p, q) = \mathcal {D} _ {W} ^ {\infty} (p, q) + \mathcal {D} _ {W} ^ {\infty} (q, p) = \mathrm {S D} (p, q) + \mathrm {S D} (q, p) = \mathcal {D} _ {\triangle} (p, q).

A.5 PROOF OF PROPOSITION 4.2

  1. $\mathcal{D}_W(p,q) = 0$ implies $p = q$ , which is equivalent to

p(x)q(x)=1,xsupp(p)supp(q). \frac {p (x)}{q (x)} = 1, \quad \forall x \in \operatorname {s u p p} (p) \cup \operatorname {s u p p} (q).

Then clearly, for all finite $\beta_{1},\beta_{2} > 0$ it satisfies

11+β2p(x)q(x)1+β1,xsupp(p)supp(q).(14) \frac {1}{1 + \beta_ {2}} \leq \frac {p (x)}{q (x)} \leq 1 + \beta_ {1}, \quad \forall x \in \operatorname {s u p p} (p) \cup \operatorname {s u p p} (q). \tag {14}

Thus, $\mathcal{D}W^{\beta_1,\beta_2}(p,q) = 0$ for all finite $\beta{1},\beta_{2} > 0$

  1. $\mathcal{D}W^{\beta_1,\beta_2}(p,q) = 0$ for some finite $\beta{1},\beta_{2} > 0$ means that (14) is satisfied. This implies that $\forall x\in \operatorname {supp}(p),x\in \operatorname {supp}(q)$ and $\forall x\in \operatorname {supp}(q),x\in \operatorname {supp}(p)$ , which makes $\mathrm{supp}(p) =$ $\mathrm{supp}(q)$ . Thus, $\mathcal{D}_{\triangle}(p,q) = 0$

  2. The converse of statements 1 and 2 are false:

(a) For all finite $\beta_{1},\beta_{2} > 0$ , let $\operatorname {supp}(p) = \operatorname {supp}(q) = {x_1,x_2}$ . Let $p(x_{1}) = p(x_{2}) =$ $1 / 2$ and $q(x_{1}) = (1 + \beta^{\prime}) / 2$ and $q(x_{2}) = (1 - \beta^{\prime}) / 2$ where

β=min(β2,111+β1). \beta^ {\prime} = \min \left(\beta_ {2}, 1 - \frac {1}{1 + \beta_ {1}}\right).

Then, it can be easily checked that (14) is satisfied, which makes $\mathcal{D}_W^{\beta_1,\beta_2}(p,q) = 0$ . However, since $\beta' \neq 0$ , $p \neq q$ and thus $\mathcal{D}_W(p,q) \neq 0$ .

(b) Similar to (a), let $\operatorname{supp}(p) = \operatorname{supp}(q) = {x_1, x_2}$ . Let $p(x_1) = q(x_2) = \varepsilon$ and $p(x_2) = q(x_1) = 1 - \varepsilon$ for some $\varepsilon > 0$ . Since $\operatorname{supp}(p) = \operatorname{supp}(q)$ , $\mathcal{D}_{\triangle}(p, q) = 0$ . However,

limε0p(x1)q(x1)=limε0ε1ε=0, \lim _ {\varepsilon \downarrow 0} \frac {p (x _ {1})}{q (x _ {1})} = \lim _ {\varepsilon \downarrow 0} \frac {\varepsilon}{1 - \varepsilon} = 0,

and, thus, for any finite $\beta_{2} > 0$ we can choose $\varepsilon > 0$ such that

p(x1)q(x1)=ε1ε<11+β2. \frac {p (x _ {1})}{q (x _ {1})} = \frac {\varepsilon}{1 - \varepsilon} < \frac {1}{1 + \beta_ {2}}.

Therefore, (14) is not satisfied and $\mathcal{D}_W^{\beta_1,\beta_2}(p,q)\neq 0$

A.6 PROOF OF PROPOSITION 4.3

Using (2), we first establish a connection between pushforward distributions $f^{}_{\sharp}p$ and $f^{}_{\sharp}q$ .

Proposition A.1. Let $f^*$ be the optimal log-loss discriminator (2) between $p$ and $q$ . Then,

[fp](t)[fp](t)+[fq](t)=t,tsupp(fp)supp(fq).(15) \frac {\left[ f _ {\sharp} ^ {*} p \right] (t)}{\left[ f _ {\sharp} ^ {*} p \right] (t) + \left[ f _ {\sharp} ^ {*} q \right] (t)} = t, \quad \forall t \in \operatorname {s u p p} \left(f _ {\sharp} ^ {*} p\right) \cup \operatorname {s u p p} \left(f _ {\sharp} ^ {*} q\right). \tag {15}

Proof. For any point $t \in \operatorname{supp}(f_{\sharp}^{}p) \cup \operatorname{supp}(f_{\sharp}^{}q)$ , the values of the densities

[fp](t)=limε0Pp({xtε<f(x)<t+ε})2ε=limε0{xtε<f(x)<t+ε}p(x)dx2ε, [ f _ {\sharp} ^ {*} p ] (t) = \lim _ {\varepsilon \downarrow 0} \frac {\mathbb {P} _ {p} (\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \})}{2 \varepsilon} = \lim _ {\varepsilon \downarrow 0} \frac {\int_ {\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} p (x) d x}{2 \varepsilon},

[fq](t)=limε0Pq({xtε<f(x)<t+ε})2ε=limε0{xtε<f(x)<t+ε}q(x)dx2ε. [ f _ {\sharp} ^ {*} q ] (t) = \lim _ {\varepsilon \downarrow 0} \frac {\mathbb {P} _ {q} (\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \})}{2 \varepsilon} = \lim _ {\varepsilon \downarrow 0} \frac {\int_ {\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} q (x) d x}{2 \varepsilon}.

Note that for all $x:t - \varepsilon < f^{*}(x) < t + \varepsilon$ we have

tε<p(x)p(x)+q(x)<t+ε, t - \varepsilon < \frac {p (x)}{p (x) + q (x)} < t + \varepsilon ,

which implies

(tε)(p(x)+q(x))<p(x)<(t+ε)(p(x)+q(x)). (t - \varepsilon) (p (x) + q (x)) < p (x) < (t + \varepsilon) (p (x) + q (x)).

Since these inequalities hold for all $x: t - \varepsilon < f^{*}(x) < t + \varepsilon$ , the similar relationship holds for the integrals:

(tε){xtε<f(x)<t+ε}(p(x)+q(x))dx<{xtε<f(x)<t+ε}p(x)dx<(t+ε){xtε<f(x)<t+ε}(p(x)+q(x))dx. \begin{array}{l} (t - \varepsilon) \int_ {\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} (p (x) + q (x)) d x \\ < \int_ {\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} p (x) d x < \\ (t + \varepsilon) \int_ {\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} (p (x) + q (x)) d x. \\ \end{array}

The ratio $f^{*}_{\sharp}p / (f^{*}_{\sharp}p + f^{*}_{\sharp}q)$ can be expressed as

[fp](t)[fp](t)+[fq](t)=limε0{xtε<f(x)<t+ε}p(x)dx{xtε<f(x)<t+ε}(p(x)+q(x))dx. \frac {[ f ^ {*} _ {\sharp} p ] (t)}{[ f ^ {*} _ {\sharp} p ] (t) + [ f ^ {*} _ {\sharp} q ] (t)} = \lim _ {\varepsilon \downarrow 0} \frac {\int_ {\{x | t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} p (x) d x}{\int_ {\{x | t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} (p (x) + q (x)) d x}.

Using the inequality above we observe that

tε<{xtε<f(x)<t+ε}p(x)dx{xtε<f(x)<t+ε}(p(x)+q(x))dx<t+ε, t - \varepsilon < \frac {\int_ {\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} p (x) d x}{\int_ {\{x \mid t - \varepsilon < f ^ {*} (x) < t + \varepsilon \}} (p (x) + q (x)) d x} < t + \varepsilon ,

for all $\varepsilon > 0$ , and taking the limit $\varepsilon \downarrow 0$ we obtain

[fp](t)[fp](t)+[fq](t)=t. \frac {[ f _ {\sharp} ^ {*} p ] (t)}{[ f _ {\sharp} ^ {*} p ] (t) + [ f _ {\sharp} ^ {*} q ] (t)} = t.


Figure A.1: Visual illustration of the statement of Proposition A.1. The top-left panel shows two example PDFs $p(x)$ , $q(x)$ on closed interval $[-2, 2]$ . The bottom-left panel shows the optimal discriminator function $f^{}(x) = p(x) / (p(x) + q(x))$ as a function of $x$ on $[-2, 2]$ . The top-right panel shows the PDFs $f_{\sharp}^{*}p, f_{\sharp}^{*}q$ of the pushforward distributions $f_{\sharp}^{}p$ , $f_{\sharp}^{}q$ induced by the discriminator mapping $f^{}$ . $f^{}$ maps $[-2, 2]$ to $[0, 1]$ and $[f_{\sharp}^{}p]$ , $[f_{\sharp}^{*}q]$ are defined on $[0, 1]$ .

Consider point $x_1 \in [-2, 2]$ . The value $f^*(x_1)$ characterizes the ratio of densities $p(x_1) / (p(x_1) + q(x_1))$ at $x_1$ . For another point $x_2$ mapped to the same value $f^*(x_2) = f^*(x_1) = t_{1,2}$ , the ratio of densities $p(x_2) / (p(x_2) + q(x_2))$ is the same as $p(x_1) / (p(x_1) + q(x_1))$ . All points $x$ mapped to $t_{1,2}$ share the same ratio of the densities $p(x) / (p(x) + q(x))$ . This fact implies that the ratio of the pushforward densities $f^*_\sharp p / (f^*_\sharp p + f^*_\sharp q)$ at $t_{1,2}$ must be the same as the ratio of densities $p(x_1) / (p(x_1) + q(x_1)) = t_{1,2}$ at $x_1$ (or $x_2$ ). The pushforward PDFs $f^*_\sharp q, f^*_\sharp q$ satisfy property $f^*_\sharp p / (f^*_\sharp p + f^*_\sharp q) = t$ for all $t \in \operatorname{supp}(f^*_\sharp p) \cup \operatorname{supp}(f^*_\sharp q)$ .

Comment. Intuitively this proposition states the following. If for some $x \in \mathcal{X}$ we have $f^{*}(x) = t \in [0,1]$ , $t$ directly corresponds to the ratio of densities not only in the original space $t = p(x) / (p(x) + q(x))$ , but also in the 1D discriminator output space $t = f^{*}_{\sharp}p / (f^{*}_{\sharp}p + f^{*}_{\sharp}q)$ .

We also provide an intuitive example in Figure A.1.

Proof of Proposition 4.3, statement #1.

$\Longrightarrow$ : If $\mathcal{D}W(p,q) = 0$ then $p = q$ , then $f^{*}{\sharp}p = f^{}_{\sharp}q$ . Thus, $\mathcal{D}_W(f^{}{\sharp}p,f^{*}{\sharp}q) = 0$

$\Longleftarrow$ : If $\mathcal{D}W(f{\sharp}^* p, f_{\sharp}^* q) = 0$ , then $f_{\sharp}^* p = f_{\sharp}^* q$ .

Consider probability of event ${t\mid t > \frac{1}{2}}$ under distribution $f^{*}_{\sharp}p$

Pfp({tt>12})=I[f(x)>12]p(x)dx, \mathbb {P} _ {f ^ {*} \sharp p} \left(\left\{t \mid t > \frac {1}{2} \right\}\right) = \int \mathbb {I} \left[ f ^ {*} (x) > \frac {1}{2} \right] p (x) d x,

where $\mathbb{I}[\cdot ]$ is the indicator function ( $\mathbb{I}[c]$ equal to 1 when the condition $c$ is satisfied, and equal to 0 otherwise). For all $x:p(x) > 0$ , we have that $f^{*}(x) = \frac{p(x)}{p(x) + q(x)}$ and $p(x) + q(x) > 0$ . Therefore, the expression above can be re-written as

Pfp({tt>12})=I[p(x)p(x)+q(x)>12]p(x)dx=I[p(x)q(x)>0]p(x)dx. \mathbb {P} _ {f ^ {*} _ {\sharp} p} \left(\left\{t \mid t > \frac {1}{2} \right\}\right) = \int \mathbb {I} \left[ \frac {p (x)}{p (x) + q (x)} > \frac {1}{2} \right] p (x) d x = \int \mathbb {I} [ p (x) - q (x) > 0 ] p (x) d x.

Similarly, the probability of event $\left{t \mid t > \frac{1}{2}\right}$ under distribution $f^{*}_{\sharp} q$ is

Pfq({tt>12})=I[p(x)q(x)>0]q(x)dx. \mathbb {P} _ {f _ {\sharp} ^ {*} q} \left(\left\{t \mid t > \frac {1}{2} \right\}\right) = \int \mathbb {I} [ p (x) - q (x) > 0 ] q (x) d x.

$f^{}_{\sharp}p = f^{}_{\sharp}q$ implies that

Pfp({tt>12})=Pfq({tt>12}), \mathbb {P} _ {f ^ {*} _ {\sharp} p} \left(\left\{t \mid t > \frac {1}{2} \right\}\right) = \mathbb {P} _ {f ^ {*} _ {\sharp} q} \left(\left\{t \mid t > \frac {1}{2} \right\}\right),

or equivalently

I[p(x)q(x)>0](p(x)q(x))dx=0. \int \mathbb {I} [ p (x) - q (x) > 0 ] (p (x) - q (x)) d x = 0.

Note, that the function $\mathbb{I}p(x) - q(x) > 0 - q(x))$ is non-negative for any $x$ . This means that the integral can be zero only if the function is zero everywhere implying that for any $x$ either $\mathbb{I}[p(x) - q(x) > 0] = 0$ or $p(x) - q(x) = 0$ . In other words,

p(x)q(x),x. p (x) \leq q (x), \quad \forall x.

Using the fact the both densities $p(x)$ and $q(x)$ must sum up to 1, we conclude that $p = q$ and $\mathcal{D}_W(p,q) = 0$ .

Proof of Proposition 4.3, statement #2.

Note that by (2) and (15), we have

f(x)=p(x)p(x)+q(x)=t=[fp](t)[fp](t)+[fq](t),xsupp(p)supp(q). f ^ {*} (x) = \frac {p (x)}{p (x) + q (x)} = t = \frac {[ f _ {\sharp} ^ {*} p ] (t)}{[ f _ {\sharp} ^ {*} p ] (t) + [ f _ {\sharp} ^ {*} q ] (t)}, \qquad \forall x \in \operatorname {s u p p} (p) \cup \operatorname {s u p p} (q).

Since

f(x)=p(x)p(x)+q(x)=p(x)q(x)1+p(x)q(x),xS, f ^ {*} (x) = \frac {p (x)}{p (x) + q (x)} = \frac {\frac {p (x)}{q (x)}}{1 + \frac {p (x)}{q (x)}}, \quad \forall x \in S,

the inequalities above are equivalent to

12+β2f(x)1+β12+β1,xS. \frac {1}{2 + \beta_ {2}} \leq f ^ {*} (x) \leq \frac {1 + \beta_ {1}}{2 + \beta_ {1}}, \quad \forall x \in S.

Combined with Proposition A.1, the above implies that

12+β2[fp](t)[fp](t)+[fq](t)1+β12+β1,tT, \frac {1}{2 + \beta_ {2}} \leq \frac {[ f _ {\sharp} ^ {*} p ] (t)}{[ f _ {\sharp} ^ {*} p ] (t) + [ f _ {\sharp} ^ {*} q ] (t)} \leq \frac {1 + \beta_ {1}}{2 + \beta_ {1}}, \quad \forall t \in T,

or equivalently

11+β2[fp](t)[fq](t)1+β1,tT. \frac {1}{1 + \beta_ {2}} \leq \frac {[ f _ {\sharp} ^ {*} p ] (t)}{[ f _ {\sharp} ^ {*} q ] (t)} \leq 1 + \beta_ {1}, \quad \forall t \in T.

Therefore, $\mathcal{D}W^{\beta_1,\beta_2}(f^{*}{\sharp}p,f^{*}_{\sharp}q) = 0$

$\Longleftarrow$ : similarly, when $\mathcal{D}W^{\beta_1,\beta_2}(f*^\sharp p,f_*^\sharp q) = 0,$

supp(fp)=supp(fq)=Tsupp(p)=supp(q)=S. \operatorname {s u p p} \left(f _ {\sharp} ^ {*} p\right) = \operatorname {s u p p} \left(f _ {\sharp} ^ {*} q\right) = T \quad \Longrightarrow \quad \operatorname {s u p p} (p) = \operatorname {s u p p} (q) = S.

Moreover,

11+β2[fp](t)[fq](t)1+β1,tT, \frac {1}{1 + \beta_ {2}} \leq \frac {[ f _ {\sharp} ^ {*} p ] (t)}{[ f _ {\sharp} ^ {*} q ] (t)} \leq 1 + \beta_ {1}, \qquad \forall t \in T,

12+β2[fp](t)[fp](t)+[fq](t)1+β12+β2,tT \frac {1}{2 + \beta_ {2}} \leq \frac {[ f _ {\sharp} ^ {*} p ] (t)}{[ f _ {\sharp} ^ {*} p ] (t) + [ f _ {\sharp} ^ {*} q ] (t)} \leq \frac {1 + \beta_ {1}}{2 + \beta_ {2}}, \qquad \forall t \in T

12+β2f(x)1+β12+β1,xS, \begin{array}{c} \frac {1}{2 + \beta_ {2}} \leq f ^ {*} (x) \leq \frac {1 + \beta_ {1}}{2 + \beta_ {1}}, \qquad \forall x \in S, \\ \Downarrow \end{array}

11+β2p(x)q(x)1+β1,xS. \frac {1}{1 + \beta_ {2}} \leq \frac {p (x)}{q (x)} \leq 1 + \beta_ {1}, \quad \forall x \in S.

Therefore, $\mathcal{D}_W^{\beta_1,\beta_2}(p,q) = 0$

B A COMMENT ON "SLICED" SSD DIVERGENCE

Recent works (Deshpande et al., 2018; 2019) have proposed to perform optimal transport (OT)-based distribution alignment by reducing the OT problem (7) in the original, potentially high-dimensional space, to that between 1D distributions. Specifically, Deshpande et al. (2018) consider the sliced Wasserstein distance (Rabin et al., 2011):

DSW(p,q)=Sn1DW(fθp,fθq)dθ,(16) \mathcal {D} _ {S W} (p, q) = \int_ {\mathbb {S} ^ {n - 1}} \mathcal {D} _ {W} \left(f _ {\sharp} ^ {\theta} p, f _ {\sharp} ^ {\theta} q\right) d \theta , \tag {16}

where $\mathbb{S}^{n - 1} = {\theta \in \mathbb{R}^n\mid | \theta | = 1}$ is a unit sphere in $\mathbb{R}^n$ , and $f^{\theta}$ is a 1D linear projection $f^{\theta}(x) = \langle \theta ,x\rangle$ . It is known that $\mathcal{D}{SW}$ is a valid distribution divergence: for any $p\neq q$ there exists a linear slicing function $f^{\theta}$ , $\theta \in \mathbb{S}^{n - 1}$ which identifies difference in the distributions, i.e. $f^{\theta}{\sharp}p\neq f^{\theta}_{\sharp}q$ (Cramér-Wold theorem). By reducing the original OT problem to that in a 1D space, Wu et al. (2019a) and Deshpande et al. (2019) develop efficient practical methods for distribution alignment based on fast algorithms for the 1D OT problem.

Unfortunately, the straight-forward extension of SSD divergence (1) to a 1D linearly sliced version does not provide a valid support divergence.

Proposition B.1. There exist two distributions $p$ and $q$ in $\mathcal{P}$ , such that $\mathrm{supp}(p) \neq \mathrm{supp}(q)$ but $\mathrm{supp}(f^{\theta}{\sharp}p) = \mathrm{supp}(f^{\theta}{\sharp}q)$ , $\forall f^{\theta}(x) = \langle \theta, x \rangle$ with $\theta \in \mathbb{S}^{n-1}$ .

Proof. Consider a 2-dimensional Euclidean space $\mathbb{R}^2$ and let $\mathrm{supp}(p) = {(x,y)|x^2 +y^2\leq 2}$ and $\mathrm{supp}(q) = {(x,y)|1\leq x^2 +y^2\leq 2}$ . Then, $\forall f^{\theta}(x) = \langle \theta ,x\rangle$ with $\theta \in \mathbb{S}^1$ ,

supp(fθp)=supp(fθq)=[2,2]. \operatorname {s u p p} \left(f ^ {\theta} _ {\sharp} p\right) = \operatorname {s u p p} \left(f ^ {\theta} _ {\sharp} q\right) = [ - 2, 2 ].

This counterexample is shown in Figure B.1.


Figure B.1: Visualization of example distributions for Proposition B.1

C DISCUSSION OF “SOFT” AND “HARD” ASSIGNMENTS WITH 1D DISCRETE DISTRIBUTIONS

In Section 4.2 we considered the "soft-assignment" relaxed OT problem (11) and claimed that for integer $\beta$ , the set of minimizers of (11) must contain a "hard-assignment" transportation plan, meaning $\gamma_{ij} \in {0,1}, \forall i,j$ . Below we justify this claim.

Note that for $\beta = 0$ the OT problem (11) is the standard OT problem for Wasserstein-1 distance (10), since the inequality constraints $\sum_{i=1}^{m} \gamma_{ij} \leq 1$ , $\forall j$ can only be satisfied as equalities. For this problem, it is known (e.g. see Peyré et al. (2019) Proposition 2.1) that the set of optimal "soft-assignment" contains a "hard-assignment" represented by a normalized permutation matrix. This fact can be proven using the Birkhoff-von Neumann theorem. The Birkhoff-von Neumann theorem states that the set of doubly stochastic matrices

PRn×n:Pij0,i,j,j=1nPij=1,i,i=1nPij=1,j P \in \mathbb {R} ^ {n \times n}: \qquad P _ {i j} \geq 0, \forall i, j, \qquad \sum_ {j = 1} ^ {n} P _ {i j} = 1, \forall i, \qquad \sum_ {i = 1} ^ {n} P _ {i j} = 1, \forall j

is exactly the set of all finite convex combinations of permutation matrices. In the context of the linear program (11) with $\beta = 0$ , the Birkhoff-von Neumann theorem means that all extreme points of the polyhedron $\Gamma_{\beta}(o^p,o^q)$ are hard-assignment matrices. Therefore, by the fundamental theorem of linear programming (Bertsimas & Tsitsiklis, 1997), the minimum of the objective is reached at a "hard-assignment" matrix.

We argue that a similar result holds for the case of integer $\beta > 0$ . In this case, the matrices in $\Gamma_{\beta}(o^p, o^q)$ can not be associated with the doubly stochastic matrices, since constraints on the margins of $\gamma$ are relaxed to inequality constraints. Because of that, the Birkhoff-von Neumann theorem can not be applied. However, Budish et al. (2009) provide a generalization of the Birkhoff-von Neumann theorem (Theorem 1 in (Budish et al., 2009)) which applies to the cases where the equality constraints are replaced with integer-valued inequality constraints (recall that we consider integer $\beta$ ). Using this generalized result, our claim can be proven by performing the following steps.

Clearly, the polyhedron $\Gamma_{\beta}(o^p,o^q)$ contains all "hard-assignment" matrices and all their finite convex combinations. The result proven in (Budish et al., 2009) implies that each element of $\Gamma_{\beta}(o^{p},o^{q})$ can be represented as a finite convex combination of "hard-assignment" matrices. Thus, the polyhedron $\Gamma_{\beta}(o^p,o^q)$ is exactly the set of all finite convex combinations of "hard-assignment" matrices and all extreme points of the polyhedron are "hard-assignment" matrices. Finally, by analogy with the case of $\beta = 0$ , we invoke the fundamental theorem of the linear programming and conclude that the minimum of the objective (11) is reached at $\gamma$ corresponding to a "hard-assignment" matrix.

D EXPERIMENT DETAILS

D.1 USPS TO MNIST EXPERIMENT SPECIFICATIONS

We use USPS (Hull, 1994) and MNIST (LeCun et al., 1998) datasets for this adaptation problem.

Following Tachet des Combes et al. (2020) we use LeNet-like (LeCun et al., 1998) architecture for the feature extractor with the 500-dimensional feature representation. The classifier consists of a single linear layer. The discriminator is implemented by a 3-layer MLP with 512 hidden units and leaky-ReLU activation.

We train all methods for 65000 steps with batch size 64. We train the feature extractor, the classifier, and the discriminator with SGD (learning rate 0.02, momentum 0.9, weight decay $5 \cdot 10^{-4}$ ). We perform a single discriminator update per 1 update of the feature extractor and the classifier. After the first 30000 steps we linearly anneal the feature extractor's and classifier's learning rates for 30000 steps to the final value $2 \cdot 10^{-5}$ .

The feature extractor's loss is given by a weighted combination of the cross-entropy classification loss on the labeled source example and a domain alignment loss computed from the discriminator's signal (recall that different method use different forms of the alignment loss). The weight for the classification term is constant and set to $\lambda_{\mathrm{cls}} = 1$ . We introduce schedule for the alignment weight $\lambda_{\mathrm{align}}$ . For all alignment methods we linearly increase $\lambda_{\mathrm{align}}$ from 0 to 1.0 during the first 10000 steps.

For ASA we use history buffers of size 1000.

D.2 STL TO CIFAR EXPERIMENT SPECIFICATIONS

We use STL (Coates et al., 2011) and CIFAR-10 (Krizhevsky, 2009) for this adaptation task. STL and CIFAR-10 are both 10-class classification problems. There are 9 common classes between the two datasets. Following Shu et al. (2018) we create a 9-class classification problem by selecting the subsets of examples of the 9 common classes.

For the feature extractor, we adapt the deep CNN architecture of Shu et al. (2018). The feature representation is a 192-dimensional vector. The classifier consists of a single linear layer. The discriminator is implemented by a 3-layer MLP with 512 hidden units and leaky-ReLU activation.

We train all methods for 40 000 steps with batch size 64. We train the feature extractor, the classifier, and the discriminator with ADAM (Kingma & Ba, 2014) (learning rate $0.001$ , $\beta_{1} = 0.5$ , $\beta_{2} = 0.999$ , no weight decay). We perform a single discriminator update per 1 update of the feature extractor and the classifier.

The weight for the classification loss term is constant and set to $\lambda_{\mathrm{cls}} = 1$ . For all alignment methods we use constant alignment weight $\lambda_{\mathrm{align}} = 0.1$ .

For ASA we use history buffers of size 1000.

Conditional entropy loss. Following (Shu et al., 2018) we use auxiliary conditional entropy loss on target examples for domain adaptation methods. For a classifier $C^\phi : \mathcal{Z} \to \mathcal{Y}$ and a feature extractor $F^{\theta}: \mathcal{X} \to \mathcal{Z}$ where classifier outputs the distribution over class labels ${1, \dots, K}$

Cϕ(z)RK:[Cϕ(z)]k0,k=1K[Cϕ(z)]k=1, C ^ {\phi} (z) \in \mathbb {R} ^ {K}: \quad \left[ C ^ {\phi} (z) \right] _ {k} \geq 0, \quad \sum_ {k = 1} ^ {K} \left[ C ^ {\phi} (z) \right] _ {k} = 1,

the conditional entropy loss on target examples ${x_{i}^{q}}{i = 1}^{N{q}}$ is given by

Le n t=λe n t1Nqi=1Nq(k=1K[Cϕ(Fθ(xiq))]klog[Cϕ(Fθ(xiq))]k).(17) \mathcal {L} _ {\text {e n t}} = \lambda_ {\text {e n t}} \cdot \frac {1}{N ^ {q}} \sum_ {i = 1} ^ {N _ {q}} \left(- \sum_ {k = 1} ^ {K} \left[ C ^ {\phi} \left(F ^ {\theta} \left(x _ {i} ^ {q}\right)\right) \right] _ {k} \log \left[ C ^ {\phi} \left(F ^ {\theta} \left(x _ {i} ^ {q}\right)\right) \right] _ {k}\right). \tag {17}

$\lambda_{\mathrm{ent}}$ is the weight of the conditional entropy loss in the total training objective. This loss acts as an additional regularization of the embeddings of the unlabeled target examples: minimization of the conditional entropy pushes target embeddings away from the classifier's decision boundary.

For all domain adaptation methods we use the conditional entropy loss (17) on target examples with the weight $\lambda_{\mathrm{ent}} = 0.1$

D.3 EXTENDED EXPERIMENTAL RESULTS ON STL TO CIFAR

Effect of conditional entropy loss. In order to quantify the improvements of the support alignment objective and the conditional entropy objective in separation, we conduct an ablation study. In addition to the results reported in Section 5, we evaluate all domain adaptation methods (except VADA which uses the conditional entropy in the original implementation) on STL $\rightarrow$ CIFAR task without the conditional entropy loss ( $\lambda_{\mathrm{ent}} = 0$ ). The results of the ablation study are presented in Table D.1. We observe that the effect of the auxiliary conditional entropy is essentially the same for all methods across all imbalance levels: with $\lambda_{\mathrm{ent}} = 0.1$ the accuracy either improves (especially the average class accuracy) or roughly stays on the same level. The relative ranking of distribution alignment, relaxed distribution alignment, and support alignment methods is the same with both $\lambda_{\mathrm{ent}} = 0$ and $\lambda_{\mathrm{ent}} = 0.1$ . The results demonstrate that the benefits of support alignment approach and conditional entropy are orthogonal.

Table D.1: Results of ablation experiments of the effect of auxiliary conditional entropy loss on STL $\rightarrow$ CIFAR data. Same setup and reporting metrics as Table 1.

Algorithmλentα = 0.0α = 1.0α = 1.5α = 2.0
averageminaverageminaverageminaveragemin
DANN0.074.651.568.443.243.735.562.527.5
74.149.967.041.262.829.660.025.7
DANN0.175.354.669.944.845.134.963.327.0
74.954.268.640.763.733.964.828.5
IWDAN0.070.447.268.643.646.344.763.937.3
70.246.868.443.266.043.362.932.7
IWDAN0.169.950.568.745.850.544.864.437.9
69.947.968.644.867.144.863.634.5
IWCDAN0.070.150.568.644.245.845.063.837.7
70.049.168.241.266.043.762.333.6
IWCDAN0.170.147.869.447.151.339.964.537.0
70.142.469.146.366.140.863.935.5
sDANN-40.069.446.569.649.149.248.266.342.9
68.845.169.347.468.042.664.236.6
sDANN-40.171.852.171.151.869.449.066.447.1
71.752.170.448.168.743.566.233.6
ASA-sq0.069.948.068.849.368.147.865.743.6
69.946.668.645.367.245.265.641.3
ASA-sq0.171.752.970.751.662.745.668.144.7
71.746.770.446.869.243.368.139.8
ASA-abs0.069.845.768.444.367.948.466.344.9
68.945.468.444.067.040.465.740.3
ASA-abs0.171.649.070.950.069.649.567.849.0
71.248.470.847.369.642.166.635.4

Comparison with optimal transport based baselines.

We provide additional experimental results comparing our method with OT-based methods for domain adaptation. We implement two OT-based methods which we describe below.

  • The first method is a variant of the max-sliced Wasserstein distance (which was proposed for GAN training by Deshpande et al. (2019)) for domain adaptation. In the table below we refer to this method as DANN-OT. In our implementation DANN-OT minimizes the Wasserstein distance between the pushforward distributions $g_{\sharp}^{}p$ , $g_{\sharp}^{}q$ induced by the optimal log-loss discriminator $g^{*}$ (4). As discussed in Section 4.2 (paragraph "Distribution alignment") the computation of the Wasserstein distance between 1D distributions can be implemented efficiently via sorting.
  • The second method is an OT-based variant of DANN which uses a dual Wasserstein discriminator instead of the log-loss discriminator. In the table below we refer to this method as DANN-WGP. This method minimizes the Wasserstein distance in its dual Kantorovich form. We train the discriminator with the Wasserstein dual objective and a gradient penalty proposed to enforce Lipschitz-norm constraint (Gulrajani et al., 2017).

We present the evaluation results of the OT-based methods on STL $\rightarrow$ CIFAR domain adaptation task in the Table D.2. Note that the OT-based methods aim to enforce distribution alignment constraints. We observe that the OT-based methods follow the same trend as DANN: they deliver improved accuracy compared to No DA in the balanced setting, but suffer in the imbalanced settings ( $\alpha > 0$ ) due to their distribution alignment nature.

We would also like to make a comment on OT-based relaxed distribution alignment. Wu et al. (2019b) propose method WDANN- $\beta$ which minimizes the dual form of the asymmetrically-relaxed

Table D.2: Results of comparison with optimal transport based methods on STL $\rightarrow$ CIFAR data. Same setup and reporting metrics as Table 1.

Algorithmα = 0.0α = 1.0α = 1.5α = 2.0
averageminaverageminaverageminaveragemin
No DA69.970.069.849.850.645.368.869.368.347.248.245.366.867.266.446.047.045.865.866.764.843.744.641.6
DANN75.375.474.954.654.269.970.168.644.845.140.764.967.163.734.936.833.963.364.857.427.028.521.2
DANN-OT76.075.855.254.367.768.967.143.043.736.564.565.160.934.434.629.361.362.054.424.325.523.2
DANN-WGP74.875.174.753.554.453.367.767.965.338.641.034.463.363.457.127.032.426.359.061.854.321.922.518.6
sDANN-471.872.171.752.152.852.171.171.770.449.951.848.169.470.068.748.649.043.566.467.966.239.047.133.6
ASA-sq71.771.971.752.953.446.770.771.070.451.652.746.869.269.369.245.652.043.368.168.267.244.739.839.8
ASA-abs71.671.771.249.053.548.470.971.070.849.250.047.369.669.969.643.249.542.167.868.266.640.949.035.4

Wasserstein distance. However, they observe that sDANN- $\beta$ outperforms WDANN- $\beta$ in experiments. Hence, we use sDANN- $\beta$ as a relaxed distribution alignment baseline in our experiments.

Effect of alignment weight. We provide additional experimental results comparing ASA with DANN and VADA across different values of the alignment loss weight $\lambda_{\mathrm{align}}$ on STL→CIFAR task. The results are shown in Table D.3. DANN with a higher alignment weight $\lambda_{\mathrm{align}} = 1.0$ performs better in the balanced ( $\alpha = 0$ ) setting and worse in the imbalanced ( $\alpha > 0$ ) setting compared to a lower weight $\lambda_{\mathrm{align}} = 0.1$ , as the distribution alignment constraint is enforced stricter. VADA optimizes a combination of distribution alignment + VAT (virtual adversarial training) objectives (Shu et al., 2018), and we observe the same trend: with lower alignment weight $\lambda_{\mathrm{align}} = 0.01$ , VADA performs worse in the balanced setting and better in the imbalanced setting compared to a higher weight $\lambda_{\mathrm{align}} = 0.1$ . Weight $\lambda_{\mathrm{align}} = 0.1$ is a middle ground between having poor performance in the imbalanced setting ( $\lambda_{\mathrm{align}} = 1.0$ ) and not sufficiently enforcing distribution alignment ( $\lambda_{\mathrm{align}} = 0.01$ ).

The role of VAT (similarly to that of conditional entropy loss) is orthogonal to alignment objectives. Thus, we provide additional evaluations of combining support alignment and VAT (the "ASA-sq + VAT" entry in Table D.3) with alignment weight $\lambda_{\mathrm{align}} = 1.0$ . The good performance of such combination shows that:

  • One could improve our current support alignment performance by using auxiliary objectives.
  • Support alignment based method performs qualitatively different from distribution alignment based method, since the performance holds with stricter support alignment while distribution alignment needs to loosen the constraints considerably to reduce the performance degradation in the imbalanced setting.

Table D.3: Results of comparison of ASA with DANN and VADA across different values of the alignment loss weight $\lambda_{\mathrm{align}}$ on STL $\rightarrow$ CIFAR data. Same setup and reporting metrics as Table 1.

Algorithmλalignα = 0.0α = 1.0α = 1.5α = 2.0
averageminaverageminaverageminaveragemin
DANN0.0172.372.749.550.870.671.248.951.2
72.272.248.848.869.769.741.541.5
DANN0.175.375.454.656.669.970.144.845.1
74.974.954.254.268.668.640.740.7
DANN1.077.277.358.559.466.366.837.937.5
76.876.856.756.764.564.537.537.5
VADA0.0174.474.454.255.471.771.751.652.0
74.274.252.652.671.771.745.045.0
VADA0.176.776.756.958.370.671.047.748.8
76.676.653.553.570.670.044.044.0
ASA-sq0.171.771.952.953.470.771.051.652.7
71.771.746.746.770.470.446.846.8
ASA-sq + VAT1.074.274.552.252.572.272.253.553.6
74.074.051.951.972.271.945.445.4

D.4 VISDA-17 EXPERIMENT SPECIFICATIONS

We use train and validation sets of the VisDA-17 challenge (Peng et al., 2017).

For the feature extractor we use ResNet-50 He et al. (2016) architecture with modified output size of the final linear layer. The feature representation is 256-dimensional vector. We use the weights from pre-trained ResNet-50 model (torchvision model hub) for all layers except the final linear layer. The classifier consists of a single linear layer. The discriminator is implemented by a 3-layer MLP with 1024 hidden units and leaky-ReLU activation.

We train all methods for 50 000 steps with batch size 36. We train the feature extractor, the classifier, and the discriminator with SGD. For the feature extractor we use learning rate 0.001, momentum 0.9, weight decay 0.001. For the classifier we use learning rate 0.01, momentum 0.9, weight decay 0.001. For the discriminator we use learning rate 0.005, momentum 0.9, weight decay 0.001. We perform a single discriminator update per 1 update of the feature extractor and the classifier. We linearly anneal the feature extractor's and classifier's learning rate throughout the training (50 000) steps. By the end of the training the learning rates of the feature extractor and the classifier are decreased by a factor of 0.05.

The weight for the classification term is constant and set to $\lambda_{\mathrm{cls}} = 1$ . We introduce schedule for the alignment weight $\lambda_{\mathrm{align}}$ . For all alignment methods we linearly increase $\lambda_{\mathrm{align}}$ from 0 to 0.1 during the first 10000 steps. For all methods we use auxiliary conditional entropy loss on target examples with the weight $\lambda_{\mathrm{ent}} = 0.1$ .

For ASA we use history buffers of size 1000.

D.5 HISTORY SIZE EFFECT AND EVALUATION OF SUPPORT DISTANCE

History size effect. To quantify the effects of mini-batch training mentioned in Section 3, we explore different sizes of history buffers on $\mathrm{USPS} \rightarrow \mathrm{MNIST}$ task with the label distribution shift $\alpha = 1.5$ . The results are presented in Figure D.1 and Table D.4. Figure D.2 shows the distributions of outputs of the learned discriminator at the end of the training. While without any alignment objectives neither the densities nor the supports of $g_{\sharp}^{\psi} p_Z^\theta$ and $g_{\sharp}^{\psi} q_Z^\theta$ are aligned, both alignment methods approximately satisfy their respective alignment constraints. Compared with DANN results, ASA with small history size performs similarly to distribution alignment, while all history sizes are enough for support alignment. We also observe the correlation between distribution distance and target accuracy: under label distribution shifts, the better distribution alignment is achieved, the more target accuracy suffers. Note that with too big history buffers (e.g. $n = 5000$ ), we observe a sudden drop in performance and increases in distances. We hypothesize that this could be caused by the fact that the history buffer stores discriminator output values from the past steps while the discriminator parameters constantly evolve during training. As a result, for a large history buffer, the older items might no longer accurately represent the current pushforward distribution as they become outdated.


Figure D.1: Evaluation of history size effect for ASA on MNIST $\rightarrow$ USPS with the label distribution shift ( $\alpha = 1.5$ ). The panels show (left to right): minimum class accuracy on target test set; Wasserstein distance $\mathcal{D}W(g^{\psi{\sharp}}p_Z^\theta, g^{\psi_{\sharp}}q_Z^\theta)$ between the pushforward distributions of source and target representations induced by the discriminator; SSD divergence $\mathcal{D}{\triangle}(g^{\psi{\sharp}}p_Z^\theta, g^{\psi_{\sharp}}q_Z^\theta)$ between the pushforward distributions. In each panel the dashed lines show the respective quantities for "No DA" and DANN methods.

Direct evaluation of support distance. In order to directly evaluate the ability of ASA (with history buffers) to enforce support alignment, we consider the setting of the illustrative experiment described in Section 5 (3-class USPS $\rightarrow$ MNIST adaptation with 2D feature extractor, $\alpha = 1.5$ ). We compare methods No DA, DANN, and ASA-abs (with different history buffer sizes). For each method we consider the embedding space of the learned feature extractor at the end of training and compute Wasserstein distance $\mathcal{D}W(p_Z^\theta, q_Z^\theta)$ and SSD divergence $\mathcal{D}{\triangle}(p_Z^\theta, q_Z^\theta)$ between the embeddings of source and target domain (note that we compute the distances in the original embedding space directly without projecting data to 1D with the discriminator). To ensure meaningful comparison of the distances between different embedding spaces, we apply a global affine transformation for each embedding space: we center the embeddings so that their average is 0 and re-scale them so that their average norm is 1. The results of this evaluation are shown in Table D.5. We observe that, compared to no alignment and distribution alignment (DANN) methods, ASA aligns the supports without necessarily aligning the distributions (in this imbalanced setting, distribution alignment implies low adaptation accuracy).

Table D.4: Analysis of effect history size parameter for ASA on USPS $\rightarrow$ MNIST with class label distribution shift corresponding to $\alpha = 1.5$ . We report distribution and support distances between the pushforward distributions $g^{\psi}{\sharp}p{Z}^{\theta}$ and $g^{\psi}{\sharp}q{Z}^{\theta}$ , as well as the value of discriminator's log-loss.

MethodHistory sizeTarget accuracy (%)Distribution distancesLog-loss
averagemin\( \mathcal{D}_W(g^{\psi}_{\sharp}p_{Z}^{\theta},g^{\psi}_{\sharp}q_{Z}^{\theta}) \)\( \mathcal{D}_{\triangle}(g^{\psi}_{\sharp}p_{Z}^{\theta},g^{\psi}_{\sharp}q_{Z}^{\theta}) \)
No DA71.28 72.51 71.2527.46 37.26 24.21307.56 322.00 277.3340.35 46.06 32.1000.05 00.07 00.04
DANN69.96 71.25 63.8901.11 01.53 00.9900.11 00.11 00.1000.00 00.00 00.0000.65 00.65 00.65
ASA-abs062.75 64.35 61.7819.36 23.63 17.9001.07 01.15 00.9900.01 00.01 00.0000.57 00.58 00.56
ASA-abs10080.58 81.73 78.2235.09 44.37 32.1002.64 02.70 02.1500.00 00.00 00.0000.53 00.53 00.52
ASA-abs50092.02 92.76 90.5676.96 83.72 70.9406.21 06.48 05.6900.00 00.01 00.0000.45 00.45 00.45
ASA-abs100092.54 92.93 90.9082.41 85.43 74.5308.06 08.19 07.9700.01 00.02 00.0100.41 00.41 00.40
ASA-abs500086.03 87.50 84.8662.19 71.62 46.9829.23 29.63 24.5400.05 00.08 00.0500.29 00.30 00.29


(a) No DA


(b) DANN
Figure D.2: Kernel density estimates (in the discriminator output space) of $g^{\psi}{}{\sharp}p_Z^\theta$ , $g^{\psi}{}{\sharp}q_Z^\theta$ at the end of the training on USPS→MNIST task with $\alpha = 1.5$ . $n$ is the size of ASA history buffers.


(c) ASA $(n = 0)$


(d) ASA $(n = 1000)$

Table D.5: Results of No DA, DANN, and ASA-abs (with different history sizes) on 3-class USPS $\rightarrow$ MNIST adaptation with 2D feature extractor and label distribution shift corresponding to $\alpha = 1.5$ . We report average and minimum target class accuracy, as well as Wasserstein distance $\mathcal{D}W$ and support divergence $\mathcal{D}{\triangle}$ between source $p_Z^\theta$ and target $q_Z^\theta$ 2D embedding distributions. We report median (the main number), and 25 (subscript) and 75 (superscript) percentiles across 5 runs.

AlgorithmHistory sizeAccuracy (avg)Accuracy (min)Dw(pZ, qZ)D△(pZ, qZ)
No DA63.0 69.645.3 53.60.78 00.840.10 0.10
62.337.900.750.10 0.10
DANN75.6 83.754.8 55.10.07 0.080.02 0.02
72.449.60.060.02 0.02
ASA-abs073.9 84.161.8 72.40.23 0.470.03 0.03
73.454.60.220.03 0.03
ASA-abs10088.5 95.171.4 93.30.54 0.360.03 0.03
86.870.60.560.03 0.03
ASA-abs50094.5 94.789.0 90.30.59 0.640.03 0.03
88.783.10.550.03 0.03
ASA-abs100091.1 93.085.6 86.20.59 0.620.03 0.03
91.180.70.550.03 0.03
ASA-abs200094.0 94.788.6 89.40.62 0.660.03 0.03
91.280.20.580.03 0.03
ASA-abs500082.1 83.968.9 70.90.64 0.670.04 0.04
81.865.50.630.04 0.04