Abstract

This paper investigates domain generalization: how to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains. We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.

Domain Generalization

Standard setting: the training data and the test data come from the same distribution. Learn a classifier/regressor that generalizes well to the test data.

Domain adaptation: the training data and the test data may come from different distributions. The common assumption is that we observe the test data at training time, and the classifier/regressor trained on the training data is adapted to that specific set of test data.
- Covariate shift: the marginal P(X) changes, but the conditional P(Y|X) stays the same.
- Target shift / concept drift: the marginal P(Y) or the conditional P(Y|X) may also change.

Domain generalization: the training data come from several different distributions. Learn a classifier/regressor that generalizes well to unseen test data, which also come from a different distribution.

Application (medical diagnosis): aggregating the diagnoses of previous patients to diagnose new patients who have similar demographic and medical profiles.
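The dissimilarity between domains that DICA minimizes is measured through kernel mean embeddings. As a minimal illustration (the function names, RBF bandwidth, and synthetic data are our own choices, not from the paper), the following numpy sketch computes the squared RKHS distance between the mean embeddings of two samples; domains with shifted distributions sit farther apart than domains sharing a distribution:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF Gram matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Squared distance ||mu_X - mu_Y||^2 between the kernel mean
    embeddings of two samples (biased estimator)."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
domain_a = rng.normal(0.0, 1.0, size=(200, 2))   # reference domain
domain_b = rng.normal(1.5, 1.0, size=(200, 2))   # shifted domain
domain_c = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution as A

# The shifted domain is farther away in embedding space.
assert mmd2(domain_a, domain_b) > mmd2(domain_a, domain_c)
```

The same quantity, averaged over all pairs of training domains, is what the distributional variance of Definition 1 captures.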
Figure 1: A simplified schematic diagram of the domain generalization framework. A major difference between our framework and most previous work in domain adaptation is that we do not observe the test domains during training time.

Objective
Given the training sample S, our goal is to produce an estimate f : P_X × X → R that generalizes well to a test sample S^t = {x^t_k}_{k=1}^{n_t}. To actively reduce the dissimilarity between domains, we find a transformation B in the RKHS H that

1. minimizes the distance between the empirical distributions of the transformed samples B(S^i), and
2. preserves the functional relationship between X and Y, i.e., Y ⊥ X | B(X).

① Minimizing Distributional Variance

The distributional variance V_H(P) estimates the variance of the distribution P_X which generates P^1_X, P^2_X, ..., P^N_X.

Definition 1: Introduce a probability distribution P on H with P(µ_{P_i}) = 1/N, and center the Gram matrix G of the mean embeddings to obtain the covariance operator of P, denoted Σ := G − 1_N G − G 1_N + 1_N G 1_N. The distributional variance is

    V_H(P) := (1/N) tr(Σ) = (1/N) tr(G) − (1/N²) Σ_{i,j=1}^N G_ij.

The empirical distributional variance can be computed as tr(KQ), where K is the block Gram matrix of the pooled sample and Q is a coefficient matrix determined by the number of domains and their sample sizes.

② Preserving Functional Relationship

The central subspace C is the minimal subspace that captures the functional relationship between X and Y, i.e., Y ⊥ X | C^T X.

Theorem 1: If there exists a central subspace C = [c_1, ..., c_m] satisfying Y ⊥ X | C^T X, and for any a ∈ R^d, E[a^T X | C^T X] is linear in {c_i^T X}_{i=1}^m, then E[X|Y] ⊂ span{Σ_xx c_i}_{i=1}^m.

It follows that the bases C of the central subspace coincide with the m largest eigenvectors of V(E[X|Y]) premultiplied by Σ_xx^{-1}. Thus, each basis c is a solution of the generalized eigenvalue problem

    V(E[X|Y]) c = γ Σ_xx c.

Domain-Invariant Component Analysis

Combining ① and ②, DICA finds B = [β_1, β_2, ..., β_m] that solves

    max_B  [ (1/n) tr(B^T C B) ] / [ tr(B^T (KQK + K + λI) B) ],        (1)

which leads to the following algorithm:

DICA Algorithm
Parameters: λ, ε, and m ≪ n.
Input: sample S = {S_i = {(x^(i)_k, y^(i)_k)}_{k=1}^{n_i}}_{i=1}^N.
Output: projection B_{n×m} and kernel K̃_{n×n}.
1: Calculate the Gram matrices [K_ij]_{kl} = k(x^(i)_k, x^(j)_l) and [L_ij]_{kl} = l(y^(i)_k, y^(j)_l).
2: Supervised: C = L(L + nεI)^{-1} K². Unsupervised: C = K².
3: Solve the generalized eigenvalue problem (1/n) C B = (KQK + K + λI) B Γ for B.
4: Output B and K̃ ← K B B^T K.
5: Compute the test kernel K̃^t ← K^t B B^T K, where K^t_{n_t × n} is the joint kernel between the test and training data.

A Learning-Theoretic Bound

Theorem 2: Under reasonable technical assumptions, it holds with probability at least 1 − δ that the risk on unseen test domains is bounded by the empirical risk plus terms that grow with the distributional variance and with the complexity (size) of the transform used to reduce it.

The bound reveals a tradeoff between reducing the distributional variance and the complexity or size of the transform used to do so. The denominator of (1) is a sum of these terms, so DICA tightens the bound in Theorem 2. Preserving the functional relationship (i.e., the central subspace) by maximizing the numerator of (1) should reduce the empirical risk Ê ℓ(f(X̃_ij B), Y_ij), but a rigorous demonstration has yet to be found.

Relations to Existing Methods

The DICA and UDICA algorithms generalize several well-known dimension reduction techniques. In the supervised setting, if the dataset S contains samples drawn from a single distribution P_XY, then KQK = 0. Substituting α := KB gives the eigenvalue problem (1/n) L(L + nεI)^{-1} K α = K α Γ, which corresponds to covariance operator inverse regression (COIR) [KP11].

If there is only a single distribution, unsupervised DICA reduces to kernel PCA: KQK = 0 and finding B requires solving the eigensystem KB = BΓ, which recovers KPCA [SSM98].

If there are two domains, a source P_S and a target P_T, then UDICA is closely related, though not identical, to Transfer Component Analysis [Pan+11]. This follows from the observation that V_H({P_S, P_T}) = (1/4) ‖µ_{P_S} − µ_{P_T}‖².

Experimental Results
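As a concrete companion to the experiments, here is a minimal numpy/scipy sketch of the unsupervised variant (UDICA) of the algorithm above. The coefficient matrix Q is reconstructed from Definition 1 so that tr(KQ) equals the empirical distributional variance; the helper names, RBF kernel, and synthetic data are our own illustrative choices:

```python
import numpy as np
from scipy.linalg import eigh

def rbf_gram(X, gamma=1.0):
    """RBF Gram matrix of a pooled sample."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def coefficient_matrix(sizes):
    """Block matrix Q with tr(KQ) = empirical distributional variance
    (block values derived from Definition 1 for N domains)."""
    N, n = len(sizes), sum(sizes)
    Q = np.zeros((n, n))
    starts = np.cumsum([0] + list(sizes))
    for i in range(N):
        for j in range(N):
            blk = ((N - 1) / (N**2 * sizes[i]**2) if i == j
                   else -1.0 / (N**2 * sizes[i] * sizes[j]))
            Q[starts[i]:starts[i+1], starts[j]:starts[j+1]] = blk
    return Q

def udica(K, sizes, m=2, lam=1e-3):
    """Unsupervised DICA: solve (1/n) K^2 B = (KQK + K + lam*I) B Gamma
    and keep the m leading generalized eigenvectors."""
    n = K.shape[0]
    Q = coefficient_matrix(sizes)
    num = (K @ K) / n                        # C = K^2 in the unsupervised case
    den = K @ Q @ K + K + lam * np.eye(n)    # distributional variance + complexity
    vals, vecs = eigh(num, den)              # generalized symmetric eigenproblem
    B = vecs[:, np.argsort(vals)[::-1][:m]]  # m leading eigenvectors
    return B, K @ B @ B.T @ K                # projection B and kernel K~

rng = np.random.default_rng(0)
domains = [rng.normal(mu, 1.0, size=(50, 3)) for mu in (0.0, 1.0, 2.0)]
K = rbf_gram(np.vstack(domains))
B, K_tilde = udica(K, [50, 50, 50], m=2)
```

Supervised DICA differs only in the numerator, replacing K² with L(L + nεI)^{-1} K² as in step 2 of the algorithm.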
Figure 2: Projections of a synthetic dataset onto the first two eigenvectors obtained from KPCA, UDICA, COIR, and DICA. The colors of the data points correspond to the output values. The shaded boxes depict projections of the training data, whereas the unshaded boxes show projections of unseen test datasets.
Table 1: Average accuracies over 30 random subsamples of the GvHD datasets. The pooling SVM applies a standard kernel function to the pooled data from multiple domains, whereas the distributional SVM also accounts for the similarity between domains using a kernel on distributions. With sufficiently many samples, DICA outperforms the other methods in both the pooling and distributional settings.
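The distinction in Table 1 between the two baseline kernels can be sketched as follows. The pooling kernel ignores which domain a point came from; the distributional kernel multiplies it by a similarity between the domains' mean embeddings. This is a hypothetical numpy construction following the caption's description; the product form, bandwidths, and names are our own choices:

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """RBF Gram matrix between two samples."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def domain_kernel(domains, gamma=0.5):
    """Kernel on distributions: exp(-gamma * ||mu_i - mu_j||^2),
    computed from the kernel mean embeddings of each domain."""
    N = len(domains)
    KP = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            dist2 = (rbf_gram(domains[i], domains[i]).mean()
                     - 2 * rbf_gram(domains[i], domains[j]).mean()
                     + rbf_gram(domains[j], domains[j]).mean())
            KP[i, j] = np.exp(-gamma * dist2)
    return KP

rng = np.random.default_rng(1)
d1 = rng.normal(0.0, 1.0, size=(60, 2))
d2 = rng.normal(0.5, 1.0, size=(60, 2))
X = np.vstack([d1, d2])
dom = np.array([0] * 60 + [1] * 60)      # domain index of each point

K_pool = rbf_gram(X, X)                  # pooling SVM: standard kernel
KP = domain_kernel([d1, d2])
K_dist = KP[dom][:, dom] * K_pool        # distributional SVM: product kernel
```

Either Gram matrix can then be passed to any SVM that accepts precomputed kernels (e.g. `SVC(kernel="precomputed")` in scikit-learn).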
Table 2: The average leave-one-out accuracies over 30 subjects on the GvHD data. The distributional SVM outperforms the pooling SVM. DICA improves classifier accuracy.
Conclusions

Domain-Invariant Component Analysis (DICA) is a new algorithm for domain generalization based on learning an invariant transformation of the data. The algorithm is theoretically justified and performs well in practice.

References

[KP11] M. Kim and V. Pavlovic. "Central subspace dimensionality reduction using covariance operators". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 33.4 (2011), pp. 657–670.
[Pan+11] S. J. Pan et al. "Domain adaptation via transfer component analysis". In: IEEE Transactions on Neural Networks 22.2 (2011), pp. 199–210.
[SSM98] B. Schölkopf, A. Smola, and K.-R. Müller. "Nonlinear component analysis as a kernel eigenvalue problem". In: Neural Computation 10.5 (1998), pp. 1299–1319.