Abstract

This paper investigates domain generalization: how can knowledge acquired from an arbitrary number of related domains be applied to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.

Domain Generalization

Standard setting: Assume that the training data and test data come from the same distribution, and learn a classifier/regressor that generalizes well to the test data.

Domain adaptation: The training data and test data may come from different distributions. The common assumption is that we observe the test data at training time, and adapt the classifier/regressor trained on the training data to the specific set of test data.
- Covariate shift: the marginal P(X) changes, but the conditional P(Y|X) stays the same.
- Target shift / concept drift: the marginal P(Y) or the conditional P(Y|X) may also change.

Domain generalization: The training data come from several different distributions. Learn a classifier/regressor that generalizes well to unseen test data, which also come from a different distribution.

Applications: medical diagnosis, e.g., aggregating the diagnoses of previous patients to support diagnosis for new patients with similar demographic and medical profiles.

Figure 1: A simplified schematic diagram of the domain generalization framework.
A major difference between our framework and most previous work in domain adaptation is that we do not observe the test domains during training time.

Objective

Given the training sample S, our goal is to produce an estimate f : P_X × X → R that generalizes well to test samples S^t = {x^t_k}_{k=1}^{n_t}. To actively reduce the dissimilarity between domains, we find a transformation B in the RKHS H that

1. minimizes the distance between the empirical distributions of the transformed samples B(S^i), and
2. preserves the functional relationship between X and Y, i.e., Y ⊥ X | B(X).

① Minimizing Distributional Variance

The distributional variance V_H(P) estimates the variance of the distribution P_X that generates P^1_X, P^2_X, ..., P^N_X.

Definition 1. Introduce a probability distribution P on H with P(µ_{P^i}) = 1/N, and center G to obtain the covariance operator of P, denoted Σ := G − 1_N G − G 1_N + 1_N G 1_N. The distributional variance is V_H(P) := (1/N) tr(Σ). The empirical distributional variance can be computed by

    V_H(P) = (1/N) tr(G) − (1/N²) Σ_{i,j=1}^N G_{ij}.

② Preserving the Functional Relationship

The central subspace C is the minimal subspace that captures the functional relationship between X and Y, i.e., Y ⊥ X | C⊤X.

Theorem 1. If there exists a central subspace C = [c_1, ..., c_m] satisfying Y ⊥ X | C⊤X, and for any a ∈ R^d, E[a⊤X | C⊤X] is linear in {c_i⊤X}_{i=1}^m, then E[X|Y] ⊂ span{Σ_xx c_i}_{i=1}^m.

It follows that the bases C of the central subspace coincide with the m largest eigenvectors of V(E[X|Y]) premultiplied by Σ_xx^{-1}. Thus, each basis c solves the generalized eigenvalue problem V(E[X|Y]) c = γ Σ_xx c.

Domain-Invariant Component Analysis

Combining ① and ②, DICA finds a projection B = [β_1, β_2, ..., β_m] that solves a single regularized optimization problem.
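As a sanity check on objective ①, the empirical distributional variance of Definition 1 can be computed directly from pairwise kernel evaluations. The sketch below is illustrative, not the authors' code: the helper names are hypothetical, a Gaussian RBF kernel is assumed, and G_ij = ⟨µ_{P^i}, µ_{P^j}⟩ is estimated by averaging kernel values between the samples of domains i and j.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def distributional_variance(domains, gamma=1.0):
    """Empirical V_H(P) = (1/N) tr(G) - (1/N^2) sum_ij G_ij, where
    G_ij = <mu_Pi, mu_Pj> is estimated by the mean kernel value
    between the samples of domains i and j (Definition 1)."""
    N = len(domains)
    G = np.array([[rbf(Si, Sj, gamma).mean() for Sj in domains]
                  for Si in domains])
    return np.trace(G) / N - G.sum() / N ** 2

rng = np.random.default_rng(0)
similar = [rng.normal(0.0, 1.0, (50, 2)) for _ in range(3)]       # same distribution
shifted = [rng.normal(m, 1.0, (50, 2)) for m in (0.0, 2.0, 4.0)]  # mean-shifted domains
# Mean-shifted domains should exhibit a larger distributional variance.
print(distributional_variance(similar) < distributional_variance(shifted))  # prints: True
```

Because G is a Gram matrix of (estimated) mean embeddings, the centered trace is nonnegative, so the quantity behaves like a variance.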
The combined problem leads to the following algorithms:

DICA Algorithm

Input: parameters λ, ε, and m ≪ n; sample S = {S^i = {(x^(i)_k, y^(i)_k)}_{k=1}^{n_i}}_{i=1}^N.
Output: projection B (n × m) and kernel K̃ (n × n).
1: Calculate the Gram matrices [K_{ij}]_{kl} = k(x^(i)_k, x^(j)_l) and [L_{ij}]_{kl} = l(y^(i)_k, y^(j)_l).
2: Supervised: C = L(L + nεI)^{-1} K². Unsupervised: C = K².
3: Solve (1/n) C B = (KQK + K + λI) B Γ for B.
4: Output B and K̃ ← K B B⊤ K.
5: The test kernel is K̃^t ← K^t B B⊤ K, where K^t (of size n_t × n) is the joint kernel between test and training data.

A Learning-Theoretic Bound

Theorem 2. Under reasonable technical assumptions, a generalization bound (1) holds with probability at least 1 − δ.

The bound reveals a tradeoff between reducing the distributional variance and the complexity or size of the transform used to do so. The denominator of (1) is a sum of these terms, so DICA tightens the bound in Theorem 2. Preserving the functional relationship (i.e., the central subspace) by maximizing the numerator in (1) should reduce the empirical risk Ê ℓ(f(X̃_{ij} B), Y_i), but a rigorous demonstration has yet to be found.

Relations to Existing Methods

The DICA and UDICA algorithms generalize many well-known dimension reduction techniques. In the supervised setting, if the dataset S contains samples drawn from a single distribution P_XY, then KQK = 0. Substituting α := KB gives the eigenvalue problem (1/n) L(L + nεI)^{-1} K α = K α Γ, which corresponds to covariance operator inverse regression (COIR) [KP11].

If there is only a single distribution, then unsupervised DICA reduces to KPCA: since KQK = 0, finding B requires solving the eigensystem KB = BΓ, which recovers KPCA [SSM98]. If there are two domains, a source P_S and a target P_T, then UDICA is closely related to, though not identical to, Transfer Component Analysis [Pan+11]. This follows from the observation that V_H({P_S, P_T}) = ‖µ_{P_S} − µ_{P_T}‖².

Experimental Results

Figure 2: Projections of a synthetic dataset onto the first two eigenvectors obtained from KPCA, UDICA, COIR, and DICA. The colors of the data points correspond to the output values. The shaded boxes depict projections of the training data, whereas the unshaded boxes show projections of unseen test datasets.
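A synthetic experiment in the spirit of Figure 2 can be run with a minimal unsupervised DICA (UDICA) sketch. The block below is not the authors' implementation: it assumes a particular block form for Q (diagonal blocks (N−1)/(N²n_i²), off-diagonal blocks −1/(N²n_i n_j), so that tr(B⊤KQKB) matches the empirical distributional variance), takes step 3 with C = K², and solves the generalized eigenproblem via M^{-1}(K²/n).

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gaussian RBF Gram matrix of the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def udica(K, sizes, lam=1.0, m=2):
    """Unsupervised DICA sketch: solve (1/n) K^2 B = (KQK + K + lam*I) B Gamma
    and keep the m leading eigenvectors. Q is the assumed block matrix whose
    quadratic form encodes the distributional variance of the projected domains."""
    n, N = K.shape[0], len(sizes)
    Q = np.zeros((n, n))
    edges = np.cumsum([0] + list(sizes))
    for i in range(N):
        for j in range(N):
            c = ((N - 1) / (N**2 * sizes[i]**2) if i == j
                 else -1.0 / (N**2 * sizes[i] * sizes[j]))
            Q[edges[i]:edges[i+1], edges[j]:edges[j+1]] = c
    A = K @ K / n                          # numerator: C = K^2
    M = K @ Q @ K + K + lam * np.eye(n)    # denominator: variance + complexity + reg.
    # Generalized problem A B = M B Gamma  ->  ordinary eigenproblem on M^{-1} A.
    vals, vecs = np.linalg.eig(np.linalg.solve(M, A))
    order = np.argsort(-vals.real)[:m]
    return vecs[:, order].real, vals[order].real

# Two mean-shifted training domains, loosely mirroring the synthetic setup.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(3.0, 1.0, (30, 2))])
K = rbf_gram(X, gamma=0.5)
B, eigvals = udica(K, [30, 30], lam=1.0, m=2)
features = K @ B  # domain-invariant features of the training points
```

Since both K²/n and the regularized denominator are positive semidefinite (the latter strictly positive definite), the generalized eigenvalues are real and nonnegative, and new points can be projected via the joint kernel as in step 5.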
Table 1: Average accuracies over 30 random subsamples of the GvHD datasets. The pooling SVM applies a standard kernel function to the pooled data from multiple domains, whereas the distributional SVM also accounts for similarity between domains via a kernel on distributions. With sufficiently many samples, DICA outperforms the other methods in both the pooling and distributional settings.

Table 2: Average leave-one-out accuracies over 30 subjects on the GvHD data. The distributional SVM outperforms the pooling SVM, and DICA improves classifier accuracy.

Conclusions

Domain-Invariant Component Analysis (DICA) is a new algorithm for domain generalization based on learning an invariant transformation of the data. The algorithm is theoretically justified and performs well in practice.

References

[KP11] M. Kim and V. Pavlovic. "Central subspace dimensionality reduction using covariance operators". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 33.4 (2011), pp. 657–670.
[Pan+11] S. J. Pan et al. "Domain adaptation via transfer component analysis". In: IEEE Transactions on Neural Networks 22.2 (2011), pp. 199–210.
[SSM98] B. Schölkopf, A. Smola, and K.-R. Müller. "Nonlinear component analysis as a kernel eigenvalue problem". In: Neural Computation 10.5 (1998), pp. 1299–1319.