| { |
| "File Number": "1079", |
| "Title": "Diverse Weight Averaging for Out-of-Distribution Generalization", |
| "Limitation": "Critically, DiWA has no additional inference cost — removing a key limitation of standard ensembling. Our work may encourage the community to further create diverse learning procedures and objectives — whose models may be averaged in weights.", |
| "Reviewer Comment": "Reviewer_3: Strengths:\nThe paper is very well written -- exposition is clear, well-organized, high quality (free of typos), and each section is well-motivated. For example, \"Limitations of the flatness-based analysis\" discusses why the current explanation for weight averaging's performance is insufficient and why a different type of analysis (like their 4 term decomposition) is needed.\nThe proposed bias-variance-covariance-locality decomposition (Proposition 1) is novel and aids our understanding of weight averaging. Furthermore, it directly motivates their proposed DiWA.\nDiWA is evaluated on DomainBed, which is the standard public benchmark for out-of-domain generalization right now. Experimental setup is sound and the comparisons with baselines fair.\nWeaknesses:\nDespite DiWA outperforming weight averaging checkpoints of a single run (WA), it is unlikely to be adopted for some use cases. For example, for large language models, there is no extra cost for doing WA but training the model multiple times as needed by DiWA is not practical or feasible. The extra compute is probably better spent scaling the model size or training for longer.\nQuestions:\nI would be curious to know how DiWA and WA would fare when the Sharpness-Aware Minimization (SAM) is used in each training run. Perhaps, the benefits of DiWA or WA would be diminish in the presence of SAM?\nLimitations:\nYes.\n\"extreme hyperparameter ranges lead to weights whose average may perform poorly\"\n\"diversity is key as long as the weights remain averageable\" -- linear connectivity in weight space is needed.\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 4 excellent\nContribution: 4 excellent\n\nReviewer_4: Strengths:\nThe research question of when WA succeeds and how to improve it, is interesting and relatively less explored.\nThe paper provides theoretical insights for WA and the proposed method.\nWeaknesses:\nThe writings of the paper may have some overclaim issues. In the abstract, the authors claim that \"for out-of-distribution generalization in computer vision, the best current approach averages the weights along a training run\" (line 2-3). But from the introduction, it seems that such a claim is only based on the observation from the DomainBed benchmark (line 22-24). I'm wondering does the observation on one benchmark could represent the whole situation in computer vision?\nThe contribution of this paper is quite minimal. Especially regarding the experimental results, it seems there is no large performance improvement of the proposed method when compared with MA [29] in Table 1; under the random initialization, the performance of the proposed method looks similar to the performance of SWAD [14].\nThe proposed method needs to ensemble weights from different runs (e.g., >=20 runs in the experiments), which may cause unnecessary additional computational cost.\nQuestions:\nHow to decide the total number of runs? Is the number larger usually mean the final performance is better?\nIs there any intuition of why using MMD in Proposition 3? How about other divergences?\nLimitations:\nThe authors have addressed the limitations of their proposed method.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 2 fair\n\nReviewer_5: Overall, I think this is a solid paper with a conceptually simple method and clean writing.\nOver ENS, WA has the benefit of requiring only one feedforward at test time. 
Both Lemma 1 and Figure 1 show that WA is a close approximation to ENS in the training setting of DiWA. In fact, WA seems to perform slightly better than ENS.\nDiWA is simple and effective, and the experiments show that it can be combined with previous advances for more benefits: LP initialization, leveraging diversity from ERM/Mixup/Coral algorithms, etc.\nQuestions:\nWhat is the performance of ENS using the models obtained by DiWA?\nWhat is the rough training cost of obtaining 60 models for the final results? Were you unable to obtain error bounds in Table 1 because of the computation costs?\nMinor comments:\nLine 89: “…why WA outperforms in OOD Sharpness-Aware Minimizer…” I’m unsure whether the grammar here is technically wrong, but I took a while to parse this sentence.\nLimitations:\nTo my knowledge, this work has no potential negative societal impact other than what was already present in the existing literature.\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_6: Strengths:\nThe paper is well written. The sections are subdivided cleanly and they preview the arguments established in each subsection, culminating in a larger point. The authors are clearly well versed in the literature of ensembling and transfer learning.\nIn particular, section 2.4 and its subsections develop the argument for why weight averaged ensembles are robust to covariate shift (or diversity shift) quite convincingly.\nAlso, the analysis in section 2.2 of why flatness-based minimizers do not have any effect on OOD error was illuminating.\nWeaknesses:\nThere are a few small issues that I'd like to see addressed:\nAt the outset in the abstract, the authors state that the best current approaches for out-of-distribution generalization are derived from averaging the weights along a training run. This is not quite so, as there are multiple other approaches (SWAG, Deep Ensembles, LNNS, Loss surface simplexes for mode connecting volumes and fast ensembling). These all introduce greater member diversity through different mechanisms, so it would be more correct to broaden the related work section to include them, and to acknowledge them in the abstract.\nOn lines 185-186 the authors state that they follow the objective of decorrelating the learning procedures of ensemble members, citing the DICE paper. It’s an exaggeration to say that the DICE objective is followed in section 3. The authors of DICE perform considerable work to implement a variational approximation to the conditional entropy bottleneck between two learners. The authors of this work do not go to such lengths to decorrelate the features extracted from the inputs by ensemble members.\nComments:\nI think the paper would benefit from a perspective that comments upon the difficulty of transfer learning based on the presence of neural collapse, or learning neural network subspaces, especially as the authors note that the size of the space spanned by the furthest member from the barycentre is important.\nIt's hard to believe that this paper's claims of increased diversity by new initializations are an actual advance, as this is present in all the modern ensembling literature (cf. papers descended from the DeepEnsembles literature). In Table 1, there does not seem to be a comparison against simple deep ensembles ($f_{ENS}$ in the notation of the paper).\nA question I have with the setup in 2.1 is why we would not want to compare against gains made by fine-tuning in the target distribution.
In most practical scenarios, there is limited target data available to train on. Certainly in almost all modern NLP tasks, this is a baseline to consider. Including this baseline, where a model (or ensemble of models) that is trained on S is allowed to be fine-tuned in increments (of data or learning iterations) on data drawn from T before being evaluated, would qualify the degree to which DiWA is a feasible solution to practical transfer learning.\nOverall, despite some small quibbles, I find that there is a lot to offer in this paper and the community would benefit from seeing it in a first class venue.\nQuestions:\nOne thing I found inconsistent, was the combination of the results of Lemma 1 and the evidence in Figure 1. Lemma 1 relates the loss of a weight-averaged model and a prediction-averaged model, showing that the loss of the weight averaged model is the same as the prediction averaged model, plus a (non-negative) term based on the max of the 2-norm between the weight average parameters and an individual member. Yet if that is so, I would expect prediction averaged models to outperform weight-averaged ones. Figure 1 establishes that in fact the opposite is true. Is it the case that I am misinterpreting Lemma 1?\nAn answer might lie at the end of section 4 on lines 256-266, as well as part of Appendix D: \"These experiments confirm that diversity is key as long as the weights remain averageable.\". So, in effect, weight averaging performs well when the members being averaged lie within a low-error subspace that is compact. I would like to see this concept of \"averagable\" explored further. There seems to be a practical friction between training diverse members of the ensemble (say by varying the initializations) and ensuring that the members do not stray too far apart so as to no longer stay averagable. A closer investigation of when this happens (say from Figure 5) in terms of violations of the linear mode connectivity would be illuminating.\nLimitations:\nYes, they have.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good", |
| "2.2 Weight averaging for OOD and limitations of current analysis": "Weight averaging. We study the benefits of combining M individual member weights {✓m}Mm=1 , {✓(l(m)\nS )}M m=1 obtained from M (potentially correlated) identically distributed (i.d.) learning procedures LM\nS , {l(m) S }M m=1. Under conditions discussed in Section 3.2, these M weights can be\naveraged despite nonlinearities in the architecture f . Weight averaging (WA) [13], defined as:\nfWA , f(·, ✓WA), where ✓WA , ✓WA(LMS ) , 1/M XM\nm=1 ✓m, (2)\nis the state of the art [14, 29] on DomainBed [12] when the weights {✓m}Mm=1 are sampled along a single training trajectory (a description we refine in Remark 1 from Appendix C.2).\nLimitations of the flatness-based analysis. To explain this success, Cha et al. [14] argue that flat minima generalize better; indeed, WA flattens the loss landscape. Yet, as shown in Appendix B, this analysis does not fully explain WA’s spectacular results on DomainBed. First, flatness does not act on distribution shifts thus the OOD error is uncontrolled with their upper bound (see Appendix B.1). Second, this analysis does not clarify why WA outperforms Sharpness-Aware Minimizer (SAM) [30] for OOD generalization, even though SAM directly optimizes flatness (see Appendix B.2). Finally, it does not justify why combining WA and SAM succeeds in IID [31] yet fails in OOD (see Appendix B.3). These observations motivate a new analysis of WA; we propose one below that better explains these results.", |
| "abstractText": "Standard neural networks struggle to generalize under distribution shifts in computer vision. Fortunately, combining multiple networks can consistently improve out-of-distribution generalization. In particular, weight averaging (WA) strategies were shown to perform best on the competitive DomainBed benchmark; they directly average the weights of multiple networks despite their nonlinearities. In this paper, we propose Diverse Weight Averaging (DiWA), a new WA strategy whose main motivation is to increase the functional diversity across averaged models. To this end, DiWA averages weights obtained from several independent training runs: indeed, models obtained from different runs are more diverse than those collected along a single run thanks to differences in hyperparameters and training procedures. We motivate the need for diversity by a new bias-variance-covariancelocality decomposition of the expected error, exploiting similarities between WA and standard functional ensembling. Moreover, this decomposition highlights that WA succeeds when the variance term dominates, which we show occurs when the marginal distribution changes at test time. Experimentally, DiWA consistently improves the state of the art on DomainBed without inference overhead.", |
| "1 Introduction": "Learning robust models that generalize well is critical for many real-world applications [1, 2]. Yet, the classical Empirical Risk Minimization (ERM) lacks robustness to distribution shifts [3, 4, 5]. To improve out-of-distribution (OOD) generalization in classification, several recent works proposed to train models simultaneously on multiple related but different domains [6]. Though theoretically appealing, domain-invariant approaches [7] either underperform [8, 9] or only slightly improve [10, 11] ERM on the reference DomainBed benchmark [12]. The state-of-the-art strategy on DomainBed is currently to average the weights obtained along a training trajectory [13]. [14] argues that this weight averaging (WA) succeeds in OOD because it finds solutions with flatter loss landscapes.\nIn this paper, we show the limitations of this flatness-based analysis and provide a new explanation for the success of WA in OOD. It is based on WA’s similarity with ensembling [15], a well-known strategy to improve robustness [16, 17], that averages the predictions from various models. Based on [18], we present a bias-variance-covariance-locality decomposition of WA’s expected error. It contains four terms: first the bias that we show increases under shift in label posterior distributions (i.e., correlation shift [19]); second, the variance that we show increases under shift in input marginal distributions (i.e., diversity shift [19]); third, the covariance that decreases when models are diverse; finally, a locality condition on the weights of averaged models.\nBased on this analysis, we aim at obtaining diverse models whose weights are averageable with our Diverse Weight Averaging (DiWA) approach. In practice, DiWA averages in weights the models\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nobtained from independent training runs that share the same initialization. The motivation is that those models are more diverse than those obtained along a single run [20, 21]. Yet, averaging the weights of independently trained networks with batch normalization [22] and ReLU layers [23] may be counter-intuitive. Such averaging is efficient especially when models can be connected linearly in the weight space via a low loss path. Interestingly, this linear mode connectivity property [24] was empirically validated when the runs start from a shared pretrained initialization [25]. This insight is at the heart of DiWA but also of other recent works [26, 27, 28], as discussed in Section 6.\nIn summary, our main contributions are the following:\n• We propose a new theoretical analysis of WA for OOD based on a bias-variance-covariancelocality decomposition of its expected error (Section 2). By relating correlation shift to its bias and diversity shift to its variance, we show that WA succeeds under diversity shift.\n• We empirically tackle the covariance term by increasing the diversity across models averaged in weights. In our DiWA approach, we decorrelate their training procedures: in practice, these models are obtained from independent runs (Section 3). We then empirically validate that diversity improves OOD performance (Section 4) and show that DiWA is state of the art on all real-world datasets from the DomainBed benchmark [12] (Section 5).", |
| "2 Theoretical insights": "Under the setting described in Section 2.1, we introduce WA in Section 2.2 and decompose its expected OOD error in Section 2.3. Then, we separately consider the four terms of this bias-variancecovariance-locality decomposition in Section 2.4. This theoretical analysis will allow us to better understand when WA succeeds, and most importantly, how to improve it empirically in Section 3.", |
| "2.1 Notations and problem definition": "Notations. We denote X the input space of images, Y the label space and ` : Y2 ! R+ a loss function. S is the training (source) domain with distribution pS , and T is the test (target) domain with distribution pT . For simplicity, we will indistinctly use the notations pS and pT to refer to the joint, posterior and marginal distributions of (X,Y ). We note fS , fT : X ! Y the source and target labeling functions. We assume that there is no noise in the data: then fS is defined on XS , {x 2 X/pS(x) > 0} by 8(x, y) ⇠ pS , fS(x) = y and similarly fT is defined on XT , {x 2 X/pT (x) > 0} by 8(x, y) ⇠ pT , fT (x) = y.\nProblem. We consider a neural network (NN) f(·, ✓) : X ! Y made of a fixed architecture f with weights ✓. We seek ✓ minimizing the target generalization error:\nET (✓) = E(x,y)⇠pT [`(f(x, ✓), y)]. (1) f(·, ✓) should approximate fT on XT . However, this is complex in the OOD setup because we only have data from domain S in training, related yet different from T . The differences between S and T are due to distribution shifts (i.e., the fact that pS(X,Y ) 6= pT (X,Y )) which are decomposed per [19] into diversity shift (a.k.a. covariate shift), when marginal distributions differ (i.e., pS(X) 6= pT (X)), and correlation shift (a.k.a. concept shift), when posterior distributions differ (i.e., pS(Y |X) 6= pT (Y |X) and fS 6= fT ). The weights are typically learned on a training dataset dS from S (composed of nS i.i.d. samples from pS(X,Y )) with a configuration c, which contains all other sources of randomness in learning (e.g., initialization, hyperparameters, training stochasticity, epochs, etc.). We call lS = {dS , c} a learning procedure on domain S, and explicitly write ✓(lS) to refer to the weights obtained after stochastic minimization of 1/nS P (x,y)2dS `(f(x, ✓), y) w.r.t. ✓ under lS .", |
| "2.3 Bias-variance-covariance-locality decomposition": "We now introduce our bias-variance-covariance-locality decomposition which extends the biasvariance decomposition [32] to WA. In the rest of this theoretical section, ` is the Mean Squared Error for simplicity: yet, our results may be extended to other losses as in [33]. In this case, the expected error of a model with weights ✓(lS) w.r.t. the learning procedure lS was decomposed in [32] into:\nElSET (✓(lS)) = E(x,y)⇠pT [bias 2(x, y) + var(x)], (BV)\nwhere bias(x, y), var(x) are the bias and variance of the considered model w.r.t. a sample (x, y), defined later in Equation (BVCL). To decompose WA’s error, we leverage the similarity (already highlighted in [13]) between WA and functional ensembling (ENS) [15, 34], a more traditional way to combine a collection of weights. More precisely, ENS averages the predictions, fENS , fENS(·, {✓m}Mm=1) , 1/M P M\nm=1 f(·, ✓m). Lemma 1 establishes that fWA is a first-order approximation of fENS when {✓m}Mm=1 are close in the weight space. Lemma 1 (WA and ENS. Proof in Appendix C.1. Adapted from [13, 28].). Given {✓m}Mm=1 with learning procedures LM\nS , {l(m) S }M m=1. Denoting LMS = max M m=1k✓m ✓WAk2, 8(x, y) 2 X ⇥Y:\nfWA(x) = fENS(x) +O( 2 L M S ) and `(fWA(x), y) = `(fENS(x), y) +O( 2LMS ).\nThis similarity is useful since Equation (BV) was extended into a bias-variance-covariance decomposition for ENS in [18, 35]. We can then derive the following decomposition of WA’s expected test error. To take into account the M averaged weights, the expectation is over the joint distribution describing the M identically distributed (i.d.) learning procedures LM\nS , {l(m) S }M m=1.\nProposition 1 (Bias-variance-covariance-locality decomposition of the expected generalization error of WA in OOD. Proof in Appendix C.2.). Denoting f̄S(x) = ElS [f(x, ✓(lS))], under identically distributed learning procedures LM\nS , {l(m) S }M m=1, the expected generalization error on domain T\nof ✓WA(LMS ) , 1M P M m=1 ✓m over the joint distribution of L M S is:\nELMS ET (✓WA(L M S )) = E(x,y)⇠pT\nh bias2(x, y) + 1\nM var(x) + M 1 M\ncov(x) i +O(̄2),\nwhere bias(x, y) = y f̄S(x),\nand var(x) = ElS h f(x, ✓(lS)) f̄S(x) 2i , and cov(x) = ElS ,l0S ⇥ f(x, ✓(lS)) f̄S(x) f(x, ✓(l0 S ))) f̄S(x) ⇤ ,\nand ̄2 = ELMS 2 L M S with LMS = M max m=1 k✓m ✓WAk2.\n(BVCL)\ncov is the prediction covariance between two member models whose weights are averaged. The locality term ̄2 is the expected squared maximum distance between weights and their average.\nEquation (BVCL) decomposes the OOD error of WA into four terms. The bias is the same as that of each of its i.d. members. WA’s variance is split into the variance of each of its i.d. members divided by M and a covariance term. The last locality term constrains the weights to ensure the validity of our approximation. In conclusion, combining M models divides the variance by M but introduces the covariance and locality terms which should be controlled along bias to guarantee low OOD error.", |
| "2.4 Analysis of the bias-variance-covariance-locality decomposition": "We now analyze the four terms in Equation (BVCL). We show that bias dominates under correlation shift (Section 2.4.1) and variance dominates under diversity shift (Section 2.4.2). Then, we discuss a trade-off between covariance, reduced with diverse models (Section 2.4.3), and the locality term, reduced when weights are similar (Section 2.4.4). This analysis shows that WA is effective against diversity shift when M is large and when its members are diverse but close in the weight space.", |
| "2.4.1 Bias and correlation shift (and support mismatch)": "We relate OOD bias to correlation shift [19] under Assumption 1, where f̄S(x) , ElS [f(x, ✓(lS))]. As discussed in Appendix C.3.2, Assumption 1 is reasonable for a large NN trained on a large dataset representative of the source domain S. It is relaxed in Proposition 4 from Appendix C.3. Assumption 1 (Small IID bias). 9✏ > 0 small s.t. 8x 2 XS , |fS(x) f̄S(x)| ✏. Proposition 2 (OOD bias and correlation shift. Proof in Appendix C.3). With a bounded difference between the labeling functions fT fS on XT \\ XS , under Assumption 1, the bias on domain T is:\nE(x,y)⇠pT [bias 2(x, y)] = Correlation shift + Support mismatch +O(✏),\nwhere Correlation shift = Z\nXT\\XS (fT (x) fS(x))2pT (x)dx,\nand Support mismatch = Z\nXT \\XS\nfT (x) f̄S(x) 2 pT (x)dx.\n(3)\nWe analyze the first term by noting that fT (x) , EpT [Y |X = x] and fS(x) , EpS [Y |X = x], 8x 2 XT \\ XS . This expression confirms that our correlation shift term measures shifts in posterior distributions between source and target, as in [19]. It increases in presence of spurious correlations: e.g., on ColoredMNIST [8] where the color/label correlation is reversed at test time. The second term is caused by support mismatch between source and target. It was analyzed in [36] and shown irreducible in their “No free lunch for learning representations for DG”. Yet, this term can be tackled if we transpose the analysis in the feature space rather than the input space. This motivates encoding the source and target domains into a shared latent space, e.g., by pretraining the encoder on a task with minimal domain-specific information as in [36].\nThis analysis explains why WA fails under correlation shift, as shown on ColoredMNIST in Appendix H. Indeed, combining different models does not reduce the bias. Section 2.4.2 explains that WA is however efficient against diversity shift.", |
| "2.4.2 Variance and diversity shift": "Variance is known to be large in OOD [5] and to cause a phenomenon named underspecification, when models behave differently in OOD despite similar test IID accuracy. We now relate OOD variance to diversity shift [19] in a simplified setting. We fix the source dataset dS (with input support XdS ), the target dataset dT (with input support XdT ) and the network’s initialization. We get a closed-form expression for the variance of f over all other sources of randomness under Assumptions 2 and 3. Assumption 2 (Kernel regime). f is in the kernel regime [37, 38].\nThis states that f behaves as a Gaussian process (GP); it is reasonable if f is a wide network [37, 39]. The corresponding kernel K is the neural tangent kernel (NTK) [37] depending only on the initialization. GPs are useful because their variances have a closed-form expression (Appendix C.4.1). To simplify the expression of variance, we now make Assumption 3. Assumption 3 (Constant norm and low intra-sample similarity on dS). 9( S , ✏) with 0 ✏⌧ S such that 8xS 2 XdS ,K(xS , xS) = S and 8x0S 6= xS 2 XdS , |K(xS , x0S)| ✏.\nThis states that training samples have the same norm (following standard practice [39, 40, 41, 42]) and weakly interact [43, 44]. This assumption is further discussed and relaxed in Appendix C.4.2. We are now in a position to relate variance and diversity shift when ✏! 0.\nProposition 3 (OOD variance and diversity shift. Proof in Appendix C.4). Given f trained on source dataset dS (of size nS) with NTK K, under Assumptions 2 and 3, the variance on dataset dT is:\nExT2XdT [var(xT )] = nS 2 S MMD2(XdS , XdT ) + T nS 2 S T +O(✏), (4)\nwhere MMD is the empirical Maximum Mean Discrepancy in the RKHS of K2(x, y) = (K(x, y))2; T , ExT2XdT K(xT , xT ) and T , E(xT ,x0T )2X2dT ,xT 6=x0TK\n2(xT , x0T ) are the empirical mean similarities respectively measured between identical (w.r.t. K) and different (w.r.t. K2) samples averaged over XdT .\nThe MMD empirically estimates shifts in input marginals, i.e., between pS(X) and pT (X). Our expression of variance is thus similar to the diversity shift formula in [19]: MMD replaces the L1 divergence used in [19]. The other terms, T and T , both involve internal dependencies on the target dataset dT : they are constants w.r.t. XdT and do not depend on distribution shifts. At fixed dT and under our assumptions, Equation (4) shows that variance on dT decreases when XdS and XdT are closer (for the MMD distance defined by the kernel K2) and increases when they deviate. Intuitively, the further XdT is from XdS , the less the model’s predictions on XdT are constrained after fitting dS .\nThis analysis shows that WA reduces the impact of diversity shift as combining M models divides the variance per M . This is a strong property achieved without requiring data from the target domain.", |
| "2.4.3 Covariance and diversity": "The covariance term increases when the predictions of {f(·, ✓m)}Mm=1 are correlated. In the worst case where all predictions are identical, covariance equals variance and WA is no longer beneficial. On the other hand, the lower the covariance, the greater the gain of WA over its members; this is derived by comparing Equations (BV) and (BVCL), as detailed in Appendix C.5. It motivates tackling covariance by encouraging members to make different predictions, thus to be functionally diverse. Diversity is a widely analyzed concept in the ensemble literature [15], for which numerous measures have been introduced [45, 46, 47]. In Section 3, we aim at decorrelating the learning procedures to increase members’ diversity and reduce the covariance term.", |
| "2.4.4 Locality and linear mode connectivity": "To ensure that WA approximates ENS, the last locality term O(̄2) constrains the weights to be close. Yet, the covariance term analyzed in Section 2.4.3 is antagonistic, as it motivates functionally diverse models. Overall, to reduce WA’s error in OOD, we thus seek a good trade-off between diversity and locality. In practice, we consider that the main goal of this locality term is to ensure that the weights are averageable despite the nonlinearities in the NN such that WA’s error does not explode. This is why in Section 3, we empirically relax this locality constraint and simply require that the weights are linearly connectable in the loss landscape, as in the linear mode connectivity [24]. We empirically verify later in Figure 1 that the approximation fWA ⇡ fENS remains valid even in this case.", |
| "3.1 Motivation: weight averaging from different runs for more diversity": "Limitations of previous WA approaches. Our analysis in Sections 2.4.1 and 2.4.2 showed that the bias and the variance terms are mostly fixed by the distribution shifts at hand. In contrast, the covariance term can be reduced by enforcing diversity across models (Section 2.4.3) obtained from learning procedures {l(m)\nS }M m=1. Yet, previous methods [14, 29] only average weights obtained\nalong a single run. This corresponds to highly correlated procedures sharing the same initialization, hyperparameters, batch orders, data augmentations and noise, that only differ by the number of training steps. The models are thus mostly similar: this does not leverage the full potential of WA.\nDiWA. Our Diverse Weight Averaging approach seeks to reduce the OOD expected error in Equation (BVCL) by decreasing covariance across predictions: DiWA decorrelates the learning procedures {l(m)\nS }M m=1. Our weights are obtained from M 1 different runs, with diverse learning procedures:\nAlgorithm 1 DiWA Pseudo-code Require: ✓0 pretrained encoder and initialized classifier; {hm}Hm=1 hyperparameter configurations. Training: 8m = 1 to H , ✓m , FineTune(✓0, hm) Weight selection:\nUniform: M = {1, · · · , H}. Restricted: Rank {✓m}Hm=1 by decreasing ValAcc(✓m). M ;. for m = 1 to H do\nIf ValAcc(✓M[{m}) ValAcc(✓M) M M [ {m} Inference: with f(·, ✓M), where ✓M = P m2M ✓m/|M|.\nthese have different hyperparameters (learning rate, weight decay and dropout probability), batch orders, data augmentations (e.g., random crops, horizontal flipping, color jitter, grayscaling), stochastic noise and number of training steps. Thus, the corresponding models are more diverse on domain T per [21] and reduce the impact of variance when M is large. However, this may break the locality requirement analyzed in Section 2.4.4 if the weights are too distant. Empirically, we show that DiWA works under two conditions: shared initialization and mild hyperparameter ranges.", |
| "3.2 Approach: shared initialization, mild hyperparameter search and weight selection": "Shared initialization. The shared initialization condition follows [25]: when models are fine-tuned from a shared pretrained model, their weights can be connected along a linear path where error remains low [24]. Following standard practice on DomainBed [12], our encoder is pretrained on ImageNet [48]; this pretraining is key as it controls the bias (by defining the feature support mismatch, see Section 2.4.1) and variance (by defining the kernel K, see Appendix C.4.4). Regarding the classifier initialization, we test two methods. The first is the random initialization, which may distort the features [49]. The second is Linear Probing (LP) [49]: it first learns the classifier (while freezing the encoder) to serve as a shared initialization. Then, LP fine-tunes the encoder and the classifier together in the M subsequent runs; the locality term is smaller as weights remain closer (see [49]).\nMild hyperparameter search. As shown in Figure 5, extreme hyperparameter ranges lead to weights whose average may perform poorly. Indeed, weights obtained from extremely different hyperparameters may not be linearly connectable; they may belong to different regions of the loss landscape. In our experiments, we thus use the mild search space defined in Table 7, first introduced in SWAD [14]. These hyperparameter ranges induce diverse models that are averageable in weights.\nWeight selection. The last step of our approach (summarized in Algorithm 1) is to choose which weights to average among those available. We explore two simple weight selection protocols, as in [28]. The first uniform equally averages all weights; it is practical but may underperform when some runs are detrimental. The second restricted (greedy in [28]) solves this drawback by restricting the number of selected weights: weights are ranked in decreasing order of validation accuracy and sequentially added only if they improve DiWA’s validation accuracy.\nIn the following sections, we experimentally validate our theory. First, Section 4 confirms our findings on the OfficeHome dataset [50] where diversity shift dominates [19] (see Appendix E.2 for a similar analysis on PACS [51]). Then, Section 5 shows that DiWA is state of the art on DomainBed [12].", |
| "4 Empirical validation of our theoretical insights": "We consider several collections of weights {✓m}Mm=1 (2 M < 10) trained on the “Clipart”, “Product” and “Photo” domains from OfficeHome [50] with a shared random initialization and mild hyperparameter ranges. These weights are first indifferently sampled from a single run (every 50 batches) or from different runs. They are evaluated on “Art”, the fourth domain from OfficeHome.\nWA vs. ENS. Figure 1 validates Lemma 1 and that fWA ⇡ fENS. More precisely, fWA slightly but consistently improves fENS: we discuss this in Appendix D. Moreover, a larger M improves the\nresults; in accordance with Equation (BVCL), this motivates averaging as many weights as possible. In contrast, large M is computationally impractical for ENS at test time, requiring M forwards.\nDiversity and accuracy. We validate in Figure 2 that fWA benefits from diversity. Here, we measure diversity with the ratio-error [46], i.e., the ratio Ndiff/Nsimul between the number of different errors Ndiff and of simultaneous errors Nsimul in test for a pair in {f(·, ✓m)}Mm=1. A higher average over the M\n2\npairs means that members are less likely to err on the same inputs. Specifically, the gain of\nAcc(✓WA) over the mean individual accuracy 1M P M\nm=1 Acc(✓m) increases with diversity. Moreover, this phenomenon intensifies for larger M : the linear regression’s slope (i.e., the accuracy gain per unit of diversity) increases with M . This is consistent with the (M 1)/M factor of cov(x) in Equation (BVCL), as further highlighted in Appendix E.1.2. Finally, in Appendix E.1.1, we show that the conclusion also holds with CKAC [47], another established diversity measure.\nIncreasing diversity thus accuracy via different runs. Now we investigate the difference between sampling the weights from a single run or from different runs. Figure 3 first shows that diversity increases when weights come from different runs. Second, in Figure 4, this is reflected on the accuracies in OOD. Here, we rank by validation accuracy the 60 weights obtained (1) from 60 different runs and (2) along 1 well-performing run. We then consider the WA of the top M weights as M increases from 1 to 60. Both have initially the same performance and improve with M ; yet, WA of weights from different runs gradually outperforms the single-run WA. Finally, Figure 5 shows that this holds only for mild hyperparameter ranges and with a shared initialization. Otherwise, when hyperparameter distributions are extreme (as defined in Table 7) or when classifiers are not similarly initialized, DiWA may perform worse than its members due to a violation of the locality condition. These experiments confirm that diversity is key as long as the weights remain averageable.", |
| "5 Experimental results on the DomainBed benchmark": "Datasets. We now present our evaluation on DomainBed [12]. By imposing the code, the training procedures and the ResNet50 [52] architecture, DomainBed is arguably the fairest benchmark for OOD generalization. It includes 5 multi-domain real-world datasets: PACS [51], VLCS [53], OfficeHome [50], TerraIncognita [54] and DomainNet [55]. [19] showed that diversity shift dominates in these datasets. Each domain is successively considered as the target T while other domains are merged into the source S. The validation dataset is sampled from S, i.e., we follow DomainBed’s training-domain model selection. The experimental setup is further described in Appendix G.1. Our code is available at https://github.com/alexrame/diwa.\nBaselines. ERM is the standard Empirical Risk Minimization. Coral [10] is the best approach based on domain invariance. SWAD (Stochastic Weight Averaging Densely) [14] and MA (Moving Average) [29] average weights along one training trajectory but differ in their weight selection strategy. SWAD [14] is the current state of the art (SoTA) thanks to it “overfit-aware” strategy, yet at the cost of three additional hyperparameters (a patient parameter, an overfitting patient parameter and a tolerance rate) tuned per dataset. In contrast, MA [29] is easy to implement as it simply combines all checkpoints uniformly starting from batch 100 until the end of training. Finally, we report the scores obtained in [29] for the costly Deep Ensembles (DENS) [15] (with different initializations): we discuss other ensembling strategies in Appendix D.\nOur runs. ERM and DiWA share the same training protocol in DomainBed: yet, instead of keeping only one run from the grid-search, DiWA leverages M runs. In practice, we sample 20 configurations from the hyperparameter distributions detailed in Table 7 and report the mean and standard deviation across 3 data splits. For each run, we select the weights of the epoch with the highest validation accuracy. ERM and MA select the model with highest validation accuracy across the 20 runs, following standard practice on DomainBed. Ensembling (ENS) averages the predictions of all M = 20 models (with shared initialization). DiWA-restricted selects 1 M 20 weights with Algorithm 1 while DiWA-uniform averages all M = 20 weights. DiWA† averages uniformly the M = 3⇥ 20 = 60 weights from all 3 data splits. DiWA† benefits from larger M (without additional inference cost) and from data diversity (see Appendix E.1.3). However, we cannot report standard deviations for DiWA† for computational reasons. Moreover, DiWA† cannot leverage the restricted weight selection, as the validation is not shared across all 60 weights that have different data splits.", |
| "5.1 Results on DomainBed": "We report our main results in Table 1, detailed per domain in Appendix G.2. With a randomly initialized classifier, DiWA†-uniform is the best on PACS, VLCS and OfficeHome: DiWA-uniform is the second best on PACS and OfficeHome. On TerraIncognita and DomainNet, DiWA is penalized by some bad runs, filtered in DiWA-restricted which improves results on these datasets. Classifier initialization with linear probing (LP) [49] improves all methods on OfficeHome, TerraIncognita and DomainNet. On these datasets, DiWA† increases MA by 1.3, 0.5 and 1.1 points respectively. After averaging, DiWA† with LP establishes a new SoTA of 68.0%, improving SWAD by 1.1 points.\nDiWA with different objectives. So far we used ERM that does not leverage the domain information. Table 2 shows that DiWA-uniform benefits from averaging weights trained with Interdomain Mixup [56] and Coral [10]: accuracy gradually improves as we add more objectives. Indeed, as highlighted in Appendix E.1.3, DiWA benefits from the increased diversity brought by the various objectives. This suggests a new kind of linear connectivity across models trained\nwith different objectives; the full analysis of this is left for future work.", |
| "5.2 Limitations of DiWA": "Despite this success, DiWA has some limitations. First, DiWA cannot benefit from additional diversity that would break the linear connectivity between weights — as discussed in Appendix D. Second, DiWA (like all WA approaches) can tackle diversity shift but not correlation shift: this property is explained for the first time in Section 2.4 and illustrated in Appendix H on ColoredMNIST.", |
| "6 Related work": "Generalization and ensemble. To generalize under distribution shifts, invariant approaches [8, 9, 11, 10, 57, 58] try to detect the causal mechanism rather than memorize correlations: yet, they do not outperform ERM on various benchmarks [12, 19, 59]. In contrast, ensembling of deep networks [15, 60, 61] consistently increases robustness [16] and was successfully applied to domain generalization [29, 62, 63, 64, 65, 66]. As highlighted in [18] (whose analysis underlies our Equation (BVCL)), ensembling works due to the diversity among its members. This diversity comes primarily from the randomness of the learning procedure [15] and can be increased with different hyperparameters [67], data [68, 69, 70], augmentations [71, 72] or with regularizations [73, 65, 66, 74, 75].\nWeight averaging. Recent works [13, 76, 77, 78] combine in weights (rather than in predictions) models collected along a single run. This was shown suboptimal in IID [17] but successful in OOD [14, 29]. Following the linear mode connectivity [24, 79] and the property that many independent models are connectable [80], a second group of works average weights with fewer constraints [26, 27, 28, 81, 82, 83]. To induce greater diversity, [84] used a high constant learning rate; [80] explicitly encouraged the weights to encompass more volume in the weight space; [83] minimized cosine similarity between weights; [85] used a tempered posterior. From a loss landscape perspective [20], these methods aimed at “explor[ing] the set of possible solutions instead of simply converging to a single point”, as stated in [84]. The recent “Model soups” introduced by Wortsman et al. [28] is a WA algorithm similar to Algorithm 1; yet, the theoretical analysis and the goals of these two works are different. Theoretically, we explain why WA succeeds under diversity shift: the bias/correlation shift, variance/diversity shift and diversity-based findings are novel and are confirmed empirically. Regarding the motivation, our work aims at combining more diverse weights: it may be analyzed as a general framework to average weights obtained in various ways. In contrast, [28] challenges the standard model selection after a grid search. Regarding the task, [28] and our work complement each other: while [28] demonstrate robustness on several ImageNet variants with distribution shift, we improve the SoTA on the multi-domain DomainBed benchmark against other established OOD methods after a thorough and fair comparison. Thus, DiWA and [28] are theoretically complementary with different motivations and applied successfully for different tasks.", |
| "7 Conclusion": "In this paper, we propose a new explanation for the success of WA in OOD by leveraging its ensembling nature. Our analysis is based on a new bias-variance-covariance-locality decomposition for WA, where we theoretically relate bias to correlation shift and variance to diversity shift. It also shows that diversity is key to improve generalization. This motivates our DiWA approach that averages in weights models trained independently. DiWA improves the state of the art on DomainBed, the reference benchmark for OOD generalization. Critically, DiWA has no additional inference cost — removing a key limitation of standard ensembling. Our work may encourage the community to further create diverse learning procedures and objectives — whose models may be averaged in weights.", |
| "Acknowledgements": "We would like to thank Jean-Yves Franceschi for his helpful comments and discussions on our paper. This work was granted access to the HPC resources of IDRIS under the allocation AD011011953 made by GENCI. We acknowledge the financial support by the French National Research Agency (ANR) in the chair VISA-DEEP (project number ANR-20-CHIA-0022-01) and the ANR projects DL4CLIM ANR-19-CHIA-0018-01, RAIMO ANR-20-CHIA-0021-01, OATMIL ANR-17-CE230012 and LEAUDS ANR-18-CE23-0020.", |
| "Reviewer Summary": "Reviewer_3: The paper studies why and when averaging the weights of a model improve out-of-domain generalization by way of a new a bias-variance-covariance-locality decomposition of the expected target domain error. They show how diversity is needed and thus propose Diverse Weight Averaging (DiWA) wherein weights of N independent trained models from shared initialization are averaged together (in an optionally greedy fashion). They show how DiWA outperforms weight averaging across N model checkpoints of a single training run and other baselines on the public DomainBed benchmark.\n\nReviewer_4: In this paper, the authors proposed a simple method to improve the out-of-domain generalization, i.e., by averaging weights from different training runs rather than one run. Some theoretical analysis is also given for explaining weight averaging (WA) and the proposed method. Experiments on the DomainBed benchmark show the performance of the proposed method.\n\nReviewer_5: This paper proposes Diverse Weight Averaging (DiWA), which averages weights obtained from different training runs starting from the same initial parameters and mildly different hyperparameters. The paper decomposes the expected OOD error into (bias, variance, covariance, locality) and claims that weight averaging is most effective when the covariance term dominates, verified empirically. They show improved OOD generalization performance on the DomainBed benchmark.\n\nReviewer_6: In this paper, the authors propose an approach to mitigating the generalization problem in computer vision, under the common scenarios of either a difference in the data generating marginals (covariate shift) or a difference in the class-conditional distributions (which they call concept shift or correlation shift). They connect previous work on the errors incurred by weight averaging and standard ensembling, and offer a new decomposition that better explains when weight averaging fails under correlation shift or covariate shift. Their proposed method of averaging diverse sets of weights leads to improved results on the DomainBed benchmark suite of datasets." |
| } |