# A Bayesian Model Selection Criterion for Selecting Pretraining Checkpoints
Michael Munn\*1 Susan Wei\*2
# Abstract
Recent advances in artificial intelligence have been fueled by the development of foundation models such as BERT, GPT, T5, and Vision Transformers. These models are first pretrained on vast and diverse datasets and then adapted to specific downstream tasks, often with significantly less data. However, the mechanisms behind the success of this ubiquitous pretrain-then-adapt paradigm remain underexplored, particularly the characteristics of pretraining checkpoints that enhance downstream adaptation. We introduce a Bayesian model selection criterion, called the downstream free energy, which quantifies a checkpoint's adaptability by measuring the concentration of nearby favorable parameters for the downstream task. We demonstrate that this Bayesian model selection criterion can be effectively implemented without access to the downstream data or prior knowledge of the downstream task. Furthermore, we provide empirical evidence that the criterion reliably correlates with improved fine-tuning performance, offering a principled approach to predicting model adaptability.
# 1. Introduction
The advent of foundation models has significantly reshaped the landscape of modern machine learning (Bommasani et al., 2021). Trained on expansive, diverse datasets using supervised or self-supervised learning methods, these models learn generalized representations that can then be successfully adapted (or finetuned) to a wide array of downstream tasks, often where there is significantly less data or limited computational resources (Bengio, 2012; Brown et al., 2020). This pretrain-then-adapt paradigm has emerged as a dominant and highly successful technique driving significant
*Equal contribution ¹Google Research, New York, USA ²Dept. of Econometrics and Business Statistics, Monash University, Melbourne, Australia. Correspondence to: Michael Munn <munn@google.com>.
Proceedings of the $42^{nd}$ International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).
progress across natural language processing and computer vision with applications including text classification (Qiu et al., 2020), text generation (Li et al., 2024), image classification (Liu et al., 2023b), object detection (Sanchez et al., 2020), medical imaging (Mormont et al., 2018; Chen et al., 2019; Ke et al., 2021), autonomous driving (Kim & Park, 2017) and robotics (Jaquier et al., 2023).
As a result, there is a growing body of research that aims to better understand the theoretical reasons behind the success of this pre-train-then-adapt paradigm (Galanti et al., 2022; Munn et al., 2024). One of the key open questions is to understand how to select pretraining checkpoints which are optimal for adaptation. A number of practical heuristics have emerged through experimental intuition and empirical analysis (Liu et al., 2023a), but a principled theoretical framework for effective checkpoint selection is still lacking.
To address this, we repurpose well-established concepts from Bayesian statistics and propose downstream free energy as a pretraining model selection criterion. Downstream free energy measures the negative log of the concentration of well-performing network weights near a pretraining checkpoint when evaluated on downstream data. In statistical lingo, this is nothing more than the (negative log) marginal likelihood where the integral is restricted to a local neighborhood around the pretraining checkpoint. Intuitively, lower downstream free energy indicates a higher concentration of parameters in parameter space for which the model is more adaptable and capable of generalizing well on downstream tasks. In short, checkpoints with lower downstream free energy are better suited for adaptation and thus should be preferred during pretraining.
Although the use of downstream free energy as a pretraining model selection criterion has strong theoretical grounding in Bayesian statistics, it comes with an unfortunate caveat: to compute it requires access to the downstream dataset which may not be available to the practitioner during pretraining. However, under certain distributional shift conditions between the pretraining and downstream data, it is possible to overcome this limitation. Namely, we introduce the pretraining free energy, which is computed solely on the pretraining data, and show that minimizing it serves as a reliable proxy for minimizing the downstream free energy (see Proposition 5.1). Together, these insights provide a
solid justification for using the pretraining free energy as a model selection criterion during pretraining. This strategy is particularly advantageous when pretraining is intended to be general purpose, as is the case with most foundation models.


Figure 1. We plot pretraining free energy versus two types of transfer accuracy (top and bottom) for checkpoints at the end of pretraining. As expected, checkpoints with lower pretraining free energy, across various pretraining hyperparameters such as learning rate, batch size, and momentum, show higher transfer accuracy. The size of the icons represents the magnitude of the hyperparameter value; e.g., a larger triangle means higher momentum. The reported values are averaged over five random seeds. See Section 6 for details.
To justify our theoretical results, we exploit certain pretraining mechanisms that are known to reduce the pretraining free energy, such as larger learning rates, smaller batch sizes and higher momentum (Lau et al., 2025). We then verify that these mechanisms, which lead to reduced pretraining free energy, in turn correlate with improved downstream adaptation performance. A preview of these results is presented in Figure 1. In summary, our contributions are:
- We introduce the downstream free energy as a novel model selection criterion for quantifying downstream adaptability (Section 4.1).
- We prove the downstream free energy can be controlled by the pretraining free energy (Proposition 5.1) and provide insight into how this free energy perspective informs practical pretraining heuristics (Section 5.1).
- We experimentally confirm (Section 6), using varied datasets and architectures, that lower pretraining free energy not only enhances downstream adaptability (Figure 2 and Figure 3) but also exhibits a stronger correlation with adaptability compared to other pretraining metrics (Table 1).
# 2. Relationship to Prior Work
Implicit bias in transfer learning. The term implicit bias refers to the tendency of optimization processes, such as stochastic gradient descent (SGD), to inherently guide the model's learning dynamics towards solutions with properties which are not explicitly prescribed by the loss function (Neyshabur et al., 2017; Soudry et al., 2018; Gunasekar et al., 2018). For example, the selection of training hyperparameters, such as the learning rate and batch size, can have a significant effect on the optimization efficiency as well as on the quality of the learned model (Keskar et al., 2017; Masters & Luschi, 2018; Goyal, 2017; He et al., 2019; Andriushchenko et al., 2023). As a result, there has been considerable effort to understand the mechanisms which govern these implicit biases during model training. However, the effect of implicit bias in transfer learning—particularly how it impacts successful downstream domain adaptation—is a growing but less explored area of research (Lippl & Lindsey, 2024; Kumar et al., 2022).
In transfer learning, the ability to identify and leverage pretraining biases to predict and improve downstream test error is highly valuable. Recent work of (Liu et al., 2023a; Galanti et al., 2022; Munn et al., 2024) can be viewed as establishing relationships of the form
downstream test error $\lesssim$ pretraining characteristic. (1)
Ideally, these pretraining characteristics are sensitive to factors which can be manipulated by practitioners, thus allowing for deliberate influence and intentional design during pretraining. Furthermore, any such pretraining characteristic should be accessible using only pretraining data, since knowledge of the downstream task or data is typically not available. It is worthwhile to note that (Liu et al., 2023a; Galanti et al., 2022; Munn et al., 2024) mainly consider the linear probe as their fine-tuning method while we consider full fine-tuning.
(Liu et al., 2023a) explore the role of implicit bias in language modeling and establish an empirical relationship between the pretraining flatness (measured by the trace of the Hessian of the pretraining loss) and the downstream test accuracy. Their experiments verify that lower pretraining flatness, which they show is effectively regularized by SGD, strongly correlates with better downstream performance. Although this work does not provide a formal bound as in (1), it offers valuable empirical evidence on how the implicit
flatness regularization of SGD acts to benefit transfer learning. This is particularly beneficial since techniques exist for explicitly minimizing loss landscape sharpness; e.g., (Foret et al., 2021; Wen et al., 2023).
(Galanti et al., 2022) examine the efficacy of transfer learning through the lens of neural collapse, a recently observed phenomenon which characterizes the geometry of last-layer features and weights for overparameterized classification networks (Papyan et al., 2020). They show through theory and experiments that the neural collapse exhibited during pretraining generalizes to new classes of the downstream task as well, thus enabling successful model adaptation. Drawing on the formalism described in (1), (Galanti et al., 2022) can be seen as deriving theoretical bounds of the form
$$
\text{downstream test error} \;\lesssim\; \text{downstream neural collapse} \;\lesssim\; \text{pretraining neural collapse}.
$$
However, despite supporting neural collapse as a beneficial pretraining characteristic, practical methods to explicitly regularize it are lacking.
(Munn et al., 2024) make progress in this direction by means of the geometric complexity, a model complexity measure introduced and analyzed in (Dherin et al., 2022). They prove that the geometric complexity of the model's learned feature representations upper bounds the model neural collapse. Furthermore, their experiments verify that techniques which implicitly reduce this geometric complexity during pretraining (such as large learning rates, small batch sizes and increased $L^2$ regularization) in turn put regularizing pressure on the pretraining neural collapse leading to improved transfer test accuracy.
Our key contribution is the identification of free energy as a novel and significant pretraining characteristic which exhibits direct theoretical and empirical connections governing successful downstream model adaptability. We prove in Section 5 that, similar to neural collapse, the pretraining free energy bounds from above the downstream free energy. In addition, we establish (see Appendix A) a theoretical link between downstream free energy and the downstream Bayesian prediction, providing theoretical guarantees on the downstream Bayes test error. Together, these theoretical results, viewed in the context of (1), imply
$$
\text{downstream Bayes test error} \;\lesssim\; \text{downstream free energy} \;\lesssim\; \text{pretraining free energy}.
$$
Furthermore, using mechanisms established in (Lau et al., 2025) which are known to implicitly regularize the pretraining free energy—such as large learning rates, small batch sizes, and increased momentum—we experimentally verify (see Section 6) that lower pretraining free energy does indeed lead to improved fine-tuning performance.
Bayesian model selection criterion. The idea of using free energy has its roots in Bayesian model selection. Given a
collection of models, $\mathcal{M}_1, \ldots, \mathcal{M}_k$ , the task of choosing an optimal model for some given data is known as model selection. There are different (and sometimes irreconcilable) model selection criteria; but, in general, all model selection criteria attempt to balance fit and complexity. A particularly appealing Bayesian model selection criterion is the free energy criterion which is widely used and accepted in both the statistical and machine learning literature (Hinton & van Camp, 1993; Kass & Raftery, 1995; MacKay, 2002; Robert et al., 2007). The free energy model selection criterion says we should pick the model with the lowest free energy. Since the free energy is the negative log of the marginal likelihood, also known as Bayesian model evidence, free energy minimization is equivalent to marginal likelihood maximization. To our knowledge, this work represents the first application of the free energy criterion in the domain of transfer learning.
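To make the criterion concrete, consider the following toy sketch (our illustration, not from this work), which uses the classical BIC approximation to the negative log marginal likelihood: polynomial regression models of increasing degree are compared on data generated from a linear truth, and the lowest "free energy" identifies the correct model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # true model is linear (degree 1)

def bic(degree):
    """BIC = n*log(MSE) + d*log(n): the classical asymptotic
    approximation (up to constants) to the negative log marginal likelihood."""
    X = np.vander(x, degree + 1)             # polynomial design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.mean((y - X @ coef) ** 2)
    d = degree + 1                           # number of parameters
    return n * np.log(mse) + d * np.log(n)

# Free energy criterion: pick the model with the lowest (approximate) free energy.
# The degree-5 model fits slightly better but pays a larger complexity penalty.
print({deg: round(bic(deg), 1) for deg in (1, 2, 5)})
```

The fit term rewards lower training error while the $d\log n$ term penalizes complexity, mirroring the fit-complexity balance discussed above.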
# 3. Problem Setup
Here, we shall mainly treat the supervised setting though the theory developed below applies equally to the unsupervised setting. During pretraining, for input $x$ and target $y$ , we employ a probabilistic model $p^0 (y|x,w)$ parameterized by $w\in W\subset \mathbb{R}^p$ . Throughout, we assume the pretraining model $p^0 (y|x,w)$ depends on $x$ through a neural network $f_w^{\mathrm{PT}}(x) = \sigma_{\mathrm{out}}(v^T\phi_\theta (x))$ where $w = (v,\theta)$ . Here $\phi_{\theta}$ denotes the feature extractor parameterized by $\theta$ and $v$ the weights of the linear head. The final activation is denoted $\sigma_{\mathrm{out}}$ ; e.g., softmax or sigmoid for classification tasks.
For fine-tuning, we attach a new linear head $u$ to the backbone $\phi_{\theta}$ resulting in a neural network $f_{w'}^{\mathrm{FT}}(x) = \sigma_{\mathrm{out}}(u^T\phi_\theta(x))$ where $w' = (u,\theta)$ with $u$ potentially having different dimension to $v$ . The fine-tuning probabilistic model is denoted $p^1(y|x,w')$ where the dependence on $x$ is through $f_{w'}^{\mathrm{FT}}$ .
Given a pretraining checkpoint $w^{*} = (v^{*},\theta^{*})$ , we initialize $f_{w'}^{\mathrm{FT}}$ at $(u_0,\theta^*)$ where $u_{0}$ is randomly initialized. All parameters of $w'$ are then fine-tuned via stochastic optimization. In this work, we employ limited fine-tuning where the linear head undergoes standard training, while the backbone remains mostly frozen, with updates governed by a separate, smaller learning rate. This approach is particularly useful in scenarios with limited downstream data, where the differential learning rates help to prevent overfitting or loss of general-purpose representations; cf. (Lee et al., 2022).
For theoretical convenience, we will assume that $u$ and $v$ share the same dimensionality. This way, we can use $p(y|x,w)$ to denote both the pretraining and fine-tuning models. Let the true (and unknown) pretraining $(i = 0)$ and fine-tuning $(i = 1)$ joint distributions be denoted
$$
r ^ {i} (x, y) := r ^ {i} (y | x) r ^ {i} (x), \quad i = 0, 1;
$$
and define the pretraining $(i = 0)$ and fine-tuning $(i = 1)$ test loss to be
$$
\mathrm {K} ^ {i} (w) := \mathbb {E} _ {r ^ {i} (x)} D _ {\mathrm {K L}} \left(r ^ {i} (y | x) \| p (y | x, w)\right).
$$
Let $\mathcal{D}^0$ and $\mathcal{D}^1$ be datasets drawn from the pretraining and downstream distributions (resp.) and
the corresponding pretraining $(i = 0)$ and fine-tuning $(i = 1)$ sample losses be
$$
\hat {\mathrm {K}} ^ {i} (w) := \frac {1}{| \mathcal {D} ^ {i} |} \sum_ {(x, y) \in \mathcal {D} ^ {i}} \left(\log r ^ {i} (y | x) - \log p (y | x, w)\right).
$$
Note that minimizing $\mathrm{K}^i (w)$ and $\hat{\mathrm{K}}^i (w)$ with respect to $w$ recovers the standard cross-entropy and squared losses frequently employed in deep learning. Indeed, if we drop the entropy term in $\mathrm{K}^i$ and $\hat{\mathrm{K}}^i$ , which does not depend on $w$ , we obtain the negative log likelihoods, for $i = 0,1$
$$
\mathrm {L} ^ {i} (w) := - \mathbb {E} _ {r ^ {i} (x, y)} \log p (y | x, w)
$$
$$
\hat {\mathrm {L}} ^ {i} (w) := - \frac {1}{| \mathcal {D} ^ {i} |} \sum_ {(x, y) \in \mathcal {D} ^ {i}} \log p (y | x, w).
$$
We overload the term test loss to mean either $\mathrm{K}^i$ or $\mathrm{L}^i$ and train loss to mean either $\hat{\mathrm{K}}^i$ or $\hat{\mathrm{L}}^i$ .
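For a softmax classification model, the sample loss $\hat{\mathrm{L}}^i$ above is just the familiar average cross-entropy. A minimal sketch (illustrative only; random logits stand in for the network outputs $f_w(x)$):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def empirical_nll(logits, labels):
    """Sample loss L-hat: average negative log-likelihood -1/|D| sum log p(y|x,w)."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 3))        # model outputs for 8 examples, 3 classes
labels = rng.integers(0, 3, size=8)
print(empirical_nll(logits, labels))
```

As a sanity check, uniform logits give $\hat{\mathrm{L}} = \log 3$ for a 3-class problem, the entropy of a uniform prediction.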
# 4. Pretraining and downstream free energy
In this section, we begin by introducing the downstream free energy as a measure of how suitable a checkpoint is for downstream adaptation. We then introduce the pretraining free energy as a proxy that can be measured solely using the pretraining data.
Let $U_0 = \{w_\alpha^* = (v_\alpha^*, \theta_\alpha^*)\}_\alpha$ denote the set of local minima of the pretraining test loss $\mathrm{K}^0(w)$ . In our theoretical development, we will frequently refer to the elements of $U_0$ as pretraining checkpoints. Note that the elements of $U_0$ , being local minima of the test loss, generally differ from the actual checkpoints obtained during pretraining, which are governed by the training loss $\hat{\mathrm{K}}^0(w)$ (or equivalently, $\hat{\mathrm{L}}^0(w)$ ). To bridge this gap between theory and practice, we take checkpoints to be local minima of the training loss lying close to elements of $U_0$ . This ensures that the theoretical objects we analyze (minimizers of the test loss) are meaningfully related to their empirical counterparts.
Given a single model - a parametric family $\mathcal{M} = \{p(y|x, w) : w \in W\}$ - with multiple optima (as neural networks are prone to exhibit), we can perform internal model
selection (Balasubramanian, 1997) using a local version of the free energy criterion to select among the local optima. This amounts to comparing the downstream free energies between elements of $U_{0}$ . We now define the downstream free energy associated to an element of $U_{0}$ .
# 4.1. Downstream free energy
With datasets $\mathcal{D}^0$ and $\mathcal{D}^1$ as above, let $n = |\mathcal{D}^0|$ and $m = |\mathcal{D}^1|$ . Informally, we might say that a pretraining checkpoint $w^{*} = (v^{*},\theta^{*})\in U_{0}$ is a good candidate for adaptation if there are many weights $\theta$ in the vicinity of $\theta^{*}$ with low fine-tuning test loss; i.e., low values of $\mathrm{K}^1 (w)$ . One way to make this mathematically precise is via the downstream free energy
$$
\bar {\mathrm {F}} ^ {1} \left(B _ {\gamma} \left(w ^ {*}\right)\right) := - \log \bar {\mathrm {Z}} ^ {1} \left(B _ {\gamma} \left(w ^ {*}\right)\right), \tag {1}
$$
which is the negative log of a local marginal likelihood
$$
\bar {\mathrm {Z}} ^ {1} \left(B _ {\gamma} \left(w ^ {*}\right)\right) := \int_ {B _ {\gamma} \left(w ^ {*}\right)} \exp \left\{- m \mathrm {K} ^ {1} (w) \right\} \varphi (w) d w. \tag {2}
$$
Here $\varphi(w)$ is a prior over the model parameters $w$ , and $B_{\gamma}(w^{*}) := \{w = (v^{*}, \theta) : ||\theta - \theta^{*}||_2^2 \leq 1 / \gamma\}$ is the $\gamma$ -neighborhood around $w^{*}$ with $v^{*}$ frozen. Note that large values of $\gamma$ force us to stay near $\theta^{*}$ and thus, ultimately, stay near the pretraining checkpoint $w^{*} = (v^{*}, \theta^{*})$ as well.
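For intuition, the local marginal likelihood in (2) can be estimated by plain Monte Carlo in a toy one-parameter model. The sketch below is purely illustrative (the quadratic downstream loss `K1`, the Gaussian prior, and all constants are our own choices): a checkpoint whose neighborhood contains the downstream optimum accumulates far more mass, hence lower free energy.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_free_energy(theta_star, K, m, gamma, prior_sd=1.0, n_mc=200_000):
    """Monte Carlo estimate of F-bar = -log Z-bar over the ball B_gamma(theta_star)."""
    r = 1.0 / np.sqrt(gamma)                         # ball radius
    theta = rng.uniform(theta_star - r, theta_star + r, n_mc)
    # Gaussian prior density phi(w), evaluated at the sampled parameters
    phi = np.exp(-theta**2 / (2 * prior_sd**2)) / np.sqrt(2 * np.pi * prior_sd**2)
    integrand = np.exp(-m * K(theta)) * phi
    Z = integrand.mean() * 2 * r                     # volume of the 1-D ball is 2r
    return -np.log(Z)

# Illustrative downstream test loss with minimum at theta = 0.3
K1 = lambda th: (th - 0.3) ** 2

# A checkpoint near the downstream optimum vs. one far from it
print(local_free_energy(0.25, K1, m=500, gamma=25.0))   # low free energy
print(local_free_energy(0.90, K1, m=500, gamma=25.0))   # high free energy
```

With $\gamma = 25$ the ball has radius $0.2$, so the first neighborhood contains the downstream minimizer while the second does not, and the gap in free energy is driven by the $m\mathrm{K}^1$ factor in the exponent.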
Taken together, equations (1) and (2) imply that a large concentration of weights $\theta$ near $\theta^{*}$ with low downstream test loss $\mathrm{K}^1 (w)$ results in a large $\bar{\mathbf{Z}}^{1}(B_{\gamma}(w^{*}))$ and, equivalently, a small $\bar{\mathrm{F}}^{1}(B_{\gamma}(w^{*}))$ . Thus, we propose the following downstream free energy strategy for improved fine-tuning:
Pretraining checkpoints with lower downstream free energy are more likely to adapt successfully to downstream tasks.
Formally, we seek to find parameters $w^{*} \in U_{0}$ which minimize the downstream free energy; i.e.,
$$
\arg \min _ {w ^ {*} \in U _ {0}} \bar {\mathrm {F}} ^ {1} \left(B _ {\gamma} \left(w ^ {*}\right)\right). \tag {3}
$$
Before addressing the implementation of this free energy strategy, let's first understand the competing forces behind this model selection criterion. Given $w^{*} \in U_{0}$ , following the techniques set out in (Watanabe, 2009), the asymptotic expansion of $\bar{\mathrm{F}}^{1}(B_{\gamma}(w^{*}))$ in the sample size $m$ is
$$
\begin{array}{l} \bar {\mathrm {F}} ^ {1} \left(B _ {\gamma} \left(w ^ {*}\right)\right) \tag {4} \\ = m \mathrm {K} ^ {1} \left(w ^ {* 1}\right) + \lambda^ {1} \left(w ^ {*}\right) \log m + O (\log \log m), \\ \end{array}
$$
where $w^{*1} \coloneqq \arg \min_{w \in B_{\gamma}(w^{*})} \mathrm{K}^{1}(w)$ . Further discussion, including the derivation of equation (4), can be found in Section 4 and Appendix B of (Lau et al., 2025).
Remark 4.1. From (4), note that the downstream free energy of a checkpoint $w^{*}$ is a weighted sum of two things: the fit, as measured by $\mathrm{K}^1 (w^{*1})$ , and the complexity, as measured by $\lambda^1 (w^*)$ . This complexity measure $\lambda^1 (w^*)$ was recently introduced as the local learning coefficient; see Lau et al. (2025). A lower local learning coefficient means lower model complexity. Note that a checkpoint with higher loss under the downstream distribution may still be preferred as long as its complexity is low enough to compensate. Furthermore, note that for pretraining checkpoints that are in the same level set of $\mathrm{K}^1$ , the checkpoint with the lowest model complexity, as measured by $\lambda^1$ , will have the lowest downstream free energy.
The free energy strategy in (3) which uses $\bar{\mathrm{F}}^1 (B_\gamma (w^*))$ to select among candidate checkpoints in $U_{0}$ is conceptually sound but presents two notable implementation challenges. First, $\bar{\mathrm{F}}^1 (B_\gamma (w^*))$ , besides involving some unknown terms such as $\mathrm{K}^1$ , is the negative log of an intractable integral. This is not insurmountable as many techniques such as MCMC or variational inference are available to deal with intractable integrals.
The second, and more significant, issue is that applying $\bar{\mathrm{F}}^1 (B_\gamma (w^*))$ to select among checkpoints $w^{*}\in U_{0}$ requires access to downstream data. This poses a problem because, in many practical scenarios, the downstream task may not be known or fully available during pretraining. To address this limitation, we introduce the pretraining free energy, an analog of the downstream free energy but which can be computed using only the pretraining data. In Section 5 we show how these two quantities are related.
Remark 4.2. Note that the free energy as defined in equations (1) and (2) is not scale invariant with respect to the parameters $w$ . Thus, for certain neural network architectures exhibiting strict scale invariance, such as those composed purely of ReLU activations, it is possible for a global parameter rescaling to leave model outputs and downstream accuracy unaffected, while potentially altering the free energy in some non-trivial way. However, our investigation here centers on commonly deployed neural networks, which typically incorporate elements like normalization layers or weight decay that break strict parameter scaling invariance.
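The scale invariance referenced in Remark 4.2 can be checked directly on a small two-layer ReLU network (a sketch with random, illustrative weights): rescaling the layers by $c$ and $1/c$ preserves every output, yet changes the parameter norm, and hence any quantity, like the free energy, that is not invariant to parameter rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))        # first-layer weights
W2 = rng.normal(size=(2, 4))        # second-layer weights

relu = lambda z: np.maximum(z, 0)
f = lambda x, A, B: B @ relu(A @ x)  # two-layer ReLU network

x = rng.normal(size=3)
c = 10.0

# ReLU is positively homogeneous: relu(c*z) = c*relu(z) for c > 0,
# so (W1, W2) -> (c*W1, W2/c) leaves the function unchanged...
print(np.allclose(f(x, W1, W2), f(x, c * W1, W2 / c)))

# ...while the parameter vector itself moves to a very different scale.
print(np.linalg.norm(W1), np.linalg.norm(c * W1))
```

Normalization layers or weight decay remove this degeneracy, which is why the free energy remains meaningful for the architectures considered here.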
# 4.2. Pretraining free energy
Similar to the downstream free energy defined in (1), we define the pretraining free energy for a pretraining checkpoint $w^{*} = (v^{*},\theta^{*})\in U_{0}$ as
$$
\mathrm {F} ^ {0} \left(B _ {\gamma} \left(w ^ {*}\right); \beta\right) := - \log \mathrm {Z} ^ {0} \left(B _ {\gamma} \left(w ^ {*}\right); \beta\right) \tag {5}
$$
where
$$
Z ^ {0} \left(B _ {\gamma} \left(w ^ {*}\right); \beta\right) := \int_ {B _ {\gamma} \left(w ^ {*}\right)} \exp \{- n \beta \hat {K} ^ {0} (w) \} \varphi (w) d w \tag {6}
$$
and $\beta > 0$ is an inverse temperature. Unlike $\bar{\mathbf{Z}}^1(B_\gamma(w^*))$ and $\bar{\mathbf{F}}^1(B_\gamma(w^*))$ , here the quantities $\mathbf{Z}^0(B_\gamma(w^*); \beta)$ and $\mathbf{F}^0(B_\gamma(w^*); \beta)$ are stochastic. We indicate this by dropping the overbar.
Analogous to (4), the asymptotic expansion of $\mathrm{F}^0 (B_\gamma (w^*);\beta)$ in $n$ for $w^{*}\in U_{0}$ is
$$
\begin{array}{l} \mathrm {F} ^ {0} \left(B _ {\gamma} \left(w ^ {*}\right); \beta\right) \tag {7} \\ = n \beta \hat {\mathrm {K}} ^ {0} \left(w ^ {* 0}\right) + \lambda^ {0} \left(w ^ {*}\right) \log n + O _ {p} (\log \log n) \\ \end{array}
$$
where $w^{*0} \coloneqq \arg \min_{w \in B_{\gamma}(w^{*})} \mathrm{K}^{0}(w)$ . Note that the asymptotic expansion of $\overline{\mathrm{F}}^1(B_{\gamma}(w^{*}))$ in (4) involves the downstream test loss $\mathrm{K}^1$ whereas the asymptotic expansion of $\mathrm{F}^0(B_{\gamma}(w^{*}); \beta)$ in (7) involves the pretraining train loss $\hat{\mathrm{K}}^0$ . To compare the two, we take the expectation over the dataset in (7), arriving at the following expansion involving only deterministic quantities:
$$
\begin{array}{l} \mathbb {E} _ {\mathcal {D} ^ {0}} \mathrm {F} ^ {0} \left(B _ {\gamma} \left(w ^ {*}\right); \beta\right) \tag {8} \\ = n \beta \mathrm {K} ^ {0} \left(w ^ {* 0}\right) + \lambda^ {0} \left(w ^ {*}\right) \log n + O (\log \log n). \\ \end{array}
$$
In the next section, we will use these asymptotic expansions to bound the discrepancy between the downstream and pretraining free energy.
# 5. Relationship between pretraining and downstream free energy
In this section, we show there is a satisfying relationship between pretraining free energy and downstream free energy, asymptotically speaking. Relying on the leading order terms of the asymptotic expansion of the downstream free energy in (4), we can express the downstream free energy strategy in (3) as
$$
\arg \min _ {w ^ {*} \in U _ {0}} \left[ m K ^ {1} \left(w ^ {* 1}\right) + \lambda^ {1} \left(w ^ {*}\right) \log m \right], \tag {9}
$$
where $w^{*1} \coloneqq \arg \min_{w \in B_{\gamma}(w^{*})} \mathrm{K}^{1}(w)$ . To avoid requiring the downstream test loss $\mathrm{K}^{1}$ , we introduce the pretraining asymptotic free energy strategy which relies only on the pretraining distribution and (under mild assumptions, below) serves as a viable proxy for (9). Formally, this strategy seeks a solution of the following optimization
$$
\arg \min _ {w ^ {*} \in U _ {0}} \left[ n \beta_ {0} \mathrm {K} ^ {0} \left(w ^ {*}\right) + \lambda^ {0} \left(w ^ {*}\right) \log n \right] \tag {10}
$$
where $\beta_0 = M\frac{m\log n}{n\log m}$ . This strategy is supported by the following result whose proof can be found in Appendix C.
Proposition 5.1. Let $w^{*}$ be a local minimum of $\mathrm{K}^0 (w)$ ; i.e., $w^{*}\in U_{0}$ , and let $\gamma$ be such that $w^{*0}$ is also a local minimum of $\mathrm{K}^0 (w)$ ; i.e., $w^{*0}\in U_0$ . Further suppose $\lambda^1 (w^*)\leq \lambda^0 (w^*)$ .
Define $M := \max_{(x,y) \sim r^0(x,y)} \frac{r^1(x,y)}{r^0(x,y)} < \infty$ . Then,
$$
\begin{array}{l} \mathrm {K} ^ {1} \left(w ^ {* 1}\right) + \lambda^ {1} \left(w ^ {*}\right) \frac {\log m}{m} \tag {11} \\ \leq M \mathrm {K} ^ {0} (w ^ {*}) + D + \lambda^ {0} (w ^ {*}) \frac {\log m}{m} \\ \end{array}
$$
where $D = \int \log \frac{r^1(y|x)}{r^0(y|x)} r^1 (x,y)dxdy.$
Proposition 5.1 justifies model selection using the asymptotic expansion of the pretraining free energy as in (10). This follows from (11) by first multiplying both sides by $m$ and then noting that minimizing
$$
m M K ^ {0} \left(w ^ {*}\right) + m D + \lambda^ {0} \left(w ^ {*}\right) \log m
$$
is equivalent, up to constants, to minimizing
$$
\frac {\log n}{\log m} \left[ m M K ^ {0} \left(w ^ {*}\right) + \lambda^ {0} \left(w ^ {*}\right) \log m \right],
$$
which leads us precisely to (10). To further illustrate Proposition 5.1, we include explanatory examples in Appendix D which interpret this result in the setting of Gaussian distributions.
There are some real-world scenarios for which Proposition 5.1 would be uninformative. For example, if the pretraining data includes only images of horses while the downstream data contains only cars, their label supports would be disjoint, leading to an infinite $M$ . To address this, our experiments in Section 6 focus on settings where the pretraining dataset is significantly larger and more diverse than the downstream dataset. This also reflects common practice in the field and an established heuristic in transfer learning; see also (Kornblith et al., 2019). Specifically, we achieve this by using pretraining datasets with a substantially larger set of image classes. If this were reversed, i.e., if the pretraining dataset had substantially fewer classes than the downstream dataset, the relationship we establish in Proposition 5.1 would be uninformative.
# 5.1. Observations of the pretraining asymptotic free energy strategy
In this section, we present practical observations that follow from selecting pretraining checkpoints according to the pretraining asymptotic free energy strategy defined by (10).
Observation 1: A suboptimal checkpoint in terms of pretraining test loss can still be preferred by the pretraining asymptotic free energy strategy in (10). Suppose we have two models $w_{\alpha}^{*}, w_{\beta}^{*} \in U_{0}$ ; i.e., both models are local minima of the pretraining test loss $\mathrm{K}^0$ . In order to determine which model is preferred for fine-tuning, our strategy (10) directs us to compare
$$
F _ {\alpha} = n \beta_ {0} \mathrm {K} ^ {0} \left(w _ {\alpha} ^ {*}\right) + \lambda^ {0} \left(w _ {\alpha} ^ {*}\right) \log n \tag {12}
$$
and
$$
F _ {\beta} = n \beta_ {0} \mathrm {K} ^ {0} \left(w _ {\beta} ^ {*}\right) + \lambda^ {0} \left(w _ {\beta} ^ {*}\right) \log n. \tag {13}
$$
Suppose $\mathrm{K}^0 (w_\alpha^*) < \mathrm{K}^0 (w_\beta^*)$ ; i.e., $w_\alpha^*$ and $w_\beta^*$ are in different level sets and checkpoint $w_\alpha^*$ has lower pretraining test loss; but $\lambda^0 (w_\alpha^*) > \lambda^0 (w_\beta^*)$ , implying checkpoint $w_\beta^*$ is less complex than checkpoint $w_\alpha^*$ . Then it is entirely possible for $F_\alpha > F_\beta$ so that checkpoint $w_\beta^*$ will be preferred by (10) despite having higher pretraining test loss. In fact, this happens precisely when
$$
\frac {m}{\log m} < \frac {1}{M} \frac {\lambda^ {0} (w _ {\alpha} ^ {*}) - \lambda^ {0} (w _ {\beta} ^ {*})}{\mathrm {K} ^ {0} (w _ {\beta} ^ {*}) - \mathrm {K} ^ {0} (w _ {\alpha} ^ {*})}.
$$
Recall, $m$ represents the number of examples in the downstream dataset. Note that, when $M$ is large, there is a smaller range of $m$ under which the suboptimal pretraining checkpoint will be preferred. In other words, if the downstream distribution is very different from the pretraining distribution, the free energy strategy will look to the lower level sets of pretraining test loss.
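To make Observation 1 concrete, the following sketch compares $F_\alpha$ and $F_\beta$ from (12) and (13) using made-up values for the losses, complexities, and the shift constant $M$; the functional form of $\beta_0$ below is our assumption, chosen so that $n\beta_0/\log n = Mm/\log m$, matching the threshold condition above.

```python
import math

def free_energy(K0, lam0, n, beta0):
    """Asymptotic free energy F = n*beta0*K0 + lambda0*log(n), as in (12)-(13)."""
    return n * beta0 * K0 + lam0 * math.log(n)

# Hypothetical checkpoints: w_alpha has lower pretraining test loss,
# but w_beta has lower complexity (local learning coefficient).
K0_alpha, lam_alpha = 0.10, 50.0
K0_beta, lam_beta = 0.12, 10.0

n, m, M = 100_000, 1_000, 2.0
# Assumed form of beta0, chosen so that n*beta0 / log(n) = M*m / log(m).
beta0 = M * m * math.log(n) / (n * math.log(m))

F_alpha = free_energy(K0_alpha, lam_alpha, n, beta0)
F_beta = free_energy(K0_beta, lam_beta, n, beta0)

# Threshold from Observation 1: w_beta is preferred exactly when
# m/log(m) < (1/M) * (lam_alpha - lam_beta) / (K0_beta - K0_alpha).
threshold = (1.0 / M) * (lam_alpha - lam_beta) / (K0_beta - K0_alpha)
print(F_beta < F_alpha, m / math.log(m) < threshold)  # the two criteria agree
```

With these values the less complex checkpoint $w_\beta^*$ wins despite its higher pretraining test loss, exactly as the threshold predicts.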
Observation 2: When $n\beta_0 \gg \log n$ , a checkpoint with lower pretraining test loss will always be preferred by the pretraining asymptotic free energy strategy in (10). Again, suppose we have two local minima $w_{\alpha}^{*}, w_{\beta}^{*} \in U_0$ but which are in different level sets of the test loss; i.e., $\mathrm{K}^0(w_{\alpha}^*) \neq \mathrm{K}^0(w_{\beta}^*)$ . Without a handle on $\beta_0$ , we cannot decide which checkpoint has lower free energy since, as described above in Observation 1, the complexity term $\lambda^0$ also plays a role in comparing $F_{\alpha}$ and $F_{\beta}$ .
However, when $n\beta_0$ is significantly larger than $\log n$ , the first term in (10) dominates the second. In this case, the pretraining asymptotic free energy strategy prioritizes checkpoints with lower pretraining test loss $\mathrm{K}^0$ .
Using the definition of $\beta_0$ in (10), the setting described here is equivalent to $Mm\gg \log m$ , where $m$ is the size of the fine-tuning dataset and $M$ measures distributional shift. Since $m$ already grows faster than $\log m$ , this may offer an intriguing insight which justifies the pretraining test loss as a heuristic for checkpoint adaptability.
Observation 3: For checkpoints with the same pretraining test loss, the one with the lowest complexity is preferred by the pretraining asymptotic free energy strategy in (10). Suppose we have two models $w_{\alpha}^{*}, w_{\beta}^{*} \in U_{0}$ in the same level set of $\mathrm{K}^{0}$ ; i.e., with the same pretraining test loss $\mathrm{K}^{0}(w_{\alpha}^{*}) = \mathrm{K}^{0}(w_{\beta}^{*})$ . As before, our strategy (10) directs us to compare $F_{\alpha}$ and $F_{\beta}$ as defined in equations (12) and (13), respectively.
However, since the first terms are equal, selecting the preferred pretraining checkpoint depends only on the model complexity, as measured by $\lambda^0 (w_\alpha^*)$ and $\lambda^0 (w_\beta^*)$ . Thus, all else being equal, the strategy in (10) naturally prefers simpler pretraining checkpoints over more complex ones for improved fine-tuning.
# 5.2. Estimating pretraining free energy
So far, we have established the pretraining asymptotic free energy strategy as a theoretically principled approach to pretraining model selection for improved finetuning. In this section, we show how to estimate the pretraining asymptotic free energy required in (10) using only the sample pretraining train loss $\hat{\mathbf{L}}^0$ . This estimation technique, which we employ in our experiments (Section 6), enables the application of our proposed strategy in (10) for real-world machine learning scenarios.
We begin by focusing on model selection for pretraining checkpoints in the same level set of $\mathrm{K}^0$ . In this case, we can set $\beta_0$ to an arbitrary value; we set $\beta_0 = 1$ . Next, note that the optimization objective in (10) can be equivalently expressed in terms of $\mathrm{L}^0$ since it differs from $\mathrm{K}^0$ only by a constant with respect to $w$ . In other words, we have
$$
\begin{array}{l} \underset {w ^ {*} \in U _ {0}} {\arg \min } \left[ n \mathrm {K} ^ {0} \left(w ^ {*}\right) + \lambda^ {0} \left(w ^ {*}\right) \log n \right] \\ = \underset {w ^ {*} \in U _ {0}} {\arg \min } \left[ n \mathrm {L} ^ {0} \left(w ^ {*}\right) + \lambda^ {0} \left(w ^ {*}\right) \log n \right]. \tag {14} \\ \end{array}
$$
To estimate the RHS of (14), we refer to recent work of Lau et al. (2025), which shows that the Widely Applicable Bayesian Information Criterion (WBIC) around $w^{*} \in U_{0}$ is an asymptotically unbiased estimator of $n\mathrm{L}^0 (w^*) + \lambda^0 (w^*)\log n$ . This localized version of the WBIC is computed from the sample pretraining train loss $\hat{\mathrm{L}}^0$ measured in the neighborhood $B_{\gamma}(w^{*})$ of the checkpoint $w^{*}$ as described below.
Consider a localizing Gaussian prior which acts as a surrogate for enforcing the domain of integration given by $B_{\gamma}(w^{*})$ . Specifically, let
$$
\varphi_ {\vec {\gamma}} (w) \propto \exp \Big\{ - \sum_ {j = 1} ^ {p} \gamma_ {j} w _ {j} ^ {2} \Big\}, \quad \vec {\gamma} \in \mathbb {R} _ {> 0} ^ {p}
$$
which is centered at the origin with scale vector $\vec{\gamma} = (\gamma_1, \dots, \gamma_p)$ . Since we only want to measure the free energy with respect to parameters $\theta$ of the model backbone (recall, the fine-tuning setup described in Section 3), we take $\gamma_j = \infty$ in the coordinates of $v$ and $\gamma_j = \gamma$ in the coordinates of $\theta$ , where $\gamma$ is the same as the radius defining the neighborhood $B_{\gamma}(w^{*})$ ; recall, (2).
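As a small illustration of this localizing prior, the sketch below evaluates its unnormalized log-density; the `head_mask` bookkeeping, and treating $\gamma_j = \infty$ as a point mass that pins the head coordinates to the center, are our own reading of the construction.

```python
import numpy as np

def log_local_prior(w, w_star, head_mask, gamma=1.0):
    """Unnormalized log phi(w - w*): Gaussian of scale gamma on backbone
    coordinates; gamma_j = infinity on head coordinates, which we read as
    pinning those coordinates to the center w*."""
    diff = np.asarray(w) - np.asarray(w_star)
    if np.any(diff[head_mask] != 0):
        return -np.inf                       # off-center in a head coordinate
    return -gamma * np.sum(diff[~head_mask] ** 2)

# Toy example: 5 parameters, the last 2 belonging to the linear head v.
w_star = np.zeros(5)
head_mask = np.array([False, False, False, True, True])
w = np.array([0.1, -0.2, 0.3, 0.0, 0.0])
lp = log_local_prior(w, w_star, head_mask)
print(lp)   # finite, since the head coordinates match w*
```

Any perturbation of a head coordinate gets log-density $-\infty$, while backbone perturbations are penalized quadratically with scale $\gamma$.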
Define the pretraining posterior distribution
$$
p ^ {0} \left(w; w ^ {*}, \beta , \vec {\gamma}\right) \propto \exp \left\{- n \beta \hat {\mathrm {L}} ^ {0} (w) \right\} \varphi_ {\vec {\gamma}} \left(w - w ^ {*}\right). \tag {15}
$$
Following Lau et al. (2025), we define the pretraining WBIC at $w^{*} \in U_{0}$ by
$$
\operatorname {W B I C} \left(w ^ {*}; \beta^ {*}\right) := \int \left[ n \hat {\mathrm {L}} ^ {0} (w) \right] p ^ {0} \left(w; w ^ {*}, \beta^ {*}, \vec {\gamma}\right) d w, \tag {16}
$$
where $\beta^{*} = \frac{1}{\log n}$ . It is not hard to see that (16) is a localized adaptation of Watanabe's classic Widely Applicable Bayesian Information Criterion (WBIC) (Watanabe, 2013). The classic WBIC itself was developed because the standard Bayesian Information Criterion (BIC) (Schwarz, 1978) is unsuitable for singular statistical models. Recall that a model is said to be 'regular' if its parameter-to-distribution mapping is one-to-one and its Fisher information matrix is positive definite for all possible parameter values; otherwise, it is singular. The key distinction between the pretraining WBIC, as defined in (16), and the classic WBIC is its localization through a Gaussian prior centered on the pretraining checkpoint $w^{*}$ .
The pretraining WBIC at a checkpoint $w^{*}$ is a good estimate of the (expected) pretraining free energy around $w^{*}$ defined by equations (5) and (6). Furthermore, $\mathrm{WBIC}(w^{*};\beta^{*})$ can be reliably computed via SGLD sampling methods; see Lau et al. (2025, Appendix G).
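A minimal 1-D sketch of this SGLD-based estimation, in the spirit of Lau et al. (2025, Appendix G); the Gaussian-location model, step size, and chain length below are illustrative choices, not the settings used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretraining" data and sample train loss: a 1-D Gaussian location model.
n = 5_000
data = rng.normal(0.0, 1.0, size=n)

def L_hat(w):
    """Sample pretraining train loss (squared-error stand-in for L^0-hat)."""
    return 0.5 * np.mean((data - w) ** 2)

def grad_L_hat(w):
    return -np.mean(data - w)

def wbic_sgld(w_star, gamma=1.0, eps=1e-5, steps=3_000):
    """Estimate WBIC(w*; beta*) by averaging n*L_hat(w) over an SGLD chain
    targeting p^0(w) ∝ exp{-n beta* L_hat(w)} phi_gamma(w - w*), beta* = 1/log n."""
    beta_star = 1.0 / np.log(n)
    w, vals = w_star, []
    for _ in range(steps):
        grad_log_post = -n * beta_star * grad_L_hat(w) - 2.0 * gamma * (w - w_star)
        w += 0.5 * eps * grad_log_post + rng.normal(0.0, np.sqrt(eps))
        vals.append(n * L_hat(w))
    return float(np.mean(vals[steps // 2:]))   # discard the first half as burn-in

wbic = wbic_sgld(w_star=float(np.mean(data)))
print(wbic)   # roughly n * L_hat at the checkpoint, plus a small correction
```

In practice the same recipe applies with minibatch gradients of the network loss in place of `grad_L_hat`, which is what we do for ResNet-18 in Section 6.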
Therefore, to apply the pretraining asymptotic free energy strategy in (10) to checkpoints with the same $\mathrm{K}^0$ , we simply select the one with the smallest pretraining WBIC given by $\mathrm{WBIC}(w^{*};\beta^{*})$ . Next, we empirically verify this strategy using a ResNet-18 model trained on the CIFAR-FS dataset.
# 6. Experiments
The goal of our experiments is to evaluate how well the pretraining WBIC, which estimates the pretraining free energy as described in Section 5.2, correlates with downstream performance. In order to measure the impact of lower pretraining WBIC, we apply mechanisms during pretraining which are known to implicitly regularize this quantity, as shown in Lau et al. (2025). These include large learning rates, small batch sizes, and high momentum.
We use the CIFAR-FS dataset (Bertinetto et al., 2019), derived from CIFAR-100 where the 100 classes are divided into 64 classes for meta-training, 16 classes for meta-validation, and 20 classes for meta-testing. We pretrain on the meta-training set and then assess model adaptability on the unseen meta-test set via limited fine-tuning described in Section 3. The meta-validation classes are not used.
Pretraining. For pretraining, we use all 64 classes from the CIFAR-FS meta-training set to train a ResNet-18 model using stochastic gradient descent (SGD). We explore ranges of hyperparameter values for the learning rate, batch size and momentum. Interaction effects between these are not considered. Full experiment details for each hyperparameter sweep are provided in Appendix B.1. During training we track the pretraining train loss (first column of Figure 2) and the pretraining WBIC (second column of Figure 2). The hyperparameter settings for pretraining WBIC computation are provided in Appendix B.1.
Figure 2. Model checkpoints with lower pretraining WBIC (second column) consistently result in better transfer accuracy, both when fine-tuning on the full downstream dataset (third column) and in the few-shot setting (fourth column). Lower pretraining WBIC correlates with better downstream performance for Top row: larger learning rates, Middle row: smaller batch sizes, and Bottom row: increased momentum. Additional experiments on mini-ImageNet and a VGG model yield similar results; see Figure 3 and Appendix E.
Full meta-test fine-tuning uses the full meta-test dataset, consisting of all 20 meta-test classes with 600 examples per class. We use an 80/20 split for training and testing, with stratification within each class. In this setting a new (randomly initialized) linear head is attached for the 20-class classification task, and the model is fine-tuned for 100 steps using SGD. This setting corresponds to the "Fine-tune Transfer Accuracy" metric (third column) in Figure 2. Hyperparameter details for this setting are in Appendix B.2.
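The stratified 80/20 split described above can be sketched as follows; the helper `stratified_split` is our own illustration, not code from the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def stratified_split(labels, test_frac=0.2):
    """80/20 train/test split, stratified within each class as described above."""
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        n_test = int(round(test_frac * len(idx)))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

# 20 meta-test classes with 600 examples per class, as in CIFAR-FS
labels = np.repeat(np.arange(20), 600)
tr, te = stratified_split(labels)
print(len(tr), len(te))   # 9600 train and 2400 test indices
```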
Few-shot meta-test fine-tuning examines a data-limited, few-shot scenario. A single few-shot task is created by randomly sampling 5 classes and 5 examples per class from the meta-test dataset, creating a dataset with 25 total training examples. A new (randomly initialized) linear head is attached for the 5-class classification task, and the model is fine-tuned for 100 steps using full batch gradient descent. The transfer accuracy is evaluated on 100 randomly selected test examples for each of the 5 classes. The overall transfer accuracy is averaged over 100 few-shot tasks. This setting corresponds to the "Avg 5-shot Transfer Accuracy" metric (fourth column) in Figure 2. Hyperparameter details for this setting are in Appendix B.2.
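The few-shot task construction can be sketched as below; again, `sample_few_shot_task` is an illustration of the protocol, not experiment code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_few_shot_task(labels, n_way=5, k_shot=5, n_test=100):
    """Sample one n_way-way, k_shot-shot task: n_way random classes, with
    k_shot training examples and n_test test examples per class."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    train_idx, test_idx = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:k_shot])
        test_idx.extend(idx[k_shot:k_shot + n_test])
    return np.array(train_idx), np.array(test_idx), classes

labels = np.repeat(np.arange(20), 600)   # 20 meta-test classes, 600 examples each
tr, te, cls = sample_few_shot_task(labels)
print(len(tr), len(te))   # 25 training and 500 test indices
```

Repeating this 100 times and averaging the per-task accuracies yields the reported metric.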
Results. In each of these two fine-tuning scenarios, we observe a strong correlation between lower pretraining free energy (as measured by the pretraining WBIC, see Section
5.2) and better downstream performance; see Figure 2. In particular, we see that increasing learning rate, decreasing batch sizes, and increasing momentum all result in lower pretraining WBIC, which in turn leads to better downstream performance. Note the Avg 5-shot transfer accuracy (fourth column) is typically higher than the finetune transfer accuracy (third column); this is likely because the former only needs to learn 5 classes at a time while the latter needs to learn 20 classes. Interestingly, we can view pretraining train loss (the first column of Figure 2) as a baseline comparison. We see that pretraining train loss often collapses to a similar value as training proceeds, rendering it ineffective for distinguishing different fine-tuning behaviors.
In Figure 1, we take each checkpoint at the end of pretraining and plot its pretraining WBIC (called pretraining free energy there since the terminology had not been introduced) versus transfer accuracy. The left (right) plot of Figure 1 corresponds to the third (fourth) column of Figure 2.
Comparison with other pretraining metrics. As described in Section 2, recent work of Galanti et al. (2022) and Munn et al. (2024) examines the role of neural collapse and geometric complexity as effective pretraining metrics for assessing the suitability of a model checkpoint for transfer learning. To compare the effectiveness of our free energy strategy against these other pretraining metrics, we
conducted a correlation analysis computing the Pearson correlation coefficients (Pearson & Galton, 1895) using model checkpoints obtained from training a ResNet-18 model on CIFAR-FS to convergence; see Table 1.
These experiments involved a comprehensive exploration of the hyperparameter space (see Appendix B.3). For each checkpoint, we compared the Geometric Complexity, Neural Collapse, and Free Energy of the pretrained model to its downstream performance, measured via both full meta-test fine-tuning and few-shot meta-test fine-tuning. As indicated in Table 1, the pretraining Free Energy exhibits a substantially stronger correlation with downstream performance than other metrics considered.
| Pretraining Metric | Finetune Accuracy | Avg 5-shot Accuracy |
| --- | --- | --- |
| Geometric Complexity | -0.767 | -0.443 |
| Neural Collapse | -0.632 | -0.1875 |
| Free Energy | -0.820 | -0.8901 |
Table 1. Correlation comparison between pretraining metrics (geometric complexity, neural collapse, and free energy) and downstream performance (finetune and few-shot transfer accuracy).
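For reference, the Pearson correlation coefficients in Table 1 are computed as below; the checkpoint measurements here are made-up placeholders, not our experimental data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between a pretraining metric and accuracy."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

# Made-up checkpoint measurements: lower free energy, higher transfer accuracy,
# giving the kind of strong negative correlation reported in Table 1.
free_energy = [910.0, 870.0, 845.0, 820.0, 790.0]
accuracy = [0.52, 0.55, 0.58, 0.60, 0.63]
r = pearson_r(free_energy, accuracy)
print(round(r, 3))   # strongly negative
```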
# 7. Conclusion and Future Work
In this work, we introduced the downstream free energy as a Bayesian model selection criterion for quantifying the adaptability of pretraining checkpoints, offering a principled way to predict their performance on unseen downstream tasks. Our key insight is that checkpoints with lower downstream free energy are more adaptable, making them ideal candidates for fine-tuning. Our empirical results across varied datasets (CIFAR-FS, mini-ImageNet) and architectures (ResNet, VGG) validate the utility of the pretraining free energy as a practical checkpoint selection criterion, especially when downstream data is scarce or inaccessible.
Despite the promising results, some limitations remain. First, our analysis currently lacks a direct link between downstream free energy and downstream predictive performance. At the moment, we provide a rigorous connection only when downstream adaptation is performed in a Bayesian manner (see Appendix A). While Bayesian deep learning is not yet widely adopted due to its computational overhead, this link may become valuable as computational barriers are reduced, particularly in fine-tuning scenarios.
In addition, while our theoretical framework supports the use of free energy as a selection criterion, the practical computation of the pretraining WBIC, as in (16), remains challenging for large models which may possess tens or hundreds of billions of parameters. Developing tractable methods for this computation presents a significant direction for future work. An alternative approach would be to instead identify computationally efficient "levers" that influence pretraining free energy, thus allowing us to improve downstream adaptation performance without relying on direct computation of the pretraining WBIC.
# Impact Statement
This work proposes a novel theoretical framework for understanding the mechanisms behind successful fine-tuning in machine learning. Our findings have the potential to guide development of more efficient fine-tuning strategies, reducing computational costs and resource consumption, with implications for diverse applications like NLP and computer vision. As the primary focus of this work is theoretical, there are no direct societal consequences of our work that we feel must be specifically highlighted.
# Acknowledgments
We would like to thank Javier Gonzalvo for helpful discussions, suggestions, and feedback during the development of this work.
# References
Andriushchenko, M., Varre, A. V., Pillaud-Vivien, L., and Flammarion, N. SGD with large step sizes learns sparse features. In International Conference on Machine Learning, pp. 903-925. PMLR, 2023.
Balasubramanian, V. Statistical inference, Occam's razor, and statistical mechanics on the space of probability distributions. Neural Computation, 9(2):349-368, 1997.
Bengio, Y. Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML workshop on unsupervised and transfer learning, pp. 17-36. JMLR Workshop and Conference Proceedings, 2012.
Bertinetto, L., Henriques, J. F., Torr, P., and Vedaldi, A. Meta-learning with differentiable closed-form solvers. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyxnZh0ct7.
Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosse-lut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N. S., Chen, A. S., Creel, K. A., Davis, J., Demszky, D., Donahue, C., Doumbouya, M., Durmus, E., Ermon, S., Etchemendy, J., Ethayarajh, K., Fei-Fei, L., Finn, C., Gale, T., Gillespie, L. E., Goel, K., Goodman, N. D., Grossman, S., Guha, N., Hashimoto, T., Henderson, P., Hewitt, J., Ho, D. E., Hong, J., Hsu, K., Huang, J., Icard, T. F., Jain, S., Jurafsky, D., Kalluri,
P., Karamcheti, S., Keeling, G., Khani, F., Khattab, O., Koh, P. W., Krass, M. S., Krishna, R., Kuditipudi, R., Kumar, A., Ladhak, F., Lee, M., Lee, T., Leskovec, J., Levent, I., Li, X. L., Li, X., Ma, T., Malik, A., Manning, C. D., Mirchandani, S. P., Mitchell, E., Munyikwa, Z., Nair, S., Narayan, A., Narayanan, D., Newman, B., Nie, A., Niebles, J. C., Nilforoshan, H., Nyarko, J. F., Ogut, G., Orr, L., Papadimitriou, I., Park, J. S., Piech, C., Portelance, E., Potts, C., Raghunathan, A., Reich, R., Ren, H., Rong, F., Roohani, Y. H., Ruiz, C., Ryan, J., R'e, C., Sadigh, D., Sagawa, S., Santhanam, K., Shih, A., Srinivasan, K. P., Tamkin, A., Taori, R., Thomas, A. W., Tramer, F., Wang, R. E., Wang, W., Wu, B., Wu, J., Wu, Y., Xie, S. M., Yasunaga, M., You, J., Zaharia, M. A., Zhang, M., Zhang, T., Zhang, X., Zhang, Y., Zheng, L., Zhou, K., and Liang, P. On the opportunities and risks of foundation models. ArXiv, 2021. URL https://crfm.stanford.edu/assets/report.pdf.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877-1901, 2020.
Chen, S., Ma, K., and Zheng, Y. Med3d: Transfer learning for 3d medical image analysis. arXiv preprint arXiv:1904.00625, 2019.
Dherin, B., Munn, M., Rosca, M., and Barrett, D. Why neural networks find simple solutions: The many regularizers of geometric complexity. Advances in Neural Information Processing Systems, 35:2333-2349, 2022.
Dhillon, G. S., Chaudhari, P., Ravichandran, A., and Soatto, S. A baseline for few-shot image classification. In International Conference on Learning Representations, 2019.
Foret, P., Kleiner, A., Mobahi, H., and Neyshabur, B. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2021.
Galanti, T., György, A., and Hutter, M. On the Role of Neural Collapse in Transfer Learning, January 2022. URL http://arxiv.org/abs/2112.15121. arXiv:2112.15121 [cs].
Goyal, P. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Gunasekar, S., Lee, J. D., Soudry, D., and Srebro, N. Implicit bias of gradient descent on linear convolutional networks. Advances in neural information processing systems, 31, 2018.
He, F., Liu, T., and Tao, D. Control batch size and learning rate to generalize well: Theoretical and empirical evidence. Advances in neural information processing systems, 32, 2019.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
Hinton, G. E. and van Camp, D. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, COLT '93, pp. 5-13, New York, NY, USA, 1993. Association for Computing Machinery. ISBN 0897916115. doi: 10.1145/168304.168306. URL https://doi.org/10.1145/168304.168306.
Jaquier, N., Welle, M. C., Gams, A., Yao, K., Fichera, B., Billard, A., Ude, A., Asfour, T., and Kragic, D. Transfer learning in robotics: An upcoming breakthrough? a review of promises and challenges. The International Journal of Robotics Research, pp. 02783649241273565, 2023.
Kass, R. E. and Raftery, A. E. Bayes factors. Journal of the American Statistical Association, 90(430):773-795, 1995. doi: 10.1080/01621459.1995.10476572. URL https://www.tandfonline.com/doi/abs/10.1080/01621459.1995.10476572.
Ke, A., Ellsworth, W., Banerjee, O., Ng, A. Y., and Rajpurkar, P. Chextransfer: performance and parameter efficiency of imagenet models for chest x-ray interpretation. In Proceedings of the conference on health, inference, and learning, pp. 116-124, 2021.
Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017.
Kim, J. and Park, C. End-to-end ego lane estimation based on sequential transfer learning for self-driving cars. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 30-38, 2017.
Kornblith, S., Shlens, J., and Le, Q. V. Do better imagenet models transfer better? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2661-2671, 2019.
Kumar, A., Raghunathan, A., Jones, R. M., Ma, T., and Liang, P. Fine-tuning can distort pretrained features and underperform out-of-distribution. In International Conference on Learning Representations, 2022.
Lau, E., Furman, Z., Wang, G., Murfet, D., and Wei, S. The local learning coefficient: A singularity-aware complexity measure. In The 28th International Conference on Artificial Intelligence and Statistics, 2025. URL https://openreview.net/forum?id=1av51ZlsuL.
Lee, Y., Chen, A. S., Tajwar, F., Kumar, A., Yao, H., Liang, P., and Finn, C. Surgical fine-tuning improves adaptation to distribution shifts. In The Eleventh International Conference on Learning Representations, 2022.
Li, J., Tang, T., Zhao, W. X., Nie, J.-Y., and Wen, J.-R. Pre-trained language models for text generation: A survey. ACM Computing Surveys, 56(9):1-39, 2024.
Lippl, S. and Lindsey, J. Inductive biases of multi-task learning and finetuning: multiple regimes of feature reuse. Advances in Neural Information Processing Systems, 37: 118745-118776, 2024.
Liu, H., Xie, S. M., Li, Z., and Ma, T. Same pre-training loss, better downstream: Implicit bias matters for language models. In Krause, A., Brunskill, E., Cho, K., Engelhardt, B., Sabato, S., and Scarlett, J. (eds.), Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 22188-22214. PMLR, 23-29 Jul 2023a. URL https://proceedings.mlr.press/v202/liu23ao.html.
Liu, Y., Zhang, Y., Wang, Y., Hou, F., Yuan, J., Tian, J., Zhang, Y., Shi, Z., Fan, J., and He, Z. A survey of visual transformers. IEEE Transactions on Neural Networks and Learning Systems, 2023b.
MacKay, D. J. C. Information Theory, Inference & Learning Algorithms. Cambridge University Press, USA, 2002. ISBN 0521642981.
Masters, D. and Luschi, C. Revisiting small batch training for deep neural networks. arXiv preprint arXiv:1804.07612, 2018.
Mormont, R., Geurts, P., and Marée, R. Comparison of deep transfer learning strategies for digital pathology. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 2262-2271, 2018.
Munn, M., Dherin, B., and Gonzalvo, J. The impact of geometric complexity on neural collapse in transfer learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=PLbFid00aU.
Neyshabur, B., Tomioka, R., Salakhutdinov, R., and Srebro, N. Geometry of optimization and implicit regularization in deep learning. arXiv preprint arXiv:1705.03071, 2017.
Papyan, V., Han, X., and Donoho, D. L. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652-24663, 2020.
Pearson, K. and Galton, F. VII. Note on regression and inheritance in the case of two parents. Proceedings of the Royal Society of London, 58(347-352):240-242, 1895. doi: 10.1098/rspl.1895.0041. URL https://royalsocietypublishing.org/doi/abs/10.1098/rspl.1895.0041.
Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., and Huang, X. Pre-trained models for natural language processing: A survey. Science China technological sciences, 63(10): 1872-1897, 2020.
Robert, C. P. et al. The Bayesian choice: from decision-theoretic foundations to computational implementation, volume 2. Springer, 2007.
Sanchez, S., Romero, H., and Morales, A. A review: Comparison of performance metrics of pretrained models for object detection using the tensorflow framework. In IOP conference series: materials science and engineering, volume 844, pp. 012024. IOP Publishing, 2020.
Schwarz, G. Estimating the Dimension of a Model. The Annals of Statistics, 6(2):461-464, March 1978. ISSN 0090-5364, 2168-8966. doi: 10.1214/aos/1176344136. URL https://projecteuclid.org/journals/annals-of-statistics/volume-6/issue-2/Estimating-the-Dimension-of-a-Model/10.1214/aos/1176344136.full. Publisher: Institute of Mathematical Statistics.
Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2014.
Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S., and Srebro, N. The implicit bias of gradient descent on separable data. Journal of Machine Learning Research, 19 (70):1-57, 2018.
Watanabe, S. Algebraic Geometry and Statistical Learning Theory. Cambridge University Press, USA, 2009.
Watanabe, S. A Widely Applicable Bayesian Information Criterion. Journal of Machine Learning Research, 14(Mar):867-897, 2013. ISSN 1533-7928. URL http://www.jmlr.org/papers/v14/watanabe13a.html.
Wen, K., Ma, T., and Li, Z. How sharpness-aware minimization minimizes sharpness? In The Eleventh International Conference on Learning Representations, 2023.
# A. Theoretical guarantees on fine-tuning predictive performance
Here we discuss theoretical guarantees on downstream predictive performance when employing the version of the downstream free energy strategy in equation 9. We would like to give an analysis of downstream predictive performance without being tied to a specific training algorithm, e.g., SGD with momentum, Adam, etc. Towards this end, we consider measuring predictive performance through quantities related to the downstream posterior distribution over neural network weights:
$$
p ^ {1} (w; w ^ {*}, \gamma) \propto \exp \{- m \mathrm {K} ^ {1} (w) \} \varphi_ {\gamma} (w - w ^ {*}) \tag {17}
$$
This does not mean we are advocating for Bayesian prediction, but rather we believe the posterior distribution above contains highly relevant information that all sensible downstream training algorithms are sensitive to.
Since fine-tuning entails finding a small perturbation of said $w^{*}$ which performs well on the downstream training dataset $\mathcal{D}^1$ , we might consider an indicator of the downstream training performance to be given by
$$
\mathrm {T} _ {m} \left(w ^ {*}\right) := \mathbb {E} _ {w \sim p ^ {1} \left(w; w ^ {*}, \gamma\right)} \hat {\mathrm {K}} ^ {1} (w). \tag {18}
$$
Let us call equation 18 the downstream Gibbs training error. Select $\gamma$ such that $w^{*}$ is a local minimum of $\mathrm{K}^0 (w)$ ; i.e., $w^{*}\in U_{0}$ . Then, on average, over the draw of $\mathcal{D}^1$ , the expected downstream Gibbs training error is given by
$$
\mathbb {E} _ {\mathcal {D} ^ {1}} \mathrm {T} _ {m} \left(w ^ {*}\right) = \mathrm {K} ^ {1} \left(w ^ {* 1}\right) + \frac {\lambda^ {1} \left(w ^ {*}\right) - \nu^ {1} \left(w ^ {*}\right)}{m} + o \left(\frac {1}{m}\right) \tag {19}
$$
where $\nu^{1}(w^{*})$ , like the local learning coefficient $\lambda^1 (w^*)$ , is a positive number called the singular fluctuation that is an invariant of the underlying model-truth-prior triplet. Since $\nu^{1}(w^{*})$ is always positive, the strategy in equation 9 leads us to select a checkpoint that minimizes an upper bound on $\mathbb{E}_{\mathcal{D}^1}\mathrm{T}_m(w^*)$ .
We can also look at the population counterpart to equation 18 given by
$$
\mathrm {G} _ {m} \left(w ^ {*}\right) := \mathbb {E} _ {w \sim p ^ {1} \left(w; w ^ {*}, \gamma\right)} \mathrm {K} ^ {1} (w) \tag {20}
$$
Let us call equation 20 the downstream Gibbs test error. The expected value of this, over the draw of $\mathcal{D}^1$ is given by
$$
\mathbb {E} _ {\mathcal {D} ^ {1}} \mathrm {G} _ {m} \left(w ^ {*}\right) = \mathrm {K} ^ {1} \left(w ^ {* 1}\right) + \frac {\lambda^ {1} \left(w ^ {*}\right) + \nu^ {1} \left(w ^ {*}\right)}{m} + o \left(\frac {1}{m}\right). \tag {21}
$$
It does not appear that the strategy in equation 9 gives control over the (expected) downstream Gibbs test error.
Finally consider the test error resulting from Bayesian model averaging:
$$
\mathrm {G} _ {m} ^ {\mathrm {B M A}} \left(w ^ {*}\right) := \mathbb {E} _ {r ^ {1} (x)} D _ {\mathrm {K L}} \left(r ^ {1} (y | x) \mid \mid \mathbb {E} _ {w \sim p ^ {1} \left(w; w ^ {*}, \gamma\right)} p (y | x, w)\right) \tag {22}
$$
where the expectation over the posterior has been moved inside the logarithm. Let us call equation 22 the downstream Bayes test error. We have that
$$
\mathbb {E} _ {\mathcal {D} ^ {1}} \mathrm {G} _ {m} ^ {\mathrm {B M A}} \left(w ^ {*}\right) = \mathrm {K} ^ {1} \left(w ^ {* 1}\right) + \frac {\lambda^ {1} \left(w ^ {*}\right)}{m} + o \left(\frac {1}{m}\right). \tag {23}
$$
It is evident that the strategy in equation 9 leads us to select a checkpoint that minimizes an upper bound on $\mathbb{E}_{\mathcal{D}^1}\mathrm{G}_m^{\mathrm{BMA}}(w^*)$ .
# B. Experiment details
This section provides details for the experiment results presented in Figure 1 and Figure 2. For these experiments we use the CIFAR-FS dataset (Bertinetto et al., 2019) which has been pre-partitioned into 64 meta-training classes, 16 meta-validation classes and 20 meta-test classes. Each class contains 600 examples. We use the meta-training dataset for pretraining and the meta-test dataset during fine-tuning. We do not use the meta-validation dataset.
Random seeds. To account for stochasticity, we repeat all experiments below with 5 different random seeds. These seeds control the randomness in the pretraining optimization trajectory, the train-test split and the fine-tuning optimization trajectory in full meta-test finetuning (Section B.2 below), and the construction of few-shot tasks in few-shot meta-test finetuning (Section B.2 below). The variability across the random seeds is reflected in Figure 2, although the error bands may not always be visible due to the wide scale of the $y$ -axis in some cases.
# B.1. Pretraining details
We pretrain a ResNet-18 (He et al., 2016) on the CIFAR-FS meta-training dataset (Bertinetto et al., 2019) using SGD with cross-entropy loss. We vary SGD hyperparameters such as the learning rate, batch size, and momentum. We use a plain SGD optimizer without any regularization or learning rate schedule to avoid masking effects. We use random crop and random flip for data augmentation. Throughout training we report the pretraining train loss on the augmented data (Figure 2 first column) and the pretraining WBIC computed on the augmented data (Figure 2 second column). Note, we use the same SGLD hyperparameters to compute the WBIC across all experiments. That is, we use step size $\epsilon = 2\times 10^{-7}$ , chain length of 3,000 iterations, batch size of 2,048, $\gamma = 1.0$ , and $\beta^{*} = \frac{1}{\log n}$ where $n$ is the size of the pretraining dataset.
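The SGLD settings above can be collected into a single configuration; the dataset-size arithmetic below assumes all 64 meta-training classes with 600 examples each.

```python
import math

# CIFAR-FS meta-training set: 64 classes x 600 examples per class
n = 64 * 600

sgld_config = dict(
    step_size=2e-7,       # SGLD step size epsilon
    chain_length=3_000,   # SGLD iterations per WBIC estimate
    batch_size=2_048,
    gamma=1.0,            # scale of the localizing prior
    beta_star=1.0 / math.log(n),
)
print(sgld_config["beta_star"])
```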
Learning rate. For experiments that vary the learning rate in Figure 2 (top row), for each learning rate value in $\{0.01, 0.05, 0.1, 0.2\}$ we run SGD without momentum with a fixed batch size of 512 for 50,000 iterations. The WBIC estimations were performed every 2,000 iterations with the SGLD hyperparameters above.
Batch size. For experiments that vary the batch size in Figure 2 (middle row), for each batch size in $\{16,32,64,128,256,512\}$ we run SGD without momentum with a fixed learning rate of 0.05 for 50,000 iterations. The WBIC estimations were performed every 4,000 iterations with the SGLD hyperparameters above.
Momentum. For experiments that vary the momentum in Figure 2 (bottom row), for each momentum in $\{0.0, 0.2, 0.4, 0.6, 0.8\}$ we run SGD with a fixed learning rate of 0.01 and batch size of 512 for 80,000 iterations. The WBIC estimations were performed every 2,000 iterations with the SGLD hyperparameters above.
# B.2. Fine-tuning details
We perform fine-tuning in two scenarios: full CIFAR-FS meta-test fine-tuning, which uses all 20 classes of the meta-test set, and few-shot meta-test fine-tuning, which consists of multiple tasks constructed from the CIFAR-FS meta-test dataset. In both settings we fine-tune a ResNet-18 model, initializing the weights of the ResNet backbone with the pretraining weights. The weights of the model head are randomly initialized.
Full meta-test fine-tuning. When fine-tuning on the full CIFAR-FS meta-test dataset, we use all 20 meta-test classes and all 600 examples in each class. We then create an 80/20 train/test split. We use SGD with $L^2$ regularization rate of 0.01 and with a fixed learning rate of 0.0001 for the model backbone and a fixed learning rate of 0.01 for the model head. We fine-tune for 100 steps using a batch size of 128.
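The two learning rates (backbone vs. head) amount to per-parameter-group SGD with the $L^2$ penalty folded into the gradient. A minimal sketch of one such update, assuming parameters and gradients are stored in dictionaries keyed by group name (a hypothetical layout, not the authors' code):

```python
import numpy as np

def sgd_step(params, grads, lrs, l2=0.01):
    """One SGD step with a separate learning rate per parameter group
    (e.g. lrs = {"backbone": 1e-4, "head": 1e-2}) and L2 regularization
    of strength l2 added to each gradient."""
    return {name: p - lrs[name] * (grads[name] + l2 * p)
            for name, p in params.items()}
```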
Few-shot meta-test fine-tuning. For few-shot fine-tuning, we use only part of the CIFAR-FS meta-test dataset by sampling 5-class classification tasks randomly from the 20 classes available in the meta-test dataset. For each of these 5 classes we sample 5 training examples to create a 5-shot dataset for fine-tuning. During fine-tuning, as with full meta-test fine-tuning, we use a fixed learning rate of 0.0001 for the model backbone and a fixed learning rate of 0.01 for the model head. We perform 100 steps of full-batch gradient descent (GD) with $L^2$ regularization rate of 0.001 and then measure the model performance on 100 random test samples from each class. This constitutes a single task. Finally, we report the resulting accuracy rates averaged over 100 randomly chosen tasks.
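The task-construction procedure above (sample 5 of the 20 meta-test classes, take 5 training examples per class, then evaluate on 100 held-out examples per class) can be sketched as follows. This is an illustrative helper, not the authors' code; `labels` is assumed to be the per-example class-label array of the meta-test set.

```python
import numpy as np

def sample_episode(labels, n_way=5, n_shot=5, n_query=100, seed=None):
    """Sample one few-shot task: choose n_way classes, then n_shot training
    indices and n_query disjoint test indices per chosen class."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    train_idx, test_idx = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:n_shot])                    # 5-shot support set
        test_idx.extend(idx[n_shot:n_shot + n_query])     # held-out queries
    return np.array(train_idx), np.array(test_idx)
```

Averaging the post-fine-tuning accuracy on the query indices over 100 such episodes yields the reported few-shot transfer accuracy.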
# B.3. Correlation Analysis for Table 1
To assess the effectiveness of our free energy strategy in comparison to these other pretraining metrics, we computed the Pearson correlation coefficient (Pearson & Galton, 1895) of each pretraining metric \{geometric complexity, neural collapse, free energy\} against each downstream measure \{full meta-test fine-tuning transfer accuracy, few-shot meta-test fine-tuning transfer accuracy\}, using model checkpoints obtained from experiments with CIFAR-FS, trained on ResNet-18 to convergence.
These experiments, detailed in Section 6, involved a comprehensive exploration of the hyperparameter space. We swept across three hyperparameters (learning rate, batch size, and momentum), with six values for learning rate, six for batch size, and five for momentum. Each configuration was trained with five different random seeds, resulting in a total of 85 model checkpoints. For each checkpoint, we compared the Geometric Complexity, Neural Collapse, and Free Energy of the pretrained model to its downstream performance, measured via both full meta-test fine-tuning and few-shot meta-test fine-tuning. Notably, as indicated by the Pearson correlation coefficients in Table 1, the pretraining Free Energy exhibits a substantially stronger correlation with downstream performance than the other metrics considered.
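The entries of Table 1 thus reduce to Pearson correlations between each pretraining metric and each transfer accuracy across the 85 checkpoints. A minimal sketch of this computation (the dictionary layout and metric names are our assumptions):

```python
import numpy as np

def metric_accuracy_correlations(metrics, accuracy):
    """Pearson correlation of each pretraining metric with downstream
    transfer accuracy.  `metrics` maps a metric name to its values over
    the checkpoints; `accuracy` holds the matching transfer accuracies."""
    return {name: float(np.corrcoef(vals, accuracy)[0, 1])
            for name, vals in metrics.items()}
```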
# C. Proof of Proposition 5.1
Proof. By definition of the test loss and a change of measure, we have, for all $w$,
$$
\begin{array}{l} \mathrm {K} ^ {1} (w) = \int \log \left(\frac {r ^ {1} (y | x)}{p (y | x , w)}\right) r ^ {1} (x, y) d x d y \\ = \int \log \left(\frac {r ^ {0} (y | x)}{p (y | x , w)} \frac {r ^ {1} (y | x)}{r ^ {0} (y | x)}\right) \frac {r ^ {1} (x , y)}{r ^ {0} (x , y)} r ^ {0} (x, y) d x d y \\ = \int \log \left(\frac {r ^ {0} (y | x)}{p (y | x , w)}\right) \frac {r ^ {1} (x , y)}{r ^ {0} (x , y)} r ^ {0} (x, y) d x d y \\ + \int \log \left(\frac {r ^ {1} (y | x)}{r ^ {0} (y | x)}\right) r ^ {1} (x, y) d x d y \\ \leq M K ^ {0} (w) + D. \\ \end{array}
$$
Also, by definition of $w^{*1}$, we have $\mathrm{K}^1 (w^{*1})\leq \mathrm{K}^1 (w^*)$. Combining these two facts, we get $\mathrm{K}^1 (w^{*1}) \leq \mathrm{K}^1 (w^*)\leq M\,\mathrm{K}^0 (w^*) + D$ and obtain the conclusion in (11).
# D. Examples of Proposition 5.1
In this section we provide two detailed examples involving Gaussian distributions which help to illustrate Proposition 5.1 in action.
Example 1 (Covariate shift between pretraining and downstream distributions). Suppose $r^0(y|x) = r^1(y|x) = r(y|x)$ . Our pretraining and fine-tuning joint model is $p^i(x,y|w) = p(y|x,w)r^i(x)$ . Then we have $\lambda^0(w^*) = \lambda^1(w^*)$ and $K^i(w) = \mathbb{E}_{r^i(x)}K(x,w)$ where $K(x,w) = D_{\mathrm{KL}}(r(y|x)||p(y|x,w))$ . Writing
$$
\mathbb {E} _ {r ^ {1} (x)} K (x, w) = \int K (x, w) \frac {r ^ {1} (x)}{r ^ {0} (x)} r ^ {0} (x) d x
$$
we have that if $M = \sup_{x}\frac{r^1(x)}{r^0(x)} < \infty$ (the supremum taken over the support of $r^0$) then
$$
\mathbb{E}_{r^{1}(x)} K(x, w) \leq M\, \mathbb{E}_{r^{0}(x)} K(x, w).
$$
Putting this together we have $D = 0$ and
$$
\mathrm {K} ^ {1} \left(w ^ {* 1}\right) \leq \mathrm {K} ^ {1} \left(w ^ {*}\right) \leq M \mathrm {K} ^ {0} \left(w ^ {*}\right).
$$
Suppose the two covariate distributions are Gaussians
$$
r^{i}(x) \propto \exp\left\{-\frac{\|x - \mu_{i}\|_{2}^{2}}{2\sigma_{i}^{2}}\right\}
$$
then $M$ is finite if $\sigma_0 > \sigma_1$, in which case $M = \frac{\sigma_0}{\sigma_1}\exp \left\{\frac{(\mu_0 - \mu_1)^2}{2(\sigma_0^2 - \sigma_1^2)}\right\}$.
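For intuition, this closed form can be checked numerically in one dimension: the pointwise density ratio $r^1(x)/r^0(x)$ never exceeds $M$ and attains it at its maximizer. The parameter values and grid below are arbitrary choices for illustration.

```python
import numpy as np

def density_ratio_bound(mu0, sigma0, mu1, sigma1):
    """Closed-form M = sup_x r^1(x)/r^0(x) for 1-D Gaussians,
    finite when sigma0 > sigma1."""
    assert sigma0 > sigma1
    return (sigma0 / sigma1) * np.exp((mu0 - mu1) ** 2
                                      / (2 * (sigma0 ** 2 - sigma1 ** 2)))

# Numerical check that the pointwise ratio stays below M on a fine grid.
mu0, s0, mu1, s1 = 0.0, 2.0, 1.0, 1.0
x = np.linspace(-10, 10, 100001)
r0 = np.exp(-(x - mu0) ** 2 / (2 * s0 ** 2)) / (s0 * np.sqrt(2 * np.pi))
r1 = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
M = density_ratio_bound(mu0, s0, mu1, s1)
assert np.all(r1 / r0 <= M + 1e-12)     # M bounds the ratio everywhere
assert np.max(r1 / r0) > M - 1e-3       # and the bound is essentially tight
```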
Example 2 (Nuisance parameter mismatch between pretraining and downstream distributions). Suppose the pretraining $(i = 0)$ and downstream $(i = 1)$ distributions are given by
$$
r ^ {i} (x, y) = r (y | x, w _ {0}, \sigma_ {i} ^ {2}) r (x)
$$
where $r(y|x, w_0, \sigma_i^2) = N(f_{w_0}(x), \sigma_i^2)$, with $f_{w}(x)$ representing a neural network with weights $w$. The pretraining and fine-tuning models are given by
$$
p ^ {i} (x, y | w) = r (y | x, w, \sigma_ {i} ^ {2}) r (x)
$$
Then we have $\lambda^0 (w^*) = \lambda^1 (w^*)$ and, provided $\sigma_0 > \sigma_1$, $M = \sigma_0 / \sigma_1$.
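As in Example 1, the constant can be verified numerically for a fixed $x$: the likelihood ratio $r^1(y|x)/r^0(y|x)$ of two Gaussians with common mean $f_{w_0}(x)$ peaks at $y = f_{w_0}(x)$, where it equals $\sigma_0/\sigma_1$. The mean and noise scales below are arbitrary illustrative values.

```python
import numpy as np

# For fixed x, the ratio of N(f, s1^2) to N(f, s0^2) densities (same mean f,
# sigma1 < sigma0) is maximized at y = f, where it equals s0/s1, i.e. the
# constant M of Proposition 5.1 for this example.
f, s0, s1 = 0.7, 2.0, 1.0           # hypothetical mean f_{w0}(x) and noise scales
y = np.linspace(f - 10, f + 10, 100001)
r0 = np.exp(-(y - f) ** 2 / (2 * s0 ** 2)) / (s0 * np.sqrt(2 * np.pi))
r1 = np.exp(-(y - f) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
assert np.isclose(np.max(r1 / r0), s0 / s1)
```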

Figure 3. Model checkpoints with lower pretraining WBIC (second column) consistently result in better transfer accuracy, both when fine-tuning on the full downstream dataset (third column) and in the few-shot setting (fourth column). Lower pretraining WBIC correlates with better downstream performance for Top row: larger learning rates, Middle row: smaller batch sizes, and Bottom row: increased momentum.

# E. Additional Experiments for mini-Imagenet; see Figure 3
# E.1. Pretraining details
We pretrain a VGG-16 (Simonyan & Zisserman, 2014) on the mini-Imagenet meta-training dataset (Dhillon et al., 2019) using SGD with the cross-entropy loss. We vary SGD hyperparameters such as the learning rate, batch size, and momentum. We use a plain SGD optimizer, without any regularization or learning-rate schedule, to avoid masking effects. We use random crops and random flips for data augmentation. Throughout training we report the pretraining train loss on the augmented data (Figure 3, first column) and the pretraining WBIC computed on the augmented data (Figure 3, second column). Note that we use the same SGLD hyperparameters to compute the WBIC across all experiments. That is, we use step size $\epsilon = 2 \times 10^{-7}$, a chain length of 1,000 iterations, a batch size of 1,024, $\gamma = 1.0$, and $\beta^{*} = \frac{1}{\log n}$, where $n$ is the size of the pretraining dataset. The results are plotted in Figure 3.
Learning rate. For experiments that vary the learning rate in Figure 3 (top row), for each learning rate value in $\{0.0025, 0.005, 0.01\}$ we run SGD without momentum with a fixed batch size of 512 for 50,000 iterations. The WBIC estimations were performed every 2,000 iterations with the SGLD hyperparameters above.
Batch size. For experiments that vary the batch size in Figure 3 (middle row), for each batch size in $\{16,32,64,128,256,512\}$ we run SGD without momentum with a fixed learning rate of 0.01 for 50,000 iterations. The WBIC estimations were performed every 2,000 iterations with the SGLD hyperparameters above.
Momentum. For experiments that vary the momentum in Figure 3 (bottom row), for each momentum in $\{0.0, 0.1, 0.3, 0.5\}$ we run SGD with a fixed learning rate of 0.005 and batch size of 512 for 50,000 iterations. The WBIC estimations were performed every 2,000 iterations with the SGLD hyperparameters above.
# E.2. Fine-tuning details
We perform fine-tuning in two scenarios: full mini-Imagenet meta-test fine-tuning, which uses all 20 classes of the meta-test set, and few-shot meta-test fine-tuning, which consists of multiple tasks constructed from the mini-Imagenet meta-test dataset. In both settings we fine-tune a VGG-16 model, initializing the weights of the VGG backbone with the pretraining weights. The weights of the model head are randomly initialized.
Full meta-test fine-tuning. When fine-tuning on the full mini-Imagenet meta-test dataset, we use all 20 meta-test classes and all 600 examples in each class. We then create an 80/20 train/test split. We use SGD with $L^2$ regularization rate of 0.01 and with a fixed learning rate of 0.0001 for the model backbone and a fixed learning rate of 0.01 for the model head. We fine-tune for 500 steps using a batch size of 32.
Few-shot meta-test fine-tuning. For few-shot fine-tuning, we use only part of the mini-Imagenet meta-test dataset by sampling 5-class classification tasks randomly from the 20 classes available in the meta-test dataset. For each of these 5 classes we sample 5 training examples to create a 5-shot dataset for fine-tuning. During fine-tuning, as with full meta-test fine-tuning, we use a fixed learning rate of 0.0001 for the model backbone and a fixed learning rate of 0.01 for the model head. We perform 100 steps of full-batch gradient descent (GD) with $L^2$ regularization rate of 0.01 and then measure the model performance on 100 random test samples from each class. This constitutes a single task. Finally, we report the resulting accuracy rates averaged over 100 randomly chosen tasks.