Method     | Mortality AUC-ROC | Mortality AUC-PRC | Readmission AUC-ROC | Readmission AUC-PRC
EBR (Ours) | 0.8533 ± 0.09     | 0.4277 ± 0.02     | 0.7030 ± 0.05       | 0.4290 ± 0.02

Table 1. MIMIC-IV: Comparison of average performance with standard deviation across multiple modality missingness rates.

taken care of, existing SOTA models perform notably poorly when the noise rate is increased. Since basis reallocation frees up rank bottlenecks, the novel dimensions can be utilized by SGD for denoising. EBR makes the denoising process explicit through the adversarial training of ψ and g_i to optimize L_md, providing stronger robustness to noise.

Method              | Mortality AUC-ROC | Mortality AUC-PRC | Readmission AUC-ROC | Readmission AUC-PRC
Grape (NeurIPS '20) | 0.8837 | 0.4584 | 0.7085 | 0.4551
+ KD                | 0.9011 | 0.4620 | 0.7231 | 0.4610
+ EBR               | 0.9102 | 0.4799 | 0.7488 | 0.4691
M3Care (SIGKDD '22) | 0.8896 | 0.4603 | 0.7067 | 0.4532
+ KD                | 0.8950 | 0.4700 | 0.7080 | 0.4562
+ EBR               | 0.8987 | 0.4850 | 0.7296 | 0.4832
MUSE (ICLR '24)     | 0.9201 | 0.4883 | 0.7351 | 0.4985
+ KD                | 0.9350 | 0.4993 | 0.7402 | 0.5066
+ EBR               | 0.9380 | 0.5001 | 0.7597 | 0.5138

Table 2. MIMIC-IV: Using knowledge-distilled / EBR backbones for the modality that would otherwise be eliminated by collapse.

Independence from Fusion Strategies: Finally, to show that basis reallocation can be performed agnostic of the fusion strategy, we replace the unimodal encoders of a number of SOTA multimodal models with their knowledge-distilled / EBR counterparts and report their performance in Table 2. Irrespective of the fusion strategy, an out-of-the-box improvement in test performance can be attained by these simple replacements, establishing the generic nature of our results.

4.4. Fusion with Inference-Time Missing Modalities

As discussed in Section 3.4, after reallocating bases to features via EBR, since the latent factors of X are identifiable up to the equivariances shared by the underlying mechanisms, we leverage this property to substitute missing modalities at test-time with those that are available.
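In spirit, this test-time substitution amounts to the following sketch; the function names, the cosine similarity, and the identity stand-in for the learned inverse mechanism h^{-1} are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rank_modalities(latents, ref):
    """Rank modalities by mean cosine similarity of their latent
    encodings g_i(x_i) to a reference modality, across samples."""
    def mean_cos(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return float((a * b).sum(axis=1).mean())
    return sorted(latents, key=lambda m: mean_cos(latents[m], latents[ref]),
                  reverse=True)

def substitute(missing, ranked, available, inv_h):
    """Proxy-encode a missing modality with the closest available
    modality in the ranked list, mapped through inv_h (a stand-in
    for the learned inverse mechanism h^{-1})."""
    idx = ranked.index(missing)
    # scan outward from the missing modality's rank position
    for j in sorted(range(len(ranked)), key=lambda j: abs(j - idx)):
        m = ranked[j]
        if m != missing and m in available:
            return inv_h(available[m])
    raise ValueError("no modality available for substitution")
```

The ranking is computed once after training; at test time, only the nearest-neighbor lookup and the inverse mapping are needed per missing modality.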
Concretely, once training with EBR converges, we proceed as follows: (1) Rank modalities w.r.t. their similarities (computed pairwise across all samples) with a reference modality (chosen as the strongest modality in our experiments) in terms of the latent encoding g_i(x_i); (2) When a modality i of a test sample x goes missing, choose its substitution candidate as the modality j that is closest to it in the ranked list; (3) Compute the proxy unimodal encoding of x_i as h^{-1}(g_j(x_j)). We validate the importance of ranking based on the EBR latents by benchmarking against other substitution strategies in Appendix C.4.

To evaluate our approach, we adopt the experimental setup of MUSE (Wu et al., 2024), following which we mask out the modalities in the MIMIC-IV dataset with probabilities {0.1, 0.2, 0.3, 0.4, 0.7}. We then take the average and standard deviation across these missingness rates and report the results in Table 1. It can be seen that close to 3% improvements can be achieved in both Mortality and Readmission prediction AUC-ROC and AUC-PRC metrics on top of SOTA, by simply replacing the unimodal encoders of the baseline MUSE with our proposed EBR variants and following the ranking and substitution strategy for dealing with missing modalities detailed above.

5. Conclusion and Discussions

We studied the phenomenon of modality collapse from the perspective of polysemanticity
and low-rank simplicity bias. We established, both theoretically and empirically, that modality collapse happens due to low-rank gradient updates forcing the fusion head neurons to polysemantically encode predictive features of one modality with noisy features from another, leading to the eventual collapse of the latter. This work attempts to reveal that multimodal learning may be plagued in ways that are rather unexpected, and consequently, unexplored, thereby leaving room for a number of improvements and future explorations.

For instance, Theorems 2 and 3 are valid when the reduction in conditional cross-entropy provided (amount of unique label information held) by each feature is the same. It remains to be explored how these results can be extended to the case when such reductions differ across features. We conjecture (as also empirically evidenced) that EBR turns the otherwise saddle landscape, obtained after the rank bottlenecks are freed up by knowledge distillation, into a convex one, enabling smoother and more predictable optimization. Developing an understanding of this could lead to deeper insights into the dynamics of the loss landscape geometry of modality collapse.

Acknowledgments

We would like to thank the following individuals at Fujitsu Research of Europe for their independent inputs: Mohammed Amer (for discussions on observations of modality collapse in multimodal genomics), Shamik Bose (for inputs on polysemanticity and capacity in neural networks), and Nuria García-Santa (for help with the MIMIC-IV dataset). We would also like to thank the anonymous reviewers for their thorough analysis and detailed feedback that helped clarify and improve various aspects of our work.
Impact Statement

This paper advances the understanding of modality collapse in multimodal fusion, providing a theoretical foundation and experimental evidence to improve robustness in the presence of missing modalities. By systematically analyzing cross-modal interactions, we demonstrate that mitigating modality collapse enhances performance across diverse applications. In healthcare, our approach can enable reliable diagnosis even when hard- or expensive-to-acquire modalities such as imaging or genomics are missing. In autonomous perception, it could support safer decision-making despite sensor failures. By introducing a scalable and generalizable multimodal learning framework, this work lays the foundation for more robust and deployable AI systems in real-world settings, with the potential to positively impact society. We are not aware of any negative impacts of this work.

References

Ahuja, K., Hartford, J., and Bengio, Y. Properties from mechanisms: an equivariance perspective on identifiable representation learning. In ICLR, 2022.
Arjovsky, M., Bottou, L., Gulrajani, I., and Lopez-Paz, D. Invariant risk minimization. ArXiv, abs/1907.02893, 2019.
Baldi, P. Autoencoders, unsupervised learning, and deep architectures. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, 2012.
Bishop, C. M. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995. doi: 10.1162/neco.1995.7.1.108.
Chaudhuri, A., Mancini, M., Chen, Y., Akata, Z., and Dutta, A. Cross-modal fusion distillation for fine-grained sketch-based image retrieval. In BMVC, 2022.
Chaudhuri, A.,
Georgescu, S., and Dutta, A. Learning conditional invariances through non-commutativity. In ICLR, 2024.
Chen, J. and Zhang, A. HGMF: Heterogeneous graph-based fusion for multimodal data with incompleteness. In ACM SIGKDD, 2020.
De Veaux, R. D. and Ungar, L. H. Multicollinearity: A tale of two nonparametric regressions. In Selecting Models from Data, 1994.
Dou, Q., Liu, Q., Heng, P.-A., and Glocker, B. Unpaired multi-modal segmentation via knowledge distillation. IEEE Transactions on Medical Imaging, 2020.
Elhage, N., Hume, T., Olsson, C., Schiefer, N., Henighan, T., Kravec, S., Hatfield-Dodds, Z., Lasenby, R., Drain, D., Chen, C., Grosse, R., McCandlish, S., Kaplan, J., Amodei, D., Wattenberg, M., and Olah, C. Toy models of superposition. Transformer Circuits Thread, 2022.
Galanti, T., Siegel, Z. S., Gupte, A., and Poggio, T. A. SGD and weight decay secretly minimize the rank of your neural network. In NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning, 2024.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks. Commun. ACM, 2020.
Gulrajani, I. and Hashimoto, T. Identifiability conditions for domain adaptation. In ICML, 2022.
Huben, R., Cunningham, H., Smith, L. R., Ewart, A., and Sharkey, L. Sparse autoencoders find highly interpretable features in language models. In ICLR, 2024.
Huh, M., Mobahi, H., Zhang, R., Cheung, B., Agrawal, P., and Isola, P. The low-rank simplicity bias in deep networks. Transactions on Machine Learning Research, 2023.
Jackson, Z., Souza, C., Flaks, J., Pan, Y., Nicola, H., and Thite, A. Free spoken digit dataset (fsdd). https://github.com/Jakobovski/free-spoken-digit-dataset, 2018.
Javaloy, A., Meghdadi, M., and Valera, I. Mitigating modality collapse in multimodal VAEs via impartial optimization. In ICML, 2022.
Johnson, A. E.
W., Bulgarelli, L., Shen, L., Gayles, A., Shammout, A., Horng, S., Pollard, T. J., Moody, B., Gow, B., Lehman, L.-w. H., Celi, L. A., and Mark, R. G. MIMIC-IV, a freely accessible electronic health record dataset. Scientific Data, 2023.
Kidger, P. and Lyons, T. Universal approximation with deep narrow networks. In COLT, 2020.
Kim, W., Son, B., and Kim, I. ViLT: Vision-and-language transformer without convolution or region supervision. In ICML, 2021.
Lecomte, V., Thaman, K., Schaeffer, R., Bashkansky, N., Chow, T., and Koyejo, S. What causes polysemanticity? An alternative origin story of mixed selectivity from incidental causes, 2024. URL https://arxiv.org/abs/2312.03096.
Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Lee, K., Lee, S., Hahn, S., Hyun, H., Choi, E., Ahn, B., and Lee, J. Learning missing modal electronic health records with unified multi-modal data embedding and modality-aware attention. In Machine Learning for Healthcare Conference, 2023.
Ma, C., Tschiatschek, S., Hernández-Lobato, J. M., Turner, R. E., and Zhang, C. VAEM: a deep generative model for heterogeneous mixed type data. In NeurIPS, 2020.
Ma, M., Ren, J., Zhao, L., Tulyakov, S., Wu, C., and Peng,
X. SMIL: Multimodal learning with severely missing modality. In AAAI, 2021.
Ma, M., Ren, J., Zhao, L., Testuggine, D., and Peng, X. Are multimodal transformers robust to missing modality? In CVPR, 2022.
Nagrani, A., Yang, S., Arnab, A., Jansen, A., Schmid, C., and Sun, C. Attention bottlenecks for multimodal fusion. In NeurIPS, 2021.
Nazábal, A., Olmos, P. M., Ghahramani, Z., and Valera, I. Handling incomplete heterogeneous data using VAEs. Pattern Recognition, 107, 2020.
Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., and Ng, A. Multimodal deep learning. In ICML, 2011.
Parascandolo, G., Kilbertus, N., Rojas-Carulla, M., and Schölkopf, B. Learning independent causal mechanisms. In ICML, 2018.
Poklukar, P., Vasco, M., Yin, H., Melo, F. S., Paiva, A., and Kragic, D. Geometric multimodal contrastive representation learning. In ICML, 2022.
Radhakrishnan, A., Beaglehole, D., Pandit, P., and Belkin, M. Mechanism for feature learning in neural networks and backpropagation-free machine learning models. Science, 2024.
Ramachandram, D. and Taylor, G. W. Deep multimodal learning: A survey on recent advances and trends. IEEE Signal Processing Magazine, 2017.
Scherlis, A., Sachan, K., Jermyn, A. S., Benton, J., and Shlegeris, B. Polysemanticity and capacity in neural networks. ArXiv, abs/2210.01892, 2022.
Shen, Y. and Gao, M. Brain tumor segmentation on MRI with missing modalities. In Information Processing in Medical Imaging. Springer International Publishing, 2019.
Shi, Y., N, S., Paige, B., and Torr, P. Variational mixture-of-experts autoencoders for multi-modal deep generative models. In NeurIPS, 2019.
Shi, Y., Paige, B., Torr, P., and N, S. Relating by contrasting: A data-efficient framework for multimodal generative models. In ICLR, 2021.
Sreelatha, S. V., Kappiyath, A., Chaudhuri, A., and Dutta, A. DeNetDM: Debiasing by network depth modulation. In NeurIPS, 2024.
Sutter, T. M., Daunhawer, I., and Vogt, J. E. Generalized multimodal ELBO.
In ICLR, 2021.
Tian, Y., Krishnan, D., and Isola, P. Contrastive representation distillation. In ICLR, 2020.
Tsai, Y.-H. H., Bai, S., Liang, P. P., Kolter, J. Z., Morency, L.-P., and Salakhutdinov, R. Multimodal transformer for unaligned multimodal language sequences. In Korhonen, A., Traum, D., and Màrquez, L. (eds.), ACL, 2019.
Vielzeuf, V., Lechervy, A., Pateux, S., and Jurie, F. CentralNet: a multilayer approach for multimodal fusion. In ECCV Workshops, 2018.
Wang, H., Chen, Y., Ma, C., Avery, J., Hull, L., and Carneiro, G. Multi-modal learning with missing modality via shared-specific feature modelling. In CVPR, 2023.
Wang, W., Tran, D., and Feiszli, M. What makes training multi-modal classification networks hard? In CVPR, June 2020.
Wu, Z., Dadu, A., Tustison, N., Avants, B., Nalls, M., Sun, J., and Faghri, F. Multimodal patient representation learning with missing modalities and labels. In ICLR, 2024.
Xie, Q., Hovy, E. H., Luong, M.-T., and Le, Q. V. Self-training with noisy student improves ImageNet classification. In CVPR, 2019.
Xue, Z. and Marculescu, R. Dynamic multimodal fusion. In Multi-Modal Learning and Applications Workshop (MULA), CVPR, 2023.
https://arxiv.org/abs/2505.22483v1
You, J., Ma, X., Ding, Y., Kochenderfer, M. J., and Leskovec, J. Handling missing data with graph representation learning. In NeurIPS, 2020.
Zhang, C., Chu, X., Ma, L., Zhu, Y., Wang, Y., Wang, J., and Zhao, J. M3Care: Learning with missing modalities in multimodal healthcare data. In ACM SIGKDD, 2022.
Zhao, F., Zhang, C., and Geng, B. Deep multimodal data fusion. ACM Comput. Surv., 2024.
Zhou, Y., Wang, X., Chen, H., Duan, X., and Zhu, W. Intra- and inter-modal curriculum for multimodal learning. In ACM International Conference on Multimedia, 2023.

A. Extended Literature Review

Missing Modalities: Existing SOTA multimodal fusion approaches do not account for the possibility of missing modalities (Ramachandram & Taylor, 2017; Nagrani et al., 2021; Shi et al., 2021; Chaudhuri et al., 2022; Zhao et al., 2024). Although this limitation was identified in works as early as Ngiam et al. (2011), the pattern of missingness, i.e., which modality(ies) could go missing, was assumed to be known at training time. Later, a number of graph-based techniques (You et al., 2020; Zhang et al., 2022; Wu et al., 2024), including ones that use heterogeneous graphs to model different missingness patterns (Chen & Zhang, 2020), alongside transformer-based (Tsai et al., 2019; Ma et al., 2022) and Bayesian meta-learning-based (Ma et al., 2021) approaches, attempted to operate without this assumption. Other approaches, such as those that facilitate direct interaction among modality-specific raw inputs (Kim et al., 2021; Lee et al., 2023), and ones based on self-supervised domain adaptation (Shen & Gao, 2019), also provided promising results. Interestingly, it was observed by Ma et al.
(2022) that multimodal representations are strongly dependent on the fusion strategy, and that the optimal way of fusing modalities depends on the data, due to which the authors recommended that fusion strategies should be distribution-specific. However, the limitations introduced by such dependencies were also accounted for in related multimodal learning literature to address the challenge of resource-efficient utilization of modalities, which was tackled through dynamic data-dependent fusion (Xue & Marculescu, 2023). Despite the existence of a number of bespoke techniques for dealing with missing modalities, SOTA approaches such as ShaSpec (Wang et al., 2023), in addition to such algorithms, were also benchmarked against baselines based on GANs (Goodfellow et al., 2020) and autoencoders (Baldi, 2012), which the authors found to be similarly competitive. Although there have been some works that explored the applicability of knowledge distillation to dealing with missing modalities, their purposes have been scoped to addressing issues such as compressing the extra parameter overhead due to multimodal fusion (Dou et al., 2020), or dynamically weighting data points within modalities and contributions from loss terms (Zhou et al., 2023). In this work, we use knowledge distillation as a tool to theoretically study the fundamental processes in optimization that govern modality collapse, and show that it can avoid collapse by implicitly freeing up rank bottlenecks that lead to cross-modal entanglements between noisy and predictive features.

B. Proofs

Lemma 1 (Cross-Modal
Polysemantic Collision). As the number of modalities increases, the fraction of polysemantic neurons encoding features from different modalities, for a given depth and width, increases quadratically in the number of modalities as follows:
\[
p(w_p) \;\geq\; \frac{m(m-1)\,(\dim f_{\min})^2}{\left(\sum_{i=1}^{m} \dim f_i\right)^2},
\]
where p(w_p) is the probability of a neuron being polysemantic via superposition, and f_min is the modality-specific encoder with the smallest output dimensionality.

Proof. The number of ways any two features can be selected by φ from X such that both belong to different modalities is at least \binom{m}{2}(\dim f_{\min})^2, since there are \binom{m}{2} ways of choosing modality-pairs, and there are at least (\dim f_{\min})^2 ways of choosing feature pairs in each such combination. Now, it is these pairs of features that lead to cross-modal polysemantic collisions (through neuron subspaces w_p) during fusion in φ. Let the ambient dimension of the input to φ be F_dim = \sum_{i=1}^{m} \dim f_i. Then, for a given depth and width, the probability that a polysemantic weight subspace represents features from two different modalities is:
\[
p(w_p) \;\geq\; \frac{\binom{m}{2}\,(\dim f_{\min})^2}{\binom{F_{\dim}}{2}} \;\geq\; \frac{m(m-1)\,(\dim f_{\min})^2}{\left(\sum_{i=1}^{m} \dim f_i\right)^2}.
\]
This completes the proof of the lemma.

Lemma 3 (Entanglement by Feature Type). If the latent factors underlying the modalities are sufficiently complementary to each other in terms of predictivity of the label y, i.e., for any pair of modalities i and j,
\[
\sum_{p}\sum_{q} z_i^p \cdot z_j^q < K,
\]
where p and q are indices over the latent factors of modalities i and j respectively, and K is a constant, then the noisy features from one modality are more likely to be entangled with predictive features of another through polysemantic weights in the fusion head, i.e., for any pair of noisy (z_ϵ) and predictive (z_y) features from the same modality, the following will hold:
\[
\frac{\sum_{w} z_y \cdot w}{\sum_{w} z_\epsilon \cdot w} \;\leq\; 1,
\]
where w denotes weight subspaces representing a different modality in φ.

Proof.
Since noisy features are closer to random, they can get entangled with neurons representing predictive features from any modality if the corresponding neuron allows features up to (1 − K) units of deviation, according to the Johnson–Lindenstrauss lemma (Elhage et al., 2022), i.e., satisfying the following:
\[
z_\epsilon \cdot w \geq K; \qquad z_y \cdot w < K.
\]
This implies that the set of noisy features z_ϵ would be more closely aligned with w than the set of predictive features z_y, making the ratio of the sums over all w ∈ φ, of the latter to the former, at most 1. With the elimination of noisy features, such entanglements following from superposition become less likely among predictive features across modalities, since they would normally require dedicated dimensions with strong deviations from perfect orthogonality, i.e., monosemantic neurons. This completes the proof of the lemma.

Theorem 1 (Interference). As the number of cross-modal polysemantic collisions increases, the fraction of predictive conjugate features contributing to the reduction of the task loss decreases, resulting in the following limit:
\[
\lim_{p(w_p) \to 1} \sum_{\forall z_y \in X} \frac{\partial}{\partial w_p} L(\varphi(z_y), y) = 0,
\]
where z_y denotes predictive conjugate features in X.

Proof. Let z_ϵ be the noisy conjugate of z_y. As the number of polysemantic collisions increases, so does the proportion of polysemantic neurons, i.e., p(w_p) → 1. Now, from Lemma 3, we know that noisy features are more likely to get into cross-modal polysemantic entanglements, which implies that they would exhibit
a higher similarity (dot-product) with the polysemantic subspace w_p. Additionally, since z_y and z_ϵ are conjugate to each other, z_y would exhibit a low similarity (dot-product) with w_p, activating in the opposite direction as that of z_ϵ. Again, from Lemma 3, since z_ϵ is entangled with w_p, when present in the input, it would always activate. Based on this, the conjugate activation equation (without the non-linearity) of z_y can be written as:
\[
\lim_{p(w_p)\to 1} \varphi(z_y) = \varphi(z_y) + \varphi(z_\epsilon) = \underbrace{w_p \cdot z_y}_{\text{large } -\text{ve}} + \underbrace{w_p \cdot z_\epsilon}_{\text{large } +\text{ve}} = 0,
\]
meaning that the net activation of z_y along w_p is 0, which ultimately implies that:
\[
\lim_{p(w_p)\to 1} \sum_{\forall z_y \in X} \frac{\partial}{\partial w_p} L(\varphi(z_y), y) = 0.
\]
This completes the proof of the theorem.

B.1. Rank Bottleneck

Definition 2 (Degree of Polysemanticity). We quantitatively define the degree of polysemanticity (Elhage et al., 2022; Scherlis et al., 2022), γ, of a weight subspace w as the ratio of the number of features in the input distribution X encoded in the subspace to the number of dimensions of the subspace, i.e.,
\[
\gamma(w) = \frac{|w \cap X|}{\dim(w)},
\]
where |w ∩ X| is the number of features in X that are encoded in w.

Polysemantic neurons lie on a low-rank manifold that the weights of each layer converge to under SGD. So, since all optimization happens along this low-rank polysemantic manifold, it is not possible for φ to avert the cancellation effect among conjugate features, of which the noisy counterparts may get encoded as part of a polysemantic neuron.

Lemma 2 (Gradient Rank). The rank of gradient updates across iterations of SGD at layer l is a convergent sequence with the following limit:
\[
\lim_{n \to \infty} \operatorname{rank}(\nabla_l L_n) \;\propto\; \operatorname{rank}\!\left(\sum_{x \in X} \nabla \varphi_l(x) \nabla \varphi_l(x)^{T}\right),
\]
where φ_l(x) and ∇_l L_n are respectively the output and the gradient of the loss L at layer l at the n-th iteration of SGD, and X is the set of all inputs to layer l across the dataset.

Proof. Step 1: The rank of each layer decreases with every iteration of SGD (Galanti et al., 2024).
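To make the average gradient outer product on the right-hand side concrete, here is a minimal numpy sketch (a toy two-layer ReLU network with arbitrary dimensions, purely illustrative) that computes \sum_x \nabla\varphi_l(x)\nabla\varphi_l(x)^T at the input layer:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 5, 8, 50                       # input dim, hidden dim, #samples
W1 = rng.standard_normal((h, d))
w2 = rng.standard_normal(h)
X = rng.standard_normal((n, d))

def grad_wrt_input(x):
    """Gradient of the scalar output f(x) = w2 . relu(W1 x) w.r.t. x."""
    mask = (W1 @ x > 0).astype(float)
    return W1.T @ (w2 * mask)

# Average gradient outer product (AGOP) over the dataset
agop = sum(np.outer(g, g) for g in map(grad_wrt_input, X)) / n
```

Being a sum of outer products, the AGOP is symmetric positive semi-definite with rank at most d; it is the rank of this matrix that bounds the limiting rank of the gradient updates in the lemma.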
Step 2: Every layer converges to a quantity proportional to the average gradient outer product (Radhakrishnan et al., 2024).

Theorem 4 (Depth-Rank Duality (Sreelatha et al., 2024)). Let A = [A_0, A_1, ..., A_n] be the attribute subspace of X with increasing ranks, i.e., rank(A_0) < rank(A_1) < ... < rank(A_n), such that every A ∈ A is maximally and equally informative of the label Y, i.e., I(A_0, Y) = I(A_1, Y) = ... = I(A_n, Y). Then, across the depth of the encoder ϕ, SGD yields a parameterization that optimizes the following objective:
\[
\underbrace{\min_{\phi, f} L(f(\phi(X)), Y)}_{\text{ERM}} \;+\; \min_{\phi} \sum_{l} \left\| \phi^{[l]}(\tilde{X}) - \Omega_d \odot \mathcal{A} \right\|_2,
\]
where L(·,·) is the empirical risk, f(·) is a classifier head, ϕ^{[l]}(·) is the output of the encoder ϕ (optimized end-to-end) at depth l, ∥·∥_2 is the l2-norm, ⊙ is the element-wise product, X̃ is the l2-normalized version of X, Ω_d = [1_{π_1(l)}; 1_{π_2(l)}; ...; 1_{π_n(l)}], 1_π is a random binary function that outputs 1 with probability π, and π_i(l) is the propagation probability of A_i at depth l, bounded as:
\[
\pi_i(l) = O\!\left(\operatorname{rank}(\phi^{[l]})^{\,r_i - d}\right),
\]
where rank(ϕ^{[l]}) is the effective rank of the ϕ^{[l]} representation space, and r_i = rank(A_i).

Theorem 2 (Polysemantic Bottleneck). Let W be the weight matrix at a given layer of φ, and w ≤ W be any subspace in W. When the reduction in
conditional cross-entropy H(x; y | z) provided (amount of unique label information held) by each feature is the same, i.e., I(x; y | z_1) = I(x; y | z_2) = ... = I(x; y | z_k), at any iteration n of SGD, the norm of the difference between w and the average gradient outer product (AGOP) of the complete weight matrix W is bounded as follows:
\[
\left\| w - \sum_{x \in X} \nabla \varphi_W(x) \nabla \varphi_W(x)^{T} \right\| \;\leq\; \gamma(w)^{-1/n},
\]
where γ(w) is the degree of polysemanticity of w.

Proof. We know that the weights of a neural network, when optimized with SGD, converge to a value proportional to the Average Gradient Outer Product (AGOP), which essentially represents those sets of features which, when minimally perturbed, produce a large change in the output (Radhakrishnan et al., 2024).

Additionally, SGD with weight decay minimizes the ranks of the weight matrices (Galanti et al., 2024), and this minimization is more pronounced as we go deeper into the neural network (Huh et al., 2023), as formalized in Theorem 4 by Sreelatha et al. (2024). In other words, the deeper we go into a network, the more likely it is for the representations to be of a lower rank (Huh et al., 2023; Sreelatha et al., 2024), and this rank decreases with each successive iteration (Galanti et al., 2024). For each layer, then, there is a subspace of a specific rank that is updated through backpropagation, and according to Lemma 2, since the rank of such updates decreases with iterations, the lower the rank of this subspace, the greater its cumulative gradient across iterations, i.e., the more likely it is to be learned by gradient descent and the more likely it is that the weights of the particular layer would converge to this subspace.

If two features equally minimize the empirical risk, and their joint encoding has no local improvement in the minimization of the marginal loss, extending the result by Galanti et al.
(2024), SGD on the fusion operator would prioritize the encoding of the modality with the lower rank of the two as follows:
\[
\min_{\bar{W}^{\varphi(l)}} \sum_{m \in M} \left\| \frac{W_m^{\varphi(l)}}{\left\| W_m^{\varphi(l)} \right\|} - \bar{W}^{\varphi(l)} \right\| \;\leq\; K \cdot (1 - 2\mu\lambda)^{nl}, \tag{1}
\]
where W_m^{φ(l)} is the subspace of the weight matrix at layer l of the fusion operator φ corresponding to the modality m, µ is the learning rate, n is the SGD iteration, and W̄^{φ(l)} is the target weight matrix that the fusion operator converges towards at layer l such that:
\[
\operatorname{rank}(\bar{W}^{\varphi(d)}) = \sum_{m \in M_c} \operatorname{rank}(\bar{W}^{f_m(o)}),
\]
where W̄^{f_m(o)} is the weight matrix at the output layer of the modality-specific encoder f_m(o) of modality m, and M_c is the set of modalities that survive collapse. Now, for polysemantic subspaces w, since the degree of polysemanticity γ(w) > 1, we can extend Equation (1) as:
\[
\min_{\bar{W}^{\varphi(l)}} \sum_{m \in M} \left\| \frac{W_m^{\varphi(l)}}{\left\| W_m^{\varphi(l)} \right\|} - \bar{W}^{\varphi(l)} \right\| \;\leq\; K \cdot (1 - 2\mu\lambda)^{nl} \;\leq\; \gamma(w)^{-1/n}. \tag{2}
\]
According to the condition I(x; y | z_1) = I(x; y | z_2) = ... = I(x; y | z_k), since basins corresponding to multimodal combinations all lie at the same depth, their empirical risks are essentially the same, and so are the gradients from the ERM term. Now, as a result of modality collapse, we know that one of the basins is steeper than the rest, meaning it has a higher local gradient. Since the empirical risk is constant across all the
basins / multimodal combinations, the steepness must come from the rank minimization term. Therefore, the combination with a steep entry must lead to a lower rank solution. As observed by Javaloy et al. (2022), no local improvement in the minimization of the marginal loss may be due to conflicting gradients in the local parameterizations for the two modalities. Note that this does not imply that the two modalities are globally conflicting; it is only the local encodings of the two that conflict with each other. Specifically, following from Equations (1) and (2) and Lemma 2, the norm of the difference between the polysemantic subspace w and the AGOP of the ambient weight matrix W can be bounded as:
\[
\left\| w - \sum_{x \in X} \nabla \varphi_W(x) \nabla \varphi_W(x)^{T} \right\| \;\leq\; K \cdot (1 - 2\mu\lambda)^{nl}. \tag{3}
\]
Therefore, for polysemantic bases, at any given iteration of SGD, the difference with the AGOP can be more tightly bounded than for monosemantic bases. Formally, combining Equations (1) to (3), we have:
\[
\left\| w - \sum_{x \in X} \nabla \varphi_W(x) \nabla \varphi_W(x)^{T} \right\| \;\leq\; K \cdot (1 - 2\mu\lambda)^{nl} \;\leq\; \gamma(w)^{-1/n}. \tag{4}
\]
In other words, a basis formed with polysemantic neurons is more similar to the AGOP than one formed with monosemantic neurons, provided the conditional cross-entropy H(x; y | z) reduction provided (amount of unique label information held) by each feature is the same, i.e., I(x; y | z_1) = I(x; y | z_2) = ... = I(x; y | z_k). This completes the proof of the theorem.

B.2. Knowledge Distillation Frees Up Rank Bottlenecks

As described earlier, the cause for collapse is cross-modal interference between noisy and predictive features. Here, we find that knowledge distillation implicitly remedies this problem. Knowledge distillation converges when the noisy and the predictive subspaces have been sufficiently disentangled to the point that the available rank can be assigned completely towards modeling the teacher modality, after effectively having discarded as many of the noisy features as possible.
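A minimal sketch of such a cross-modal distillation objective, in our own simplified rendering (the MSE alignment term and the weight kappa are illustrative choices, not the paper's exact loss):

```python
import numpy as np

def distillation_loss(student_z, teacher_z, logits, labels, kappa=0.5):
    """Task cross-entropy plus an alignment term that pulls the student
    modality's latent toward the (stronger) teacher modality's latent."""
    # numerically stable log-softmax cross-entropy
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    task = -log_probs[np.arange(len(labels)), labels].mean()
    # cross-modal alignment: implicitly denoises the student encoding
    align = ((student_z - teacher_z) ** 2).mean()
    return task + kappa * align
```

When the student and teacher latents are already aligned, the objective reduces to the pure task loss; the alignment term only contributes while the student encoding still carries features the teacher has discarded.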
There is some empirical evidence on this from the self-distillation literature (Xie et al., 2019). With the disentangled and denoised representations obtained from knowledge distillation, the causal factors of the previously eliminated modalities can expand (the inverse of collapse) the multimodal representation space, utilizing previously unused dimensions for encoding features that effectively reduce the loss. Previously, the effect of the semantically relevant features from the eliminated modalities would not be observable, since the superposition with the noisy features would cancel out (when marginalized across all features) any conditional reduction in loss that the causal factors would have induced.

Theorem 3 (Dynamic Convergence Bound). When the inputs to φ are dynamic (for instance, when the unimodal representations are aligned via cross-modal knowledge distillation) under some distance metric d, then at any iteration n of SGD, the norm of the difference between w and the AGOP of W is bounded as follows for all modalities i, j ∈ M and datapoints x ∈ X:
\[
\lim_{d(\tilde{x}_i, \tilde{x}_j) \to \epsilon} \left\| w - \sum_{x \in X} \nabla \varphi_W(x) \nabla \varphi_W(x)^{T} \right\| \;\leq\; \kappa^{-1/n},
\]
where x̃_i, x̃_j = f_i(x_i), f_j(x_j), κ is a constant for a given depth proportional to the AGOP along the entire weight matrix W at that depth, ϵ is the maximum permissible bound on the distance between any pair of modality-specific encodings, and both W and w are functions of x_i and x_j, as they result from backpropagation on their predictions on X.

Proof. Every modality consists of both noisy and predictive features. If fusion collapses to a specific modality (target),
it means that the modality contains more predictive information and less noise than the rest. Knowledge distillation to align the representations of the other modalities with the target would thus denoise the other modalities, allocating a larger fraction of the feature space of their modality-specific encodings to predictive features. Since noisy features are closer to random, they can get entangled with predictive features from any modality if the corresponding neuron has a slight deviation from perfect orthogonality, according to the Johnson–Lindenstrauss lemma (Elhage et al., 2022). With the elimination of noisy features, such entanglements following from superposition become less likely among predictive features across modalities, since they would normally require dedicated dimensions with strong deviations from perfect orthogonality, i.e., monosemantic neurons. Therefore, as the unimodal representations get closer to each other through the implicit denoising mechanism of knowledge distillation, SGD gets increasingly compelled to parameterize the fusion head in a monosemantic manner.

As the representations from different modalities get closer to each other under the distance metric d, which is what effectively happens during cross-modal knowledge distillation, the proportion of monosemantic neurons in W increases. This results in the AGOP of W diverging away from its polysemantic subspaces w. In other words, knowledge distillation implicitly disentangles the cross-modal interferences by freeing rank bottlenecks and encouraging necessary monosemanticity, allowing for independent, modality-wise denoising of features along novel dimensions. Put differently, when cross-modal polysemantic weight matrices are rank-bottlenecked, the only solution to minimize the loss further is to allocate the noisy features new, independent dimensions in the latent space. However, this causes an increase in representation rank.
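This rank-increase argument can be illustrated numerically: writing a noisy feature in superposition along an existing basis direction leaves the representation rank unchanged, while allocating it an independent (orthogonal) dimension raises the rank by one. A toy sketch, with all constructions our own:

```python
import numpy as np

def effective_rank(Z, tol=1e-8):
    """Number of singular values above a relative tolerance."""
    s = np.linalg.svd(Z, compute_uv=False)
    return int((s > tol * s[0]).sum())

rng = np.random.default_rng(1)
n, k, D = 100, 3, 8
C = rng.standard_normal((k, D))           # k predictive feature directions
base = rng.standard_normal((n, k)) @ C    # rank-k representations in R^D
noise = rng.standard_normal((n, 1))       # one noisy feature

# Superposed: noise reuses an existing direction -> rank stays k
superposed = base + noise @ C[:1]

# Independent: noise gets a direction orthogonal to all of C -> rank k + 1
Q, _ = np.linalg.qr(C.T)                  # orthonormal basis of span(C)
v = rng.standard_normal(D)
v -= Q @ (Q.T @ v)                        # project out existing directions
independent = base + noise @ v[None, :]
```

The superposed encoding stays inside the span of the existing feature directions, whereas the independent allocation strictly enlarges the column space, mirroring the loss-vs-rank trade-off at the saddle point.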
To counteract this, knowledge distillation frees up the rank bottlenecks by down-weighting the rank-regularization term. An intuition for this can be developed based on the low-rank simplicity bias. A weighted general case of the Depth-Rank Duality result of Sreelatha et al. (2024) follows as a consequence of cross-modal interferences, and can be written as:
$$\min_{\phi, f} \underbrace{\mathcal{L}(\varphi(X), Y)}_{\text{ERM}} + (1 - \kappa)\,\alpha \min_{\phi} \sum_{d} \mathrm{rank}(\varphi_l),$$
where $\kappa$ is the strength of cross-modal knowledge distillation. Now, when cross-modal polysemantic interference happens despite there being extra available dimensions in the ambient space, it indicates a faulty capacity allocation (Scherlis et al., 2022) of the corresponding neurons by SGD, which follows from the Johnson–Lindenstrauss lemma (Elhage et al., 2022). This happens because SGD, by default, performs the aforementioned implicit weighted rank-regularization alongside ERM, with equal weights on the ERM and rank-regularization terms. Knowledge distillation reduces the weight on the rank-regularization term, freeing up other dimensions for exploration, potentially containing higher-rank solutions involving more modalities. The rank de-regularization happens because knowledge distillation has to denoise the student modality in order to align its representation with that of the teacher (Bishop, 1995). Once the optimizer converges to the modality-collapsed solution at the saddle point, the value of the loss and the rank of the solution balance each other out. Now, to explore new dimensions from the saddle point, the optimizer has to explore a novel direction in the parameter
space, which leads to a reduction in loss but a simultaneous increase in rank. The reason behind this saddle geometry is the presence of noisy features from one modality entangled with the predictive features from another, which results in an adversarial minimax game between the two. On either side of the saddle point, along the unexplored dimensions, lie predictive features of the former modality that could potentially minimize the task loss, but are not taken into account due to rank regularization. Since the rank-regularization happens when $d(\tilde{x}_i, \tilde{x}_j) \to \epsilon$, it implies that knowledge distillation down-weights this regularization by disentangling and denoising the bases of the modalities from which the noisy features originate. Specifically, under a dynamic input space approaching a bounded neighborhood $\epsilon$, the number of features encoded in any polysemantic subspace is bounded by a constant (say, $k_w$) as $n \to \infty$. So, the RHS in Theorem 2 can be rewritten as:
$$\lim_{n \to \infty} \gamma(w)^{-1/n} = \left( \frac{|w \cap X|}{\dim(w)} \right)^{-1/n} = \left( \frac{k_w}{\dim(w)} \right)^{-1/n} = \kappa^{-1/n},$$
where $\kappa = k_w / \dim(w)$ is a constant, since both $k_w$ and $\dim(w)$ are constants. Finally, following from the above, when $d(\tilde{x}_i, \tilde{x}_j) \to \epsilon$, Equation (4) from the proof of Theorem 2 can be expressed in terms of $\kappa$ as:
$$\lim_{d(\tilde{x}_i, \tilde{x}_j) \to \epsilon} \left\lVert w - \sum_{x \in X} \nabla \varphi_W(x) \nabla \varphi_W(x)^T \right\rVert \leq K \cdot (1 - 2\mu\lambda)^{nl} \leq \gamma(w)^{-1/n} = \kappa^{-1/n}.$$
This completes the proof of the theorem.

B.3. Additional Remarks

Unequal conditional cross-entropy across features: According to the condition $I(x; y \mid z_1) = I(x; y \mid z_2) = \ldots = I(x; y \mid z_k)$, since the basins corresponding to multimodal combinations all lie at the same depth, their empirical risks are essentially the same, and so are the gradients from the ERM term. Now, as a result of modality collapse, we know that one of the basins is steeper than the rest, meaning it has a higher local gradient. Since the empirical risk is constant across all the basins / multimodal combinations, the steepness must come from the rank minimization term in Theorem 4.
Therefore, the combination with a steep entry must lead to a lower-rank solution. When the equality is not met across all features, the low-rank / steepness condition is trivially satisfied by the existence of a lower-dimensional subspace of $z_i$'s with lower conditional mutual information $I(x; y \mid z_i)$, and deriving the upper bound on the rank in terms of the AGOP is no longer necessary. The rank of the subspace comprising features with lower relative mutual information could act as a reasonable estimate of the rank of the final weights to which SGD would converge. By considering the condition with the equality, we analyze the boundary case: even when such a subspace with low conditional mutual information cannot be identified, it is still possible to upper-bound the rank of the weight matrix.

Identifying Latent Factors and Substitutability: If the latent factors are identifiable from the data up to some symmetry of the latent distribution (Gulrajani & Hashimoto, 2022), then the substitutability result also holds up to the actions of that symmetry group. In other words, substitutability is directly contingent on identifiability, i.e., the existence of symmetries in the latent distribution can affect the substitutability of latent factors among modalities. Segregating predictive and noisy features from the set of latent factors can be done by learning
to discover independent causal mechanisms on the aggregate of all modalities (Parascandolo et al., 2018). The degree to which the latent factor representations of the individual modalities can be compressed, i.e., the value of $\epsilon$ in Theorem 3, depends on the size / rank of the invariant (Arjovsky et al., 2019) subspace.

C. Additional Experimental Settings and Results

C.1. Experimental Settings

Dataset Details: MIMIC-IV contains information about 180,000 patients across their 431,000 admissions to the ICU. Following Wu et al. (2024), we use clinical notes, lab values, demographics (age, gender, and ethnicity), diagnoses, procedures, and medications as the set of input modalities. The task is to perform Mortality (whether the patient will pass away within the 90 days after discharge) and Readmission (whether the patient will be readmitted within the 15 days following discharge) prediction for a given patient. avMNIST comprises 1,500 samples of images and audio, taken from MNIST (Lecun et al., 1998) and the Free Spoken Digits Dataset (Jackson et al., 2018), where the task is to predict the labels of the input digits from 0 to 9. We adopt the experimental setup of Wang et al. (2023) for avMNIST.

Implementation Details: The two hidden layers of $\psi$ have output dimensionalities 512 and 256, respectively. The hidden layers of $h$ have output dimensionalities 1024 and 512, respectively, whereas those of $h^{-1}$ are 512 and 1024. The model was trained for 1200 epochs with an initial learning rate of 0.01, decayed at a rate of 0.9 every 100 epochs. We interleave between the optimization of $\mathcal{L}_{md}$ and $\mathcal{L}_{sem}$ every 10 epochs.

C.2.
Results on avMNIST

Following on from Section 4, we list the empirical results on avMNIST as follows:
• Presence of rank bottlenecks: Figure 10 (a) and (c)
• Effectiveness of basis reallocation:
  – Rank and Similarity with the Multimodal Representation: Figure 10
  – Optimization Dynamics: Figure 9
  – Denoising Effect of Basis Reallocation: Figure 8
  – Independence from Fusion Strategies: Table 4
• Fusion with Inference-time Missing Modalities: Table 3

The analytical conclusions for avMNIST are the same as those discussed in Section 4, since the patterns of observations are highly consistent between avMNIST and MIMIC-IV. The only experiment not included for avMNIST is the one corresponding to Figure 4 for MIMIC-IV, since avMNIST has only two modalities, and hence it is not possible to monitor the effect of increasing the number of modalities on the loss curves.

Method | 95% | 90% | 85% | 80% (Acc @ Audio Missingness Rate)
Autoencoder (ICMLW ’12) | 89.78 | 89.33 | 89.78 | 88.89
GAN (ACM Comm ’20) | 89.11 | 89.78 | 91.11 | 93.11
Full2miss (IPMI ’19) | 90.00 | 91.11 | 92.23 | 92.67
Grape (NeurIPS ’20) | 89.65 | 90.42 | 91.15 | 91.37
SMIL (AAAI ’21) | 92.89 | 93.11 | 93.33 | 94.44
ShaSpec (CVPR ’23) | 93.33 | 93.56 | 93.78 | 94.67
MUSE (ICLR ’24) | 94.21 | 94.36 | 94.82 | 94.93
EBR (Ours) | 95.30 | 95.57 | 95.89 | 95.93

Table 3. avMNIST: Comparison with SOTA on dealing with the missing audio modality across different missingness rates at test time, following the baseline setup of Wang et al. (2023).

C.3. Sequence of Distillation

The results of various strategies for sequencing the teacher modality for cross-modal knowledge distillation are reported in Table
5. Based on these observations, we choose weakest-to-strongest as the sequence for benchmarking our KD-based implicit basis reallocation mechanism.

Figure 8. avMNIST: With increasing noise rate, existing approaches suffer from modality collapse due to noisy cross-modal entanglements. With improved strategies of basis reallocation, implicit (KD) or explicit (EBR), robustness to noise and the consequent prevention of modality collapse can be ensured.

Figure 9. avMNIST: Semantic loss minimization comparison between vanilla multimodal learning and using implicit (KD) and explicit (EBR) basis reallocation.

C.4. Baselines for Substitutability

We design the following baselines for comparison against our EBR-based modality substitution approach: zeros, random sampling, nearest-neighbor modality, and train set average. Using four modalities from MIMIC-IV (two weak and two strong: diagnosis, lab values, clinical notes, and medication, respectively), we report the average with standard deviation over the 15 possible missingness patterns on the AUC-ROC metric for Mortality prediction in Table 6. The target represents the model trained specifically on the subset of modalities that do not go missing, i.e., its performance would always be higher than that of any baseline, since it does not have to deal with the distribution shift that comes from modalities going missing at test time.

C.5. Multicollinearity

We expect to see increased levels of multicollinearity as the number of modalities increases, if the dimensionality of the representation space remains constant. As correctly conjectured by the reviewer, we would expect multicollinearity to be more pronounced in the deeper layers of the fusion head. The reason behind this is that although there may be dependencies

Figure 10.
avMNIST: Multimodal rank and representation similarities of modalities with the multimodal representation, under implicit (KD) and explicit (EBR) reallocation mechanisms, across different strengths $\beta$ of the modality that gets eliminated under collapse.

Method | 95% | 90% | 85% | 80% (Acc @ Audio Missingness Rate)
SMIL (AAAI ’21) | 92.89 | 93.11 | 93.33 | 94.44
+ KD | 92.95 | 93.97 | 94.06 | 94.70
+ EBR | 93.77 | 94.02 | 94.51 | 94.96
ShaSpec (CVPR ’23) | 93.33 | 93.56 | 93.78 | 94.67
+ KD | 95.02 | 95.16 | 95.30 | 95.45
+ EBR | 95.30 | 95.57 | 95.89 | 95.93
MUSE (ICLR ’24) | 94.21 | 94.36 | 94.82 | 94.93
+ KD | 94.98 | 95.02 | 95.40 | 95.55
+ EBR | 95.02 | 95.26 | 95.61 | 95.70

Table 4. avMNIST: Using knowledge-distilled / EBR backbones for the modality that would otherwise be eliminated.

Method | AUC-ROC | AUC-PR
Strongest only | 0.9196 | 0.4875
Strongest-to-weakest | 0.8651 | 0.4130
Random | 0.9005 | 0.4685
Simultaneous | 0.9102 | 0.4786
Weakest-to-strongest | 0.9350 | 0.4993

Table 5. MIMIC-IV: AUC-ROC and AUC-PR on Mortality Prediction for various sequences of knowledge distillation.

Method | AUC-ROC
Zeros | 0.5110 ± 0.18
Random | 0.6139 ± 0.15
Rep-NN | 0.7150 ± 0.08
Late Fusion | 0.6990 ± 0.06
Avg w/o cls | 0.5312 ± 0.23
Avg w/ cls | 0.7396 ± 0.09
EBR (Ours) | 0.7829 ± 0.05
Target | 0.7844 ± 0.02

Table 6. MIMIC-IV: Comparison of the substitutability performance of EBR with baselines. Experiments performed over a subset of 4 (2 strong and 2 weak) input modalities, based on which the average over the 15 possible missingness patterns with standard deviation is reported.

among features across modalities, they may not be exactly linear. As they propagate deeper into the
fusion head, it is more likely that those non-linear dependencies would be resolved and linearized in the final representation space prior to classification. Theoretically, the bound in Theorem 2 is derived based on the AGOP, i.e., $\sum_{x \in X} \nabla \varphi_W(x) \nabla \varphi_W(x)^T$, being a low-rank subspace in $W$ (corresponding to an independent set of features), as discussed in (Radhakrishnan et al., 2024), which is also required since one-to-one dimension-to-feature mappings needed to detect the presence of multicollinearity may exist in neural networks (De Veaux & Ungar, 1994). This aligns with the condition for regression multicollinearity that $X^T X$ should not be a full-rank matrix. To empirically confirm this, we calculate the variance inflation factor (VIF) with an increasing number of modalities on our trained representation space. We report the average VIF across features in Table 7. With the increasing number of modalities, multicollinearity (VIF) increases in all cases. However, basis reallocation encourages cross-modal features to be encoded independently, with the explicit EBR being more effective in controlling the level of multicollinearity than the implicit KD.

#Modalities | 2 | 3 | 4 | 5
Vanilla | 1.15 | 2.68 | 3.51 | 4.70
w/ KD | 1.09 | 1.90 | 2.30 | 2.68
w/ EBR | 1.05 | 1.26 | 1.32 | 1.55

Table 7. Average variance inflation factor (VIF) across features in the representation space with increasing number of modalities.

C.6. Statistical Comparisons

In Table 8, we report the resulting p-values of performing the Wilcoxon rank test with Holm–Bonferroni correction (significance level $\alpha = 0.05$) on the Table 1 results between our proposed EBR and the other baseline methods.
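The Holm–Bonferroni step-down procedure is standard; the following sketch (ours) applies it to the Mortality AUC-ROC column of Table 8, under the assumption that each metric's seven baseline comparisons form one correction family:

```python
# Holm–Bonferroni step-down procedure (standard method; grouping Table 8's
# p-values into per-metric families of 7 is our assumption for illustration).
def holm_bonferroni(p_values, alpha=0.05):
    """Return a reject/accept decision per hypothesis, controlling FWER."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):            # k = 0 .. m-1 (ascending p)
        if p_values[i] <= alpha / (m - k):   # step-down threshold
            reject[i] = True
        else:
            break                            # once one test fails, stop
    return reject

# Mortality AUC-ROC column of Table 8 (7 baselines vs. EBR):
p_mortality_auroc = [0.0090, 0.0066, 0.0083, 0.0027, 0.0079, 0.0085, 0.0088]
print(holm_bonferroni(p_mortality_auroc))  # all True: every null rejected
```

With this family, the smallest p-value (0.0027) clears the strictest threshold 0.05/7 ≈ 0.0071, and every subsequent value clears its progressively looser threshold, so all nulls are rejected, consistent with the conclusion stated below.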
The null hypothesis, that the proposed EBR and the other models follow the same distribution of AUC-ROC and AUC-PRC values across the chosen missingness rates, was rejected for both the Mortality and Readmission prediction tasks across all baselines, with p-values that in all cases were lower than 0.01. This provides further evidence in support of the uniqueness of EBR in leveraging basis reallocation to free up rank bottlenecks as a novel mechanism for tackling missing modalities.

Method | Mortality AUC-ROC | Mortality AUC-PRC | Readmission AUC-ROC | Readmission AUC-PRC
CM-AE | 0.0090 | 0.0077 | 0.0065 | 0.0035
SMIL | 0.0066 | 0.0053 | 0.0042 | 0.0066
MT | 0.0083 | 0.0082 | 0.0077 | 0.0065
Grape | 0.0027 | 0.0057 | 0.0058 | 0.0042
M3-Care | 0.0079 | 0.0031 | 0.0069 | 0.0039
ShaSpec | 0.0085 | 0.0062 | 0.0049 | 0.0075
MUSE | 0.0088 | 0.0079 | 0.0086 | 0.0089

Table 8. P-values of the Wilcoxon rank test with Holm–Bonferroni correction on the Table 1 results between EBR and other baselines.

C.7. Polysemanticity

Considering the results in Figure 5 (a) and (c), Figure 7, and Section 4.3, since there is no external source of noise in the fusion head, and encouraging monosemanticity through basis reallocation has a denoising effect, the noise that leads to the observed collapse must come from some cross-modal polysemantic interference. To provide further evidence, we adapt the definition of polysemanticity based on neural capacity allocation from Scherlis et al. (2022) to measure cross-modal polysemanticity as the amount of uncertainty in the assignment of a neuron to a particular modality. We train a two-layer ReLU network on weights from unimodal models to classify which modality the input models are optimized on. Next, we apply
this modality-classifier to the weights of our multimodal fusion head and record the average cross-entropy (CE) in its outputs. Higher values of cross-entropy indicate higher levels of cross-modal polysemanticity, since the probability masses are spread out across multiple modalities. In Table 9, we report the results on bi-modal training. The sharply lower relative CE for KD and EBR directly indicates the reduced cross-modal polysemantic interference under basis reallocation.

Method | CE
Vanilla | 5.66
KD | 2.09
EBR | 0.59

Table 9. Empirically measuring cross-modal polysemantic interference as average cross-entropy (CE) in the modality classifier prediction.

C.8. Comparison with Contrastive and Generative Models

Cross-Modal Polysemantic Interference in Multimodal Contrastive Learning: We choose GMC (Poklukar et al., 2022) as our candidate algorithm for analyzing cross-modal polysemantic interference in multimodal contrastive learning. We evaluate GMC by applying their proposed contrastive objective to our baseline representation learning setting on MIMIC-IV and report the results in terms of the lowest achieved training semantic loss in Table 10. As we can see, trends similar to those of our original setting reported in the main manuscript, in the semantic loss gap between the Multimodal Prefix and the Unimodal Baseline, play out when we perform contrastive-objective-based fusion as proposed in Poklukar et al. (2022). This further supports the claims in Lemma 1 and Theorem 1 that as the number of modalities increases, the modality undergoing collapse contributes less and less to the downstream representation used to encode the semantics, irrespective of the fusion strategy.

Number of Modalities | 2 | 3 | 4 | 5
Multimodal Prefix | 27.68 | 52.90 | 91.20 | 167.30
Unimodal Baseline | 7.97 | 6.55 | 5.33 | 9.55

Table 10.
Lowest achieved training semantic loss with increasing number of modalities in the contrastive setting.

Rank Bottlenecks in Generative and Contrastive Models: We choose MMVAE (Shi et al., 2019) as our candidate generative model for analyzing rank bottlenecks. Since the objective of generative modeling is somewhat different from the downstream application we experimented with, to analyze MMVAE we performed the experiment on their proposed MNIST-SVHN dataset, while for GMC (Poklukar et al., 2022), since it is intended for general representation learning, we applied their proposed contrastive objective to our baseline setting on MIMIC-IV. In Table 11, we report the results of our experiment, where the vanilla setting refers to the original model, without KD or EBR.

Method | β = 0 | β = 2 | β = 4 | β = 6 | β = 8
MMVAE (Unimodal Baseline) | 198 (constant across β)
MMVAE (Vanilla) | 477 | 421 | 398 | 110 | 96
MMVAE + KD | 482 | 465 | 390 | 298 | 270
MMVAE + EBR | 485 | 477 | 431 | 405 | 395
GMC (Unimodal Baseline) | 1255 (constant across β)
GMC (Vanilla) | 1877 | 1330 | 1146 | 930 | 872
GMC + KD | 1905 | 1676 | 1533 | 1427 | 1390
GMC + EBR | 1912 | 1825 | 1709 | 1600 | 1588

Table 11. Representation ranks with increasing $\beta$ in generative (MMVAE) and contrastive (GMC) models.

We can see that in both the generative and contrastive settings, the ranks consistently drop as the strength $\beta$ of the modality undergoing collapse is increased. The drop is sharp around a critical point, where the rank goes below
On the Surprising Effectiveness of Large Learning Rates under Standard Width Scaling

Moritz Haas¹, Sebastian Bordt¹, Ulrike von Luxburg¹, Leena Chennuru Vankadara²
¹University of Tübingen, Tübingen AI Center {mo.haas,sebastian.bordt,ulrike.luxburg}@uni-tuebingen.de
²Gatsby Computational Neuroscience Unit, University College London l.vankadara@ucl.ac.uk

Abstract

The dominant paradigm for training large-scale vision and language models is He initialization and a single global learning rate (standard parameterization, SP). Despite its practical success, standard parameterization remains poorly understood from a theoretical perspective: existing infinite-width theory would predict instability under large learning rates and vanishing feature learning under stable learning rates. However, empirically optimal learning rates consistently decay much slower than theoretically predicted. By carefully studying neural network training dynamics, we demonstrate that this discrepancy is not fully explained by finite-width phenomena such as catapult effects or a lack of alignment between weights and incoming activations. We instead show that the apparent contradiction can be fundamentally resolved by taking the loss function into account: in contrast to mean squared error (MSE) loss, we prove that under cross-entropy (CE) loss, an intermediate controlled divergence regime emerges, where logits diverge but loss, gradients, and activations remain stable. Stable training under large learning rates enables persistent feature evolution at scale in all hidden layers, which is crucial for the practical success of SP. In experiments across optimizers (SGD, Adam), architectures (MLPs, GPT) and data modalities (vision, language), we validate that neural networks operate in this controlled divergence regime under CE loss but not under MSE loss. Our empirical evidence suggests that width-scaling considerations are surprisingly useful for predicting empirically optimal learning rate exponents.
Finally, our analysis clarifies the effectiveness and limitations of recently proposed layerwise learning rate scalings for standard initialization.

Preprint. Under review. arXiv:2505.22491v1 [cs.LG] 28 May 2025

1 Introduction

Scaling has become the dominant paradigm in building ever more capable vision and language models (Brown et al., 2020, Dosovitskiy et al., 2021, Kaplan et al., 2020, Hoffmann et al., 2022, Grattafiori et al., 2024). Training these models requires many choices, including initialization variances, learning rates, and other optimization hyperparameters. The dominant practice for training large models is standard parameterization (SP): networks are initialized using He initialization (He et al., 2015) and trained with a single global learning rate tuned at each scale (OLMo Team et al., 2024). Infinite-width theory provides principled rules for scaling hyperparameters as network size increases. In particular, Maximal Update Parameterization (µP) prescribes how learning rates and initialization variances should scale layer-wise so that the training dynamics remain width-independent. This allows tuning small proxy models and transferring the optimal hyperparameters to large scales (Yang and Hu, 2021, Yang et al., 2022). Crucially, infinite-width theory also predicts that under SP, network dynamics should become unstable with learning rates scaling larger than $O(1/n)$ (where $n$ is the network width), and that feature learning vanishes with $O(1/n)$ learning rates, causing the models to enter a kernel regime (Sohl-Dickstein et al., 2020, Yang and Hu, 2021). Empirically, however, networks trained in SP exhibit stable feature learning and excellent generalization performance, often with optimal learning rates decaying much slower than theoretically predicted (commonly around $\Omega(1/\sqrt{n})$). This is depicted in Figure
1, where we see that the optimal learning rates (solid lines) for different models trained in SP decay much slower than the theoretically predicted maximal stable scaling law (dashed gray lines). This discrepancy presents a fundamental puzzle: Why does SP remain stable and effective at large learning rates, despite the theoretical predictions? And does there exist an infinite-width limit that corresponds more closely with the behaviour of practical finite-width networks?

Figure 1: Optimal learning rate exponents exceed the theoretically predicted stability threshold. For MLPs on MNIST and GPT on language data, optimal learning rates in SP decay slower than the theoretically predicted maximal stable $\eta_n = O(n^{-1})$ shown in gray.

One possible resolution could be that assumptions underlying infinite-width theory fail at realistic finite widths. Recent studies have highlighted phenomena like the catapult effect (Lewkowycz et al., 2020) and the edge of stability (Cohen et al., 2021), suggesting classical stability bounds underestimate viable learning rates. However, our analysis of simplified linear models under SP indicates that these finite-width effects alone cannot explain why large learning rate scaling remains stable. In a similar spirit, Everett et al. (2024) hypothesized that certain infinite-width theory predictions of alignment between weight updates and activations may break down in practice, which could also have plausibly explained the discrepancy between theory and practice. Through careful empirical measurements, we also rule out this hypothesis. Instead, we find a fundamental resolution in the previously overlooked role of the loss function. Specifically, unlike under MSE loss, where the effect of output logit divergence catastrophically cascades and destabilizes training, cross-entropy (CE) loss allows stable training under large learning rates even when output logits diverge.
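This asymmetry between the two losses can be illustrated with a minimal numpy sketch (ours, not the paper's experiments): when the logits of a correctly classified point are scaled up, the CE gradient with respect to the logits, $\sigma(f) - e_y$, stays bounded (its entries lie in $[-1, 1]$ by construction), while the MSE gradient $2(f - y)$ grows linearly with the logit scale.

```python
import numpy as np

def softmax(f):
    z = np.exp(f - f.max())  # max-shift for numerical stability
    return z / z.sum()

def ce_grad(f, y_idx):
    """Gradient of cross-entropy w.r.t. logits: softmax(f) - one_hot(y)."""
    g = softmax(f)
    g[y_idx] -= 1.0
    return g

def mse_grad(f, y_onehot):
    """Gradient of squared error w.r.t. logits: 2 (f - y)."""
    return 2.0 * (f - y_onehot)

y_idx, d = 0, 10
y = np.eye(d)[y_idx]
base = np.arange(d, dtype=float)[::-1]  # correct class has the largest logit

for scale in [1.0, 1e2, 1e4]:           # logits "diverge" as scale grows
    f = scale * base
    print(scale,
          np.linalg.norm(ce_grad(f, y_idx)),  # stays bounded under CE
          np.linalg.norm(mse_grad(f, y)))     # grows linearly under MSE
```

Under CE the gradient even shrinks here, because the softmax saturates on the correct class; under MSE, diverging logits feed a diverging gradient back into the network, which matches the cascading instability described above.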
Consequently, the practical stability threshold in SP is determined solely by hidden- or input-layer stability constraints, not by output-layer divergence. Empirically, we find that while output-layer divergence under CE loss remains benign with respect to the stability threshold, it occasionally influences the optimal learning rate choice, particularly under SP with layerwise learning rates, as used by Everett et al. (2024). Moreover, we identify cases where output-layer divergence breaks the previously observed learning-rate transfer under layerwise learning rates.

Main Contributions. After reconciling the apparent contradictions between infinite-width theory and practice, we provide the first infinite-width proxy that has strong correspondence to practical finite-width networks, as they are initialized and trained in practice, in the following sense:
• Contrary to what was hypothesized in previous work (Everett et al., 2024), we show that the infinite-width alignment predictions between weights and incoming activations hold when measured with the right refined coordinate checks (RCC). Consequently, logits diverge at sufficient width in SP under empirically optimal learning rates.
• We show that the CE loss function enables a controlled divergence regime in which training remains stable despite logit divergence. This regime allows recovering feature learning at large widths under large learning rates $\eta_n = \Theta(n^{-1/2})$, which could explain the practical success of SP. To the best of our knowledge, this provides the first practical infinite-width limit in the feature-learning regime for SP.
• The controlled divergence regime also sheds light
on the stability of other parameterizations, particularly SP with µP learning rates (SP-full-align, Everett et al., 2024). However, we empirically show that SP-full-align does not provide learning rate transfer on vision datasets due to inherent width dependence.
• We show that our width-scaling considerations provide surprisingly good predictions of maximal stable learning rate exponents, which often dominate optimal learning rates, particularly in Transformers (Vaswani et al., 2017).

Taken together, our results deepen the theoretical understanding of why SP remains effective at large scales, provide rigorous empirical validation for critical assumptions in infinite-width theory, and offer practical insights into stable hyperparameter transfer for scaling neural networks.

2 Background: Width-scaling arguments from Tensor Program theory

Before exploring plausible explanations for the empirical width-scaling properties of neural networks, we first define the notation used and distill all necessary width-scaling arguments from Tensor Program (TP) theory (Yang and Hu, 2021, Yang and Littwin, 2023). We provide a more detailed introduction to TP scaling arguments in Appendix C.1, and a detailed account of related work in Appendix A.

Setting and Notation. We define an $(L+1)$-layer MLP of width $n$ iteratively via
$$h^1(\xi) := W^1 \xi, \quad x^l(\xi) := \phi(h^l(\xi)), \quad h^{l+1}(\xi) := W^{l+1} x^l(\xi), \quad f(\xi) := W^{L+1} x^L(\xi),$$
for inputs $\xi \in \mathbb{R}^{d_{in}}$ with trainable weight matrices $W^1 \in \mathbb{R}^{n \times d_{in}}$, $W^l \in \mathbb{R}^{n \times n}$ for $l \in [2, L]$, and $W^{L+1} \in \mathbb{R}^{d_{out} \times n}$. We call $h^l$ preactivations, $x^l$ activations, and $f(\xi)$ output logits. Training the MLP with Stochastic Gradient Descent (SGD) with global learning rate $\eta > 0$ under loss function $\mathcal{L}: \mathbb{R}^{d_{out}} \times \mathbb{R}^{d_{out}} \to \mathbb{R}$ with labelled training point $(\xi_t, y_t) \in \mathbb{R}^{d_{in}} \times \mathbb{R}^{d_{out}}$ is defined as $W^l_{t+1} = W^l_t - \eta \nabla_{W^l} \mathcal{L}(f_t(\xi_t), y_t)$. We denote updates accumulated over all time steps by $\Delta h^l_t = h^l_t - h^l_0$ and the change from a single update step by $\delta h^l_t = h^l_t - h^l_{t-1}$.
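The MLP just defined can be sketched in a few lines of numpy (our illustration; ReLU for $\phi$ and $c_\phi = 2$ are assumptions). Under He initialization, preactivation entries stay $\Theta(1)$ at initialization as the width grows, measured in the RMS norm $\lVert x \rVert_{RMS} = d^{-1/2} \lVert x \rVert_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
rms = lambda v: np.linalg.norm(v) / np.sqrt(v.size)  # ||v||_RMS = d^{-1/2} ||v||_2

def sp_mlp(xi, n, L=3, d_out=10, c_phi=2.0):
    """Forward pass of the (L+1)-layer MLP under SP / He initialization:
    (W^l)_ij ~ N(0, c_phi / fan_in), phi = ReLU (our choice for illustration)."""
    d_in = xi.size
    h = rng.normal(scale=np.sqrt(c_phi / d_in), size=(n, d_in)) @ xi   # h^1
    for _ in range(L - 1):
        x = np.maximum(h, 0.0)                                          # x^l
        h = rng.normal(scale=np.sqrt(c_phi / n), size=(n, n)) @ x       # h^{l+1}
    x = np.maximum(h, 0.0)
    f = rng.normal(scale=np.sqrt(c_phi / n), size=(d_out, n)) @ x       # logits
    return h, f

xi = rng.normal(size=64)
for n in [128, 1024, 8192]:
    h, f = sp_mlp(xi, n)
    print(n, rms(h), rms(f))  # preactivation RMS stays Theta(1) as n grows
```

The interesting width-dependence discussed in the remainder of the section arises not at initialization but from the updates $\Delta W^l_t$, whose scale the learning rate exponent controls.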
The fan-notation serves to unify all weight matrices and simply means $W \in \mathbb{R}^{\text{fan\_out} \times \text{fan\_in}}$. In this paper, we define standard parameterization (SP) to mean He initialization $(W^l_0)_{ij} \sim \mathcal{N}(0, c_\phi / \text{fan\_in}(W^l_0))$ trained with SGD or Adam with a single, possibly width-dependent, learning rate $\eta_n = \eta \cdot n^\alpha$, $\alpha \in \mathbb{R}$, for all trainable weights $\{W^l_t\}_{l \in [L+1]}$. This models the typical practice, in which a global learning rate is tuned at each model scale. We denote the softmax function by $\sigma(f)_i = \exp(f_i) \cdot (\sum_{j \in [d_{out}]} \exp(f_j))^{-1}$. For naturally measuring the average scaling of entries in vectors $x \in \mathbb{R}^d$, we use the root-mean-squared norm $\lVert x \rVert_{RMS} := d^{-1/2} \cdot \lVert x \rVert_2$ as the standard vector norm. For matrices $W$, we write $\lVert W \rVert_F$ for the Frobenius norm and measure entry-wise scaling with the RMS norm $\lVert W \rVert_{RMS} := (\text{fan\_in} \cdot \text{fan\_out})^{-1/2} \lVert W \rVert_F$. The operator norm w.r.t. the RMS norm is defined as $\lVert W \rVert_{op} := \lVert W \rVert_{RMS \to RMS} := \sup_{x \in \mathbb{R}^{\text{fan\_in}(W)}} (\lVert Wx \rVert_{RMS} / \lVert x \rVert_{RMS})$. We use Bachmann-Landau notation $O, \Theta, \Omega$ that purely tracks dependence on the width $n$ and omits all other dependencies.

Effective and Propagating Updates. When training neural networks, weights $W^l_t$ of layer $l$ evolve from their initialization $W^l_0$ through updates $\Delta W^l_t$, such that $W^l_t = W^l_0 + \Delta W^l_t$. Although we directly control the scaling of these initial weights and updates, we are ultimately interested in their impact on subsequent activations in the network. For standard architectures, including convolutional networks and Transformers, weights typically act linearly on incoming activations. Thus, for weights $W^l_t$ and incoming activations $x^{l-1}_t$, the change in the next layer's pre-activations $\Delta h^l_t$ can be decomposed into two distinct contributions: the effective updates arising directly
from the change in weights $\Delta W^l_t$ of the current layer, and the propagating updates, arising indirectly from activation changes $\Delta x^{l-1}_t$ in preceding layers:
$$\Delta h^l_t = \underbrace{(\Delta W^l_t)\, x^{l-1}_t}_{\text{Effective Updates}} + \underbrace{W^l_0\, (\Delta x^{l-1}_t)}_{\text{Propagating Updates}}. \quad \text{(RCC)}$$
We say the layer admits maximal stable feature learning if both the effective updates and the propagating updates remain width-independent as the network width $n \to \infty$, that is, $\lVert (\Delta W^l_t) x_t \rVert_{RMS} = \Theta(1)$ and $\lVert W^l_0 (\Delta x^{l-1}_t) \rVert_{RMS} = \Theta(1)$.

Identifying the correct scaling exponents. In the spirit of Everett et al. (2024), we use $p_l$ and $q_l$ to denote the width-scaling exponents of the alignment ratios of the pairs $(\Delta W^l_t, x^{l-1}_t)$ and $(W^l_0, \Delta x^{l-1}_t)$ respectively, that is,
$$\frac{\lVert \Delta W^l_t x^{l-1}_t \rVert_{RMS}}{\lVert \Delta W^l_t \rVert_{RMS} \cdot \lVert x^{l-1}_t \rVert_{RMS}} = \Theta(n^{p_l}), \qquad \frac{\lVert W^l_0 \Delta x^{l-1}_t \rVert_{RMS}}{\lVert W^l_0 \rVert_{RMS} \cdot \lVert \Delta x^{l-1}_t \rVert_{RMS}} = \Theta(n^{q_l}). \quad (\alpha\text{-rms})$$
A key insight from Yang and Hu (2021) and Yang and Littwin (2023) is that during training, correlations can emerge in certain layers between the two quantities in each pair in (RCC), causing them to become aligned in the infinite-width limit and thereby inducing $p_l = 1$ and $q_l = 1$ due to a law-of-large-numbers effect. If, instead, these quantities were uncorrelated, their product would exhibit smaller scaling exponents ($p_l = 1/2$ and $q_l = 1/2$) due to a central-limit effect.

Figure 2: Alignment has minimal width-dependence. Alignment ratio between accumulated weight updates $\Delta W_t$ and incoming activations $x_t$ in RMS norm (left) and operator norm (center), as well as between initial weights $W_0$ and activation updates $\Delta x_t$ in operator norm (right), for the last layernorm layer, the first MLP layer in Transformer block 2, and the readout layer. RMS norm may be confounded by rank accumulated over the course of training (e.g., compare $(\Delta W_t, x_t)$ values for the last LN). While operator norm alignment tends to decay over the course of training, it does not display strong width-dependence, even after 2000 batches (see annotated width-dependent exponents).
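The law-of-large-numbers versus central-limit distinction can be checked numerically with a toy sketch (ours, detached from any actual training run): a rank-one update correlated with $x$ yields an $(\alpha\text{-rms})$ ratio scaling like $n$ (exponent 1), while an uncorrelated Gaussian update yields exponent $1/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
rms = lambda v: np.linalg.norm(v) / np.sqrt(v.size)
rms_mat = lambda W: np.linalg.norm(W) / np.sqrt(W.size)  # Frobenius-based RMS

def alpha_rms(A, x):
    """(alpha-rms) ratio ||Ax||_RMS / (||A||_RMS ||x||_RMS)."""
    return rms(A @ x) / (rms_mat(A) * rms(x))

def exponent(ratio_fn, n1=256, n2=2048):
    """Estimate the width-scaling exponent from ratios at two widths."""
    return np.log(ratio_fn(n2) / ratio_fn(n1)) / np.log(n2 / n1)

def aligned(n):      # rank-one update correlated with x -> LLN regime
    x = rng.normal(size=n)
    A = np.outer(rng.normal(size=n), x) / n
    return alpha_rms(A, x)

def uncorrelated(n):  # update independent of x -> CLT regime
    x = rng.normal(size=n)
    A = rng.normal(size=(n, n))
    return alpha_rms(A, x)

p_aligned, p_uncorr = exponent(aligned), exponent(uncorrelated)
print(round(p_aligned, 2), round(p_uncorr, 2))  # approx 1.0 and 0.5
```

This mirrors the mechanism in the text: correlation between the update and the incoming activation turns a central-limit sum into a law-of-large-numbers sum, raising the exponent from 1/2 to 1.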
In particular, infinite-width theory predicts the exponents $p_{1:L+1} = 1$, $q_{1:L} = 1/2$, and $q_{L+1} = 1$. The alignment exponents $p_l, q_l$ are a consequence of the training dynamics and do not depend on the specific parameterization used (e.g., SP, NTP, or µP). By adjusting the initialization variance, which controls the scale of the initial weights $W_0$, and the learning rate, which governs the magnitude of the updates $\Delta W_t$, we can ensure that both contributions in (RCC) remain width-independent as the network width $n$ grows. The corresponding choice of hyperparameter scaling defines the Maximal Update Parameterization (µP). As we will discuss in Section 4, under the theoretically predicted alignment exponents, SP with $O(1/n)$ learning rates leads to vanishing activation updates in all layers, $\lVert \Delta x_t \rVert_{RMS} = o(1)$, and choosing the learning rate $\omega(1/n)$ leads to logit divergence in the infinite-width limit.

3 Finite-width distortions and long-training dynamics alone do not explain the stability of large learning rates in SP

Differing optimal learning rate exponents at finite width versus in the infinite-width limit may be caused by finite-width effects accumulating over many update steps and eventually inducing a phase transition, in particular when the number of update steps exceeds the width of the network. Here we investigate two such potential explanations.

3.1 Update alignment between weights and activations is
https://arxiv.org/abs/2505.22491v1
barely width-dependent

Everett et al. (2024) highlight that at finite width and over extended training times, it is a priori unclear whether the pairs $(\Delta W^l_t, x^{l-1}_t)$ and $(W^{L+1}_0, \Delta x^L_t)$ remain strongly correlated or whether their alignment exponents $(p_{1:L+1}, q_{L+1})$ should rather be thought of as dynamically changing over the course of training. If the alignment exponents instead transition towards the central-limit regime, and in particular if $p_{1:L+1} = 1/2$, this could explain the observed $\sqrt{n}$ gap between theoretically predicted and empirically observed optimal learning rate scalings. Yang et al. (2023a) introduce an alignment metric that serves as a natural and unified metric to evaluate these infinite-width theory predictions,

$$\alpha_{A,x} = \frac{\|Ax\|_{\mathrm{RMS}}}{\|A\|_{\mathrm{RMS}\to\mathrm{RMS}}\, \|x\|_{\mathrm{RMS}}}. \tag{$\alpha$-op}$$

Specifically, if the alignment exponents $p_{1:L+1} = 1$, $q_{1:L} = \tfrac{1}{2}$, $q_{L+1} = 1$ hold, both contributions in (RCC) must exhibit alignment metrics $\alpha_{\Delta W^l_t,\, x^{l-1}_t}$ and $\alpha_{W^l_0,\, \Delta x^{l-1}_t}$ of order $\Theta(1)$ in all layers.

In Figure 2, we plot the alignment metrics at varying widths over the course of Transformer training with AdamW in SP. It shows that while alignment can decrease over the course of training, it exhibits minimal dependence on network width. Even after accumulating approximately 2000 batches of training, the width-scaling exponents are much closer to $0$ than to $-0.5$, indicating that the infinite-width alignment predictions hold reasonably well. Hence a lack of alignment alone cannot explain the large optimal learning rate exponents observed in practice.

3.2 Does a catapult mechanism in the first update steps stabilize large learning rates in SP?

As another plausible explanation, initial divergence under large learning rates may be stabilized over the course of training at finite width. Unlike at infinite width, where there only exist a divergent regime and a lazy regime without feature learning, an intermediate catapult regime was identified by Lewkowycz et al.
(2020) at finite width, for SGD training with MSE loss in the Neural Tangent Parameterization (NTP). They provide theory for 2-layer linear networks. Under small learning rates $\eta \le 2/\lambda_0$, where $\lambda_0$ denotes the largest eigenvalue of the Hessian at initialization, the network monotonically converges to a minimum. Under large learning rates $\eta > 4/\lambda_0$, training diverges. But in an edge-of-stability regime (Cohen et al., 2021, 2022) of intermediate learning rates, the loss increases in the first $O(\log(n))$ update steps while the sharpness $\lambda_t$ decreases. Once the sharpness lies below the edge of stability $2/\eta$, the loss decreases, and the final learned function may generalize better as the solution lies in a basin with lower sharpness. But existing work does not study width-scaling in SP. May similar initial training dynamics be at play here? In Appendix C.4 we analyse the 2-layer linear network model from Lewkowycz et al. (2020) in NTP, SP and µP, trained with SGD under MSE loss, and provide loss and sharpness increase characterizations in Proposition C.17. In µP, the update equations of the learned function $f_t$ and the sharpness $\lambda_t$ are fully width-independent, which allows width-independent learning rates. In NTP, at least the conditions for loss and sharpness reduction are approximately width-independent. In SP, on the other hand, the sharpness increases, $\lambda_{t+1} \ge \lambda_t$, iff $\lambda_t \ge \frac{4}{n\eta_n}\left(1 + \frac{y}{f_t - y}\right)$, requiring $\eta_n = O(n^{-1})$ to avoid sharpness (as well as loss) divergence in the first update steps. The simulations shown
in Figure C.2 validate the maximal stable learning rate scaling $\eta = O(n^{-1})$. Hence catapult dynamics alone do not suffice to explain large learning rate stability in SP.

4 Cross-entropy loss enables stable feature learning under large learning rates in standard parameterization

First, let us briefly recall why infinite-width theory predicts divergence under SGD training in SP with learning rates $\eta_n = \eta \cdot n^{-\alpha}$ for $\alpha < 1$. Recall that the alignment exponents in ($\alpha$-rms) satisfy $p_{1:L+1} = 1$. In particular, for the output layer, we have $\|\Delta W^{L+1}_t x^L_t\|_{\mathrm{RMS}} = \Theta(n \cdot \|\Delta W^{L+1}_t\|_{\mathrm{RMS}} \cdot \|x^L_t\|_{\mathrm{RMS}})$. For SGD, the weight update of the last layer after one update step is given by $\Delta W^{L+1} = -\eta \cdot n^{-\alpha} \cdot \chi_0 \cdot (x^L_0)^\top$, where $\chi_0 := \partial_f \mathcal{L}(f_0(\xi_0), y_0)$. Under SP, at initialization, both $\|x^L_0\|_{\mathrm{RMS}} = \Theta(1)$ and $\|\chi_0\|_{\mathrm{RMS}} = \Theta(1)$. This implies logit divergence after one step of SGD with learning rates $\eta_n = \omega(1/n)$:

$$\|x^L\|_{\mathrm{RMS}} = \Theta(1), \quad \|\Delta W^{L+1}\|_{\mathrm{RMS}} = \Theta(n^{-\alpha}) \;\Longrightarrow\; \|\Delta W^{L+1} x^L\|_{\mathrm{RMS}} = \Theta(n^{1-\alpha}).$$

So, why do larger learning rates remain stable and even effective, despite logit divergence? Here, we demonstrate that a simple yet fundamental aspect of training, the choice of loss function, resolves the large learning rate puzzle and enables a well-defined and practical infinite-width limit that allows feature learning under SP. The key insight is that, under cross-entropy (CE) loss, the logits $f$ never directly appear in the training dynamics; instead, the effective output function is $\sigma(f)$. Unlike the destabilizing logit blowup encountered under mean squared error (MSE) loss, under CE loss, logit growth has a harmless effect on training stability. In particular, the CE loss introduces an intermediate controlled divergence regime $\alpha \in [1/2, 1)$ that is absent for the MSE loss (Figure 3).

Proposition 1 (Asymptotic regimes in SP, informal). For fixed $L \ge 2$, $t \ge 1$, $\eta > 0$, $\alpha \in \mathbb{R}$, consider training an $(L+1)$-layer MLP of width $n$ in SP with SGD and global learning rate $\eta_n = \eta \cdot n^{-\alpha}$ for $t$ steps.
Then the logits $f_t$, training loss $\mathcal{L}(f_t(\xi_t), y_t)$, loss-logit derivatives $\chi_t := \partial_f \mathcal{L}(f_t(\xi_t), y_t)$, loss-weight gradients $\nabla^l_t := \nabla_{W^l} \mathcal{L}(f_t(\xi_t), y_t)$ and activations $x^l_t$, $l \in [L]$, after training scale as follows in the infinite-width limit $n \to \infty$.

Figure 3: Learning rate regimes for SGD in SP. Under MSE loss, either training remains stable ($\alpha \ge 1$) or logits and activations diverge ($\alpha < 1$) in the infinite-width limit. Under CE loss, a 'controlled divergence' regime $\alpha \in [1/2, 1)$ emerges where logits diverge, but training does not diverge. At $\alpha = 1/2$, hidden layers learn features width-independently.

Under cross-entropy (CE) loss, three qualitatively distinct regimes arise:

(a) Stable regime ($\alpha \ge 1$): Logits, loss, gradients and activations remain stable, that is, $\|f_t\|_{\mathrm{RMS}} = O(1)$, $|\mathcal{L}(f_t(\xi_t), y_t)| = O(1)$, $\|\chi_t\|_{\mathrm{RMS}} = O(1)$, $\|\nabla^l_t\|_{\mathrm{RMS}} = O(n^{-1/2})$ and $\|x^l_t\|_{\mathrm{RMS}} = O(1)$ for all $l \in [L]$.

(b) Controlled divergence ($\tfrac{1}{2} \le \alpha < 1$): Logits diverge, $\|f_t\|_{\mathrm{RMS}} = \Theta(n^{1-\alpha})$, but loss, gradients and activations remain stable, that is, $\|x^l_t\|_{\mathrm{RMS}} = \Theta(1)$, $|\mathcal{L}(f_t(\xi_t), y_t)| = O(1)$, $\|\chi_t\|_{\mathrm{RMS}} = O(1)$ and $\|\nabla^l_t\|_{\mathrm{RMS}} = O(n^{-1/2})$ for all $l \in [L]$.

(c) Catastrophic instability ($\alpha < \tfrac{1}{2}$): Logits, activations and weight gradients diverge, that is, $\|f_t\|_{\mathrm{RMS}} \to \infty$, $\|x^l_t\|_{\mathrm{RMS}} \to \infty$ and $\|\nabla^l_t\|_{\mathrm{RMS}} \to \infty$, $l \in [2, L]$.

Under mean-squared error (MSE) loss, a stable regime as in (a) above arises if $\alpha \ge 1$. If $\alpha < 1$, training is catastrophically unstable
as in (c) above and, in addition, loss and loss-logit derivatives diverge, that is, $|\mathcal{L}(f_t(\xi_t), y_t)| \to \infty$ and $\|\chi_t\|_{\mathrm{RMS}} \to \infty$.

The formal statement together with a proof can be found in Appendix C.3. For an intuitive understanding of this result, note that the only effect the choice of loss function $\mathcal{L}(f, y)$ has on the final learned function is through the loss-logit gradients $\chi_t := \partial_f \mathcal{L}(f_t(\xi_t), y_t)$ over the course of training. Under MSE loss, the loss gradients are given by the residuals $\chi_t = f_t(\xi_t) - y_t$. But CE loss induces loss gradients $\chi_t = \sigma(f_t(\xi_t)) - y_t$. Crucially, it is the correct choice of loss function to effectively view $\sigma(f)$ as the output of the network instead of the unnormalized logits $f$. If one were to use $\mathrm{MSE}(\sigma(f), y)$ as a loss function instead, additional derivative terms can induce vanishing gradients under exploding network output and do not increase the optimal learning rate exponent (Appendix F.3). Under CE loss, the effective network output $\sigma(f)$ at most converges to one-hot predictions when the logits diverge, and with increasing width, training points are sharply memorized after a single update step. At large learning rates $\eta_n = \Theta(n^{-1/2})$, training points are not just memorized in last-layer weights, but feature learning is recovered in the infinite-width limit:

Proposition 2 (Under CE loss, SP with large learning rates learns features at large width, informal). Consider the setting of Proposition 1 of training an $(L+1)$-layer MLP with SGD in SP with global learning rate $\eta_n = \eta \cdot n^{-\alpha}$, $\alpha \in \mathbb{R}$, in the infinite-width limit $n \to \infty$.

(a) Under both MSE and CE loss in the stable regime ($\alpha \ge 1$), feature learning vanishes in all layers $l \in [L]$, that is, $\|\Delta x^l_t\|_{\mathrm{RMS}} = O(n^{-1/2})$.

(b) Under CE loss in the controlled divergence regime ($\tfrac{1}{2} \le \alpha < 1$), input-layer feature learning vanishes at rate $\|\Delta x^1_t\|_{\mathrm{RMS}} = \Theta(n^{-1/2-\alpha})$, and hidden layers $l \in [2, L]$ learn features at rate $\|\Delta x^l_t\|_{\mathrm{RMS}} = \Theta(n^{1/2-\alpha})$.
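The boundedness of the CE loss-logit gradients under logit blowup can be seen directly; a small numpy sketch (illustrative, not from the paper) that scales a fixed logit direction, mimicking the $\Theta(n^{1-\alpha})$ blowup:

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())   # shift for numerical stability
    return e / e.sum()

num_classes = 10
y = np.eye(num_classes)[3]                  # one-hot target
rng = np.random.default_rng(0)
direction = rng.standard_normal(num_classes)

for scale in [1, 10, 100, 1000]:            # logit magnitude, standing in for Theta(n^{1-alpha})
    f = scale * direction
    chi_mse = f - y                         # MSE loss-logit gradient: grows with the logits
    chi_ce = softmax(f) - y                 # CE loss-logit gradient: entries always in [-1, 1]
    print(scale, np.abs(chi_mse).max(), np.abs(chi_ce).max())
```

As the logits grow, the MSE residual $\chi = f - y$ diverges with them, while $\chi = \sigma(f) - y$ merely saturates toward a one-hot difference.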
In particular, when $\alpha = 1/2$, the weight updates of all hidden layers induce width-independent activation updates, that is, $\|\Delta x^l_t\|_{\mathrm{RMS}} = \Theta(1)$.

To the best of our knowledge, this provides the first infinite-width limit of SP in the practical feature learning regime. Figure 4 empirically validates that the predicted width-scaling exponents, which induce maximally stable feature learning despite logit blowup under $\eta_n = \eta \cdot n^{-1/2}$, are already accurate at moderate width 512. Appendix E.4 shows that effective update predictions also hold accurately in Transformers trained with Adam. In the next section, we discuss the profound implications that training stability under logit blowup has on learning rate scaling exponents in practice.

Figure 4: Hidden-layer feature learning despite logit divergence in SP under large learning rates. Effective $l$-th layer update scalings $\|\Delta W_t x_t\|_{\mathrm{RMS}}$ of MLPs trained with SGD in SP with $\eta_n = 0.0001 \cdot (n/256)^{-1/2}$ on CIFAR-10 under CE loss. Our TP scaling predictions are accurate: hidden layers learn features width-independently, and input layers have vanishing feature learning. The update scaling exponents can already be accurately estimated at small width $n \le 512$.

Figure 5: Learning rates decay slower under CE loss than under MSE loss. Left versus center-left: width-scaled learning rate versus training error for MNIST, showing approximate transfer with $\eta_n = \eta \cdot n^{-1}$ under MSE loss
versus with $\eta_n = \eta \cdot n^{-1/2}$ under CE loss. Center-right versus right: optimal learning rate (solid) and minimal unstable learning rate (dashed) for 2-layer MLPs on generated multi-index data and 8-layer MLPs on CIFAR-10 and MNIST. Optimal learning rates are often close to max-stable learning rates. Theoretical instability predictions, $O(n^{-1})$ for MSE loss, $O(1)$ for 2-layer MLPs and $O(n^{-1/2})$ for deep MLPs under CE loss, are surprisingly accurate.

5 Consequences of training stability under logit divergence

In this section we perform extensive experiments to empirically evaluate the implications of the stability and feature learning predictions of our infinite-width theory from the previous section.

Experimental details. We train MLPs of varying depth up to width 16384 with plain SGD and Adam on CIFAR-10, MNIST and a generated multi-index model (reported in Appendix F). We also train Pythia-GPTs with warmup and cosine learning rate decay on the DCLM-Baseline dataset (Li et al., 2024) up to width 4096 or 1.4B parameters, using both Adam with decoupled weight decay (Loshchilov and Hutter, 2019) and SGD (reported in Appendix F.2). If not stated otherwise, we consider SP with a global learning rate. All details can be found in Appendix D. Code will be made publicly available in a future version of this manuscript.

5.1 Infinite-width theory is a useful predictor of empirical optimal learning rate exponents

While our theory predicts maximal stable learning rates, in practice we are interested in optimal learning rates. We hypothesize that maximal stable feature learning in all layers induces optimal performance at large width. However, since different layer types require different maximal stable learning rate exponents, the single global learning rate under SP is subject to opposing forces for recovering feature learning under the constraint of training stability. We now evaluate several instantiations of this hypothesis.

MLPs and Transformers with SGD.
Figures 5 and 6 show that the empirical maximal learning rate exponents under CE loss closely follow $\alpha = 1/2$, both for MLPs on vision data and for GPTs on language data. The x-axis scales the learning rate with the closest clean exponent from $\{0, 1/2, 1\}$ to show that approximate empirical transfer is often enforced by the stability threshold $O(n^{-1/2})$. While the theory only predicts the maximal stable exponent, Proposition 2 suggests that the optimal learning rate may follow the maximal stable exponent $\alpha = 1/2$, since it is the only setting under which feature learning is preserved at large width in all hidden layers. The maximal stable learning rate under MSE loss also consistently scales as its infinite-width prediction $O(n^{-1})$, and optimal learning rates closely follow this exponent, as under smaller exponents $\alpha > 1$, not even the logits are updated, $\|\Delta f_t\|_{\mathrm{RMS}} \to 0$. Overall, this shows that existing infinite-width theory was indeed predictive of the maximal stable learning rate exponents under MSE loss, but that CE loss induces qualitatively more favorable behaviour that is only captured by a sufficiently loss-specific analysis.

Figure 6: Approximate learning rate transfer for GPT in SP. Left to center-right: width-scaled learning rate versus training loss for GPT trained with SGD, Adam with trainable Layernorm parameters, and Adam without trainable Layernorm parameters. Right: corresponding optimal (solid) and maximal stable (dashed) learning rate exponents. For SGD, hidden-layer stability $\eta_n = O(n^{-1/2})$ clearly dominates the maximal stable as well as the optimal learning rate scaling. For Adam without Layernorm parameters, hidden-layer stability induces a stability threshold $\eta_n = O(n^{-1})$. Trainable Layernorm parameters further stabilize large learning rates and induce larger optimal learning rate scaling $\eta_n \approx \Theta(n^{-1/2})$ toward preserving input-layer feature learning at scale.

MLPs with Adam. Adam approximately normalizes the gradient and therefore further stabilizes training against misscaled gradients beyond the effect of CE loss, under sufficiently small $\varepsilon > 0$. $W^l$ is effectively updated if the learning rate scaling counteracts the scaling accumulated in the inner product between normalized weight gradients and incoming activations. This leads to the ideal (µP) learning rates $\eta(W^l) = \eta / \mathrm{fan\_in}(W^l)$. Thus Adam in SP with $\eta_n = \Theta(n^{-1})$ induces width-independent updates, except for vanishing input-layer feature learning and logit divergence through $W^{L+1}_0 \Delta x^L_t$. In deep MLPs, we typically observe optimal learning rates $\eta_n = O(n^{-1})$, suggesting that hidden-layer stability dominates.

Transformer training with AdamW. In Transformers with trainable Layernorm parameters, which scale input-like, training is stabilized, and the exponent is increased toward input-layer feature learning. Without trainable Layernorm parameters, in contrast, only the embedding layer scales input-like, so that training becomes approximately width-independent under $\eta_n = \Theta(n^{-1})$. Figure 6 shows that the max-stable and optimal learning rate exponents shrink from $-1/2$ toward $-1$ if we remove the trainable Layernorm parameters. This suggests that trainable scale parameters in normalization layers play an essential role in maintaining large learning rates in Transformers, which could explain why they are almost unanimously used in modern architectures (OLMo Team et al., 2024; Grattafiori et al., 2024; Gemma Team et al., 2024).
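The layerwise rule $\eta(W^l) = \eta / \mathrm{fan\_in}(W^l)$ can be sketched as follows (an illustrative helper, not the paper's code; the helper name and the normalization by a reference fan-in, so that `base_lr` is recovered at a base width, are our own conventions):

```python
# Sketch of muP-style Adam learning rates: eta(W^l) = eta / fan_in(W^l).
def mup_adam_lrs(layer_shapes, base_lr=1e-3, base_fan_in=256):
    """layer_shapes maps layer name -> (fan_out, fan_in)."""
    return {
        name: base_lr * base_fan_in / fan_in   # width-independent updates under Adam
        for name, (fan_out, fan_in) in layer_shapes.items()
    }

width = 1024
shapes = {
    "input":   (width, 784),     # fan_in fixed by the data dim: lr stays Theta(1) in width
    "hidden":  (width, width),   # fan_in = n: lr decays like 1/n
    "readout": (10, width),      # fan_in = n: lr decays like 1/n
}
print(mup_adam_lrs(shapes))
```

In a framework like PyTorch, the resulting dictionary would typically be turned into per-parameter-group learning rates; the point is only that hidden and readout layers see $1/n$ decay while the input layer does not.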
Moreover, input-layer feature learning vanishes at scale in SP, which may explain techniques like removing weight decay in the embedding layer (OLMo Team et al., 2024). Logit divergence under large learning rates may be a reason for regularizing techniques like the Z-loss (Chowdhery et al., 2023; Wortsman et al., 2024; OLMo Team et al., 2024). Taken together, the empirical evidence suggests that infinite-width theory may serve as a helpful proxy for understanding practical neural networks at finite width. Since training divergence imposes a hard constraint on the optimal learning rate, and activation divergence in multiple layers becomes harder to stabilize, width-scaling predictions seem to hold even more accurately on deep and sensitive architectures such as Transformers.

5.2 A novel understanding of standard initialization with layerwise learning rates

Everett et al. (2024) perform extensive Transformer experiments and recommend training with Adam in SP with µP learning rates (SP-full-align) as the overall best performing parameterization in terms of validation loss, learning rate transfer and learning rate sensitivity. This parameterization only differs from µP through the larger last-layer He initialization $W^{L+1}_0 \sim \mathcal{N}(0, n^{-1})$. While the authors attribute the success of SP-full-align to a lack of alignment between $W^{L+1}_0$ and $\Delta x^L_t$, they only measure the joint alignment between $W_t$ and $x_t$ for each layer, which confounds the individual alignment exponents of $(\Delta W_t, x_t)$ and $(W_0, \Delta x_t)$ from (RCC). We provide a detailed explanation in Appendix C.2. Our empirical alignment reevaluation in Figure 2 and Figure E.15 does not support the hypothesized lack of alignment.
This implies that logits diverge through $W^{L+1}_0 \Delta x^L_t$ as soon as feature learning does not vanish. Instead, our theoretical results in Section 4 show that logit divergence is not necessarily harmful for training stability under CE loss. Just like SP with $\eta_n = \Theta(n^{-1/2})$, SP-full-align with $\eta_n = \Theta(1)$ lies at the feature-learning edge of the controlled divergence regime.

Figure 7: Performance difference between losses is larger in SP than in µP. Optimal training accuracy of 8-layer MLPs trained with SGD on MNIST (left) and CIFAR-10 (right). The performance in µP depends much less on the loss function since all layers learn width-independently.

Learning rate transfer of SP-full-align breaks on image datasets. Due to width-independent alignment between $W^{L+1}_0$ and $\Delta x^L_t$, logits in SP-full-align diverge at sufficiently large width. We validate this claim for CIFAR-10 at moderate width in Figure F.34. This introduces width-dependent training dynamics. Consequently, our single-pass experiments in Appendix F.8 consistently show decaying optimal learning rates in SP-full-align for both SGD and Adam on MNIST, CIFAR-10 and generated multi-index data. We also observe that the maximal stable learning rate remains width-independent, as our theory would predict. This constitutes our only experiment in which the maximal stable learning rate scaling is suboptimal in deep nonlinear networks. We leave fully understanding the driving mechanism to future work.

5.3 A scaling-theoretic view on the practical success of CE loss in deep learning

Many success stories in deep learning, from computer vision to natural language processing, use the cross-entropy loss. We propose a scaling-theoretic explanation for this practical dominance. Our results show that networks trained under CE loss allow stable optimization at significantly larger learning rates in SP than under MSE loss, which recovers feature learning at large widths and consequently improves generalization.
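This stability gap already appears in the smallest possible example; a hypothetical 1-D logistic-regression toy (ours, not one of the paper's experiments), run at a learning rate far above the MSE stability threshold $2/x^2$:

```python
import math

# One parameter w, one training point (x, y); logits f = w * x.
x, y = 1.0, 1.0
eta = 3.0        # far above the MSE stability threshold 2 / x**2 = 2
steps = 200

def sigmoid(f):
    return 1.0 / (1.0 + math.exp(-f))

w_ce, w_mse = 0.0, 0.0
for _ in range(steps):
    w_ce -= eta * (sigmoid(w_ce * x) - y) * x   # CE gradient chi = sigma(f) - y is bounded
    w_mse -= eta * (w_mse * x - y) * x          # MSE gradient chi = f - y is unbounded

print(w_ce)    # stays moderate: sigma(f) saturates, so the steps shrink
print(w_mse)   # |1 - eta * x**2| = 2 > 1: geometric divergence
```

Under CE, the bounded gradient caps the step size at $\eta$ regardless of how large the logit grows; under MSE the same learning rate multiplies the iterate by $1 - \eta x^2 = -2$ every step.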
To empirically investigate this hypothesis, we compare the performance of CE and MSE losses under both SP and µP. Since µP admits asymptotically stable dynamics, both losses exhibit similar limiting behaviours. Thus we predict that CE loss only significantly outperforms MSE loss in SP, but not in µP. Our empirical findings confirm this prediction (Figure 7), which suggests that MSE loss may deserve renewed consideration as a practical choice under stable parameterizations like µP, especially given its theoretical simplicity and widespread use in theoretical analyses.

6 Discussion and future work

On the theoretical side, we have provided the first infinite-width proxy model for finite neural networks as they are initialized and trained in practice. On the practical side, we have seen that infinite-width feature learning and instability predictions are surprisingly predictive indicators of empirical width-scaling exponents, in particular for deep Transformers.

Better understanding of the controlled divergence regime. Since practical neural networks operate at the edge of the controlled divergence regime, better understanding parameterizations beyond the stable regime of Yang and Hu (2021) is paramount. Since the NTK diverges in SP with $\eta_n = \Theta(n^{-1/2})$, studying this limit is subtle. However, investigating the rescaled NTK might still be a useful tool for better understanding this limit. While width dependence is undesirable from a transfer perspective, fast
memorization under logit blowup may improve learning speed. How is generalization affected? Logit blowup may partially explain overconfidence in neural networks in SP, and suggests that wide networks in µP may be better calibrated.

Numerical considerations. In this paper, we consider the regime of sufficient numerical precision. From a numerical perspective, signals that diverge fast can leave floating-point range at moderate widths. Hence implementations that ensure minimal accumulation of width-dependent factors in SP, akin to Blake et al. (2025), could stabilize large-scale model training in practice.

Understanding optimal learning rate exponents. The exact conditions that induce hyperparameter transfer are still poorly understood. Without full width-independence, the optimal learning rate scaling cannot be predicted with certainty. Both vanishing feature learning in input-like layers and logit divergence can induce strong finite-width effects, so that we would still recommend µP learning rates over SP from a width-scaling perspective. Similar to CE loss, normalization layers correct scaling in the forward pass. In combination with Adam, which stabilizes the backward pass, such stabilizing components can correct most misscaled signals. Deeply understanding their interplay and effect on optimal learning rates remains an important direction for future work.

Acknowledgments and Disclosure of Funding

Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC number 2064/1 – Project number 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Moritz Haas. Leena Chennuru Vankadara is supported by the Gatsby Charitable Foundation.

References

Emmanuel Abbe, Enric Boix Adsera, and Theodor Misiakiewicz. SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics.
In The Thirty Sixth Annual Conference on Learning Theory (COLT), 2023. Cited on page 17.

Jeremy Bernstein and Laker Newhouse. Modular duality in deep learning. arXiv:2410.21265, 2024. Cited on page 22.

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning (ICML), 2023. Cited on page 30.

Johan Bjorck, Alon Benhaim, Vishrav Chaudhary, Furu Wei, and Xia Song. Scaling optimal LR across token horizons. In The Thirteenth International Conference on Learning Representations (ICLR), 2025. Cited on page 16.

Charlie Blake, Constantin Eichenberg, Josef Dean, Lukas Balles, Luke Yuri Prince, Björn Deiseroth, Andres Felipe Cruz-Salinas, Carlo Luschi, Samuel Weinbach, and Douglas Orr. u-µP: The unit-scaled maximal update parametrization. In The Thirteenth International Conference on Learning Representations (ICLR), 2025. Cited on page 9, 17.

Blake Bordelon and Cengiz Pehlevan. Self-consistent dynamical field theory of kernel evolution in wide neural networks. Advances in Neural Information Processing Systems (NeurIPS), 35:32240–32256, 2022. Cited on page 16.

Blake Bordelon and Cengiz Pehlevan. Deep linear network training dynamics from random initialization: Data, width, depth, and hyperparameter transfer. arXiv:2502.02531, 2025. Cited on page 16.

Blake Bordelon, Hamza Tahir Chaudhry, and Cengiz Pehlevan. Infinite limits of multi-head transformer dynamics. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS)
, 2024a. Cited on page 16.

Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, and Cengiz Pehlevan. Depthwise hyperparameter transfer in residual networks: Dynamics and scaling limit. In The Twelfth International Conference on Learning Representations (ICLR), 2024b. Cited on page 16.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS), 33:1877–1901, 2020. Cited on page 1, 16, 48.

Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for over-parameterized models using optimal transport. Advances in Neural Information Processing Systems (NeurIPS), 31, 2018. Cited on page 16.

Lénaïc Chizat and Praneeth Netrapalli. The feature speed formula: a flexible approach to scale hyper-parameters of deep neural networks. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024. Cited on page 16.

Lénaïc Chizat, Maria Colombo, Xavier Fernández-Real, and Alessio Figalli. Infinite-width limit of deep linear neural networks. Communications on Pure and Applied Mathematics, 77(10):3958–4007, 2024. Cited on page 16.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 2023. Cited on page 8.

Jeremy Cohen, Simran Kaur, Yuanzhi Li, J Zico Kolter, and Ameet Talwalkar. Gradient descent on neural networks typically occurs at the edge of stability. In International Conference on Learning Representations (ICLR), 2021. Cited on page 2, 5, 17.

Jeremy M Cohen, Behrooz Ghorbani, Shankar Krishnan, Naman Agarwal, Sourabh Medapati, Michal Badura, Daniel Suo, David Cardoze, Zachary Nado, George E Dahl, et al.
Adaptive gradient methods at the edge of stability. arXiv:2207.14484, 2022. Cited on page 5, 17.

Alex Damian, Eshaan Nichani, and Jason D. Lee. Self-stabilization: The implicit bias of gradient descent at the edge of stability. In ICLR, 2023. Cited on page 17.

Francesco D'Angelo, Maksym Andriushchenko, Aditya Varre, and Nicolas Flammarion. Why do we need weight decay in modern deep learning? In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024. Cited on page 17.

Li Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012. Cited on page 30.

Nolan Dey, Bin Claire Zhang, Lorenzo Noci, Mufan Li, Blake Bordelon, Shane Bergsma, Cengiz Pehlevan, Boris Hanin, and Joel Hestness. Don't be lazy: CompleteP enables compute-efficient deep transformers. arXiv:2505.01618, 2025. Cited on page 16.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations (ICLR), 2021. Cited on page 1.

Katie Everett, Lechao Xiao, Mitchell Wortsman, Alexander A Alemi, Roman Novak, Peter J Liu, Izzeddin Gur, Jascha Sohl-Dickstein, Leslie Pack Kaelbling, Jaehoon Lee, et al. Scaling exponents across parameterizations and optimizers. arXiv:2407.05872, 2024. Cited
on page 2, 3, 4, 8, 17, 18, 21, 45, 56, 57, 58, 59, 60, 61, 62.

Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on Gemini research and technology. arXiv:2403.08295, 2024. Cited on page 8.

Justin Gilmer, Behrooz Ghorbani, Ankush Garg, Sneha Kudugunta, Behnam Neyshabur, David Cardoze, George Edward Dahl, Zachary Nado, and Orhan Firat. A loss curvature perspective on training instabilities of deep learning models. In International Conference on Learning Representations (ICLR), 2022. Cited on page 17.

Eugene A. Golikov. Dynamically stable infinite-width limits of neural classifiers. arXiv:2006.06574, 2020. Cited on page 16.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv:2407.21783, 2024. Cited on page 1, 8.

Moritz Haas, Jin Xu, Volkan Cevher, and Leena Chennuru Vankadara. µP²: Effective sharpness aware minimization requires layerwise perturbation scaling. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024. Cited on page 16.

Soufiane Hayou and Greg Yang. Width and depth limits commute in residual networks. In International Conference on Machine Learning (ICML), 2023. Cited on page 16.

Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, and Thomas Hofmann. Understanding and minimising outlier features in transformer training. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024. Cited on page 17.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision (ICCV), 2015. Cited on page 1, 30.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. An empirical analysis of compute-optimal large language model training. Advances in Neural Information Processing Systems (NeurIPS), 35:30016–30030, 2022. Cited on page 1, 16.

Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural Tangent Kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2018. Cited on page 16.

Dayal Singh Kalra, Tianyu He, and Maissam Barkeshli. Universal sharpness dynamics in neural network training: Fixed point analysis, edge of stability, and route to chaos. In The Thirteenth International Conference on Learning Representations (ICLR), 2025. Cited on page 26.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv:2001.08361, 2020. Cited on page 1, 16.

Atli Kosson, Bettina Messmer, and Martin Jaggi. Rotational equilibrium: How weight decay balances learning across neural networks. In Forty-first International Conference on Machine Learning (ICML), 2024. Cited on page 17.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. Cited on page 30.

Daniel Kunin, Allan Raventos, Clémentine Carla Juliette Dominé, Feng Chen, David Klindt, Andrew M Saxe, and
Surya Ganguli. Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Cited on page 17, 30.

Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, and Guy Gur-Ari. The large learning rate phase of deep learning: the catapult mechanism. arXiv:2003.02218, 2020. Cited on page 2, 5, 17, 25, 26.

Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Yitzhak Gadre, Hritik Bansal, Etash Guha, Sedrick Scott Keh, Kushal Arora, et al. DataComp-LM: In search of the next generation of training sets for language models. Advances in Neural Information Processing Systems (NeurIPS), 37:14200–14282, 2024. Cited on page 7, 31.

Lightning AI. LitGPT. https://github.com/Lightning-AI/litgpt, 2023. Cited on page 30.

Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. In International Conference on Learning Representations (ICLR), 2020. Cited on page 17.

Peter J. Liu, Roman Novak, Jaehoon Lee, Mitchell Wortsman, Lechao Xiao, Katie Everett, Alexander A. Alemi, Mark Kurzeja, Pierre Marcenac, Izzeddin Gur, Simon Kornblith, Kelvin Xu, Gamaleldin Elsayed, Ian Fischer, Jeffrey Pennington, Ben Adlam, and Jascha Sohl-Dickstein. NanoDO: A minimal transformer decoder-only language model implementation in JAX. GitHub repository, 0.1.0, 2024. http://github.com/google-deepmind/nanodo. Cited on page 18.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR), 2019. Cited on page 7.

Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training. arXiv:1812.06162, 2018. Cited on page 17.

Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A mean field view of the landscape of two-layer neural networks.
Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018. Cited on page 16.
Lorenzo Noci, Sotiris Anagnostidis, Luca Biggio, Antonio Orvieto, Sidak Pal Singh, and Aurelien Lucchi. Signal propagation in transformers: Theoretical perspectives and the role of rank collapse. Advances in Neural Information Processing Systems (NeurIPS), 35:27198–27211, 2022. Cited on page 17.
Lorenzo Noci, Chuning Li, Mufan Li, Bobby He, Thomas Hofmann, Chris J Maddison, and Dan Roy. The shaped transformer: Attention models in the infinite depth-and-width limit. Advances in Neural Information Processing Systems (NeurIPS), 36, 2024a. Cited on page 16.
Lorenzo Noci, Alexandru Meterez, Thomas Hofmann, and Antonio Orvieto. Super consistency of neural network landscapes and learning rate transfer. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024b. Cited on page 16.
OLMo Team, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, et al. 2 olmo 2 furious. arXiv:2501.00656, 2024. Cited on page 1, 8.
Elliot Paquette, Courtney Paquette, Lechao Xiao, and Jeffrey Pennington. 4+3 phases of compute-optimal neural scaling laws. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024. Cited on page 16.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), 2019. Cited on page 30.
https://arxiv.org/abs/2505.22491v1
Dan Qiao, Kaiqi Zhang, Esha Singh, Daniel Soudry, and Yu-Xiang Wang. Stable minima cannot overfit in univariate relu networks: Generalization by large step sizes. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024. Cited on page 17.
Yunwei Ren, Eshaan Nichani, Denny Wu, and Jason D Lee. Emergence and scaling laws in sgd learning of shallow neural networks. arXiv:2504.19983, 2025. Cited on page 17.
Christopher J Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E Dahl. Measuring the effects of data parallelism on neural network training. arXiv:1811.03600, 2018. Cited on page 17.
Jascha Sohl-Dickstein, Roman Novak, Samuel S Schoenholz, and Jaehoon Lee. On the infinite width limit of neural networks with a standard parameterization. arXiv:2001.07301, 2020. Cited on page 2, 16.
Alexander Tsigler, Luiz FO Chamon, Spencer Frei, and Peter L Bartlett. Benign overfitting and the geometry of the ridge regression solution in binary classification. arXiv:2503.07966, 2025. Cited on page 17.
Leena Chennuru Vankadara, Jin Xu, Moritz Haas, and Volkan Cevher. On feature learning in structured state space models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024. Cited on page 16, 31.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems (NeurIPS), 2017. Cited on page 2, 30.
Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv:1011.3027, 2010. Cited on page 42.
Nikhil Vyas, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, and Cengiz Pehlevan. Feature-learning networks are consistent across widths at realistic scales. Advances in Neural Information Processing Systems (NeurIPS), 36, 2024. Cited on page 16, 17.
Jonathan Wenger, Felix Dangel, and Agustinus Kristiadi. On the disconnect between theory and practice of overparametrized neural networks. arXiv:2310.00137, 2023. Cited on page 16.
Mitchell Wortsman, Peter J Liu, Lechao Xiao, Katie E Everett, Alexander A Alemi, Ben Adlam, John D Co-Reyes, Izzeddin Gur, Abhishek Kumar, Roman Novak, Jeffrey Pennington, Jascha Sohl-Dickstein, Kelvin Xu, Jaehoon Lee, Justin Gilmer, and Simon Kornblith. Small-scale proxies for large-scale transformer training instabilities. In The Twelfth International Conference on Learning Representations (ICLR), 2024. Cited on page 8, 17, 30, 46.
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning (ICML), 2020. Cited on page 17.
Greg Yang. Wide feedforward or recurrent neural networks of any architecture are gaussian processes. Advances in Neural Information Processing Systems (NeurIPS), 32, 2019. Cited on page 16.
Greg Yang and Edward J. Hu. Tensor programs iv: Feature learning in infinite-width neural networks. In International Conference on Machine Learning (ICML), 2021. Cited on page 2, 3, 9, 16, 19, 20, 21, 22, 24, 27, 57.
Greg Yang and Etai Littwin. Tensor programs ivb: Adaptive optimization in the infinite-width limit. arXiv:2308.01814, 2023. Cited on page 3, 16, 23, 24.
Greg Yang, Edward J Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, and Jianfeng Gao. Tensor programs v: Tuning large neural networks via zero-shot hyperparameter transfer. arXiv:2203.03466, 2022. Cited on page 2, 16, 17, 18, 31.
Greg Yang, James B. Simon, and Jeremy Bernstein. A spectral condition for feature learning. arXiv:2310.17813, 2023a. Cited on page 4, 16, 18, 31.
Greg Yang, Dingli Yu, Chen Zhu, and Soufiane Hayou. Tensor programs vi: Feature learning in infinite-depth neural networks. arXiv:2310.02244, 2023b. Cited on page 16.
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. In International Conference on Learning Representations (ICLR), 2020. Cited on page 17.
Zixiong Yu, Songtao Tian, and Guhan Chen. Divergence of empirical neural tangent kernel in classification problems. International Conference on Learning Representations (ICLR), 2025. Cited on page 16.
Ruiqi Zhang, Jingfeng Wu, Licong Lin, and Peter L Bartlett. Minimax optimal convergence of gradient descent in logistic regression via large and adaptive stepsizes. arXiv:2504.04105, 2025. Cited on page 17.

Appendices

Appendix Contents.
A Detailed Related Work 16
B Take-aways for practitioners 18
C Theoretical considerations 19
C.1 Distilled TP scaling arguments 19
C.2 Measuring Alignment 21
C.3 Formal statements and proofs of Propositions 1 and 2 22
C.4 Scaling dynamics in 2-layer linear networks 25
D Experimental details 30
D.1 MLPs 30
D.2 Multi-index data 30
D.3 Language modeling 30
D.4 Figure Details 31
E Refined coordinate checks 31
E.1 SGD 32
E.2 Adam 38
E.3 Normalization layers and Adam provide robustness to mis-initialization 41
E.4 Alignment and update scaling in Transformers 42
F Empirical learning rate exponents 44
F.1 Summary of the MLP experiments in this section 44
F.2 Transformer experiments 45
F.3 Cross-entropy loss enables large-learning rate training 48
F.4 MLPs with SGD on MNIST 49
F.5 MLPs with ADAM on MNIST 51
F.6 MLPs with SGD on CIFAR-10 52
F.7 MLPs with ADAM on CIFAR-10 54
F.8 Effective update parameterizations beyond µP 56

A Detailed Related Work

Here we provide a more detailed account of related work than is possible in the main body of the paper.

Neural networks in the infinite-width limit. Past work has extensively analysed the Neural Tangent Parameterization (NTP) (Jacot et al., 2018) due to its tractability. But because feature learning vanishes in the infinite-width limit, finite networks in NTP behave qualitatively differently, and hence NTP is not an ideal model for understanding finite neural networks.
Finite-width deviations already accumulate after a few steps of training (Wenger et al., 2023), in particular under CE loss (Yu et al., 2025). Considerable effort has been invested in finding a descriptive infinite-width model for SP. Sohl-Dickstein et al. (2020) note that the NTK diverges under large learning rates $\eta_n = \omega(n^{-1})$ in SP, which motivates them to consider a different parameterization that preserves a finite NTK in the infinite-width limit, but consequently no longer corresponds to SP. Golikov (2020) studies a class of 'dynamically stable' parameterizations, allowing large learning rates under a variant of SP they call the 'sym-default' parameterization, which again is not equivalent to SP. Another popular width-dependent parameterization is the Maximal Update Parameterization (µP). It achieves a width-independent effect of the updates in all trainable weights on the output function. Its infinite-width limit has been observed to closely track finite networks in µP over long periods of training in, for example, feature learning strength, the learned function, or gradient and Hessian statistics (Vyas et al., 2024, Noci et al., 2024b). As an important practical consequence, it allows tuning small proxy models and training the large model only once with the optimal HPs (Yang et al., 2022). µP was derived using the Tensor Programs (TP) framework (Yang, 2019, Yang and Hu, 2021, Yang and Littwin, 2023), which, in theory, allows exactly tracking the learning dynamics of many popular architectures such as MLPs, ResNets and Transformers, trained with SGD or Adam in arbitrary parameterizations, in the infinite-width limit. Haas et al. (2024) derive a µP-like parameterization for sharpness-aware minimization algorithms, achieving joint transfer of the optimal learning rate and perturbation radius by showing that perturbations should be scaled like updates in µP. Vankadara et al.
(2024) derive an initialization and learning rate scaling rule that achieves width-independent training dynamics for the state-space model Mamba, which shows that the spectral condition on the weights and weight updates in every layer for achieving µP, provided by Yang et al. (2023a), does not apply to arbitrary architectures. At sufficient numerical precision, the mean-field parameterization (Mei et al., 2018, Chizat and Bach, 2018) is equivalent to µP. While it was initially restricted to shallow neural networks, the dynamical mean-field theory (DMFT) of Bordelon and Pehlevan (2022) generalizes it to more complex architectures, including Transformers (Bordelon et al., 2024a). Although still expensive, the approximate solvers from DMFT are more computationally feasible than iteratively solving the exact TP limit equations. Chizat et al. (2024) study deep linear networks in µP and show convergence of gradient flow to a minimum $\ell_2$-norm solution.

Other neural network scaling limits. Beyond width scaling, depth scaling $L \to \infty$ has been studied in detail. For ResNets, Yang et al. (2023b), Hayou and Yang (2023), Bordelon et al. (2024b) show that $L^{-1/2}$-scaling of shallow residual blocks induces depth-independence, and this limit commutes with width scaling, implying that depth can be scaled independently of width. Using approximate DMFT theory, Bordelon et al. (2024a) suggest that $L^{-1}$-depth scaling may be necessary to preserve feature learning in attention blocks, although they consider a pure depth limit. Dey
et al. (2025) confirm $L^{-1}$-block scaling to be the 'correct' scaling by providing additional desiderata and empirical evidence on Transformers. Bordelon et al. (2024a) also show that the infinite within-head-dimension limit effectively leads to a single-head Transformer, while the infinite-number-of-heads limit concentrates by aggregating over the coordinate distribution at fixed within-head size, closer to how scaling is typically performed in practice (Brown et al., 2020). Noci et al. (2024a) study a joint width and depth limit close to initialization for Transformers with the goal of preventing rank collapse. The long-training-time regime is much less understood. Bordelon and Pehlevan (2025) study the training dynamics of deep and wide linear networks trained on structureless Gaussian data. Chizat and Netrapalli (2024) consider the angle between activations and gradients to give scaling rules for hyperparameters, toward automatic HP scaling. They correct the output-layer scaling of MLPs in µP depth-dependently, for SGD only.

Scaling laws. Robust compute-optimal scaling laws in LLMs were reported by Kaplan et al. (2020), Hoffmann et al. (2022). Paquette et al. (2024) provide theory on random feature models trained with one-pass SGD and identify 4 phases and 3 subphases depending on properties of the data and the target. Bjorck et al. (2025) observe no transfer across token horizons, but a predictable scaling law with exponent $-0.32$ on Llama. McCandlish et al. (2018) suggest that the optimal learning rate scales as $a/(1 + b/\text{batch size})$ with setting-dependent constants $a, b$. Hence, for sufficiently large batch size the optimal learning rate is roughly constant, which is in line with the empirical observations of Shallue et al. (2018), Yang et al. (2022). Ren et al.
(2025) study SGD training of 2-layer MLPs on isotropic Gaussian data under MSE loss and find that different teacher neurons are abruptly learned at different timescales, leading to a smooth scaling law in the cumulative objective. Further work toward assessing the compute-optimal and data-optimal Pareto frontiers under realistic assumptions remains an important and challenging task for future work.

Finite-width training dynamics. Understanding finite-width training dynamics complements infinite-width theory very well, as the former line of work operates at fixed width, while the latter asks what changes with increasing width. From a practical perspective, scaling networks with µP appears to preserve the properties from base width (Vyas et al., 2024, Noci et al., 2022). Deep understanding of neural network training dynamics is still limited to 2-layer nonlinear MLP (Ren et al., 2025, Zhang et al., 2025) or (deep) linear MLP (Kunin et al., 2024, Tsigler et al., 2025) toy models under strong distributional assumptions. Kunin et al. (2024) explain for 2-layer networks that varying the layerwise initialization variance and learning rate scaling induces differing learning regimes: fast feature learning in balanced parameterizations (desirable for linear nets); faster learning of earlier layers in upstream parameterizations with small parameter movement (desirable in nonlinear networks, as it reduces time to grokking and the sample complexity of hierarchical data structures); and faster learning of later layers in downstream initializations (that is, initial lazy fitting followed by slow feature learning). Abbe et al. (2023) show
that, as opposed to lazy networks, feature-learning networks can learn low-rank spikes in hidden-layer weights/kernels that help with sparse tasks. Qiao et al. (2024) show that large learning rates induce sparse linear spline fits in univariate gradient descent training by showing that all stable minima are flat, non-interpolating, and produce small first-order total variation; hence they avoid overfitting and learn functions with bounded first-order total variation.

Edge of stability. Large learning rates have broadly been observed to induce optimal generalization. Lewkowycz et al. (2020) observe that under large learning rates at the edge of stability, $2/\lambda_0 < \eta < c_{\mathrm{arch}}/\lambda_0$ (where $c_{\mathrm{arch}} = 12$ for ReLU nets), an initial blowup lasting a training time of at least $\log(n)$ induces a bias towards flatter minima. Cohen et al. (2021) find loss spikes during training, but observe that training self-stabilizes through sharpness reduction. Damian et al. (2023) develop some understanding of the mechanisms behind EOS dynamics. For Adam, the preconditioner matrix provides an additional mechanism by which stabilization can occur (Cohen et al., 2022, Gilmer et al., 2022).

Warmup. Warmup allows stability under larger learning rates via slow sharpness reduction (?). The same work also shows that warmup allows using larger learning rates than would otherwise be stable by constantly operating at the edge of stability. Warmup does not improve performance by itself but stabilizes training; by allowing training with larger learning rates, it often induces improved performance. Large catapults harm Adam by persisting in its memory. Hence, Adam's optimal learning rate is further away from the failure boundary than for SGD, and Adam benefits more from longer warmup. Above the optimal learning rate, Adam has a regime of training failure, where early catapults persist in the second moment and prevent learning. Warmup also widens the regime of near-optimal learning rate choices. Liu et al.
(2020) find that Adam in particular needs warmup due to large initial variance.

Effective learning rates. Kosson et al. (2024) study the effect of weight decay on the rotation of weight vectors, which influences the effective learning rate. Also see the references therein for literature on effective learning rates, which is related to the alignment discussion in this paper.

Stability of Transformer training. More empirically, a plethora of works study the training stability of large-scale Transformers with respect to warmup, weight decay (D'Angelo et al., 2024), batch size (You et al., 2020), the optimizer (Kosson et al., 2024), the position of normalization layers (Xiong et al., 2020), and their interplay with the parameterization and numerical considerations (Wortsman et al., 2024, Blake et al., 2025, Everett et al., 2024). Wortsman et al. (2024) find that qk-Layernorm stabilizes Transformer training beyond the stabilizing effect of using µP. Xiong et al. (2020) propose pre-LN for enhanced training stability, requiring less warmup. He et al. (2024) observe that outlier features (i.e., extremely activated coordinates in activations) emerge quickly in Transformer training with AdamW, and that rank collapse under strong correlations between inputs is correlated with more outlier features. Non-diagonal preconditioning like SOAP and Shampoo resolves the issue. Most relevant to our work, Everett et al. (2024) perform extensive and insightful experiments for NanoDO decoder-only Transformers (Liu
et al., 2024) in SP, µP, NTP and mean-field parameterizations with corrected layerwise learning rate scalings, questioning the infinite-width alignment predictions between weights and incoming activations at finite width over the course of long training. They recommend SP with Adam in conjunction with µP learning rate scaling (which they call SP-full-align) as the best-performing empirical parameterization in terms of generalization, learning rate transfer and learning rate sensitivity.

B Take-aways for practitioners

In this paper, the term parameterization refers to the width-dependent scaling of the initialization and learning rate of each trainable weight tensor separately. Studying parameterizations then means applying a scaling rule for layerwise initialization variances and learning rates, and understanding how relevant quantities such as the update scaling in activations and logits evolve, and where instabilities may arise at large widths. At some fixed base width, all parameterizations can be considered equivalent if we allow tuning constant multipliers. For properly comparing the performance of parameterizations, constant weight and initialization multipliers should be tuned separately for each parameterization at some fixed base width (Yang et al., 2022). This adjusts the layerwise activation and gradient size at finite width. The parameterization then prescribes the rule by which the layerwise initialization and updates are rescaled when changing the width, in terms of the ratio width/base_width. Alternatively, parameterizations may be made equivalent at base width, such that only one set of weight multipliers has to be tuned. The extensive LLM experiments in Everett et al. (2024) suggest that the advantage of large last-layer initialization may just be an artifact of the community extensively tuning performance in SP; after also tuning all layerwise multipliers for µP, the performance difference vanishes.
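To make the notion of a parameterization concrete, the layerwise rescaling rule can be written as a small helper mapping a layer's role and the width ratio width/base_width to initialization and learning-rate multipliers. This is a minimal sketch under our own naming (not the paper's code), using the µP-for-SGD exponents quoted in Appendix C.1: larger input-layer learning rate, smaller last-layer initialization and learning rate.

```python
# Hypothetical sketch: a parameterization as a layerwise rescaling rule.
# m = width / base_width. Exponents follow muP for SGD: input-layer lr
# scales up by m, last-layer init and lr scale down by 1/m, hidden-layer
# init follows the usual 1/sqrt(fan_in) rule (a factor m**-0.5).

def mup_sgd_multipliers(layer_role: str, m: float) -> dict:
    """Return init-std and learning-rate multipliers for one weight tensor."""
    rules = {
        "input":  {"init_std": 1.0,       "lr": m},          # fixed -> width dim
        "hidden": {"init_std": m ** -0.5, "lr": 1.0},        # width -> width dim
        "output": {"init_std": m ** -1.0, "lr": m ** -1.0},  # width -> fixed dim
    }
    return rules[layer_role]

# Doubling the width: input-layer lr doubles, last-layer init and lr halve.
print(mup_sgd_multipliers("input", 2.0))   # {'init_std': 1.0, 'lr': 2.0}
print(mup_sgd_multipliers("output", 2.0))  # {'init_std': 0.5, 'lr': 0.5}
```

Constant multipliers tuned at the base width (where m = 1) are untouched by these rules, matching the tuning protocol described above.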
While SP performs better than naive theory would predict, and can learn hidden-layer features width-independently under CE loss, feature learning still vanishes in input-like layers such as embedding or Layernorm layers under both SGD and Adam. Still, only µP learning rate scaling effectively updates all layers. Small last-layer initialization then recovers fully width-independent, and hence predictable, scaling dynamics under sufficient precision in the regime $n \gg d_{out}$, whereas standard last-layer initialization induces logit blowup at sufficient width, which is not necessarily harmful for generalization but reduces predictability, as scaling is not fully width-independent. Standard initialization with µP learning rates (SP-full-align) can induce 'practical transfer' and empirically update all weights effectively without logit blowup at moderate scales in the regime $n \ll d_{out}$, where the width is much smaller than the output dimension, as is relevant for NLP settings, but likely exhibits unexpectedly changed behaviour at sufficient scale, when logits start to diverge due to the last-layer term $W^{L+1}_0 \Delta x^L_t$. This can be read off from the differing dominating terms in $\|W^{L+1}_0\|_{RMS \to RMS}$, assuming width-independent alignment $\alpha_{W^{L+1}_0 \Delta x^L_t} = \Theta(1)$, as we measure. For uniform non-asymptotic transfer in both regimes $n \gg d_{out}$ and $n \ll d_{out}$, by the same argument we suggest a last-layer initialization $\sigma_{L+1} = (\mathrm{fan\_in}/\sqrt{\mathrm{fan\_out}} + \sqrt{\mathrm{fan\_in}})^{-1}$ that transitions from SP initialization in the regime $n \ll d_{out}$ to µP initialization in the regime $n \gg d_{out}$, as proposed in Yang et al. (2023a). For AdamW without using width-dependent weight multipliers, layer-balancing µP learning rates are simply given by the
learning rate scaling $\eta(W) = \eta/\mathrm{fan\_in}(W)$. Here, all biases as well as normalization layer weights should be understood as weights acting on the one-dimensional input 1, hence $\mathrm{fan\_in} = 1$. For recovering width-independent weight decay, the weight decay requires the inverse scaling $\mathrm{wd} \cdot \mathrm{fan\_in}(W)$.

TP-like width-scaling arguments are very useful for identifying sources of divergence or shrinkage with scale, and architecture components such as normalization layers and training algorithms such as Adam correct most, but not all, divergent or vanishing scalings in the forward and backward pass, respectively. Of particular importance for evaluating width-dependent signal propagation is the refined coordinate check (RCC), which disentangles effective updates in the current layer from updates propagating forward through the network. Ideally, all $W_0 \Delta x^L_t$ and $\Delta W_t x_t$ should remain width-independent, which is only guaranteed in µP at sufficient width.

C Theoretical considerations

C.1 Distilled TP scaling arguments

Here we aim to provide a more detailed, comprehensive introduction to the essential width-scaling arguments inspired by Tensor Programs (TP) theory.

Effective Updates. When training neural networks, we have control over the initial scaling $W_0$ and the update scaling $\Delta W_t$ of trainable weights $W_t = W_0 + \Delta W_t$, but we are ultimately interested in their effect on the activations in the following layers. In standard architectures (including convolutional networks and Transformers), weights typically act linearly on the incoming activations. For such weights $W_t$ and incoming activations $x_t$, we can decompose the next layer's (pre-)activation updates $\Delta h_t$ into effective updates of $W_t$ and activation updates $\Delta x_t$ propagating forward from previous layers. Evaluating the contributions of both terms separately yields a refined coordinate check,

$$\Delta h_t = (\Delta W_t)\, x_t + W_0\, (\Delta x_t). \qquad \text{(RCC)}$$

Note that updates of previous layers can propagate forward through the term $W_0 \Delta x_t$ even when the current layer's effect on the output vanishes, $\Delta W_t x_t \to 0$ as width $n \to \infty$. Hence, we say that the weight $W_t$ is effectively updated only if $\Delta W_t x_t$ contributes non-vanishingly. Plotting the width-dependent scaling of $\|(\Delta W_t) x_t\|_{RMS}$ and $\|W_0 (\Delta x_t)\|_{RMS}$ as a refined coordinate check has been very useful for us to gain insight into the network-internal signal propagation. The usefulness of (RCC) for effective update scalings is illustrated in Figure C.1. While the activations and activation updates in a Layernorm layer evolve width-independently when training GPT with Adam in SP with a global learning rate scaled as $\eta_n = \eta \cdot n^{-1}$, the refined coordinate check reveals that the effective updates in the current (input-like) layer vanish, and that the activation update scaling instead stems from effective updates propagating forward from previous (hidden-like) layers. By choosing layerwise initialization variances and learning rates according to the Maximal Update Parameterization (µP), both terms in (RCC) become width-independent in all layers in each update step. Consequently, width-scaling becomes predictable and stable, and feature learning is preserved even at large width. Starting from SP, µP can be realized with smaller last-layer initialization $\|W^{L+1}_0\|_{RMS} = O(n^{-1})$, larger input-layer learning rate $\eta_{W^1} = \eta \cdot n$ and smaller last-layer learning rate $\eta_{W^{L+1}} = \eta \cdot n^{-1}$ for SGD.

Predicting scaling exponents. While the TP framework formally requires writing out all forward- and backward-pass computations performed during training and provides the exact infinite-width limit objects of output logits and activation coordinate distributions, we simplify its implications for width-scaling exponents for practical purposes as follows. A linear transformation either maps
fixed to width-scaling dimension (input-like), width-scaling to width-scaling (hidden-like), or width-scaling to fixed dimension (output-like). Here, all bias vectors and normalization layer weights can be understood as input-like weights acting on the one-dimensional input 1. Any sum of length $n \to \infty$ that occurs in individual terms in (RCC) either accumulates a factor $n^{1/2}$ under sufficient independence of 0-mean summands (CLT-like behaviour), or a factor $n$ when the summands are correlated or have non-zero mean (LLN-like behaviour). Crucially, this heuristic may not be applied to arbitrary sums, but only to weight and activation (update) pairs as in (RCC) (see Yang and Hu (2021, Appendix H)). If, for example, we considered the confounded term $(W_0 + \Delta W_t) x_0$, the initial part $W_0 x_0$ clearly scales CLT-like but $\Delta W_t x_0$ scales LLN-like; evaluating the scaling of their sum might result in wrong scaling predictions. At sufficient width, all width-scaling inner products $(\Delta W_t, x_t)$ from (RCC), however, are expected to behave LLN-like, that is, $\|\Delta W_t x_t\|_{RMS} = \Theta(n \cdot \|\Delta W_t\|_{RMS} \cdot \|x_t\|_{RMS})$.

Figure C.1: (Tracking effective updates requires a refined coordinate check.) Activation norm $\|W_t x_t\|_{RMS}$, activation update norm $\|\Delta(W_t x_t)\|_{RMS}$, propagating update norm $\|W_0 \Delta x_t\|_{RMS}$ and effective update norm $\|\Delta W_t x_t\|_{RMS}$ for the last normalization layer in GPT trained with Adam and learning rate scaling $\eta_n = 0.01 \cdot n^{-1}$ for width-independent hidden-layer feature learning. While activations and activation updates appear width-independent due to propagating updates, our refined coordinate check (RCC) reveals that Layernorm weight updates have a vanishing effect in SP. Over time, effective updates accumulate effective rank, but do not lose alignment with width (Figure 2).

Concrete examples. Complementing the more generic introduction to TP scaling arguments above, we now provide more concrete examples illustrating how weight updates affect activations in subsequent forward passes.
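As a first concrete example, the decomposition (RCC) can be verified numerically: for any weight update, the preactivation change splits exactly into the effective-update and propagated-update terms, whose RMS norms are the quantities plotted in our refined coordinate checks. The following is a minimal numpy sketch with our own helper names, using a toy hidden-like layer and a rank-1, activation-correlated update.

```python
import numpy as np

def rms(v):
    """RMS norm: l2 norm divided by sqrt(dimension)."""
    return float(np.sqrt(np.mean(v ** 2)))

def refined_coordinate_check(W0, dW, x0, dx):
    """Split dh = W_t x_t - W_0 x_0 into (dW) x_t + W_0 (dx), as in (RCC)."""
    x_t = x0 + dx
    dh = (W0 + dW) @ x_t - W0 @ x0
    effective, propagated = dW @ x_t, W0 @ dx
    assert np.allclose(dh, effective + propagated)  # the split is exact
    return rms(effective), rms(propagated)

# Toy hidden-like layer at width n: W0 has Theta(n^{-1/2}) entries (CLT-like
# forward pass); dW is a rank-1 SGD-style update correlated with x_t, so its
# effect on the preactivations accumulates LLN-like.
rng = np.random.default_rng(0)
n = 1024
W0 = rng.normal(0.0, n ** -0.5, (n, n))
x0 = rng.normal(0.0, 1.0, n)
dx = rng.normal(0.0, 0.1, n)
dW = np.outer(rng.normal(0.0, 1.0, n), x0 + dx) / n  # aligned with incoming x_t
eff, prop = refined_coordinate_check(W0, dW, x0, dx)
```

Here `eff` stays $\Theta(1)$ because the update is aligned with $x_t$, while `prop` only reflects the incoming activation change, illustrating why the two terms must be tracked separately.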
Consider the Tensor Program for training MLPs with SGD from Yang and Hu (2021, Appendix H). We restate the relevant update scalings when using large learning rates in SP that induce output blowup. Since divergence is not allowed in the TP framework, it does not formally cover the unstable case, but we can still heuristically write down the scaling predictions, assuming that correlations still induce LLN-like exponents and independence still induces CLT-like exponents, as we have measured to hold empirically. The crucial insight is that training with cross-entropy loss effectively means that we are considering $f_t(\xi) = \sigma(W^{L+1}_t x^L_t(\xi))$ as the output function, and the loss derivative becomes $\chi_t := \frac{\partial L_t}{\partial f} = \sigma(W^{L+1}_t x^L_t) - y_t$. Hence, from a stability point of view, we can allow $\tilde f_t := W^{L+1} x^L \to \infty$, which results in a saturated softmax. Under one-hot labels $y \in \{0,1\}^C$ with $\sum_c y_c = 1$, this means fast memorization of the training points $(x_i, y_i)$. For width-independent hidden-layer feature learning, we may still require activations to have width-independent coordinate-scaling, but let the output function be arbitrary, since the softmax renormalizes.

Definition C.1 (Activation stability). A parameterization is activation stable iff $\|x^l_t\|_{RMS} = \Theta(1)$ for all times $t \geq 0$ and all layers $l \in [L]$. ◀

We now show heuristically that MLPs trained with SGD in SP are activation stable and feature learning under the global learning rate scaling $\eta = \Theta(n^{-1/2})$.

Backward pass. Here we denote the entry-wise scaling in width-scaling vectors as $v = \Theta(n^c)$, meaning $\|v\|_{RMS} = \Theta(n^c)$. Assuming $\|\phi'(h^l_t)\|_{RMS} = \Theta(1)$ as for ReLU (otherwise we would get vanishing gradients), the entries of the following width-scaling vectors scale as

$$\frac{\partial f}{\partial x^L_t} = W^{L+1}_t = W^{L+1}_0 + \Delta W^{L+1}_t = O(n^{-1/2}), \qquad \frac{\partial f}{\partial h^l_t} = \frac{\partial f}{\partial x^l_t} \odot \phi'(h^l_t) = \Theta\Big(\frac{\partial f}{\partial x^l_t}\Big),$$

$$\frac{\partial f}{\partial x^{l-1}_t} = (W^l_0)^\top \frac{\partial f}{\partial h^l_t} - \eta\, \theta_{W^l} \sum_{s=0}^{t-1} \chi_s \Big(\frac{\partial f}{\partial h^l_s}\Big)^{\!\top} \frac{\partial f}{\partial h^l_t}\; x^{l-1}_s = \Theta\Big(\max\Big(\frac{\partial f}{\partial h^l_t},\ \eta \Big(\frac{\partial f}{\partial h^l_s}\Big)^{2} x^{l-1}_s\Big)\Big) = \Theta(n^{-1/2}).$$

Note that any larger learning rate scaling would induce exploding gradients. For example, $\eta = \Theta(1)$ induces $\delta W^{L+1}_1 = \Theta(1)$, so $\frac{\partial f}{\partial x^L_1} = \Theta(1)$ and $\frac{\partial f}{\partial x^{L-k}_1} = \Theta\big(n \frac{\partial f}{\partial x^{L-k+1}_1}\big) = \Theta(n^{2k-1})$ for $k \geq 1$. This results in exploding activations in the next forward pass, and even larger gradients in the following backward pass. We therefore continue with $\eta = \Theta(n^{-1/2})$, and get the activation updates

$$\delta h^1_t = -\eta\, \chi_{t-1}\, \frac{\partial f}{\partial h^1_{t-1}}\, (\xi_{t-1})^\top \xi = \Theta(n^{-1/2} \cdot 1 \cdot n^{-1/2} \cdot 1) = \Theta(n^{-1}),$$

$$\delta h^l_t = W^l_{t-1}\, \delta x^{l-1}_t + \delta W^l_t\, x^{l-1}_t = W^l_0\, \delta x^{l-1}_t - \eta\, \theta_{W^l} \sum_s \chi_{s-1} \underbrace{\frac{\partial f}{\partial h^l_{s-1}}}_{\Theta(n^{-1/2})} \underbrace{(x^{l-1}_{s-1})^\top \delta x^{l-1}_t}_{O(n)} - \eta\, \chi_{t-1}\, \theta_{W^l} \underbrace{\frac{\partial f}{\partial h^l_{t-1}}}_{\Theta(n^{-1/2})} \underbrace{(x^{l-1}_{t-1})^\top x^{l-1}_t}_{\Theta(n)} = \Theta(1).$$

The output updates are

$$\delta \tilde f_t(\xi) = \delta W^{L+1}_t x^L_t(\xi) + (W^{L+1}_0 + \Delta W^{L+1}_{t-1})\, \delta x^L_t(\xi) = -\eta\, \chi_{t-1} \underbrace{(x^L_{t-1})^\top x^L_t(\xi)}_{\Theta(n)} + \underbrace{W^{L+1}_0 \delta x^L_t(\xi)}_{\Theta(1)} + \underbrace{\Delta W^{L+1}_{t-1} \delta x^L_t(\xi)}_{\Theta(n^{1/2})} = \Theta(n^{1/2}),$$

$$\delta f_t(\xi) = \sigma(\tilde f_{t-1} + \delta \tilde f_t) - \sigma(\tilde f_{t-1}) = \Theta(1).$$

2-layer networks. Observe that in 2-layer nets there are no hidden layers, so that a larger learning rate can be chosen. Let $\eta = \Theta(n^c)$. Then in the first step, $\delta h^1_1 = \Theta(\eta \frac{\partial f}{\partial h^1_0}) = \Theta(n^c n^{-1/2})$. But note that the gradient scaling may grow after the first step, $\frac{\partial f}{\partial x^L_1} = W^{L+1}_1 = \Theta(n^c)$, so that $\delta h^1_2 = \Theta(n^c n^c)$. Hence activation stability requires $\eta = O(1)$, which results in feature learning after 2 steps, $\delta x^1_2 = \Theta(1)$. Then $\tilde f_t = \Theta(\eta\, (x^L_{t-1})^\top x^L_t(\xi)) = \Theta(n)$.

Random feature models. In random feature models, we only train the last layer and keep all other weights fixed, $W^l_t = W^l_0$ for all $l \leq L$. There, by definition, we do not get feature learning and the backward pass does not matter.
The only gradient that matters is the last-layer gradient, which has fixed scaling $\Theta(\chi_{t-1} x^L_{t-1}) = \Theta(1)$ at all times $t \ge 0$. The function update becomes $\delta W^{L+1}_t x^L(\xi) = -\eta \chi_{t-1} (x^L(\xi_{t-1}))^\top x^L(\xi) = \Theta(n^c n)$, where the inner product between activations converges to the NNGP kernel in the infinite-width limit. Hence large learning rates $\eta = \omega(n^{-1})$ result in immediate extreme memorization of the training points, $f_t(\xi_{t-1}) \to \text{one-hot}(y_{t-1})$ as $n \to \infty$, and $\eta_n = \Theta(n^{-1})$ results in fully width-independent dynamics.

Adam. Adam with small enough $\varepsilon$ normalizes the gradients in each layer before updating the weights. Since the gradients $\nabla_{W^l}\mathcal{L} = \chi \frac{\partial f}{\partial h^l}(x^{l-1})^\top$ are generally correlated with the incoming activations $x^{l-1}$, their inner product accumulates $\Theta(\text{fan\_in})$. Non-vanishing correlation persists when only the signs of the gradient are retained. Hence, for a width-independent effect on the output of the current layer, the learning rate should always be chosen as $\eta(W) = \frac{\eta}{\text{fan\_in}(W)}$. Since both hidden and output layers have $\text{fan\_in} = n$, activation stability requires a global learning rate $\eta = O(n^{-1})$, which results in effective hidden- and output-layer learning, but vanishing input-layer updates. Networks recover input-layer feature learning under $\eta = \Theta(1)$, where $\tilde f_t = \Theta(n)$. In random feature models, $\eta$ just determines the extremeness of memorization of the training labels, where $\eta = \Theta(n^{-1})$ induces width-independence and $\eta = \omega(n^{-1})$ increasing memorization.

C.2 Measuring Alignment

Everett et al. (2024, Fig. 2) provide RMS-alignment exponents between weights $W_t$ and incoming activations $x_t$. But only measuring the alignment between $\Delta W_t$ and $x_t$, as well as between $W_0$ and $\Delta x_t$ from (RCC), separately allows evaluating the width-scaling predictions from Yang and Hu (2021). For example, hidden layers in µP scale as $(W^l_0)_{ij} = \Theta(n^{-1/2})$ at initialization, as zero-mean independence induces CLT-like scaling $W^l_0 x^{l-1}_0 = \Theta(n^{1/2} \cdot \|W^l_0\|_{RMS} \cdot \|x^{l-1}_0\|_{RMS})$. But updates are correlated with incoming activations, so that $\Delta W_t x_t = \Theta(n \cdot \|\Delta W_t\|_{RMS} \cdot \|x_t\|_{RMS})$, which necessitates $\|\Delta W_t\|_{RMS} = \Theta(n^{-1})$. This implies that the entry size of $W_t = W_0 + \Delta W_t$ is dominated by the initialization, which confounds $\|W_t\|_{RMS}$ for accurately measuring the alignment exponent of the layer's updates $\Delta W_t$. For correct width-scaling of the layer's learning rate, the influence of $W_0$ is irrelevant, so the joint alignment between $W_t$ and $x_t$ does not reveal the alignment exponent that is relevant for correct learning rate scaling. Additionally, replacing the RMS-norm $\|A\|_{RMS}$ by the operator norm $\|\Delta W_t\|_{RMS \to RMS}$ provides a more natural measure of alignment (Bernstein and Newhouse, 2024), since the RMS-norm is confounded by accumulated rank, whereas under maximal alignment for the operator norm it holds that $\|\Delta W_t x_t\|_{RMS} = \|\Delta W_t\|_{RMS \to RMS} \|x_t\|_{RMS}$, and the left-hand side is smaller under less alignment. Under perfect alignment we expect the ratio $\frac{\|\Delta W_t x_t\|_{RMS}}{\|\Delta W_t\|_{RMS \to RMS}\,\|x_t\|_{RMS}}$ to remain width-independent. We are not interested in constant prefactors, but only in width-dependent scaling.

C.3 Formal statements and proofs of Propositions 1 and 2

Before providing the full formal statements of Proposition 1 and Proposition 2, we formally introduce all definitions and assumptions.
C.3.1 Definitions

In this section, we collect all definitions that do not appear in the main text. We adopt all definitions from Yang and Hu (2021), up to minor modifications. If not stated otherwise, limits are taken with respect to width $n \to \infty$.

Definition C.2 (Big-O Notation). Given a sequence of scalar random variables $c = \{c_n \in \mathbb{R}\}_{n=1}^\infty$, we write $c = \Theta(n^{-a})$ if there exist constants $A, B \ge 0$ such that, for almost every instantiation of $c$, for $n$ large enough, $A n^{-a} \le |c_n| \le B n^{-a}$. Given a sequence of random vectors $x = \{x_n \in \mathbb{R}^n\}_{n=1}^\infty$, we say $x$ has coordinates of size $\Theta(n^{-a})$, and write $x = \Theta(n^{-a})$, to mean that the scalar random variable sequence $\sqrt{\|x_n\|^2/n}$ is $\Theta(n^{-a})$. For the definitions of $c = O(n^{-a})$ and $c = \Omega(n^{-a})$, adapt the above definition of $c = \Theta(n^{-a})$ by replacing $A n^{-a} \le |c_n| \le B n^{-a}$ with $|c_n| \le B n^{-a}$ and $A n^{-a} \le |c_n|$, respectively. We write $x_n = o(n^{-a})$ if $n^a \cdot \sqrt{\|x_n\|^2/n} \to 0$ almost surely. ◀

Definition C.3 (SGD update rule). Given an $(L+1)$-layer MLP with layerwise initialization variances $\{\sigma_l\}_{l \in [L+1]}$ and (potentially) layerwise learning rates $\{\eta_{W^l}\}_{l \in [L+1]}$, we define the SGD update rule as follows: (a) Initialize the weights iid as $(W^l_0)_{ij} \sim \mathcal{N}(0, \sigma^2_l)$. (b) Update the weights via $W^l_{t+1} = W^l_t - \eta_{W^l} \cdot \nabla_{W^l}\mathcal{L}(f_t(\xi_t), y_t)$. ◀

Definition C.4 (Parameterization). We define a width-scaling parameterization as a collection of exponents $\{b_l\}_{l \in [L+1]} \cup \{c_l\}_{l \in [L+1]}$ that determine layerwise initialization variances $\sigma^2_l = c_l \cdot n^{-b_l}$ and layerwise learning rates $\eta_l = \eta \cdot n^{-c_l}$, with width-independent constants $c_l, \eta > 0$ for all $l \in [L+1]$. ◀

Definition C.5 (Training routine). A training routine is a combination of a base learning rate $\eta \ge 0$, a training sequence $\{(\xi_t, y_t)\}_{t \in \mathbb{N}}$ and a continuously differentiable loss function $\mathcal{L}(f(\xi), y)$, using the SGD update rule. ◀

Definition C.6 (Stability). We say a parametrization of an $(L+1)$-layer MLP is stable if

1. For every nonzero input $\xi \in \mathbb{R}^{d_{in}} \setminus \{0\}$,
$$h^l_0, x^l_0 = \Theta_\xi(1)\ \forall l \in [L], \quad \text{and} \quad \mathbb{E} f_0(\xi)^2 = O_\xi(1),$$
where the expectation is taken over the random initialization.

2. For any training routine, any time $t \in \mathbb{N}$, $l \in [L]$, $\xi \in \mathbb{R}^{d_{in}}$, we have
$$h^l_t(\xi) - h^l_0(\xi),\ x^l_t(\xi) - x^l_0(\xi) = O^*(1), \quad \text{and} \quad f_t(\xi) = O^*(1),$$
where the hidden constant in $O^*$ can depend on the training routine, $t$, $\xi$, $l$ and the initial function $f_0$. ◀

Definition C.7 (Nontriviality). We say a parametrization is trivial if, for every training routine, $f_t(\xi) - f_0(\xi) \to 0$ almost surely for $n \to \infty$, for every time $t > 0$ and input $\xi \in \mathbb{R}^{d_{in}}$. Otherwise the parametrization is nontrivial. ◀

Definition C.8 (Feature learning). We say a parametrization admits feature learning in the $l$-th layer if there exists a training routine, a time $t > 0$ and an input $\xi$ such that $x^l_t(\xi) - x^l_0(\xi) = \Omega^*(1)$, where the constant may depend on the training routine, the time $t$, the input $\xi$ and the initial function $f_0$, but not on the width $n$. ◀

Definition C.9 (σ-gelu). Define σ-gelu to be the function
$$x \mapsto \frac{x}{2}\Big(1 + \mathrm{erf}\big(\sigma^{-1}x\big)\Big) + \frac{\sigma\, e^{-\sigma^{-2}x^2}}{2\sqrt{\pi}}.$$
◀

In order to apply the Tensor Program Master Theorem, all Nonlin and Moment operations in the NE⊗OR⊤ program (Yang and Littwin, 2023) that do not only contain parameters as inputs are required to be pseudo-Lipschitz in all of their arguments. For training with SGD, this is fulfilled as soon as $\phi'$ is pseudo-Lipschitz. σ-gelu fulfills this assumption.

Definition C.10 (Pseudo-Lipschitz). A function $f: \mathbb{R}^k \to \mathbb{R}$ is called pseudo-Lipschitz of degree $d$ if there exists a $C > 0$ such that $|f(x) - f(y)| \le C\|x - y\|\big(1 + \sum_{i=1}^k |x_i|^d + |y_i|^d\big)$. We say $f$ is pseudo-Lipschitz if it is so for some degree $d$. ◀

C.3.2 Full formal statements of Propositions 1 and 2

Assumptions.
For all of the results in this section, we assume that the activation function used is σ-gelu for $\sigma > 0$ sufficiently small. For small enough $\sigma > 0$, σ-gelu (Definition C.9) approximates ReLU arbitrarily well. We assume constant training time $t \ge 1$ as width $n \to \infty$. We assume batch size 1 for clarity, but our results can be extended without further complications to arbitrary fixed batch size.

Proposition C.11 (Asymptotic regimes in SP). For fixed $L \ge 2$, $t \ge 1$, $\eta > 0$, $\alpha \in \mathbb{R}$, consider training an $(L+1)$-layer MLP of width $n$ in SP with SGD and global learning rate $\eta_n = \eta \cdot n^{-\alpha}$ for $t$ steps. Then the logits $f_t$, training loss $\mathcal{L}(f_t(\xi_t), y_t)$, loss-logit derivatives $\chi_t := \partial_f \mathcal{L}(f_t(\xi_t), y_t)$, loss-weight gradients $\nabla^l_t := \nabla_{W^l}\mathcal{L}(f_t(\xi_t), y_t)$ and activations $x^l_t$, $l \in [L]$, after training scale as follows in the infinite-width limit $n \to \infty$. The hidden constants in $O^*$, $\Omega^*$ and $\omega^*$ below can depend on the training routine, $t$, $\xi$, $l$ and the initial function $f_0$.

Under cross-entropy (CE) loss, three qualitatively distinct regimes arise:

(a) Stable regime ($\alpha \ge 1$): For any training routine, all $l \in [L]$ and any $\xi \in \mathbb{R}^{d_{in}}$, it holds that $\|f_t(\xi)\|_{RMS} = O^*(1)$, $|\mathcal{L}(f_t(\xi_t), y_t)| = O^*(1)$, $\|\chi_t\|_{RMS} = O^*(1)$, $\|\nabla^l_t\|_{RMS} = O^*(n^{-1/2})$ and $\|x^l_t(\xi)\|_{RMS} = O^*(1)$.

(b) Controlled divergence ($\tfrac{1}{2} \le \alpha < 1$): For any training routine, all $l \in [L]$ and any $\xi \in \mathbb{R}^{d_{in}}$, it holds that $\|n^{\alpha-1} \cdot f_t(\xi)\|_{RMS} = O^*(1)$, $\|x^l_t(\xi) - x^l_0(\xi)\|_{RMS} = O^*(1)$, $|\mathcal{L}(f_t(\xi_t), y_t)| = O^*(1)$, $\|\chi_t\|_{RMS} = O^*(1)$ and $\|\nabla^l_t\|_{RMS} = O^*(n^{-1/2})$. In addition, there exists a training routine and input $\xi$ such that $\|n^{\alpha-1} \cdot f_t(\xi)\|_{RMS} = \Omega^*(1)$.

(c) Catastrophic instability ($\alpha < \tfrac{1}{2}$): For any $l \in [L]$, there exists a training routine and a $\xi \in \mathbb{R}^{d_{in}}$ such that $\|f_t(\xi)\|_{RMS} = \omega^*(1)$, $\|x^l_t(\xi)\|_{RMS} = \omega^*(1)$ and $\|\nabla^l_t\|_{RMS} = \omega^*(1)$.

Under mean-squared error (MSE) loss, a stable regime as in (a) above arises if $\alpha \ge 1$. If $\alpha < 1$, training is catastrophically unstable as in (c) above and, in addition, there exists a training routine such that $|\mathcal{L}(f_t(\xi_t), y_t)| = \omega^*(1)$ and $\|\chi_t\|_{RMS} = \omega^*(1)$.

Proposition C.12 (Under CE loss, SP with large learning rates learns features at large width). Consider the setting of Proposition 1 of training an $(L+1)$-layer MLP with SGD in SP with global learning rate $\eta_n = \eta \cdot n^{-\alpha}$, $\alpha \in \mathbb{R}$, in the infinite-width limit $n \to \infty$. The hidden constants in $O^*$, $\Omega^*$ and $\omega^*$ below can depend on the training routine, $t$, $\xi$, $l$ and the initial function $f_0$.

(a) Under both MSE and CE loss in the stable regime ($\alpha \ge 1$), for any training routine, $l \in [L]$ and $\xi \in \mathbb{R}^{d_{in}}$, it holds that $\|\Delta x^l_t(\xi)\|_{RMS} = O^*(n^{-1/2})$.

(b) Under CE loss in the controlled divergence regime ($\tfrac{1}{2} \le \alpha < 1$), for any training routine, $l \in [L]$ and $\xi \in \mathbb{R}^{d_{in}}$, it holds that $\|\Delta x^1_t(\xi)\|_{RMS} = O^*(n^{-1/2-\alpha})$ and $\|\Delta x^l_t(\xi)\|_{RMS} = O^*(n^{1/2-\alpha})$. For any $l \in [L]$, there exists a training routine and $\xi \in \mathbb{R}^{d_{in}}$ such that $\|\Delta x^1_t(\xi)\|_{RMS} = \Omega^*(n^{-1/2-\alpha})$ and $\|\Delta x^l_t(\xi)\|_{RMS} = \Omega^*(n^{1/2-\alpha})$.

Remark C.13 (2-layer networks recover stable training dynamics and width-independent feature learning at $\alpha = 0$). Similarly, it can be shown that 2-layer MLPs remain activation stable under width-independent learning rate scaling $\eta_n = \Theta(1)$. The controlled divergence regime is given by $0 \le \alpha < 1/2$, with width-independent input-layer feature learning at $\alpha = 0$.
◀

Remark C.14 (Adam recovers stable training dynamics and width-independent hidden-layer feature learning at $\alpha = 1$). For Adam and $L \ge 2$, an analogous NE⊗OR⊤-based proof (Yang and Littwin, 2023) would show that $\eta_n = \Theta(n^{-1})$ recovers feature learning in all hidden layers $l \in [2, L]$, stable activations and loss-logit gradients, while logits blow up only through $W^{L+1}_0 \Delta x^L_t = \Theta(n^{1/2})$. To avoid logit blowup, $\eta_n = \Theta(n^{-3/2})$ would be necessary. In that case, only the term $W^{L+1}_0 \Delta x^L_t$ would contribute non-vanishingly to the logit updates. Hence, for Adam under CE loss, the controlled divergence regime is given by $1 \le \alpha < 3/2$, with hidden-layer feature learning at $\alpha = 1$. ◀

C.3.3 Proof of Propositions 1 and 2

The proof in Yang and Hu (2021) for general stable abc-parameterizations directly covers the stable regimes of both losses, showing a kernel regime and vanishing feature learning for $\alpha \ge 1$. For the controlled divergence regime under CE loss, however, note that the TP framework does not allow diverging computations. Here, we need to replace the logit updates by rescaled logit updates before computing the softmax limit outside of the TP framework.

Formally, under standard initialization, $W^{L+1}_0 \sim \mathcal{N}(0, 1/n)$ is replaced in the TP by $\hat W^{L+1}_\varepsilon$, constructed via Nonlin, conditioning on $f_0(\xi)$ (see Yang and Hu (2021, Appendix H) for all details). For stable parameterizations, the function updates are defined in the TP as
$$\delta\hat f_t = \theta'_{L+1}\,\frac{\delta W^{L+1}_t x^L_t}{n} + \theta'_{Lf}\,\frac{\hat W^{L+1}_{t-1}\delta x^L_t}{n},$$
where $\theta'_{L+1} = n^{1-\alpha}$ and $\theta'_{Lf} = n^{1-r-b_{L+1}}$. In the controlled divergence regime $\alpha < 1$, we now define rescaled logit updates in the TP as
$$\delta\hat f_t = \hat\theta_{L+1}\,\frac{\delta W^{L+1}_t x^L_t}{n} + \hat\theta_{Lf}\,\frac{\hat W^{L+1}_{t-1}\delta x^L_t}{n},$$
by replacing $\theta'_{L+1}$ by $\hat\theta_{L+1} := \theta_\alpha \theta'_{L+1}$ and $\theta'_{Lf}$ by $\hat\theta_{Lf} := \theta_\alpha \theta'_{Lf}$, where $\theta_\alpha := n^{\alpha-1}$. The adapted prefactors ensure that $\delta\hat f$ remains $O^*(1)$ for a well-defined TP. The TP master theorem now implies almost sure convergence of the rescaled logit updates, $\delta\hat f_t \to \mathring{\delta\hat f}_t \in \mathbb{R}^{d_{out}}$ a.s. Now we compute the softmax limit outside of the TP framework, as we want to recover the softmax values of the original diverging logits. Thus, given the convergent sequence $\delta\hat f_t \to \mathring{\delta\hat f}_t \in \mathbb{R}^{d_{out}}$ a.s., due to the smoothness and saturation properties of the softmax it follows that there exists a $\mathring\chi_t \in \mathbb{R}^{d_{out}}$ such that $\sigma(\theta_\alpha^{-1} \cdot \delta\hat f_t) - y_t \to \mathring\chi_t$ a.s. Since $|\sigma(\theta_\alpha^{-1} \cdot \delta\hat f_t) - y_t| \le 1 + |y_t|$ and $|\mathring\chi_t| \le 1 + |y_t|$, this sequence can again be used as a TP scalar. Now the last-layer weights are TP vectors updated with $\delta W^{L+1}_t = -\eta_n \chi_t x^L_t$, which does not change the scaling of $\hat W^{L+1}_t = \hat W^{L+1}_{t-1} + \theta_{L+1/f} \cdot \delta W^{L+1}_t$ with $\theta_{L+1/f} \le 1$ as long as $\alpha \ge 1/2$. Thus the backward pass scalings are not affected, and the rest of the TP can remain unchanged.

For larger learning rates $\alpha < 1/2$ under CE loss, we provide heuristic scaling arguments. Observe that preactivations diverge after the first update step, $\delta h^2_1 = -\eta_n \frac{\partial f_0}{\partial h^2}(x^1_0)^\top x^1_1 = \Theta(n^{1/2-\alpha})$. The updates of the next hidden layer's preactivations scale even larger, that is, $\delta h^3_1 = -\eta_n \frac{\partial f_0}{\partial h^3}(x^2_0)^\top x^2_1 = \Theta(n^{2(1/2-\alpha)})$. In this way, the exponent growth continues throughout the forward pass. But even if there is only a single hidden layer, the scaling of the backpropagated gradient is increased after the second step, $\frac{\partial f_2}{\partial x^L} = W_0 - \eta_n \chi_0 x^L_0 - \eta_n \chi_1 x^L_1 = \Omega(\eta_n \chi_1 \delta x^L_1) = \Omega(n^{1/2-2\alpha}) = \omega(n^{-1/2})$. This, in turn, increases the preactivation update scaling $\delta h^2_3 = -\eta_n \frac{\partial f_2}{\partial h^2}(x^1_2)^\top x^1_3 = \Omega\big(n^{-\alpha}\frac{\partial f_2}{\partial x^L} n\big) = \Omega(n^{3(1/2-\alpha)})$, which in turn increases the gradient scaling in the next step, inducing a feedback loop of cascading exponents between diverging activations and gradients, inducing fast training divergence.
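The saturation mechanism that keeps CE training controlled can be illustrated directly: even when logits blow up polynomially in width, the loss-logit derivative $\chi = \sigma(\text{logits}) - y$ stays entrywise bounded, and the softmax saturates towards a one-hot vector. A minimal numpy sketch (the fixed logit direction and the exponent $\alpha = 1/2$ are illustrative choices):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

alpha = 0.5                              # controlled divergence: logits ~ n^(1 - alpha)
y = np.array([1.0, 0.0, 0.0])            # one-hot label
direction = np.array([1.0, -0.3, -0.7])  # fixed logit direction (illustrative)

for n in [64, 1024, 16384]:
    logits = direction * n ** (1 - alpha)
    chi = softmax(logits) - y            # loss-logit derivative under CE
    # chi stays entrywise bounded by 1 + |y| although the logits diverge,
    # and the softmax output saturates towards the one-hot argmax.
    assert np.all(np.abs(chi) <= 1 + np.abs(y))
    print(n, np.round(chi, 6))
```

Under MSE loss no such saturation exists, which is exactly why diverging logits feed back into diverging gradients there.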
Under MSE loss, observe how, already for $\alpha < 1$, diverging logits $\delta f_1 = W^{L+1}_0 \delta x^L_1 - \eta_n \chi_0 (x^L_0)^\top x^L_0 = \Theta(n^{1-\alpha})$ increase the gradient scaling through $\chi_1 = f_1 - y_1 = \Theta(n^{1-\alpha})$, which in turn increases the activation as well as the logit scaling in the next step, and induces a divergent feedback loop even worse than above.

C.4 Scaling dynamics in 2-layer linear networks

Here, we rederive the training dynamics of the minimal model from Lewkowycz et al. (2020) that shows an initial catapult mechanism in NTP. They observe that the training dynamics of repeatedly updating a 2-layer linear network in NTP on the same training point are fully captured by update equations for the current function output $f_t$ and the current sharpness $\lambda_t$.

C.4.1 Deriving the update equations for SP, NTP and µP

NTP. The original model by Lewkowycz et al. (2020) is given by $f = n^{-1/2} v u x$, where $u \in \mathbb{R}^{n \times d}$, $v \in \mathbb{R}^n$ are initialized as $u_{ij}, v_i \sim \mathcal{N}(0, 1)$ and trained with MSE loss $\mathcal{L}(f, x, y) = \frac{1}{2}(f(x) - y)^2$, loss derivative $\chi_t = f(x) - y$ and a global learning rate $\eta$. Repeated gradient descent updates using $(x, y)$ then result in the update equations
$$f_{t+1} = f_t\big(1 + n^{-1}\eta^2\chi_t^2\|x\|^2\big) - \eta\chi_t\lambda_t, \qquad \lambda_{t+1} = \lambda_t + n^{-1}\eta\chi_t\|x\|^2\big(\eta\chi_t\|x\|^2\lambda_t - 4f_t\big),$$
where the update kernel is defined as $\tilde\Theta(x, x') = n^{-1}(\|u\|^2 + \|v\|^2)$. Note that the width-dependence in $f_t$ and $\lambda_t$ results in qualitatively different behaviour in the infinite-width limit. In particular, in the limit, the sharpness cannot evolve
over the course of training, $\lambda_t = \lambda_0$.

Maximal Update Parameterization. We define a 2-layer linear network in µP with arbitrary weight multipliers as $f = \bar v \bar u x$, with reparameterization-invariant weights $\bar u_{ij} \sim \mathcal{N}(0, 1/d_{in})$ and $\bar v_i \sim \mathcal{N}(0, 1/n^2)$, $\bar u = n^{-a_u} u$, $\bar v = n^{-a_v} v$, and the original weights $u, v$ are trained with MSE loss and layerwise learning rates $\eta_u = \eta n^{1+2a_u}$ and $\eta_v = \eta n^{-1+2a_v}$, which results in reparameterization-invariant layerwise learning rates $\bar\eta_u = \eta n$ and $\bar\eta_v = \eta n^{-1}$. Formally, we now perform updates on $u$ and $v$, but we can work with $\bar u$ and $\bar v$ instead. For gradients it holds that $\frac{\partial f}{\partial u} = \frac{\partial f}{\partial \bar u}\frac{\partial \bar u}{\partial u} = \frac{\partial f}{\partial \bar u} n^{-a_u}$; this width scaling has to be accounted for when transitioning between representatives of the µP equivalence class. For updates, $\bar\eta_u, \bar\eta_v$ should be used instead of $\eta_u, \eta_v$, as the layerwise learning rate rescaling was exactly chosen to cancel out the effect of the weight rescaling,
$$\bar u_{t+1} - \bar u_t = -n^{-a_u}\eta_u \frac{\partial f}{\partial \bar u}\frac{\partial \bar u}{\partial u} = -n^{-2a_u}\eta_u \frac{\partial f}{\partial \bar u} = -\bar\eta_u \frac{\partial f}{\partial \bar u}.$$
The derivatives for backpropagation are given by
$$\chi_t := \frac{\partial \mathcal{L}}{\partial f} = f(x_t) - y_t, \qquad \frac{\partial f}{\partial \bar v} = x^\top \bar u^\top, \qquad \frac{\partial f}{\partial \bar u} = \bar v^\top x^\top.$$
The updated weights are then given by
$$\bar v_{t+1} = \bar v_t - \eta n^{-1}\chi_t x^\top \bar u^\top, \qquad \bar u_{t+1} = \bar u_t - \eta n \chi_t \bar v^\top x^\top.$$
In the case $d_{in} = 1$, the updated function is then given by
$$f_{t+1} = \bar v_{t+1}\bar u_{t+1}x = f_t + \eta^2\chi_t^2\, x^\top\bar u^\top\bar v^\top x^\top x - \bar\eta_u\chi_t\,\bar v\bar v^\top x^\top x - \bar\eta_v\chi_t\, x^\top\bar u^\top\bar u x = f_t\big(1 + \eta^2\chi_t^2\|x\|^2\big) - \eta\chi_t\big(n\|\bar v\|^2 + n^{-1}\|\bar u\|^2\big)\|x\|^2 = f_t\big(1 + \eta^2\chi_t^2\|x\|^2\big) - \eta\chi_t\tilde\Theta(x, x),$$
where we call $\tilde\Theta$ the reparameterization-invariant update kernel, defined as
$$\tilde\Theta(x, x') = \sum_l \frac{\eta_l}{\eta}\frac{\partial f(x)}{\partial W^l}\frac{\partial f(x')}{\partial W^l} = x^\top\big(n^{-1}\|\bar u\|^2 + n\|\bar v\|^2\big)x'.$$
The update kernel evolves via the reparameterization-invariant update equation
$$\lambda_{t+1} = \tilde\Theta_{t+1}(x, x) = \|x\|^2\big(n\|\bar v_{t+1}\|^2 + n^{-1}\|\bar u_{t+1}\|^2\big) = \|x\|^2\big(n\|\bar v_t\|^2 + n^{-1}\|\bar u_t\|^2 + n^{-1}\bar\eta_u^2\chi_t^2\,\bar v\bar v^\top x^\top x - 2n\bar\eta_v\chi_t\,\bar v\bar u x + n\bar\eta_v^2\chi_t^2\, x^\top\bar u^\top\bar u x - 2n^{-1}\bar\eta_u\chi_t\,\bar v\bar u x\big) = \lambda_t + \|x\|^2\big(\eta^2\chi_t^2\|x\|^2(n^{-1}\|\bar u\|^2 + n\|\bar v\|^2) - 4\eta\chi_t f_t\big) = \lambda_t + \|x\|^2\eta\chi_t\big(\eta\chi_t\lambda_t - 4f_t\big).$$
Now note that, even under $f_0 = 0$, we get non-trivial, width-independent dynamics. Due to the LLN, at initialization we have $n^{-1}\|\bar u_0\|^2 \approx 1$ and $n\|\bar v_0\|^2 \approx 1$ ($= n^{-1}$ times a sum over $n$ iid $\chi^2$ variables), hence $\lambda_0 \approx 2$.
To conclude, the training dynamics for repeatedly updating with the same training point $(x, y)$ are fully described by the update equations
$$f_{t+1} = f_t\big(1 + \eta^2\chi_t^2\|x\|^2\big) - \eta\chi_t\lambda_t, \qquad \text{(C.1)}$$
$$\lambda_{t+1} = \lambda_t + \|x\|^2\eta\chi_t\big(\eta\chi_t\lambda_t - 4f_t\big). \qquad \text{(C.2)}$$
This can be rewritten in terms of the error (or function-loss derivative under MSE loss) $\chi_t = f_t - y$, akin to Kalra et al. (2025), as
$$\chi_{t+1} = \chi_t\big(1 - \eta\lambda_t + \eta^2\|x\|^2\chi_t(\chi_t + y)\big), \qquad \text{(C.3)}$$
$$\lambda_{t+1} = \lambda_t + \|x\|^2\eta\chi_t\big(\eta\chi_t\lambda_t - 4(\chi_t + y)\big). \qquad \text{(C.4)}$$
First observe that all terms in the update equations become width-independent in µP. Only the initial conditions are width-dependent, with vanishing variance, $f_0 = \Theta(n^{-1/2})$. As opposed to NTP, the sharpness update term $\eta^2\chi_t^2\|x\|^2$ is no longer vanishing. While Lewkowycz et al. (2020) simply use labels $y = 0$, non-trivial dynamics in µP require $y \ne f_0 \to 0$. Importantly, $\eta$ and $\chi_t$ always appear jointly, so that interpolation effectively reduces the learning rate.

Remark C.15 (Characterizing sharpness increase: the critical threshold depends on the labels). When both the sharpness and the loss increase, training diverges, as the learning rate lies even further from its edge of stability. In µP, since $f_0 \to 0$, $\lambda_t$ will grow in the first step. For subsequent steps, the sharpness update equation (C.4) implies that the sharpness increases ($\lambda_{t+1} \ge \lambda_t$) if and only if $\lambda_t \ge \frac{4}{\eta}\big(1 + \frac{y}{\chi_t}\big) = \frac{4}{\eta}\frac{f_t}{\chi_t}$. Kalra et al. (2025) provide a more extensive analysis of the dynamics and fixed points of this model in µP. ◀

Remark C.16 (Weight multipliers). A natural choice of weight multipliers for µP can be considered to be $a_l = \frac{1}{2}\cdot\mathbb{I}(l = L+1) - \frac{1}{2}\cdot\mathbb{I}(l = 1)$, as this choice allows using a width-independent global learning rate $\eta_n = \eta \cdot n^0$, and the update kernel does not require width-dependent scaling factors, $\tilde\Theta(x, x') = x^\top\big(\|u\|^2 + \|v\|^2\big)x'$. In other words, under these weight multipliers, width-independence in parameter space translates into width-independence in function space. ◀

Standard Parameterization. We define training a 2-layer linear network in SP with global learning rate scaling $n^{-c}$ as $f = \bar v\bar u x$, with initialization $\bar u \sim \mathcal{N}(0, 1/d_{in})$, $\bar v \sim \mathcal{N}(0, 1/n)$ and global learning rate $\bar\eta_u = \bar\eta_v = \eta n^{-c}$. Parameter multipliers affect all scalings in the same way as for µP. Only the learning rate has a different prefactor, and the last layer has a larger initialization. The adapted update equations become
$$f_{t+1} = f_t\big(1 + n^{-2c}\eta^2\chi_t^2\|x\|^2\big) - n^{1-c}\eta\chi_t\lambda_t, \qquad \lambda_{t+1} = \lambda_t + \|x\|^2 n^{-c}\eta\chi_t\big(n^{-c}\eta\chi_t\lambda_t - 4n^{-1}f_t\big),$$
where we define, as for NTP, $\tilde\Theta(x, x') = n^{-1}(\|\bar u\|^2 + \|\bar v\|^2)$, where $n^{-1}\|\bar v\|^2 \approx n^{-1}$ at initialization ($n^{-2}$ times a sum over $n$ iid $\chi^2$-variables with positive mean). Choosing $c < 1$ results in output blowup of the term $n^{1-c}\eta\chi_t\lambda_t$. While this can in principle be counteracted by shrinking $\lambda_t$ at finite width, a well-defined, stable and non-trivial infinite-width limit is only attained at $c = 1$, where $f_{t+1} = f_t - \eta\chi_t\lambda_t$ and $\lambda_{t+1} = \lambda_t$. We now show that, also at finite width, stable training with a constant learning rate in SP requires $\eta = O(n^{-1})$.

C.4.2 Finding the maximal stable learning rate scaling by characterizing the conditions for loss and sharpness decrease

The following proposition characterizes the choices of $\eta$ that result in a decrease in loss at any present state. Writing
$$n_{sp} = \begin{cases} n, & \text{in SP}, \\ 1, & \text{else}, \end{cases} \qquad n_{ntp} = \begin{cases} n, & \text{in NTP}, \\ 1, & \text{else}, \end{cases}$$
we can write the update equations of the parameterizations jointly as
$$\chi_{t+1} = \chi_t\big(1 - n_{sp}\eta\lambda_t + n_{ntp}^{-1}\eta^2\chi_t\|x\|^2(\chi_t + y)\big), \qquad \lambda_{t+1} = \lambda_t + \eta\chi_t\|x\|^2 n_{ntp}^{-1}\big(\eta\chi_t\lambda_t - 4n_{sp}^{-1}(\chi_t + y)\big).$$

Proposition C.17 (Characterizing loss decrease in SP and NTP). Let $\eta \ge 0$. For $n_{sp}$ or $n_{ntp}$ large enough, we write the update equations of repeatedly updating the $uv$-model with SGD on the training point $(x, y)$ with $\|x\| = 1$ in SP or NTP jointly as provided above.
The loss decreases at any step, omitting the time index $t$,

1. in the case $f(f - y) \ge 0$, if and only if $\eta \le \frac{2}{n_{sp}\lambda} + O(n_{sp}^{-3}n_{ntp}^{-1})$ or $\eta \in \big[\frac{n_{sp}n_{ntp}\lambda}{\|x\|^2\chi f} - \frac{2}{n_{sp}\lambda} - O(n_{sp}^{-3}n_{ntp}^{-1}),\ \frac{n_{sp}n_{ntp}\lambda}{\|x\|^2\chi f}\big]$,

2. in the case $f(f - y) < 0$, if and only if $\eta \le \frac{2}{n_{sp}\lambda} - O(n_{sp}^{-3}n_{ntp}^{-1})$.

It holds that $\lambda_{t+1} \ge \lambda_t$ if and only if $\lambda_t \ge \frac{4}{n_{sp}\eta_t}\big(1 + \frac{y}{\chi_t}\big)$.

Remark C.18 (Instability in SP). The crucial insight from Proposition C.17 for SP is that both the loss and the sharpness increase early in training as soon as $\eta = \omega(n^{-1})$, unless an extensively large learning rate, depending on the current sharpness, training point and output function, is accurately chosen at each time step within a slim interval of benign large learning rates, which is unlikely to hold in practice. Figure C.2 shows that in simulated training with constant learning rates, the maximal stable learning rate indeed scales as $\Theta(n^{-1})$. This instability prediction is in line with the infinite-width prediction from Yang and Hu (2021), and hence does not explain large learning rate stability in SP in practice. ◀

Proof. From the update equations, it holds that $\lambda_{t+1} \ge \lambda_t$ if and only if $\eta_t n_{ntp}^{-1}\chi_t\big(\eta_t\chi_t\lambda_t - \frac{4}{n_{sp}}f_t\big) \ge 0$, if and only if $\lambda_t \ge \frac{4}{n_{sp}\eta_t}\big(1 + \frac{y}{\chi_t}\big)$. Observe that the loss decreases if and only if $|\chi_{t+1}| \le |\chi_t|$, which holds if and only if $\big|1 - n_{sp}\eta\lambda_t + \eta^2 n_{ntp}^{-1}\chi_t\|x\|^2(\chi_t + y)\big| \le 1$, which can be written as, omitting all subscripts $t$,
$$\eta^2 n_{ntp}^{-1}\|x\|^2\chi f - \eta n_{sp}\lambda \in [-2, 0].$$
Assuming $\eta \ge 0$, the above holds if and only if
$$\eta\chi f \le \frac{n_{sp}n_{ntp}\lambda}{\|x\|^2} \quad \text{and} \quad n_{ntp}^{-1}\eta^2\|x\|^2\chi f - \eta n_{sp}\lambda \ge -2.$$
The first constraint is a mild one that states $\eta = O(n)$. We will now focus on the second one. Solving for the roots of this polynomial in $\eta$, we get
$$\eta_{1,2} = \frac{1}{2\|x\|^2\chi f}\Big(n_{ntp}n_{sp}\lambda \pm \sqrt{n_{ntp}^2 n_{sp}^2\lambda^2 - 8\|x\|^2 n_{ntp}\chi f}\Big).$$
Assuming $n_{sp}^2 n_{ntp}\lambda^2 \gg 8\|x\|^2\chi f =: C$, we get $n_{sp}n_{ntp}\lambda\sqrt{1 - \frac{C}{n_{sp}^2 n_{ntp}\lambda^2}} \approx n_{sp}n_{ntp}\lambda\Big(1 - \frac{C}{2n_{sp}^2 n_{ntp}\lambda^2} - \frac{1}{4}\big(\frac{C}{n_{sp}^2 n_{ntp}\lambda^2}\big)^2\Big)$. In that case, $\eta_1 \approx \frac{2}{n_{sp}\lambda}$ and $\eta_2 \approx \frac{n_{sp}n_{ntp}\lambda}{\|x\|^2\chi f} - \frac{2}{n_{sp}\lambda}$. Hence, if $\chi f \ge 0$, we get loss decrease if $\eta \le \frac{2}{n_{sp}\lambda} + O(n_{sp}^{-3}n_{ntp}^{-1})$ or $\eta \in \big[\frac{n_{sp}n_{ntp}\lambda}{\|x\|^2\chi f} - \frac{2}{n_{sp}\lambda} - O(n_{sp}^{-3}n_{ntp}^{-1}),\ \frac{n_{sp}n_{ntp}\lambda}{\|x\|^2\chi f}\big]$. If $\chi f < 0$, we get loss decrease if $\eta \in \big[\frac{n_{sp}n_{ntp}\lambda}{\|x\|^2\chi f} - \frac{2}{n_{sp}\lambda} + O(n_{sp}^{-3}n_{ntp}^{-1}),\ \frac{2}{n_{sp}\lambda} - O(n_{sp}^{-3}n_{ntp}^{-1})\big]$, where the left end of the interval is negative. The upper end resembles the edge of stability, which vanishes as $n_{sp}^{-1}$ for SP but not for NTP.

Note the interesting slim regime of benign large learning rates $\eta \approx \frac{n\lambda}{\|x\|^2\chi f} - \frac{1}{n_{sp}\lambda} = \Theta(n)$ when $f(f - y) > 0$. As all of the involved quantities are known at training time, an adaptive learning rate schedule may significantly speed up training by stable learning with excessive learning rates. However, it remains unclear whether a similar regime exists in practical architectures under CE loss. In that case, the sharpness computation would also be much more computationally expensive.

Figure C.2: Stable SP. $f_t$ (left) and $\lambda_t$ (center) after training to convergence for several widths. The largest stable learning rate for SP indeed scales as $n^{-1}$ (right). When lines end, training diverged for larger learning rates. The first subplot shows that training has succeeded in memorizing the training label $y = 1$ at the optimal learning rate at all widths. The second subplot shows that the randomness in $\lambda_t$ due to random initial conditions vanishes with increasing width, as SP is approaching its kernel regime.

C.4.3 µP converges faster to its limit than SP and NTP

Here we study the convergence speed of the $uv$-model from Appendix C.4 to the infinite-width limit in SP, NTP and µP through simulations in the gradient flow regime $\eta \in \{0.001, 0.01\}$.
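Iterating the joint update equations is a few-line computation; the following sketch (label, learning rate, and step counts are illustrative; $n_{sp} = n_{ntp} = 1$ recovers the width-independent µP recursion, $n_{sp} = n$ the SP one) reproduces both the µP contraction and the SP blowup at $\eta = \omega(n^{-1})$:

```python
def uv_dynamics(chi0, lam0, eta, steps, n_sp=1.0, n_ntp=1.0, y=1.0, x2=1.0):
    """Iterate the joint update equations for the error chi_t = f_t - y and
    the sharpness lambda_t of the 2-layer linear uv-model (x2 = ||x||^2)."""
    chi, lam = chi0, lam0
    for _ in range(steps):
        chi, lam = (
            chi * (1 - n_sp * eta * lam + eta ** 2 * x2 * chi * (chi + y) / n_ntp),
            lam + eta * chi * x2 * (eta * chi * lam - 4 * (chi + y) / n_sp) / n_ntp,
        )
    return chi, lam

# muP (n_sp = n_ntp = 1) with f0 = 0, y = 1, lambda_0 = 2: the error contracts.
print(uv_dynamics(chi0=-1.0, lam0=2.0, eta=0.01, steps=2000))
# SP at width n = 1000 with the same eta = omega(1/n): rapid divergence.
print(uv_dynamics(chi0=-1.0, lam0=2.0, eta=0.01, steps=3, n_sp=1000.0))
```

Note the simultaneous tuple assignment, so both updates use the time-$t$ values, matching the coupled recursion.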
We draw new data points without signal, $x, y \sim \mathcal{N}(0, I_2)$, for every update. Convergence of the function to $y = 1$ would confound the findings. If we draw the initial $\mathring f_0$ from $\mathcal{N}(0, 1)$ for SP and NTP versus $\mathcal{N}(0, n^{-1})$ for µP, independent of $f_0$ at finite width, we only see convergence $f_t \to \mathring f_t$ for µP, due to the non-vanishing variance of the initial output function in SP and NTP (Figure C.6). Therefore, in all following experiments in this section, we start the update equations of the limit from the same $f_0$, to just measure the deviations over training. We let $\lambda_0$ differ from $\mathring\lambda_0$, as otherwise µP at finite width would exactly coincide with its limit, as it follows width-independent update equations. Observe that µP still converges faster to its limit than NTP, which converges faster than SP (Figure C.5). The convergence exponent of µP seems to lie around $-0.45$. Observe the large variance of the difference to the limit between random initial seeds (Figures C.3 and C.4). This requires many runs for an accurate estimation of the exponent. Note that the slow convergence of SP here may be due to the kernel regime enforced by MSE loss; convergence to our feature learning limit under $\eta_n = \Theta(n^{-1/2})$ may be much faster.

Figure C.3: (µP remains closest to its limit across training) Difference between finite networks and their infinite-width limit from the same initial condition across 100 random seeds for µP (line), SP (dotted line) and NTP (dashed line) after $T = 2$, $5$ or $10$ steps (left to right), running plain SGD in gray, 0.1 warmup in green and 0.1 weight decay in orange. Initially, warmup retains the most closeness to the limit, as the learning rate is very small. In later steps, µP clearly remains closer to its limit. All parameterizations converge to their limit, but with large variance.

Figure C.4: (Large variance in convergence to limit) Standard deviation divided by mean of $|f_t - \mathring f_t|$ across random initializations. Note that in all parameterizations the variance of the difference to the limit is very large.

Figure C.5: (µP converges faster to its limit) Width convergence exponent of the decay of the difference of learned finite neural networks to their limit, derived between the smallest and largest widths 64 and 65536, for learning rate 0.001 (left) and 0.01 (right), across 100 random seeds for µP (line), SP (dotted line) and NTP (dashed line), against the number of training iterations $T$, running plain SGD in gray, 0.1 warmup in green and 0.1 weight decay in orange. While the exponents are still noisy even from means of 100 random seeds, µP clearly converges faster to its limit than NTP, which converges faster than SP, even when starting from the same initial conditions. SP and NTP with weight decay seem to systematically deviate from their limit only at large learning rate and late in training, as their exponent decreases with the amount of training.

Figure C.6: (Non-vanishing initial variance in SP and NTP prevents convergence) Difference to the limit after 20 steps (left) and corresponding exponents (right) for differing initial $\mathring f_0$ drawn independently of $f_0$. Only µP converges to its limit, at an exponent around $-0.45$.

D Experimental details

If not otherwise specified, we train a single epoch to prevent confounding from multi-epoch overfitting effects.
D.1 MLPs

We implement our MLP experiments on MNIST (Deng, 2012) and CIFAR-10 (Krizhevsky et al., 2009) in PyTorch (Paszke et al., 2019). We train ReLU MLPs with the same width $n$ in all hidden dimensions with plain SGD/Adam, with a single learning rate for all trainable parameters, batch size 64, and without learning rate schedules, weight decay or momentum, to prevent confounding. We use Adam with the PyTorch standard hyperparameters. By standard initialization we mean He initialization variance $c_\phi/\text{fan\_in}$ with $c_\phi = 2$ for the ReLU activation function (He et al., 2015).

D.2 Multi-index data

We generate multi-index teacher data, inspired by Kunin et al. (2024), but setting a deterministic teacher to ensure a balanced classification task. We draw the covariates $\xi \sim \mathcal{U}(S^{d_{in}-1})$ i.i.d. from the uniform distribution on the unit sphere in $\mathbb{R}^{d_{in}}$, with input dimension $d_{in} = 100$. The training set consists of $10^3$ training points. We also draw a test set consisting of $10^4$ test points. For the target function $f^*$, drawing 3 random directions as in Kunin et al. (2024) results in heavily unbalanced classes and $f^* = 0$ on a large part of the support with high probability. Instead, we set 4 teacher neurons deterministically for less noisy results. The teacher net is a shallow ReLU network given by $f^*(\xi) = \text{sign}\big(\sum_{i=1}^4 s_i\phi(w_i^\top\xi)\big)$, with unit vectors $w_1 = e_1$, $w_2 = e_2$, $w_3 = -e_1$, $w_4 = -e_2$ and signs $s_1 = s_3 = +1$ and $s_2 = s_4 = -1$. This results in the nonlinear target function $f^*(\xi) = \text{sign}(\xi_1 - \xi_2)$ for all $\xi \in \mathbb{R}^{d_{in}}$ with $\xi_1 > 0$ or $\xi_2 > 0$, but $f^*(\xi) = \text{sign}(\xi_2 - \xi_1)$ for all $\xi \in (-\infty, 0) \times (-\infty, 0)$. We do not use label noise. This dataset requires learning to attend to the first 2 covariate dimensions $(\xi_1, \xi_2)$, where all of the signal for the labels $f^*(\xi)$ lies. If the input layer does not learn to align with these dimensions, the sparse signal is obscured in the activations (random features) after the first layer, due to the large variance in the remaining covariate dimensions.

D.3 Language modeling

We train small Transformer models (Vaswani et al., 2017) using LitGPT (Lightning AI, 2023). We adapt the Pythia (Biderman et al., 2023) architecture with 6 Transformer blocks, standard $d_{head}^{-1/2}$ attention scaling, pre-attention and qk-Layernorm (Wortsman et al., 2024). We purely scale width, proportionally scaling the number of attention heads and the MLP hidden size while keeping the number of layers and the head dimension $d_{head} = 32$ fixed. For widths 256, 1024 and 4096, this results in 8, 32 and 128 heads per Transformer block and a total of 30M, 167M and 1.4B parameters. Standard training means AdamW with a single, tuned maximal learning rate, $(\beta_1, \beta_2) = (0.9, 0.95)$, $\varepsilon = 10^{-12}$, sequence length 512, batch size 256, 700 steps of warmup followed by cosine learning rate decay to 10% of the maximal learning rate, weight decay 0.1, and gradient clipping. We train for 10681 steps in mixed precision on the DCLM-Baseline dataset (Li et al., 2024). We train all models on the same number of tokens to prevent confounding effects from increased training time.

D.4 Figure Details

Figure 1: The training accuracy of 8-layer MLPs is averaged over 4 runs to reduce noise from random initialization. The training loss of GPT trained with SGD is averaged over 3 runs. GPT with Adam was only run once.
Figure 2: The readout layer and the last Layernorm layer are chosen due to their particular importance for logit blowup. The MLP layer was chosen to add a layer that scales hidden-like. This layer was not cherry-picked; we observe other MLP layers to have similar scaling properties.

Figure 4: 3-layer MLP trained with SGD on CIFAR-10 with width-dependent learning rate $\eta_n = 0.0001 \cdot n^{-0.5}$. Averages over 4 random seeds.

Figure 5: Minimal unstable learning rates are defined as the smallest learning rates to produce NaN entries when using MSE loss, or, under CE loss, accuracy $< 20\%$ on MNIST and CIFAR-10, and $< 54\%$ on binary multi-index data. For 2-layer MLPs, our theory predicts $\eta_n = O(1)$ as the instability threshold, since there are no hidden layers, and input layers are updated width-independently at $\eta_n = \Theta(1)$. The x-axes showing learning rates are scaled as $(n/256)^\alpha$. In this way, the learning rate at base width 256 remains the same, for comparability of the constants. If the optimal or maximal stable learning rate indeed scales as $\eta_n = \eta \cdot n^{-\alpha}$, then the width-dependent scaling of the x-axis $\eta_n \cdot n^\alpha$ shows learning rate transfer.

Figure 6: SGD runs are averaged over 3 random seeds, due to noisy individual outcomes. The results for Adam stem from a single random seed, due to limited computational resources. Minimal unstable learning rates are defined as the smallest learning rates to produce a loss worse than (optimal CE loss + 1) at each width. The x-axes showing learning rates are scaled as for Figure 5.

Figure 7: Shown is the training accuracy at the end of training for one epoch, at the optimal learning rate for each width.

E Refined coordinate checks

The standard coordinate check, as provided in the readme of the mup-package (Yang et al., 2022), may be considered the plot of the activation norms $\|x^l_t\|_{RMS}$ after $t$ steps of training for all layers $l$, and of the network output norm $\|f\|_{RMS}$ with $f := W^{L+1}_t x^L_t$, as a function of width. Completely width-independent dynamics under µP then result in an approximately width-independent coordinate check for all layers. However, width-dependence in the activations of previous layers would confound the $l$-th layer activation scaling, so that measuring the effective $l$-th layer updates requires measuring $\|\Delta W^l_t x^l_t\|_{RMS}$ in each layer, where one may be interested in the weight updates accumulated over the entire course of training, $\Delta W^l_t = W^l_t - W^l_0$, or in the update of a single step, $\delta W^l_t = W^l_t - W^l_{t-1}$. In standard architectures, one can equivalently measure the operator norm of the weight updates, $\|\Delta W^l_t\|_{2\to2} \cdot \sqrt{\text{fan\_in}(W^l_t)/\text{fan\_out}(W^l_t)} \overset{!}{=} \Theta(1)$ (Yang et al., 2023a); however, in non-standard architectures such as Mamba this spectral condition has been shown to fail, so that, in the general case, care should be taken in how exactly weight updates affect the output function (Vankadara et al., 2024). The difference between $\|\Delta W_t x_t\|$ and the preactivation updates $\|\Delta(W_t x_t)\|$ is precisely $\|W_0\Delta x\|_{RMS}$, which measures the effect of updates propagating from previous layers.
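The decomposition underlying this refined check is the exact identity $\Delta(W_t x_t) = \Delta W_t\, x_t + W_0\,\Delta x_t$. A minimal numpy sketch of the three measurements (random matrices stand in for a trained layer; all scales are illustrative, not the paper's trained networks):

```python
import numpy as np

def rms(v):
    return np.sqrt(np.mean(v ** 2))

def refined_coordinate_check(W0, Wt, x0, xt):
    """Split the preactivation update into the layer's own update effect
    and the effect propagated from earlier layers."""
    own = (Wt - W0) @ xt         # Delta W_t x_t : effective update of this layer
    propagated = W0 @ (xt - x0)  # W_0 Delta x_t : feature updates from below
    total = Wt @ xt - W0 @ x0    # Delta(W_t x_t)
    assert np.allclose(total, own + propagated)  # exact decomposition
    return rms(own), rms(propagated), rms(total)

rng = np.random.default_rng(0)
n = 1024
W0 = rng.normal(0.0, n ** -0.5, (n, n))     # standard-init hidden layer
Wt = W0 + rng.normal(0.0, 1.0 / n, (n, n))  # entrywise Theta(1/n) update (illustrative)
x0 = rng.normal(0.0, 1.0, n)
xt = x0 + rng.normal(0.0, n ** -0.5, n)     # small feature update (illustrative)
print(refined_coordinate_check(W0, Wt, x0, xt))
```

Running this at several widths and plotting the three RMS values against $n$ gives the refined coordinate check described above.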
All coordinate checks are run over 4 random seeds, either at a small learning rate or at the optimal learning rate η_256 at base width 256 (after 1 epoch of training) on CIFAR-10. The learning rate is then scaled relative to that base width as η_n = η·(n/256)^α with α ∈ {−1, −0.5, 0}.

E.1 SGD

Figure E.1 shows the refined coordinate check for a 3-layer MLP in SP with global learning rate scaling η_n = η·n^{−1/2}. As predicted by Proposition 2, the input layer updates decay as n^{−1}, the hidden layer learns features width-independently, and the output scales as n^{1/2}, which results in one-hot predictions after the softmax in wide models, but not necessarily in unstable training dynamics. Both ∥∆W_t^l x_t^l∥_RMS and ∥∆W_t^l∥_{RMS→RMS} measure the effective update effect in the l-th layer equivalently and accurately, even in narrow MLPs of width 64. Naively tracking the activation updates ∆x_t^l = x_t^l − x_0^l, however, is confounded by non-vanishing feature learning in narrow models, and only shows the correct hidden- and last-layer scaling exponents for n ≥ 4000, even after only a single update step.

Figure E.2 shows a refined coordinate check for a 3-layer MLP in SP with width-independent global learning rate scaling η_n = 0.0001·n^0. While infinite-width theory predicts the input layer to learn width-independently and the hidden layer to
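The width-dependent rescaling used throughout is just the rule η_n = η·(n/256)^α; as a minimal helper (`scaled_lr` is our hypothetical name, not from the paper's code):

```python
def scaled_lr(eta_base: float, width: int, alpha: float, base_width: int = 256) -> float:
    """Learning rate eta_n = eta * (n / base_width)**alpha; unchanged at the base width."""
    return eta_base * (width / base_width) ** alpha

# alpha = -1/2: quadrupling the width halves the learning rate
print(scaled_lr(1e-4, 256, -0.5))   # base width: unchanged, 1e-4
print(scaled_lr(1e-4, 1024, -0.5))  # halved, 5e-5
```

Plotting losses against scaled_lr(η, n, α) for all widths on a shared x-axis is exactly the transfer plot described above: if the true optimum scales with exponent −α, the minima line up across widths.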
explode as Θ(n), both empirical exponents are smaller by n^{−1/2}, so that the input layer has vanishing feature learning and the hidden layer is still exploding. This ostensible contradiction is resolved when repeating the coordinate check but initializing the last layer to 0 (Figure E.3). Now the predicted scaling exponents are recovered, already at small width. The reason for this subtle but important difference is that the back-propagated gradient is given by the last-layer weights, ∂f/∂x^L = W_t^{L+1} = W_0^{L+1} + ∆W_t^{L+1}. Under standard initialization at the optimal learning rate, the initialization W_0^{L+1} = Θ(n^{−1/2}) still dominates the updates ∆W_t^{L+1} = Θ(η_n) in absolute terms after a few update steps, at widths up to 16384. Comparing the absolute scales of ∥∆W_t^l x_t^l∥_RMS or ∥W_t^l∥_* in both figures confirms this hypothesis. The pure update effects in Figure E.3 have a lower order of magnitude in the constant before the scaling law, but follow clear scaling exponents. Therefore, the faster scaling law under last-layer zero initialization can be extrapolated with certainty to induce a phase transition under standard initialization around width 4·10^7. We do not have sufficient computational resources to validate this, but we arrive at this order of magnitude irrespective of whether we extrapolate the scaling laws of ∥∆W_t^l x_t^l∥_RMS or ∥W_t^l∥_*, and irrespective of whether we use the input or the hidden layer laws. For base width n_0 and width-dependent statistics ∆_n^1 and ∆_n^2 with differing scaling exponents c_1 and c_2, ∆_n^1 and ∆_n^2 intersect at width n_0·(∆_{n_0}^2/∆_{n_0}^1)^{1/(c_1−c_2)}.
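The intersection formula can be evaluated directly; the constants below are hypothetical placeholders, not the measured statistics from the figures:

```python
# Width at which two scaling laws D1_n = D1 * (n/n0)**c1 and
# D2_n = D2 * (n/n0)**c2 intersect: n0 * (D2/D1)**(1 / (c1 - c2)).
def intersection_width(n0: int, d1: float, c1: float, d2: float, c2: float) -> float:
    return n0 * (d2 / d1) ** (1.0 / (c1 - c2))

# hypothetical constants: an initialization-dominated term 1e-1 decaying as
# n**-0.5 versus a width-independent pure-update term 1e-4
n_star = intersection_width(256, 1e-1, -0.5, 1e-4, 0.0)
print(n_star)  # 2.56e8: the phase transition lies far beyond measurable widths
```

This is the calculation behind extrapolating a phase-transition width from constants and exponents measured at small scale.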
This consequential difference in empirical scaling exponents at realistic widths, caused by a subtle difference in last-layer initialization, highlights the attention to detail that is required to make accurate scaling predictions from infinite-width limit theory. As we show in this paper, however, apparent contradictions can often be reconciled with enough care, and the clean scaling laws we arrive at as a result already hold at moderate scales and prove the usefulness of investing this extra effort. Hence, one reason why scaling exponents in SGD can be larger than predicted up to very large widths is the differing orders of magnitude of the constant prefactors in the initialization versus the update terms in the backward pass. Without our refined coordinate check, the phase transition around width 10^7 is hard to predict.

As predicted, the width-exponents of 2-layer nets behave like the input and output layers in 3-layer nets (Figure E.4). When choosing the optimal learning rate η_256 = 0.03 at width 256, stronger finite-width effects due to non-vanishing input layer feature learning already occur after a few steps and make the update scaling exponents after 10 steps visible only at larger widths n ≥ 2048 (Figure E.5). As long as divergence is prevented in the first few steps, self-stabilization mechanisms such as activation sparsification can quickly contain the initial catapult (Figure E.6). In deeper networks, explosion of several hidden layers is increasingly difficult to stabilize, and finite-width effects are reduced. Figure E.5 shows the effective update rank and the alignment between activations at initialization versus at time t for the same input training points under unstable width-independent learning rate scaling. The updates in each layer
are remarkably strongly dominated by a single direction. As the hidden-layer activations slowly diverge, their alignment only begins to decrease at large widths n ≥ 4096. The beginning instability of ∥∆x^2∥_RMS will eventually induce training instability and suboptimal accuracy at large width, which is hard to predict without tracking the layerwise effective update scaling across widths.

Figure E.1: (Hidden-layer feature learning in SP under intermediate learning rate scaling) Effective l-th layer update scalings ∥∆W_t^l x_t^{l−1}∥_RMS (top), weight update spectral norm ∥∆W_t^l∥_* (2nd row) and activation updates δx^l (bottom) of MLPs trained in SP with small learning rate η_n = 0.0001·(n/256)^{−1/2}, scaled to preserve hidden-layer feature learning. The TP scaling predictions are accurate. Hidden layers learn features width-independently, and input layers have vanishing feature learning. At moderate widths, activation updates are confounded by previous-layer updates, and thus do not provide an accurate metric for effective update scaling.

Figure E.2: (Inaccurate exponent predictions under standard initialization with large learning rate scaling) Effective l-th layer update scalings ∥∆W_t^l x_t^{l−1}∥_RMS (top), weight update spectral norm ∥∆W_t^l∥_* (2nd row) and activation updates δx^l (bottom) of 3-layer MLPs trained in SP with width-independent η_n = 0.0001. Hidden-layer activation updates explode, and input layers have vanishing feature learning. By the TP scaling predictions, however, the input layer should learn features width-independently. Instead, the TP scaling exponents are only accurate under last-layer zero initialization, not under standard initialization (see Figure E.3 for last-layer zero initialization), as the initialization scaling W_0^{L+1} = Θ(n^{−1/2}) still dominates the update scaling ∆W_t^{L+1} = Θ(η_n) at realistic widths after a few updates under the optimal learning rate.
Hence, the backpropagated gradient ∂f/∂x^L = W_t^{L+1}, relevant for the hidden and input layer updates, behaves for several steps as it should only behave in the first step. Comparing the absolute scales here with those in Figure E.3 makes apparent that this is indeed a finite-width effect: the absolute scale of ∥∆W x∥_2 here is of order 10^{−1} and 10^{−2} for the input and hidden layer, respectively, whereas the pure update effects under last-layer zero initialization are of at most order 10^{−4} for both layer types. Clearly, for sufficient width, the differing scaling exponents will induce a phase transition toward the predicted scaling exponents. While the input layer learns features width-independently under last-layer zero initialization, as predicted by TP theory, this is not the case at realistic scales under standard initialization. The qualitative statement that standard parameterization with width-independent learning rates is not activation-stable in deep networks is still accurate at moderate width.

Figure E.3: (Accurate exponent predictions in SP with last-layer zero initialization under large learning rate scaling) Same as Figure E.2 with width-independent η_n = 0.0001, but initializing the last layer to zero. Here, the TP scaling predictions are accurate. Hidden-layer activation updates explode as n^1, and input layers learn features width-independently. Observe the smaller absolute scale of the pure update effects here versus in Figure E.1, which explains the differing exponents there. The updates in the input and hidden layers vanish in the first step, as the gradient for backprop is W_0^{L+1} = 0.

Figure E.4: (Shallow nets learn features width-independently under large learning rate scaling) Same as Figure E.1, but for 2-layer MLPs trained in SP with width-independent η_n = 0.0003, with standard initialization (left) and the last layer initialized to 0 (right). The input layer and output layer scalings behave as in the 3-layer nets. Since there is no exploding hidden layer, activation stability is preserved in 2-layer nets under η_n = Θ(1).

Figure E.5: (Large finite-width effects at optimal learning rate in shallow 3-layer MLPs) At the optimal learning rate η_256 = 0.03 with width-independent scaling, non-vanishing input layer feature learning confounds the scalings after a few update steps up to moderate widths n ≤ 1024, similar to Adam (Figure E.9).

Figure E.6: (Activation sparsification at the optimal learning rate) Effective l-th layer update ranks ∥∆W_t^l∥_F/∥∆W_t^l∥_*, activation sparsity and cosine similarity between the activations of each layer, comparing time 0 and time t on the same input training point and on differing training points in the same batch, for 3-layer MLPs trained with SGD in SP with width-independent learning rate η_n = 0.03, as in Figure E.5. As opposed to the gradient flow regime, at the optimal learning rate there are significant self-stabilization effects at large width already after 10 steps, through activation sparsification but less through activation rotation.

Figure E.7: (Full width-independence in µP) Effective l-th layer updates (top), effective update ranks ∥∆W_t^l∥_F/∥∆W_t^l∥_* (second row) and cosine similarity between the activations of each layer comparing time 0 and time t on the same input training point (bottom) of 3-layer MLPs trained with SGD in µP with width-independent learning rate η_n = 0.03. As expected, all statistics behave width-independently. The effective update rank is remarkably small, as in SP. The activations are rotated quite quickly.
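The effective update rank ∥∆W∥_F/∥∆W∥_* tracked in these figures (reading ∥·∥_* as the spectral norm, as in the figure labels) is a one-liner in numpy:

```python
import numpy as np

def effective_update_rank(dW: np.ndarray) -> float:
    # ||dW||_F / ||dW||_{2->2}: equals 1 for a rank-1 update and
    # sqrt(rank) when all nonzero singular values are equal.
    return np.linalg.norm(dW, "fro") / np.linalg.norm(dW, 2)

# rank-1 update (outer product): ratio is exactly 1
u, v = np.ones(8), np.arange(1.0, 9.0)
print(effective_update_rank(np.outer(u, v)))  # 1.0
# identity (full rank, equal singular values): ratio is sqrt(16) = 4
print(effective_update_rank(np.eye(16)))      # 4.0
```

A value near 1 across widths, as observed above, means the accumulated update is dominated by a single direction.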
E.2 Adam

With η_n = Θ(n^{−1/2}), the optimal learning rate scaling for 3-layer MLPs with Adam on CIFAR-10 is larger than predicted (Figure F.22). Figure E.9 shows that this may be due to large finite-width effects for Adam at the optimal learning rate multiplier η_256 = 0.0003 and moderate width n ≤ 8192. While the weight update spectral norm scales as predicted, the input layer gets large updates at moderate width (Figure E.10) and induces a strong rotation of the activations. As a result, the activation explosion only sets in at large width n ≥ 8192. This qualitative change toward vanishing input layer feature learning will result in a phase transition toward unstable scaling at large widths, which is hard to predict at small scale from measurements alone, except when measuring both ∥∆W^l∥_* and the alignment ∥∆W^l x^{l−1}∥_RMS. As opposed to SGD, observe large finite-width effects in the activation updates even under the small absolute learning rate 10^{−6} at moderate width n ≤ 8192 (Figure E.8).

Figure E.8: (Large finite-width effects from input-layer updates in Adam) Effective l-th layer update scalings ∥∆W_t^l x_t^{l−1}∥_RMS (top), weight update spectral norm ∥∆W_t^l∥_* (2nd row) and activation update norm ∥δx^l∥_RMS (bottom) of 3-layer MLPs trained with Adam in SP with η_n = 10^{−6}·n^{−1/2}. Observe that the theoretically predicted exponents in ∥∆W^l∥_* do not transfer to the activation updates at moderate width n < 8192, due to large non-vanishing input layer updates at moderate width. Even the
effective updates ∥∆W_t^l x_t^{l−1}∥_RMS do not perfectly align with the scaling law at infinite width, indicating that the alignment between ∆W_t^l and x_t^{l−1} evolves non-trivially across width, and that the spectral norm ∥∆W^l∥_* and pure infinite-width predictions are less useful for explaining the behaviour of Adam at moderate width.

Figure E.9: (Large finite-width effects from input-layer updates in Adam) Effective l-th layer update scalings ∥∆W_t^l x_t^{l−1}∥_RMS (top), weight update spectral norm ∥∆W_t^l∥_* (2nd row) and activation update norm ∥δx^l∥_RMS (bottom) of 3-layer MLPs trained with Adam in SP with large η_n = 0.0003·n^{−1/2}. Observe that the theoretically predicted exponents in ∥∆W^l∥_* do not transfer to the activation updates at moderate width n < 8192, due to large non-vanishing input layer updates at moderate width. Even the effective updates ∥∆W_t^l x_t^{l−1}∥_RMS do not perfectly align with the scaling law at infinite width, indicating that the alignment between ∆W_t^l and x_t^{l−1} evolves non-trivially across width, and that the spectral norm ∥∆W^l∥_* and pure infinite-width predictions are less useful for explaining the behaviour of Adam at moderate width.

Figure E.10: (Strong activation rotation under Adam at moderate width) Effective l-th layer update ranks ∥∆W_t^l∥_F/∥∆W_t^l∥_* (top) and cosine similarity between the activations of each layer comparing time 0 and time t on the same input training point (bottom) of 3-layer MLPs trained with Adam in SP with large η_n = 0.0003·n^{−1/2}. The effective update rank is mostly growing in time in the input layer. Already after a few steps, the first-layer activation coordinates are drastically rotated at moderate widths. This induces a U-curve in the hidden-layer activations, which inherit large rotation from the input layer at moderate width and update too much at large width under η_n = Θ(n^{−1/2}).
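The rotation diagnostic used in these figures is a plain cosine similarity between a layer's activations at initialization and at time t; a minimal sketch (hypothetical vectors, not the measured activations):

```python
import numpy as np

def activation_alignment(x0: np.ndarray, xt: np.ndarray) -> float:
    # cosine similarity between a layer's activations at init and at time t;
    # 1 means pure rescaling, 0 means complete rotation
    return float(x0 @ xt / (np.linalg.norm(x0) * np.linalg.norm(xt)))

a = np.array([1.0, 0.0, 0.0])
print(activation_alignment(a, 2 * a))                      # 1.0: rescaled, not rotated
print(activation_alignment(a, np.array([0.0, 3.0, 0.0])))  # 0.0: fully rotated
```

Because the cosine is scale-invariant, it separates the "rotation" effect discussed here from the pure blowup or shrinkage of activation norms tracked by the RMS plots.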
E.3 Normalization layers and Adam provide robustness to mis-initialization

For MLPs trained with SGD, initialization greatly impacts the training dynamics, as both the forward and the backward pass are affected (Figure E.11). Large input layer initialization induces update instability at large width, which is stabilized by extreme activation sparsification (Figure E.13). By adding normalization layers, the forward pass can be forced to scale width-independently. This may affect the gradients, but the gradient norms become irrelevant under Adam with sufficiently small ε. Adding both normalization layers and Adam to MLPs, observe that initialization barely matters for the update scalings (Figure E.12) and for other downstream statistics such as activation sparsity (Figure E.14). Here we use RMSNorm to fairly compare activation sparsity, but we expect LayerNorm to induce the same scaling behaviour.

Figure E.11: (Initialization matters in MLPs with SGD) SP (top), SP with large input layer variance 2 (bottom). The initializations induce significant differences in the training dynamics. Large input layer initialization becomes unstable at large width.

Figure E.12: (Differing initialization barely matters with normalization layers and Adam) Update spectral norms of MLPs with the most basic normalization layer, RMSNorm, after every layer, trained with Adam and initialized with SP (top) versus SP with large input layer variance 2 (bottom). Here, initialization barely impacts the update scaling.

Figure E.13: (Big difference in activation sparsity under SGD) SP (top), SP with large input layer variance 2 (bottom).
Large input variance has to be stabilized by increased activation sparsity.

Figure E.14: (Activation sparsity barely affected under normalization) Same as Figure E.12, but showing the fraction of activation entries that equal 0. Neither initialization significantly sparsifies activations beyond 50%.

E.4 Alignment and update scaling in Transformers

Since we measure width-independent alignment α_{W_0^{L+1}, ∆x_t^L} = Θ(1) (Figures 2 and E.15), under large output dimension d_out ≫ n (as is typical in language settings), ∥W_0∥_op approximately scales as Θ(1) (Vershynin, 2010), as opposed to Θ(n^{1/2}) at sufficient width n ≫ d_out. The term W_0^{L+1} ∆x_t^L therefore induces approximately width-independent logit updates even under standard last-layer initialization in the regime d_out ≫ n (cf. Figure E.17), but it induces logit divergence at sufficient width d_out ≪ n.

Figure E.15: (Updates propagate maximally in the readout layer in SP-full-align) The operator norm ratio for propagating activations in the readout layer, for GPT trained with AdamW in SP-full-align with near-optimal learning rate η_n = 0.00316. The ratio is barely width-dependent, so that propagated activations can be computed when knowing both ∥W_0∥_op = ∥W_0∥_{RMS→RMS} and ∥∆x_t∥_RMS.

Figure E.16: (Effective updates follow predictions) Effective updates ∥∆W_t x_t∥ for constant learning rate scaling η_n = 0.01 (top) and stable learning rate scaling η_n = 0.01·(n/256)^{−1} (bottom) in GPT models of varying width (the darker, the wider) for the embedding layer, the first MLP layer in Transformer block 2, the last LayerNorm before the readout layer, and the readout layer (from left to right). At constant learning rate, hidden and output layers diverge with width. At optimal learning rate, embedding and normalization layer updates vanish with width.
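The claimed change in the scaling of ∥W_0∥_{RMS→RMS} between the d_out ≫ n and n ≫ d_out regimes can be checked numerically via the Bai–Yin asymptotics for Gaussian matrices; the matrix sizes below are hypothetical:

```python
import numpy as np

def rms_op_norm(W: np.ndarray) -> float:
    # ||W||_{RMS->RMS} = sqrt(fan_in / fan_out) * ||W||_{2->2}
    dout, n = W.shape
    return np.sqrt(n / dout) * np.linalg.norm(W, 2)

rng = np.random.default_rng(0)
dout = 8192                                        # output dim >> width, as in language models
norms = {}
for n in (64, 256):
    W = rng.normal(0.0, n**-0.5, size=(dout, n))   # standard init: std = fan_in**-0.5
    norms[n] = rms_op_norm(W)
    # Bai-Yin: ||W||_2 ~ (sqrt(dout) + sqrt(n)) / sqrt(n),
    # hence ||W||_{RMS->RMS} ~ 1 + sqrt(n / dout), i.e. ~Theta(1) for dout >> n
    print(n, norms[n], 1 + np.sqrt(n / dout))
```

Swapping the shapes so that n ≫ d_out makes the same quantity grow like sqrt(n/d_out), recovering the Θ(n^{1/2}) infinite-width behaviour for fixed output dimension.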
Figure E.17: (Refined coordinate checks for GPT in SP with Adam and η_n = 0.01·n^{−1}) From left to right: activation norm, activation updates, effective updates ∥∆W_t x_t∥_RMS, propagating updates ∥W_0 ∆x_t∥_RMS after 2, 10, 100 and 700 batches of training (the darker, the more batches). Layers from top to bottom: readout, last LayerNorm, first MLP layer in Transformer block 2, embedding layer. Infinite-width scaling predictions are accurate in all effective update terms ∥∆W_t x_t∥_RMS: embedding and LayerNorm layers scale input-like and their updates vanish as Θ(n^{−1}), while all hidden and output layers are effectively updated width-independently. Against the infinite-width prediction, logit updates do not explode, not because of mis-alignment but because the output dimension is much larger than the width, d_out ≫ n, which changes the approximate scaling of ∥W_0^{L+1}∥_{RMS→RMS} from Θ(n^{1/2}) in the infinite-width limit to Θ(1) in the large-output-dimension regime.

F Empirical learning rate exponents

F.1 Summary of the MLP experiments in this section

In general, the optimal learning rate exponent appears to be architecture- as well as data-dependent. We conjecture that the optimal learning rate scaling is subject to opposing objectives. Ideally, the effective updates in all layers scale width-independently. Since this cannot be achieved with a single learning rate for input, hidden and output layers, the layer types act on the optimal learning rate scaling as opposing forces.

SGD under MSE loss. For SGD under MSE loss, output blowup results in unstable training dynamics, so that the maximal stable and optimal learning rate robustly scales as η_n = Θ(n^{−1}) across architectures and datasets. As a consequence of vanishing
feature learning, neither training nor test loss monotonically improves with scale under MSE loss.

Random feature models. When only training the last layer, fully width-independent training dynamics are achieved with η_n = η·n^{−1}. Figure F.18 shows that this exponent clearly results in learning rate transfer for 2-layer ReLU random feature networks on CIFAR-10. Also observe that, since all learning rate scalings recover activation stability, larger-than-optimal learning rates still result in non-trivial classification accuracy.

Deep MLPs. With an increasing number of hidden layers, their width-independence eventually outweighs input layer feature learning in vision datasets. For at least 6 layers, we see approximate learning rate transfer under η_n = Θ(n^{−1/2}) for SGD and η_n = Θ(n^{−1}) for Adam, as predicted for width-independent hidden-layer feature learning, on both CIFAR-10 and MNIST.

Shallow ReLU MLPs at moderate width and (deep) linear networks are not useful proxy models for deep nonlinear networks. For shallow MLPs, we often observe stronger finite-width effects than for deeper networks, causing larger-than-predicted optimal learning rate scaling at moderate width, as divergence in fewer hidden layers can be stabilized over the course of training up to larger widths (cf. Appendix E). In linear networks, on the other hand, feature learning is not essential, as the learned function always remains linear. Consequently, we often observe that optimal learning rates decay faster than maximal stable learning rates in (deep) linear networks, even under CE loss (Figures F.12 and F.31). These differences between deep non-linear networks and toy architectures suggest that shallow MLPs and deep linear networks do not serve as useful proxy models for practical non-linear networks in terms of optimal learning rate exponents at moderate width.

Input layer task.
Under multi-index data with a sparse signal and high-dimensional isotropic covariates (explained in Appendix D.2), learning the two signal input dimensions is particularly useful for good generalization. Appendix F.3 shows the predicted exponent η_n = η·n^0 for input layer learning in 2-layer MLPs. Deeper MLPs recover hidden-layer stability with optimal learning rate scaling η_n = Θ(n^{−1/2}). Observe that generalization suffers when the input layer does not learn to align with the signal dimensions, so that only the 2-layer MLP with CE loss generalizes well at large width.

Standard initialization with µP learning rates (SP-full-align). While Everett et al. (2024) report good transfer properties of SP-full-align, Appendix F.8 shows that the optimal learning rate clearly shrinks across image datasets and our multi-index data. We also introduce a variant of this parameterization that matches the n^{1/2} logit blowup rate from the term W_0^{L+1} ∆x_t in the effective last-layer updates by increasing the last-layer learning rate. This variant performs similarly well to SP-full-align. In particular, both variants seem to be less learning-rate-sensitive than µP.

Adam learns features with η_n = η·n^{−1}. Adam simplifies the learning rate scaling for weight W to η_W = η/fan_in(W), because the gradient is normalized but still correlated with the incoming activations, since the sign is preserved in each entry. Thus η_n = η/n is expected to induce width-independent hidden- and output-layer learning, but vanishing input-layer learning, since for the input layer fan_in is fixed and width-independent learning would hence require constant learning rate scaling. As
https://arxiv.org/abs/2505.22491v1
for SGD, we still observe the optimal learning rate scaling η_n = η·n^{−1} in deep MLPs on MNIST (Appendix F.5) and on CIFAR-10 (Appendix F.7), indicating that width-independence in the hidden and output layers dominates input layer feature learning.

F.2 Transformer experiments

As we consider single-pass training, training and validation loss approximately coincide, so that statements about the training loss transfer to statements about the validation loss, irrespective of the optimizer. All figures in this section show training loss on the left and validation loss on the right. Stabilizing techniques like gradient clipping can improve the absolute learning rate multiplier in front of the scaling law, but do not seem to change the width-scaling exponent for SGD (Figure F.4 vs Figure F.5).

Figure F.1: (Instability without qk-Layernorm) Train loss (left) and validation loss (right) of single-pass AdamW training without qk-Layernorm. Training and validation loss approximately coincide. The optimal learning rate scaling is dominated by the maximal stable learning rate scaling, which is at most Θ(n^{−1}). But without qk-Layernorm, the stability threshold decreases faster than Θ(n^{−1}) even when increasing the warmup length, so it may be that the instability threshold would decay beyond the ideal learning rate and performance would suffer. As our computational budget does not allow us to scale further, this setting remains inconclusive.

Figure F.2: (Large learning rate stability with qk-Layernorm) Same as Figure F.1 but with qk-Layernorm, as recommended by Wortsman et al. (2024). Training and validation loss approximately coincide. The optimal learning rate seems to approximately transfer under η_n = η·n^{−1/2}, so the added Layernorm appears to stabilize learning at larger learning rate scaling, similar to the softmax in CE loss.

Figure F.3: Same as Figure F.1 with qk-Layernorm as recommended by Wortsman et al. (2024), but with all trainable Layernorm parameters fixed to initialization.
Here only the embedding layer behaves input-like, so that all other parameters learn width-independently under learning rate scaling Θ(n^{−1}). While the optimum is drifting toward larger learning rates, an increasingly large plateau of near-optimal learning rates emerges at large width. Θ(n^{−1}) still approximately captures the maximal stable learning rate scaling.

Figure F.4: (GPT trained with SGD has Θ(n^{−1/2}) learning rate scaling) Train loss (left) and validation loss (right) of single-pass SGD training (averaged over 3 random seeds affecting weight initialization and data shuffling). Training and validation loss approximately coincide. Hence the validation-optimal learning rate scaling is also dominated by the maximal stable learning rate scaling Θ(n^{−1/2}) for hidden-layer stability.

Figure F.5: Same as Figure F.4 but with gradient clipping. Performance is significantly improved, as larger learning rate constants are stable (observe similar performance as without gradient clipping at the same learning rate). The optimal learning rate scaling is still dominated by the maximal stable learning rate scaling η_n = η·n^{−1/2} for hidden-layer stability.

Figure F.6: Same as Figure F.2 but in SP-full-align. AdamW in SP-full-align and SP with a global learning rate seem to have similar performance without multiplier tuning. SP-full-align approximately transfers the learning rate here in the d_out ≫ n regime, but not in the d_out ≪ n regime in Appendix F.8.

Figure F.7: (Large learning rate exponent in the original GPT paper) Simply plotting the reported learning rate and d_model
values from Brown et al. (2020) results in quite a stable scaling law with exponent −0.648, which is larger than the −1 required for hidden-layer stability, but significantly smaller than the 0 required for width-independent input layer learning. Note, however, that the jointly increased batch size, n_layers and n_heads might be confounding factors here.

F.3 Cross-entropy loss enables large-learning-rate training

MLPs on multi-index data. Here we train 2-layer and 3-layer ReLU MLPs on generated multi-index teacher data, as detailed in Appendix D. These data crucially differ from the other considered datasets in that the target function only depends on the first 2 input dimensions. Due to the isotropic covariate distribution, input layer feature learning is necessary for good generalization. Hence we observe a clear η_n = Θ(1) scaling for 2-layer MLPs with CE loss, necessary for preserving input layer feature learning (Figure F.8). 3-layer MLPs attain the maximal activation-stable exponent η_n = Θ(n^{−1/2}) under CE loss (Figure F.9). 2-layer MLPs preserve a better validation accuracy relative to their training accuracy than deeper nets, as input layer learning gets increasingly inhibited by Θ(1)-learning-rate instability in the presence of hidden layers. Both for shallow and for deeper MLPs with MSE loss, we lose feature learning under the maximal output-stable scaling η_n = Θ(n^{−1}), as expected.

In this setting, it becomes particularly apparent that using the MSE loss with a softmax applied to the output of the network is not desirable. Ultimately, the only difference to CE loss is that the loss derivative with respect to the network output f(ξ) := W^{L+1} x^L(ξ) becomes

(∂L/∂f)_j = Σ_{i∈[C]} (σ(f)_i − y_i) · σ(f)_i (δ_ij − σ(f)_j),

where the inner derivative of the softmax, σ_i(δ_ij − σ_j), vanishes as soon as the outputs diverge, |f_i(ξ) − f_j(ξ)| → ∞, on a training point ξ. Hence, while the softmax still mitigates output blowup in the forward pass, the gradients vanish under output blowup.
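The vanishing of the MSE+softmax gradient under diverging logits can be verified directly from the derivative above; a small numpy sketch with hypothetical logits (the CE gradient σ(f) − y is included for contrast, using the standard cancellation of the softmax's inner derivative):

```python
import numpy as np

def softmax(f):
    e = np.exp(f - f.max())
    return e / e.sum()

def mse_softmax_grad(f, y):
    # dL/df_j = sum_i (sigma_i - y_i) * sigma_i * (delta_ij - sigma_j), vectorized:
    # grad_j = sigma_j * (r_j - sum_i r_i * sigma_i) with r = sigma - y
    s = softmax(f)
    r = s - y
    return s * (r - (r * s).sum())

def ce_grad(f, y):
    # dL/df_j = sigma(f)_j - y_j: the softmax's inner derivative cancels
    return softmax(f) - y

y = np.array([1.0, 0.0, 0.0])
for scale in (1.0, 10.0, 50.0):
    f = scale * np.array([-1.0, 1.0, 0.0])   # increasingly diverging (wrong) logits
    print(scale, np.abs(ce_grad(f, y)).max(), np.abs(mse_softmax_grad(f, y)).max())
```

As the logit gap grows, the CE gradient stays of order 1 while the MSE+softmax gradient collapses toward 0, which is exactly the mechanism blamed here for the Θ(n^{−1}) optimal learning rate under MSE+softmax.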
The CE loss, on the other hand, is exactly the correct choice of loss function to cancel out the inner derivative of the softmax and effectively view σ(f) as the output of the network, resulting in

(∂L/∂f)_j = σ(f)_j − y_j.

The vanishing of gradients under output blowup in the MSE+softmax setting is so severe that output blowup prevents learning under large learning rates, and the optimal learning rate scales as Θ(n^{−1}).

Figure F.8: (Cross-entropy loss increases the maximal stable learning rate scaling to approximately Θ(1) in 2-layer nets) Training accuracy (top) and validation accuracy (bottom) for a 2-layer MLP on generated multi-index teacher data (mean over 4 seeds) with CE loss (left), MSE loss (center) and MSE loss with softmax (right). The x-axis scales the learning rate with width-dependent exponents; observe approximate transfer under Θ(1), Θ(n^{−1}) and Θ(n^{−1}) scaling, respectively. In the MSE plot, ending lines indicate divergence for larger learning rates. MSE loss with softmax on the output does not increase the optimal learning rate scaling, due to vanishing gradients, and gets worse due to a lack of input layer feature learning.

Figure F.9: (Cross-entropy loss increases the maximal stable learning rate scaling to approximately Θ(n^{−1/2}) in 3-layer nets) Same as Figure F.8 but for a 3-layer MLP. The x-axis scales the learning rate with width-dependent exponents; observe approximate transfer of the maximal stable learning
rate under Θ(n^{−1/2}), Θ(n^{−1}) and Θ(n^{−1}) scaling, respectively. In the MSE plot, ending lines indicate divergence for larger learning rates. Observe that wider networks generalize worse with scale, as they lose input layer feature learning.

F.4 MLPs with SGD on MNIST

With MSE loss, observe a clear O(n^{−1}) optimal and maximal stable learning rate exponent for all network variants (Figure F.10). With CE loss, observe that 2-layer MLPs transfer the optimal and maximal stable learning rate under n^0 scaling (Figure F.11). 3- and 4-layer MLPs are still able to transfer the maximal stable learning rate, indicating self-stabilization of activation blowup at moderate width. In 6-, 8- and 10-layer MLPs, the maximal stable learning rate scaling n^{−1/2} becomes increasingly pronounced, as it becomes increasingly difficult to stabilize activation blowup in a growing number of hidden layers while a growing number of layers is learning under n^{−1/2} learning rate scaling.

3- and 4-layer linear MLPs clearly show how the maximal stable learning rate scales as n^0, whereas the optimal learning rate scales as n^{−1} (Figure F.12). While activations can be shrunk for self-stabilization in linear nets, linear nets lack the ability to learn non-linear features for improved generalization at large learning rates. Hence, losing feature learning but avoiding the need to shrink activations under small learning rates n^{−1} is optimal. Observe that deeper linear MLPs are also only stable under n^{−1/2}, as theoretically predicted.

Figure F.10: (MSE loss on MNIST approximately transfers under Θ(n^{−1}) learning rate scaling) Both the optimal as well as the maximal stable learning rate approximately transfer under global Θ(n^{−1}) learning rate scaling when training 2-, 3- or 10-layer MLPs (from left to right) with MSE loss on MNIST. The loss is not improving, as feature learning is lost under Θ(n^{−1}) scaling. Especially in 2-layer nets, the input layer is learning features at small width, but not at large width.
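The Θ(n^{−1}) stability ceiling under MSE loss can be caricatured by a toy model (ours, not the paper's): gradient descent on a quadratic with sharpness λ diverges iff η > 2/λ, so a sharpness that grows linearly in n (output blowup) forces η = Θ(n^{−1}):

```python
def diverges(lr: float, lam: float, steps: int = 100, w0: float = 1.0) -> bool:
    # gradient descent on L(w) = lam/2 * w**2; the iterate is multiplied by
    # (1 - lr*lam) each step, so training is stable iff lr < 2/lam
    w = w0
    for _ in range(steps):
        w -= lr * lam * w
    return abs(w) > abs(w0)

for n in (256, 1024, 4096):
    lam = n / 256.0               # hypothetical width-proportional sharpness
    print(n, diverges(0.9 * 2 / lam, lam), diverges(1.1 * 2 / lam, lam))
```

Just below the threshold 2/λ the loss contracts at every width; just above it, it explodes, so the stable learning rate inherits the inverse of the sharpness's width scaling.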
Figure F.11: (Deeper nets follow infinite-width theory increasingly accurately) Training accuracy after 1 epoch of training MLPs with 2, 3, 4, 6, 8 and 10 layers (from top-left to bottom-right) on MNIST. While 2-, 3- and 4-layer MLPs self-stabilize under large learning rates Θ(n^0) and approximately transfer the optimal as well as the maximal stable learning rate, in 6-, 8- and 10-layer MLPs it becomes increasingly apparent that the maximal stable learning rate transitions towards Θ(n^{−1/2}) to prevent hidden-layer blowup, which also forces the optimal learning rate to be O(n^{−1/2}) to at least preserve feature learning in the hidden layers. Hence the theoretical activation stability predictions hold more accurately in deeper nets, with too many hidden layers to stabilize blowup in all of them.

Figure F.12: (In linear nets on MNIST, the optimal learning rate shrinks faster than the maximal stable learning rate) Same as Figure F.11 but for linear nets. The maximal stable learning rate scales similarly as for the non-linear nets, but the optimum approximately follows Θ(n^{−1}). Irrespective of depth, linear MLPs can only learn a linear transformation; hence, under sufficient width, feature learning under large learning rates does not provide a benefit over mere last-layer learning.

F.5 MLPs with Adam on MNIST

Figure F.13 shows that for deep
https://arxiv.org/abs/2505.22491v1
MLPs trained with Adam on MNIST the optimal learning rate scales at most as η_n = O(n^{-1}). MLPs with 2 or 3 layers tend to have larger optimal learning rate scaling exponents around n^{-1/2}, but with an increasing number of layers the conflicting objectives of first-layer versus hidden-layer width-independent learning are dominated by the increasing number of hidden layers. For all depths, the instability threshold appears to scale as η_n ≈ Θ(n^{-1/2}), but since Adam has a wide regime of suboptimal large learning rates where its moments are already harmed (?), the maximal stable learning rate threshold is often less clear-cut compared to SGD.

Figure F.13: (Learning rate transfer in deep MLPs for ADAM on MNIST) MLPs trained with ADAM on MNIST with 2, 3, 4, 6, 8, 10 layers (from top left to bottom right). In the first row, the x-axis is width-dependently scaled to show approximate transfer under n^{-1/2} learning rate scaling. In the bottom row, the x-axis is width-dependently scaled to show approximate transfer under η_n ≈ Θ(n^{-1}). Observe the optimal learning rate scaling transitioning from larger than Θ(n^{-1/2}) in 2-layer MLPs toward at most Θ(n^{-1}) with increasing depth.

Figure F.14: (Transfer in validation accuracy in deep MLPs for ADAM on MNIST) Validation accuracy of MLPs trained with ADAM on MNIST with 2-layer random feature, 3, 8, 10 layers (from left to right). The validation-optimal learning rate in deep MLPs scales as η_n = O(n^{-1}). 2-layer RF and 3-layer nets appear to approximately transfer under η_n ≈ Θ(n^{-1/2}) but lose monotonic improvement and predictability at scale.

Figure F.15: (µP as a baseline for transfer) 8-layer MLPs trained on MNIST with SGD (top) and ADAM (bottom) under CE loss (left) and MSE loss (right). There are no systematic learning rate shifts in µP; saturating drifts may occur. Transfer and monotonic improvement look less noisy under MSE loss.
F.6 MLPs with SGD on CIFAR-10

2-layer random-feature ReLU MLPs very clearly transfer under η_n = η · n^{-1} learning rate scaling under any loss. Under CE loss, larger learning rates also result in non-trivial learning, as saturating the softmax does not harm training stability. Under MSE loss, on the other hand, training diverges above the edge of stability and results in trivial accuracy of 10%. Under MSE with softmax on the output logits, exploding logits induce vanishing gradients, which also inhibits learning (see Appendix F.3 for more details) and results in worse accuracy than under CE loss.

For 2-layer ReLU MLPs under CE loss, the maximal stable learning rate scales as η_n = Θ(1), as predicted under activation stability. The optimal learning rate, however, is a trade-off between input-layer feature learning and output-layer stability. Here an output layer that explodes too sharply appears to perform suboptimally. But under small learning rates η_n = O(n^{-1/2}), feature learning is lost and accuracy gets worse with scale.

For 3-layer ReLU MLPs with CE loss, the maximal stable learning rate and the optimal learning rate transfer over many widths before beginning to shrink at width 16384. Over 782 updates, an initial catapult can be stabilized at moderate width as long as training does not diverge early, such as under
MSE loss. Apparently, large hidden-layer updates can be stabilized over the course of training. As an additional inductive bias that self-stabilizes large gradients, activations are sparsified, which may enhance generalization under large learning rates. Figure F.19 shows very slow decay of the optimal learning rate also in 4- and 6-layer MLPs, but deeper MLPs eventually require η_n = Θ(n^{-1/2}) for stable hidden-layer updates.

Figure F.16: (Softmax increases maximal stable learning rate scaling) Training accuracy after one epoch of training 3-layer MLPs on CIFAR-10 with CE loss (left), MSE loss (center) and MSE loss with softmax (right). The x-axis scales the learning rate with width-dependent exponents to show approximate transfer under Θ(1), Θ(n^{-1}) and Θ(n^{-1/2}) scaling, respectively.

Figure F.17: (Softmax increases maximal stable learning rate scaling) Same as Figure F.16 but with 2-layer MLPs and approximate transfer under Θ(n^{-0.5}), Θ(n^{-1}) and Θ(n^{-1/2}). Note how the maximal stable learning rate rather scales as Θ(1), but the optimal learning rate rather scales as Θ(n^{-1/2}).

Figure F.18: (Random feature models approximately transfer under η_n = Θ(n^{-1}) for SGD) Same as Figure F.16 but with 2 layers and only training the last layer, which results in approximately width-independent dynamics with η_n = η · n^{-1} independent of the loss function or architecture used. Note that larger learning rates also result in non-trivial generalization because there is no instability caused by activation blowup. The larger learning rates are not optimal, because the usual benefits of larger learning rates, like increased feature learning or activation sparsity, do not apply to random feature models.

Figure F.19: (Hidden-layer stability determines learning rate scaling in deep MLPs for SGD on CIFAR-10) MLPs trained with SGD on CIFAR-10 with 4, 6, 8 and 10 layers (from left to right). The first two x-axes are width-independent, the last two scaled by n^{1/2}.
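The stabilizing effect of the softmax noted above can be checked directly: the CE gradient with respect to the logits, softmax(z) − y, stays bounded no matter how far the logits blow up, while the plain MSE logit gradient grows with the logits themselves. A small numerical sketch (ten classes; the logit scales are hypothetical stand-ins for width-induced blowup):

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def ce_logit_grad(z, y):
    """Gradient of cross-entropy w.r.t. the logits: softmax(z) - y."""
    return softmax(z) - y

def mse_logit_grad(z, y):
    """Gradient of mean squared error w.r.t. the logits: 2 (z - y) / k."""
    return 2.0 * (z - y) / len(z)

y = np.eye(10)[3]                   # one-hot target
for scale in (1.0, 1e2, 1e4):       # logits blowing up
    z = scale * np.linspace(-1.0, 1.0, 10)
    ce = np.abs(ce_logit_grad(z, y)).max()    # bounded by 1 at every scale
    mse = np.abs(mse_logit_grad(z, y)).max()  # grows linearly with the scale
```

The bounded CE gradient is what allows stable training despite logit blowup, while the unbounded MSE gradient pushes training past the edge of stability.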
While 4- and 6-layer MLPs self-stabilize sufficiently for approximate transfer under width-independent learning rate scaling, 8- and 10-layer MLPs have a clear maximal stable learning rate scaling n^{-1/2}.

Figure F.20: (Hidden-layer stability determines learning rate scaling in deep MLPs for SGD on CIFAR-10) 8-layer MLPs trained with SGD (left) and Adam (right) on CIFAR-10 with MSE loss with LayerNorm applied to the logits. The LayerNorm has a similar stabilizing effect as CE loss and allows learning with logit blowup under η_n = Θ(n^{-1/2}). For Adam, hidden and output layers learn width-independently with η_n = Θ(n^{-1}).

F.7 MLPs with ADAM on CIFAR-10

All networks in Figure F.21 appear to transfer under large η_n = Θ(n^{-1/2}) learning rate scaling. This scaling clearly induces activation blowup in the forward pass. A crucial difference to SGD is that activation blowup does not affect the updates in Adam, since the gradient is normalized. For SGD, exploding gradients induce even larger explosion in the next forward pass, which in turn induces even larger explosion in the next backward pass. Hence, without activation stability, even the divergence exponent grows over time in SGD. For Adam, on the other hand, gradients are normalized, so that the forward pass always accumulates the same width-dependent exponent, which is stabilized when passed through the softmax. Thus, under sufficient numerical precision, from a stability point of
view, Adam can even tolerate larger learning rates than the hidden-layer feature-learning rate η_n = Θ(n^{-1}), and the optimal learning rate may also be pushed toward input-layer feature learning. Indeed, when fixing the first layer (Figure F.22), all MLPs transfer under η_n = Θ(n^{-1}), which now achieves fully width-independent effective updates. In this variant there are no conflicting objectives trading off hidden-layer and input-layer width-independence.

The fact that Adam with MSE loss (Figure F.23) can have large optimal learning rates η_n = Θ(n^{-1/2}) indicates that the crucial effect of CE loss in SGD is stabilizing the gradients. Adam similarly limits the step size, as the update scale is independent of χ_t. As for SGD, at large depth hidden-layer width-independence tends to dictate the optimal learning rate.

Figure F.21: (Learning rate exponent η_n = Θ(n^{-1}) for ADAM in deep MLPs on CIFAR-10) MLPs trained with ADAM on CIFAR-10 with 2-layer random features, 2, 8, 10 layers (from left to right). The first 2 x-axes show approximate transfer under n^{-1/2} learning rate scaling, the last 2 under n^{-1} learning rate scaling. As for SGD, in deeper nets hidden-layer width-independence dominates input-layer width-independence and induces optimal learning rate scaling η_n ≈ Θ(n^{-1}).

Figure F.22: (Trade-off between input- and hidden-layer width-independence) 3-layer MLPs trained with ADAM on CIFAR-10 (left) and not training the first layer (right). 3-layer MLPs approximately transfer under η_n = Θ(n^{-1/2}), being pushed toward input-layer feature learning. As there are no conflicting goals like preserving input-layer feature learning, 3-layer MLPs with fixed input layer follow the width-independent exponent η_n = Θ(n^{-1}) that yields hidden- and output-layer width-independent feature learning.

Figure F.23: (Adam stabilizes the backward pass even under MSE loss) MLPs trained with ADAM on CIFAR-10 under MSE loss with 2, 3, 6, 8 layers (from left to right).
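The claim that gradient normalization decouples Adam's step size from gradient blowup can be illustrated on a single parameter: rescaling the whole gradient stream by a huge constant leaves the Adam updates essentially unchanged, whereas SGD updates would scale linearly. A minimal sketch of the standard Adam recursion (our own toy numbers, not the paper's code):

```python
import numpy as np

def adam_update(grads, lr=1e-3, b1=0.9, b2=0.999, eps=1e-12):
    """Return the sequence of Adam parameter updates for a gradient stream."""
    m = v = 0.0
    updates = []
    for t, g in enumerate(grads, start=1):
        m = b1 * m + (1 - b1) * g              # first moment
        v = b2 * v + (1 - b2) * g * g          # second moment
        mhat = m / (1 - b1 ** t)               # bias corrections
        vhat = v / (1 - b2 ** t)
        updates.append(lr * mhat / (np.sqrt(vhat) + eps))
    return np.array(updates)

g = np.array([0.5, -1.0, 2.0, 0.3])
u_small = adam_update(g)
u_blown = adam_update(1e6 * g)   # gradients exploded by activation blowup
# The two update sequences agree: Adam's effective step is scale-free.
```

Since m and v scale with g and g², respectively, the ratio m/√v is invariant under rescaling the gradients, up to the epsilon term.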
2 and 3 layers show approximate transfer under n^{-1/2} learning rate scaling, 6 and 8 layers show approximate transfer under n^{-1}.

Figure F.24: (Large performance difference between losses for SGD in SP) Optimal training accuracy of 8-layer MLPs trained with SGD (left) and Adam (right) on CIFAR-10 (top) and MNIST (bottom) with MSE loss (dashed lines) and CE loss (solid lines) in µP (blue) and SP (orange). For SGD in SP, CE loss performs much better than MSE loss, as large learning rates recover feature learning at large widths. The performance in µP depends much less on the loss function since features are always learned width-independently. In µP, MSE loss slightly outperforms CE loss. For ADAM, small learning rates η_n = Θ(n^{-1}) in SP recover hidden-layer feature learning, so that the difference between losses is much smaller.

F.8 Effective update parameterizations beyond µP

The logit updates can be decomposed into

f_t(ξ) − f_0(ξ) = W^{L+1}_0 Δx^L_t(ξ) + ΔW^{L+1}_t x^L_t(ξ),

for arbitrary inputs ξ ∈ R^{d_in} and ΔW^{L+1}_t = Σ_{t'=0}^{t−1} χ_{t'} · x^L_{t'}(ξ_{t'}). In this section, we consider vision and generated data sets in the regime n ≫ d_out. First note that under large last-layer initialization (W^{L+1}_0)_{ij} ∼ N(0, n^{-1}) as in SP, fully width-independent training dynamics are impossible, since width-independent feature learning Δx^L_t = Θ(1) implies logit blowup through the term W^{L+1}_0 Δx^L_t = Θ(n^{1/2}) for both SGD
and Adam. The fact that logit blowup does not prevent stable training under CE loss explains why we can achieve non-vanishing feature learning under SP last-layer initialization.

When dropping the logit stability constraint, we can ask which is the optimal layerwise learning rate scaling under standard last-layer initialization. Following the µP desiderata, we still want to effectively update all layers, meaning a non-vanishing effect of the weight updates in each layer on the output function. With the correct choice of layerwise learning rates, we can still satisfy these desiderata for all scalings of the last-layer initialization variance, which also implies that there is not a unique abc-equivalence class fulfilling these effective-update desiderata when logit stability is not required. We will see that SP full-align in Everett et al. (2024), which just uses the µP layerwise learning rates with SP initialization (and which they promote as their overall best-performing parameterization without identifying stability under logit blowup as the key mechanism), fulfills these desiderata, except for a vanishing last-layer update effect on the output function. We will introduce another variant with larger last-layer learning rate that recovers effective updates of all layers. To avoid confusion with SP, meaning using a global learning rate, and with µP, meaning also achieving width-independence in the logits, we call this last variant Maximal Update Parameterization under Standard Output-layer Initialization (MUSOLI).

For deriving the optimal layerwise learning rate exponents, first consider the scaling of hidden-layer pre-activation updates δh^l, l ∈ [2, L], and input-layer pre-activation updates δh^1 (Yang and Hu, 2021, p. 51),

δh^l(ξ) = Θ( W^l_0 δx^{l−1}_t + η_l χ_{t−1} (∂f/∂h^l)_{t−1} (x^{l−1}_{t−1})^⊤ x^{l−1}_t(ξ) ), where (x^{l−1}_{t−1})^⊤ x^{l−1}_t(ξ) = Θ(n),

δh^1(ξ) = Θ( η_1 χ_{t−1} (∂f/∂h^1)_{t−1} ξ^⊤_{t−1} ξ ), where ξ^⊤_{t−1} ξ = Θ(1),

where it holds that ∂f/∂h^l = Θ(∂f/∂x^L) = W^{L+1}_t = Θ(W^{L+1}_0 − η_{L+1} χ_t x^L) (at latest in the second step) (Yang and Hu, 2021, p. 52).
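The Θ(n) factor in the hidden-layer pre-activation update comes from an inner product between two n-dimensional activation vectors with Θ(1), positively correlated coordinates, which grows linearly in width by a law-of-large-numbers argument. A quick numerical check under the simplifying assumption of correlated unit-scale Gaussian coordinates (our illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_product_scale(n, trials=200):
    """Average |x^T x'| for correlated unit-coordinate vectors of width n."""
    x = rng.standard_normal((trials, n))
    xp = 0.5 * x + 0.5 * rng.standard_normal((trials, n))  # correlated pair
    return float(np.mean(np.abs(np.sum(x * xp, axis=1))))

s1 = inner_product_scale(1_000)
s2 = inner_product_scale(10_000)
ratio = s2 / s1   # grows ~10x when the width grows 10x: Theta(n) scaling
```

Each coordinate contributes a Θ(1) expected product, so the sum over n coordinates concentrates around a multiple of n; this is the LLN-like scaling that the layerwise learning rates below must cancel.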
Hence the correct l-th layer learning rate η_l for achieving a width-independent effect on the next layer's pre-activations needs to cancel out the backpropagated gradient scaling ∂f/∂h^l and, for hidden layers, additionally the LLN-like scaling from the inner product between activations. As we still require activation stability x^L_T = Θ(1), we have ∂f/∂x^L = Θ(n^{−min(b_{L+1}, c_{L+1})}). While under standard µP it holds that ∂f/∂x^L = Θ(n^{−1}), the changed gradient scaling must be counteracted by choosing hidden-layer learning rates η_l = Θ(n^{min(b_{L+1}, c_{L+1})−1}), l ∈ [2, L], and input-layer learning rate η_1 = Θ(n^{min(b_{L+1}, c_{L+1})}). In words, under larger last-layer initialization or learning rate, the hidden- and input-layer learning rates should be scaled down by the same amount.

Finally, SP-full-align achieves a width-independent effect of the last-layer weight updates on the logits. But as the width-independent feature updates Δx^L = Θ(1) induce logit blowup W^{L+1}_0 Δx^L_t = Θ(n^{1/2}), the effect of the last-layer weight updates on the softmax output is actually vanishing. For the last-layer weight updates to affect the softmax output at the same scaling as the updates propagated forward, the last-layer learning rate needs to be η_{L+1} = Θ(n^{−b_{L+1}}), hence c_{L+1} = b_{L+1}. Hence MUSOLI is defined as SP-full-align but setting η_{L+1} = Θ(n^{−b_{L+1}}). This last-layer learning rate is larger than in µP or Everett et al. (2024) under standard last-layer initialization b_{L+1} =
1/2, but it is necessary for fulfilling the desideratum that the weight updates in all layers affect the output function non-vanishingly.

Figure F.34 shows that indeed all weight updates behave width-independently under µP with standard last-layer initialization (W^{L+1}_0)_{ij} ∼ N(0, n^{−1}). But the output logits are dominated by the activations propagated forward, as W^{L+1}_0 δx^L_t = Θ(n^{1/2}), since δx^L_t and W^{L+1}_0 are highly correlated. Consequently, the last-layer updates have a vanishing effect on the output function, which induces width dependence. By additionally scaling up the last-layer learning rate to η_{L+1} = Θ(n^{−1/2}), the logit scaling exponent in the term W^{L+1}_0 δx^L_t = Θ(n^{1/2}) is matched in the last-layer update term ΔW^{L+1}_t x^L_t = Θ(n^{1/2}), so that b_{L+1} = 1/2 and c_{L+1} = 1/2 recovers a balanced influence of all layer updates on the softmax output.

Figure F.25 and Figure F.26 show that after single-pass SGD or Adam, for both SP-full-align and MUSOLI the optimal learning rate shrinks with width, both on generated 2-class multi-index teacher data and on MNIST. The optimal learning rate exponent is often closer to −0.5, as we consistently observe under MSE loss, preventing logit blowup. Figure F.27 shows the same for CIFAR-10. This behaviour persisting across 3 data sets suggests that neither SP-full-align nor MUSOLI can be expected to transfer the optimal learning rate in general. An interesting question for future work remains why logit divergence introduces a width-dependence in the optimal learning rate in these parameterizations. As expected from parameterizations in the controlled divergence regime, Figure F.27 also shows that the maximal stable learning rate scales width-independently, since activation and gradient stability is preserved. Over the course of 20 epochs, the training dynamics under large learning rates in MLPs with at least 3 layers are stabilized, and the optimal learning rate indeed scales width-independently under standard last-layer initialization.
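With exponents written as η = Θ(n^e) per layer, the layerwise prescription can be collected in a few lines. The input- and hidden-layer exponents follow the formulas quoted earlier; the last-layer exponent −b_{L+1} defines MUSOLI, while −1 is used here as the µP/SP-full-align last-layer exponent implied by the comparison in the text. This is our own sketch of the bookkeeping, not released code:

```python
def layerwise_lr_exponents(b_last, c_last, n_layers, musoli=True):
    """Learning-rate exponents e (eta = Theta(n^e)) per layer for SGD,
    given last-layer initialization exponent b_last and learning-rate
    exponent c_last (abbreviating b_{L+1}, c_{L+1})."""
    e_input = min(b_last, c_last)           # eta_1 = Theta(n^{min(b,c)})
    e_hidden = min(b_last, c_last) - 1      # eta_l = Theta(n^{min(b,c)-1}), l in [2, L]
    # MUSOLI: eta_{L+1} = Theta(n^{-b}); otherwise the muP-style exponent -1
    # (assumed here for SP-full-align, per the comparison in the text).
    e_output = -b_last if musoli else -1.0
    return [e_input] + [e_hidden] * (n_layers - 2) + [e_output]

# Standard last-layer initialization b_{L+1} = c_{L+1} = 1/2:
exps_musoli = layerwise_lr_exponents(0.5, 0.5, 8)            # last entry -0.5
exps_fa = layerwise_lr_exponents(0.5, 0.5, 8, musoli=False)  # last entry -1.0
```

Note that at b_{L+1} = c_{L+1} = 1 both branches coincide, consistent with MUSOLI reducing to the µP exponents when the logits are kept width-independent.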
Hence width-dependence in parameterizations can induce optimal learning rate scaling that varies over the course of long training. But often the optimal learning rate scales like the maximal stable learning rate; in such cases our theory is predictive. Note that SP full-align and MUSOLI are much more robust to poor tuning of the learning rate than µP, both in terms of training and test accuracy (Figures F.28 and F.29). We leave a closer analysis of the multi-epoch setting to future work.

For ADAM, the gradient is normalized in the backward pass, so that input- and hidden-layer learning rates remain the same as in µP under large last-layer initialization. This is again equivalent to the SP full-align parameterization from Everett et al. (2024). The logit update term W^{L+1}_0 Δx^L_t = Θ(n^{1/2}) should again be balanced with a larger output-layer learning rate η_{L+1} = Θ(n^{−1/2}) if the weight updates of all layers are to have a non-vanishing effect on the softmax output in the infinite-width limit (MUSOLI). Figure F.30 shows that nonlinear networks trained with Adam and large last-layer initialization already tend to transfer better under MUSOLI than under SP full-align after 1 epoch. Linear networks again have a smaller optimal learning rate exponent, indicating that avoiding logit blowup improves over feature learning in this case, where feature learning
does not even add expressivity. Generalization, learning rate transfer and learning rate sensitivity after 20 epochs tend to be similar in all 3 considered parameterizations in deep ReLU MLPs (Figure F.31), showing again that parameterizations with logit blowup are a viable alternative. Especially in deep ReLU MLPs, the last-layer learning rate does not seem to have a big impact, and SP full-align and MUSOLI overall behave similarly for both SGD and Adam.

Figure F.25: (Effective update variants do not transfer optimal learning rates on multi-index data) Training accuracy of 8-layer MLPs trained for 1 epoch on multi-index teacher data under CE loss (top) and MSE loss (bottom) with SGD in SP-full-align, SGD in MUSOLI, Adam in SP-full-align and Adam in MUSOLI (from left to right). In all cases, logit blowup is avoided by optimal learning rates shrinking as η_n = Θ(n^{−1/2}). Under CE loss the maximal stable learning rate remains width-independent; for SGD under MSE loss the maximal stable learning rate decays as n^{−1/2}, as necessary for stability.

Figure F.26: (Effective update variants do not transfer optimal learning rates on MNIST) Training accuracy of 8-layer MLPs trained for 1 epoch on MNIST under CE loss with SGD in SP-full-align, SGD in MUSOLI, Adam in SP-full-align and Adam in MUSOLI (from left to right). In all cases, the optimal learning rate decays with width, while the maximal stable learning rate stays constant.

Figure F.27: (Effective update variants for SGD on CIFAR-10) MLPs with 2, 3 and 6 layers (from left to right) trained with SGD on CIFAR-10 in µP (top) versus SP full-align from Everett et al. (2024) (2nd row) versus SP full-align with larger last-layer learning rate (MUSOLI) (bottom row).
While µP transfers with low variance as expected (left), µP with large standard last-layer initialization b_{L+1} = 1/2 and large last-layer learning rate c_{L+1} = 1/2 (right) has a non-trivial optimal learning rate scaling between Θ(n^{−1/2}) and Θ(1), while the maximal stable learning rate scales width-independently.

Figure F.28: (Effective update variants for SGD on CIFAR-10 after convergence) MLPs with 2, 3 and 6 layers (from left to right) trained with SGD in µP (top) versus SP full-align from Everett et al. (2024) (2nd row) versus SP full-align with larger last-layer learning rate (MUSOLI) (bottom row) as in Figure F.27, but trained for 20 epochs. After sufficiently long training, the large learning rate dynamics stabilize in MUSOLI, so that the optimum indeed scales width-independently. MUSOLI strictly dominates original µP in training accuracy, and robustness to badly tuned learning rates is strongly improved under SP last-layer initialization compared to original µP. In sufficiently deep MLPs, the larger last-layer learning rate barely matters, but in 2-layer nets SP-full-align avoids output blowup and feature learning by transferring under η_n = Θ(n^{−1/2}).

Figure F.29: (Test accuracy of effective update variants for SGD on CIFAR-10 after convergence) Test accuracy of 2-layer, 3-layer and 6-layer (from left to right) MLPs trained with SGD for 20 epochs on CIFAR-10 in µP (top) versus SP full-align from Everett et al. (2024) (2nd row) versus SP full-align with larger last-layer learning rate (MUSOLI) (bottom row).
The validation-optimal learning rate scales width-independently in all cases. Observe that, while all variants generalize similarly well, the susceptibility to poorly tuned learning rates is much larger in µP than under parameterizations with large last-layer initialization.

Figure F.30: (Train accuracy of effective update variants for ADAM on CIFAR-10) Train accuracy of 2-layer, 3-layer, 6-layer and 3-layer-linear MLPs (from left to right) trained with ADAM for 1 epoch on CIFAR-10 in µP (top row) versus SP full-align from Everett et al. (2024) (2nd row) versus MUSOLI (bottom row). The learning rate transfers irrespective of the architecture in µP. A large last-layer learning rate improves transfer in MUSOLI over SP full-align. The optimal learning rate scales as η_n = Θ(n^{−1/2}) in both parameterizations with large last-layer initialization, as feature learning does not improve expressivity.

Figure F.31: (Test accuracy of effective update parameterizations for ADAM on CIFAR-10 after convergence) Test accuracy of 2-layer, 3-layer, 6-layer and 3-layer-linear MLPs (from left to right) trained with ADAM for 20 epochs on CIFAR-10 in µP (top row) versus SP full-align from Everett et al. (2024) (2nd row) versus MUSOLI (bottom row). The validation-optimal learning rate scales width-independently in all ReLU MLPs with at least 3 layers. 3-layer linear networks clearly transfer under η_n = Θ(n^{−1/2}) in SP full-align and MUSOLI, as for sufficient width learning features does not add expressivity, and avoiding logit blowup instead dominates the learning rate scaling.

Figure F.32: (Effective update variants with SGD under MSE loss avoid logit blowup) Training accuracy of 2-layer, 3-layer linear, 6-layer and 8-layer MLPs (from left to right) trained with SGD for 1 epoch (top) and 20 epochs (bottom) on CIFAR-10 in SP full-align from Everett et al. (2024). Optimal learning rates shrinking as η_n = Θ(n^{−1/2}) persist, avoiding logit blowup through W^{L+1}_0 Δx^L_t.
Only in 8-layer MLPs does the optimal learning rate saturate at the width-independent stability threshold.

Figure F.33: (Coordinate check for µP for SGD on CIFAR-10) µP induces fully width-independent update dynamics.

Figure F.34: (Coordinate check for µP with standard initialization for SGD on CIFAR-10) Effective updates ‖ΔW^l x^{l−1}‖_RMS and activation updates ‖Δx^l‖_RMS as a function of width. The theoretically predicted scaling exponents hold: all layers update width-independently, but due to the large last-layer initialization, the activation updates correlated with W^{L+1}_0 propagated forward induce output logits exploding as W^{L+1}_0 δx^L_t = Θ(n^{1/2}). This motivates increasing the last-layer learning rate to η_{L+1} = Θ(n^{−1/2}) so that last-layer updates contribute at the same scaling. Note that in absolute terms, the updates are much smaller than under µP (Figure F.33).
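A coordinate check like the ones in Figures F.33 and F.34 measures the RMS size of effective updates across widths and reads the empirical scaling exponent off a log-log fit. The sketch below does this for the first-layer effective update ΔW¹x of a 2-layer net with SP-style initialization after one SGD step on a toy squared loss; the setup and its expected exponent of about −1/2 (driven by the N(0, 1/n) last-layer entries) are our illustration, not the paper's measurement harness:

```python
import numpy as np

def effective_update_rms(n, lr=1.0, d_in=16, seed=0):
    """RMS of the first-layer effective update dW1 @ x after one SGD step
    of a 2-layer ReLU net with SP initialization on a single sample."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d_in)
    W1 = rng.standard_normal((n, d_in)) / np.sqrt(d_in)
    w2 = rng.standard_normal(n) / np.sqrt(n)     # (w2)_i ~ N(0, 1/n): SP last layer
    pre = W1 @ x
    h = np.maximum(pre, 0.0)
    err = w2 @ h - 1.0                           # squared-loss residual, target 1.0
    dW1 = -lr * err * np.outer(w2 * (pre > 0), x)
    return float(np.sqrt(np.mean((dW1 @ x) ** 2)))

def avg_rms(n, seeds=20):
    return float(np.mean([effective_update_rms(n, seed=s) for s in range(seeds)]))

widths = np.array([256, 1024, 4096, 16384])
r = np.array([avg_rms(n) for n in widths])
slope = np.polyfit(np.log(widths), np.log(r), 1)[0]   # empirical scaling exponent
```

The fitted slope approximates the exponent e in ‖ΔW¹x‖_RMS = Θ(n^e); repeating the measurement per layer and per parameterization reproduces the kind of diagnostic shown in the coordinate-check figures.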
Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation

Hongyi Zhou¹, Josiah P. Hanna², Jin Zhu³, Ying Yang¹, Chengchun Shi³

Abstract

This paper studies off-policy evaluation (OPE) in reinforcement learning with a focus on behavior policy estimation for importance sampling. Prior work has shown empirically that estimating a history-dependent behavior policy can lead to lower mean squared error (MSE) even when the true behavior policy is Markovian. However, the question of why the use of history should lower MSE remains open. In this paper, we theoretically demystify this paradox by deriving a bias-variance decomposition of the MSE of ordinary importance sampling (IS) estimators, demonstrating that history-dependent behavior policy estimation decreases their asymptotic variances while increasing their finite-sample biases. Additionally, as the estimated behavior policy conditions on a longer history, we show a consistent decrease in variance. We extend these findings to a range of other OPE estimators, including the sequential IS estimator, the doubly robust estimator and the marginalized IS estimator, with the behavior policy estimated either parametrically or non-parametrically.

1. Introduction

Off-policy evaluation (OPE) focuses on estimating the average return (sum of discounted rewards) of a specific decision policy, referred to as the target policy, by leveraging historical data collected under a potentially different policy, known as the behavior policy. OPE is vital in numerous domains where direct experimentation is impractical due to high costs, potential risks, or ethical concerns, such as in

¹Department of Mathematical Science, Tsinghua University, Beijing, China. ²Computer Sciences Department, University of Wisconsin – Madison, Madison, WI, USA. ³London School of Economics and Political Science, London, UK. Correspondence to: Chengchun Shi <c.shi7@lse.ac.uk>.
Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

healthcare (Murphy et al., 2001; Hirano et al., 2003), recommendation systems (Chapelle & Li, 2011) and robotics (Levine et al., 2020).

One widely used OPE method is importance sampling (IS, see e.g., Precup et al., 2000), which employs a reweighting approach to handle the distribution shift between the target policy and the behavior policy. This approach is straightforward: returns generated by the behavior policy are re-weighted based on the ratio of the probability of selecting actions under the target policy to that under the behavior policy. The re-weighted returns are then averaged to produce an unbiased estimator of the target policy's value. In the limit, as the number of trajectories increases, this estimator converges to the true value of the target policy. However, with finite samples, IS may exhibit high variance, causing considerable estimation error. Consequently, more advanced estimators have been proposed to lower its variance, including the doubly robust (DR) estimator (Jiang & Li, 2016; Thomas & Brunskill, 2016) and the marginalized IS estimator (MIS, Liu et al., 2018). Despite its limitations, IS serves as a foundation for many OPE methods and is particularly valued in practice for its unbiasedness. It is also frequently used in off-policy learning algorithms, such as the proximal policy optimization algorithm (Schulman et al., 2017), which is
https://arxiv.org/abs/2505.22492v1
widely used for fine-tuning large language models (Ouyang et al., 2022).

In practice, the behavior policy might be unknown and must be estimated from the historical data to construct the IS ratio. Paradoxically, IS with an estimated behavior policy results in an estimator with lower asymptotic variance and often lower finite-sample mean-squared error (MSE) compared to IS using the true behavior policy. This result has been shown in the statistics (Henmi et al., 2007), causal inference (Hirano et al., 2003; Rosenbaum & Rubin, 1983), multi-armed bandit (Xie et al., 2019a), and Markov decision process (MDP) policy evaluation (Hanna et al., 2021) literature. Furthering the paradox, Hanna et al. (2021) showed empirically that in MDPs where the true behavior policy is a first-order Markov policy (action selection is conditioned only on the current state), the IS estimator's MSE could be lowered by estimating a higher-order Markov policy where action selection is conditioned on a history of preceding states.

arXiv:2505.22492v1 [cs.LG] 28 May 2025

Table 1. Impact of incorporating history-dependent IS ratios on bias and variance across various OPE estimators, where ↑ represents an increase, ↓ represents a decrease and — indicates no difference.

METHOD | BIAS | VARIANCE
ORDINARY IS | ↑ | ↓
SEQUENTIAL IS | ↑ | ↓
DR (WITH A MISSPECIFIED Q) | ↑ | ↓
DR (WITH A CORRECT Q) | ↑ | —
MARGINALIZED IS | ↑ | ↑

However, the theoretical basis and generality of this finding was left as an open question. In this work, we establish a comprehensive theoretical framework for analyzing OPE estimators with history-dependent IS ratios; refer to Table 1 for a quick summary of our findings. Our contributions are as follows:

• We demystify the aforementioned paradox for ordinary IS (OIS) estimators with history-dependent IS ratios by deriving a bias-variance decomposition of their MSEs.
Our findings reveal that in large samples, the variance component becomes the leading term in the MSE and can be reduced through history-dependent behavior policy estimation. Specifically, increasing the history length decreases the variance.

• We also show that there is no free lunch for using history-dependent IS ratios, as it comes at the price of increasing the bias of the resulting OPE estimator, which becomes non-negligible in finite samples.

• We extend these findings to accommodate other variants of IS estimators, including the sequential IS (SIS), DR and MIS estimators, with the behavior policy estimated either parametrically or non-parametrically. Interestingly, incorporating history-dependent IS ratios has different effects on the asymptotic variances of these estimators: (1) it reduces the asymptotic variance for SIS; (2) it leaves the asymptotic variance of DR unchanged when the Q-function is correctly specified, and improves the performance with a misspecified Q; (3) it increases the asymptotic variance for MIS.

2. Literature review on OPE

There is a huge literature on OPE in reinforcement learning (RL); see Uehara et al. (2022) for a recent review of existing methodologies. Current OPE methods can be grouped into four major categories:

• Model-based methods. These methods estimate an MDP model from the offline data and learn the policy value based on the estimated model (Gottesman et al., 2019; Yin & Wang, 2020; Wang et al., 2024).

• Direct methods. These methods estimate a value or Q-function to directly construct the policy value estimator (Sutton
et al., 2008; Le et al., 2019; Feng et al., 2020; Luckett et al., 2020; Hao et al., 2021; Liao et al., 2021; Chen & Qi, 2022; Shi et al., 2022b; Li et al., 2023; Liu et al., 2023; Bian et al., 2025).

• IS methods. This paper focuses on the family of IS estimators, which can be further classified into three types according to the IS ratios used to reweight the rewards: (i) OIS, which employs the product of IS ratios from the initial time to the termination time to reweight the empirical return (Hanna et al., 2019; 2021); (ii) SIS, which also uses the product of IS ratios but applies a different product at each time to reweight the immediate reward (Thomas et al., 2015; Zhao et al., 2015; Guo et al., 2017); (iii) MIS, which uses an IS ratio on the marginal state-action distribution, as a function of both the action and the state, to adjust the reward (Liu et al., 2018; Nachum et al., 2019; Xie et al., 2019b; Dai et al., 2020; Wang et al., 2023; Zhou et al., 2023). In addition to these methods, several variants have been proposed to improve estimation accuracy, including incremental IS (Guo et al., 2017), conditional IS (Rowland et al., 2020), and state-based IS (Bossens & Thomas, 2024). These methods modify the IS ratio to enhance efficiency and are, in principle, similar to our proposal, which considers history-dependent behavior policy estimation as an alternative strategy for improving IS efficiency.

• Doubly robust methods. These methods combine the value or Q-function estimator used in direct methods and the IS ratios used in IS to construct the policy value estimator (Zhang et al., 2013; Jiang & Li, 2016; Thomas & Brunskill, 2016; Farajtabar et al., 2018; Bibaut et al., 2019; Tang et al., 2020; Uehara et al., 2020; Kallus & Uehara, 2020; 2022; Liao et al., 2022).
A salient feature of these methods is their double-robustness property, which ensures the resulting policy value estimator's consistency as long as at least one of the two nuisance function estimators is correctly specified, not necessarily both. Several extensions of DR have been proposed in the literature, including triply robust estimators (Shi et al., 2021), semi-parametrically efficient estimators tailored to linear MDPs (Xie et al., 2023) and methods that estimate the difference in Q-functions (Cao & Zhou, 2024).

When the target policy itself is history-dependent, a history-dependent behavior policy has been employed to correct the off-policy distributional shift (Kallus & Uehara, 2020). However, in settings where the target policy is Markovian – a common scenario in MDPs due to the Markovian nature of the optimal policy (Puterman, 2014) – the effects of history-dependent behavior policy estimation on the accuracy of the resulting OPE estimator have been less explored. Hanna et al. (2019; 2021) demonstrated the possibility of lower MSE with a history-dependent behavior policy for evaluating Markov policies in MDPs. However, their work largely focused on
https://arxiv.org/abs/2505.22492v1
estimating Markov behavior policies and left the justification for using history as an open question. Our analysis significantly advances their analyses in the following ways: (i) We offer a bias-variance decomposition to theoretically demystify this paradox. (ii) We demonstrate that the variance varies monotonically with the number of preceding observations used to fit the behavior policy. (iii) As opposed to Hanna et al. (2019) and Hanna et al. (2021), who focused on the OIS estimator, our analysis extends to SIS, DR and MIS.

3. Building intuition: from bandits to MDPs

This section begins with a bandit example to introduce the OPE problem and IS estimators. This example serves to build intuition about how estimating a behavior policy that conditions on more information than the true behavior policy uses can lead to a more accurate IS estimator. We next formulate the OPE problem in MDPs and describe the IS estimators for MDPs.

3.1. A bandit example

Consider a contextual bandit model $B = (\mathcal{S}, \mathcal{A}, r)$, where $\mathcal{S}$ and $\mathcal{A}$ denote finite context and action spaces respectively, and $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ denotes a deterministic reward function. At each time, the agent observes certain contextual information $S \in \mathcal{S}$ and selects an action $A$ according to a behavior policy $\pi_b$ such that $P(A = a \mid S) = \pi_b(a \mid S)$ for any $a \in \mathcal{A}$. Next, the environment responds by assigning a numerical reward $R$ to the agent, whose conditional expectation given the state-action pair equals $r(S, A)$. Given $n$ independent and identically distributed (i.i.d.) copies of context-action-reward triplets, OPE aims to evaluate the expected reward the agent would have received under a certain target policy $\pi_e$, which may differ from $\pi_b$.

IS estimators are motivated by the change-of-measure theorem, which allows us to express the target policy's expected reward $v(\pi_e)$ in terms of the IS ratio and the observed reward as
$$v(\pi_e) = \mathbb{E}\Big[\frac{\pi_e(A \mid S)}{\pi_b(A \mid S)} R\Big]. \tag{1}$$

Assuming that $\pi_b$ and $\pi_e$ are both context-independent (i.e., $\pi_e(A \mid S) = \pi_e(A)$ and $\pi_b(A \mid S) = \pi_b(A)$), we introduce three IS estimators that differ in their choice of the IS ratio:

Figure 1. The left panel shows the log absolute bias of the three IS estimators. The right panel shows the log MSE of the three estimators. Results are averaged over $10^4$ trials.

1. When $\pi_b$ is known to us, the first estimator uses the oracle IS ratio $\pi_e / \pi_b$ to estimate $v(\pi_e)$,
$$\widehat{v}^{\dagger}_{IS} = \mathbb{E}_n\Big[\frac{\pi_e(A)}{\pi_b(A)} R\Big],$$
where $\mathbb{E}_n$ denotes the empirical average over the $(S, A, R)$ triplets in the offline dataset. According to (1), it is immediate to see that $\widehat{v}^{\dagger}_{IS}$ is an unbiased estimator of $v(\pi_e)$.¹

2. Let $n(a)$ denote the number of occurrences of $A = a$ in the offline data. When $\pi_b$ is unknown, it can be estimated by the sample mean estimator $\widehat{\pi}_b(a) = n(a)/n$, leading to the second IS estimator, which employs a context-agnostic estimated IS ratio,
$$\widehat{v}^{CA}_{IS} = \mathbb{E}_n\Big[\frac{\pi_e(A)}{\widehat{\pi}_b(A)} R\Big].$$

3. Let $n(s, a)$ and $n(s)$ denote the number of occurrences of $(S = s, A = a)$ and $S = s$ in the offline data, respectively. When $\pi_b$ is unknown and not assumed to be context-independent, it is natural to estimate $\pi_b$ using $\widehat{\pi}_b(a \mid s) = n(s, a)/n(s)$, leading to a third estimator with a context-dependent estimated IS ratio,
$$\widehat{v}^{CD}_{IS} = \mathbb{E}_n\Big[\frac{\pi_e(A)}{\widehat{\pi}_b(A \mid S)} R\Big].$$

Let $\text{MSE}_A(\bullet)$ denote the asymptotic MSE of a given estimator, obtained by removing errors that are of higher order in the
sample size $n$. The following lemma summarizes the performance of the three estimators in terms of their asymptotic MSEs.

Lemma 1. $\text{MSE}_A(\widehat{v}^{CD}_{IS}) \le \text{MSE}_A(\widehat{v}^{CA}_{IS}) \le \text{MSE}_A(\widehat{v}^{\dagger}_{IS})$. The first equality holds if and only if the reward function $r$ is independent of the context $S$, whereas the second equality holds if and only if $E(R \mid A) = 0$ almost surely.

The two inequalities in Lemma 1 yield the following two seemingly paradoxical conclusions in the bandit setting:

¹We will use the symbol † to denote estimators that use oracle IS ratios throughout the paper.

Conclusion 1. Even when the behavior policy is known, using an estimated IS ratio can asymptotically improve the resulting IS estimator compared to the one using the oracle behavior policy.

Conclusion 2. Even when the true behavior policy is context-agnostic, incorporating context in estimating the IS ratio can asymptotically enhance the performance compared to using a context-agnostic ratio.

Our numerical results, reported in Figure 1, empirically confirm these conclusions. As observed in the right panel, incorporating context-dependent estimated IS ratios substantially reduces the MSE. Given that the y-axis visualizes the log(MSE), even seemingly close log values can correspond to considerable differences in MSE values.

In what follows, we outline a sketch of the proof to demystify these results. The key insight is that replacing the true behavior policy with its estimator in the IS ratio plays a role similar to adding an augmentation term to the IS estimator. This modification effectively transforms the resulting estimator into a DR estimator, which is often more efficient than IS even in bandit settings (Tsiatis, 2006; Zhang et al., 2012; Dudík et al., 2014).
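The three bandit estimators can be sketched in a few lines of numpy (function and variable names are ours; the code assumes every action, and every state-action pair appearing in the data, occurs at least once, so the estimated ratios are well defined):

```python
import numpy as np

def is_estimators(S, A, R, pi_e, pi_b):
    """Compute the three bandit IS estimators of v(pi_e).

    S, A, R: integer contexts, integer actions, rewards (length-n arrays).
    pi_e, pi_b: per-action probabilities (context-independent policies).
    Returns (oracle, context-agnostic, context-dependent) estimates.
    """
    n = len(A)
    # 1. Oracle IS ratio pi_e / pi_b.
    v_oracle = np.mean(pi_e[A] / pi_b[A] * R)
    # 2. Context-agnostic estimated ratio: pi_b_hat(a) = n(a) / n.
    n_a = np.bincount(A, minlength=len(pi_e))
    v_ca = np.mean(pi_e[A] / (n_a[A] / n) * R)
    # 3. Context-dependent estimated ratio: pi_b_hat(a|s) = n(s,a) / n(s).
    n_s = np.bincount(S)
    n_sa = np.zeros((len(n_s), len(pi_e)))
    np.add.at(n_sa, (S, A), 1.0)
    v_cd = np.mean(pi_e[A] / (n_sa[S, A] / n_s[S]) * R)
    return v_oracle, v_ca, v_cd
```

A short algebraic check clarifies why the estimated-ratio versions behave like DR estimators: the context-agnostic estimate always equals $\sum_a \pi_e(a)\,\widehat{r}(a)$, the direct-method plug-in with the sample-mean reward estimator.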
Specifically, it can be shown that $\widehat{v}^{CA}_{IS}$ and $\widehat{v}^{CD}_{IS}$ equal
$$\widehat{v}^{CA}_{IS} = \mathbb{E}_n\Big\{\sum_a \pi_e(a)\,\widehat{r}(a) + \frac{\pi_e(A)}{\widehat{\pi}_b(A)}\big[R - \widehat{r}(A)\big]\Big\},$$
$$\widehat{v}^{CD}_{IS} = \mathbb{E}_n\Big\{\sum_a \pi_e(a)\,\widehat{r}(S, a) + \frac{\pi_e(A)}{\widehat{\pi}_b(A \mid S)}\big[R - \widehat{r}(S, A)\big]\Big\},$$
respectively, where $\widehat{r}(a)$ and $\widehat{r}(s, a)$ denote the sample mean estimators obtained by averaging rewards across different contexts and/or actions. In both expressions, the first term within the curly brackets represents the direct method estimator of the policy value, whereas the second term serves as an augmentation term. The inclusion of these augmentation terms offers two advantages: (i) It removes the bias inherent in the reward estimators, rendering the resulting OPE estimator asymptotically unbiased. (ii) It effectively reduces the variance of the OPE estimator by contrasting the observed rewards with their predictors. Specifically, it can be shown that both expressions achieve no larger asymptotic variances than $\widehat{v}^{\dagger}_{IS}$, which uses the oracle IS ratio. Additionally, the variance reductions are likely substantial when the reward function differs significantly from 0. These discussions verify the assertions in Lemma 1.

In summary, our bandit example has revealed several intriguing conclusions that we aim to establish in MDPs. First, we will demonstrate that Conclusion 1 remains valid across a range of IS-type estimators with history-dependent behavior policy estimators in MDPs. Second, we will expand on Conclusion 2 by demonstrating that estimating a behavior policy that conditions on history leads to more accurate OPE estimators in large samples – even when the true behavior policy does not condition on more than the immediately preceding state. Finally,
the above theoretical analysis did not consider the biases of IS estimators. As depicted in the left panel of Figure 1, incorporating history-dependent behavior policy estimation can increase bias in small samples. In our forthcoming analysis of MDPs, we will carefully examine the finite-sample biases of different IS estimators.

3.2. OPE in MDPs

Markov decision processes. This paper focuses on a finite-horizon MDP model $M$ characterized by a state space $\mathcal{S}$, an action space $\mathcal{A}$, a transition kernel $P: \mathcal{S} \times \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, a reward function $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ and a finite horizon $T < \infty$. Consider a trajectory $H := (S_0, A_0, R_0, \ldots, S_T, A_T, R_T)$ generated in $M$. These data are generated as follows:

• At each time, suppose the environment arrives at a given state $S_t \in \mathcal{S}$;
• The agent then selects an action $A_t \in \mathcal{A}$ according to a behavior policy $\pi_b(\bullet \mid S_t)$;
• Next, the environment provides an immediate reward to the agent whose expected value is specified by the reward function $r(S_t, A_t)$;
• Finally, the environment transits into a new state $S_{t+1}$ at time $t+1$ according to the transition function $P(\bullet \mid S_t, A_t)$.

This process repeats until the termination time, $T$, is reached.

Common IS-type estimators. Given an offline dataset with $n$ i.i.d. trajectories, the objective of OPE is to learn the expected cumulative reward $v(\pi_e) = \mathbb{E}_{\pi_e}(\sum_{t=0}^{T} \gamma^t R_t)$ under a different target policy $\pi_e$, where $\gamma \in (0, 1]$ denotes the discount factor and $\mathbb{E}_{\pi_e}$ denotes the expectation assuming the actions are assigned according to $\pi_e$.

Let $\mathbb{E}_n$ denote the empirical average operator over the $n$ trajectories in the offline dataset and $\lambda_t$ denote the product of IS ratios $\prod_{k=1}^{t} \frac{\pi_e(A_k \mid S_k)}{\pi_b(A_k \mid S_k)}$ up to time $t$. Below, we detail the definitions of the three types of IS estimators introduced in Section 2, along with the DR estimator, which also employs IS ratios for OPE:

1. OIS serves as the most foundational estimator. It applies a single weight $\lambda_T$ to reweight the entire empirical return $G_T = \sum_{t=0}^{T} \gamma^t R_t$, leading to $\widehat{v}^{\dagger}_{OIS} = \mathbb{E}_n(\lambda_T G_T)$.
2. SIS modifies OIS by applying a time-dependent ratio $\lambda_t$ to reweight each reward $R_t$, resulting in $\widehat{v}^{\dagger}_{SIS} = \mathbb{E}_n(\sum_{t=0}^{T} \gamma^t \lambda_t R_t)$. This adjustment reduces the variance associated with the product of IS ratios since, at each time $t$, only ratios up to that time are used.

3. DR further employs an estimated Q-function to reduce the variance of SIS. Specifically, let $Q^{\pi_e}_t(s, a)$ denote the Q-function under the target policy, which measures the cumulative reward starting from a given state-action pair,
$$Q^{\pi_e}_t(s, a) = \sum_{k=t}^{T} \gamma^{k-t}\, \mathbb{E}_{\pi_e}(R_k \mid A_t = a, S_t = s).$$
Given a Q-function estimator $Q = \{Q_t\}_t$ for $\{Q^{\pi_e}_t\}_t$, DR is defined by
$$\widehat{v}^{\dagger}_{DR} = \mathbb{E}_n\Big\{\sum_{t=0}^{T}\Big[\lambda_t \gamma^t \big(R_t - Q_t(S_t, A_t)\big) + \lambda_{t-1} \gamma^t \sum_a Q_t(S_t, a)\, \pi_e(a \mid S_t)\Big]\Big\},$$
with the convention that $\lambda_{-1} = 1$. Since $\widehat{v}^{\dagger}_{DR}$ employs the oracle IS ratio and leverages the double-robustness property, it remains consistent regardless of whether the Q-function is correctly specified.

4. MIS further reduces the variances of the aforementioned three estimators by replacing $\lambda_t$ – which is known to suffer from the curse of horizon (Liu et al., 2018) – with an MIS ratio given by $w_t = d_{\pi_e,t}(S_t, A_t) / d_{\pi_b,t}(S_t, A_t)$, where $d_{\pi_e,t}(\cdot)$ and $d_{\pi_b,t}(\cdot)$ are the marginal distributions of $(S_t, A_t)$ induced by policies $\pi_e$ and $\pi_b$,
respectively. This leads to $\widehat{v}^{\dagger}_{MIS} = \mathbb{E}_n(\sum_{t=0}^{T} \gamma^t w_t R_t)$.

We will investigate the theoretical properties of these estimators in the next two sections.

4. Demystifying the paradox in MDPs

In this section, we conduct a rigorous theoretical analysis to evaluate the impact of replacing the oracle behavior policy with an estimated history-dependent behavior policy for OPE. Our analysis accommodates all four estimators discussed in Section 3.2.

Although $\pi_b$ is a Markov policy, historical observations can still be utilized to estimate it. In particular, we define the following estimator that uses the $k$-step state-action history $H_{t-k:t} = (S_{t-k}, A_{t-k}, \ldots, S_{t-1}, A_{t-1}, S_t)$,
$$\widehat{\pi}^{(k)}_b = \arg\max_{\pi \in \Pi_k} \mathbb{E}_n\Big[\sum_{t=0}^{T} \log \pi(A_t \mid H_{t-k:t})\Big],$$
for some policy class $\Pi_k$ that satisfies the following monotonicity assumption:

Assumption 1 (Monotonicity). $\Pi_0 \subseteq \Pi_1 \subseteq \Pi_2 \subseteq \cdots$.

Most commonly used policy classes based on logistic regression models or neural networks satisfy Assumption 1. We discuss this assumption in greater detail in Appendix C.2 and impose the following assumptions.

Assumption 2 (Realizability). There exists some $\theta^* \in \Pi_0$ such that $\pi_b = \pi_{\theta^*}$.

Assumption 3 (Bounded rewards). There exists some constant $R_{\max} < \infty$ such that $|R_t| \le R_{\max}$ almost surely for any $t$.

Assumption 4 (Coverage). There exist some constants $\varepsilon > 0$, $C \ge 1$ such that all policy functions $\pi_\theta$ are lower bounded by $\varepsilon$, and $\pi_e(s, a)/\pi_\theta(s, a) \le C$ holds for every state-action pair $(s, a)$.

Assumption 5 (Differentiability). All policies $\pi_\theta$ are twice differentiable with respect to the parameter $\theta$, and both their first and second derivatives are uniformly bounded.

Assumption 6 (Non-singularity). The Fisher information matrix of $\theta^*$, denoted by $I(\theta^*)$, is non-singular.

We make a few remarks. First, realizability assumes that the policy class $\Pi_0$ is rich enough to cover $\pi_b$. It is a common assumption in machine learning (Shalev-Shwartz & Ben-David, 2014). It will be relaxed in Section 5 by permitting a nonzero approximation error.
Second, the bounded rewards and coverage conditions are frequently assumed in the RL and OPE literature (see e.g., Chen & Jiang, 2019; Fan et al., 2020; Kallus & Uehara, 2022). Finally, Assumptions 5 and 6 are widely imposed in statistics to establish the theoretical properties of maximum likelihood estimators (see e.g., Casella & Berger, 2024).

4.1. Ordinary IS estimator

Recall from Section 3.2 that $\widehat{v}^{\dagger}_{OIS}$ denotes the OIS estimator with the oracle IS ratio $\lambda_T$. Let $\widehat{v}_{OIS}(k)$ denote the version that uses the $k$-step state-action history to compute the behavior policy estimator $\widehat{\pi}^{(k)}_b$ and plugs it into $\lambda_T$ to construct the ratio estimator $\widehat{\lambda}_T(k)$,
$$\widehat{v}_{OIS}(k) = \mathbb{E}_n[\widehat{\lambda}_T(k)\, G_T].$$
The following theorem establishes the theoretical properties of these estimators.

Theorem 2. Assume Assumptions 1 – 6 hold. Then
$$\text{MSE}(\widehat{v}_{OIS}(k)) = \frac{1}{n} \text{Var}\big(\text{Proj}_{\mathcal{T}(k)}(\lambda_T G_T)\big) + O\Big(\frac{(k+1)\, C^{2T} R^2_{\max}}{n^{3/2} \varepsilon^2}\Big), \tag{2}$$
where $\mathcal{T}(k)$ denotes the space of mean-zero random variables that are orthogonal to the tangent space spanned by the score vector
$$s(H, k; \theta^*) = \frac{\partial}{\partial \theta} \sum_{t=0}^{T} \log \pi_\theta(A_t \mid H_{t-k:t}) \Big|_{\theta = \theta^*},$$
and $\text{Proj}_{\mathcal{T}(k)}(\bullet)$ denotes the projection of a given random variable onto the space $\mathcal{T}(k)$; refer to Appendix C.2 for the detailed definitions. Moreover, for any $k' < k$, we have
$$\text{Var}\big(\text{Proj}_{\mathcal{T}(k)}(\lambda_T G_T)\big) = \text{Var}\big(\text{Proj}_{\mathcal{T}(k')}(\lambda_T G_T)\big) - \text{Var}\big(\text{Proj}_{\mathcal{T}(k')}(\lambda_T G_T) - \text{Proj}_{\mathcal{T}(k)}(\lambda_T G_T)\big). \tag{3}$$
Theorem 2 has a number of important
implications:

1. Equation (2) obtains a bias-variance decomposition for the MSE of $\widehat{v}_{OIS}(k)$. In particular, the first term on the right-hand-side (RHS) of (2) corresponds to its asymptotic variance, which is of the order $O(n^{-1})$, whereas the second term upper bounds its finite-sample bias, which decays to zero at a faster rate as $n$ increases. Additionally, it is well known that the variances of IS-type estimators grow exponentially fast with the time horizon (see, e.g., Liu et al., 2018). Our error bound reveals that when using estimated IS ratios, the same curse of horizon applies to the bias, which includes a factor of $C^{2T}$ for some $C \ge 1$, where $C = 1$ if and only if the behavior policy matches the target policy, meaning there is no off-policy distributional shift at all.

2. In large samples, the asymptotic variance term becomes the dominating factor. This term equals the variance of $\mathbb{E}_n[\text{Proj}_{\mathcal{T}(k)}(\lambda_T G_T)]$. Thus, incorporating history-dependent behavior policy estimation into OIS estimators can be interpreted as a projection that maps the empirical return into a more constrained space for variance reduction. This interpretation aligns with our perspective on transforming IS estimators with estimated ratios into DR estimators, as illustrated in the bandit example (see Section 3.1), since DR can be viewed as projecting an IS estimator onto a specific augmentation space to improve efficiency (Tsiatis, 2006). Noting that the projected variable $\text{Proj}_{\mathcal{T}(k)}(\lambda_T G_T)$ achieves a smaller variance than $\lambda_T G_T$ itself, our result thus covers Corollary 2 in Hanna et al. (2021), suggesting that replacing the true behavior policy with its estimate reduces the asymptotic variance of the resulting OIS estimator.

3. Additionally, according to (3), the variance term is a monotonically non-increasing function of the history-length, which in turn demonstrates the advantage of estimating a high-order Markov policy over a first-order policy in large samples.
Mathematically, this can again be interpreted through projection: the longer the history-length, the more restrictive the constrained space used to project the empirical return, leading to greater asymptotic efficiency.

4. In small samples, particularly in settings with long horizons, the bias term becomes non-negligible and increases exponentially with the horizon. To the contrary, the oracle estimator $\widehat{v}^{\dagger}_{OIS}$ is unbiased. This illustrates the risk of employing history-dependent behavior policy estimation in small samples.

Based on the aforementioned discussion, the following corollary is immediate from Theorem 2.

Corollary 3. Let $k$ and $k'$ be two positive integers satisfying $k' \le k$. Under Assumptions 1 – 6, we have
$$\text{MSE}_A(\widehat{v}_{OIS}(k)) \le \text{MSE}_A(\widehat{v}_{OIS}(k')).$$

To summarize, Theorem 2 formally establishes the bias-variance trade-off in history-dependent behavior policy estimation: it decreases the asymptotic variance of the OIS estimator at the cost of increasing the finite-sample bias. Furthermore, a longer history length results in a greater reduction in variance.

4.2. Sequential IS estimator

Let $\widehat{\lambda}_t(k)$ denote the estimator for $\lambda_t$ obtained by replacing the oracle behavior policy with its estimator $\widehat{\pi}^{(k)}_b$. We define $\widehat{v}_{SIS}(k)$ as the variant of the oracle SIS estimator $\widehat{v}^{\dagger}_{SIS}$ constructed based on $\{\widehat{\lambda}_t(k)\}_t$. The following theorem obtains a similar bias-variance decomposition for its MSE.

Theorem 4. Assume Assumptions 1 – 6 hold. Then
$$\text{MSE}(\widehat{v}_{SIS}(k)) = \frac{1}{n} \text{Var}\Big(\text{Proj}_{\mathcal{T}(k)}\Big(\sum_{t=0}^{T} \lambda_t \gamma^t R_t\Big)\Big) + O\Big(\frac{(k+1)\, C^{2T} R^2_{\max}}{n^{3/2} \varepsilon^2}\Big). \tag{4}$$
In addition, the first term on the RHS of (4) is non-increasing with respect to $k$.

Recall that the oracle SIS estimator $\widehat{v}^{\dagger}_{SIS}$ is given by
$\mathbb{E}_n(\sum_{t=0}^{T} \lambda_t \gamma^t R_t)$. Similar to OIS, Theorem 4 suggests that using an estimated behavior policy will lower the MSE of the resulting SIS estimator in large samples through projection. Meanwhile, the longer the history-length, the lower the asymptotic MSE, leading to the following corollary.

Corollary 5. Let $k$ and $k'$ be two positive integers satisfying $k' \le k$. Then under Assumptions 1 – 6,
$$\text{MSE}_A(\widehat{v}_{SIS}(k)) \le \text{MSE}_A(\widehat{v}_{SIS}(k')).$$

However, estimating the behavior policy can introduce significant biases in small samples and long horizons, the magnitudes of which are given by the second term in (4).

4.3. Doubly robust estimator

Consider the following DR estimator constructed based on the history-dependent IS ratio $\widehat{\lambda}_t(k)$,
$$\widehat{v}_{DR}(k) = \mathbb{E}_n\Big\{\sum_{t=0}^{T}\Big[\widehat{\lambda}_t(k)\, \gamma^t \big(R_t - Q_t(S_t, A_t)\big) + \widehat{\lambda}_{t-1}(k)\, \gamma^t \sum_a Q_t(S_t, a)\, \pi_e(a \mid S_t)\Big]\Big\},$$
with a pre-specified Q-function that is required to satisfy the following assumption:

Assumption 7 (Boundedness). There exists some $U_{\max} < \infty$ such that the absolute value of $U_t = R_t - Q_t(S_t, A_t) + \gamma \sum_a \pi_e(a \mid S_{t+1})\, Q_{t+1}(S_{t+1}, a)$ is upper bounded by $U_{\max}$ almost surely for any $t$.

Assumption 7 corresponds to a version of the boundedness condition in Assumption 3 tailored for DR estimators. The constant $U_{\max}$ is expected to be much smaller than $R_{\max}$ with a well-chosen Q-function. In particular, when the Q-function is correctly specified, $U_t$ corresponds to the Bellman residual, which tends to concentrate more closely around zero than $R_t$.

Theorem 6. Assume Assumptions 1, 2, 5 – 7 hold. Then
$$\text{MSE}(\widehat{v}_{DR}(k)) = \frac{1}{n} \text{Var}\Big(\text{Proj}_{\mathcal{T}(k)}\Big(\sum_{t=0}^{T} \lambda_t \gamma^t U_t\Big)\Big) + O\Big(\frac{(k+1)\, C^{2T} U^2_{\max}}{n^{3/2} \varepsilon^2}\Big). \tag{5}$$
In addition, the first term on the RHS of (5) is non-increasing with respect to $k$. However, when the Q-function is correctly specified, this term becomes a constant function of $k$.
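To make the estimator definitions concrete, here is a minimal numpy sketch of the oracle OIS, SIS and DR estimators (names are ours; for simplicity $\lambda_t$ is taken as the product of per-step ratios starting from $t = 0$, and plugging estimated per-step ratios $\widehat{\lambda}_t(k)$ in place of the oracle ones would yield the history-dependent variants):

```python
import numpy as np

def ois_sis_dr(ratios, rewards, gamma, Q=None, pi_e_V=None):
    """Oracle OIS, SIS and DR estimates from tabular trajectories.

    ratios:  (n, T+1) per-step IS ratios pi_e/pi_b along each trajectory.
    rewards: (n, T+1) rewards R_t.
    Q:       (n, T+1) values Q_t(S_t, A_t) along the trajectories (optional).
    pi_e_V:  (n, T+1) values sum_a Q_t(S_t, a) pi_e(a | S_t) (optional).
    """
    n, H = rewards.shape
    disc = gamma ** np.arange(H)
    lam = np.cumprod(ratios, axis=1)                     # lambda_t: cumulative ratio products
    G = (disc * rewards).sum(axis=1)                     # empirical return G_T
    v_ois = np.mean(lam[:, -1] * G)                      # OIS: single weight lambda_T
    v_sis = np.mean((disc * lam * rewards).sum(axis=1))  # SIS: per-step weights
    if Q is None:                                        # with Q = 0, DR reduces to SIS
        Q = np.zeros_like(rewards)
        pi_e_V = np.zeros_like(rewards)
    # lambda_{t-1}, with the convention lambda_{-1} = 1
    lam_prev = np.concatenate([np.ones((n, 1)), lam[:, :-1]], axis=1)
    v_dr = np.mean((disc * (lam * (rewards - Q) + lam_prev * pi_e_V)).sum(axis=1))
    return v_ois, v_sis, v_dr
```

Two sanity properties follow directly from the definitions: on-policy data (all ratios equal to one) makes all three estimators coincide with the average discounted return, and setting the Q-function to zero collapses DR to SIS.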
We make two remarks regarding Theorem 6:

1. The bias-variance decomposition in (5) closely resembles that of SIS, with the key difference being that the reward $R_t$ and its bound $R_{\max}$ in (4) are replaced with $U_t$ and $U_{\max}$, respectively. With a well-specified Q-function, $U_t$ is expected to exhibit lower variability than $R_t$, and $U_{\max}$ can be significantly smaller than $R_{\max}$. This highlights the advantages of history-dependent DR estimators over SIS: they not only improve the asymptotic variance but also reduce the finite-sample bias.

2. However, the second part of Theorem 6 indicates that, unlike OIS or SIS, history-dependent behavior policy estimation may not further reduce the asymptotic variance when the Q-function is correctly specified. This is intuitive, as in such cases, the DR estimator is known to achieve certain efficiency bounds (Jiang & Li, 2016; Kallus & Uehara, 2020). If the estimator is already efficient, history-dependent behavior policy estimation cannot provide additional gains. On the other hand, when the Q-function is misspecified, there remains room for improvement, and history-dependent estimators can improve the estimation accuracy.

The following corollary is again an immediate application of Theorem 6.

Corollary 7. Under Assumptions 1, 2, 5 – 7, we have for any $k' \le k$ that $\text{MSE}_A(\widehat{v}_{DR}(k)) \le \text{MSE}_A(\widehat{v}_{DR}(k'))$. The equality holds when the Q-function is correctly specified. In that case, we have $\text{MSE}_A(\widehat{v}_{DR}(k)) = \text{MSE}_A(\widehat{v}^{\dagger}_{DR})$ for any $k$.

4.4. Marginalized importance sampling estimator

A key step in constructing the MIS estimator lies in the estimation of
the MIS ratio. Unlike the previously discussed ratios $\{\lambda_t\}_t$, which can be known in settings such as randomized studies, the MIS ratio depends on the marginal state distribution and is typically unknown, even when the behavior policy is given.

In the literature, several methods have been developed to estimate the MIS ratio, such as minimax learning (Uehara et al., 2020) and reproducing kernel Hilbert space (RKHS)-based methods (Liao et al., 2022). To simplify the analysis, we focus on linear function approximation in this paper, which parameterizes each $w_t$ by $\phi_t^\top(S_t, A_t)\, \alpha_t$, for some state-action features $\phi_t$. Adapting Example 2 from Uehara et al. (2020) to the finite-horizon setting, we derive the following closed-form expression for the estimator $\widehat{\alpha}_0$,
$$\widehat{\alpha}_0 = \widehat{\Sigma}_0^{-1}\, \mathbb{E}_n\Big[\sum_a \pi_e(a \mid S_0)\, \phi_0(S_0, a)\Big],$$
where $\widehat{\Sigma}_t = \mathbb{E}_n\big[\phi_t(S_t, A_t)\, \phi_t^\top(S_t, A_t)\big]$, and the following recursive formula for computing $\widehat{\alpha}_t$,
$$\widehat{\alpha}_t = \widehat{\Sigma}_t^{-1}\, \mathbb{E}_n\Big[\sum_a \pi_e(a \mid S_t)\, \phi_t(S_t, a)\, \phi_{t-1}^\top(S_{t-1}, A_{t-1})\Big]\, \widehat{\alpha}_{t-1}.$$
The estimated MIS ratios $\{\widehat{w}_t = \phi_t^\top(S_t, A_t)\, \widehat{\alpha}_t\}_t$ are then plugged into the oracle estimator $\widehat{v}^{\dagger}_{MIS}$ to compute $\widehat{v}_{MIS}(0)$.

Alternatively, the $k$-step history $H_{t-k:t}$ can be used to construct a history-dependent MIS ratio $w_t(k) = E(\lambda_t \mid H_{t-k:t}, A_t)$. This ratio can be interpreted as a conditional IS ratio (Rowland et al., 2020) with $H_{t-k:t}$ and $A_t$ being the conditioning variables. It is also closely related to the incremental IS (INCRIS) ratio proposed by Guo et al. (2017), but differs by incorporating an additional MIS ratio for $S_{t-k}$.

For estimation, $w_t(k)$ can be parameterized similarly to $w_t$, using $k$-step features $\phi_t(k)$ as a function of $H_{t-k:t}$ and $A_t$, with parameters estimated in a manner similar to those for $w_t$. However, unlike IS and DR, incorporating a history-dependent MIS ratio may increase the MSE of the resulting MIS estimator, denoted by $\widehat{v}_{MIS}(k)$. Additionally, the longer the history-length, the worse the performance.
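The closed-form recursion above can be sketched with tabular one-hot state-action features (the helper is our own; we assume $\widehat{\Sigma}_t$ is the Gram matrix $\mathbb{E}_n[\phi_t \phi_t^\top]$ and use a pseudo-inverse in place of $\widehat{\Sigma}_t^{-1}$ so that unvisited state-action pairs do not cause singularity):

```python
import numpy as np

def fit_mis_ratios(S, A, pi_e, nS, nA):
    """Estimate MIS ratios w_t via the linear recursion, with one-hot
    features over the nS * nA state-action pairs.

    S, A: (n, T+1) integer arrays of observed states and actions.
    pi_e: (nS, nA) target-policy probabilities pi_e(a | s).
    Returns w_hat: (n, T+1) estimated ratios evaluated along the data.
    """
    n, H = S.shape
    d = nS * nA
    w_hat = np.empty((n, H))
    prev = np.ones(n)                    # phi_{t-1}^T alpha_{t-1}; equals 1 at t = 0
    for t in range(H):
        sa = S[:, t] * nA + A[:, t]      # one-hot index of (S_t, A_t)
        Sigma = np.zeros((d, d))         # Sigma_t = E_n[phi_t phi_t^T]
        np.add.at(Sigma, (sa, sa), 1.0 / n)
        b = np.zeros(d)                  # E_n[sum_a pi_e(a|S_t) phi_t(S_t, a) * prev]
        for a in range(nA):
            np.add.at(b, S[:, t] * nA + a, pi_e[S[:, t], a] * prev / n)
        alpha = np.linalg.pinv(Sigma) @ b
        w_hat[:, t] = alpha[sa]          # w_t(S_t, A_t) = phi_t^T alpha_t
        prev = w_hat[:, t]
    return w_hat
```

As a sanity check, when the target policy is deterministic and coincides with the behavior policy, the algebra forces every fitted ratio to equal one, so the resulting MIS estimate reduces to the on-policy average.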
We summarize these results in the following theorem.

Theorem 8. Let $\widehat{v}_{MIS}(k)$ be the MIS estimator with $k$-step history. Then, under the regularity conditions specified in Appendix C.2, for any $k' < k$,
$$\text{MSE}_A(\widehat{v}_{MIS}(k')) \le \text{MSE}_A(\widehat{v}_{MIS}(k)).$$

To appreciate why Theorem 8 holds, notice that by setting $k$ to the horizon $T$, $w_t(k)$ reduces to $\lambda_t$, and the resulting estimator reduces to SIS, which suffers from the curse of horizon and is known to be less efficient than MIS. More generally, increasing the history-length leads to a more variable IS ratio, thus increasing the MSE.

5. Extensions to cases where the behavior policy is estimated nonparametrically

Our analysis so far focuses on using parametric models to estimate the behavior policy or IS ratio. In practical applications, nonparametric estimation of the behavior policy can be desirable to avoid the potential misspecification of a parametric model. This motivates us to investigate the performance of history-dependent OPE estimators with a nonparametrically estimated behavior policy.

A common nonparametric approach is to approximate the policy set $\Pi$ using a sequence of sieve spaces $\Pi_n$. Below, we demonstrate that, under certain regularity conditions (detailed in Appendix C.3), similar to the parametric case, replacing the true behavior policy with an estimated behavior policy within the sieve space lowers the asymptotic
variance of the resulting OPE estimator.

Specifically, we assume the policy class $\Pi$ can be represented by $\{\pi(H_{t-k:t}; \theta), \theta \in \Theta\}$ with an infinite-dimensional Hilbert space $\Theta$. Let $\Theta_1 \subseteq \ldots \subseteq \Theta_n \subseteq \Theta_{n+1} \subseteq \ldots \subseteq \Theta$ be a sequence of finite-dimensional sieve spaces. For a given sample size $n$, we compute the estimator $\widehat{\theta}_n$ by maximizing the log-likelihood function over the sieve space $\Theta_n$,
$$\widehat{\theta}_n(k) = \arg\max_{\theta \in \Theta_n} \mathbb{E}_n\Big[\sum_{t=0}^{T} \log \pi_\theta(A_t \mid H_{t-k:t})\Big].$$
Let $\widehat{v}_{OIS}(k)$, $\widehat{v}_{SIS}(k)$ and $\widehat{v}_{DR}(k)$ denote the OIS, SIS and DR estimators, respectively, each constructed based on the estimated behavior policy $\pi(H_{t-k:t}; \widehat{\theta}_n(k))$. We summarize our results as follows.

Theorem 9. Under Assumptions 8 – 13 defined in Appendix C.3, we have
$$\text{MSE}_A(\widehat{v}_{OIS}(k)) \le \text{MSE}_A(\widehat{v}^{\dagger}_{OIS}), \quad \text{MSE}_A(\widehat{v}_{SIS}(k)) \le \text{MSE}_A(\widehat{v}^{\dagger}_{SIS}), \quad \text{MSE}_A(\widehat{v}_{DR}(k)) \le \text{MSE}_A(\widehat{v}^{\dagger}_{DR}).$$

Theorem 9 demonstrates the advantages of OPE estimators with nonparametrically estimated behavior policies in large samples. While similar results have been established in the literature (see e.g., Hanna et al., 2021), they primarily focused on the OIS estimator using parametric estimation of the behavior policy and required the realizability assumption (see Assumption 2). In contrast, Theorem 9 relaxes realizability by allowing the approximation error to decay to zero at a rate of $o(n^{-1/4})$ (see Assumption 9), which is much slower than the parametric $n^{-1/2}$ rate. Nonetheless, we demonstrate that the resulting OPE estimators still converge at the parametric rate, which is central to establishing their MSEs. This faster convergence rate occurs because the policy value is a smooth functional of the sieve estimator, and "smoothing" inherently improves the convergence rate. While similar findings have been documented in the classical statistics literature for nonparametric regression problems (Shen, 1997; Newey et al., 1998), these phenomena have been less explored in OPE and RL. One exception is Shi et al.
(2023), who considered the direct method estimator but did not study history-dependent behavior policy estimation.

6. Numerical studies

Our experiment compares several history-dependent IS estimators in the CartPole environment (Brockman et al., 2016). Specifically, we consider the following three estimators: SIS, DR with a misspecified Q-function, and MIS. As shown in Figure 2, all three estimators' MSEs decrease with the sample size, suggesting their consistency. For SIS and DR with misspecified Q-functions, replacing the oracle behavior policy with a history-dependent estimator generally reduces their MSEs in large samples. Additionally, performance improves with longer history-lengths. However, for MIS estimators, the performance consistently worsens as we increase the history-length used to estimate the MIS ratio. Finally, it is also apparent that history-dependent estimators generally suffer from larger biases compared to those using an oracle behavior policy. These empirical results verify our theoretical findings.

Figure 2. Absolute bias (left panel) and log MSE (right panel) of three OPE estimators: SIS (top panel), DR (middle panel), MIS (bottom panel). The results are averaged over 50 simulations.

In Appendix B, we further expand our numerical experiments to more complex MuJoCo environments, including (i) Inverted Pendulum, featuring a continuous action space; (ii) Double Inverted Pendulum, characterized by a higher-dimensional state space; (iii) Swimmer, an environment with substantially different dynamics compared to
the other two. The detailed results are deferred to Appendix B.

7. Discussion

This paper demystifies the paradox concerning the impact of history-dependent behavior policy estimation on IS-type OPE estimators by establishing a bias-variance decomposition of their MSEs. Our analysis reveals a trade-off in the choice of history-length for estimating the behavior policy: increasing the history-length reduces the estimator's asymptotic variance, but can increase its finite-sample bias. Therefore, the selection of the history length is crucial for applying our theory in practice. In this section, we propose some practical guidance on the selection of the history length when estimating the behavior policy. Specifically, motivated by the bias-variance trade-off, we propose to select the history length
$$h^* = \arg\min_h \big[2n\, \widehat{\text{Var}}(h) + h \log(n)\big],$$
where $\widehat{\text{Var}}(h)$ denotes a variance estimator computed via the sampling variance formula or the bootstrap, and $h \log(n)$ is a Bayesian information criterion (BIC, Schwarz, 1978) penalty that prevents selecting a long history without a substantial reduction in variance. Our simulation studies (not reported in the paper) demonstrate the strong empirical performance of this history selection method.

To conclude this paper, we note that the OPE literature has been growing rapidly in recent years, expanding into several directions, including the investigation of partially observable environments (Uehara et al., 2023; Hu & Wager, 2023), heavy-tailed rewards (Xu et al., 2022; Liu et al., 2023; Rowland et al., 2023; Zhu et al., 2024; Behnamnia et al., 2025) and unmeasured confounders (Kallus & Zhou, 2020; Namkoong et al., 2020; Tennenholtz et al., 2020; Nair & Jiang, 2021; Shi et al., 2022a; Wang et al., 2022; Bruns-Smith & Zhou, 2023; Xu et al., 2023; Bennett & Kallus, 2024; Shi et al., 2024; Yu et al., 2024).
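The history-length selection rule described in the Discussion can be sketched as follows, assuming the BIC penalty enters with a positive sign (criterion $2n\,\widehat{\text{Var}}(h) + h \log n$) and that a variance estimate for each candidate $h$, e.g. from the bootstrap, is already available (the helper name is ours):

```python
import math

def select_history_length(var_hat, n):
    """Pick the history length h minimizing 2 * n * Var_hat(h) + h * log(n).

    var_hat: dict mapping candidate history length h -> variance estimate.
    n: sample size (number of trajectories).
    """
    crit = {h: 2 * n * v + h * math.log(n) for h, v in var_hat.items()}
    return min(crit, key=crit.get)
```

The penalty grows linearly in $h$, so a longer history is selected only when its estimated variance reduction outweighs $\log(n)/(2n)$ per extra step.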
Our proposal is related to a growing line of research that investigates optimal experimental design for OPE (Hanna et al., 2017; Mukherjee et al., 2022; Wan et al., 2022; Li et al., 2023; Liu & Zhang, 2024; Liu et al., 2024; Sun et al., 2024; Wen et al., 2025). These works focus on designing optimal behavior policies prior to data collection to improve OPE accuracy, whereas our proposal considers estimating behavior policies after data collection for the same purpose. The work of Liu & Zhang (2024) is particularly related, as the behavior policy is computed from offline data before being run to collect more data. Both approaches share the fundamental goal of enhancing OPE by learning behavior policies, whether for data collection or retrospective estimation.

Acknowledgement

Hongyi Zhou's and Ying Yang's research was partially supported by NSFC 12271286 & 11931001. Hongyi Zhou's research was also partially supported by the China Scholarship Council. Chengchun Shi's and Jin Zhu's research was partially supported by the EPSRC grant EP/W014971/1. Josiah Hanna acknowledges support from NSF (IIS-2410981), American Family Insurance through a research partnership with the University of Wisconsin–Madison's Data Science Institute, the Wisconsin Alumni Research Foundation, and Sandia National Labs through a University Partnership Award. The authors thank the anonymous referees and the area chair for their insightful and constructive comments, which have led to a significantly improved version of the paper.

Impact statement

This
paper provides a theoretical foundation for using history-dependent behavior policy estimators for OPE in reinforcement learning. Our research reveals that while these estimators may decrease accuracy with small sample sizes, they significantly improve estimation accuracy as the sample size increases. This insight clarifies when and how historical data should be integrated into behavior policy estimation, enhancing the effectiveness and reliability of various off-policy estimators across different applications. Our work primarily engages in theoretical analysis and does not directly interact with or manipulate real-world systems. Consequently, it is unlikely to have negative societal consequences.

References

Behnamnia, A., Aminian, G., Aghaei, A., Shi, C., Tan, V. Y. F., and Rabiee, H. R. Log-sum-exponential estimator for off-policy evaluation and learning. In International Conference on Machine Learning. PMLR, 2025.

Bennett, A. and Kallus, N. Proximal reinforcement learning: Efficient off-policy evaluation in partially observed Markov decision processes. Operations Research, 72(3):1071–1086, 2024.

Bian, Z., Shi, C., Qi, Z., and Wang, L. Off-policy evaluation in doubly inhomogeneous environments. Journal of the American Statistical Association, to appear, 2025.

Bibaut, A., Malenica, I., Vlassis, N., and Van Der Laan, M. More efficient off-policy evaluation through regularized targeted learning. In International Conference on Machine Learning, pp. 654–663. PMLR, 2019.

Bossens, D. M. and Thomas, P. S. Low variance off-policy evaluation with state-based importance sampling. In 2024 IEEE Conference on Artificial Intelligence (CAI), pp. 871–883. IEEE, 2024.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. OpenAI Gym, 2016. URL https://arxiv.org/abs/1606.01540.

Bruns-Smith, D. and Zhou, A.
Robust fitted-q-evaluation and iteration under sequentially exogenous unobserved confounders. arXiv preprint arXiv:2302.00662 , 2023. Cao, D. and Zhou, A. Orthogonalized estimation of differ- ence of q-functions. arXiv preprint arXiv:2406.08697 , 2024. Casella, G. and Berger, R. Statistical inference . CRC press, 2024. Chapelle, O. and Li, L. An empirical evaluation of thomp- son sampling. In Proceedings of the 24th International Conference on Neural Information Processing Systems , NIPS’11, pp. 2249–2257, Red Hook, NY , USA, 2011. Curran Associates Inc. ISBN 9781618395993.Chen, J. and Jiang, N. Information-theoretic considerations in batch reinforcement learning. In International Con- ference on Machine Learning , pp. 1042–1051. PMLR, 2019. Chen, X. and Qi, Z. On well-posedness and minimax opti- mal rates of nonparametric q-function estimation in off- policy evaluation. In Proceedings of the 39th Interna- tional Conference on Machine Learning , volume 162 of Proceedings of Machine Learning Research , pp. 3558– 3582. PMLR, 17–23 Jul 2022. Chernozhukov, V ., Chetverikov, D., and Kato, K. Gaussian approximation of suprema of empirical processes. The Annals of Statistics , pp. 1564–1597, 2014. Chernozhukov, V ., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal , 21(1):C1–C68, 01 2018. Dai, B., Nachum, O., Chow, Y ., Li, L., Szepesvari, C., and Schuurmans, D. Coindice: Off-policy confidence interval estimation. In Advances in Neural Information Processing Systems
https://arxiv.org/abs/2505.22492v1
, volume 33, pp. 9398–9411. Curran Associates, Inc., 2020. Dud´ık, M., Erhan, D., Langford, J., and Li, L. Doubly Ro- bust Policy Evaluation and Optimization. Statistical Sci- ence, 29(4):485 – 511, 2014. doi: 10.1214/14-STS500. Fan, J., Wang, Z., Xie, Y ., and Yang, Z. A theoretical analysis of deep q-learning. In Learning for dynamics and control , pp. 486–489. PMLR, 2020. Farajtabar, M., Chow, Y ., and Ghavamzadeh, M. More robust doubly robust off-policy evaluation. ArXiv , abs/1802.03493, 2018. Feng, Y ., Ren, T., Tang, Z., and Liu, Q. Accountable off- policy evaluation with kernel Bellman statistics. In Pro- ceedings of the 37th International Conference on Ma- chine Learning , volume 119 of Proceedings of Machine Learning Research , pp. 3102–3111. PMLR, 13–18 Jul 2020. Gottesman, O., Liu, Y ., Sussex, S., Brunskill, E., and Doshi- Velez, F. Combining parametric and nonparametric mod- els for off-policy evaluation. In International Conference on Machine Learning , pp. 2366–2375. PMLR, 2019. Guo, Z. D., Thomas, P. S., and Brunskill, E. Using options and covariance testing for long horizon off-policy pol- icy evaluation. In Proceedings of the 31st International Conference on Neural Information Processing Systems , NIPS’17, pp. 2489–2498, Red Hook, NY , USA, 2017. Curran Associates Inc. ISBN 9781510860964. 10 Demystifying the Paradox of IS with an Estimated History-Dependent Behavior Policy in OPE Hanna, J., Niekum, S., and Stone, P. Importance sampling policy evaluation with an estimated behavior policy. In Proceedings of the 36th International Conference on Ma- chine Learning , volume 97 of Proceedings of Machine Learning Research , pp. 2605–2613. PMLR, 09–15 Jun 2019. Hanna, J. P., Thomas, P. S., Stone, P., and Niekum, S. Data- efficient policy evaluation through behavior policy search. In Precup, D. and Teh, Y . W. (eds.), Proceedings of the 34th International Conference on Machine Learning , vol- ume 70 of Proceedings of Machine Learning Research , pp. 
1394–1403. PMLR, 06–11 Aug 2017. Hanna, J. P., Niekum, S., and Stone, P. Importance sam- pling in reinforcement learning with an estimated behav- ior policy. Mach. Learn. , 110(6):1267–1317, 2021. ISSN 0885-6125. Hao, B., Ji, X., Duan, Y ., Lu, H., Szepesvari, C., and Wang, M. Bootstrapping fitted q-evaluation for off-policy infer- ence. In Proceedings of the 38th International Confer- ence on Machine Learning , volume 139 of Proceedings of Machine Learning Research , pp. 4074–4084. PMLR, 2021. Henmi, M., Yoshida, R., and Eguchi, S. Importance sam- pling via the estimated sampler. Biometrika , 94(4):985– 991, 12 2007. Hirano, K., Imbens, G. W., and Ridder, G. Efficient esti- mation of average treatment effects using the estimated propensity score. Econometrica , 71(4):1161–1189, 2003. Hu, Y . and Wager, S. Off-policy evaluation in partially observed Markov decision processes under sequential ignorability. The Annals of Statistics , 51(4):1561 – 1585, 2023. doi: 10.1214/23-AOS2287. Jiang, N. and Li, L. Doubly robust off-policy value evalu- ation for reinforcement learning. In Proceedings of The 33rd International Conference on Machine Learning , vol- ume 48 of Proceedings of Machine Learning Research , pp. 652–661, New York, New York, USA, 20–22 Jun 2016. PMLR. Kallus, N. and Uehara,
https://arxiv.org/abs/2505.22492v1
M. Double reinforcement learning for efficient off-policy evaluation in markov decision pro- cesses. Journal of Machine Learning Research , 21(167): 1–63, 2020. URL http://jmlr.org/papers/ v21/19-827.html . Kallus, N. and Uehara, M. Efficiently breaking the curse of horizon in off-policy evaluation with double reinforce- ment learning. Oper. Res. , 70(6):3282–3302, November 2022. ISSN 0030-364X.Kallus, N. and Zhou, A. Confounding-robust policy eval- uation in infinite-horizon reinforcement learning. Ad- vances in neural information processing systems , 33: 22293–22304, 2020. Kosorok, M. R. Introduction to Empirical Processes and Semiparametric Inference . Springer New York, NY , 2008. Le, H., V oloshin, C., and Yue, Y . Batch policy learning under constraints. In Proceedings of the 36th Interna- tional Conference on Machine Learning , volume 97 of Proceedings of Machine Learning Research , pp. 3703– 3712. PMLR, 09–15 Jun 2019. Levine, S., Kumar, A., Tucker, G., and Fu, J. Offline rein- forcement learning: Tutorial, review, and perspectives on open problems. ArXiv , abs/2005.01643, 2020. Li, G., Wu, W., Chi, Y ., Ma, C., Rinaldo, A., and Wei, Y . Sharp high-probability sample complexities for policy evaluation with linear function approximation. arXiv preprint arXiv:2305.19001 , 2023. Liao, P., Klasnja, P., and Murphy, S. Off-policy estimation of long-term average outcomes with applications to mobile health. Journal of the American Statistical Association , 116(533):382–391, 2021. Liao, P., Qi, Z., Wan, R., Klasnja, P., and Murphy, S. A. Batch policy learning in average reward Markov decision processes. The Annals of Statistics , 50(6):3364 – 3387, 2022. Liu, Q., Li, L., Tang, Z., and Zhou, D. Breaking the curse of horizon: infinite-horizon off-policy estimation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems , NIPS’18, pp. 5361–5371, Red Hook, NY , USA, 2018. Curran Asso- ciates Inc. Liu, S. and Zhang, S. 
Efficient policy evaluation with offline data informed behavior policy design. In Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., and Berkenkamp, F. (eds.), Proceedings of the 41st International Conference on Machine Learning , volume 235 of Proceedings of Machine Learning Research , pp. 32345–32368. PMLR, 21–27 Jul 2024. Liu, S. D., Chen, C., and Zhang, S. Doubly optimal policy evaluation for reinforcement learning. arXiv preprint arXiv:2410.02226 , 2024. Liu, W., Tu, J., Zhang, Y ., and Chen, X. Online estimation and inference for robust policy evaluation in reinforce- ment learning. arXiv preprint arXiv:2310.02581 , 2023. Luckett, D. J., Laber, E. B., Kahkoska, A. R., David M. Maahs, E. M.-D., and Kosorok, M. R. Estimating 11 Demystifying the Paradox of IS with an Estimated History-Dependent Behavior Policy in OPE dynamic treatment regimes in mobile health using v- learning. Journal of the American Statistical Association , 115(530):692–706, 2020. doi: 10.1080/01621459.2018. 1537919. Mukherjee, S., Hanna, J. P., and Nowak, R. D. Revar: Strengthening policy evaluation via reduced variance sam- pling. In Cussens, J. and Zhang, K. (eds.), Proceedings of the Thirty-Eighth Conference on Uncertainty in Artifi- cial Intelligence , volume 180 of Proceedings of Machine Learning Research , pp. 1413–1422. PMLR, 01–05 Aug 2022. Murphy, S. A., van der Laan, M. J., Robins, J. M., and
https://arxiv.org/abs/2505.22492v1
Group, C. P. P. R. Marginal mean models for dynamic regimes. Journal of the American Statistical Association , 96(456):1410–1423, 2001. Nachum, O., Chow, Y ., Dai, B., and Li, L. Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections. Advances in neural information processing systems , 32, 2019. Nair, Y . and Jiang, N. A spectral approach to off-policy evaluation for pomdps. arXiv preprint arXiv:2109.10502 , 2021. Namkoong, H., Keramati, R., Yadlowsky, S., and Brunskill, E. Off-policy policy evaluation for sequential decisions under unobserved confounding. Advances in Neural In- formation Processing Systems , 33:18819–18831, 2020. Newey, W. K., Hsieh, F., and Robins, J. Undersmoothing and bias corrected functional estimation. 1998. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems , 35:27730–27744, 2022. Precup, D., Sutton, R. S., and Singh, S. P. Eligibility traces for off-policy policy evaluation. In Proceedings of the Sev- enteenth International Conference on Machine Learning , ICML ’00, pp. 759–766, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc. ISBN 1558607072. Puterman, M. L. Markov decision processes: discrete stochastic dynamic programming . John Wiley & Sons, 2014. Rosenbaum, P. R. and Rubin, D. B. The central role of the propensity score in observational studies for causal effects. Biometrika , 70(1):41–55, 1983. ISSN 00063444, 14643510. Rowland, M., Harutyunyan, A., Hasselt, H., Borsa, D., Schaul, T., Munos, R., and Dabney, W. Conditionalimportance sampling for off-policy learning. In Interna- tional Conference on Artificial Intelligence and Statistics , pp. 45–55. PMLR, 2020. Rowland, M., Tang, Y ., Lyle, C., Munos, R., Bellemare, M. G., and Dabney, W. 
The statistical benefits of quantile temporal-difference learning for value estimation. In In- ternational Conference on Machine Learning , pp. 29210– 29231. PMLR, 2023. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017. Schwarz, G. Estimating the dimension of a model. The annals of statistics , pp. 461–464, 1978. Shalev-Shwartz, S. and Ben-David, S. Understanding ma- chine learning: From theory to algorithms . Cambridge university press, 2014. Shao, J. Mathematical Statistics . Springer, New York, 2nd edition, 2003. ISBN 978-0-387-00179-1. doi: 10.1007/ b98854. Shen, X. On methods of sieves and penalization. The Annals of Statistics , 25(6):2555–2591, 1997. Shi, C., Wan, R., Chernozhukov, V ., and Song, R. Deeply- debiased off-policy interval estimation. In International conference on machine learning , pp. 9580–9591. PMLR, 2021. Shi, C., Uehara, M., Huang, J., and Jiang, N. A minimax learning approach to off-policy evaluation in confounded partially observable markov decision processes. In Inter- national Conference on Machine Learning , pp. 20057– 20094. PMLR, 2022a. Shi, C., Zhang, S., Lu, W., and Song, R. Statistical inference of the value function for reinforcement learning in infinite- horizon settings. Journal of the Royal Statistical Society Series B: Statistical Methodology , 84(3):765–793, 12 2022b. Shi, C., Wang, X., Luo, S., Zhu, H., Ye, J., and Song, R. Dynamic causal effects evaluation in a/b testing with a reinforcement
https://arxiv.org/abs/2505.22492v1
learning framework. Journal of the Ameri- can Statistical Association , 118(543):2059–2071, 2023. Shi, C., Zhu, J., Shen, Y ., Luo, S., Zhu, H., and Song, R. Off- policy confidence interval estimation with confounded markov decision process. Journal of the American Statis- tical Association , 119(545):273–284, 2024. Sun, K., Kong, L., Zhu, H., and Shi, C. Optimal treat- ment allocation strategies for a/b testing in partially observable time series experiments. arXiv preprint arXiv:2408.05342 , 2024. 12 Demystifying the Paradox of IS with an Estimated History-Dependent Behavior Policy in OPE Sutton, R. S., Szepesv ´ari, C., and Maei, H. R. A conver- gent o(n) algorithm for off-policy temporal-difference learning with linear function approximation. Advances in neural information processing systems , 21(21):1609– 1616, 2008. Tang, Z., Feng, Y ., Li, L., Zhou, D., and Liu, Q. Dou- bly robust bias reduction in infinite horizon off-policy estimation. In International Conference on Learning Representations , 2020. Tennenholtz, G., Shalit, U., and Mannor, S. Off-policy eval- uation in partially observable environments. In Proceed- ings of the AAAI Conference on Artificial Intelligence , volume 34, pp. 10276–10283, 2020. Thomas, P. S. and Brunskill, E. Data-efficient off-policy pol- icy evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48 , ICML’16, pp. 2139–2148. JMLR.org, 2016. Thomas, P. S., Theocharous, G., and Ghavamzadeh, M. High-confidence off-policy evaluation. In AAAI Confer- ence on Artificial Intelligence , 2015. Tsiatis, A. A. Semiparametric Theory and Missing Data . Springer, 2006. Uehara, M., Huang, J., and Jiang, N. Minimax weight and q-function learning for off-policy evaluation. In Proceed- ings of the 37th International Conference on Machine Learning , volume 119 of Proceedings of Machine Learn- ing Research , pp. 9659–9668. PMLR, 13–18 Jul 2020. 
Uehara, M., Shi, C., and Kallus, N. A review of off-policy evaluation in reinforcement learning. arXiv preprint arXiv:2212.06355, 2022.
Uehara, M., Kiyohara, H., Bennett, A., Chernozhukov, V., Jiang, N., Kallus, N., Shi, C., and Sun, W. Future-dependent value-based off-policy evaluation in POMDPs. In Advances in Neural Information Processing Systems, volume 36, pp. 15991–16008. Curran Associates, Inc., 2023.
Van Der Vaart, A. W. and Wellner, J. A. Weak Convergence. Springer, 1996.
Wan, R., Kveton, B., and Song, R. Safe exploration for efficient policy evaluation and comparison. In International Conference on Machine Learning, pp. 22491–22511. PMLR, 2022.
Wang, J., Qi, Z., and Shi, C. Blessing from human-AI interaction: Super reinforcement learning in confounded environments. arXiv preprint arXiv:2209.15448, 2022.
Wang, J., Qi, Z., and Wong, R. K. W. Projected state-action balancing weights for offline reinforcement learning. The Annals of Statistics, 51(4):1639–1665, 2023.
Wang, W., Li, Y., and Wu, X. Off-policy evaluation for tabular reinforcement learning with synthetic trajectories. Statistics and Computing, 34(1):41, 2024.
Wen, Q., Shi, C., Yang, Y., Tang, N., and Zhu, H. Unraveling the interplay between carryover effects and reward autocorrelations in switchback experiments. In International Conference on Machine Learning. PMLR, 2025.
Xie, C., Yang, W., and Zhang, Z. Semiparametrically efficient off-policy evaluation in linear Markov decision processes. In International Conference on Machine Learning, pp. 38227–38257. PMLR, 2023.
Xie, T., Ma, Y., and Wang, Y.-X. Towards optimal off-policy evaluation for reinforcement learning with marginalized importance sampling. Curran Associates Inc., Red Hook, NY, USA, 2019a.
Xie, T., Ma, Y., and Wang, Y.-X. Towards optimal off-policy evaluation for reinforcement learning with marginalized importance sampling. Advances in Neural Information Processing Systems, 32, 2019b.
Xu, Y., Shi, C., Luo, S., Wang, L., and Song, R. Quantile off-policy evaluation via deep conditional generative learning. arXiv preprint arXiv:2212.14466, 2022.
Xu, Y., Zhu, J., Shi, C., Luo, S., and Song, R. An instrumental variable approach to confounded off-policy evaluation. In International Conference on Machine Learning, pp. 38848–38880. PMLR, 2023.
Yin, M. and Wang, Y.-X. Asymptotically efficient off-policy evaluation for tabular reinforcement learning. In International Conference on Artificial Intelligence and Statistics, pp. 3948–3958. PMLR, 2020.
Yu, S., Fang, S., Peng, R., Qi, Z., Zhou, F., and Shi, C. Two-way deconfounder for off-policy evaluation in causal reinforcement learning. Advances in Neural Information Processing Systems, 37:78169–78200, 2024.
Zhang, B., Tsiatis, A. A., Laber, E. B., and Davidian, M. A robust method for estimating optimal treatment regimes. Biometrics, 68(4):1010–1018, 2012. ISSN 0006-341X.
Zhang, B., Tsiatis, A. A., Laber, E. B., and Davidian, M. Robust estimation of optimal dynamic treatment regimes for sequential treatment decisions. Biometrika, 100(3):681–694, 2013.
Zhao, X. and Zhang, Y. Asymptotic normality of nonparametric M-estimators with applications to hypothesis testing for panel count data. Statistica Sinica, 27:931–950, 2017. URL https://api.semanticscholar.org/CorpusID:54836455.
Zhao, Y.-Q., Zeng, D., Laber, E. B., and Kosorok, M. R. New statistical learning methods for estimating optimal dynamic treatment regimes. Journal of the American Statistical Association, 110(510):583–598, 2015.
Zhou, W., Li, Y., Zhu, R., and Qu, A. Distributional shift-aware off-policy interval estimation: A unified error quantification framework. arXiv preprint arXiv:2309.13278, 2023.
Zhu, J., Wan, R., Qi, Z., Luo, S., and Shi, C. Robust offline reinforcement learning with heavy-tailed rewards. In International Conference on Artificial Intelligence and Statistics, pp. 541–549. PMLR, 2024.

A. Details of experiments

Bandit example in Section 3.1. In our illustrative example, we set the context space $\mathcal{S} = \{0, 1\}$ and the action space $\mathcal{A} = \{0, 1\}$. The target policy is set as $\pi_e(1) = P_e(A=1) = 0.4$, $\pi_e(0) = P_e(A=0) = 0.6$. The behavior policy is set as $\pi_b(1) = P_b(A=1) = 0.3$, $\pi_b(0) = P_b(A=0) = 0.7$. Both the target and behavior policies are independent of the context information. The context information $S$ follows a Bernoulli distribution with parameter 0.5, that is, $P(S=0) = P(S=1) = 0.5$. Given context information $S$ and action $A$, the reward is a random variable with mean $10a + 0.1(1 + 2s)$. Therefore, the reward function is a deterministic function defined as $r(s, a) = 10a + 0.1(1 + 2s)$. For the illustrative example, we can derive the closed-form expression of the policy's value, which is 4.2.

Numerical experiments in Section 6. In the CartPole environment, the state space $\mathcal{S}$ is a subset of $\mathbb{R}^4$. Any $s \in \mathcal{S}$ is characterized by four elements $(x, \dot{x}, \theta, \dot{\theta})$, where $x, \dot{x}$ are the position and velocity of the cart, and $\theta, \dot{\theta}$ are the angle and angular velocity of the pole with respect to the vertical axis. The behavior policy and the target policy are set as

$$\pi_b(a|s) \sim \mathrm{Bernoulli}(p_b), \quad \text{where } p_b = \frac{1}{1 + \exp(10\theta)}; \qquad \pi_e(a|s) \sim \mathrm{Bernoulli}(p_e), \quad \text{where } p_e = \frac{1}{1 + \exp(20\theta)}.$$

Given $s = (x, \dot{x}, \theta, \dot{\theta})$, the reward is defined as $R = (2 - x/x_{\max})(2 - \theta/\theta_{\max}) - 1$. The maximum episode length is set to 200. We use a logistic regression model to estimate the behavior policy. The state transition model is the physical system implemented in the CartPole environment in the gym library, and the initial state is drawn uniformly from $[-0.05, 0.05]^4$. We use a Monte Carlo (MC) procedure to approximate the true value of the target policy. Specifically, we deploy the target policy in the simulator and obtain an empirical cumulative reward $\hat{v}^{(l)}_{\mathrm{MC}}$. The procedure is repeated $L$ times, and the MC estimator is given by

$$\hat{v}_{\mathrm{MC}} = \frac{1}{L} \sum_{l=1}^{L} \hat{v}^{(l)}_{\mathrm{MC}}.$$

In our experiments, we set $L = 10^6$, and the value of $\hat{v}_{\mathrm{MC}}$ is 92.91.

B. Additional experiment results

In this section, we examine the impact of using history-dependent behavior policies in the OIS estimator across three MuJoCo environments: (i) Inverted Pendulum; (ii) Double Inverted Pendulum; and (iii) Swimmer. For both Inverted Pendulum and Double Inverted Pendulum, the behavior policy is modeled using a transformed Beta distribution. Specifically, we set the action to $2Z - 1$, where $Z \sim \mathrm{Beta}(2 + S\theta, 2 - S\theta)$ and $\theta = e_1 = (1, 0, \ldots, 0)$. The parameter $\theta$ is estimated by maximizing the log-likelihood.
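As a quick numerical sanity check (our own illustration, assuming NumPy; not one of the paper's experiments), the closed-form value 4.2 of the bandit example above can be reproduced both analytically and by Monte Carlo with the oracle importance-sampling estimator:

```python
import numpy as np

# Bandit example from Appendix A: S ~ Bernoulli(0.5), behavior policy
# pi_b(1) = 0.3, target policy pi_e(1) = 0.4, reward r(s, a) = 10a + 0.1(1 + 2s).
rng = np.random.default_rng(0)
pi_b, pi_e = 0.3, 0.4

# Closed-form target value: E_e[10A] + E[0.1(1 + 2S)] = 4 + 0.2 = 4.2.
v_true = 10 * pi_e + 0.1 * (1 + 2 * 0.5)

# Monte Carlo check with the oracle IS estimator under the behavior policy.
n = 200_000
s = rng.integers(0, 2, n)                  # contexts
a = (rng.random(n) < pi_b).astype(int)     # actions drawn from pi_b
r = 10 * a + 0.1 * (1 + 2 * s)             # rewards at their means
w = np.where(a == 1, pi_e / pi_b, (1 - pi_e) / (1 - pi_b))
v_is = np.mean(w * r)

print(v_true)           # 4.2
print(round(v_is, 2))   # close to 4.2
```

With deterministic rewards at their means, the only randomness in the IS estimate comes from the context and action draws, so the estimate concentrates tightly around 4.2 at this sample size.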
In Swimmer, the action is two-dimensional, i.e., $A = (A_1, A_2)$, and we sample each component independently given the state: $A_1 \sim \mathrm{Beta}(2 + S\theta_1, 2 - S\theta_1)$ and $A_2 \sim \mathrm{Beta}(2 + S\theta_2, 2 - S\theta_2)$, with $\theta_1 = e_1 = (1, 0, \ldots, 0)$ and $\theta_2 = e_2 = (0, 1, 0, \ldots, 0)$. The results, summarized in Figure 3, demonstrate that history-dependent behavior policy estimation generally reduces the MSE of OIS in large-sample settings. Moreover, the performance tends to improve with longer history lengths. We further evaluate the use of history-dependent behavior policies in the SIS, DR, and MIS estimators within the more complex Swimmer environment. The results, presented in Figure 4, again align with our theory.

[Figure 3. Bias, log variance and log MSE for OIS estimators across three environments (Inverted Pendulum, Double Inverted Pendulum, Swimmer), plotted against sample size for several history lengths.]

[Figure 4. Bias, log variance and log MSE for OIS, DR and MIS estimators in the Swimmer environment, plotted against sample size for several history lengths.]

C. Proofs
C.1. Proof of Lemma 1

According to the definitions of $\hat{v}^{\mathrm{CD}}_{\mathrm{IS}}$ and $\hat{v}^{\mathrm{CA}}_{\mathrm{IS}}$, it follows from straightforward calculations that

$$\hat{v}^{\mathrm{CA}}_{\mathrm{IS}} = \mathbb{E}_n\Big\{\sum_a \pi_e(a)\hat{r}(a) + \frac{\pi_e(A)}{\hat{\pi}_b(A)}[R - \hat{r}(A)]\Big\},$$

and

$$\hat{v}^{\mathrm{CD}}_{\mathrm{IS}} = \mathbb{E}_n\Big\{\sum_a \pi_e(a)\hat{r}(S, a) + \frac{\pi_e(A)}{\hat{\pi}_b(A|S)}[R - \hat{r}(S, A)]\Big\}.$$

According to Neyman orthogonality, both the estimated reward and the estimated behavior policy can be asymptotically replaced by their oracle values without changing the OPE estimator's asymptotic MSE (Chernozhukov et al., 2018). As this part of the proof follows standard arguments, we provide only a sketch; interested readers may refer to, for example, the proof of Theorem 9 in Kallus & Uehara (2020) for further details. Specifically, $\hat{v}^{\mathrm{CD}}_{\mathrm{IS}}$ can be decomposed into the following four terms:

$$\hat{v}^{\mathrm{CD}}_{\mathrm{IS}} = \mathbb{E}_n\Big(\sum_a \pi_e(a) r(S, a) + \frac{\pi_e(A)}{\pi_b(A)}[R - r(S, A)]\Big) \tag{6}$$

$$+ \mathbb{E}_n\Big\{\sum_a \pi_e(a)[\hat{r}(S, a) - r(S, a)] - \frac{\pi_e(A)}{\pi_b(A)}[\hat{r}(S, A) - r(S, A)]\Big\} \tag{7}$$

$$+ \mathbb{E}_n\Big[\Big(\frac{\pi_e(A)}{\hat{\pi}_b(A|S)} - \frac{\pi_e(A)}{\pi_b(A)}\Big)[R - r(S, A)]\Big] \tag{8}$$

$$+ \mathbb{E}_n\Big[\Big(\frac{\pi_e(A)}{\hat{\pi}_b(A|S)} - \frac{\pi_e(A)}{\pi_b(A)}\Big)[\hat{r}(S, A) - r(S, A)]\Big]. \tag{9}$$

Here, the right-hand side (RHS) of (6) is the oracle DR estimator with the true reward function and IS ratio, and (7)–(9) are the remainder terms, which we will show are of order $o_p(n^{-1/2})$. In particular:

- For fixed $\hat{r}$ and $\hat{\pi}_b$, (7) and (8) have zero mean. They are of order $o_p(n^{-1/2})$ provided that $\hat{r}$ and $\hat{\pi}_b$ converge to their oracle values. Even when $\hat{r}$ and $\hat{\pi}_b$ are estimated from the same data used in the evaluation, our use of tabular methods, combined with the fact that the number of contexts and actions is finite, ensures that these estimators belong to function classes with finite VC dimension (Van Der Vaart & Wellner, 1996). Therefore, standard empirical process theory (e.g., Chernozhukov et al., 2014, Corollary 5.1) can be applied to establish that these terms are indeed $o_p(n^{-1/2})$.
- For fixed $\hat{r}$ and $\hat{\pi}_b$, (9) is of order $\|\hat{r} - r\| \times \|\hat{\pi}_b - \pi_b\|$, where $\|\hat{r} - r\|$ and $\|\hat{\pi}_b - \pi_b\|$ denote the root MSEs (RMSEs) between $\hat{r}(S, A)$ and $r(S, A)$, and between $\hat{\pi}_b(A|S)$ and $\pi_b(A)$, respectively. Crucially, the order is the product of the two RMSEs.
Consequently, as long as each decays to zero at a rate of $o_p(n^{-1/4})$, which is much slower than the parametric rate $O_p(n^{-1/2})$, this term becomes $o_p(n^{-1/2})$ as well. Again, under tabular estimation with finitely many contexts and actions, these estimators converge at the parametric rate, and empirical process theory can similarly be used to handle the dependence between the estimators and the evaluation data in (9).

Therefore, $\hat{v}^{\mathrm{CD}}_{\mathrm{IS}}$ is asymptotically equivalent to the oracle DR estimator (which is unbiased). Consequently, they achieve the same asymptotic variance and MSE, and we have

$$\mathrm{MSE}_A(\hat{v}^{\mathrm{CD}}_{\mathrm{IS}}) = \mathrm{MSE}_A\Big[\mathbb{E}_n\Big(\sum_a \pi_e(a) r(S, a) + \frac{\pi_e(A)}{\pi_b(A)}[R - r(S, A)]\Big)\Big] = \mathrm{Var}_A\Big[\mathbb{E}_n\Big(\sum_a \pi_e(a) r(S, a) + \frac{\pi_e(A)}{\pi_b(A)}[R - r(S, A)]\Big)\Big] = \frac{1}{n}\mathrm{Var}\Big(\sum_a \pi_e(a) r(S, a) + \frac{\pi_e(A)}{\pi_b(A)}[R - r(S, A)]\Big),$$

which is equal to

$$\frac{1}{n}\mathrm{Var}\Big(\sum_a \pi_e(a) r(S, a)\Big) + \frac{1}{n}\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}[R - r(S, A)]\Big).$$

A similar argument yields that

$$\mathrm{MSE}_A(\hat{v}^{\mathrm{CA}}_{\mathrm{IS}}) = \frac{1}{n}\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}[R - \mathbb{E}(R|A)]\Big).$$

Then the first inequality follows from the fact that

$$\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}[R - \mathbb{E}(R|A)]\Big) = \mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}[R - r(S, A)]\Big) + \mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}[r(S, A) - \mathbb{E}(R|A)]\Big),$$

and that

$$\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}[r(S, A) - \mathbb{E}(R|A)]\Big) \ge \mathrm{Var}\Big(\mathbb{E}\Big\{\frac{\pi_e(A)}{\pi_b(A)}[r(S, A) - \mathbb{E}(R|A)] \,\Big|\, S\Big\}\Big) = \mathrm{Var}\Big(\sum_a \pi_e(a) r(S, a)\Big).$$

The equality holds if and only if $\mathrm{Var}\big\{\frac{\pi_e(A)}{\pi_b(A)}[r(S, A) - \mathbb{E}(R|A)] \,\big|\, S\big\} = 0$, which implies that the context $S$ is independent of the reward function $r$.

We next prove the second inequality. Since $\hat{v}^{\dagger}_{\mathrm{IS}}$ is unbiased, the second inequality follows from the fact that

$$\mathrm{MSE}_A(\hat{v}^{\dagger}_{\mathrm{IS}}) = \frac{1}{n}\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)} R\Big) = \frac{1}{n}\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}[R - \mathbb{E}(R|A)]\Big) + \frac{1}{n}\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}\mathbb{E}(R|A)\Big) = \mathrm{MSE}_A(\hat{v}^{\mathrm{CA}}_{\mathrm{IS}}) + \frac{1}{n}\mathrm{Var}\Big(\frac{\pi_e(A)}{\pi_b(A)}\mathbb{E}(R|A)\Big) \ge \mathrm{MSE}_A(\hat{v}^{\mathrm{CA}}_{\mathrm{IS}}).$$

The equality holds if and only if $\mathbb{E}(R|A) = 0$ almost surely.

C.2. Proof of Theorems in Section 4

Details of Assumption 1. We assume that the policy class is parametrized by a vector $\theta = (\theta_0, \ldots, \theta_k)$. For any $\pi_\theta \in \Pi_k$ and $i \in \{0, \ldots, k\}$, the state-action pair $(S_{t-i}, A_{t-i})$ affects $\pi_\theta$ only through its interaction with $\theta_i$. In this way, if we set $\theta_1 = \cdots = \theta_k = 0$, then $\pi_\theta$ becomes a Markov policy. Moreover, for any $k' < k$, if we fix $\theta_{k'+1} = \cdots = \theta_k = 0$, then the policy class $\Pi_k$ degenerates to $\Pi_{k'}$.

Notations. Given a single trajectory $H = (s_0, a_0, r_0, \ldots, s_T, a_T, r_T)$, let $H_{t-k:t}$ denote the trajectory segment $(s_{t-k}, a_{t-k}, \ldots, s_t)$. The likelihood function of trajectory $H$ under policy $\pi_\theta(\cdot|\cdot)$ is given by

$$p(H, \theta) = \prod_{t=0}^{T} \pi_\theta(a_t|H_{t-k:t})\, p(r_t|s_t, a_t)\, p(s_{t+1}|s_t, a_t).$$

Further define $p(H, \pi_e)$ to be the likelihood function of trajectory $H$ under policy $\pi_e$, given as

$$p(H, \pi_e) = \prod_{t=0}^{T} \pi_e(a_t|H_{t-k:t})\, p(r_t|s_t, a_t)\, p(s_{t+1}|s_t, a_t).$$

The log-likelihood function is defined as $L(H, \theta) = \log p(H, \theta)$ and the score function is defined as

$$s(H, k, \theta) = \frac{\partial}{\partial \theta} \log p(H, \theta) = \frac{\partial}{\partial \theta} \sum_{t=0}^{T} \log \pi_\theta(a_t|H_{t-k:t}).$$

In what follows, we write $s(H, k, \theta)$ as $s(H, \theta)$ to ease notation. Let $H_t = (s_0, a_0, \ldots, s_t, a_t)$ be the state-action trajectory up to time $t$ and $H_{s_t} = (s_0, a_0, \ldots, s_t)$ be the trajectory up to $s_t$. We further define

$$s(H_t, \theta) = \frac{\partial}{\partial \theta} \sum_{j=0}^{t} \log \pi_\theta(a_j|H_{j-k:j}), \qquad s(H_{t:T}, \theta) = \frac{\partial}{\partial \theta} \sum_{j=t+1}^{T} \log \pi_\theta(a_j|H_{j-k:j}).$$

Proof of Theorem 2. For simplicity of notation, we define

$$u(H, \theta) = G_T \prod_{t=0}^{T} \frac{\pi_e(a_t|s_t)}{\pi_\theta(a_t|H_{t-k:t})} = G_T\, \frac{p(H, \pi_e)}{p(H, \theta)}.$$

Direct calculation yields that

$$\frac{\partial}{\partial \theta} u(H, \theta) = -u(H, \theta)\, s(H, \theta), \tag{10}$$

and $\hat{v}^{\dagger}_{\mathrm{OIS}} = \frac{1}{n}\sum_{i=1}^{n} u(H_i, \theta^*)$, $\hat{v}_{\mathrm{OIS}}(k) = \frac{1}{n}\sum_{i=1}^{n} u(H_i, \hat{\theta}_n)$.
Using a Taylor expansion at $\theta = \theta^*$, we obtain

$$\hat{v}_{\mathrm{OIS}}(k) - \hat{v}^{\dagger}_{\mathrm{OIS}} = \frac{1}{n}\sum_{i=1}^{n} \frac{\partial}{\partial \theta} u(H_i, \theta^*)(\hat{\theta}_n - \theta^*) + R_n(\hat{\theta}_n) = -\frac{1}{n}\sum_{i=1}^{n} u(H_i, \theta^*)\, s(H_i, \theta^*)(\hat{\theta}_n - \theta^*) + R_{n1}(\hat{\theta}_n), \tag{11}$$

where the remainder term can be represented as

$$R_{n1}(\hat{\theta}_n) = \frac{1}{2n}\,(\hat{\theta}_n - \theta^*)^\top \sum_{i=1}^{n} u(H_i, \tilde{\theta}_n)\Big(s(H_i, \tilde{\theta}_n)\, s(H_i, \tilde{\theta}_n)^\top - \frac{\partial}{\partial \theta} s(H_i, \tilde{\theta}_n)\Big)(\hat{\theta}_n - \theta^*).$$

Under the bounded rewards assumption (Assumption 3), we have $G_T = O(T R_{\max})$. Under the coverage assumption (Assumption 4), we have $u(H, \theta) = O_p(T C^T R_{\max})$ and $s(H, \theta) = O(\varepsilon^{-1})$. Under the differentiability assumption (Assumption 5), $\frac{\partial}{\partial \theta} s(H, \theta) = O(\varepsilon^{-2})$. Combining these facts, we obtain that the remainder term satisfies

$$R_{n1} = O_p\Big(\frac{T C^T R_{\max}}{\varepsilon^2}\, \|\hat{\theta}_n - \theta^*\|^2\Big). \tag{12}$$

Using the property of the maximum likelihood estimator (see, e.g., Theorem 4.17 in Shao, 2003), we have

$$\sqrt{n}(\hat{\theta}_n - \theta^*) = I^{-1}(\theta^*)\, \frac{1}{\sqrt{n}}\sum_{i=1}^{n} s(H_i, \theta^*) + O_P(\|\hat{\theta}_n - \theta^*\|^2). \tag{13}$$

Further, by the central limit theorem, $\sqrt{1/(nT)}\sum_{i=1}^{n} s(H_i, \theta^*)$ converges to a normal distribution with mean zero and variance $I(\theta^*)$, which is of order $O(T)$. It follows that, under the non-singularity assumption (Assumption 6),

$$\|\hat{\theta}_n - \theta^*\| = O_P\Big(\sqrt{\frac{k+1}{nT}}\Big).$$

Combining equations (11), (12) and (13), we have

$$\hat{v}_{\mathrm{OIS}}(k) - \hat{v}^{\dagger}_{\mathrm{OIS}} = -\frac{1}{n}\sum_{i=1}^{n} u(H_i, \theta^*)\, s(H_i, \theta^*)\, I^{-1}(\theta^*)\, \frac{1}{n}\sum_{j=1}^{n} s(H_j, \theta^*) + O_p\Big(\frac{(k+1) C^T R_{\max}}{n \varepsilon^2}\Big)$$

$$= -\frac{1}{\sqrt{n}}\, \mathbb{E}[u(H, \theta^*)\, s(H, \theta^*)]\, I^{-1}(\theta^*)\, \frac{1}{\sqrt{n}}\sum_{j=1}^{n} s(H_j, \theta^*) + O_p\Big(\frac{(k+1) C^T R_{\max}}{n \varepsilon^2}\Big) + R_{n2}, \tag{14}$$

where

$$R_{n2} = \Big\{\frac{1}{n}\sum_{i=1}^{n} u(H_i, \theta^*)\, s(H_i, \theta^*) - \mathbb{E}[u(H, \theta^*)\, s(H, \theta^*)]\Big\}\, I^{-1}(\theta^*)\, \frac{1}{n}\sum_{j=1}^{n} s(H_j, \theta^*).$$

Again, according to the central limit theorem, we have

$$\frac{1}{n}\sum_{i=1}^{n} u(H_i, \theta^*)\, s(H_i, \theta^*) - \mathbb{E}[u(H, \theta^*)\, s(H, \theta^*)] = O_p\Big(\sqrt{\frac{T}{n}}\, C^T R_{\max}\, \varepsilon^{-1}\Big).$$

Therefore, $R_{n2}$ is also of order $O_p\big(\frac{(k+1) C^T R_{\max}}{n \varepsilon^2}\big)$. Plugging into equation (14), we obtain

$$\hat{v}_{\mathrm{OIS}}(k) - \hat{v}^{\dagger}_{\mathrm{OIS}} = -\frac{1}{\sqrt{n}}\, \mathbb{E}[u(H, \theta^*)\, s(H, \theta^*)]\, I^{-1}(\theta^*)\, \frac{1}{\sqrt{n}}\sum_{j=1}^{n} s(H_j, \theta^*) + O_p\Big(\frac{(k+1) C^T R_{\max}}{n \varepsilon^2}\Big),$$

where the predominant term on the right-hand side is denoted as $v_1$. Using the fact that $\mathbb{E}[s(H, \theta^*)] = 0$, we know that the predominant term has mean 0. Meanwhile, since $I(\theta^*) = \mathbb{E}[s(H, \theta^*)\, s^\top(H, \theta^*)]$, we obtain

$$\mathrm{Var}(v_1) = \mathrm{Cov}(\hat{v}^{\dagger}_{\mathrm{OIS}}, v_1) = \frac{1}{n}\, \mathbb{E}[u(H, \theta^*)\, s^\top(H, \theta^*)]\, I^{-1}(\theta^*)\, \mathbb{E}[u(H, \theta^*)\, s(H, \theta^*)]. \tag{15}$$

It follows that $\mathrm{Cov}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1, v_1) = 0$. We define

$$T^{\perp}(k) := \big\{w = s^\top(H, \theta^*)\, a \,\big|\, a \in \mathbb{R}^{k+1}\big\}$$

as the tangent space spanned by the score vector, and we define

$$T(k) := \big\{w \,\big|\, \mathbb{E}\{u \cdot w\} = 0, \ \forall u \in T^{\perp}(k)\big\}.$$

In fact, the whole space can be decomposed as $T(k) \oplus T^{\perp}(k)$. Here $v_1 \in T^{\perp}(k)$ is the orthogonal projection of $\hat{v}^{\dagger}_{\mathrm{OIS}}$ onto the tangent space spanned by the score vector, and $\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1 \in T(k)$ is the projection of $\hat{v}^{\dagger}_{\mathrm{OIS}}$ onto the space of random variables orthogonal to the score vector. Moreover, equation (15) indicates

$$\hat{v}_{\mathrm{OIS}}(k) - v_{\mathrm{true}} = (\hat{v}^{\dagger}_{\mathrm{OIS}} - v_{\mathrm{true}}) - v_1 + R_{n3}, \tag{16}$$

with $R_{n3} = O_p\big(\frac{(k+1) C^T R_{\max}}{n \varepsilon^2}\big)$. Taking the variance on both sides, we obtain

$$\mathrm{Var}(\hat{v}_{\mathrm{OIS}}(k)) = \mathrm{Var}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1) + \mathrm{Var}(R_{n3}) + 2\, \mathrm{Cov}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1, R_{n3}). \tag{17}$$

Using similar calculations, we can show that

$$\mathrm{Var}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1) = O(R_{\max}^2 C^{2T} / n), \qquad \mathrm{Var}(R_{n3}) = O\Big(\frac{(k+1)^2 C^{2T} R_{\max}^2}{n^2 \varepsilon^4}\Big).$$

By the Cauchy-Schwarz inequality, we have

$$\mathrm{Cov}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1, R_{n3}) \le \sqrt{\mathrm{Var}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1) \cdot \mathrm{Var}(R_{n3})} = O\Big(\frac{(k+1) C^{2T} R_{\max}^2}{n^{3/2} \varepsilon^2}\Big).$$

Since $\varepsilon$ is a constant, $\mathrm{Var}(R_{n3})$ is a higher-order term compared to $\mathrm{Cov}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1, R_{n3})$. Furthermore, since $\hat{v}^{\dagger}_{\mathrm{OIS}}$ is unbiased,

$$\mathrm{Bias}(\hat{v}_{\mathrm{OIS}}(k)) = O\Big(\frac{(k+1) C^T R_{\max}}{n \varepsilon^2}\Big).$$

It follows that $\mathrm{Bias}^2(\hat{v}_{\mathrm{OIS}}(k))$ is a higher-order term compared to $\mathrm{Cov}(\hat{v}^{\dagger}_{\mathrm{OIS}} - v_1, R_{n3})$.
Using bias-variance decomposition, we obtain MSE(bvOIS(k)) = Var(bv† OIS−v1) +Bias2(bvOIS(k)) +O(k+ 1)C2TR2 max n3/2ε2 =Var ProjT(k)(bv† OIS) +O(k+ 1)C2TR2 max n3/2ε2 =1 n ProjT(k)(λTGT) +O(k+ 1)C2TR2 max n3/2ε2 , (18) where ProjT(k)(·)represents the orthogonal projection of a random variable to the space T(k). This proves the first claim of Theorem 2. We next show the second claim of Theorem 2. In fact, for any k′< k, under the monotocity assumption (Assumption 1), the tangent space spanned by score vector for model Πkis strictly larger than that of Πk′. Therefore, we have T(k)⊥⊆T(k′)⊥. It follows that k′< k,T(k′)⊆T(k)and the second claim of Theorem 2 directly follows from Pythagorean Theorem. Proof of Theorem 4. The proof of Theorem 4 simply follows the proof of Theorem 6 by taking Q(s, a)≡0and is thus omitted. Proof of Theorem 6. The likelihood of trajectory segment Ht= (S0, A0, R0, . . . , S t, At, Rt)can be represented as: Pθ(HSt+1) =tY j=0πθ(Aj|Hj−k:j)p(Sj+1|Sj, Aj)p(Rj|Sj, Aj). It follows that the cumulative density ratio with respect to behavior policy πθcan be represented as λt(θ) :=tY j=1πe(Aj|Sj) πθ(Aj|Sj−k:j)=Pπe(HSt+1) Pθ(HSt+1). Then the doubly robust estimator can be represented as bvDR(k) = EnTX t=0( Pπe(HSt+1) Pˆθn(HSt+1)γt(Rt−Qt(St, At)) +Pπe(HSt) Pˆθn(HSt)γtQt(St, πe)) =EnQ0(S0, πe) +EnTX t=0( Pπe(HSt+1) Pˆθn(HSt+1)γt(Rt−Qt(St, At)
https://arxiv.org/abs/2505.22492v1
$$+\,\gamma Q_{t+1}(S_{t+1},\pi_e)\big)\Big\},\qquad(19)$$
with $Q_t(S,\pi_e)=\int_{a}Q_t(S,a)\,d\pi_e(a\mid S)$, and the doubly robust estimator with the oracle weight can be represented as
$$\hat v^{\dagger}_{\mathrm{DR}}=\mathbb{E}_{n}Q_0(S_0,\pi_e)+\mathbb{E}_{n}\sum_{t=0}^{T}\frac{P_{\pi_e}(H_{S_{t+1}})}{P_{\theta^*}(H_{S_{t+1}})}\gamma^{t}\big(R_t-Q_t(S_t,A_t)+\gamma Q_{t+1}(S_{t+1},\pi_e)\big).$$
For notational simplicity, we denote
$$u(H_{S_{t+1}},\theta)=\frac{P_{\pi_e}(H_{A_t})}{P_{\pi_\theta}(H_{A_t})}\gamma^{t}\big(R_t-Q_t(S_t,A_t)+\gamma Q_{t+1}(S_{t+1},\pi_e)\big).$$
Then direct calculation yields $\frac{\partial}{\partial\theta}u(H_{S_{t+1}},\theta)=u(H_{S_{t+1}},\theta)\,s(H_t,\theta)$. Under Assumptions 3, 4 and 5, using a similar argument as in the proofs of equations (11), (12), (13) and (14), a Taylor expansion yields
$$\hat v_{\mathrm{DR}}(k)-\hat v^{\dagger}_{\mathrm{DR}}=-\mathbb{E}_{n}\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}})s(\theta^*,H_{A_t})^{\top}\Big\}(\hat\theta_n-\theta^*)+O_P\Big(\frac{(k+1)TC^{T}U_{\max}}{\varepsilon^{2}}\|\hat\theta_n-\theta^*\|^{2}\Big)$$
$$=-\mathbb{E}_{n}\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}})s(\theta^*,H_{A_t})^{\top}\Big\}I^{-1}(\theta^*)\,\mathbb{E}_{n}s(\theta^*,H_{T})+O_P\Big(\frac{(k+1)C^{T}U_{\max}}{n\varepsilon^{2}}\Big)$$
$$=-E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}})s(\theta^*,H_{A_t})^{\top}\Big\}I^{-1}(\theta^*)\,\mathbb{E}_{n}s(\theta^*,H_{T})+O_P\Big(\frac{(k+1)C^{T}U_{\max}}{n\varepsilon^{2}}\Big).$$
Denote the main term on the right-hand side of the last line by $v_2$. Note that
$$E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)s(H_{T},\theta^*)\Big\}=E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)\big(s(H_{t},\theta^*)+s(H_{t:T},\theta^*)\big)\Big\}$$
$$=E\Big\{E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)\big(s(H_{t},\theta^*)+s(H_{t:T},\theta^*)\big)\,\Big|\,H_{S_{t+1}}\Big]\Big\}$$
$$=E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)\big(s(H_{t},\theta^*)+E\big[s(H_{t:T},\theta^*)\,\big|\,H_{S_{t+1}}\big]\big)\Big\}$$
$$=E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)\big(s(H_{t},\theta^*)+E\big[s(H_{t:T},\theta^*)\,\big|\,S_{t+1}\big]\big)\Big\}$$
$$=E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)s(H_{t},\theta^*)\Big\},$$
where the second equality follows from the law of total expectation, the fourth equality follows from the Markov property, and the last equality follows from the fact that the score function vanishes in expectation at the true parameter. Thus, it follows from direct calculation that $\mathrm{Var}(v_2)=\mathrm{Cov}(\hat v^{\dagger}_{\mathrm{DR}},v_2)$. Therefore, similarly to the proof of Theorem 2, $v_2$ is the orthogonal projection of $\hat v^{\dagger}_{\mathrm{DR}}$ onto the tangent space spanned by the score function. Plugging into equation (20) and subtracting $v_{\mathrm{true}}$ on both sides yields
$$\hat v_{\mathrm{DR}}(k)-v_{\mathrm{true}}=(\hat v^{\dagger}_{\mathrm{DR}}-v_{\mathrm{true}})-v_2+O_P\Big(\frac{(k+1)C^{T}U_{\max}}{n\varepsilon^{2}}\Big).$$
Using a similar argument as in the proof of equation (18), and combining the facts that $\hat v^{\dagger}_{\mathrm{DR}}$ is unbiased and $Ev_2=0$, we obtain
$$\mathrm{MSE}(\hat v_{\mathrm{DR}}(k))=\mathrm{Var}\big(\mathrm{Proj}_{T(k)}(\hat v^{\dagger}_{\mathrm{DR}})\big)+O\Big(\frac{(k+1)C^{2T}U_{\max}^{2}}{n^{3/2}\varepsilon^{2}}\Big).\qquad(20)$$
This finishes the first claim of Theorem 6.
In order to prove that $\mathrm{Var}\big(\mathrm{Proj}_{T(k)}(\hat v^{\dagger}_{\mathrm{DR}})\big)$ is decreasing with respect to $k$, we denote $\sigma^{2}(k)=\mathrm{Var}(\hat v^{\dagger}_{\mathrm{DR}})-\mathrm{Var}\big(\mathrm{Proj}_{T(k)}(\hat v^{\dagger}_{\mathrm{DR}})\big)$; then $\sigma^{2}(k)=\mathrm{Var}(v_2)$. It follows that
$$\sigma^{2}(k)=\frac{1}{n}E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)s^{\top}(H_t,\theta^*)\Big\}I^{-1}(\theta^*)\,E\Big\{\sum_{t=0}^{T}u(H_{S_{t+1}},\theta^*)s(H_t,\theta^*)\Big\}$$
$$=\frac{1}{n}E\Big\{\sum_{t=0}^{T}\prod_{j=0}^{t}\frac{\pi_e(A_j\mid S_j)}{\pi_{\theta^*}(A_j\mid S_j)}\gamma^{t}U_t\,s(H_t,\theta^*)^{\top}\Big\}I^{-1}(\theta^*)\,E\Big\{\sum_{t=0}^{T}\prod_{j=0}^{t}\frac{\pi_e(A_j\mid S_j)}{\pi_{\theta^*}(A_j\mid S_j)}\gamma^{t}U_t\,s(H_t,\theta^*)\Big\},\qquad(21)$$
with $U_t=R_t-Q_t(S_t,A_t)+\gamma Q_{t+1}(S_{t+1},\pi_e)$. We next prove that for any $k'<k$, the inequality $\sigma^{2}(k')\le\sigma^{2}(k)$ holds. For $\theta=(\theta_0,\dots,\theta_k)$, define $\gamma=(\theta_0,\dots,\theta_{k'})$ and $\eta=(\theta_{k'+1},\dots,\theta_k)$, and write $\theta^*=(\gamma^*,\eta^*)$. It follows that $s^{\top}(H_t,\theta)=(s^{\top}(H_t,\gamma),s^{\top}(H_t,\eta))$ for any $t\in\{0,1,\dots,T\}$. Therefore, we can conclude that
$$\sigma^{2}(k')=\frac{1}{n}E\Big\{\sum_{t=0}^{T}\prod_{j=0}^{t}\frac{\pi_e(A_j\mid S_j)}{\pi_{\theta^*}(A_j\mid S_j)}U_t\,s(H_t,\gamma^*)^{\top}\Big\}I^{-1}(\gamma^*)\,E\Big\{\sum_{t=0}^{T}\prod_{j=0}^{t}\frac{\pi_e(A_j\mid S_j)}{\pi_{\theta^*}(A_j\mid S_j)}U_t\,s(H_t,\gamma^*)\Big\}.$$
Let $I(\gamma^*)=E[s(H,\gamma^*)s^{\top}(H,\gamma^*)]$, $I(\eta^*)=E[s(H,\eta^*)s^{\top}(H,\eta^*)]$ and $I_{12}=E[s(H,\gamma^*)s^{\top}(H,\eta^*)]$; then
$$I(\theta^*)=\begin{pmatrix}I(\gamma^*)&I_{12}\\ I_{12}^{\top}&I(\eta^*)\end{pmatrix}.$$
In order to calculate $I^{-1}(\theta^*)$, we apply the formula for the inverse of a block matrix,
$$\begin{pmatrix}A&B\\ C&D\end{pmatrix}^{-1}=\begin{pmatrix}A^{-1}+A^{-1}B(D-CA^{-1}B)^{-1}CA^{-1}&-A^{-1}B(D-CA^{-1}B)^{-1}\\ -(D-CA^{-1}B)^{-1}CA^{-1}&(D-CA^{-1}B)^{-1}\end{pmatrix},$$
and obtain from equation (21) that
$$\sigma^{2}(k)=\sigma^{2}(k')+E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s^{\top}(H_T,\gamma^*)\Big]I^{-1}(\gamma^*)I_{12}J^{-1}I_{21}I^{-1}(\gamma^*)E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s(H_T,\gamma^*)\Big]$$
$$-\,E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s^{\top}(H_T,\eta^*)\Big]J^{-1}I_{12}^{\top}I^{-1}(\gamma^*)E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s(H_T,\gamma^*)\Big]$$
$$-\,E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s^{\top}(H_T,\gamma^*)\Big]$$
$$I^{-1}(\gamma^*)I_{12}J^{-1}E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s(H_T,\eta^*)\Big]+E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s^{\top}(H_T,\eta^*)\Big]J^{-1}E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s(H_T,\eta^*)\Big]$$
$$=\sigma^{2}(k')+\Big\|J^{-1/2}I_{12}^{\top}I^{-1}(\gamma^*)E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s(H_t,\gamma^*)\Big]-J^{-1/2}E\Big[\sum_{t=0}^{T}u(H_{S_{t+1}})s(H_t,\eta^*)\Big]\Big\|^{2},$$
with $J=I(\eta^*)-I_{12}^{\top}I^{-1}(\gamma^*)I_{12}$. Thus, we obtain $\sigma^{2}(k)\ge\sigma^{2}(k')$ for any $k'<k$. This finishes the proof that $\mathrm{Var}\big(\mathrm{Proj}_{T(k)}(\hat v^{\dagger}_{\mathrm{DR}})\big)$ is decreasing with respect to $k$.

Proof of Corollary 7. We directly calculate $\sigma^{2}(k)$ in equation (21):
$$E\Big\{\sum_{t=0}^{T}\prod_{j=0}^{t}\frac{\pi_e(A_j\mid S_j)}{\pi_{\theta^*}(A_j\mid S_j)}\gamma^{t}U_t\,s(\theta^*,H_t)\Big\}=E\Big\{E\Big[\sum_{t=0}^{T}\prod_{j=0}^{t}\frac{\pi_e(A_j\mid S_j)}{\pi_{\theta^*}(A_j\mid S_j)}\gamma^{t}U_t\,s(\theta^*,H_t)\,\Big|\,H_t\Big]\Big\}$$
$$=E\Big\{\sum_{t=0}^{T}\prod_{j=0}^{t}\frac{\pi_e(A_j\mid S_j)}{\pi_{\theta^*}(A_j\mid S_j)}s(\theta^*,H_t)\,\gamma^{t}E\big[U_t\,\big|\,S_t,A_t\big]\Big\}=0,\qquad(22)$$
where the last equality follows from the Bellman equation, which indicates $E[U_t\mid S_t,A_t]=0$. Together with equation (21), this completes the proof.

Proof of Theorem 8. We first prove that the MIS estimator with weight function estimated by linear sieves is equivalent to the double reinforcement learning (DRL) estimator (Kallus & Uehara, 2020) with Q-function estimated by linear sieves, that is,
$$\hat v_{\mathrm{MIS}}=\hat v_{\mathrm{DRL}}:=\mathbb{E}_{n}\sum_{t=0}^{T}\Big\{\hat w_t\big(R_t-\hat Q_t(S_t,A_t)\big)+\hat w_{t-1}\sum_{a}\hat Q_t(S_t,a)\pi_e(a\mid S_t)\Big\},$$
where $\hat Q_t=\phi_t^{\top}(A_t,S_t)\hat\beta_t$, and $\hat\beta_t$ is iteratively defined as $\hat\beta_t=\hat\Sigma_t^{-1}\big(\mathbb{E}_{n}R_t+\gamma\hat\Sigma_{t+1,t}\hat\beta_{t+1}\big)$. For ease of notation, we define
$$\hat Q_t(S,\pi_e):=\sum_{a}\hat Q_t(S,a)\pi_e(a\mid S),\qquad \phi_t(S,\pi_e):=\sum_{a}\phi_t(S,a)\pi_e(a\mid S).$$
Recall that $\hat w_t=\phi_t(S_t,A_t)\hat\alpha_t$. By direct calculation, we have
$$\mathbb{E}_{n}\big\{\hat w_{t-1}\hat Q_t(S_t,\pi_e)\big\}=\mathbb{E}_{n}\big\{\hat\alpha_{t-1}^{\top}\phi_{t-1}(S_t,A_t)\phi_t^{\top}(S_t,\pi_e)\hat\beta_t\big\}=\hat\alpha_{t-1}^{\top}\hat\Sigma_{t,t-1}\hat\beta_t,$$
$$\mathbb{E}_{n}\big\{\hat w_{t}\hat Q_t(S_t,A_t)\big\}=\mathbb{E}_{n}\big\{\hat\alpha_{t}^{\top}\phi_t(S_t,A_t)\phi_t(S_t,A_t)^{\top}\hat\beta_t\big\}=\mathbb{E}_{n}\big\{(\hat\Sigma_t^{-1}\hat\Sigma_{t,t-1}\hat\alpha_{t-1})^{\top}\phi_t\phi_t^{\top}\hat\beta_t\big\}=\hat\alpha_{t-1}^{\top}\hat\Sigma_{t,t-1}\hat\beta_t,\qquad(23)$$
where the second-to-last equality is obtained from the recursive definition of $\hat\alpha_t$. It follows that $\mathbb{E}_{n}\hat w_{t-1}\hat Q_t(S_t,\pi_e)=\mathbb{E}_{n}\hat w_{t}\hat Q_t(S_t,A_t)$. Plugging into equation (23), we conclude that the MIS estimator is equivalent to the DRL estimator.
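The block-matrix (Schur complement) inversion formula invoked in the proof above can be verified numerically. The sketch below uses a random symmetric positive-definite matrix as a stand-in for the Fisher information $I(\theta^*)$; it is a generic check of the identity, not tied to any particular behavior-policy model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random symmetric positive-definite matrix, partitioned into blocks.
M = rng.normal(size=(6, 6))
I_theta = M @ M.T + 6 * np.eye(6)          # plays the role of I(theta*)
A, B = I_theta[:4, :4], I_theta[:4, 4:]    # A ~ I(gamma*), B ~ I_12
C, D = I_theta[4:, :4], I_theta[4:, 4:]    # C ~ I_12^T,    D ~ I(eta*)

Ainv = np.linalg.inv(A)
J = D - C @ Ainv @ B                       # Schur complement of A
Jinv = np.linalg.inv(J)

# Assemble the inverse block by block, following the formula in the proof.
inv_blocks = np.block([
    [Ainv + Ainv @ B @ Jinv @ C @ Ainv, -Ainv @ B @ Jinv],
    [-Jinv @ C @ Ainv,                   Jinv],
])

print(np.allclose(inv_blocks, np.linalg.inv(I_theta)))  # True
```

The positive-semidefinite quadratic form in $J^{-1}$ is exactly what makes the difference $\sigma^{2}(k)-\sigma^{2}(k')$ a squared norm.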
Now, suppose the estimated weight and Q-function converge to their true values. If we replace $\hat w(S_t,A_t)$ by $\hat w(S_{t-k:t},A_{t-k:t})$, the resulting estimator will have a larger variance. Additionally, if the weight is estimated using all the history data, then $\hat v_{\mathrm{MIS}}$ becomes the doubly robust estimator $\hat v_{\mathrm{DR}}$. This indicates that the variance increases as more history is used to estimate the weights. Formally, define
$$\hat v_{\mathrm{MIS}}(k)=\mathbb{E}_{n}\sum_{t=0}^{T}\Big\{\hat w_t(H_{t-k:t})\big(R_t-\hat Q(S_t,A_t)\big)+\hat w_{t-1}(H_{t-k-1:t-1})\int_{a}\hat Q(S_t,a)\,d\pi_e(a\mid S_t)\Big\}.$$
We further assume that $\|\hat w_t-w_t\|=o_P(n^{-1/4})$ and $\|\hat Q_t-Q_t\|=o_P(n^{-1/4})$, where $\|\hat w_t-w_t\|$ and $\|\hat Q_t-Q_t\|$ denote the root MSEs (RMSEs) between $\hat w_t(H_{t-k:t})$ and $w(H_{t-k:t})$, and between $\hat Q_t(S,A)$ and $Q_t(S,A)$, respectively. According to Neyman orthogonality, both the estimated Q-function and the estimated behavior policy can be asymptotically replaced by their oracle values (Chernozhukov et al., 2018) without changing the OPE estimator's asymptotic MSE (see also equations (6)–(9) for a detailed explanation). Therefore, we obtain
$$\hat v_{\mathrm{MIS}}(k)=\mathbb{E}_{n}\Big\{\sum_{t=0}^{T}w_t\big(R_t-Q_t(S_t,A_t)\big)+w_{t-1}Q_t(S_t,\pi_e)\Big\}+o_P(n^{-1/2}).$$
After rearranging the predominant term, we obtain that $\hat v_{\mathrm{MIS}}(k)$ asymptotically equals
$$\hat v_{\mathrm{MIS}}(k)=\mathbb{E}_{n}Q_0(S_0,\pi_e)+\mathbb{E}_{n}\sum_{t=0}^{T}w_t(A_{t-k:t},S_{t-k:t})\big(R_t-Q_t(S_t,A_t)+Q_{t+1}(S_{t+1},\pi_e)\big).$$
If the Q-function is correctly specified, then
$$E\big[w_t(A_{t-k:t},S_{t-k:t})\big(R_t-Q_t(S_t,A_t)+Q_{t+1}(S_{t+1},\pi_e)\big)\,\big|\,S_{t-k:t},A_{t-k:t}\big]=w_t(A_{t-k:t},S_{t-k:t})\,E\big[R_t-Q_t(S_t,A_t)+Q_{t+1}(S_{t+1},\pi_e)\,\big|\,S_{t-k:t},A_{t-k:t}\big]=0.\qquad(24)$$
Denote $U_t=R_t-Q^{\pi_e}(S_t,A_t)+V^{\pi_e}(S_{t+1})$. Then for any $t'<t$, since each term has mean zero by (24),
$$\mathrm{Cov}\big(w_t(A_{t-k:t},S_{t-k:t})U_t,\;w_{t'}(A_{t'-k:t'},S_{t'-k:t'})U_{t'}\big)=E\big\{w_t(A_{t-k:t},S_{t-k:t})\,w_{t'}(A_{t'-k:t'},S_{t'-k:t'})\,U_{t'}\,E\big[U_t\,\big|\,S_{t-k:t},A_{t-k:t}\big]\big\}=0.\qquad(25)$$
It follows that
$$\mathrm{Var}_A(\hat v_{\mathrm{MIS}}(k))=\frac{1}{n}\mathrm{Var}\Big(Q_0(S_0,\pi_e)+\sum_{t=0}^{T}w_t(A_{t-k:t},S_{t-k:t})U_t\Big)=\frac{1}{n}\mathrm{Var}\big(Q_0(S_0,$$
$$\pi_e)\big)+\frac{1}{n}\sum_{t=0}^{T}\mathrm{Var}\big(w_t(A_{t-k:t},S_{t-k:t})U_t\big)$$
$$=\frac{1}{n}\mathrm{Var}(Q_0(S_0,\pi_e))+\frac{1}{n}\sum_{t=0}^{T}\mathrm{Var}\big(w_t(A_{t-k:t},S_{t-k:t})E[U_t\mid A_{t-k:t},S_{t-k:t}]\big)+\frac{1}{n}\sum_{t=0}^{T}E\big\{w_t^{2}(A_{t-k:t},S_{t-k:t})\,\mathrm{Var}[U_t\mid A_{t-k:t},S_{t-k:t}]\big\}$$
$$=\frac{1}{n}\mathrm{Var}(Q_0(S_0,\pi_e))+\frac{1}{n}\sum_{t=0}^{T}E\big\{w_t^{2}(A_{t-k:t},S_{t-k:t})\,\sigma^{2}(A_t,S_t)\big\},\qquad(26)$$
where $\sigma^{2}(A_t,S_t)=\mathrm{Var}(U_t\mid A_{t-k:t},S_{t-k:t})$. Therefore, for any $k'<k$,
$$E\big\{w_t^{2}(A_{t-k':t},S_{t-k':t})\,\sigma^{2}(A_t,S_t)\big\}=E\big\{\big(E[w_t(A_{t-k:t},S_{t-k:t})\mid A_{t-k':t},S_{t-k':t}]\big)^{2}\sigma^{2}(A_t,S_t)\big\}\le E\big\{E[w_t^{2}(A_{t-k:t},S_{t-k:t})\mid A_{t-k':t},S_{t-k':t}]\,\sigma^{2}(A_t,S_t)\big\}=E\big\{w_t^{2}(A_{t-k:t},S_{t-k:t})\,\sigma^{2}(A_t,S_t)\big\},\qquad(27)$$
where the first equality is based on the fact that $w_t(A_{t-k':t},S_{t-k':t})=E[\lambda_t\mid A_{t-k':t},S_{t-k':t}]=E\big\{E[\lambda_t\mid A_{t-k:t},S_{t-k:t}]\mid A_{t-k':t},S_{t-k':t}\big\}=E[w_t(A_{t-k:t},S_{t-k:t})\mid A_{t-k':t},S_{t-k':t}]$, and the inequality follows from Jensen's inequality. Thus, combining equations (26) and (27), we obtain $\mathrm{Var}_A(\hat v_{\mathrm{MIS}}(k'))\le\mathrm{Var}_A(\hat v_{\mathrm{MIS}}(k))$.

C.3. Assumptions and proof of Theorem 9

Regularity conditions for Theorem 9. Suppose $\Theta$ is the parameter space equipped with a norm $\|\cdot\|$ ($\Theta$ is not necessarily finite-dimensional). Let $\mathcal{H}$ denote the set of all possible trajectories and $\theta_0$ the true parameter. For a trajectory $H$, let $L(H,\theta)$ be the log-likelihood function, and let $s(H,\theta)[\cdot]$ be the Fréchet derivative of $L(H,\theta)$ with respect to $\theta$: for any $h\in\Theta$, $s(H,\theta)[h]$ is defined by
$$s(H,\theta)[h]=\frac{\partial}{\partial\eta}L(H,\theta+\eta h)\Big|_{\eta=0}.$$
Let $P$ be the probability measure of $H$ induced by the behavior policy $\pi_{\theta_0}$, and let $P_n$ be the corresponding empirical probability measure. We impose the following regularity conditions.

Assumption 8. For any $\theta$ in a neighbourhood of $\theta_0$, $P\{s(H,\theta)-s(H,\theta_0)\}=O(\|\theta-\theta_0\|)$.

Assumption 9. For any $\theta\in\Theta$, there exists a corresponding $\theta_{0n}$ in the sieve space $\Theta_n$ such that $\|\theta-\theta_{0n}\|=o(n^{-1/4})$.

Assumption 10. $\hat\theta_n$ is a consistent estimator of $\theta_0$ with $\|\hat\theta_n-\theta_0\|=o_P(n^{-1/4})$.

Assumption 11. For some $\delta>0$, the function class $\mathcal{F}_\delta=\{s(H,\theta)-s(H,\theta_0):\|\theta-\theta_0\|<\delta,\ H\in\mathcal{H}\}$ is a $P$-Donsker class.

Assumption 12.
$s(H,\theta)[h]$ is Fréchet differentiable at the true parameter $\theta_0$ with a continuous derivative $\dot s_{\theta_0}[\cdot,h]$ which satisfies
$$P_n\big\{s(H,\hat\theta_n)[h]-s(H,\theta_0)[h]-\dot s_{\theta_0}[\hat\theta_n-\theta_0,h]\big\}=o_P(n^{-1/2}).$$

Assumption 13. There exists a least favorable direction $g_0\in\Theta$ such that for any $h\in\Theta$,
$$E\Big\{\Big(G_T\frac{p(H,\pi_e)}{p(H,\theta_0)}-s(H,\theta_0)[g_0]\Big)s(H,\theta_0)[h]\Big\}=0.$$

We make some remarks on these assumptions. Assumptions 9 and 10 impose restrictions on the sieve space, requiring that it approximate the parameter space well. Such conditions hold for sieve spaces including B-splines and deep neural networks. Assumptions 11 and 12 are commonly required in the semiparametric literature (Zhao & Zhang, 2017), restricting the complexity of the function class around the true parameter. Assumption 13 indicates that there exists a projection of $\chi(H)p(H,\pi_e)/p(H,\theta_0)$ onto the tangent space spanned by $s(H,\theta_0)[\cdot]$. This condition naturally holds when the parameter space is finite-dimensional or the tangent space is a closed subspace.

Proof of Theorem 9. We first show that for any $h\in\Theta$:

(i) $\sqrt n\,(P_n-P)\big(s(H,\hat\theta_n)[h]-s(H,\theta_0)[h]\big)=o_P(1)$;

(ii) $P\{s(H,\theta_0)[h]\}=o_P(n^{-1/2})$ and $P_n\{s(H,\hat\theta_n)[h]\}=o_P(n^{-1/2})$.

For part (i), note that $P_n\big\{(s(H,\hat\theta_n)[h]-s(H,\theta_0)[h])^{2}\big\}=P\,d^{2}(\hat\theta_n,\theta_0)=o(1)$. Combining this with Assumption 11, the conclusion follows directly from Lemma 13.3 of Kosorok (2008). For part (ii), since $s(H,\theta)$ is the Fréchet derivative of the log-likelihood, it follows that $P\{s(H,\theta_0)[h]\}=0=o_P(n^{-1/2})$. Meanwhile, Assumption 9 indicates that there exists
$\tilde h\in\Theta_n$ such that $d(\tilde h,h)=o(n^{-1/4})$. Since $\hat\theta_n$ maximizes $P_nL(H,\theta)$ in $\Theta_n$, it follows that $P_n\{s(H,\hat\theta_n)[\tilde h]\}=0$. Therefore,
$$P_n\{s(H,\hat\theta_n)[h]\}=P_n\big\{s(H,\hat\theta_n)[h]-s(H,\hat\theta_n)[\tilde h]\big\},$$
which can be further decomposed into three parts:
$$P_n\{s(H,\hat\theta_n)[h]\}=\Big[(P_n-P)\big(s(H,\hat\theta_n)-s(H,\theta_0)\big)[h]-(P_n-P)\big(s(H,\hat\theta_n)-s(H,\theta_0)\big)[\tilde h]\Big]+P_n\big\{s(H,\theta_0)[h]-s(H,\theta_0)[\tilde h]\big\}+P\big\{(s(H,\hat\theta_n)-s(H,\theta_0))[h]-(s(H,\hat\theta_n)-s(H,\theta_0))[\tilde h]\big\}=:J_1+J_2+J_3.$$
For $J_1$, following a similar argument as in the proof of claim (i), we obtain $J_1=o_P(n^{-1/2})$. For $J_2$, $E(\sqrt n\,J_2)^{2}=O(d(h,\tilde h)^{2})=o(1)$, which indicates $J_2=o_P(n^{-1/2})$. For $J_3$, direct calculation yields $E|J_3|\lesssim d(\theta_0,\hat\theta_n)\,d(h,\tilde h)=o(n^{-1/2})$. Therefore, $P_n\{s(H,\hat\theta_n)[h]\}=J_1+J_2+J_3=o_P(n^{-1/2})$.

Combining claims (i), (ii) and Assumption 12, we obtain
$$P_n\{s(H,\theta_0)[h]\}=(P_n-P)\big(s(H,\theta_0)-s(H,\hat\theta_n)\big)[h]-P_n\big\{s(H,\hat\theta_n)[h]\big\}+P_n\big\{s(H,\hat\theta_n)[h]-s(H,\theta_0)[h]\big\}=-P_n\big\{\dot s_{\theta_0}[\hat\theta_n-\theta_0,h]\big\}+o_P(n^{-1/2}).\qquad(28)$$
Taking $h=\hat\theta_n-\theta_0$ in (28) yields
$$E\Big\{G_T\frac{p(H,\pi_e)}{p(H;\theta_0)}s(H,\theta_0)[\hat\theta_n-\theta_0]\Big\}=-E\big\{s(H,\theta_0)[g_0]\,s(H,\theta_0)[\hat\theta_n-\theta_0]\big\}=\mathbb{E}_n\big\{\dot s_{\theta_0}[\hat\theta_n-\theta_0,g_0]\big\}=-P_n\{s(H,\theta_0)[g_0]\}+o_P(n^{-1/2}).\qquad(29)$$
Then, according to the Taylor expansion at $\theta_0$, we obtain
$$\hat v_{\mathrm{IS}}(k)-\hat v^{\dagger}_{\mathrm{IS}}=\frac{1}{n}\sum_{j=1}^{n}G_{j,T}\frac{p(H_j,\pi_e)}{p(H_j;\theta_0)}s(H_j,\theta_0)[\hat\theta_n-\theta_0]+O_P(\|\hat\theta_n-\theta_0\|^{2})=E\Big\{G_T\frac{p(H,\pi_e)}{p(H;\theta_0)}s(H,\theta_0)[\hat\theta_n-\theta_0]\Big\}+o_P(n^{-1/2})=-P_n\{s(H,\theta_0)[g_0]\}+o_P(n^{-1/2}),$$
where the second equality holds because of Assumption 10. Denote the main term on the right-hand side by $\hat v_3$. Then, by Assumption 13, we have $\mathrm{Cov}(v^{\dagger}_{\mathrm{IS}},\hat v_3)=0$. By the central limit theorem, $\mathrm{Var}(v^{\dagger}_{\mathrm{IS}})$ and $P_n\{s(H,\theta_0)[g_0]\}$ are of order $O(1/n)$. Thus, we have
$$\mathrm{Var}_A(\hat v_{\mathrm{OIS}}(k))=\mathrm{Var}_A(\hat v^{\dagger}_{\mathrm{OIS}})-\mathrm{Var}_A(v_3)\le\mathrm{Var}_A(\hat v^{\dagger}_{\mathrm{OIS}}),$$
which completes the first inequality in Theorem 9. Following a very similar argument to the proof of $\mathrm{Var}_A(\hat v_{\mathrm{OIS}}(k))\le\mathrm{Var}_A(\hat v^{\dagger}_{\mathrm{OIS}})$, we can prove that $\mathrm{Var}_A(\hat v_{\mathrm{SIS}}(k))\le\mathrm{Var}_A(\hat v^{\dagger}_{\mathrm{SIS}})$ and $\mathrm{Var}_A(\hat v_{\mathrm{DR}}(k))\le\mathrm{Var}_A(\hat v^{\dagger}_{\mathrm{DR}})$; hence, we omit the details of the proof.
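The conditioning mechanism behind equations (26)–(27) above, where a weight built from a coarser history is the conditional expectation of the finer-history weight, so Jensen's inequality bounds its second moment, can be checked with a small simulation. The discrete variables below are toy stand-ins, not the actual OPE importance weights:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# "Fine" information: the pair (x1, x2); "coarse" information: x1 alone.
x1 = rng.integers(0, 3, size=n)
x2 = rng.integers(0, 3, size=n)
w_fine = 0.5 + 0.3 * x1 + 0.2 * x2 + rng.normal(scale=0.1, size=n)

# Coarse weight: conditional expectation of the fine weight given x1 only,
# estimated by group means (analogue of w_t(k') = E[w_t(k) | coarser history]).
group_mean = np.array([w_fine[x1 == g].mean() for g in range(3)])
w_coarse = group_mean[x1]

# Tower property: both weights share the same mean ...
print(w_fine.mean(), w_coarse.mean())
# ... while Jensen gives E[(E[w|x1])^2] <= E[w^2]: coarser conditioning
# shrinks the second moment, hence the variance of the weighted estimator.
print(np.mean(w_coarse**2) <= np.mean(w_fine**2))
```

This is exactly the ordering that drives $\mathrm{Var}_A(\hat v_{\mathrm{MIS}}(k'))\le\mathrm{Var}_A(\hat v_{\mathrm{MIS}}(k))$.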
arXiv:2505.22503v1 [cs.RO] 28 May 2025

From Strangers to Assistants: Fast Desire Alignment for Embodied Agent-User Adaptation

Yuanfei Wang12, Xinju Huang3, Fangwei Zhong3, Yaodong Yang24, Yizhou Wang1456, Yuanpei Chen24B, Hao Dong1B

Abstract

While embodied agents have made significant progress in performing complex physical tasks, real-world applications demand more than pure task execution. The agents must collaborate with unfamiliar agents and human users, whose goals are often vague and implicit. In such settings, interpreting ambiguous instructions and uncovering underlying desires is essential for effective assistance. Therefore, fast and accurate desire alignment becomes a critical capability for embodied agents. In this work, we first develop a home assistance simulation environment, HA-Desire, that integrates an LLM-driven human user agent exhibiting realistic value-driven goal selection and communication. The ego agent must interact with this proxy user to infer and adapt to the user's latent desires. To achieve this, we present a novel framework, FAMER, for fast desire alignment, which introduces a desire-based mental reasoning mechanism to identify user intent and filter desire-irrelevant actions. We further design a reflection-based communication module that reduces redundant inquiries, and incorporate goal-relevant information extraction with memory persistence to improve information reuse and reduce unnecessary exploration. Extensive experiments demonstrate that our framework significantly enhances both task execution and communication efficiency, enabling embodied agents to quickly adapt to user-specific desires in complex embodied environments.

1 Introduction

Embodied AI has seen rapid progress in recent years, driven by the collection of datasets [5,4,27,12,37] and the development of large vision-language-action (VLA) models [3,10,20,35].
1School of Computer Science, Peking University. 2PKU-PsiBot Joint Lab. 3School of Artificial Intelligence, Beijing Normal University. 4Institute for Artificial Intelligence, Peking University. 5Nat'l Eng. Research Center of Visual Technology, Peking University. 6State Key Laboratory of General Artificial Intelligence, Peking University. BCorrespondence to: Hao Dong <hao.dong@pku.edu.cn>, Yuanpei Chen <yuanpei.chen312@gmail.com>. Preprint. Under review.

These advances have paved the way for general-purpose robots capable of performing complex manipulation tasks in the physical world. However, real-world deployment of embodied agents requires more than physical capabilities. It also demands the ability to interact effectively with diverse human users. One of the key challenges in such interactions lies in the variability of human preferences, values, and behaviors. Unlike physical tasks with well-defined goals, human desires are often ambiguous, context-dependent, and implicit. For an embodied agent to be truly helpful, it must be able to rapidly infer and align with the user's underlying desires even when explicit instructions are vague or lacking. A prime example is home assistant robots. Even if these robots are trained on broad human-centric datasets, they inevitably face unfamiliar users whose specific values and preferences are unknown at deployment. To offer effective assistance, the agent must infer and adapt to these user-specific attributes. In this way, the agent minimizes repetitive communication and demonstrates proactive, personalized behavior, similar to a considerate human assistant. For instance, as illustrated in Figure 1, a robot enters a new home without prior knowledge of the resident. Over time, it gradually learns that the user is allergic to caffeine and prefers something refreshing for breakfast due to poor sleep caused by
a heavy workload. Therefore, the robot infers that the user wants juice and serves it without needing to ask. Such behavior highlights the necessity of rapid and accurate desire alignment in embodied assistance, enabling robots to build trust and deliver truly helpful service. Previous works have investigated collaboration with unfamiliar partners under the paradigms of ad-hoc teamwork (AHT) [ 31,32,33] and zero-shot coordination (ZSC) [ 8,16,17]. However, these efforts have primarily focused on simplified domains such as board games like Hanabi [ 2] and 2D grid-based simulations like Level-Based Foraging [ 1] and Overcooked [ 6]. These environments lack human-like, value-driven goal specification, natural communication and embodied actions, limiting their applicability to realistic embodied agent-user adaptation scenarios. To bridge this gap, we first introduce a new embodied simulation environment, HA-Desire (Home Assistance with diverse Desire), which is built upon the VirtualHome platform [ 28]. HA-Desire features rich 3D household scenes and diverse objects, supporting various tasks such as preparing an afternoon snack or setting the dinner table. Crucially, it includes an LLM-driven proxy human user that samples goals based on randomly assigned value attributes and vague task descriptions. The user interacts with the ego agent via natural language and does not explicitly state its goals. Instead, it offers indirect, human-like hints such as “I want something sweet and crunchy.” This setup requires the ego agent to perform strategic inference and interactive reasoning to uncover the user’s latent desires and execute a task that fully satisfies them. To tackle this challenging problem of fast agent-user adaptation, we propose FAMER (Fast Adaptation via MEntalReasoning), a novel framework that leverages the reasoning capabilities and commonsense knowledge of large vision-language models to improve both communication efficiency and task execution. 
At the core of FAMER is a desire-centered mental reasoning module that extracts confirmed goals from user messages and infers the user’s underlying mental state, including values, preferences, and latent desires. To reduce redundant communications, FAMER also incorporates a reflection-based communication mechanism that prompts the agent to reason about what has already been learned and to ask only for missing or unconfirmed information. Additionally, FAMER includes a goal-relevant information extraction module that identifies critical task-related details, such as object containers or room locations, and stores them in a persistent memory across episodes. This enables the agent to reuse previously gathered knowledge and avoid unnecessary exploration. Together, these components allow the ego agent to rapidly align with the user’s desires and plan efficiently in complex, multi-step embodied tasks. We evaluate FAMER on two representative tasks: Snack & Table, in our HA-Desire environment. Each task is tested under two settings: Medium and Large, denoting the number of goals to satisfy. Extensive quantitative and qualitative experiments show that FAMER significantly outperforms baselines in task completion score, execution efficiency, and communication cost. Ablation studies further highlight the contribution of each key component in the framework. In summary, our contributions include: • We formulate the novel problem of rapid adaptation to value-driven, unknown users in em- bodied settings, and introduce HA-Desire, a 3D simulation environment featuring naturalistic user interactions for evaluating agent-user
adaptation.

•We propose FAMER, a new framework that integrates desire-centered mental state reasoning, reflection-based efficient communication, and goal-related key information extraction to enable fast desire alignment for embodied agents.

•We demonstrate the effectiveness of our proposed environment and framework through extensive quantitative and qualitative experiments.

Figure 1: An illustration of Embodied Agent-User Adaptation. The embodied home assistant robot encounters a new user with unknown values and preferences. Through interaction over time, the agent learns the user's aversion to caffeine and preference for refreshing drinks in the morning due to inadequate sleep. By aligning with the user's implicit desires, the agent proactively serves juice without being explicitly instructed, demonstrating high-quality assistant service.

2 Related Work

Value Alignment. Value alignment has been extensively studied in both language modeling and agent design. In the context of LLMs, alignment techniques such as RLHF [26,9,19] aim to align model outputs with human preferences, but these efforts primarily focus on static, text-based tasks and do not address the challenges of dynamic, embodied interactions. In human-AI collaboration, value alignment involves inferring user preferences through observation or feedback [40,15,13]. More closely related to our setting are mental reasoning agents inspired by Theory of Mind [30,38], which model other agents' beliefs and desires to support assistance. D2A [36] simulates human desires using LLMs, but is limited to text-based environments. CHAIC [11] introduces an embodied social intelligence challenge that focuses on reasoning under physical constraints, but does not address the diversity of human values and goals.
In contrast, our work introduces an integrated embodied simulation platform with naturalistic, value-driven goal generation and communication. Adaptive Agents Adaptation in multi-agent settings has been studied under the paradigms of zero- shot coordination (ZSC) [ 17,16,8,23,34] and ad-hoc teamwork (AHT) [ 33,31,7,25,24], where agents must coordinate with unseen partners without prior agreement. While these approaches are effective in structured domains such as Hanabi [ 2] and Overcooked [ 6], they rely on symbolic observations, making them less suitable for complex, embodied human-agent collaboration. More recently, LLM-based agent frameworks [ 21,39,42,41,22] have demonstrated impressive capabilities in reasoning and planning within interactive environments. However, most assume known goals or static user preferences and lack mechanisms for inferring latent values through interaction. Our work builds on this line by addressing the challenge of fast adaptation to unknown, value-driven user goals via desire inference, memory utilization, and efficient, human-like dialogue in rich embodied tasks. 3 Embodied Home Assistance Simulation Environment We present HA-Desire (Home Assistance with diverse Desire), a novel embodied simulation environ- ment designed to study user-specific adaptation in realistic home scenarios. As illustrated in Figure 2, HA-Desire is built upon a richly detailed 3D environment populated with diverse object assets and room layouts, allowing for visually grounded tasks with high variability. To simulate real-world agent-user adaptation challenges, we instantiate a proxy human user within the environment. This user is endowed with a set of latent value attributes and receives only a vague task
description. Based on these values, the user samples a set of desire-related goals from a potential goal set. Importantly, the user does not directly reveal these goals but instead provides indirect hints about preferences and intentions in response to the ego agent's inquiries. We describe the problem formulation and proxy human user model in more detail below.

3.1 Problem Formulation

In the desire-centered agent-user adaptation problem, the task goal is not explicitly fixed, but the number of goals is predefined. Instead, there exists a potential goal set $G_p$, from which the task goal must be sampled by the human user $H$ and inferred by the ego agent $E$. Given a vague task description $T$, the proxy human user first samples a set of value attributes $V$ from a predefined value space. Based on these values and the task description, the user then samples a set of desire-driven goals $G=G(V,G_p,T)\subset G_p$.

Figure 2: Overview of the HA-Desire environment. The simulation environment contains diverse objects and scenes. The proxy human user samples value attributes from a predefined space, which guides goal selection from a potential goal set via LLM. The user is constrained from directly revealing the true goals and instead communicates through desire-centered hints. This setup encourages the ego agent to infer user intent through interactive reasoning rather than relying on explicit instructions.

The true goal set $G$ is latent and not directly observable by the ego agent, which must infer it through interaction. The ego agent and the human user can communicate by exchanging messages $M_E$ and $M_H$, respectively.
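The sampling pipeline $G=G(V,G_p,T)\subset G_p$ above can be sketched with a simple rule-based scorer standing in for the LLM. The goal names, value tags, and scoring rule below are illustrative assumptions, not the paper's actual prompts or assets:

```python
import random

# Hypothetical potential goal set, each goal tagged with toy value affinities.
POTENTIAL_GOALS = {
    "juice":    {"Thirsty": 2, "Fruitarian": 1},
    "wine":     {"Thirsty": 1, "Alcoholic": 2},
    "cupcake":  {"Hungry": 1, "SweetTooth": 2},
    "apple":    {"Hungry": 1, "Fruitarian": 2},
    "crackers": {"Hungry": 2},
}
LEVELS = {"Not": 0, "Somewhat": 1, "Very": 2}

def sample_user_goals(values, n_goals, rng):
    """Toy stand-in for the LLM-driven sampler: score each candidate goal by
    how well its tags match the sampled value attributes V, then draw n_goals
    without replacement, weighted by those scores (non-deterministic, as in
    the environment)."""
    def score(tags):
        return 1 + sum(LEVELS[values.get(v, "Not")] * lvl for v, lvl in tags.items())
    goals, pool = [], dict(POTENTIAL_GOALS)
    while len(goals) < n_goals:
        names = list(pool)
        weights = [score(pool[g]) for g in names]
        pick = rng.choices(names, weights=weights)[0]
        goals.append(pick)
        del pool[pick]
    return goals

rng = random.Random(0)
values = {"Thirsty": "Very", "SweetTooth": "Somewhat"}  # a sampled V
print(sample_user_goals(values, n_goals=2, rng=rng))
```

The weighted draw mirrors the non-determinism of the real sampler: the same value profile can yield different, but value-consistent, goal sets across episodes.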
The ego agent performs actions according to its policy $\pi(A\mid O,M_H,C,T)$, where $O$ denotes the current observation of the environment and $C$ is a cross-episode memory context, including previous actions and dialogue history. During task execution, the ego agent receives a positive reward for successfully completing a true subgoal in $G$, and a penalty for executing irrelevant or incorrect goals, which reflects a misalignment with the user's desires. The environment supports multi-episode interactions, where the agent repeatedly engages with the same user. The user's value attributes $V$ are consistent across episodes, encouraging the agent to gradually build an internal model of the user. The objective of the ego agent is to maximize cumulative reward by accurately inferring the user's latent desires, while minimizing interaction steps and communication costs. This promotes both task and communication efficiency, which are critical for effective real-world embodied assistance.

3.2 Value-driven Human User

As illustrated in Figure 2, the proxy human user begins by sampling discretized value attributes $V$ from a predefined task-related value space. For example, in the Prepare Snack task, the value space spans five dimensions: Hungry, Thirsty, SweetTooth, Fruitarian, and Alcoholic, each taking one of three discrete levels: Not, Somewhat, or Very. These attributes reflect the user's latent desires. Once the value attributes are sampled, the user invokes a large language model
to simulate realistic goal selection. Conditioned on the vague task description $T$ and the sampled values $V$, the LLM generates a set of corresponding desire-related goals $G=G(V,G_p,T)\subset G_p$. Since multiple goals may align with the same value attribute, this sampling process is intentionally non-deterministic, mirroring the variability and ambiguity of human decision-making. To simulate natural and indirect human communication, the user is constrained via LLM prompting not to reveal the true goal set directly. Instead, it responds with value-driven hints that reflect its preferences and desires. This encourages the ego agent to reason about the user's intent through interaction, rather than relying on explicit instructions. The detailed LLM prompting strategy used to sample goals and generate user responses is provided in the Appendix.

4 Method

To enable fast and effective adaptation to user-specific desires, we propose a novel framework, FAMER (Fast Adaptation via MEntal Reasoning), which leverages the reasoning capabilities and commonsense knowledge embedded in large VLMs. In this work, we utilize GPT-4o [18].

Figure 3: Overview of the FAMER framework. FAMER comprises three key components: KeyInfo Extraction, Desire-Centered Mental Reasoning (including Goal Confirmation and Desire Inference), and Efficient Communication.
These are supported by a memory module, perception module, and planning module, which together form an integrated pipeline for embodied agent-user adaptation.

As illustrated in Figure 3, FAMER integrates three core components: Key Information Extraction, Desire-Centered Mental Reasoning, and Efficient Communication. These modules work in concert to help the ego agent infer user intent, plan accurately, and minimize redundant interactions. We detail each component in the following subsections.

4.1 Information Extraction

In HA-Desire, the ego agent receives first-person RGB-D images as observations. To extract structured information from these inputs, we employ a perception module based on Mask R-CNN [14], trained on collected scene images [42]. The module first predicts instance segmentation masks from the RGB image and then constructs 3D point clouds using the RGB-D data. These outputs are used to build a scene graph that encodes object locations, categories, and spatial relationships.

We then introduce the first core component of FAMER: Key Information Extraction. With the confirmed and inferred goals as context, this module filters and stores goal-relevant information extracted from the scene graph into a dedicated cross-episode memory buffer. For example, if the agent identifies that juice is located in the fridge within the kitchen, it stores the structured entry "juice in fridge in kitchen" in memory. During subsequent planning phases, this stored information is used to guide attention to known facts and reduce redundant exploration. As a result, the agent is better equipped to reuse previously acquired knowledge across episodes, improving task efficiency.

4.2 Desire-centered Mental Reasoning

This module is composed of two interconnected components: Goal Confirmation and Desire Inference, as illustrated in Figure 3. Together, they enable
the agent to infer the user's underlying desires.

The Goal Confirmation component extracts confirmed goals from the user's responses via VLM reasoning. For example, if the agent asks, "Is juice what you want?" and the user replies, "Correct! Try to look for something crunchy," the system confirms that juice is one of the user's desired items. This process grounds part of the goal set and reduces future uncertainty.

Following confirmation, the Desire Inference component leverages the action and dialogue history, confirmed goals, and past episode goals to reason about the user's mental state, including value attributes and desires. Since user values remain consistent across episodes, the agent can incrementally improve its inference accuracy over time. By maintaining an internal model of the user, the agent can focus on narrowing down the remaining potential goals and avoid repetitive or redundant guesses.

With both inferred and confirmed goals in hand, the agent filters out irrelevant actions during planning. As shown in Figure 3, if the current goals do not involve a toothbrush or candle, then actions involving those objects are ignored. This reduces distraction and helps the agent maintain focus on goal-relevant objects and activities, thereby improving planning efficiency and task performance.

4.3 Efficient Communication

Excessive or repetitive communication can diminish user satisfaction and hinder overall system efficiency. To address this, FAMER integrates an Efficient Communication module that promotes purposeful, context-aware dialogue between the agent and the user. This module leverages both the dialogue history and the agent's inferred model of the user's mental state to decide when and what to ask. Before initiating a new query, the agent performs an internal reflection over its current knowledge: what goals have been confirmed, which value attributes have been inferred, and what uncertainties remain.
This reflective mechanism helps avoid redundant or previously resolved questions, particularly across multi-episode interactions. When communication is necessary, the agent formulates targeted, desire-aligned questions aimed at resolving specific ambiguities. For example, if the agent has inferred a preference for sweet items but is uncertain about texture, it may ask, “Is cupcake what you want?” instead of issuing vague or open-ended queries. This focused interaction strategy minimizes the communication burden on the user while enabling the agent to efficiently acquire high-value information.

5 Experiment

5.1 Tasks & Metrics

We evaluate FAMER in two representative tasks instantiated within our proposed HA-Desire environment: Prepare Afternoon Snack and Set Up Dinner Table. Each task is associated with a five-dimensional value space that governs user preferences. The Snack task includes 10 potential goals, while the Table task contains 8. Each task is further divided into two levels: Medium and Large. In the Medium setting, the number of target goals is 2, and the maximum episode length is 200 steps. In the Large setting, the agent must satisfy 4 goals within 300 steps. For example, the Snack-M task involves a total of C(10,2) = 45 possible goal combinations. The agent is allowed to interact with the user for 3 episodes, and the user’s value attributes remain fixed throughout the entire interaction. We evaluate performance using the following metrics:
Score: Given N total goals, the agent receives a reward of 1/N for each correct goal achieved. Completing an incorrect or distracting goal incurs a penalty of −1/(2N). The maximum achievable score per episode is 1, corresponding to the completion of all goals without any mistakes.

Step: The total number of environment steps taken by the agent within a single episode.

Communication Cost: The total number of tokens exchanged in messages between the agent and the user, including both queries and responses.

Notably, HA-Desire supports diverse objects and scenes, and goal generation is automated, making it easy to extend to other tasks. Further details of the tasks are provided in the Appendix.

5.2 Baselines & Ablations

We compare FAMER against three baselines and evaluate its components through three ablated variants.

CoELA [42]: An LLM-based multi-agent cooperation framework that includes perception, communication, planning, memory, and execution modules. In its original form, CoELA assumes full observability of goals. To adapt it to our setting, we modify its prompting so that the agent is only aware of the potential goal set and the number of target goals.

ProAgent [41]: A proactive LLM-based agent framework designed for collaboration in fixed-goal tasks. It lacks mechanisms for handling goal uncertainty or communication. We extend ProAgent by adding cross-episode memory and explicit feedback to support adaptation to latent user desires.

Figure 4: Quantitative results in Snack-M and Snack-L.
Figure 5: Quantitative results in Table-M and Table-L.

MHP: An MCTS-based Hierarchical goal-sampling Planner adapted from the Watch-and-Help Challenge [29]. We introduce subgoal sampling to handle uncertain goals and maintain a cross-episode success memory. Like ProAgent, MHP does not support communication.

FAMER w/o Desire: Removes the Goal Confirmation, Desire Inference, and goal-related action filtering modules.
Communication quality is reduced due to the lack of inferred or confirmed goals.

FAMER w/o EC: Disables the Efficient Communication module, leading to less targeted and potentially redundant dialogue.

FAMER w/o KeyInfo: Removes the Key Information Extraction module, preventing the agent from leveraging cross-episode memory for known object-location pairs.

5.3 Quantitative Results

We evaluate performance using the three metrics on both Snack and Table tasks at two difficulty levels: Medium and Large. Results for the Snack-M and Snack-L tasks are shown in Figure 4, while those for Table-M and Table-L are presented in Figure 5. For each method, the three adjacent bars represent performance across three consecutive episodes with the same user.

Figure 6: Ablation results in Snack-M.

From the figures, it is evident that FAMER consistently outperforms all three baselines across all metrics. In terms of score, FAMER achieves the maximum value of 1.0 in nearly every episode, indicating that it successfully completes all desired goals while avoiding incorrect ones. CoELA performs second best but falls short due to its reliance on long-context LLM prompting alone, which leads to occasional misinterpretation of user desires. This limitation is further illustrated in the qualitative analysis section. MHP and ProAgent perform the worst, as they lack communication capabilities and rely solely on trial-and-error to identify goals.
Such an inefficient process often incurs penalties. Notably, their performance gradually improves across episodes, reflecting slow adaptation to latent user desires through repeated interactions. FAMER also demonstrates superior efficiency in execution, requiring significantly fewer environment steps to complete tasks. MHP and ProAgent often exhaust the full episode length without completing the goals. This improvement is a direct result of FAMER’s integrated components: desire-centered reasoning rapidly narrows the goal space, efficient communication accelerates goal confirmation, and key information extraction reduces unnecessary exploration by reusing known object-location pairs.

In terms of communication cost, FAMER significantly outperforms CoELA, as shown in Figures 4 and 5. This efficiency stems from FAMER’s reflection-based communication strategy, which avoids repeated or redundant questions. In contrast, CoELA frequently issues similar or vague queries due to its lack of explicit goal-tracking mechanisms.

Ablation Study. We further evaluate the contribution of each FAMER component through ablation studies on the Snack-M task. As shown in Figure 6, all three ablated variants show performance degradation across the evaluated metrics. Among them, FAMER w/o KeyInfo maintains the same score as FAMER but requires more steps, demonstrating that the Key Information Extraction module improves efficiency by reducing unnecessary exploration rather than directly influencing goal correctness. A significant performance drop is observed in both FAMER w/o Desire and FAMER w/o EC, highlighting that desire modeling and efficient communication are central to agent-user adaptation. Without goal confirmation, desire inference, and goal-aligned action filtering, the agent struggles to accurately interpret user intent and often executes incorrect or inefficient actions.
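The Score values reported above follow the definition from Section 5.1: with N target goals, each correct goal adds 1/N and each incorrect or distracting goal subtracts 1/(2N). As a small illustrative sketch (the function name and signature are ours, only the formula comes from the paper):

```python
# Illustrative computation of the episode Score defined in Section 5.1.
# Function name and signature are our own; the paper specifies only the formula.

def episode_score(n_total_goals, n_correct, n_incorrect):
    """Score = n_correct * (1/N) - n_incorrect * (1/(2N))."""
    n = n_total_goals
    return n_correct / n - n_incorrect / (2 * n)

# Snack-L has 4 target goals: completing all four cleanly gives the maximum, 1.0.
print(episode_score(4, 4, 0))  # 1.0
# Two correct goals plus one distracting goal: 2/4 - 1/8 = 0.375.
print(episode_score(4, 2, 1))  # 0.375
```

The asymmetry (penalty is half the reward per goal) means trial-and-error guessing, as MHP and ProAgent do, still loses score on each wrong attempt, which is why their curves improve only slowly across episodes.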
5.4 Qualitative Analysis

To further highlight FAMER’s strengths, we present an intuitive comparison against CoELA on the Snack-L task. As illustrated in Figure 7, we visualize the agents’ behavior through a series of key frames sampled across the episode. In this example, Alice refers to the ego agent and Bob refers to the human user. During task execution, the CoELA agent demonstrates three typical issues that contribute to its inferior performance.

First, CoELA struggles to correctly extract and infer desires. For instance, when the user says, “I want something crunchy or refreshing,” which aligns with chips and juice, CoELA incorrectly interprets this as a preference for apple, and retrieves it as the first item. Similarly, in step 6, when the user mentions wanting “something that complements tea,” the agent mistakenly infers cupcake instead of the intended milk. These errors illustrate CoELA’s limited ability to perform precise desire inference, particularly in the face of ambiguous or indirect language.

Second, CoELA exhibits repeated and inconsistent behavior due to insufficient integration between planning and memory. In steps 3 and 4, the agent redundantly grabs and places an apple on the coffee table, mistakenly treating it as an unfulfilled goal. This reflects a lack of attention to confirmed goals or past actions. In contrast, FAMER incorporates goal-aligned action filtering to suppress such irrelevant behaviors once a goal has been ruled out.

Figure 7: Qualitative comparison between CoELA and FAMER on the Snack-L task. The figure shows a sequence of key frames illustrating agent behavior across one episode. CoELA exhibits three typical failure modes: (1) misinterpreting latent user desires, (2) redundant actions, and (3) excessive, repetitive communication. FAMER demonstrates more accurate desire inference, targeted questions, and efficient planning. It successfully identifies all four goals with minimal trial-and-error and completes the task with fewer steps and lower communication cost.

Third, CoELA engages in redundant communication.
As shown in Figure 7, the agent repeatedly mentions creamybuns and chips to the user, even after those items have already been retrieved and confirmed. This not only wastes communication bandwidth but also reflects poor tracking of the dialogue history. In contrast, the FAMER agent asks focused questions to resolve uncertainty. Within a limited number of interactions, it successfully infers all four desired items and efficiently retrieves and places them on the coffee table. This example illustrates FAMER’s advantages in goal inference, memory-informed planning, and communication efficiency, enabling superior performance in complex scenarios.

6 Conclusion

We address the critical problem of adapting embodied agents to unfamiliar users with implicit values and desires, which is a key challenge for the real-world deployment of assistive AI. To facilitate development and evaluation in this setting, we introduce HA-Desire, a novel 3D simulation environment featuring value-driven proxy users, natural language communication, and object-rich household tasks. Unlike prior benchmarks, HA-Desire captures the complexity of real-world assistance by simulating ambiguous goal specifications and indirect, human-like communication. Building on this environment, we propose FAMER, a framework for fast desire alignment that integrates three key components: Key Information Extraction, Desire-Centered Mental Reasoning, and Efficient Communication. These modules work together to help the agent interpret vague instructions, infer user intent, and act with high efficiency while minimizing redundant dialogue. Extensive experiments on two representative tasks (Snack and Table) at varying difficulty levels show that FAMER consistently outperforms