text string | source string |
|---|---|
… EBR (Ours) 0.8533 ± 0.09, 0.4277 ± 0.02, 0.7030 ± 0.05, 0.4290 ± 0.02. Table 1. MIMIC-IV: Comparison of average performance with standard deviation across multiple modality missingness rates. taken care of, existing SOTA models perform notably poorly when the noise rate is in... | https://arxiv.org/abs/2505.22483v1 |
and low-rank simplicity bias. We established, both theoretically and empirically, that modality collapse happens due to low-rank gradient updates forcing the fusion head neurons to polysemantically encode predictive features of one modality with noisy features from another, leading to the eventual collapse of the l... | https://arxiv.org/abs/2505.22483v1 |
Georgescu, S., and Dutta, A. Learning conditional invariances through non-commutativity. In ICLR, 2024. Chen, J. and Zhang, A. HGMF: Heterogeneous graph-based fusion for multimodal data with incompleteness. In ACM SIGKDD, 2020. De Veaux, R. D. and Ungar, L. H. Multicollinearity: A tale of two nonparametric regression... | https://arxiv.org/abs/2505.22483v1 |
X. SMIL: Multimodal learning with severely missing modality. In AAAI, 2021. Ma, M., Ren, J., Zhao, L., Testuggine, D., and Peng, X. Are multimodal transformers robust to missing modality? In CVPR, 2022. Nagrani, A., Yang, S., Arnab, A., Jansen, A., Schmid, C., and Sun, C. Attention bottlenecks for multimodal fusion. ... | https://arxiv.org/abs/2505.22483v1 |
You, J., Ma, X., Ding, Y., Kochenderfer, M. J., and Leskovec, J. Handling missing data with graph representation learning. In NeurIPS, 2020. Zhang, C., Chu, X., Ma, L., Zhu, Y., Wang, Y., Wang, J., and Zhao, J. M3Care: Learning with missing modalities in multimodal healthcare data. In ACM SIGKDD, 2022. Zhao, F.,... | https://arxiv.org/abs/2505.22483v1 |
Polysemantic Collision). As the number of modalities increases, the fraction of polysemantic neurons encoding features from different modalities, for a given depth and width, increases quadratically in the number of modalities as follows: p(w_p) ≥ m(m−1)(dim f_min)² / (∑_{i=1}^{m} dim f_i)², where p(w_p) is the probability of a neuron b... | https://arxiv.org/abs/2505.22483v1 |
a higher similarity (dot-product) with the polysemantic subspace w_p. Additionally, since z_y and z_ϵ are conjugate to each other, z_y would exhibit a low similarity (dot-product) with w_p, activating in the opposite direction as that of z_ϵ. Again, from Lemma 3, since z_ϵ is entangled with w_p, when present in the input, it would ... | https://arxiv.org/abs/2505.22483v1 |
conditional cross-entropy H(x; y|z) provided (amount of unique label information held) by each feature is the same, i.e., I(x; y|z_1) = I(x; y|z_2) = ... = I(x; y|z_k), at any iteration n of SGD, the norm of the difference between w and the average gradient outer product (AGOP) of the complete weight matrix W is bounded as follows: w−... | https://arxiv.org/abs/2505.22483v1 |
basins / multimodal combinations, the steepness must come from the rank minimization term. Therefore, the combination with a steep entry must lead to a lower-rank solution. As observed by Javaloy et al. (2022), no local improvement in the minimization of the marginal loss may be due to conflicting gradients in the loc... | https://arxiv.org/abs/2505.22483v1 |
it means that the modality contains more predictive information and less noise than the rest. Knowledge distillation to align the representations of the other modalities with the target would thus denoise the other modalities, allocating a larger fraction of the feature space of the modality-specific encodings of such ... | https://arxiv.org/abs/2505.22483v1 |
space, which leads to a reduction in loss but a simultaneous increase in rank. The reason behind this saddle-geometry is the presence of noisy features from one modality in entanglement with the predictive features from another, which results in an adversarial minimax game between the two. On either side of the... | https://arxiv.org/abs/2505.22483v1 |
to discover independent causal mechanisms on the aggregate of all modalities (Parascandolo et al., 2018). The degree to which the latent factor representations of the individual modalities can be compressed, i.e., the value of ϵ in Theorem 3, depends on the size / rank of the invariant (Arjovsky et al., 2019) subspace. ... | https://arxiv.org/abs/2505.22483v1 |
5. Based on these observations, we choose weakest-to-strongest as the sequence to benchmark our KD-based implicit basis reallocation mechanism. A Closer Look at Multimodal Representation Collapse. Figure 8. avMNIST: With increasing noise rate, existing approaches suffer from modality collapse due to noisy cross-modal... | https://arxiv.org/abs/2505.22483v1 |
fusion head, it is more likely that those non-linear dependencies would be resolved and linearized in the final representation space prior to classification. Theoretically, the bound in Thm 2 is derived based on the AGOP, i.e., ∑_{x∈X} ∇φ_W(x)∇φ_W(x)^⊤, being a low-rank subspace in W (corresponding to an independent set of fea... | https://arxiv.org/abs/2505.22483v1 |
this modality-classifier on the weights of our multimodal fusion head and record the average cross-entropy (CE) in its outputs. Higher values of cross-entropy indicate higher levels of cross-modal polysemanticity, since the probability masses are spread out across multiple modalities. In Table 9, we report the results ... | https://arxiv.org/abs/2505.22483v1 |
On the Surprising Effectiveness of Large Learning Rates under Standard Width Scaling. Moritz Haas^1, Sebastian Bordt^1, Ulrike von Luxburg^1, Leena Chennuru Vankadara^2. ^1University of Tübingen, Tübingen AI Center {mo.haas,sebastian.bordt,ulrike.luxburg}@uni-tuebingen.de. ^2Gatsby Computational Neuroscience Unit, University College ... | https://arxiv.org/abs/2505.22491v1 |
1, where we see that the optimal learning rates (solid lines) for different models trained in SP decay much slower than the theoretically predicted maximal stable scaling law (dashed gray lines). This discrepancy presents a fundamental puzzle: Why does SP remain stable and effective at large learning rates, despite the... | https://arxiv.org/abs/2505.22491v1 |
on the stability of other parameterizations, particularly SP with µP learning rates (SP-full-align, Everett et al., 2024). However, we empirically show that SP-full-align does not provide learning rate transfer on vision datasets due to inherent width dependence. •We show that our width-scaling considerations provide s... | https://arxiv.org/abs/2505.22491v1 |
from the change in weights ∆W^l_t of the current layer, and the propagating updates, arising indirectly from activation changes ∆x^{l−1}_t in preceding layers: ∆h^l_t = (∆W^l_t)x^{l−1}_t [effective updates] + W^l_0(∆x^{l−1}_t) [propagating updates]. (RCC) We say the layer admits maximal stable feature learning if both the effecti... | https://arxiv.org/abs/2505.22491v1 |
barely width-dependent. Everett et al. (2024) highlight that at finite width and over extended training times, it is a priori unclear whether the pairs (∆W^l_t, x^{l−1}_t) and (W^{L+1}_0, ∆x^L_t) remain strongly correlated or whether their alignment exponents (p_{1:L+1}, q_{L+1}) should rather be thought of as dynamically changing over th... | https://arxiv.org/abs/2505.22491v1 |
in Figure C.2 validate the maximal stable learning rate scaling η = O(n^{−1}). Hence catapult dynamics alone do not suffice for explaining large learning rate stability in SP. 4. Cross-entropy loss enables stable feature learning under large learning rates in standard parameterization. First, let us briefly recall why infinit... | https://arxiv.org/abs/2505.22491v1 |
as in (c) above and, in addition, loss and loss-logit derivatives diverge, that is |L(f_t(ξ_t), y_t)| → ∞ and ∥χ_t∥_RMS → ∞. The formal statement together with a proof can be found in Appendix C.3. For an intuitive understanding of this result, note that the only effect that the choice of loss function L(f, y) has on the fi... | https://arxiv.org/abs/2505.22491v1 |
versus with η_n = η·n^{−1/2} under CE loss. Center-right versus right: Optimal learning rate (solid) and minimal unstable learning rate (dashed) for 2-layer MLPs on generated multi-index data and 8-layer MLPs on CIFAR-10 and MNIST. Optimal learning rates are often close to max-stable learning rates. Theoretical instability pr... | https://arxiv.org/abs/2505.22491v1 |
induce larger optimal learning rate scaling η_n ≈ Θ(n^{−1/2}) toward preserving input-layer feature learning at scale. ∥∆f_t∥_RMS → 0. Overall, this shows that existing infinite-width theory was indeed predictive of the maximal stable learning rate exponents under MSE loss, but that CE loss induces qualitatively more favorable be... | https://arxiv.org/abs/2505.22491v1 |
This implies that logits diverge through W^{L+1}_0 ∆x^L_t as soon as feature learning does not vanish. Instead, our theoretical results in Section 4 show that logit divergence Figure 7: Performance difference between losses is larger in SP than in µP. Optimal training accuracy of 8-layer MLPs trained with SGD on MNIST (le... | https://arxiv.org/abs/2505.22491v1 |
memorization under logit blowup may improve learning speed. How is generalization affected? Logit blowup may partially explain overconfidence in neural networks in SP, and suggests that wide networks in µP may be more calibrated. Numerical considerations. In this paper, we consider the regime of sufficient numerical pr... | https://arxiv.org/abs/2505.22491v1 |
, 2024a. Cited on page 16. Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, and Cengiz Pehlevan. Depthwise hyperparameter transfer in residual networks: Dynamics and scaling limit. In The Twelfth International Conference on Learning Representations (ICLR), 2024b. Cited on page 16. Tom Brown, Benjamin Mann, ... | https://arxiv.org/abs/2505.22491v1 |
on page 2, 3, 4, 8, 17, 18, 21, 45, 56, 57, 58, 59, 60, 61, 62. Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. arXiv:2403.08295 , 2024. Cite... | https://arxiv.org/abs/2505.22491v1 |
Surya Ganguli. Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Cited on page 17, 30. Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, and Guy Gur-Ari. The lar... | https://arxiv.org/abs/2505.22491v1 |
Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS) , 2019. Cited on page 30. Dan Qiao, Kaiqi... | https://arxiv.org/abs/2505.22491v1 |
on page 2, 3, 9, 16, 19, 20, 21, 22, 24, 27, 57. Greg Yang and Etai Littwin. Tensor programs ivb: Adaptive optimization in the infinite-width limit. arXiv:2308.01814 , 2023. Cited on page 3, 16, 23, 24. Greg Yang, Edward J Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu ... | https://arxiv.org/abs/2505.22491v1 |
… 31. E Refined coordinate checks 31. E.1 SGD 32. E.2 Adam 38. E.3 Normalization layers and Ad... | https://arxiv.org/abs/2505.22491v1 |
Finite-width deviations already accumulate after a few steps of training (Wenger et al., 2023), in particular under CE loss (Yu et al., 2025). Considerable effort has been invested in finding a descriptive infinite-width model for SP. Sohl-Dickstein et al. (2020) note that the NTK diverges under large learning rates ηn... | https://arxiv.org/abs/2505.22491v1 |
et al. (2025) confirm L^{−1}-block scaling to be the ‘correct’ scaling by providing additional desiderata and empirical evidence on Transformers. Bordelon et al. (2024a) also show that the infinite within-head dimension limit effectively leads to a single-head Transformer, and the infinite number of heads limit concentrate... | https://arxiv.org/abs/2505.22491v1 |
that, as opposed to lazy networks, feature learning networks can learn low-rank spikes in hidden layer weights/kernels to help with sparse tasks. Qiao et al. (2024) show that large learning rates induce sparse linear spline fits in univariate gradient descent training by showing that all stable minima are flat, non-interp... | https://arxiv.org/abs/2505.22491v1 |
et al., 2024) in SP, µP, NTP and mean field parameterizations with corrected layerwise learning rate scalings, questioning the infinite-width alignment predictions between weights and incoming activations at finite width over the course of long training. They recommend SP with ADAM in conjunction with µP-learning rate ... | https://arxiv.org/abs/2505.22491v1 |
learning rate scaling η(W) = η/fan_in(W). Here, all biases as well as normalization layer weights should be understood as weights to the one-dimensional input 1, hence fan_in = 1. For recovering width-independent weight decay, weight decay requires the inverse scaling wd·fan_in(W). TP-like width scaling arguments are ... | https://arxiv.org/abs/2505.22491v1 |
fixed to width-scaling dimension (input-like), width-scaling to width-scaling (hidden-like) or width-scaling to fixed dimension (output-like). Here, all bias vectors and normalization layer weights can be understood as input-like weights to the one-dimensional input 1. Any sum of length n → ∞ that occurs in indivi... | https://arxiv.org/abs/2505.22491v1 |
v = Θ(n^c), meaning ∥v∥_RMS = Θ(n^c). Assuming ∥ϕ′(h^l_t)∥_RMS = Θ(1) as for ReLU (otherwise we would get vanishing gradients), the entries of the following width-scaling vectors scale as ∂f/∂x^L_t = W^{L+1}_t = W^{L+1}_0 − ∆W^{L+1}_t = O(n^{−1/2}), ∂f/∂h^l_t = ∂f/∂x^l_t ⊙ ϕ′(h^l_t) = Θ(∂f/∂x^l_t), ∂f/∂x^{l−1}_t = (W^l_0)^⊤ ∂f/∂h^l_t − η θ_{W^l} ∑_{s=0}^{t−1} χ_s (∂f/∂h^l_s)^⊤ ∂... | https://arxiv.org/abs/2505.22491v1 |
input layer feature learning under η = Θ(1), where ˜f_t = Θ(n). In random feature models, η just determines the extremeness of memorization of the training labels, where η = Θ(n^{−1}) induces width-independence and η = ω(n^{−1}) increasing memorization. C.2 Measuring Alignment. Everett et al. (2024, Fig. 2) provide RMS-alignment ... | https://arxiv.org/abs/2505.22491v1 |
1]. ◀ Definition C.5 (Training routine). A training routine is a combination of a base learning rate η ≥ 0, a training sequence {(ξ_t, y_t)}_{t∈N} and a continuously differentiable loss function L(f(ξ), y) using the SGD update rule. ◀ Definition C.6 (Stability). We say a parametrization of an (L+1)-layer MLP is stable if 1. For eve... | https://arxiv.org/abs/2505.22491v1 |
In addition, there exists a training routine and input ξ such that ∥n^{α−1}·f_t(ξ)∥_RMS = Ω*(1). (c) Catastrophic instability (α < 1/2): For any l ∈ [L], there exists a training routine and a ξ ∈ R^{d_in}, such that ∥f_t(ξ)∥_RMS = ω*(1), ∥x^l_t(ξ)∥_RMS = ω*(1) and ∥∇^l_t∥_RMS = ω*(1). Under mean-squared error (MSE) loss, a stable regime as in (... | https://arxiv.org/abs/2505.22491v1 |
in the TP as δˆf_t = ˆθ_{L+1} δW^{L+1}_t x^L_t/n + ˆθ_{Lf} ˆW^{L+1}_{t−1} δx^L_t/n, by replacing θ′_{L+1} by ˆθ_{L+1} := θ_α θ′_{L+1} and replacing θ′_{Lf} by ˆθ_{Lf} := θ_α θ′_{Lf}, where θ_α := n^{α−1}. The adapted pre-factors ensure that δˆf remains O*(1) for a well-defined TP. The TP master theorem now implies almost sure convergence of the rescaled logit updates δˆf_t → ˚δˆf_t... | https://arxiv.org/abs/2505.22491v1 |
over the course of training, λ_t = λ_0. Maximal Update Parameterization. We define a 2-layer linear network in µP with arbitrary weight multipliers as f = ¯v¯ux, with reparameterization-invariant weights ¯u_ij ∼ N(0, 1/d_in) and ¯v_i ∼ N(0, 1/n²), ¯u = n^{−a_u} u, ¯v = n^{−a_v} v, and the original weights u, v are trained with MSE loss and layerwise ... | https://arxiv.org/abs/2505.22491v1 |
kernel does not require width-dependent scaling factors, ˜Θ(x, x′) = x^⊤(∥u∥² + ∥v∥²)x′. In other words, under these weight multipliers, width-independence in parameter space translates into width-independence in function space. ◀ Standard Parameterization. We define training a 2-layer linear network in SP with global le... | https://arxiv.org/abs/2505.22491v1 |
is a mild one that states η = O(n). We will now focus on the second one. Solving for the roots of this polynomial in η, we get η_{1,2} = (1/(2∥x∥²χf)) (n_ntp n_sp λ ± √(n²_ntp n²_sp λ² − 8∥x∥² n_ntp χf)). Assuming n²_sp n_ntp λ² ≫ 8∥x∥²χf =: C, we get n_sp n_ntp λ √(1 − C/(n²_sp n_ntp λ²)) ≈ n_sp n_ntp λ (1 − C/(2n²_sp n_ntp λ²) − (1/4)(C/(n²_sp n_ntp λ²))²). In that case η_1 ≈ 2/(n_sp λ) and... | https://arxiv.org/abs/2505.22491v1 |
across training) Difference between finite networks and their infinite-width limit from the same initial condition across 100 random seeds for µP (line), SP (dotted line) and NTP (dashed line) after T = 2, 5 or 10 steps (left to right), running plain SGD in gray, 0.1 warmup in green and 0.1 weight decay in orange. Initially, w... | https://arxiv.org/abs/2505.22491v1 |
network given by f*(ξ) = sign(∑_{i=1}^{4} s_i ϕ(w_i^⊤ ξ)) with unit vectors w_1 = e_1, w_2 = e_2, w_3 = −e_1, w_4 = −e_2 and signs s_1 = s_3 = +1 and s_2 = s_4 = −1. This results in the nonlinear target function f*(ξ) = sign(ξ_1 − ξ_2) for all ξ ∈ R^{d_in} with ξ_1 > 0 or ξ_2 > 0, but f*(ξ) = sign(ξ_2 − ξ_1) for all ξ ∈ (−∞, 0)×(−∞, 0). We do not use label noise. This dataset requires learning... | https://arxiv.org/abs/2505.22491v1 |
limited computational resources. Minimal unstable learning rates are defined as the smallest learning rates to produce loss worse than (optimal CE loss + 1) at each width. The x-axes showing learning rates are scaled as (n/256)^α. In this way, the learning rate at base width 256 remains the same for comparability of the c... | https://arxiv.org/abs/2505.22491v1 |
explode as Θ(n), both empirical exponents are n^{−1/2} smaller, so that the input layer has vanishing feature learning and the hidden layer is still exploding. This ostensible contradiction is resolved when repeating the coordinate check but initializing the last layer to 0 (Figure E.3). Now the predicted scaling exponents a... | https://arxiv.org/abs/2505.22491v1 |
are remarkably strongly dominated by a single direction. As hidden-layer activations are slowly diverging, their alignment is only beginning to decrease at large widths n ≥ 4096. The beginning instability of ∥∆x^2∥_RMS will eventually induce training instability and suboptimal accuracy at large width, which is hard to pre... | https://arxiv.org/abs/2505.22491v1 |
0 = 0. Figure E.4: (Shallow nets learn features width-independently under large learning rate scaling) Same as Figure E.1 but for 2-layer MLPs trained in SP with width-independent η_n = 0.0003 with standard initialization (left) and last-layer initialized to 0 (right). The input layer and output layer scalings behave as in... | https://arxiv.org/abs/2505.22491v1 |
effective updates ∥∆W^l_t x^{l−1}_t∥_RMS do not perfectly align with the scaling law at infinite width, indicating that the alignment between ∆W^l_t and x^{l−1}_t evolves non-trivially across width and that the spectral norm ∥∆W^l∥_* and pure infinite-width predictions are less useful for explaining the behaviour of Adam at moderate w... | https://arxiv.org/abs/2505.22491v1 |
Large input variance has to be stabilized by increased activation sparsity. Figure E.14: (Activation sparsity barely affected under normalization) Same as Figure E.12 but showing the fraction of activation entries that equal 0. Both initializations do not significantly sparsify activations beyond 50%. E.4 Alignment and... | https://arxiv.org/abs/2505.22491v1 |
feature learning, neither training nor test loss monotonically improve with scale under MSE loss. Random feature models. When only training the last layer, fully width-independent training dynamics are achieved with η_n = η·n^{−1}. Figure F.18 shows that this exponent clearly results in learning rate transfer for 2-layer ReL... | https://arxiv.org/abs/2505.22491v1 |
for SGD, we still observe the optimal learning rate scaling η_n = η·n^{−1} in deep MLPs on MNIST (Appendix F.5) and on CIFAR-10 (Appendix F.7), indicating that width-independence in hidden- and output-layer dominates input layer feature learning. F.2 Transformer experiments. As we consider single-pass training, training and va... | https://arxiv.org/abs/2505.22491v1 |
values from Brown et al. (2020) results in quite a stable scaling law with exponent −0.648, which is larger than the −1 required for hidden-layer stability but significantly smaller than the 0 required for width-independent input layer learning. But note that jointly increasing batch size, n_layers and n_heads might be confoundi... | https://arxiv.org/abs/2505.22491v1 |
rate under Θ(n^{−1/2}), Θ(n^{−1}) and Θ(n^{−1}) scaling, respectively. In the MSE plot, ending lines indicate divergence for larger learning rates. Observe that wider networks generalize worse with scale as they lose input layer feature learning. F.4 MLPs with SGD on MNIST. With MSE loss, observe a clear O(n^{−1}) optimal and maximal st... | https://arxiv.org/abs/2505.22491v1 |
MLPs trained with Adam on MNIST the optimal learning rate scales at most as η_n = O(n^{−1}). MLPs with 2 or 3 layers tend to have larger optimal learning rate scaling exponents around n^{−1/2}, but with an increasing number of layers the conflicting objectives of first layer versus hidden layer width-independent learning are do... | https://arxiv.org/abs/2505.22491v1 |
MSE loss. Apparently, large hidden layer updates can be stabilized over the course of training. As an additional inductive bias that self-stabilizes large gradients, activations are sparsified, which may enhance generalization under large learning rates. Figure F.19 shows very slow decay of the optimal learning rate ... | https://arxiv.org/abs/2505.22491v1 |
view, Adam can even tolerate larger learning rates than the hidden-layer feature learning η_n = Θ(n^{−1}), and the optimal learning rate may also be pushed toward input layer feature learning. Indeed, when fixing the first layer (Figure F.22), all MLPs transfer under η_n = Θ(n^{−1}), which now achieves full width-independent e... | https://arxiv.org/abs/2505.22491v1 |
and Adam. The fact that logit blowup does not prevent stable training under CE loss explains why we can achieve non-vanishing feature learning under SP last-layer initialization. When dropping the logit stability constraint, we can ask which is the optimal layerwise learning rate scaling under standard last-layer initi... | https://arxiv.org/abs/2505.22491v1 |
1/2, but necessary for fulfilling the desideratum that the weight updates in all layers affect the output function non-vanishingly. Figure F.34 shows that indeed all weight updates behave width-independently under µP with standard last-layer initialization (W^{L+1}_0)_ij ∼ N(0, n^{−1}). But the output logits are dominated by th... | https://arxiv.org/abs/2505.22491v1 |
does not even add expressivity. Generalization, learning rate transfer and learning rate sensitivity after 20 epochs tend to be similar in all 3 considered parameterizations in deep ReLU MLPs (Figure F.31), showing again that parameterizations with logit blowup are a viable alternative. Especially in deep ReLU MLPs, t... | https://arxiv.org/abs/2505.22491v1 |
The validation-optimal learning rate scales width-independently in all cases. Observe that, while all variants generalize similarly well, the susceptibility to poorly tuned learning rates is much larger in µP than under parameterizations with large last-layer initialization. Figure F.30: (Train accuracy of effective up... | https://arxiv.org/abs/2505.22491v1 |
Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation. Hongyi Zhou^1, Josiah P. Hanna^2, Jin Zhu^3, Ying Yang^1, Chengchun Shi^3. Abstract: This paper studies off-policy evaluation (OPE) in reinforcement learning with a focus on behavior policy estimation for imp... | https://arxiv.org/abs/2505.22492v1 |
widely used for fine-tuning large language models (Ouyang et al., 2022). In practice, the behavior policy might be unknown and must be estimated from the historical data to construct the IS ratio. Paradoxically, IS with an estimated behavior policy results in an estimator with lower asymptotic variance and often lower ... | https://arxiv.org/abs/2505.22492v1 |
et al., 2008; Le et al., 2019; Feng et al., 2020; Luckett et al., 2020; Hao et al., 2021; Liao et al., 2021; Chen & Qi, 2022; Shi et al., 2022b; Li et al., 2023; Liu et al., 2023; Bian et al., 2025). •IS methods . This paper focuses on the family of IS estimators, which can be further classified into three types, accor... | https://arxiv.org/abs/2505.22492v1 |
estimating Markov behavior policies and left the justification for using history as an open question. Our analysis significantly advances their analyses in the following ways: (i) We offer a bias-variance decomposition to theoretically demystify this paradox. (ii) We demonstrate that the variance varies monotonically w... | https://arxiv.org/abs/2505.22492v1 |
sample size n. The following lemma summarizes the performance of the three estimators in terms of their asymptotic MSEs. Lemma 1. MSE_A(ˆv^CD_IS) ≤ MSE_A(ˆv^CA_IS) ≤ MSE_A(ˆv^†_IS). The first equality holds if and only if the reward function r is independent of the context S, whereas the second equality holds if and only if E(R... | https://arxiv.org/abs/2505.22492v1 |
the above theoretical analysis did not consider the biases of IS estimators. As depicted in the left panel of Figure 1, incorporating history-dependent behavior policy estimation can increase bias in small samples. In our forthcoming analysis of MDPs, we will carefully examine the finite-sample biases of different IS e... | https://arxiv.org/abs/2505.22492v1 |
respectively. This leads to ˆv^†_MIS = E_n(∑_{t=0}^{T} γ^t w_t R_t). We will investigate the theoretical properties of these estimators in the next two sections. 4. Demystifying the paradox in MDPs. In this section, we conduct a rigorous theoretical analysis to evaluate the impact of replacing the oracle behavior policy with an es... | https://arxiv.org/abs/2505.22492v1 |
implications: 1. Equation (2) obtains a bias-variance decomposition for the MSE of ˆv_OIS(k). In particular, the first term on the right-hand-side (RHS) of (2) corresponds to its asymptotic variance, which is of the order O(n^{−1}), whereas the second term upper bounds its finite-sample bias, which decays to zero at a faste... | https://arxiv.org/abs/2505.22492v1 |
E_n(∑_{t=0}^{T} λ_t γ^t R_t). Similar to OIS, Theorem 4 suggests that using an estimated behavior policy will lower the MSE of the resulting SIS estimator in large samples through projection. Meanwhile, the longer the history-length, the lower the asymptotic MSE, leading to the following corollary. Corollary 5. Let k and k′ be two p... | https://arxiv.org/abs/2505.22492v1 |
the MIS ratio. Unlike the previously discussed ratios {λ_t}_t, which can be known in settings such as randomized studies, the MIS ratio depends on the marginal state distribution and is typically unknown, even when the behavior policy is given. In the literature, several methods have been developed to estimate the MIS r... | https://arxiv.org/abs/2505.22492v1 |
variance of the resulting OPE estimator. Specifically, we assume the policy class Π can be represented by {π(H_{t−k:t}; θ), θ ∈ Θ} with an infinite-dimensional Hilbert space Θ. Let Θ_1 ⊆ ... ⊆ Θ_n ⊆ Θ_{n+1} ⊆ ... ⊆ Θ be a sequence of finite-dimensional sieve spaces. For a given sample size n, we compute the estimator ˆθ_n by maximizing t... | https://arxiv.org/abs/2505.22492v1 |
the other two. The detailed results are deferred to Appendix B. 7. Discussion. This paper demystifies the paradox concerning the impact of history-dependent behavior policy estimation on IS-type OPE estimators by establishing a bias-variance decomposition of their MSEs. Our analysis reveals a trade-off in the choice o... | https://arxiv.org/abs/2505.22492v1 |
paper provides a theoretical foundation for using history-dependent behavior policy estimators for OPE in reinforcement learning. Our research reveals that while these estimators may decrease accuracy with small sample sizes, ... | https://arxiv.org/abs/2505.22492v1 |
, volume 33, pp. 9398–9411. Curran Associates, Inc., 2020. Dudík, M., Erhan, D., Langford, J., and Li, L. Doubly Robust Policy Evaluation and Optimization. Statistical Science, 29(4):485–511, 2014. doi: 10.1214/14-STS500. Fan, J., Wang, Z., Xie, Y., and Yang, Z. A theoretical analysis of deep q-learning. In Lea... | https://arxiv.org/abs/2505.22492v1 |
M. Double reinforcement learning for efficient off-policy evaluation in markov decision processes. Journal of Machine Learning Research, 21(167):1–63, 2020. URL http://jmlr.org/papers/v21/19-827.html. Kallus, N. and Uehara, M. Efficiently breaking the curse of horizon in off-policy evaluation with double reinforc... | https://arxiv.org/abs/2505.22492v1 |
Group, C. P. P. R. Marginal mean models for dynamic regimes. Journal of the American Statistical Association, 96(456):1410–1423, 2001. Nachum, O., Chow, Y., Dai, B., and Li, L. DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections. Advances in neural information processing systems,... | https://arxiv.org/abs/2505.22492v1 |
learning framework. Journal of the American Statistical Association, 118(543):2059–2071, 2023. Shi, C., Zhu, J., Shen, Y., Luo, S., Zhu, H., and Song, R. Off-policy confidence interval estimation with confounded markov decision process. Journal of the American Statistical Association, 119(545):273–284, 2024. Su... | https://arxiv.org/abs/2505.22492v1 |
Semiparametrically efficient off-policy evaluation in linear markov decision processes. In International Conference on Machine Learning, pp. 38227–38257. PMLR, 2023. Xie, T., Ma, Y., and Wang, Y.-X. Towards optimal off-policy evaluation for reinforcement learning with marginalized importance sampling. Curran As... | https://arxiv.org/abs/2505.22492v1 |
s). Therefore, the reward function is a deterministic function defined as r(s, a) = 10a + 0.1(1 + 2s). For the illustrative example, we can derive the closed-form expression of the policy's value, which is 4.2. Numerical experiments in Section 6. In the Cartpole environment, the state space S is a subset of R^4. For any s ∈ ... | https://arxiv.org/abs/2505.22492v1 |
C.1. Proof of Lemma 1. According to the definitions of ˆv^CD_IS and ˆv^CA_IS, it follows from straightforward calculations that ˆv^CA_IS = E_n{∑_a π_e(a)ˆr(a) + (π_e(A)/ˆπ_b(A))[R − ˆr(A)]}, and ˆv^CD_IS = E_n{∑_a π_e(a)ˆr(S, a) + (π_e(A)/ˆπ_b(A|S))[R − ˆr(S, A)]}. According to Neyman orthogonality, both the estimated reward and estimated behavior... | https://arxiv.org/abs/2505.22492v1 |
second inequality follows from the fact that MSE_A(ˆv^†_IS) = (1/n)Var((π_e(A)/π_b(A))R) = (1/n)Var((π_e(A)/π_b(A))[R − E(R|A)]) + (1/n)Var((π_e(A)/π_b(A))E(R|A)) = MSE_A(ˆv^CA_IS) + (1/n)Var((π_e(A)/π_b(A))E(R|A)) ≥ MSE_A(ˆv^CA_IS). The equality holds if and only if E(R|A) = 0 almost surely. C.2. Proof of Theorems in Section 4. Details of Assumption 1... | https://arxiv.org/abs/2505.22492v1 |
where R_{n2} = (1/n){(1/n)∑_{i=1}^{n} u(H_i, θ*)s(H_i, θ*) − E[u(H, θ*)s(H, θ*)]} I^{−1}(θ*) (1/n)∑_{j=1}^{n} s(H_j, θ*). Again, according to the central limit theorem, we have (1/n)∑_{i=1}^{n} u(H_i, θ*)s(H_i, θ*) − E[u(H, θ*)s(H, θ*)] = O_p(√(T/n) C_T R_max ε^{−1}). Therefore, we obtain that R_{n2} is also of order O_p((k+1)C_T R_max/(nε²)). Plug into equatio... | https://arxiv.org/abs/2505.22492v1 |
+ γQ_{t+1}(S_{t+1}, π_e))), (19) with Q_t(S, π_e) = ∫_a Q_t(S, a) dπ_e(a|S), and the doubly robust estimator with oracle weight can be represented as ˆv^†_DR = E_n Q_0(S_0, π_e) + E_n ∑_{t=0}^{T} (P_{π_e}(H_{S_{t+1}})/P_{θ*}(H_{S_{t+1}})) γ^t (R_t − Q_t(S_t, A_t) + γQ_{t+1}(S_{t+1}, π_e)). For notation simplicity, we denote u(H_{S_{t+1}}, θ) = (P_{π_e}(H_{A_t})/P_{π_θ}(H_{A_t})) γ^t (R_t − Q_t(S_t, A_t) + γQ_{t+1}(S_{t+1}, π_e)... | https://arxiv.org/abs/2505.22492v1 |
I^{−1}(γ*)I_{12}J^{−1}E[∑_{t=0}^{T} u(H_{S_{t+1}})s(H_T, η*)] − E[∑_{t=0}^{T} u(H_{S_{t+1}})s^⊤(H_T, η*)] J^{−1} E[∑_{t=0}^{T} u(H_{S_{t+1}})s(H_T, η*)] = σ²(k′) + E∥J^{−1/2}I^⊤_{12}I^{−1}(γ*)∑_{t=0}^{T} u(H_{S_{t+1}})s(H_t, γ*) − J^{−1/2}∑_{t=0}^{T} u(H_{S_{t+1}})s(H_t, η*)∥², with J = I(η*) − I^⊤_{12}I^{−1}(γ*)I_{12}. Thus, we obtain σ²(k) ≥ σ²(k′) for any k′ < k. This finishes the proof that Var(Proj_{T(k)}(ˆv^†_DR)) is de... | https://arxiv.org/abs/2505.22492v1 |
$$\cdots \pi_e)\big) + \frac{1}{n}\sum_{t=0}^{T}\mathrm{Var}\big(w_t(A_{t-k:t}, S_{t-k:t})\,U_t\big)$$
$$= \frac{1}{n}\mathrm{Var}\big(Q_0(S_0, \pi_e)\big) + \frac{1}{n}\sum_{t=0}^{T}\mathrm{Var}\big(w_t(A_{t-k:t}, S_{t-k:t})\,\mathbb{E}[U_t \mid A_{t-k:t}, S_{t-k:t}]\big) + \frac{1}{n}\sum_{t=0}^{T}\mathbb{E}\big(w_t^2(A_{t-k:t}, S_{t-k:t})\,\mathrm{Var}[U_t \mid A_{t-k:t}, S_{t-k:t}]\big)$$
$$= \frac{1}{n}\mathrm{Var}\big(Q_0(S_0, \pi_e)\big) + \frac{1}{n}\sum_{t=0}^{T}\mathbb{E}\big(w_t^2(A_{t-k:t}, S_{t-k:t})\,\sigma^2(A_t, S_t)\big), \qquad (26)$$
where $\sigma^2(A_t, S_t) = \mathrm{Var}(U_t \mid A_{t-k:t}, S_{t-k:t})$; the middle term drops out since $\mathbb{E}[U_t \mid A_{t-k:t}, S_{t-k:t}] = 0$. Therefore, for any $k' < k$, $\mathbb{E}\big(w_t^2(\ldots$
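The second equality here is the law of total variance, $\mathrm{Var}(X) = \mathrm{Var}(\mathbb{E}[X \mid Z]) + \mathbb{E}[\mathrm{Var}(X \mid Z)]$, applied conditionally on the truncated history. A self-contained numeric illustration with an arbitrary toy pair $(X, Z)$ (not the paper's variables):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
Z = rng.integers(0, 2, n)               # binary conditioning variable
X = Z * 3.0 + rng.normal(0.0, 1.0, n)   # X | Z ~ N(3Z, 1)

total = X.var()
# law of total variance: Var(X) = Var(E[X|Z]) + E[Var(X|Z)]
cond_mean = np.where(Z == 1, X[Z == 1].mean(), X[Z == 0].mean())
between = cond_mean.var()                                         # Var(E[X|Z])
within = sum((Z == z).mean() * X[Z == z].var() for z in (0, 1))   # E[Var(X|Z)]
print(total, between + within)  # the two sides agree up to floating point
```

With empirical group means the decomposition is an exact identity (the ANOVA sum-of-squares split), so the agreement is to machine precision, not just in expectation.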
$\tilde{h} \in \Theta_n$ such that $d(\tilde{h}, h) = o(n^{-1/4})$. Since $\hat{\theta}_n$ maximizes $\mathbb{P}_n L(H, \theta)$ in $\Theta_n$, it follows that $\mathbb{P}_n\{s(H, \hat{\theta}_n)[\tilde{h}]\} = 0$. Therefore,
$$\mathbb{P}_n\{s(H, \hat{\theta}_n)[h]\} = \mathbb{P}_n\{s(H, \hat{\theta}_n)[h] - s(H, \hat{\theta}_n)[\tilde{h}]\},$$
which can be further decomposed into three parts:
$$\mathbb{P}_n\{s(H, \hat{\theta}_n)[h]\} = (\mathbb{P}_n - P)\big(s(H, \hat{\theta}_n) - s(H, \theta_0)\big)[h] - (\mathbb{P}_n - P)\big(s(H, \hat{\theta}_n) - s(H, \theta_0)\big)[\tilde{h}] + \mathbb{P}_n\big\{s(H, \theta_0)[h] - \ldots$$
arXiv:2505.22503v1 [cs.RO] 28 May 2025

From Strangers to Assistants: Fast Desire Alignment for Embodied Agent-User Adaptation

Yuanfei Wang, Xinju Huang, Fangwei Zhong, Yaodong Yang, Yizhou Wang, Yuanpei Chen, Hao Dong

Abstract. While embodied agents have made significant progress in performing complex phys...
a heavy workload. Therefore, the robot infers that the user wants juice and serves it without needing to ask. Such behavior highlights the necessity of rapid and accurate desire alignment in embodied assistance, enabling robots to build trust and deliver truly helpful service. Previous works have investigated collabora... | https://arxiv.org/abs/2505.22503v1 |
adaptation.
• We propose FAMER, a new framework that integrates desire-centered mental state reasoning, reflection-based efficient communication, and goal-related key information extraction to enable fast desire alignment for embodied agents.
• We demonstrate the effectiveness of our proposed environment and framework th...
description. Based on these values, the user samples a set of desire-related goals from a potential goal set. Importantly, the user does not directly reveal these goals but instead provides indirect hints about preferences and intentions in response to the ego agent’s inquiries. We describe the problem formulation and ... | https://arxiv.org/abs/2505.22503v1 |
to simulate realistic goal selection. Conditioned on the vague task description $T$ and the sampled values $V$, the LLM generates a set of corresponding desire-related goals $G = \mathcal{G}(V, G_p, T) \subset G_p$. Since multiple goals may align with the same value attribute, this sampling process is intentionally non-deterministic, mirroring th...
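A minimal sketch of this non-deterministic sampling step, assuming each candidate goal is tagged with the value attribute it satisfies. The data layout, function name, and random draw are all hypothetical stand-ins; in the paper an LLM performs this generation:

```python
import random

def sample_goals(values, potential_goals, k=3, seed=None):
    # Hypothetical stand-in for the LLM's goal generation G(V, G_p, T):
    # keep goals whose tagged value attribute matches a sampled value,
    # then draw a subset. Different seeds yield different goal sets,
    # mirroring the intentional non-determinism described above.
    rng = random.Random(seed)
    matching = [g for g, v in potential_goals.items() if v in values]
    return set(rng.sample(matching, min(k, len(matching))))

goal_pool = {'juice': 'creamy', 'cereal': 'crunchy', 'apple': 'fruity', 'chips': 'salty'}
print(sample_goals({'creamy', 'crunchy', 'fruity'}, goal_pool, k=2, seed=0))
```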
the agent to infer the user's underlying desires. The Goal Confirmation component extracts confirmed goals from the user's responses via VLM reasoning. For example, if the agent asks, "Is juice what you want?" and the user replies, "Correct! Try to look for something crunchy," the system confirms that juice is one of t...
Score: Given N total goals, the agent receives a reward of 1/N for each correct goal achieved. Completing an incorrect or distracting goal incurs a penalty of −1/(2N). The maximum achievable score per episode is 1, corresponding to the completion of all goals without any mistakes. Step: The total number of environment ste...
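The scoring rule above can be written directly; the function and argument names below are mine, not from the paper:

```python
def episode_score(n_goals: int, n_correct: int, n_incorrect: int) -> float:
    # +1/N per correct goal, -1/(2N) per incorrect or distracting goal
    return n_correct / n_goals - n_incorrect / (2 * n_goals)

print(episode_score(4, 4, 0))  # perfect episode: all 4 goals, no mistakes -> 1.0
print(episode_score(4, 3, 1))  # one goal missed plus one wrong completion
```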
Such an inefficient process often incurs penalties. Notably, their performance gradually improves across episodes, reflecting slow adaptation to latent user desires through repeated interactions. FAMER also demonstrates superior efficiency in execution, requiring significantly fewer environment steps to complete tasks.... | https://arxiv.org/abs/2505.22503v1 |
you have.
Alice: Are 'creamybuns', 'chips', 'cupcake', 'apple' what you want?
Bob: You're close! 'creamybuns', 'cupcake', and 'apple' are right. Think crunchy and sweet for the missing item.
Alice: Is 'cereal' what you want?
Bob: Yes, 'cereal' is one of them. Remember the creamy, sweet, fruity combination we've been discus...