diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" --- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" +++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" @@ -7,858 +7,12 @@ http://www.rssboard.org/rss-specification en-us - Fri, 23 Jan 2026 05:00:07 +0000 + Sat, 24 Jan 2026 05:00:03 +0000 rss-help@arxiv.org - Fri, 23 Jan 2026 00:00:00 -0500 + Sat, 24 Jan 2026 00:00:00 -0500 Saturday Sunday - - Statistical Reinforcement Learning in the Real World: A Survey of Challenges and Future Directions - https://arxiv.org/abs/2601.15353 - arXiv:2601.15353v1 Announce Type: new -Abstract: Reinforcement learning (RL) has achieved remarkable success in real-world decision-making across diverse domains, including gaming, robotics, online advertising, public health, and natural language processing. Despite these advances, a substantial gap remains between RL research and its deployment in many practical settings. Two recurring challenges often underlie this gap. First, many settings offer limited opportunity for the agent to interact extensively with the target environment due to practical constraints. Second, many target environments often undergo substantial changes, requiring redesign and redeployment of RL systems (e.g., advancements in science and technology that change the landscape of healthcare delivery). Addressing these challenges and bridging the gap between basic research and application requires theory and methodology that directly inform the design, implementation, and continual improvement of RL systems in real-world settings. - In this paper, we frame the application of RL in practice as a three-component process: (i) online learning and optimization during deployment, (ii) post- or between-deployment offline analyses, and (iii) repeated cycles of deployment and redeployment to continually improve the RL system. We provide a narrative review of recent advances in statistical RL that address these components, including methods for maximizing data utility for between-deployment inference, enhancing sample efficiency for online learning within-deployment, and designing sequences of deployments for continual improvement. We also outline future research directions in statistical RL that are use-inspired -- aiming for impactful application of RL in practice. - oai:arXiv.org:2601.15353v1 - stat.AP - cs.LG - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Asim H. Gazi, Yongyi Guo, Daiqi Gao, Ziping Xu, Kelly W. Zhang, Susan A. Murphy - - - Robust X-Learner: Breaking the Curse of Imbalance and Heavy Tails via Robust Cross-Imputation - https://arxiv.org/abs/2601.15360 - arXiv:2601.15360v1 Announce Type: new -Abstract: Estimating Heterogeneous Treatment Effects (HTE) in industrial applications such as AdTech and healthcare presents a dual challenge: extreme class imbalance and heavy-tailed outcome distributions. While the X-Learner framework effectively addresses imbalance through cross-imputation, we demonstrate that it is fundamentally vulnerable to "Outlier Smearing" when reliant on Mean Squared Error (MSE) minimization. In this failure mode, the bias from a few extreme observations ("whales") in the minority group is propagated to the entire majority group during the imputation step, corrupting the estimated treatment effect structure. To resolve this, we propose the Robust X-Learner (RX-Learner). 
This framework integrates a redescending {\gamma}-divergence objective -- structurally equivalent to the Welsch loss under Gaussian assumptions -- into the gradient boosting machinery. We further stabilize the non-convex optimization using a Proxy Hessian strategy grounded in Majorization-Minimization (MM) principles. Empirical evaluation on a semi-synthetic Criteo Uplift dataset demonstrates that the RX-Learner reduces the Precision in Estimation of Heterogeneous Effect (PEHE) metric by 98.6% compared to the standard X-Learner, effectively decoupling the stable "Core" population from the volatile "Periphery". - oai:arXiv.org:2601.15360v1 - stat.ML - cs.LG - econ.EM - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Eichi Uehara - - - Non-Stationary Functional Bilevel Optimization - https://arxiv.org/abs/2601.15363 - arXiv:2601.15363v1 Announce Type: new -Abstract: Functional bilevel optimization (FBO) provides a powerful framework for hierarchical learning in function spaces, yet current methods are limited to static offline settings and perform suboptimally in online, non-stationary scenarios. We propose SmoothFBO, the first algorithm for non-stationary FBO with both theoretical guarantees and practical scalability. SmoothFBO introduces a time-smoothed stochastic hypergradient estimator that reduces variance through a window parameter, enabling stable outer-loop updates with sublinear regret. Importantly, the classical parametric bilevel case is a special reduction of our framework, making SmoothFBO a natural extension to online, non-stationary settings. Empirically, SmoothFBO consistently outperforms existing FBO methods in non-stationary hyperparameter optimization and model-based reinforcement learning, demonstrating its practical effectiveness. Together, these results establish SmoothFBO as a general, theoretically grounded, and practically viable foundation for bilevel optimization in online, non-stationary scenarios. - oai:arXiv.org:2601.15363v1 - stat.ML - cs.LG - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Jason Bohne, Ieva Petrulionyte, Michael Arbel, Julien Mairal, Pawe{\l} Polak - - - Distributional Balancing for Causal Inference: A Unified Framework via Characteristic Function Distance - https://arxiv.org/abs/2601.15449 - arXiv:2601.15449v1 Announce Type: new -Abstract: Weighting methods are essential tools for estimating causal effects in observational studies, with the goal of balancing pre-treatment covariates across treatment groups. Traditional approaches pursue this objective indirectly, for example, via inverse propensity score weighting or by matching a finite number of covariate moments, and therefore do not guarantee balance of the full joint covariate distributions. Recently, distributional balancing methods have emerged as robust, nonparametric alternatives that directly target alignment of entire covariate distributions, but they lack a unified framework, formal theoretical guarantees, and valid inferential procedures. We introduce a unified framework for nonparametric distributional balancing based on the characteristic function distance (CFD) and show that widely used discrepancy measures, including the maximum mean discrepancy and energy distance, arise as special cases. Our theoretical analysis establishes conditions under which the resulting CFD-based weighting estimator achieves $\sqrt{n}$-consistency. 
Since the standard bootstrap may fail for this estimator, we propose subsampling as a valid alternative for inference. We further extend our approach to an instrumental variable setting to address potential unmeasured confounding. Finally, we evaluate the performance of our method through simulation studies and a real-world application, where the proposed estimator performs well and exhibits results consistent with our theoretical predictions. - oai:arXiv.org:2601.15449v1 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Diptanil Santra, Guanhua Chen, Chan Park - - Treatment effect: a critique - https://arxiv.org/abs/2601.15467 - arXiv:2601.15467v1 Announce Type: new -Abstract: Two broad positions within statistics define a treatment effect, on the one hand, as a parameter of a statistical model, and on the other, as an appropriate population-level difference in outcomes or counterfactual outcomes under the different treatment regimes. This short expository paper presents some simple but consequential insights on the two formulations, contrasting the answers under the most favourable fictitious idealisation for the counterfactual framework. These observations clarify the relationship between Fisherian model-based inference and modern counterfactual formulations, and emphasise concerns, raised by Cox and others, regarding the suitability of model-free definitions as targets of inference when scientific conclusions are intended to generalise beyond the observed sample. Parts of the paper are necessarily controversial; we follow Cox (1958a) in not putting these forward in any dogmatic spirit. - oai:arXiv.org:2601.15467v1 - stat.OT - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Heather Battey, Charlotte Edgar - - Geometric Morphometrics approach for classifying children's nutritional status on out of sample data - https://arxiv.org/abs/2601.15491 - arXiv:2601.15491v1 Announce Type: new -Abstract: Current alignment-based methods for classification in geometric morphometrics do not generally address the classification of new individuals that were not part of the study sample. However, in the context of infant and child nutritional assessment from body shape images this is a relevant problem. In this setting, classification rules obtained on the shape space from a reference sample cannot be used on out-of-sample individuals in a straightforward way. Indeed, a series of sample dependent processing steps, such as alignment (Procrustes analysis, for instance) or allometric regression, need to be conducted before the classification rule can be applied. This work proposes ways of obtaining shape coordinates for a new individual and analyzes the effect of using different template configurations on the sample of study as target for registration of the out-of-sample raw coordinates. Understanding sample characteristics and collinearity among shape variables is crucial for optimal classification results when evaluating children's nutritional status using arm shape analysis from photos. The SAM Photo Diagnosis App\copyright Program's goal is to develop an offline smartphone tool, enabling updates of the training sample across different nutritional screening campaigns. 
- oai:arXiv.org:2601.15491v1 - stat.AP - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - 10.1038/s41598-025-85718-4 - Scientific Reports 15, 3906 (2025) - Laura Medialdea, Ana Arribas-Gil, \'Alvaro P\'erez-Romero, Amador G\'omez - - - Low-Dimensional Adaptation of Rectified Flow: A New Perspective through the Lens of Diffusion and Stochastic Localization - https://arxiv.org/abs/2601.15500 - arXiv:2601.15500v1 Announce Type: new -Abstract: In recent years, Rectified flow (RF) has gained considerable popularity largely due to its generation efficiency and state-of-the-art performance. In this paper, we investigate the degree to which RF automatically adapts to the intrinsic low dimensionality of the support of the target distribution to accelerate sampling. We show that, using a carefully designed choice of the time-discretization scheme and with sufficiently accurate drift estimates, the RF sampler enjoys an iteration complexity of order $O(k/\varepsilon)$ (up to log factors), where $\varepsilon$ is the precision in total variation distance and $k$ is the intrinsic dimension of - the target distribution. In addition, we show that the denoising diffusion probabilistic model (DDPM) procedure is equivalent to a stochastic version of RF by establishing a novel connection between these processes and stochastic localization. Building on this connection, we further design a stochastic RF sampler that also adapts to the low-dimensionality of the target distribution under milder requirements on the accuracy of the drift estimates, and also with a specific time schedule. We illustrate with simulations on the synthetic data and text-to-image data experiments the improved performance of the proposed samplers implementing the newly designed time-discretization schedules. - oai:arXiv.org:2601.15500v1 - stat.ML - cs.AI - cs.LG - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Saptarshi Roy, Alessandro Rinaldo, Purnamrita Sarkar - - - Assessing the informative value of macroeconomic indicators for public health forecasting - https://arxiv.org/abs/2601.15514 - arXiv:2601.15514v1 Announce Type: new -Abstract: Macroeconomic conditions influence the environments in which health systems operate, yet their value as leading signals of health system capacity has not been systematically evaluated. In this study, we examine whether selected macroeconomic indicators contain predictive information for several capacity-related public health targets, including employment in the health and social assistance workforce, new business applications in the sector, and health care construction spending. Using monthly U.S. time series data, we evaluate multiple forecasting approaches, including neural network models with different optimization strategies, generalized additive models, random forests, and time series models with exogenous macroeconomic indicators, under alternative model fitting designs. Across evaluation settings, we find that macroeconomic indicators provide a consistent and reproducible predictive signal for some public health targets, particularly workforce and infrastructure measures, while other targets exhibit weaker or less stable predictability. Models emphasizing stability and implicit regularization tend to perform more reliably during periods of economic volatility. 
These findings suggest that macroeconomic indicators may serve as useful upstream signals for digital public health monitoring, while underscoring the need for careful model selection and validation when translating economic trends into health system forecasting tools. - oai:arXiv.org:2601.15514v1 - stat.AP - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Shome Chakraborty, Fardil Khan, Soutik Ghosal - - - Model-Free Inference for Characterizing Protein Mutations through a Coevolutionary Lens - https://arxiv.org/abs/2601.15566 - arXiv:2601.15566v1 Announce Type: new -Abstract: Multiple sequence alignment (MSA) data play a crucial role in the study of protein mutations, with contact prediction being a notable application. Existing methods are often model-based or algorithmic and typically do not incorporate statistical inference to quantify the uncertainty of the prediction outcomes. To address this, we propose a novel framework that transforms the task of contact prediction into a statistical testing problem. Our approach is motivated by the partial correlation for continuous random variables. With one-hot encoding of MSA data, we are able to construct a partial correlation graph for multivariate categorical variables. In this framework, two connected nodes in the graph indicate that the corresponding positions on the protein form a contact. A new spectrum-based test statistic is introduced to test whether two positions are partially correlated. Moreover, the new framework enables the identification of amino acid combinations that contribute to the correlation within the identified contacts, an important but largely unexplored aspect of protein mutations. Numerical experiments demonstrate that our proposed method is valid in terms of controlling Type I errors and powerful in general. Real data applications on various protein families further validate the practical utility of our approach in coevolution and mutation analysis. - oai:arXiv.org:2601.15566v1 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Fan Yang, Zhao Ren, Wen Zhou, Kejue Jia, Robert Jernigan - - - On the Nonasymptotic Scaling Guarantee of Hyperparameter Estimation in Inhomogeneous, Weakly-Dependent Complex Network Dynamical Systems - https://arxiv.org/abs/2601.15603 - arXiv:2601.15603v1 Announce Type: new -Abstract: Hierarchical Bayesian models are increasingly used in large, inhomogeneous complex network dynamical systems by modeling parameters as draws from a hyperparameter-governed distribution. However, theoretical guarantees for these estimates as the system size grows have been lacking. A critical concern is that hyperparameter estimation may diverge for larger networks, undermining the model's reliability. Formulating the system's evolution in a measure transport perspective, we propose a theoretical framework for estimating hyperparameters with mean-type observations, which are prevalent in many scientific applications. Our primary contribution is a nonasymptotic bound for the deviation of estimate of hyperparameters in inhomogeneous complex network dynamical systems with respect to network population size, which is established for a general family of optimization algorithms within a fixed observation duration. While we firstly establish a consistency result for systems with independent nodes, our main result extends this guarantee to the more challenging and realistic setting of weakly-dependent nodes. 
We validate our theoretical findings with numerical experiments on two representative models: a Susceptible-Infected-Susceptible model and a Spiking Neuronal Network model. In both cases, the results confirm that the estimation error decreases as the network population size increases, aligning with our theoretical guarantees. This research proposes the foundational theory to ensure that hierarchical Bayesian methods are statistically consistent for large-scale inhomogeneous systems, filling a gap in this area of theoretical research and justifying their application in practice. - oai:arXiv.org:2601.15603v1 - math.ST - cs.IT - math.IT - stat.ML - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Yi Yu, Yubo Hou, Yinchong Wang, Nan Zhang, Jianfeng Feng, Wenlian Lu - - Climate Vulnerability and Community Health: Identifying Greensboro Neighborhoods at Intersectional Risk - https://arxiv.org/abs/2601.15675 - arXiv:2601.15675v1 Announce Type: new -Abstract: This study develops an integrated, intersectional climate vulnerability assessment for Greensboro, North Carolina, a midsize city in the rapidly changing American Southeast. Moving beyond generalized mapping, we combine demographic, socioeconomic, health, and environmental data at the census tract level to identify neighborhoods where flood exposure, chronic health burdens, and social disadvantage spatially converge. Through k-means and hierarchical clustering, we identify four distinct neighborhood typologies, including a critically high-risk cluster characterized by high flood exposure, extreme poverty, poor respiratory health, and aging housing. The findings demonstrate that climate-related risks are not randomly distributed but systematically cluster in historically marginalized communities, revealing a clear environmental justice disparity. This place-based typology approach provides a targeted framework for policymakers to design integrated interventions that bridge flood management, public health, housing, and social services to build equitable urban resilience. - oai:arXiv.org:2601.15675v1 - stat.AP - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Rehinatu Usman, Onyedikachi J. Okeke - - Learning Functional Graphs with Nonlinear Sufficient Dimension Reduction - https://arxiv.org/abs/2601.15696 - arXiv:2601.15696v1 Announce Type: new -Abstract: Functional graphical models have undergone extensive development in recent years, leading to a variety of models such as the functional Gaussian graphical model, the functional copula Gaussian graphical model, the functional Bayesian graphical model, the nonparametric functional additive graphical model, and the conditional functional graphical model. These models rely either on some parametric form of distributions on random functions, or on additive conditional independence, a criterion that is different from probabilistic conditional independence. In this paper we introduce a nonparametric functional graphical model based on functional sufficient dimension reduction. Our method not only relaxes the Gaussian or copula Gaussian assumptions, but also enhances estimation accuracy by avoiding the ``curse of dimensionality''. Moreover, it retains the probabilistic conditional independence as the criterion to determine the absence of edges. Through a simulation study and an analysis of the f-MRI dataset, we demonstrate the advantages of our method. 
- oai:arXiv.org:2601.15696v1 - stat.ME - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Kyongwon Kim, Bing Li - - - Algebraic Statistics in OSCAR - https://arxiv.org/abs/2601.15807 - arXiv:2601.15807v1 Announce Type: new -Abstract: We introduce the AlgebraicStatistics section of the OSCAR computer algebra system. We give an overview of its extensible design and highlight its features including serialization of data types for sharing results and creating databases, and state-of-the-art implicitization algorithms. - oai:arXiv.org:2601.15807v1 - stat.CO - cs.NE - math.AC - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tobias Boege, Antony Della Vecchia, Marina Garrote-L\'opez, Benjamin Hollering - - - A two-sample pseudo-observation-based regression approach for the relative treatment effect - https://arxiv.org/abs/2601.15880 - arXiv:2601.15880v1 Announce Type: new -Abstract: The relative treatment effect is an effect measure for the order of two sample-specific outcome variables. It has the interpretation of a probability and also a connection to the area under the ROC curve. In the literature it has been considered for both ordinal or right-censored time-to-event outcomes. For both cases, the present paper introduces a distribution-free regression model that relates the relative treatment effect to a linear combination of covariates. To fit the model, we develop a pseudo-observation-based procedure yielding consistent and asymptotically normal coefficient estimates. In addition, we propose bootstrap-based hypothesis tests to infer the effects of the covariates on the relative treatment effect. A simulation study compares the novel method to Cox regression, demonstrating that the proposed hypothesis tests have high power and keep up with the z-test of the Cox model even in scenarios where the latter is specified correctly. The new methods are used to re-analyze data from the SUCCESS-A trial for progression-free survival of breast cancer patients. - oai:arXiv.org:2601.15880v1 - stat.ME - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Dennis Dobler, Alina Schenk, Matthias Schmid - - - Leave-one-out testing for node-level differences in Gaussian graphical models - https://arxiv.org/abs/2601.15896 - arXiv:2601.15896v1 Announce Type: new -Abstract: We study two-sample equality testing in Gaussian graphical models. Classical likelihood ratio tests on decomposable graphs admit clique-wise factorizations, offering limited localization and unstable finite-sample behaviour. We propose node-level inference via a leave-one-out Bartlett-adjusted test on a fully connected graph. The resulting increments have standard chi-square null limits, enabling calibrated significance for single nodes and fixed-size subsets. Simulations confirm validity, and a case study shows practical utility. - oai:arXiv.org:2601.15896v1 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Davide Benussi, Ester Alongi, Erika Banzato - - - Detecting interpolation errors in infant mortality counts in 20th Century England and Wales - https://arxiv.org/abs/2601.15936 - arXiv:2601.15936v1 Announce Type: new -Abstract: Understanding historical datasets, such as the England and Wales infant mortality data, for local government districts can provide valuable insights into our changing society. 
Such analyses can prove challenging in practice, due to frequent changes in the boundaries of local government districts for which records are collected. One solution adopted in the literature to overcome such practical challenges is to pre-process data using areal interpolation to render the units consistent over the time period of focus. However, such methods are prone to errors. In this paper we introduce a novel changepoint method to detect instances where interpolation performs poorly. We demonstrate the utility of our method on original data, and also demonstrate how correcting interpolation errors can affect the clustering of the infant mortality curves. - oai:arXiv.org:2601.15936v1 - stat.AP - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Tessa Wilkie, Idris Eckley, Paul Fearnhead, Ian Gregory - - - A Hierarchical Bayesian Framework for Model-based Prognostics - https://arxiv.org/abs/2601.15942 - arXiv:2601.15942v1 Announce Type: new -Abstract: In prognostics and health management (PHM) of engineered systems, maintenance decisions are ideally informed by predictions of a system's remaining useful life (RUL) based on operational data. Model-based prognostics algorithms rely on a parametric model of the system degradation process. The model parameters are learned from real-time operational data collected on the system. However, there can be valuable information in data from similar systems or components, which is not typically utilized in PHM. In this contribution, we propose a hierarchical Bayesian modeling (HBM) framework for PHM that integrates both operational data and run-to-failure data from similar systems or components. The HBM framework utilizes hyperparameter distributions learned from data of similar systems or components as priors. It enables efficient updates of predictions as more information becomes available, allowing for increasingly accurate assessments of the degradation process and its associated variability. The effectiveness of the proposed framework is demonstrated through two experimental applications involving real-world data from crack growth and lithium battery degradation. Results show significant improvements in RUL prediction accuracy and demonstrate how the framework facilitates uncertainty management through predictive distributions. - oai:arXiv.org:2601.15942v1 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Xinyu Jia, Iason Papaioannou, Daniel Straub - - - A Fast Monte Carlo Newton-Raphson Algorithm to Estimate Generalized Linear Mixed Models with Dense Covariance - https://arxiv.org/abs/2601.16022 - arXiv:2601.16022v1 Announce Type: new -Abstract: Estimation of Generalised linear mixed models (GLMM) including spatial Gaussian process models is often considered computationally impractical for even moderately sized datasets. In this article, we propose a fast Monte Carlo maximum likelihood (MCML) algorithm for the estimation of GLMMs. The algorithm is a stochastic Newton-Raphson method, which approximates the expected Hessian and gradient of the log-likelihood by drawing samples of the random effects. We propose a new stopping criterion for efficient termination and preventing long runs of sampling in the stationary post-convergence phase of the algorithm and discuss Monte Carlo sample size choice. 
We run a series of simulation comparisons of spatial statistical models alongside the popular integrated nested Laplace approximation method and demonstrate potential for similar or improved estimator performance and reduced running times. We also consider scaling of the algorithms to large datasets and demonstrate a greater than 100-fold reduction in running times using modern GPU hardware to illustrate the feasibility of full maximum likelihood methods with big spatial datasets. - oai:arXiv.org:2601.16022v1 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Samuel I. Watson, Yixin Wang, Emanuele Giorgi - - Risk reversal for least squares estimators under nested convex constraints - https://arxiv.org/abs/2601.16041 - arXiv:2601.16041v1 Announce Type: new -Abstract: In constrained stochastic optimization, one naturally expects that imposing a stricter feasible set does not increase the statistical risk of an estimator defined by projection onto that set. In this paper, we show that this intuition can fail even in canonical settings. - We study the Gaussian sequence model, a deliberately austere test bed, where for a compact, convex set $\Theta \subset \mathbb{R}^d$ one observes \[ Y = \theta^\star + \sigma Z, \qquad Z \sim N(0, I_d), \] and seeks to estimate an unknown parameter $\theta^\star \in \Theta$. The natural estimator is the least squares estimator (LSE), which coincides with the Euclidean projection of $Y$ onto $\Theta$. We construct an explicit example exhibiting \emph{risk reversal}: for sufficiently large noise, there exist nested compact convex sets $\Theta_S \subset \Theta_L$ and a parameter $\theta^\star \in \Theta_S$ such that the LSE constrained to $\Theta_S$ has strictly larger risk than the LSE constrained to $\Theta_L$. We further show that this phenomenon can persist at the level of worst-case risk, with the supremum risk over the smaller constraint set exceeding that over the larger one. - We clarify this behavior by contrasting noise regimes. In the vanishing-noise limit, the risk admits a first-order expansion governed by the statistical dimension of the tangent cone at $\theta^\star$, and tighter constraints uniformly reduce risk. In contrast, in the diverging-noise regime, the risk is determined by global geometric interactions between the constraint set and random noise directions. Here, the embedding of $\Theta_S$ within $\Theta_L$ can reverse the risk ordering. - These results reveal a previously unrecognized failure mode of projection-based estimators: in sufficiently noisy settings, tightening a constraint can paradoxically degrade statistical performance. - oai:arXiv.org:2601.16041v1 - math.ST - cs.LG - math.OC - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Omar Al-Ghattas - - Fully Functional Weighted Testing for Abrupt and Gradual Location Changes in Functional Time Series - https://arxiv.org/abs/2601.16058 - arXiv:2601.16058v1 Announce Type: new -Abstract: Change point tests for abrupt changes in the mean of functional data, i.e., random elements in infinite-dimensional Hilbert spaces, are either based on dimension reduction techniques, e.g., based on principal components, or directly based on a functional CUSUM (cumulative sum) statistic. The former have often been criticized as not being fully functional and losing too much information. 
On the other hand, unlike the latter, they take the covariance structure of the data into account by weighting the CUSUM statistics obtained after dimension reduction with the inverse covariance matrix. In this paper, as a middle ground between these two approaches, we propose an alternative statistic that includes the covariance structure with an offset parameter to produce a scale-invariant test procedure and to increase power when the change is not aligned with the first components. We obtain the asymptotic distribution under the null hypothesis for this new test statistic, allowing for time dependence of the data. Furthermore, we introduce versions of all three test statistics for gradual change situations, which have not been previously considered for functional data, and derive their limit distribution. Further results shed light on the asymptotic power behavior for all test statistics under various ground truths for the alternatives. - oai:arXiv.org:2601.16058v1 - math.ST - stat.ME - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Claudia Kirch, Hedvika Rano\v{s}ov\'a, Martin Wendler - - - On damage of interpolation to adversarial robustness in regression - https://arxiv.org/abs/2601.16070 - arXiv:2601.16070v1 Announce Type: new -Abstract: Deep neural networks (DNNs) typically involve a large number of parameters and are trained to achieve zero or near-zero training error. Despite such interpolation, they often exhibit strong generalization performance on unseen data, a phenomenon that has motivated extensive theoretical investigations. Comforting results show that interpolation indeed may not affect the minimax rate of convergence under the squared error loss. In the mean time, DNNs are well known to be highly vulnerable to adversarial perturbations in future inputs. A natural question then arises: Can interpolation also escape from suboptimal performance under a future $X$-attack? In this paper, we investigate the adversarial robustness of interpolating estimators in a framework of nonparametric regression. A finding is that interpolating estimators must be suboptimal even under a subtle future $X$-attack, and achieving perfect fitting can substantially damage their robustness. An interesting phenomenon in the high interpolation regime, which we term the curse of simple size, is also revealed and discussed. Numerical experiments support our theoretical findings. - oai:arXiv.org:2601.16070v1 - stat.ML - cs.LG - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jingfu Peng, Yuhong Yang - - - A forward-only scheme for online learning of proposal distributions in particle filters - https://arxiv.org/abs/2601.16089 - arXiv:2601.16089v1 Announce Type: new -Abstract: We introduce a new online approach for constructing proposal distributions in particle filters using a forward scheme. Our method progressively incorporates future observations to refine proposals. This is in contrast to backward-scheme algorithms that require access to the entire dataset, such as the iterated auxiliary particle filters (Guarniero et al., 2017, arXiv:1511.06286) and controlled sequential Monte Carlo (Heng et al., 2020, arXiv:1708.08396) which leverage all future observations through backward recursion. In comparison, our forward scheme achieves a gradual improvement of proposals that converges toward the proposal targeted by these backward methods. 
We show that backward approaches can be numerically unstable even in simple settings. Our forward method, however, offers significantly greater robustness with only a minor trade-off in performance, measured by the variance of the marginal likelihood estimator. Numerical experiments on both simulated and real data illustrate the enhanced stability of our forward approach. - oai:arXiv.org:2601.16089v1 - stat.CO - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Sylvain Procope-Mamert, Nicolas Chopin, Maud Delattre, Guillaume Kon Kam King - - - On the spherical cardioid distribution and its goodness-of-fit - https://arxiv.org/abs/2601.16095 - arXiv:2601.16095v1 Announce Type: new -Abstract: In this paper, we study the spherical cardioid distribution, a higher-dimensional and higher-order generalization of the circular cardioid distribution. This distribution is rotationally symmetric and generates unimodal, multimodal, axial, and girdle-like densities. We show several characteristics of the spherical cardioid that make it highly tractable: simple density evaluation, closedness under convolution, explicit expressions for vectorized moments, and efficient simulation. The moments of the spherical cardioid up to a given order coincide with those of the uniform distribution on the sphere, highlighting its closeness to the latter. We derive estimators by the method of moments and maximum likelihood, their asymptotic distributions, and their asymptotic relative efficiencies. We give the machinery for a bootstrap goodness-of-fit test based on the projected-ecdf approach, including the projected distribution and closed-form expressions for test statistics. An application to modeling the orbits of long-period comets shows the usefulness of the spherical cardioid distribution in real data analyses. - oai:arXiv.org:2601.16095v1 - stat.ME - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Eduardo Garc\'ia-Portugu\'es - - - Synthetic Augmentation in Imbalanced Learning: When It Helps, When It Hurts, and How Much to Add - https://arxiv.org/abs/2601.16120 - arXiv:2601.16120v1 Announce Type: new -Abstract: Imbalanced classification, where one class is observed far less frequently than the other, often causes standard training procedures to prioritize the majority class and perform poorly on rare but important cases. A classic and widely used remedy is to augment the minority class with synthetic examples, but two basic questions remain under-resolved: when does synthetic augmentation actually help, and how many synthetic samples should be generated? - We develop a unified statistical framework for synthetic augmentation in imbalanced learning, studying models trained on imbalanced data augmented with synthetic minority samples and evaluated under the balanced population risk. Our theory shows that synthetic data is not always beneficial. In a ``local symmetry" regime, imbalance is not the dominant source of error near the balanced optimum, so adding synthetic samples cannot improve learning rates and can even degrade performance by amplifying generator mismatch. When augmentation can help (a ``local asymmetry" regime), the optimal synthetic size depends on generator accuracy and on whether the generator's residual mismatch is directionally aligned with the intrinsic majority-minority shift. 
This structure can make the best synthetic size deviate from naive full balancing, sometimes by a small refinement and sometimes substantially when generator bias is systematic. Practically, we recommend Validation-Tuned Synthetic Size (VTSS): select the synthetic size by minimizing balanced validation loss over a range centered near the fully balanced baseline, while allowing meaningful departures when the data indicate them. Simulations and a real sepsis prediction study support the theory and illustrate when synthetic augmentation helps, when it cannot, and how to tune its quantity effectively. - oai:arXiv.org:2601.16120v1 - stat.ML - cs.LG - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Zhengchi Ma, Anru R. Zhang - - Beyond Predictive Uncertainty: Reliable Representation Learning with Structural Constraints - https://arxiv.org/abs/2601.16174 - arXiv:2601.16174v1 Announce Type: new -Abstract: Uncertainty estimation in machine learning has traditionally focused on the prediction stage, aiming to quantify confidence in model outputs while treating learned representations as deterministic and reliable by default. In this work, we challenge this implicit assumption and argue that reliability should be regarded as a first-class property of learned representations themselves. We propose a principled framework for reliable representation learning that explicitly models representation-level uncertainty and leverages structural constraints as inductive biases to regularize the space of feasible representations. Our approach introduces uncertainty-aware regularization directly in the representation space, encouraging representations that are not only predictive but also stable, well-calibrated, and robust to noise and structural perturbations. Structural constraints, such as sparsity, relational structure, or feature-group dependencies, are incorporated to define meaningful geometry and reduce spurious variability in learned representations, without assuming fully correct or noise-free structure. Importantly, the proposed framework is independent of specific model architectures and can be integrated with a wide range of representation learning methods. - oai:arXiv.org:2601.16174v1 - stat.ML - cs.LG - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Yiyao Yang - - Inference on the Significance of Modalities in Multimodal Generalized Linear Models - https://arxiv.org/abs/2601.16196 - arXiv:2601.16196v1 Announce Type: new -Abstract: Despite the popularity of multimodal statistical models, rigorous statistical inference tools for inferring the significance of a single modality within a multimodal model are lacking, especially in high-dimensional models. For high-dimensional multimodal generalized linear models, we propose a novel entropy-based metric, called the expected relative entropy, to quantify the information gain of one modality in addition to all other modalities in the model. We propose a deviance-based statistic to estimate the expected relative entropy, prove that it is consistent and that its asymptotic distribution can be approximated by a non-central chi-squared distribution. This enables the calculation of confidence intervals and p-values to assess the significance of the expected relative entropy for a given modality. 
We numerically evaluate the empirical performance of our proposed inference tool by simulations and apply it to a multimodal neuroimaging dataset to demonstrate its good performance on various high-dimensional multimodal generalized linear models. - oai:arXiv.org:2601.16196v1 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Wanting Jin, Guorong Wu, Quefeng Li - - - You Need Better Attention Priors - https://arxiv.org/abs/2601.15380 - arXiv:2601.15380v1 Announce Type: cross -Abstract: We generalize the attention mechanism by viewing it through the lens of Entropic Optimal Transport, revealing that standard attention corresponds to a transport problem regularized by an implicit uniform prior. We introduce Generalized Optimal transport Attention with Trainable priors (GOAT), a new attention mechanism that replaces this naive assumption with a learnable, continuous prior. This prior maintains full compatibility with optimized kernels such as FlashAttention. GOAT also provides an EOT-based explanation of attention sinks and materializes a solution for them, avoiding the representational trade-offs of standard attention. Finally, by absorbing spatial information into the core attention computation, GOAT learns an extrapolatable prior that combines the flexibility of learned positional embeddings with the length generalization of fixed encodings. - oai:arXiv.org:2601.15380v1 - cs.LG - cs.CL - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Elon Litman, Gabe Guo - - - A tensor network formalism for neuro-symbolic AI - https://arxiv.org/abs/2601.15442 - arXiv:2601.15442v1 Announce Type: cross -Abstract: The unification of neural and symbolic approaches to artificial intelligence remains a central open challenge. In this work, we introduce a tensor network formalism, which captures sparsity principles originating in the different approaches in tensor decompositions. In particular, we describe a basis encoding scheme for functions and model neural decompositions as tensor decompositions. The proposed formalism can be applied to represent logical formulas and probability distributions as structured tensor decompositions. This unified treatment identifies tensor network contractions as a fundamental inference class and formulates efficiently scaling reasoning algorithms, originating from probability theory and propositional logic, as contraction message passing schemes. The framework enables the definition and training of hybrid logical and probabilistic models, which we call Hybrid Logic Network. The theoretical concepts are accompanied by the python library tnreason, which enables the implementation and practical use of the proposed architectures. - oai:arXiv.org:2601.15442v1 - cs.AI - cs.LG - cs.LO - cs.NA - math.NA - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alex Goessmann, Janina Sch\"utte, Maximilian Fr\"ohlich, Martin Eigel - - - Learning from Synthetic Data: Limitations of ERM - https://arxiv.org/abs/2601.15468 - arXiv:2601.15468v1 Announce Type: cross -Abstract: The prevalence and low cost of LLMs have led to a rise of synthetic content. From review sites to court documents, ``natural'' content has been contaminated by data points that appear similar to natural data, but are in fact LLM-generated. In this work we revisit fundamental learning theory questions in this, now ubiquitous, setting. 
We model this scenario as a sequence of learning tasks where the input is a mix of natural and synthetic data, and the learning algorithms are oblivious to the origin of any individual example. - We study the possibilities and limitations of ERM in this setting. For the problem of estimating the mean of an arbitrary $d$-dimensional distribution, we find that while ERM converges to the true mean, it is outperformed by an algorithm that assigns non-uniform weights to examples from different generations of data. For the PAC learning setting, the disparity is even more stark. We find that ERM does not always converge to the true concept, echoing the model collapse literature. However, we show there are algorithms capable of learning the correct hypothesis for arbitrary VC classes and arbitrary amounts of contamination. - oai:arXiv.org:2601.15468v1 - cs.LG - cs.DS - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Kareem Amin, Alex Bie, Weiwei Kong, Umar Syed, Sergei Vassilvitskii - - - BanditLP: Large-Scale Stochastic Optimization for Personalized Recommendations - https://arxiv.org/abs/2601.15552 - arXiv:2601.15552v1 Announce Type: cross -Abstract: We present BanditLP, a scalable multi-stakeholder contextual bandit framework that unifies neural Thompson Sampling for learning objective-specific outcomes with a large-scale linear program for constrained action selection at serving time. The methodology is application-agnostic, compatible with arbitrary neural architectures, and deployable at web scale, with an LP solver capable of handling billions of variables. Experiments on public benchmarks and synthetic data show consistent gains over strong baselines. We apply this approach in LinkedIn's email marketing system and demonstrate business win, illustrating the value of integrated exploration and constrained optimization in production. - oai:arXiv.org:2601.15552v1 - cs.LG - cs.AI - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Phuc Nguyen, Benjamin Zelditch, Joyce Chen, Rohit Patra, Changshuai Wei - - - Lead distance under a pickoff limit in Major League Baseball: A sequential game model - https://arxiv.org/abs/2601.15608 - arXiv:2601.15608v1 Announce Type: cross -Abstract: Major League Baseball (MLB) recently limited pitchers to three pickoff attempts, creating a cat-and-mouse game between pitcher and runner. Each failed attempt adds pressure on the pitcher to avoid using another, and the runner can intensify this pressure by extending their leadoff toward the next base. We model this dynamic as a two-player zero-sum sequential game in which the runner first chooses a lead distance, and then the pitcher chooses whether to attempt a pickoff. We establish optimality characterizations for the game and present variants of value iteration and policy iteration to solve the game. Using lead distance data, we estimate generalized linear mixed-effects models for pickoff and stolen base outcome probabilities given lead distance, context, and player skill. We compute the game-theoretic equilibria under the two-player model, as well as the optimal runner policy under a simplified one-player Markov decision process (MDP) model. In the one-player setting, our results establish an actionable rule of thumb: the Two-Foot Rule, which recommends that a runner increase their lead by two feet after each pickoff attempt. 
- oai:arXiv.org:2601.15608v1 - math.OC - stat.AP - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Scott Powers, Sivaramakrishnan Ramani, Jacob Hahn, Andrew J. Schaefer - - Community-Size Biases in Statistical Inference of Communities in Temporal Networks - https://arxiv.org/abs/2601.15635 - arXiv:2601.15635v1 Announce Type: cross -Abstract: In the study of time-dependent (i.e., temporal) networks, researchers often examine the evolution of communities, which are densely connected sets of nodes that are sparsely connected to other nodes. An increasingly prominent approach to studying community structure in temporal networks is statistical inference. In the present paper, we study the performance of a class of statistical-inference methods for community detection in temporal networks. We represent temporal networks as multilayer networks, with each layer encoding a time step, and we illustrate that statistical-inference models that generate community assignments via either a uniform distribution on community assignments or discrete-time Markov processes are biased against generating communities with large or small numbers of nodes. In particular, we demonstrate that statistical-inference methods that use such generative models tend to poorly identify community structure in networks with large or small communities. To rectify this issue, we introduce a novel statistical model that generates the community assignments of the nodes in a given layer (i.e., at a given time) using all of the community assignments in the previous layer. We prove results that guarantee that our approach greatly mitigates the bias against large and small communities, so using our generative model is beneficial for studying community structure in networks with large or small communities. Our code is available at https://github.com/tfaust0196/TemporalCommunityComparison. - oai:arXiv.org:2601.15635v1 - cs.SI - physics.soc-ph - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Theodore Y. Faust, Arash A. Amini, Mason A. Porter - - An Empirical Study on Ensemble-Based Transfer Learning Bayesian Optimisation with Mixed Variable Types - https://arxiv.org/abs/2601.15640 - arXiv:2601.15640v1 Announce Type: cross -Abstract: Bayesian optimisation is a sample-efficient method for finding a global optimum of expensive black-box objective functions. Historic datasets from related problems can be exploited to help improve the performance of Bayesian optimisation by adapting transfer learning methods to various components of the Bayesian optimisation pipeline. In this study we perform an empirical analysis of various ensemble-based transfer learning Bayesian optimisation methods and pipeline components. We expand on previous work in the literature by contributing some specific pipeline components, and three new real-time transfer learning Bayesian optimisation benchmarks. In particular we propose to use a weighting strategy for ensemble surrogate model predictions based on regularised regression with weights constrained to be positive, and a related component for handling the case when transfer learning is not improving Bayesian optimisation performance. We find that in general, two components that help improve transfer learning Bayesian optimisation performance are warm start initialisation and constraining the weights used with the ensemble surrogate model to be positive. 
- oai:arXiv.org:2601.15640v1 - cs.LG - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Natasha Trinkle, Huong Ha, Jeffrey Chan - - - From Passive Metric to Active Signal: The Evolving Role of Uncertainty Quantification in Large Language Models - https://arxiv.org/abs/2601.15690 - arXiv:2601.15690v1 Announce Type: cross -Abstract: While Large Language Models (LLMs) show remarkable capabilities, their unreliability remains a critical barrier to deployment in high-stakes domains. This survey charts a functional evolution in addressing this challenge: the evolution of uncertainty from a passive diagnostic metric to an active control signal guiding real-time model behavior. We demonstrate how uncertainty is leveraged as an active control signal across three frontiers: in \textbf{advanced reasoning} to optimize computation and trigger self-correction; in \textbf{autonomous agents} to govern metacognitive decisions about tool use and information seeking; and in \textbf{reinforcement learning} to mitigate reward hacking and enable self-improvement via intrinsic rewards. By grounding these advancements in emerging theoretical frameworks like Bayesian methods and Conformal Prediction, we provide a unified perspective on this transformative trend. This survey provides a comprehensive overview, critical analysis, and practical design patterns, arguing that mastering the new trend of uncertainty is essential for building the next generation of scalable, reliable, and trustworthy AI. - oai:arXiv.org:2601.15690v1 - cs.AI - stat.AP - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Jiaxin Zhang, Wendi Cui, Zhuohang Li, Lifu Huang, Bradley Malin, Caiming Xiong, Chien-Sheng Wu - - - Extreme Score Distributions in Countable-Outcome Round-Robin Tournaments of Equally Strong Players - https://arxiv.org/abs/2601.15950 - arXiv:2601.15950v1 Announce Type: cross -Abstract: We consider a general class of round-robin tournament models of equally strong players. In these models, each of the $n$ players competes against every other player exactly once. For each match between two players, the outcome is a value from a countable subset of the unit interval, and the scores of the two players in a match sum to one. The final score of each player is defined as the sum of the scores obtained in matches against all other players. We study the distribution of extreme scores, including the maximum, second maximum, and lower-order extremes. Since the exact distribution is computationally intractable even for small values of $n$, we derive asymptotic results as the number of players $n$ tends to infinity, including limiting distributions, and rates of convergence. - oai:arXiv.org:2601.15950v1 - math.PR - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yaakov Malinovsky - - - Minimax-optimal Halpern iterations for Lipschitz maps - https://arxiv.org/abs/2601.15996 - arXiv:2601.15996v1 Announce Type: cross -Abstract: This paper investigates the minimax-optimality of Halpern fixed-point iterations for Lipschitz maps in general normed spaces. Starting from an a priori bound on the orbit of iterates, we derive non-asymptotic estimates for the fixed-point residuals. These bounds are tight, meaning that they are attained by a suitable Lipschitz map and an associated Halpern sequence. By minimizing these tight bounds we identify the minimax-optimal Halpern scheme. 
For contractions, the optimal iteration exhibits a transition from an initial Halpern phase to the classical Banach-Picard iteration and, as the Lipschitz constant approaches one, we recover the known convergence rate for nonexpansive maps. For expansive maps, the algorithm is purely Halpern with no Banach-Picard phase; moreover, on bounded domains, the residual estimates converge to the minimal displacement bound. Inspired by the minimax-optimal iteration, we design an adaptive scheme whose residuals are uniformly smaller than the minimax-optimal bounds, and can be significantly sharper in practice. Finally, we extend the analysis by introducing alternative bounds based on the distance to a fixed point, which allow us to handle mappings on unbounded domains; including the case of affine maps for which we also identify the minimax-optimal iteration. - oai:arXiv.org:2601.15996v1 - math.OC - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mario Bravo, Roberto Cominetti, Jongmin Lee - - - Pushing the limits of unconstrained machine-learned interatomic potentials - https://arxiv.org/abs/2601.16195 - arXiv:2601.16195v1 Announce Type: cross -Abstract: Machine-learned interatomic potentials (MLIPs) are increasingly used to replace computationally demanding electronic-structure calculations to model matter at the atomic scale. The most commonly used model architectures are constrained to fulfill a number of physical laws exactly, from geometric symmetries to energy conservation. Evidence is mounting that relaxing some of these constraints can be beneficial to the efficiency and (somewhat surprisingly) accuracy of MLIPs, even though care should be taken to avoid qualitative failures associated with the breaking of physical symmetries. Given the recent trend of \emph{scaling up} models to larger numbers of parameters and training samples, a very important question is how unconstrained MLIPs behave in this limit. Here we investigate this issue, showing that -- when trained on large datasets -- unconstrained models can be superior in accuracy and speed when compared to physically constrained models. We assess these models both in terms of benchmark accuracy and in terms of usability in practical scenarios, focusing on static simulation workflows such as geometry optimization and lattice dynamics. We conclude that accurate unconstrained models can be applied with confidence, especially since simple inference-time modifications can be used to recover observables that are consistent with the relevant physical symmetries. - oai:arXiv.org:2601.16195v1 - physics.chem-ph - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Filippo Bigi, Paolo Pegolo, Arslan Mazitov, Michele Ceriotti - - - Parameterising the effect of a continuous treatment using average derivative effects - https://arxiv.org/abs/2109.13124 - arXiv:2109.13124v2 Announce Type: replace -Abstract: The average treatment effect (ATE) is commonly used to quantify the main effect of a binary treatment on an outcome. Extensions to continuous treatments are usually based on the dose-response curve or shift interventions, but both require strong overlap conditions and the resulting curves may be difficult to summarise. We focus instead on average derivative effects (ADEs) that are scalar estimands related to infinitesimal shift interventions requiring only local overlap assumptions. 
ADEs, however, are rarely used in practice because their estimation usually requires estimating conditional density functions. By characterising the Riesz representers of weighted ADEs, we propose a new class of estimands that provides a unified view of weighted ADEs/ATEs when the treatment is continuous/binary. We derive the estimand in our class that minimises the nonparametric efficiency bound, thereby extending optimal weighting results from the binary treatment literature to the continuous setting. We develop efficient estimators for two weighted ADEs that avoid density estimation and are amenable to modern machine learning methods, which we evaluate in simulations and an applied analysis of Warfarin dosage effects. - oai:arXiv.org:2109.13124v2 - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Oliver J. Hines, Karla Diaz-Ordaz, Stijn Vansteelandt - - Sequential model confidence sets - https://arxiv.org/abs/2404.18678 - arXiv:2404.18678v4 Announce Type: replace -Abstract: In most prediction and estimation situations, scientists consider various statistical models for the same problem, and naturally want to select amongst the best. Hansen et al. (2011) provide a powerful solution to this problem via the so-called model confidence set, a subset of the original set of available models that contains the best models with a given level of confidence. Importantly, model confidence sets respect the underlying selection uncertainty by being flexible in size. However, they presuppose a fixed sample size, which stands in contrast to the fact that model selection and forecast evaluation are inherently sequential tasks where we successively collect new data and where the decision to continue or conclude a study may depend on the previous outcomes. In this article, we extend model confidence sets sequentially over time by relying on sequential testing methods. Recently, e-processes and confidence sequences have been introduced as new, safe methods for assessing statistical evidence. Sequential model confidence sets allow us to continuously monitor the models' performances and come with time-uniform, nonasymptotic coverage guarantees. - oai:arXiv.org:2404.18678v4 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Sebastian Arnold, Georgios Gavrilopoulos, Benedikt Schulz, Johanna Ziegel - - Beyond Fixed Horizons: A Theoretical Framework for Adaptive Denoising Diffusions - https://arxiv.org/abs/2501.19373 - arXiv:2501.19373v2 Announce Type: replace -Abstract: We introduce a new class of generative diffusion models that, unlike conventional denoising diffusion models, achieve a time-homogeneous structure for both the noising and denoising processes, allowing the number of steps to adaptively adjust based on the noise level. This is accomplished by conditioning the forward process using Doob's $h$-transform, which terminates the process at a suitable sampling distribution at a random time. The model is particularly well suited for generating data with lower intrinsic dimensions, as the termination criterion simplifies to a first-hitting rule. A key feature of the model is its adaptability to the target data, enabling a variety of downstream tasks using a pre-trained unconditional generative model. These tasks include natural conditioning through appropriate initialisation of the denoising process and classification of noisy data.
- oai:arXiv.org:2501.19373v2 - stat.ML - cs.LG - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - S\"oren Christensen, Jan Kallsen, Claudia Strauch, Lukas Trottner - - - Spectral decomposition-assisted multi-study factor analysis - https://arxiv.org/abs/2502.14600 - arXiv:2502.14600v2 Announce Type: replace -Abstract: This article focuses on covariance estimation for multi-study data. Popular approaches employ factor-analytic terms with shared and study-specific loadings that decompose the variance into (i) a shared low-rank component, (ii) study-specific low-rank components, and (iii) a diagonal term capturing idiosyncratic variability. Our proposed methodology estimates the latent factors via spectral decompositions, with a novel approach for separating shared and specific factors, and infers the factor loadings and residual variances via surrogate Bayesian regressions. The resulting posterior has a simple product form across outcomes, bypassing the need for Markov chain Monte Carlo sampling and facilitating parallelization. The proposed methodology has major advantages over current Bayesian competitors in terms of computational speed, scalability and stability while also having strong frequentist guarantees. The theory and methods also add to the rich literature on frequentist methods for factor models with shared and group-specific components of variation. The approximation error decreases as the sample size and the data dimension diverge, formalizing a blessing of dimensionality. We show favorable asymptotic properties, including central limit theorems for point estimators and posterior contraction, and excellent empirical performance in simulations. The methods are applied to integrate three studies on gene associations among immune cells. - oai:arXiv.org:2502.14600v2 - stat.ME - stat.CO - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Lorenzo Mauri, Niccol\`o Anceschi, David B. Dunson - - - Local geometry of high-dimensional mixture models: Effective spectral theory and dynamical transitions - https://arxiv.org/abs/2502.15655 - arXiv:2502.15655v3 Announce Type: replace -Abstract: We study the local geometry of empirical risks in high dimensions via the spectral theory of their Hessian and information matrices. We focus on settings where the data, $(Y_\ell)_{\ell =1}^n \in \mathbb{R}^d$, are i.i.d. draws of a $k$-Gaussian mixture model, and the loss depends on the projection of the data into a fixed number of vectors, namely $\mathbf{x}^\top Y$, where $\mathbf{x}\in \mathbb{R}^{d\times C}$ are the parameters, and $C$ need not equal $k$. This setting captures a broad class of problems such as classification by one and two-layer networks and regression on multi-index models. We provide exact formulas for the limits of the empirical spectral distribution and outlier eigenvalues and eigenvectors of such matrices in the proportional asymptotics limit, where the number of samples and dimension $n,d\to\infty$ and $n/d=\phi \in (0,\infty)$. These limits depend on the parameters $\mathbf{x}$ only through the summary statistic of the $(C+k)\times (C+k)$ Gram matrix of the parameters and class means, $\mathbf{G} = (\mathbf{x},\boldsymbol{\mu})^\top(\mathbf{x},\boldsymbol{\mu})$. 
- It is known that under general conditions, when $\mathbf{x}$ is trained by online stochastic gradient descent, the evolution of these same summary statistics along training converges to the solution of an autonomous system of ODEs, called the effective dynamics. This enables us to connect the training dynamics to the spectral theory of these matrices generated with test data. We demonstrate our general results by analyzing the effective spectrum along the effective dynamics in the case of multi-class logistic regression. In this setting, the empirical Hessian and information matrices have substantially different spectra, each with their own static and even dynamical spectral transitions. - oai:arXiv.org:2502.15655v3 - math.ST - math.PR - stat.ML - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Gerard Ben Arous, Reza Gheissari, Jiaoyang Huang, Aukosh Jagannath - - Likelihood Matching for Diffusion Models - https://arxiv.org/abs/2508.03636 - arXiv:2508.03636v2 Announce Type: replace -Abstract: We propose a Likelihood Matching approach for training diffusion models by first establishing an equivalence between the likelihood of the target data distribution and a likelihood along the sample path of the reverse diffusion. To efficiently compute the reverse sample likelihood, a quasi-likelihood is considered that approximates each reverse transition density by a Gaussian distribution with matched conditional mean and covariance. The score and Hessian functions for the diffusion generation are estimated by maximizing the quasi-likelihood, ensuring a consistent matching of the first two transitional moments between every two time points. A stochastic sampler that leverages both the estimated score and Hessian information is introduced to facilitate computation. We establish consistency of the quasi-maximum likelihood estimation, and provide non-asymptotic convergence guarantees for the proposed sampler, quantifying the rates of the approximation errors due to the score and Hessian estimation, dimensionality, and the number of diffusion steps. Empirical and simulation evaluations demonstrate the effectiveness of the proposed Likelihood Matching and validate the theoretical results. - oai:arXiv.org:2508.03636v2 - stat.ML - cs.LG - math.ST - stat.AP - stat.ME - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Lei Qian, Wu Su, Yanqi Huang, Song Xi Chen - - Not All Accuracy Is Equal: Prioritizing Independence in Infectious Disease Forecasting - https://arxiv.org/abs/2509.21191 - arXiv:2509.21191v2 Announce Type: replace -Abstract: Ensemble forecasts have become a cornerstone of large-scale disease response, underpinning decision making at agencies such as the US Centers for Disease Control and Prevention (CDC). Their growing use reflects the goal of combining multiple models to improve accuracy and stability compared with relying on any single model. However, while ensembles regularly demonstrate stability against individual model failures, improved accuracy is not guaranteed. During the COVID-19 pandemic, the CDC's multi-model ensemble outperformed the best single model by only 1\%, and CDC flu ensembles have often ranked below individual models. - Prior work has established that ensemble performance depends critically on diversity: when models make independent errors, combining them yields substantial gains. In practice, however, this diversity is often lacking.
Here, we propose that this is due in part to how models are developed and selected: both modelers and ensemble builders optimize for stand-alone accuracy rather than ensemble contribution, and most epidemic forecasts are built from a small set of approaches trained on the same surveillance data. The result is highly correlated errors, limiting the benefit of ensembling. - This suggests that in developing models and ensembles, we should prioritize models that contribute complementary information rather than replicating existing approaches. We present a toy example illustrating the theoretical cost of correlated errors, analyze correlations among COVID-19 forecasting models, and propose improvements to model fitting and ensemble construction that foster genuine diversity. Ensembles built with this principle in mind produce forecasts that are more robust and more valuable for epidemic preparedness and response. - oai:arXiv.org:2509.21191v2 - stat.AP - q-bio.QM - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Carson Dudley, Marisa Eisenberg - - - regTPS-KLE: A Novel Approach To Approximate A Gaussian Random Field for Bayesian Spatial Modeling - https://arxiv.org/abs/2510.04256 - arXiv:2510.04256v2 Announce Type: replace -Abstract: Gaussian random field is a ubiquitous model for spatial phenomena in diverse scientific disciplines. Its approximation is often crucial for computational feasibility in simulation, inference, and uncertainty quantification. The Karhunen-Lo\`eve Expansion provides a theoretically optimal basis for representing a Gaussian random field as a sum of deterministic orthonormal functions weighted by uncorrelated random variables. While this is a well-established method for dimension reduction and approximation of (spatial) stochastic processes, its practical application depends on the explicit or implicit definition of the covariance structure. In this work, we propose a novel approach, referred to as regTPS-KLE, for approximating a Gaussian random field by explicitly constructing its covariance via a regularized thin plate spline (TPS) kernel. Because TPS kernels are conditionally positive definite and lack a direct spectral decomposition, we formulate the covariance as the inverse of a regularized elliptic operator. To evaluate its statistical performance, we compare its predictive accuracy and computational efficiency with a Gaussian random field approximation constructed using the stochastic partial differential equations (SPDE) method and implemented within an MCMC algorithm. In simulation studies, the predictive differences between the SPDE and regTPS-KLE models were minimal when the spatial field was generated using Mat\`ern and exponential covariance functions, while regTPS-KLE models consistently outperformed the SPDE approach in terms of computational efficiency. In a real data application, regTPS-KLE exhibits superior predictive accuracy compared with SPDE models based on leave-one-out cross-validation while also achieving improved computational efficiency. - oai:arXiv.org:2510.04256v2 - stat.CO - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Joaquin Cavieres, Sebastian Krumscheid - - - Inference in pseudo-observation-based regression using (biased) covariance estimation and naive bootstrapping - https://arxiv.org/abs/2510.06815 - arXiv:2510.06815v2 Announce Type: replace -Abstract: The pseudo-observation method is regularly applied to time-to-event data. 
However, to date such analyses have relied on formally unverified statements or ad-hoc methods regarding covariance estimation. This paper strives to close this gap in the literature. To begin with, we demonstrate that the usual Huber-White estimator is not consistent for the limiting covariance of parameter estimates in pseudo-observation regression approaches. By confirming that a plug-in estimator can be used instead, we obtain asymptotically exact and consistent tests for general linear hypotheses in the parameters of the model. Additionally, we confirm that naive bootstrapping cannot be used for covariance estimation in the pseudo-observation model either. However, it can be used for hypothesis testing by applying a suitable studentization. Simulations illustrate the good performance of our proposed methods in many scenarios. Finally, we obtain a general uniform law of large numbers for U- and V-statistics, as such statistics are central in the mathematical analysis of the inference procedures developed in this work. - oai:arXiv.org:2510.06815v2 - stat.ME - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Simon Mack, Morten Overgaard, Dennis Dobler - - RESOLVE-IPD: High-Fidelity Individual Patient Data Reconstruction and Uncertainty-Aware Subgroup Meta-Analysis - https://arxiv.org/abs/2511.01785 - arXiv:2511.01785v2 Announce Type: replace -Abstract: Individual patient data (IPD) from oncology trials are essential for reliable evidence synthesis but are rarely publicly available, necessitating reconstruction from published Kaplan-Meier (KM) curves. Existing reconstruction methods suffer from digitization errors, unrealistic uniform censoring assumptions, and the inability to recover subgroup-level IPD when only aggregate statistics are available. To address these limitations, we developed RESOLVE-IPD, a unified computational framework that enables high-fidelity IPD reconstruction and uncertainty-aware subgroup meta-analysis. RESOLVE-IPD comprises two components. The first component, High-Fidelity IPD Reconstruction, integrates the VEC-KM and CEN-KM modules: VEC-KM extracts precise KM coordinates and explicit censoring marks from vectorized figures, minimizing digitization error, while CEN-KM corrects overlapping censor symbols and eliminates the uniform censoring assumption. The second component, Uncertainty-Aware Subgroup Recovery, employs the MAPLE (Marginal Assignment of Plausible Labels and Evidence Propagation) algorithm to infer patient-level subgroup labels consistent with published summary statistics (e.g., hazard ratio, median overall survival) when subgroup KM curves are unavailable. MAPLE generates ensembles of mathematically valid labelings, facilitating a propagating meta-analysis that quantifies and reflects uncertainty from subgroup reconstruction. RESOLVE-IPD was validated through a subgroup meta-analysis of four trials in advanced esophageal squamous cell carcinoma, focusing on the programmed death ligand 1 (PD-L1)-low population. RESOLVE-IPD enables accurate IPD reconstruction and robust, uncertainty-aware subgroup meta-analyses, strengthening the reliability and transparency of secondary evidence synthesis in precision oncology.
- oai:arXiv.org:2511.01785v2 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Lang Lang, Yao Zhao, Qiuxin Gao, Yanxun Xu - - - Univariate-Guided Sparse Regression for Biobank-Scale High-Dimensional Omics Data - https://arxiv.org/abs/2511.22049 - arXiv:2511.22049v4 Announce Type: replace -Abstract: We present a scalable framework for computing polygenic risk scores (PRS) in high-dimensional genomic settings using the recently introduced Univariate-Guided Sparse Regression (uniLasso). UniLasso is a two-stage penalized regression procedure that leverages univariate coefficients and magnitudes to stabilize feature selection and enhance interpretability. Building on its theoretical and empirical advantages, we adapt uniLasso for application to the UK Biobank, a population-based repository comprising over one million genetic variants measured on hundreds of thousands of individuals from the United Kingdom. We further extend the framework to incorporate external summary statistics to increase predictive accuracy. Our results demonstrate that uniLasso attains predictive performance comparable to standard Lasso while selecting substantially fewer variants, yielding sparser and more interpretable models. Moreover, it exhibits superior performance in estimating PRS relative to its competitors, such as PRS-CS. Integrating external scores further improves prediction while maintaining sparsity. - oai:arXiv.org:2511.22049v4 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Joshua Richland, Tuomo Kiiskinen, William Wang, Sophia Lu, Balasubramanian Narasimhan, Trevor Hastie, Manuel Rivas, Robert Tibshirani - - - TPV: Parameter Perturbations Through the Lens of Test Prediction Variance - https://arxiv.org/abs/2512.11089 - arXiv:2512.11089v3 Announce Type: replace -Abstract: We identify test prediction variance (TPV)-- the first-order sensitivity of model outputs to parameter perturbations around a trained solution-- as a unifying quantity that links several classical observations about generalization in deep networks. TPV is a fully label-free object whose trace form separates the geometry of the trained model from the specific perturbation mechanism, allowing a broad family of parameter perturbations like SGD noise, label noise, finite-precision noise, and other post-training perturbations to be analyzed under a single framework. - Theoretically, we show that TPV estimated on the training set converges to its test-set value in the overparameterized limit, providing the first result that prediction variance under local parameter perturbations can be inferred from training inputs alone, and this stability is decoupled from generalization performance. Empirically, TPV exhibits a striking stability across datasets and architectures even for extremely narrow networks. Further, TPV correlates well with test loss, serving as a training-set based predictive metric for generalization. Code available at github.com/devansharpit/TPV. 
- oai:arXiv.org:2512.11089v3 - stat.ML - cs.LG - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Devansh Arpit - - - Conformal Blindness: A Note on $A$-Cryptic change-points - https://arxiv.org/abs/2601.01147 - arXiv:2601.01147v2 Announce Type: replace -Abstract: Conformal Test Martingales (CTMs) are a standard method within the Conformal Prediction framework for testing the crucial assumption of data exchangeability by monitoring deviations from uniformity in the p-value sequence. Although exchangeability implies uniform p-values, the converse does not hold. This raises the question of whether a significant break in exchangeability can occur, such that the p-values remain uniform, rendering CTMs blind. We answer this affirmatively, demonstrating the phenomenon of \emph{conformal blindness}. - Through explicit construction, for the theoretically ideal ``predictive oracle'' conformity measure (given by the true conditional density), we demonstrate the possibility of an \emph{$A$-cryptic change-point} (where $A$ refers to the conformity measure). Using bivariate Gaussian distributions, we identify a line along which a change in the marginal means does not alter the distribution of the conformity scores, thereby producing perfectly uniform p-values. - Simulations confirm that even a massive distribution shift can be perfectly cryptic to the CTM, highlighting a fundamental limitation and emphasising the critical role of the alignment of the conformity measure with potential shifts. - By contrasting the predictive oracle with recent results on detection-optimal scores, we emphasise that validity monitoring in safety-critical systems requires careful separation of predictive and diagnostic goals. - oai:arXiv.org:2601.01147v2 - stat.ML - cs.LG - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Johan Hallberg Szabadv\'ary - - - Small Gradient Norm Regret for Online Convex Optimization - https://arxiv.org/abs/2601.13519 - arXiv:2601.13519v2 Announce Type: replace -Abstract: This paper introduces a new problem-dependent regret measure for online convex optimization with smooth losses. The notion, which we call the $G^\star$ regret, depends on the cumulative squared gradient norm evaluated at the decision in hindsight $\sum_{t=1}^T \|\nabla \ell(x^\star)\|^2$. We show that the $G^\star$ regret strictly refines the existing $L^\star$ (small loss) regret, and that it can be arbitrarily sharper when the losses have vanishing curvature around the hindsight decision. We establish upper and lower bounds on the $G^\star$ regret and extend our results to dynamic regret and bandit settings. As a byproduct, we refine the existing convergence analysis of stochastic optimization algorithms in the interpolation regime. Some experiments validate our theoretical findings. - oai:arXiv.org:2601.13519v2 - stat.ML - cs.LG - math.OC - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Wenzhi Gao, Chang He, Madeleine Udell - - - Recent advances in the Bradley--Terry Model: theory, algorithms, and applications - https://arxiv.org/abs/2601.14727 - arXiv:2601.14727v2 Announce Type: replace -Abstract: This article surveys recent progress in the Bradley-Terry (BT) model and its extensions. 
We focus on the statistical and computational aspects, with emphasis on the regime in which both the number of objects and the volume of comparisons tend to infinity, a setting relevant to large-scale applications. The main topics include asymptotic theory for statistical estimation and inference, along with the associated algorithms. We also discuss applications of these models, including recent work on preference alignment in machine learning. Finally, we discuss several key challenges and outline directions for future research. - oai:arXiv.org:2601.14727v2 - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Shuxing Fang, Ruijian Han, Yuanhang Luo, Yiming Xu - - - Finite-Sample Inference for Sparsely Permuted Linear Regression - https://arxiv.org/abs/2601.14872 - arXiv:2601.14872v2 Announce Type: replace -Abstract: We study a linear observation model with an unknown permutation called \textit{permuted/shuffled linear regression}, where responses and covariates are mismatched and the permutation forms a discrete, factorial-size parameter. The permutation is a key component of the data-generating process, yet its statistical investigation remains challenging due to its discrete nature. We develop a general statistical inference framework on the permutation and regression coefficients. First, we introduce a localization step that reduces the permutation space to a small candidate set building on recent advances in the repro samples method, whose miscoverage decays polynomially with the number of Monte Carlo samples. Then, based on this localized set, we provide statistical inference procedures: a conditional Monte Carlo test of permutation structures with valid finite-sample Type-I error control. We also develop coefficient inference that remains valid under alignment uncertainty of permutations. For computational purposes, we develop a linear assignment problem computable in polynomial time and demonstrate that, with high probability, the solution is equivalent to that of the conventional least squares with large computational cost. Extensions to partially permuted designs and ridge regularization are further discussed. Extensive simulations and an application to air-quality data corroborate finite-sample validity, strong power to detect mismatches, and practical scalability. - oai:arXiv.org:2601.14872v2 - math.ST - cs.LG - stat.ME - stat.ML - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Hirofumi Ota, Masaaki Imaizumi - - - On the Exponential Convergence for Offline RLHF with Pairwise Comparisons - https://arxiv.org/abs/2406.12205 - arXiv:2406.12205v2 Announce Type: replace-cross -Abstract: We consider the problem of offline reinforcement learning from human feedback (RLHF) with pairwise comparisons proposed by Zhu et al. (2023), where the implicit reward is a linear function of an unknown parameter. Given an offline dataset, our objective consists in ascertaining the optimal action for each state, with the ultimate goal of minimizing the {\em simple regret}. We propose an algorithm, \underline{RL} with \underline{L}ocally \underline{O}ptimal \underline{W}eights or {\sc RL-LOW}, which yields an exponential form of simple regret of $\exp ( - \Omega(n/H) )$ where $n$ is the number of data samples and $H$ denotes an instance-dependent hardness quantity that depends explicitly on the suboptimality gap of each action. 
Furthermore, we derive a first-of-its-kind instance-dependent lower bound in offline RLHF with pairwise comparisons. Interestingly, we observe that the lower and upper bounds on the simple regret match order-wise in the exponent, demonstrating order-wise optimality of our {\sc RL-LOW}. In view of privacy considerations in practical applications, we also extend {\sc RL-LOW} to the setting of $(\varepsilon,\delta)$-differential privacy and show, somewhat surprisingly, that the hardness parameter $H$ is unchanged in the asymptotic regime as $n$ tends to infinity; this underscores the inherent efficiency of {\sc RL-LOW} in terms of preserving the privacy of the observed rewards. Given our focus on establishing instance-dependent bounds of exponential convergence, our work fills a gap left by existing studies, which concentrate on establishing worst-case regrets of {\em inverse polynomial convergence} (e.g., $\widetilde{O}(\frac{1}{\sqrt{n}})$) for offline RLHF with pairwise comparisons. - oai:arXiv.org:2406.12205v2 - cs.LG - cs.AI - cs.IT - math.IT - math.ST - stat.ML - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhirui Chen, Vincent Y. F. Tan - - Embracing Ambiguity: Bayesian Nonparametrics and Stakeholder Participation for Ambiguity-Aware Safety Evaluation - https://arxiv.org/abs/2504.15211 - arXiv:2504.15211v2 Announce Type: replace-cross -Abstract: Evaluations of generative AI models often collapse nuanced behaviour into a single number computed for a single decoding configuration. Such point estimates obscure tail risks, demographic disparities, and the existence of multiple near-optimal operating points. We propose a unified framework that embraces multiplicity by modelling the distribution of harmful behaviour across the entire space of decoding knobs and prompts, quantifying risk through tail-focused metrics, and integrating stakeholder preferences. Our technical contributions are threefold: (i) we formalise decoding Rashomon sets, regions of knob space whose risk is near-optimal under given criteria, and measure their size and disagreement; (ii) we develop a dependent Dirichlet process (DDP) mixture with stakeholder-conditioned stick-breaking weights to learn multi-modal harm surfaces; and (iii) we introduce an active sampling pipeline that uses Bayesian deep learning surrogates to explore knob space efficiently. Our approach bridges multiplicity theory, Bayesian nonparametrics, and stakeholder-aligned sensitivity analysis, paving the way for trustworthy deployment of generative models. - oai:arXiv.org:2504.15211v2 - cs.AI - stat.AP - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Yanan Long - - Life Sequence Transformer: Generative Modelling of Socio-Economic Trajectories from Administrative Data - https://arxiv.org/abs/2506.01874 - arXiv:2506.01874v2 Announce Type: replace-cross -Abstract: Generative modelling with Transformer architectures can simulate complex sequential structures across various applications. We extend this line of work to the social sciences by introducing a Transformer-based generative model tailored to longitudinal socio-economic data.
Our contributions are: (i) we design a novel encoding method that represents socio-economic life histories as sequences, including overlapping events across life domains; and (ii) we adapt generative modelling techniques to simulate plausible alternative life trajectories conditioned on past histories. Using large-scale data from the Italian social security administration (INPS), we show that the model can be trained at scale, reproduces realistic labour market patterns consistent with known causal relationships, and generates coherent hypothetical life paths. This work demonstrates the feasibility of generative modelling for socio-economic trajectories and opens new opportunities for policy-oriented research, with counterfactual generation as a particularly promising application. - oai:arXiv.org:2506.01874v2 - econ.EM - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Alberto Cabezas, Carlotta Montorsi - - - Stability, Complexity and Data-Dependent Worst-Case Generalization Bounds - https://arxiv.org/abs/2507.06775 - arXiv:2507.06775v2 Announce Type: replace-cross -Abstract: Providing generalization guarantees for stochastic optimization algorithms remains a key challenge in learning theory. Recently, numerous works demonstrated the impact of the geometric properties of optimization trajectories on generalization performance. These works propose worst-case generalization bounds in terms of various notions of intrinsic dimension and/or topological complexity, which were found to empirically correlate with the generalization error. However, most of these approaches involve intractable mutual information terms, which limit a full understanding of the bounds. In contrast, some authors built on algorithmic stability to obtain worst-case bounds involving geometric quantities of a combinatorial nature, which are impractical to compute. In this paper, we address these limitations by combining empirically relevant complexity measures with a framework that avoids intractable quantities. To this end, we introduce the concept of \emph{random set stability}, tailored for the data-dependent random sets produced by stochastic optimization algorithms. Within this framework, we show that the worst-case generalization error can be bounded in terms of (i) the random set stability parameter and (ii) empirically relevant, data- and algorithm-dependent complexity measures of the random set. Moreover, our framework improves existing topological generalization bounds by recovering previous complexity notions without relying on mutual information terms. Through a series of experiments in practically relevant settings, we validate our theory by evaluating the tightness of our bounds and the interplay between topological complexity and stability. - oai:arXiv.org:2507.06775v2 - cs.LG - math.AT - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mario Tuci, Lennart Bastian, Benjamin Dupuis, Nassir Navab, Tolga Birdal, Umut \c{S}im\c{s}ekli - - - DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning - https://arxiv.org/abs/2512.08671 - arXiv:2512.08671v4 Announce Type: replace-cross -Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. 
However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor. - oai:arXiv.org:2512.08671v4 - cs.LG - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Huzaifa Arif - - - Distributional Limits for Eigenvalues of Graphon Kernel Matrices - https://arxiv.org/abs/2601.04584 - arXiv:2601.04584v2 Announce Type: replace-cross -Abstract: We study the fluctuation behavior of individual eigenvalues of kernel matrices arising from dense graphon-based random graphs. Under minimal integrability and boundedness assumptions on the graphon, we establish distributional limits for simple, well-separated eigenvalues of the associated integral operator. A sharp probabilistic dichotomy emerges: in the non-degenerate regime, the properly normalized empirical eigenvalue satisfies a central limit theorem with an explicit variance, whereas in the degenerate regime the leading stochastic term vanishes and the centered eigenvalue converges to a weighted chi-square law determined by the operator spectrum. - The analysis requires no smoothness or Lipschitz conditions on the kernel. Prior work under comparable assumptions established only operator convergence and eigenspace consistency; the present results characterize the full distributional behavior of individual eigenvalues, extending fluctuation theory beyond the reach of classical operator-level arguments. The proofs combine second-order perturbation expansions, concentration bounds for kernel matrices, and Hoeffding decompositions for symmetric statistics, revealing that at the $\sqrt{n}$ scale the dominant randomness arises from latent-position sampling rather than Bernoulli edge noise. - oai:arXiv.org:2601.04584v2 - math.PR - math.ST - stat.TH - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Behzad Aalipur - - - Fairness-informed Pareto Optimization : An Efficient Bilevel Framework - https://arxiv.org/abs/2601.13448 - arXiv:2601.13448v2 Announce Type: replace-cross -Abstract: Despite their promise, fair machine learning methods often yield Pareto-inefficient models, in which the performance of certain groups can be improved without degrading that of others. This issue arises frequently in traditional in-processing approaches such as fairness-through-regularization. In contrast, existing Pareto-efficient approaches are biased towards a certain perspective on fairness and fail to adapt to the broad range of fairness metrics studied in the literature. In this paper, we present BADR, a simple framework to recover the optimal Pareto-efficient model for any fairness metric. Our framework recovers its models through a Bilevel Adaptive Rescalarisation procedure. 
The lower level is a weighted empirical risk minimization task where the weights are a convex combination of the groups, while the upper level optimizes the chosen fairness objective. We equip our framework with two novel large-scale, single-loop algorithms, BADR-GD and BADR-SGD, and establish their convergence guarantees. We release badr, an open-source Python toolbox implementing our framework for a variety of learning tasks and fairness metrics. Finally, we conduct extensive numerical experiments demonstrating the advantages of BADR over existing Pareto-efficient approaches to fairness. - oai:arXiv.org:2601.13448v2 - cs.LG - math.OC - stat.ML - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Sofiane Tanji, Samuel Vaiter, Yassine Laguel - - - Recommending Best Paper Awards for ML/AI Conferences via the Isotonic Mechanism - https://arxiv.org/abs/2601.15249 - arXiv:2601.15249v2 Announce Type: replace-cross -Abstract: Machine learning and artificial intelligence conferences such as NeurIPS and ICML now regularly receive tens of thousands of submissions, posing significant challenges to maintaining the quality and consistency of the peer review process. This challenge is particularly acute for best paper awards, which are an important part of the peer review process, yet whose selection has increasingly become a subject of debate in recent years. In this paper, we introduce an author-assisted mechanism to facilitate the selection of best paper awards. Our method employs the Isotonic Mechanism for eliciting authors' assessments of their own submissions in the form of a ranking, which is subsequently utilized to adjust the raw review scores for optimal estimation of the submissions' ground-truth quality. We demonstrate that authors are incentivized to report truthfully when their utility is a convex additive function of the adjusted scores, and we validate this convexity assumption for best paper awards using publicly accessible review data of ICLR from 2019 to 2023 and NeurIPS from 2021 to 2023. Crucially, in the special case where an author has a single quota -- that is, may nominate only one paper -- we prove that truthfulness holds even when the utility function is merely nondecreasing and additive. This finding represents a substantial relaxation of the assumptions required in prior work. For practical implementation, we extend our mechanism to accommodate the common scenario of overlapping authorship. Finally, simulation results demonstrate that our mechanism significantly improves the quality of papers selected for awards. - oai:arXiv.org:2601.15249v2 - cs.LG - cs.AI - cs.GT - stat.ME - Fri, 23 Jan 2026 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Garrett G. Wen, Buxin Su, Natalie Collina, Zhun Deng, Weijie Su -