diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" --- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" +++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" @@ -7,1174 +7,1224 @@ http://www.rssboard.org/rss-specification en-us - Wed, 10 Dec 2025 05:00:17 +0000 + Thu, 11 Dec 2025 05:00:08 +0000 rss-help@arxiv.org - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 Saturday Sunday - Mixed Exponential Statistical Structures and Their Approximation Operators - https://arxiv.org/abs/2512.07870 - arXiv:2512.07870v1 Announce Type: new -Abstract: The paper examines the construction and analysis of a new class of mixed exponential statistical structures that combine the properties of stochastic models and linear positive operators. The relevance of the topic is driven by the growing need to develop a unified theoretical framework capable of describing both continuous and discrete random structures that possess approximation properties. The aim of the study is to introduce and analyze a generalized family of mixed exponential statistical structures and their corresponding linear positive operators, which include known operators as particular cases. We define auxiliary statistical structures B and H through differential relations between their elements, and construct the main Phillips-type structure. Recurrent relations for the central moments are obtained, their properties are established, and the convergence and approximation accuracy of the constructed operators are investigated. The proposed approach allows mixed exponential structures to be viewed as a generalization of known statistical systems, providing a unified analytical and stochastic description. The results demonstrate that mixed exponential statistical structures can be used to develop new classes of positive operators with controllable preservation and approximation properties. 
The proposed methodology forms a basis for further research in constructing multidimensional statistical structures, analyzing operators in weighted spaces, and studying their asymptotic characteristics. - oai:arXiv.org:2512.07870v1 + Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming + https://arxiv.org/abs/2512.08948 + arXiv:2512.08948v1 Announce Type: new +Abstract: We study online statistical inference for the solutions of stochastic optimization problems with equality and inequality constraints. Such problems are prevalent in statistics and machine learning, encompassing constrained $M$-estimation, physics-informed models, safe reinforcement learning, and algorithmic fairness. We develop a stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. Despite having access to unbiased estimates of population gradients, a key challenge in constrained stochastic problems lies in dealing with the bias in the step direction. As such, we apply a momentum-style gradient moving-average technique within SSQP to debias the step. We show that our method achieves global almost-sure convergence and exhibits local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of H\'ajek and Le Cam. In addition, we provide a plug-in covariance matrix estimator for practical inference. To our knowledge, the proposed SSQP method is the first fully online method that attains primal-dual asymptotic minimax optimality without relying on projection operators onto the constraint set, which are generally intractable for nonlinear problems. 
Through extensive experiments on benchmark nonlinear problems, as well as on constrained generalized linear models and portfolio allocation problems using both synthetic and real data, we demonstrate superior performance of our method, showing that the method and its asymptotic behavior not only solve constrained stochastic problems efficiently but also provide valid and practical online inference in real-world applications. + oai:arXiv.org:2512.08948v1 + stat.ML + cs.LG + math.OC math.ST stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Yurii Volkov, Oleksandr Volkov + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yihang Gao, Michael K. Ng, Michael W. Mahoney, Sen Na - Functional Random Forest with Adaptive Cost-Sensitive Splitting for Imbalanced Functional Data Classification - https://arxiv.org/abs/2512.07888 - arXiv:2512.07888v1 Announce Type: new -Abstract: Classification of functional data where observations are curves or trajectories poses unique challenges, particularly under severe class imbalance. Traditional Random Forest algorithms, while robust for tabular data, often fail to capture the intrinsic structure of functional observations and struggle with minority class detection. This paper introduces Functional Random Forest with Adaptive Cost-Sensitive Splitting (FRF-ACS), a novel ensemble framework designed for imbalanced functional data classification. The proposed method leverages basis expansions and Functional Principal Component Analysis (FPCA) to represent curves efficiently, enabling trees to operate on low dimensional functional features. To address imbalance, we incorporate a dynamic cost sensitive splitting criterion that adjusts class weights locally at each node, combined with a hybrid sampling strategy integrating functional SMOTE and weighted bootstrapping. 
Additionally, curve specific similarity metrics replace traditional Euclidean measures to preserve functional characteristics during leaf assignment. Extensive experiments on synthetic and real world datasets including biomedical signals and sensor trajectories demonstrate that FRF-ACS significantly improves minority class recall and overall predictive performance compared to existing functional classifiers and imbalance handling techniques. This work provides a scalable, interpretable solution for high dimensional functional data analysis in domains where minority class detection is critical. - oai:arXiv.org:2512.07888v1 - stat.ML - cs.AI - cs.LG - stat.AP + All Emulators are Wrong, Many are Useful, and Some are More Useful Than Others: A Reproducible Comparison of Computer Model Surrogates + https://arxiv.org/abs/2512.09060 + arXiv:2512.09060v1 Announce Type: new +Abstract: Accurate and efficient surrogate modeling is essential for modern computational science, and there are a staggering number of emulation methods to choose from. With new methods being developed all the time, comparing the relative strengths and weaknesses of different methods remains a challenge due to inconsistent benchmarking practices and (sometimes) limited reproducibility and transparency. In this work, we present a large-scale, fully reproducible comparison of $29$ distinct emulators across $60$ canonical test functions and $40$ real emulation datasets. To facilitate rigorous, apples-to-apples comparisons, we introduce the R package \texttt{duqling}, which streamlines reproducible simulation studies using a consistent, simple syntax, and automatic internal scaling of inputs. This framework allows researchers to compare emulators in a unified environment and makes it possible to replicate or extend previous studies with minimal effort, even across different publications. 
Our results provide detailed empirical insight into the strengths and weaknesses of state-of-the-art emulators and offer guidance for both method developers and practitioners selecting a surrogate for new data. We discuss best practices for emulator comparison and highlight how \texttt{duqling} can accelerate research in emulator design and application. + oai:arXiv.org:2512.09060v1 stat.CO - Wed, 10 Dec 2025 00:00:00 -0500 + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Fahad Mostafa, Hafiz Khan + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Kellin N. Rumsey, Graham C. Gibson, Devin Francom, Reid Morris - Bayesian Semiparametric Joint Dynamic Model for Multitype Recurrent Events and a Terminal Event - https://arxiv.org/abs/2512.07973 - arXiv:2512.07973v1 Announce Type: new -Abstract: In many biomedical studies, recurrent events such as myocardial infarction, stroke, and heart failure often result in a terminal outcome such as death. Understanding the relationship among the multi-type recurrent events and terminal event is essential for developing interventions to delay the terminal event such as death. This study introduces a Bayesian semiparametric joint dynamic model for type-specific hazards that quantifies how the type-specific event history dynamically changes the intensities of each recurrent event type and the terminal event over calendar time. The framework jointly captures unmeasured heterogeneity through a shared frailty term, cumulative effects of past recurrent events on themselves and terminal events, and the effects of covariates. Gamma process priors (GPP) are used as a nonparametric prior for the baseline cumulative hazard function (CHF) and parametric priors for covariates and frailty. For a more accurate risk assessment, this model provides an analytical closed-form estimator of cumulative hazard functions (CHF) and frailties. 
The Breslow-Aalen-type estimators of CHFs are special cases of our estimators when the precision parameters are set to zero. We evaluate the performance of the model through extensive simulations and apply the method to the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). The analysis offers a practical, past-event-effect-based risk assessment for acute and chronic cardiovascular recurrent events with death as a terminal endpoint, and provides clinicians with new information to support the prevention and treatment of cardiovascular disease. - oai:arXiv.org:2512.07973v1 + Complementary strengths of the Neyman-Rubin and graphical causal frameworks + https://arxiv.org/abs/2512.09130 + arXiv:2512.09130v1 Announce Type: new +Abstract: This article contributes to the discussion on the relationship between the Neyman-Rubin and the graphical frameworks for causal inference. We present specific examples of data-generating mechanisms - such as those involving undirected or deterministic relationships and cycles - where analyses using a directed acyclic graph are challenging, but where the tools from the Neyman-Rubin causal framework are readily applicable. We also provide examples of data-generating mechanisms with M-bias, trapdoor variables, and complex front-door structures, where the application of the Neyman-Rubin approach is complicated, but the graphical approach is directly usable. The examples offer insights into commonly used causal inference frameworks and aim to improve comprehension of the languages for causal reasoning among a broad audience. 
+ oai:arXiv.org:2512.09130v1 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Mithun Kumar Acharjee, AKM Fazlur Rahman + Tetiana Gorbach, Xavier de Luna, Juha Karvanen, Ingeborg Waernbaum - The limit joint distributions of some statistics used in testing the quality of random number generators - https://arxiv.org/abs/2512.08002 - arXiv:2512.08002v1 Announce Type: new -Abstract: The limit joint distribution of statistics that are generalizations of some statistics from the NIST STS, TestU01, and other packages is found under the following hypotheses $H_0$ and $H_1$. Hypothesis $H_0$ states that the tested sequence is a sequence of independent random vectors with a known distribution, and the simple alternative hypothesis $H_1$ converges in some sense to $H_0$ with increasing sample size. In addition, an analogue of the Berry-Esseen inequality is obtained for the statistics under consideration, and conditions for their asymptotic independence are found. - oai:arXiv.org:2512.08002v1 - math.ST + IntegralGP: Volumetric estimation of subterranean geochemical properties in mineral deposits by fusing assay data with different spatial supports + https://arxiv.org/abs/2512.09151 + arXiv:2512.09151v1 Announce Type: new +Abstract: This article presents an Integral Gaussian Process (IntegralGP) framework for volumetric estimation of subterranean properties in mineral deposits. It provides a unified representation for data with different spatial supports, which enables blasthole geochemical assays to be properly modelled as interval observations rather than points. This approach is shown to improve regression performance and boundary delineation. A core contribution is a description of the mathematical changes to the covariance expressions which allow these benefits to be realised. The gradient and anti-derivatives are obtained to facilitate learning of the kernel hyperparameters. 
Numerical stability issues are also discussed. To illustrate its application, an IntegralGP data fusion algorithm is described. The objective is to assimilate line-based blasthole assays and update a block model that provides long-range prediction of Fe concentration beneath the drilled bench. Heteroscedastic GP is used to fuse chemically compatible but spatially incongruous data with different resolutions and sample spacings. Domain knowledge embodied in the structure and empirical distribution of the block model must be generally preserved while local inaccuracies are corrected. Using validation measurements within the predicted bench, our experiments demonstrate an improvement in bench-below grade prediction performance. For material classification, IntegralGP fusion reduces the absolute error and model bias in categorical prediction, especially instances where waste blocks are mistakenly classified as high-grade. + oai:arXiv.org:2512.09151v1 + stat.ME stat.AP - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - M. P. Savelov + Expert Systems with Applications 298A (2026) 129429 + Anna Chlingaryan, Arman Melkumyan, Raymond Leung - Provable Diffusion Posterior Sampling for Bayesian Inversion - https://arxiv.org/abs/2512.08022 - arXiv:2512.08022v1 Announce Type: new -Abstract: This paper proposes a novel diffusion-based posterior sampling method within a plug-and-play (PnP) framework. Our approach constructs a probability transport from an easy-to-sample terminal distribution to the target posterior, using a warm-start strategy to initialize the particles. To approximate the posterior score, we develop a Monte Carlo estimator in which particles are generated using Langevin dynamics, avoiding the heuristic approximations commonly used in prior work. 
The score governing the Langevin dynamics is learned from data, enabling the model to capture rich structural features of the underlying prior distribution. On the theoretical side, we provide non-asymptotic error bounds, showing that the method converges even for complex, multi-modal target posterior distributions. These bounds explicitly quantify the errors arising from posterior score estimation, the warm-start initialization, and the posterior sampling procedure. Our analysis further clarifies how the prior score-matching error and the condition number of the Bayesian inverse problem influence overall performance. Finally, we present numerical experiments demonstrating the effectiveness of the proposed method across a range of inverse problems. - oai:arXiv.org:2512.08022v1 + WTNN: Weibull-Tailored Neural Networks for survival analysis + https://arxiv.org/abs/2512.09163 + arXiv:2512.09163v1 Announce Type: new +Abstract: The Weibull distribution is a commonly adopted choice for modeling the survival of systems subject to maintenance over time. When only proxy indicators and censored observations are available, it becomes necessary to express the distribution's parameters as functions of time-dependent covariates. Deep neural networks provide the flexibility needed to learn complex relationships between these covariates and operational lifetime, thereby extending the capabilities of traditional regression-based models. Motivated by the analysis of a fleet of military vehicles operating in highly variable and demanding environments, as well as by the limitations observed in existing methodologies, this paper introduces WTNN, a new neural network-based modeling framework specifically designed for Weibull survival studies. The proposed architecture is specifically designed to incorporate qualitative prior knowledge regarding the most influential covariates, in a manner consistent with the shape and structure of the Weibull distribution. 
Through numerical experiments, we show that this approach can be reliably trained on proxy and right-censored data, and is capable of producing robust and interpretable survival predictions that can improve existing approaches. + oai:arXiv.org:2512.09163v1 stat.ML cs.LG - cs.NA - math.NA - math.PR - math.ST - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + stat.AP + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jinyuan Chang, Chenguang Duan, Yuling Jiao, Ruoxuan Li, Jerry Zhijian Yang, Cheng Yuan + http://creativecommons.org/licenses/by/4.0/ + Gabrielle Rives, Olivier Lopez, Nicolas Bousquet - Defining 3-dimensional marine provinces with phytoplankton compositions - https://arxiv.org/abs/2512.08035 - arXiv:2512.08035v1 Announce Type: new -Abstract: Marine provinces rarely include fine-resolution biological data, and are often defined spatially across only latitude and longitude. Therefore, we aimed to determine how phytoplankton distributions define marine provinces across 3-dimensions (i.e., latitude, longitude, and depth). To do this, we developed a new algorithm called \texttt{bioprovince} which can be applied to compositional biological data. The algorithm first clusters compositional samples to identify spatially coherent groups of samples, then makes flexible province predictions in the broader 3d spatial grid based on environmental similarity. We applied \texttt{bioprovince} to phytoplankton Amplicon Sequencing Variants (ASVs) from five, depth-resolved ocean transects spanning north-south in the Pacific Ocean. In the surface layer of the ocean, our method agreed well with traditional Longhurst provinces. In some cases, the method revealed that with more granular taxonomic resolution afforded by ASVs, traditional Longhurst provinces were divided into smaller zones. Also, one of the major advances of this method is its ability to incorporate a third dimension, depth. 
Indeed, our analysis found significant depth-wise partitions throughout the Pacific with remarkable agreement in the equatorial region with the base of the euphotic zone. Our algorithm's ability to delineate 3-dimensional bioprovinces will enable scientists to discover new ecological interpretations of marine phytoplankton ecology and biogeography. Furthermore, as compositional biological data inherently exists in three spatial dimensions in nature, bioprovince is broadly applicable beyond marine plankton, offering a more holistic perspective on biological provinces across diverse environments. - oai:arXiv.org:2512.08035v1 + Refuting "Debunking the GAMLSS Myth: Simplicity Reigns in Pulmonary Function Diagnostics" + https://arxiv.org/abs/2512.09179 + arXiv:2512.09179v1 Announce Type: new +Abstract: We read with interest the above article by Zavorsky (2025, Respiratory Medicine, doi:10.1016/j.rmed.2024.107836) concerning reference equations for pulmonary function testing. The author compares a Generalized Additive Model for Location, Scale, and Shape (GAMLSS), which is the standard adopted by the Global Lung Function Initiative (GLI), with a segmented linear regression (SLR) model, for pulmonary function variables. The author presents an interesting comparison; however there are some fundamental issues with the approach. We welcome this opportunity for discussion of the issues that it raises. The author's contention is that (1) SLR provides "prediction accuracies on par with GAMLSS"; and (2) the GAMLSS model equations are "complicated and require supplementary spline tables", whereas the SLR is "more straightforward, parsimonious, and accessible to a broader audience". We respectfully disagree with both of these points. 
+ oai:arXiv.org:2512.09179v1 stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Rafael Catoia Pulgrossi, Nathan L R Williams, Yubin Raut, Jed Fuhrman, Sangwon Hyun + http://creativecommons.org/licenses/by-nc-nd/4.0/ + 10.1016/j.rmed.2025.108557 + Robert A. Rigby, Mikis D. Stasinopoulos, Achim Zeileis, Sanja Stanojevic, Gillian Heller, Fernanda de Bastiani, Thomas Kneib, Andreas Mayr, Reto Stauffer, Nikolaus Umlauf - ADOPT: Additive Optimal Transport Regression - https://arxiv.org/abs/2512.08118 - arXiv:2512.08118v1 Announce Type: new -Abstract: Regression analysis for responses taking values in general metric spaces has received increasing attention, particularly for settings with Euclidean predictors $X \in \mathbb{R}^p$ and non-Euclidean responses $Y \in ( \mathcal{M}, d)$. While additive regression is a powerful tool for enhancing interpretability and mitigating the curse of dimensionality in the presence of multivariate predictors, its direct extension is hindered by the absence of vector space operations in general metric spaces. We propose a novel framework for additive optimal transport regression, which incorporates additive structure through optimal geodesic transports. A key idea is to extend the notion of optimal transports in Wasserstein spaces to general geodesic metric spaces. This unified approach accommodates a wide range of responses, including probability distributions, symmetric positive definite (SPD) matrices with various metrics and spherical data. The practical utility of the method is illustrated with correlation matrices derived from resting state fMRI brain imaging data. - oai:arXiv.org:2512.08118v1 + Access to healthcare for people with Alzheimer's Diseases and related dementias + https://arxiv.org/abs/2512.09217 + arXiv:2512.09217v1 Announce Type: new +Abstract: Background: Alzheimer's Disease and Related Dementias (ADRD) affects millions worldwide. 
Significant disparities exist in ADRD diagnosis and care, disproportionately impacting minority and socioeconomically vulnerable populations. Objective: In this study, we investigate the relationship between ADRD density and accessibility to healthcare. We identify underserved and overserved areas in Maryland based on diagnosed cases and mortality due to ADRD, focusing on geographic disparities in care. Methods: 2023 Maryland ADRD patients were identified using ICD-10 codes from. Accessibility was measured using the Kernel Density Two-Step Floating Catchment Area (KD2SFCA) method. The Gini index and t-tests were used to analyze disparities between urban and rural areas. Hot spot analysis (Getis-Ord Gi*) and local bivariate relationship analysis were applied to assess spatial correlations. Principal component analysis (PCA) was applied to calculate the health risk index. Results: Hospital accessibility was unevenly distributed. Mortality rates from ADRD were higher in underserved areas with fewer hospitals. Hot spot analysis shows eastern and southern Maryland have zones with high mortality per population and per ADRD patient, surrounded by similarly high-rate zones. Central Maryland shows lower death rates per patient but more hospital facilities. In eastern Maryland, higher poverty areas are surrounded by zones with lower accessibility and higher health risk indices. Conclusion: Hospital accessibility is unevenly distributed, creating major rural disparities. Underserved regions in terms of access to healthcare facilities, particularly in eastern and southern Maryland, exhibit high ADRD mortality rates despite low diagnosis rates. This suggests that many ADRD cases remain undiagnosed, underdiagnosed, or subject to delayed treatment. 
+ oai:arXiv.org:2512.09217v1 + stat.AP + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Saeed Saleh Namadi, Jie Chen, Deb Niemeier + + + Prenatal alcohol exposure and child cognition: semi-continuous exposures, causal inference and evidence synthesis + https://arxiv.org/abs/2512.09237 + arXiv:2512.09237v1 Announce Type: new +Abstract: We address the challenge of causal inference status and the dose-response effects with a semi-continuous exposure. A two-stage approach is proposed using estimating equation for multiple outcomes with large sample properties derived for the resulting estimators. Homogeneity tests are developed to assess whether causal effects of exposure status and the dose-response effects are the same across multiple outcomes. A global homogeneity test is also developed to assess whether the effect of exposure status (exposed/not exposed) and the dose-response effect of the continuous exposure level are each equal across all outcomes. The methods of estimation and testing are rigorously evaluated in simulation studies and applied to a motivating study on the effects of prenatal alcohol exposure on childhood cognition defined by executive function (EF), academic achievement in math, and learning and memory (LM). + oai:arXiv.org:2512.09237v1 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wookyeong Song, Hans-Georg M\"uller + http://creativecommons.org/licenses/by/4.0/ + Xiaoya Wang, Richard J. Cook, Yeying Zhu, Tugba Akkaya-Hocagil, R. Colin Carter, Sandra W. Jacobson, Joseph L. Jacobson, Louise M. 
Ryan - deepspat: An R package for modeling nonstationary spatial and spatio-temporal Gaussian and extremes data through deep deformations - https://arxiv.org/abs/2512.08137 - arXiv:2512.08137v1 Announce Type: new -Abstract: Nonstationarity in spatial and spatio-temporal processes is ubiquitous in environmental datasets, but is not often addressed in practice, due to a scarcity of statistical software packages that implement nonstationary models. In this article, we introduce the R software package deepspat, which allows for modeling, fitting and prediction with nonstationary spatial and spatio-temporal models applied to Gaussian and extremes data. The nonstationary models in our package are constructed using a deep multi-layered deformation of the original spatial or spatio-temporal domain, and are straightforward to implement. Model parameters are estimated using gradient-based optimization of customized loss functions with tensorflow, which implements automatic differentiation. The functionalities of the package are illustrated through simulation studies and an application to Nepal temperature data. - oai:arXiv.org:2512.08137v1 - stat.CO + MoDaH achieves rate optimal batch correction + https://arxiv.org/abs/2512.09259 + arXiv:2512.09259v1 Announce Type: new +Abstract: Batch effects pose a significant challenge in the analysis of single-cell omics data, introducing technical artifacts that confound biological signals. While various computational methods have achieved empirical success in correcting these effects, they lack the formal theoretical guarantees required to assess their reliability and generalization. To bridge this gap, we introduce Mixture-Model-based Data Harmonization (MoDaH), a principled batch correction algorithm grounded in a rigorous statistical framework. 
+ Under a new Gaussian-mixture-model with explicit parametrization of batch effects, we establish the minimax optimal error rates for batch correction and prove that MoDaH achieves this rate by leveraging the recent theoretical advances in clustering data from anisotropic Gaussian mixtures. This constitutes, to the best of our knowledge, the first theoretical guarantee for batch correction. Extensive experiments on diverse single-cell RNA-seq and spatial proteomics datasets demonstrate that MoDaH not only attains theoretical optimality but also achieves empirical performance comparable to or even surpassing those of state-of-the-art heuristics (e.g., Harmony, Seurat-V5, and LIGER), effectively balancing the removal of technical noise with the conservation of biological signal. + oai:arXiv.org:2512.09259v1 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + math.ST + q-bio.GN + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Quan Vu, Xuanjie Shao, Rapha\"el Huser, Andrew Zammit-Mangion + Yang Cao, Zongming Ma - Non-parametric assessment of the calibration of individualized treatment effects - https://arxiv.org/abs/2512.08140 - arXiv:2512.08140v1 Announce Type: new -Abstract: An important aspect of the performance of algorithms that predict individualized treatment effects (ITE) is moderate calibration, i.e., the average treatment effect among individuals with predicted treatment effect of z being equal to z. The assessment of moderate calibration is a challenging task on two fronts: counterfactual responses are unobserved, and quantifying the conditional response function for models that generate continuous predicted values requires regularization or parametric modeling. Perhaps because of these challenges, there is currently no inferential method for the null hypothesis that an ITE model is moderately calibrated in a population. 
In this work, we propose non-parametric methods for the assessment of moderate calibration of ITE models for binary outcomes using data from a randomized trial. These methods simultaneously resolve both challenges, resulting in novel numerical, graphical, and inferential methods for the assessment of moderate calibration. The key idea is to formulate a stochastic process for the cumulative prediction errors that obeys a functional central limit theorem, enabling the use of the properties of Brownian motion for asymptotic inference. We propose two approaches to construct this process from a sample: a conditional approach that relies on predicted risks (often an output of ITE models), and a marginal approach based on replacing the cumulative conditional expected value and variance terms with their marginal counterparts. Numerical simulations confirm the desirable properties of both approaches and their ability to detect miscalibration of different forms. We use a case study to provide practical suggestions on graphical presentation and the interpretation of results. Moderate calibration of predicted ITEs can be assessed without requiring regularization techniques or making assumptions about the functional form of treatment response. - oai:arXiv.org:2512.08140v1 + Vaccine sieve analysis on deep sequencing data using competing risks Cox regression with failure type subject to misclassification + https://arxiv.org/abs/2512.09262 + arXiv:2512.09262v1 Announce Type: new +Abstract: Understanding how vaccines perform against different pathogen genotypes is crucial for developing effective prevention strategies, particularly for highly genetically diverse pathogens like HIV. Sieve analysis is a statistical framework used to determine whether a vaccine selectively prevents acquisition of certain genotypes while allowing breakthrough of other genotypes that evade immune responses. 
Traditionally, these analyses are conducted with a single sequence available per individual acquiring the pathogen. However, modern sequencing technology can provide detailed characterization of intra-individual viral diversity by capturing up to hundreds of pathogen sequences per person. In this work, we introduce methodology that extends sieve analysis to account for intra-individual viral diversity. Our approach estimates vaccine efficacy against viral populations with varying true (unobservable) frequencies of vaccine-mismatched mutations. To account for differential resolution of information from differing sequence counts per person, we use competing risks Cox regression with modeled causes of failure and propose an empirical Bayes approach for the classification model. Simulation studies demonstrate that our approach reduces bias, provides nominal confidence interval coverage, and improves statistical power compared to conventional methods. We apply our method to the HVTN 705 Imbokodo trial, which assessed the efficacy of a heterologous vaccine regimen in preventing HIV-1 acquisition. + oai:arXiv.org:2512.09262v1 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + stat.AP + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Mohsen Sadatsafavi, Jeroen Hoogland, Thomas P. A. Debray, John Petkau + James Peng, Michal Juraska, Pamela A. Shaw, Peter B. Gilbert - Propensity score adjustment when errors in achievement measures inform treatment assignment - https://arxiv.org/abs/2512.08144 - arXiv:2512.08144v1 Announce Type: new -Abstract: U.S. state education agencies mark schools displaying achievement gaps between demographic subgroups as needing improvement. Some schools may have few students in these subgroups, such that average end-of-year test scores only noisily measure the average "true" score--the score one would expect if students took the test many times. 
This, in addition to the masking of small subgroup averages in publicly available assessment data, poses challenges for evaluating interventions aimed at closing achievement gaps. We introduce propensity score estimates designed to achieve balance on subgroup average true scores. These estimates are available even when noisy measurements are not and improve overlap compared to those that ignore measurement error, leading to greater bias reduction of matching estimators. We demonstrate our methods through simulation and an application to a statewide initiative in Texas for curbing summer learning loss. - oai:arXiv.org:2512.08144v1 - stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Robust and Sparse Estimation of Unbounded Density Ratio under Heavy Contamination + https://arxiv.org/abs/2512.09266 + arXiv:2512.09266v1 Announce Type: new +Abstract: We examine the non-asymptotic properties of robust density ratio estimation (DRE) in contaminated settings. Weighted DRE is the most promising among existing methods, exhibiting doubly strong robustness from an asymptotic perspective. This study demonstrates that Weighted DRE achieves sparse consistency even under heavy contamination within a non-asymptotic framework. This method addresses two significant challenges in density ratio estimation and robust estimation. For density ratio estimation, we provide the non-asymptotic properties of estimating unbounded density ratios under the assumption that the weighted density ratio function is bounded. For robust estimation, we introduce a non-asymptotic framework for doubly strong robustness under heavy contamination, assuming that at least one of the following conditions holds: (i) contamination ratios are small, and (ii) outliers have small weighted values. This work provides the first non-asymptotic analysis of strong robustness under heavy contamination. 
+ oai:arXiv.org:2512.09266v1 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Joshua Wasserman, Michael R. Elliott, Ben B. Hansen + Ryosuke Nagumo, Hironori Fujisawa - Uncertainty quantification for mixed membership in multilayer networks with degree heterogeneity using Gaussian variational inference - https://arxiv.org/abs/2512.08146 - arXiv:2512.08146v1 Announce Type: new -Abstract: Analyzing multilayer networks is central to understanding complex relational measurements collected across multiple conditions or over time. A pivotal task in this setting is to quantify uncertainty in community structure while appropriately pooling information across layers and accommodating layer-specific heterogeneity. Building on the multilayer degree-corrected mixed-membership (ML-DCMM) model, which captures both stable community membership profiles and layer-specific vertex activity levels, we propose a Bayesian inference framework based on a spectral-assisted likelihood. We then develop a computationally efficient Gaussian variational inference algorithm implemented via stochastic gradient descent. Our theoretical analysis establishes a variational Bernstein--von Mises theorem, which provides a frequentist guarantee for using the variational posterior to construct confidence sets for mixed memberships. We demonstrate the utility of the method on a U.S. airport longitudinal network, where the procedure yields robust estimates, natural uncertainty quantification, and competitive performance relative to state-of-the-art methods. 
- oai:arXiv.org:2512.08146v1 + On the inverse of covariance matrices for unbalanced crossed designs + https://arxiv.org/abs/2512.09273 + arXiv:2512.09273v1 Announce Type: new +Abstract: This paper addresses a long-standing open problem in the analysis of linear mixed models with crossed random effects under unbalanced designs: how to find an analytic expression for the inverse of $\mathbf{V}$, the covariance matrix of the observed response. The inverse matrix $\mathbf{V}^{-1}$ is required for likelihood-based estimation and inference. However, for unbalanced crossed designs, $\mathbf{V}$ is dense and the lack of a closed-form representation for $\mathbf{V}^{-1}$, until now, has made using likelihood-based methods computationally challenging and difficult to analyse mathematically. We use the Khatri--Rao product to represent $\mathbf{V}$ and then to construct a modified covariance matrix whose inverse admits an exact spectral decomposition. Building on this construction, we obtain an elegant and simple approximation to $\mathbf{V}^{-1}$ for asymptotic unbalanced designs. For non-asymptotic settings, we derive an accurate and interpretable approximation under mildly unbalanced data and establish an exact inverse representation as a low-rank correction to this approximation, applicable to arbitrary degrees of unbalance. Simulation studies demonstrate the accuracy, stability, and computational tractability of the proposed framework. + oai:arXiv.org:2512.09273v1 stat.ME math.ST - stat.CO stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fangzheng Xie, Hsin-Hsiung Huang + Ziyang Lyu, S. A. Sisson, A. H. 
Welsh - Bayesian Semiparametric Mixture Cure (Frailty) Models - https://arxiv.org/abs/2512.08173 - arXiv:2512.08173v1 Announce Type: new -Abstract: In recent years, mixture cure models have gained increasing popularity in survival analysis as an alternative to the Cox proportional hazards model, particularly in settings where a subset of patients is considered cured. The proportional hazards mixture cure model is especially advantageous when the presence of a cured fraction can be reasonably assumed, providing a more accurate representation of long-term survival dynamics. In this study, we propose a novel hierarchical Bayesian framework for the semiparametric mixture cure model, which accommodates both the inclusion and exclusion of a frailty component, allowing for greater flexibility in capturing unobserved heterogeneity among patients. Samples from the posterior distribution are obtained using a Markov chain Monte Carlo method, leveraging a hierarchical structure inspired by Bayesian Lasso. Comprehensive simulation studies are conducted across diverse scenarios to evaluate the performance and robustness of the proposed models. Bayesian model comparison and assessment are performed using various criteria. Finally, the proposed approaches are applied to two well-known datasets in the cure model literature: the E1690 melanoma trial and a colon cancer clinical trial. - oai:arXiv.org:2512.08173v1 - stat.ME - math.ST - stat.CO + Impact of Positional Encoding: Clean and Adversarial Rademacher Complexity for Transformers under In-Context Regression + https://arxiv.org/abs/2512.09275 + arXiv:2512.09275v1 Announce Type: new +Abstract: Positional encoding (PE) is a core architectural component of Transformers, yet its impact on the Transformer's generalization and robustness remains unclear. In this work, we provide the first generalization analysis for a single-layer Transformer under in-context regression that explicitly accounts for a completely trainable PE module. 
Our result shows that PE systematically enlarges the generalization gap. Extending to the adversarial setting, we derive the adversarial Rademacher generalization bound. We find that the gap between models with and without PE is magnified under attack, demonstrating that PE amplifies the vulnerability of models. Our bounds are empirically validated by a simulation study. Together, this work establishes a new framework for understanding the clean and adversarial generalization in ICL with PE. + oai:arXiv.org:2512.09275v1 stat.ML - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Fatih K{\i}z{\i}laslan, Valeria Vitelli + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Weiyi He, Yue Xing - Worst-case generation via minimax optimization in Wasserstein space - https://arxiv.org/abs/2512.08176 - arXiv:2512.08176v1 Announce Type: new -Abstract: Worst-case generation plays a critical role in evaluating robustness and stress-testing systems under distribution shifts, in applications ranging from machine learning models to power grids and medical prediction systems. We develop a generative modeling framework for worst-case generation for a pre-specified risk, based on min-max optimization over continuous probability distributions, namely the Wasserstein space. Unlike traditional discrete distributionally robust optimization approaches, which often suffer from scalability issues, limited generalization, and costly worst-case inference, our framework exploits the Brenier theorem to characterize the least favorable (worst-case) distribution as the pushforward of a transport map from a continuous reference measure, enabling a continuous and expressive notion of risk-induced generation beyond classical discrete DRO formulations. 
Based on the min-max formulation, we propose a Gradient Descent Ascent (GDA)-type scheme that updates the decision model and the transport map in a single loop, establishing global convergence guarantees under mild regularity assumptions and possibly without convexity-concavity. We also propose to parameterize the transport map using a neural network that can be trained simultaneously with the GDA iterations by matching the transported training samples, thereby achieving a simulation-free approach. The efficiency of the proposed method as a risk-induced worst-case generator is validated by numerical experiments on synthetic and image data. - oai:arXiv.org:2512.08176v1 - stat.ML + Distributional Shrinkage II: Optimal Transport Denoisers with Higher-Order Scores + https://arxiv.org/abs/2512.09295 + arXiv:2512.09295v1 Announce Type: new +Abstract: We revisit the signal denoising problem through the lens of optimal transport: the goal is to recover an unknown scalar signal distribution $X \sim P$ from noisy observations $Y = X + \sigma Z$, with $Z$ being standard Gaussian independent of $X$ and $\sigma>0$ a known noise level. Let $Q$ denote the distribution of $Y$. We introduce a hierarchy of denoisers $T_0, T_1, \ldots, T_\infty : \mathbb{R} \to \mathbb{R}$ that are agnostic to the signal distribution $P$, depending only on higher-order score functions of $Q$. Each denoiser $T_K$ is progressively refined using the $(2K-1)$-th order score function of $Q$ at noise resolution $\sigma^{2K}$, achieving better denoising quality measured by the Wasserstein metric $W(T_K \sharp Q, P)$. The limiting denoiser $T_\infty$ identifies the optimal transport map with $T_\infty \sharp Q = P$. + We provide a complete characterization of the combinatorial structure underlying this hierarchy through Bell polynomial recursions, revealing how higher-order score functions encode the optimal transport map for signal denoising. 
We study two estimation strategies with convergence rates for higher-order scores from i.i.d. samples drawn from $Q$: (i) plug-in estimation via Gaussian kernel smoothing, and (ii) direct estimation via higher-order score matching. This hierarchy of agnostic denoisers opens new perspectives in signal denoising and empirical Bayes. + oai:arXiv.org:2512.09295v1 + math.ST cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + stat.ML + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xiuyuan Cheng, Yao Xie, Linglingzhi Zhu, Yunqin Zhu + Tengyuan Liang - Distributional Random Forests for Complex Survey Designs on Reproducing Kernel Hilbert Spaces - https://arxiv.org/abs/2512.08179 - arXiv:2512.08179v1 Announce Type: new -Abstract: We study estimation of the conditional law $P(Y|X=\mathbf{x})$ and continuous functionals $\Psi(P(Y|X=\mathbf{x}))$ when $Y$ takes values in a locally compact Polish space, $X \in \mathbb{R}^p$, and the observations arise from a complex survey design. We propose a survey-calibrated distributional random forest (SDRF) that incorporates complex-design features via a pseudo-population bootstrap, PSU-level honesty, and a Maximum Mean Discrepancy (MMD) split criterion computed from kernel mean embeddings of H\'{a}jek-type (design-weighted) node distributions. We provide a framework for analyzing forest-style estimators under survey designs; establish design consistency for the finite-population target and model consistency for the super-population target under explicit conditions on the design, kernel, resampling multipliers, and tree partitions. As far as we are aware, these are the first results on model-free estimation of conditional distributions under survey designs. Simulations under a stratified two-stage cluster design provide finite sample performance and demonstrate the statistical error price of ignoring the survey design. 
The broad applicability of SDRF is demonstrated using NHANES: We estimate the tolerance regions of the conditional joint distribution of two diabetes biomarkers, illustrating how distributional heterogeneity can support subgroup-specific risk profiling for diabetes mellitus in the U.S. population. - oai:arXiv.org:2512.08179v1 - stat.ME - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Estimating order scale parameters of two scale mixture of exponential distributions + https://arxiv.org/abs/2512.09305 + arXiv:2512.09305v1 Announce Type: new +Abstract: Estimation of the ordered scale parameter of a two scale mixture of the exponential distribution is considered under Stein loss and symmetric loss. Under certain conditions, we prove that the inadmissibility equivariant estimator exhibits several improved estimators. Consequently, we propose various estimators that dominate the best affine equivariant estimators (BAEE). Also, we propose a class of estimators that dominates BAEE. We have proved that the boundary estimator of this class is a generalized Bayes estimator. The results are applied to the multivariate Lomax distribution and the Exponential Inverse Gaussian (E-IG) distribution. Consequently, we have obtained improved estimators for the ordered scale parameters of two multivariate Lomax distributions and the exponential inverse Gaussian distribution. For each case, we have conducted a simulation study to compare the risk performance of the improved estimators. + oai:arXiv.org:2512.09305v1 + math.ST + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yating Zou, Marcos Matabuena, Michael R. 
Kosorok + http://creativecommons.org/licenses/by-sa/4.0/ + Somnath Mondal, Lakshmi Kanta Patra + + + Group Cooperation Diverges onto Durable Low versus High Paths: Public Goods Experiments in 134 Honduran Villages + https://arxiv.org/abs/2512.09316 + arXiv:2512.09316v1 Announce Type: new +Abstract: We performed large, lab-in-the-field experiment (2,591 participants across 134 Honduran villages; ten rounds) and tracked how contribution behavior unfolds in fixed, anonymous groups of size five. Contribution separates early into two durable paths, one low and one high, with rare convergence thereafter. High-path players can be identified with strong accuracy early on. Groups that begin with an early majority of above-norm contributors (about 60%) are very likely finish high. The empirical finding of a bifurcation, consistent with the theory, shows that early, high contributions by socially central people steer groups onto, and help keep them on, a high-cooperation path. + oai:arXiv.org:2512.09316v1 + stat.AP + stat.OT + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://creativecommons.org/licenses/by/4.0/ + Marios Papamichalis, Nicholas Christakis, Feng Fu - Nonparametric inference with massive data via grouped empirical likelihood - https://arxiv.org/abs/2512.08182 - arXiv:2512.08182v1 Announce Type: new -Abstract: To address the computational issue in empirical likelihood methods with massive data, this paper proposes a grouped empirical likelihood (GEL) method. It divides $N$ observations into $n$ groups, and assigns the same probability weight to all observations within the same group. GEL estimates the $n\ (\ll N)$ weights by maximizing the empirical likelihood ratio. The dimensionality of the optimization problem is thus reduced from $N$ to $n$, thereby lowering the computational complexity. 
We prove that GEL possesses the same first order asymptotic properties as the conventional empirical likelihood method under the estimating equation settings and the classical two-sample mean problem. A distributed GEL method is also proposed with several servers. Numerical simulations and real data analysis demonstrate that GEL can keep the same inferential accuracy as the conventional empirical likelihood method, and achieves substantial computational acceleration compared to the divide-and-conquer empirical likelihood method. We can analyze a billion data with GEL in tens of seconds on only one PC. - oai:arXiv.org:2512.08182v1 + Balancing Weights for Causal Mediation Analysis + https://arxiv.org/abs/2512.09337 + arXiv:2512.09337v1 Announce Type: new +Abstract: This paper develops methods for estimating the natural direct and indirect effects in causal mediation analysis. The efficient influence function-based estimator (EIF-based estimator) and the inverse probability weighting estimator (IPW estimator), which are standard in causal mediation analysis, both rely on the inverse of the estimated propensity scores, and thus they are vulnerable to two key issues (i) instability and (ii) finite-sample covariate imbalance. We propose estimators based on the weights obtained by an algorithm that directly penalizes weight dispersion while enforcing approximate covariate and mediator balance, thereby improving stability and mitigating bias in finite samples. We establish the convergence rates of the proposed weights and show that the resulting estimators are asymptotically normal and achieve the semiparametric efficiency bound. Monte Carlo simulations demonstrate that the proposed estimator outperforms not only the EIF-based estimator and the IPW estimator but also the regression imputation estimator in challenging scenarios with model misspecification. 
Furthermore, the proposed method is applied to a real dataset from a study examining the effects of media framing on immigration attitudes. + oai:arXiv.org:2512.09337v1 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + econ.EM + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Yongda Wang, Shifeng Xiong + Kentaro Kawato - A multivariate generalization of Hall's theorem for Edgeworth expansions of bootstrap distributions - https://arxiv.org/abs/2512.08200 - arXiv:2512.08200v1 Announce Type: new -Abstract: Theorem 5.1 in the monograph by Hall (1992) provides rigorous in-probability justification of Edgeworth expansions of bootstrap distributions. Proving this result was rather challenging because bootstrap distributions do not satisfy the classical Cram\'er condition and therefore classical methods for justifying Edgeworth expansions, e.g. Bhattacharya and Rao (1976) and Bhattacharya and Ghosh (1978), are not available. Hall's (1992) theorem is for a univariate statistic which can be expressed as a smooth function of means, though the underlying population can be multivariate. However, there are a number of applications where a multivariate version of Hall's theorem is needed, and generalizing the proof from the univariate case to the multivariate case is not immediate. Our primary purpose in this article is to fill this gap by stating a multivariate version of the theorem and sketching the modifications to the proof of Hall's (1992) Theorem 5.1 that are needed. - oai:arXiv.org:2512.08200v1 - math.ST - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Minimization of Functions on Dually Flat Spaces Using Geodesic Descent Based on Dual Connections + https://arxiv.org/abs/2512.09358 + arXiv:2512.09358v1 Announce Type: new +Abstract: We propose geodesic-based optimization methods on dually flat spaces, where the geometric structure of the parameter manifold is closely related to the form of the objective function. 
A primary application is maximum likelihood estimation in statistical models, especially exponential families, whose model manifolds are dually flat. We show that an m-geodesic update, which directly optimizes the log-likelihood, can theoretically reach the maximum likelihood estimator in a single step. In contrast, an e-geodesic update has a practical advantage in cases where the parameter space is geodesically complete, allowing optimization without explicitly handling parameter constraints. We establish the theoretical properties of the proposed methods and validate their effectiveness through numerical experiments. + oai:arXiv.org:2512.09358v1 + stat.CO + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Andrew T. A. Wood + http://creativecommons.org/licenses/by/4.0/ + Gaku Omiya, Fumiyasu Komaki - Wishart kernel density estimation for strongly mixing time series on the cone of positive definite matrices - https://arxiv.org/abs/2512.08232 - arXiv:2512.08232v1 Announce Type: new -Abstract: A Wishart kernel density estimator (KDE) is introduced for density estimation in the cone of positive definite matrices. The estimator is boundary-aware and mitigates the boundary bias suffered by conventional KDEs, while remaining simple to implement. Its mean squared error, uniform strong consistency on expanding compact sets, and asymptotic normality are established under the Lebesgue measure and suitable mixing conditions. This work represents the first study of density estimation on this space under any metric. For independent observations, an asymptotic upper bound on the mean absolute error is also derived. A simulation study compares the performance of the Wishart KDE to another boundary-aware KDE that relies on the matrix-variate lognormal distribution proposed by Schwartzman [Int. Stat. Rev., 2016, 84(3), 456-486]. 
Results suggest that the Wishart KDE is superior for a selection of autoregressive coefficient matrices and innovation covariance matrices when estimating the stationary marginal density of a Wishart autoregressive process. To illustrate the practical utility of the Wishart KDE, an application to finance is made by estimating the marginal density function of a time series of realized covariance matrices, calculated from 5-minute intra-day returns, between the share prices of Amazon Corp. and the Standard & Poor's 500 exchange-traded fund over a one-year period. All code is publicly available via the R package ksm to facilitate implementation of the method and reproducibility of the findings. - oai:arXiv.org:2512.08232v1 + Model-robust Inference for Seamless II/III Trials with Covariate Adaptive Randomization + https://arxiv.org/abs/2512.09430 + arXiv:2512.09430v1 Announce Type: new +Abstract: Seamless phase II/III trials have become a cornerstone of modern drug development, offering a means to accelerate evaluation while maintaining statistical rigor. However, most existing inference procedures are model-based, designed primarily for continuous outcomes, and often neglect the stratification used in covariate-adaptive randomization (CAR), limiting their practical relevance. In this paper, we propose a unified, model-robust framework for seamless phase II/III trials grounded in generalized linear models (GLMs), enabling valid inference across diverse outcome types, estimands, and CAR schemes. Using Z-estimation, we derive the asymptotic properties of treatment effect estimators and explicitly characterize how their variance depends on the underlying randomization procedure.Based on these results, we develop adjusted Wald tests that, together with Dunnett's multiple-comparison procedure and the inverse chi-square combination method, ensure valid overall Type I error. 
Extensive simulation studies and a trial example demonstrate that the proposed model-robust tests achieve superior power and reliable inference compared to conventional approaches. + oai:arXiv.org:2512.09430v1 stat.ME - math.PR - math.ST - stat.AP - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - L\'eo R. Belzile, Christian Genest, Fr\'ed\'eric Ouimet, Donald Richards + Kun Yi, Lucy Xia - Causal inference under interference: computational barriers and algorithmic solutions - https://arxiv.org/abs/2512.08252 - arXiv:2512.08252v1 Announce Type: new -Abstract: We study causal effect estimation under interference from network data. We work under the chain-graph formulation pioneered in Tchetgen Tchetgen et. al (2021). Our first result shows that polynomial time evaluation of treatment effects is computationally hard in this framework without additional assumptions on the underlying chain graph. Subsequently, we assume that the interactions among the study units are governed either by (i) a dense graph or (ii) an i.i.d. Gaussian matrix. In each case, we show that the treatment effects have well-defined limits as the population size diverges to infinity. Additionally, we develop polynomial time algorithms to consistently evaluate the treatment effects in each case. Finally, we estimate the unknown parameters from the observed data using maximum pseudo-likelihood estimates, and establish the stability of our causal effect estimators under this perturbation. Our algorithms provably approximate the causal effects in polynomial time even in low-temperature regimes where the canonical MCMC samplers are slow mixing. For dense graphs, our results use the notion of regularity partitions; for Gaussian interactions, our approach uses ideas from spin glass theory and Approximate Message Passing. 
- oai:arXiv.org:2512.08252v1 - math.ST - math.PR + Multiply-robust Estimator of Cumulative Incidence Function Difference for Right-Censored Competing Risks Data + https://arxiv.org/abs/2512.09433 + arXiv:2512.09433v1 Announce Type: new +Abstract: In causal inference, estimating the average treatment effect is a central objective, and in the context of competing risks data, this effect can be quantified by the cause-specific cumulative incidence function (CIF) difference. While doubly robust estimators give a more robust way to estimate the causal effect from the observational study, they remain inconsistent if both models are misspecified. To improve the robustness, we develop a multiply robust estimator for the difference in cause-specific CIFs using right-censored competing risks data. The proposed framework integrates the pseudo-value approach, which transforms the censored, time-dependent CIF into a complete-data outcome, with the multiply robust estimation framework. By specifying multiple candidate models for both the propensity score and the outcome regression, the resulting estimator is consistent and asymptotically unbiased, provided that at least one of the multiple propensity score or outcome regression models is correctly specified. Simulation studies show our multiply robust estimator remains virtually unbiased and maintains nominal coverage rates under various model misspecification scenarios and a wide range of choices for the censoring rate. Finally, the proposed multiply robust model is illustrated using the Right Heart Catheterization dataset. 
+ oai:arXiv.org:2512.09433v1 stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 + new + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yifei Tian, Ying Wu + + + Estimation of Stochastic Optimal Transport Maps + https://arxiv.org/abs/2512.09499 + arXiv:2512.09499v1 Announce Type: new +Abstract: The optimal transport (OT) map is a geometry-driven transformation between high-dimensional probability distributions which underpins a wide range of tasks in statistics, applied probability, and machine learning. However, existing statistical theory for OT map estimation is quite restricted, hinging on Brenier's theorem (quadratic cost, absolutely continuous source) to guarantee existence and uniqueness of a deterministic OT map, on which various additional regularity assumptions are imposed to obtain quantitative error bounds. In many real-world problems these conditions fail or cannot be certified, in which case optimal transportation is possible only via stochastic maps that can split mass. To broaden the scope of map estimation theory to such settings, this work introduces a novel metric for evaluating the transportation quality of stochastic maps. Under this metric, we develop computationally efficient map estimators with near-optimal finite-sample risk bounds, subject to easy-to-verify minimal assumptions. Our analysis further accommodates common forms of adversarial sample contamination, yielding estimators with robust estimation guarantees. Empirical experiments are provided which validate our theory and demonstrate the utility of the proposed framework in settings where existing theory fails. These contributions constitute the first general-purpose theory for map estimation, compatible with a wide spectrum of real-world applications where optimal transport may be intrinsically stochastic. 
+ oai:arXiv.org:2512.09499v1 + stat.ML + cs.LG + math.ST stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by/4.0/ - Sohom Bhattacharya, Subhabrata Sen + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Sloan Nietert, Ziv Goldfeld - Perturbation-based Inference for Extreme Value Index - https://arxiv.org/abs/2512.08258 - arXiv:2512.08258v1 Announce Type: new -Abstract: The extreme value index (EVI) characterizes the tail behavior of a distribution and is crucial for extreme value theory. Inference on the EVI is challenging due to data scarcity in the tail region. We propose a novel method for constructing confidence intervals for the EVI using synthetic exceedances generated via perturbation. Rather than perturbing the entire sample, we add noise to exceedances above a high threshold and apply the generalized Pareto distribution (GPD) approximation. Confidence intervals are derived by simulating the distribution of pivotal statistics from the perturbed data. We show that the pivotal statistic is consistent, ensuring the proposed method provides consistent intervals for the EVI. Additionally, we demonstrate that the perturbed data is differentially private. When the GPD approximation is inadequate, we introduce a refined perturbation method. Simulation results show that our approach outperforms existing methods, providing robust and reliable inference. - oai:arXiv.org:2512.08258v1 + Calibration with Bagging of the Principal Components on a Large Number of Auxiliary Variables + https://arxiv.org/abs/2512.09505 + arXiv:2512.09505v1 Announce Type: new +Abstract: Calibration is a widely used method in survey sampling to adjust weights so that estimated totals of some chosen calibration variables match known population totals or totals obtained from other sources. 
When a large number of auxiliary variables are included as calibration variables, the variance of the total estimator can increase, and the calibration weights can become highly dispersed. To address these issues, we propose a solution inspired by bagging and principal component decomposition. With our approach, the principal components of the auxiliary variables are constructed. Several samples of calibration variables are selected without replacement and with unequal probabilities from among the principal components. For each sample, a system of weights is obtained. The final weights are the average weights of these different weighting systems. With our proposed method, it is possible to calibrate exactly for some of the main auxiliary variables. For the other auxiliary variables, the weights cannot be calibrated exactly. The proposed method allows us to obtain a total estimator whose variance does not explode when new auxiliary variables are added and to obtain very low scatter weights. Finally, our proposed method allows us to obtain a single weighting system that can be applied to several variables of interest of a survey. + oai:arXiv.org:2512.09505v1 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yiwei Tang, Judy Huixia Wang, Deyuan Li + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Caren Hasler, Arnaud Tripet, Yves Till\'e - Distribution of Gaps in Multi-lane Orderly and Disorderly Traffic Streams - https://arxiv.org/abs/2512.08585 - arXiv:2512.08585v1 Announce Type: new -Abstract: To study gap acceptance behaviour one needs the distribution (or probability density function) of gaps in the opposing stream. Further, in these times of widespread availability of large computing powers, traffic simulation has emerged as a popular analysis and design tool. 
Such simulations rely on randomly generating the arriving vehicles in a way that statistically resembles real-world streams. The generation process for disorderly streams requires information on gap distributions. A study of past literature reveals that very little work has been done to determine the distribution of gaps on multi-lane orderly and disorderly streams. This study aims to develop an analytical framework to specify the distribution of gaps for such streams. This analytical framework is built using the Renewal Process Theory. A maximum likelihood based process for the estimation of the parameters of the analytically derived distribution is also described. Later, real-world gap data from three different sites covering orderly and disorderly streams are used to show how the derived distribution function (using the proposed method) ably describes the observed gap distributions. - oai:arXiv.org:2512.08585v1 - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Transformers for Tabular Data: A Training Perspective of Self-Attention via Optimal Transport + https://arxiv.org/abs/2512.09530 + arXiv:2512.09530v1 Announce Type: new +Abstract: This thesis examines self-attention training through the lens of Optimal Transport (OT) and develops an OT-based alternative for tabular classification. The study tracks intermediate projections of the self-attention layer during training and evaluates their evolution using discrete OT metrics, including Wasserstein distance, Monge gap, optimality, and efficiency. Experiments are conducted on classification tasks with two and three classes, as well as on a biomedical dataset. + Results indicate that the final self-attention mapping often approximates the OT optimal coupling, yet the training trajectory remains inefficient. Pretraining the MLP section on synthetic data partially improves convergence but is sensitive to their initialization. 
To address these limitations, an OT-based algorithm is introduced: it generates class-specific dummy Gaussian distributions, computes an OT alignment with the data, and trains an MLP to generalize this mapping. The method achieves accuracy comparable to Transformers while reducing computational cost and scaling more efficiently under standardized inputs, though its performance depends on careful dummy-geometry design. All experiments and implementations are conducted in R. + oai:arXiv.org:2512.09530v1 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Ankita Sharma, Partha Chakroborty, Pranamesh Chakraborty + http://creativecommons.org/licenses/by/4.0/ + Antonio Candelieri, Alessandro Quadrio - Heuristics for Combinatorial Optimization via Value-based Reinforcement Learning: A Unified Framework and Analysis - https://arxiv.org/abs/2512.08601 - arXiv:2512.08601v1 Announce Type: new -Abstract: Since the 1990s, considerable empirical work has been carried out to train statistical models, such as neural networks (NNs), as learned heuristics for combinatorial optimization (CO) problems. When successful, such an approach eliminates the need for experts to design heuristics per problem type. Due to their structure, many hard CO problems are amenable to treatment through reinforcement learning (RL). Indeed, we find a wealth of literature training NNs using value-based, policy gradient, or actor-critic approaches, with promising results, both in terms of empirical optimality gaps and inference runtimes. Nevertheless, there has been a paucity of theoretical work undergirding the use of RL for CO problems. To this end, we introduce a unified framework to model CO problems through Markov decision processes (MDPs) and solve them using RL techniques. We provide easy-to-test assumptions under which CO problems can be formulated as equivalent undiscounted MDPs that provide optimal solutions to the original CO problems. 
Moreover, we establish conditions under which value-based RL techniques converge to approximate solutions of the CO problem with a guarantee on the associated optimality gap. Our convergence analysis provides: (1) a sufficient rate of increase in batch size and projected gradient descent steps at each RL iteration; (2) the resulting optimality gap in terms of problem parameters and targeted RL accuracy; and (3) the importance of a choice of state-space embedding. Together, our analysis illuminates the success (and limitations) of the celebrated deep Q-learning algorithm in this problem context. - oai:arXiv.org:2512.08601v1 + Don't Throw Away Your Beams: Improving Consistency-based Uncertainties in LLMs via Beam Search + https://arxiv.org/abs/2512.09538 + arXiv:2512.09538v1 Announce Type: new +Abstract: Consistency-based methods have emerged as an effective approach to uncertainty quantification (UQ) in large language models. These methods typically rely on several generations obtained via multinomial sampling, measuring their agreement level. However, in short-form QA, multinomial sampling is prone to producing duplicates due to peaked distributions, and its stochasticity introduces considerable variance in uncertainty estimates across runs. We introduce a new family of methods that employ beam search to generate candidates for consistency-based UQ, yielding improved performance and reduced variance compared to multinomial sampling. We also provide a theoretical lower bound on the beam set probability mass under which beam search achieves a smaller error than multinomial sampling. We empirically evaluate our approach on six QA datasets and find that its consistent improvements over multinomial sampling lead to state-of-the-art UQ performance. 
+ oai:arXiv.org:2512.09538v1 stat.ML + cs.CL cs.LG - math.OC - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Orit Davidovich, Shimrit Shtern, Segev Wasserkrug, Nimrod Megiddo + Ekaterina Fadeeva, Maiya Goloburda, Aleksandr Rubashevskii, Roman Vashurin, Artem Shelmanov, Preslav Nakov, Mrinmaya Sachan, Maxim Panov - A Persistent Homology Pipeline for the Analysis of Neural Spike Train Data - https://arxiv.org/abs/2512.08637 - arXiv:2512.08637v1 Announce Type: new -Abstract: In this article, we introduce a Topological Data Analysis (TDA) pipeline for neural spike train data. Understanding how the brain transforms sensory information into perception and behavior requires analyzing coordinated neural population activity. Modern electrophysiology enables simultaneous recording of spike train ensembles, but extracting meaningful information from these datasets remains a central challenge in neuroscience. A fundamental question is how ensembles of neurons discriminate between different stimuli or behavioral states, particularly when individual neurons exhibit weak or no stimulus selectivity, yet their coordinated activity may still contribute to network-level encoding. We describe a TDA framework that identifies stimulus-discriminative structure in spike train ensembles recorded from the mouse insular cortex during presentation of deionized water stimuli at distinct non-nociceptive temperatures. We show that population-level topological signatures effectively differentiate oral thermal stimuli even when individual neurons provide little or no discrimination. These findings demonstrate that ensemble organization can carry perceptually relevant information that standard single-unit analysis may miss. The framework builds on a mathematical representation of spike train ensembles that enables persistent homology to be applied to collections of point processes. 
At its core is the widely-used Victor-Purpura (VP) distance. Using this metric, we construct persistence-based descriptors that capture multiscale topological features of ensemble geometry. Two key theoretical results support the method: a stability theorem establishing robustness of persistent homology to perturbations in the VP metric parameter, and a probabilistic stability theorem ensuring robustness of topological signatures. - oai:arXiv.org:2512.08637v1 + A Bayesian Approach for Robust Longitudinal Envelope Models + https://arxiv.org/abs/2512.09553 + arXiv:2512.09553v1 Announce Type: new +Abstract: The envelope model provides a dimension-reduction framework for multivariate linear regression. However, existing envelope methods typically assume normally distributed random errors and do not accommodate repeated measures in longitudinal studies. To address these limitations, we propose the robust longitudinal envelope model (RoLEM). RoLEM employs a scale mixture of matrix-variate normal distributions to model random errors, allowing it to handle potential outliers, and incorporates flexible correlation structures for repeated measurements. In addition, we introduce new prior and proposal distributions on the Grassmann manifold to facilitate Bayesian inference for RoLEM. Simulation studies and real data analysis demonstrate the superior performance of the proposed method. + oai:arXiv.org:2512.09553v1 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new - http://creativecommons.org/publicdomain/zero/1.0/ - Cagatay Ayhan, Audrey N. 
Nash, Roberto Vincis, Martin Bauer, Richard Bertram, Tom Needham + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Peng Zeng, Yushan Mu - Exhausting the type I error level in event-driven group-sequential designs with a closed testing procedure for progression-free and overall survival - https://arxiv.org/abs/2512.08658 - arXiv:2512.08658v1 Announce Type: new -Abstract: In oncological clinical trials, overall survival (OS) is the gold-standard endpoint, but long follow-up and treatment switching can delay or dilute detectable effects. Progression-free survival (PFS) often provides earlier evidence and is therefore frequently used together with OS as multiple primary endpoints. Since in certain scenarios trial success may be defined if one of the two hypotheses involved can be rejected, a correction for multiple testing may be deemed necessary. Because PFS and OS are generally highly dependent, their test statistics are typically correlated. Ignoring this dependency (e.g. via a simple Bonferroni correction) is not power optimal. We develop a group-sequential testing procedure for the multiple primary endpoints PFS and OS that fully exhausts the family-wise error rate (FWER) by exploiting their dependence. Specifically, we characterize the joint asymptotic distribution of log-rank statistics across endpoints and multiple event-driven analysis cutoffs. Furthermore, we show that we can consistently estimate the covariance structure. Embedding these results in a closed testing procedure, we can recalculate critical values of the test statistics in order to spend the available type I error optimally. An important extension to the current literature is that we allow for both interim and final analysis to be event-driven. Simulations based on illness-death multi-state models empirically confirm FWER control for moderate to large sample sizes. 
Compared with a simple Bonferroni correction, the proposed methods recover roughly two thirds of the power loss for OS, increase disjunctive and conjunctive power, and enable meaningful early stopping. In planning, these gains translate into about 5% fewer OS events required to reach the targeted power. We also discuss practical issues in the implementation of such designs and possible extensions of the introduced method. - oai:arXiv.org:2512.08658v1 - stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Neural posterior inference with state-space models for calibrating ice sheet simulators + https://arxiv.org/abs/2512.09561 + arXiv:2512.09561v1 Announce Type: new +Abstract: Ice sheet models are routinely used to quantify and project an ice sheet's contribution to sea level rise. In order for an ice sheet model to generate realistic projections, its parameters must first be calibrated using observational data; this is challenging due to the nonlinearity of the model equations, the high dimensionality of the underlying parameters, and limited data availability for validation. This study leverages the emerging field of neural posterior approximation for efficiently calibrating ice sheet model parameters and boundary conditions. We make use of a one-dimensional (flowline) Shallow-Shelf Approximation model in a state-space framework. A neural network is trained to infer the underlying parameters, namely the bedrock elevation and basal friction coefficient along the flowline, based on observations of ice velocity and ice surface elevation. Samples from the approximate posterior distribution of the parameters are then used within an ensemble Kalman filter to infer latent model states, namely the ice thickness along the flowline. We show through a simulation study that our approach yields more accurate estimates of the parameters and states than a state-augmented ensemble Kalman filter, which is the current state-of-the-art. 
We apply our approach to infer the bed elevation and basal friction along a flowline in Thwaites Glacier, Antarctica.
 + oai:arXiv.org:2512.09561v1
 + stat.AP
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 + http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 + Bao Anh Vu, Andrew Zammit-Mangion, David Gunawan, Felicity S. McCormack, Noel Cressie
 
 
 + Uniform-over-dimension location tests for multivariate and high-dimensional data
 + https://arxiv.org/abs/2512.09659
 + arXiv:2512.09659v1 Announce Type: new
+Abstract: Asymptotic methods for hypothesis testing in high-dimensional data usually require the dimension of the observations to increase to infinity, often with an additional relationship between the dimension (say, $p$) and the sample size (say, $n$). On the other hand, multivariate asymptotic testing methods are valid for fixed dimension only and their implementations typically require the sample size to be large compared to the dimension to yield desirable results. In practical scenarios, it is usually not possible to determine whether the dimension of the data conforms to the conditions required for the validity of the high-dimensional asymptotic methods for hypothesis testing, or whether the sample size is large enough compared to the dimension of the data. 
In this work, we first describe the notion of uniform-over-$p$ convergences and subsequently develop a uniform-over-dimension central limit theorem. An asymptotic test for the two-sample equality of locations is developed, which now holds uniformly over the dimension of the observations. Using simulated and real data, it is demonstrated that the proposed test exhibits better performance compared to several popular tests in the literature for high-dimensional data as well as the usual scaled two-sample tests for multivariate data, including Hotelling's $T^2$ test for multivariate Gaussian data.
 + oai:arXiv.org:2512.09659v1
 stat.ME
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 + http://creativecommons.org/licenses/by/4.0/
 + Ritabrata Karmakar, Joydeep Chowdhury, Subhajit Dutta, Marc G. Genton
 
 
 - Stationary Point Constrained Inference via Diffeomorphisms
 - https://arxiv.org/abs/2512.08735
 - arXiv:2512.08735v1 Announce Type: new
-Abstract: Stationary points or derivative zero crossings of a regression function correspond to points where a trend reverses, making their estimation scientifically important. Existing approaches to uncertainty quantification for stationary points cannot deliver valid joint inference when multiple extrema are present, an essential capability in applications where the relative locations of peaks and troughs carry scientific significance. We develop a principled framework for functions with multiple regions of monotonicity by constraining the number of stationary points. We represent each function in the diffeomorphic formulation as the composition of a simple template and a smooth bijective transformation, and show that this parameterization enables coherent joint inference on the extrema. This construction guarantees a prespecified number of stationary points and provides a direct, interpretable parameterization of their locations. 
We derive non-asymptotic confidence bounds and establish approximate normality for the maximum likelihood estimators, with parallel results in the Bayesian setting. Simulations and an application to brain signal estimation demonstrate the method's accuracy and interpretability.
 - oai:arXiv.org:2512.08735v1
 + A simple geometric proof for the characterisation of e-merging functions
 + https://arxiv.org/abs/2512.09708
 + arXiv:2512.09708v1 Announce Type: new
+Abstract: E-values offer a powerful framework for aggregating evidence across different (possibly dependent) statistical experiments. A fundamental question is to identify e-merging functions, namely mappings that merge several e-values into a single valid e-value. A simple and elegant characterisation of this function class was recently obtained by Wang (2025), though via technically involved arguments. This note gives a short and intuitive geometric proof of the same characterisation, based on a supporting hyperplane argument applied to concave envelopes. We also show that the result holds even without imposing monotonicity in the definition of e-merging functions, which was needed for the existing proof. This shows that any non-monotone merging rule is automatically dominated by a monotone one, and hence extending the definition beyond the monotone case brings no additional generality.
 + oai:arXiv.org:2512.09708v1
 + math.ST
 + stat.TH
 + Thu, 11 Dec 2025 00:00:00 -0500
 + new
 + http://creativecommons.org/licenses/by/4.0/
 + Eugenio Clerico
 + 
 + Bayesian Model Selection with an Application to Cosmology
 + https://arxiv.org/abs/2512.09724
 + arXiv:2512.09724v1 Announce Type: new
+Abstract: We investigate cosmological parameter inference and model selection from a Bayesian perspective. Type Ia supernova data from the Dark Energy Survey (DES-SN5YR) are used to test the \(\Lambda\)CDM, \(w\)CDM, and CPL cosmological models. 
Posterior inference is performed via Hamiltonian Monte Carlo using the No-U-Turn Sampler (NUTS) implemented in NumPyro and analyzed with ArviZ in Python. Bayesian model comparison is conducted through Bayes factors computed using the \texttt{bridgesampling} library in R. The results indicate that all three models demonstrate similar predictive performance, but \(w\)CDM shows stronger evidence relative to \(\Lambda\)CDM and CPL. We conclude that, under the assumptions and data used in this study, \(w\)CDM provides a better description of cosmological expansion.
 + oai:arXiv.org:2512.09724v1
 + stat.AP
 + astro-ph.CO
 stat.ME
 - Wed, 10 Dec 2025 00:00:00 -0500
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 - http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Michael Price, Debdeep Pati, Ning Ning
 + http://creativecommons.org/licenses/by/4.0/
 + Nikoloz Gigiberia
 
 
 - Genetic Regression Analysis of Human Brain Connectivity Using an Efficient Estimator of Genetic Covariance
 - https://arxiv.org/abs/2512.08756
 - arXiv:2512.08756v1 Announce Type: new
-Abstract: Non-invasive measurements of the human brain using magnetic resonance imaging (MRI) have significantly improved our understanding of the brain's network organization by enabling measurement of anatomical connections between brain regions (structural connectivity) and their coactivation (functional connectivity). Heritability analyses have established that genetics account for considerable intersubject variability in structural and functional connectivity. However, characterizing how genetics shape the relationship between structural and functional connectomes remains challenging, since this association is obscured by unique environmental exposures in observed data. To address this, we develop a regression analysis framework that enables characterization of the relationship between latent genetic contributions to structural and functional connectivity. 
Implementing the proposed framework requires estimating genetic covariance matrices in multivariate random effects models, which is computationally intractable for high-dimensional connectome data using existing methods. We introduce a constrained method-of-moments estimator that is several orders of magnitude faster than existing methods without sacrificing estimation accuracy. For the genetic regression analysis, we develop regularized estimation approaches, including ridge, lasso, and tensor regression. Applying our method to Human Connectome Project data, we find that functional connectivity is moderately predictable from structure at the genetic level (max R^2 = 0.34), though it is not directly predictable in the observed data (max R^2 = 0.03). This stark contrast suggests that unique environmental factors mask strong genetically-encoded structure-function relationships. - oai:arXiv.org:2512.08756v1 + Network Meta Analysis of Mean Survival + https://arxiv.org/abs/2512.09732 + arXiv:2512.09732v1 Announce Type: new +Abstract: Decisions based upon pairwise comparisons of multiple treatments are naturally performed in terms of the mean survival of the selected study arms or functions thereof. However, synthesis of treatment comparisons is usually performed on surrogates of the mean survival, such as hazard ratios or restricted mean survival times. Thus, network meta-analysis techniques may suffer from the limitations of these approaches, such as incorrect proportional hazards assumption or short-term follow-up periods. We propose a Bayesian framework for the network meta-analysis of the main outcome informing the decision, the mean survival of a treatment. Its derivation involves extrapolation of the observed survival curves. We use methods for stable extrapolation that integrate long term evidence based upon mortality projections. Extrapolations are performed using flexible poly-hazard parametric models and M-spline-based methods. 
We assess the computational and statistical efficiency of different techniques using a simulation study and apply the developed methods to two real data sets. The proposed method is formulated within a decision theoretic framework for cost-effectiveness analyses, where the `best' treatment is to be selected and incorporating the associated cost information is straightforward.
 + oai:arXiv.org:2512.09732v1
 stat.AP
 + stat.ME
 + Thu, 11 Dec 2025 00:00:00 -0500
 new
 http://creativecommons.org/licenses/by/4.0/
 + Anastasios Apsemidis, Dimitris Mavridis, Nikolaos Demiris
 
 
 - Point and interval estimators of a changepoint in stochastical dominance between two distributions
 - https://arxiv.org/abs/2512.08823
 - arXiv:2512.08823v1 Announce Type: new
-Abstract: For differences between means of continuous data from independent groups, the customary scale-free measure of effect is the standardized mean difference (SMD). To justify use of SMD, one should be reasonably confident that the group-level variances are equal. Empirical evidence often contradicts this assumption. Thus, we have investigated an alternate approach, based on stochastic ordering of the treatment and control distributions, that takes into account means and variances. For applying stochastic ordering, our development yields a key quantity, $\mathsf{A}$, the outcome value at which the direction of the ordering of the treatment and control distributions changes.
 - Using an extensive simulation, we studied relative bias of point estimators of $\mathsf{A}$ and coverage and relative width of bootstrap confidence intervals.
 - oai:arXiv.org:2512.08823v1
 + A general class of continuous asymmetric distributions with positive support
 + https://arxiv.org/abs/2512.09787
 + arXiv:2512.09787v1 Announce Type: new
+Abstract: In order to better fit real-world datasets, studying asymmetric distributions is of great interest. 
In this work, we derive several mathematical properties of a general class of asymmetric distributions with positive support which shows up as a unified framework for Extreme Value Theory asymptotic results. The new model generalizes some well-known distribution models such as Generalized Gamma, Inverse Gamma, Weibull, Fr\'echet, Half-normal, Modified half-normal, Rayleigh, and Erlang. To highlight the applicability of our results, the performance of the analytical models is evaluated through real-life dataset modeling. + oai:arXiv.org:2512.09787v1 math.ST + stat.AP stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Elena Kulinskaya, David C. Hoaglin + Felipe S. Quintino, Pushpa N. Rathie, Luan C. S. M. Ozelim, Tiago A. da Fonseca, Roberto Vila - Commanding the Foul Shot: A New Ensemble of Free Throw Metrics - https://arxiv.org/abs/2512.08824 - arXiv:2512.08824v1 Announce Type: new -Abstract: With the NBA's adoption of in-game limb tracking in 2023, Sony's Hawk-Eye system now captures high-resolution, 3D poses of players and the ball 60 times per second. Linking these data to key events such as shots, passes, and rebounds opens a new era in NBA analytics. Here, we leverage Hawk-Eye tracking to introduce a novel ensemble of metrics for evaluating free-throw shooting and demonstrate that our framework captures skill more effectively than traditional make-or-miss statistics. Inspired by baseball analytics, we introduce command, which quantifies the quality of a free throw by measuring a shooter's accuracy and precision near the basket's bullseye. This metric recognizes that some makes (or misses) are better than others and captures a player's ability to execute quality attempts consistently. To identify what drives command, we define launch-based metrics assessing consistency in release velocity, angle, and 3D position. 
Players with greater touch -- i.e., more consistent launch dynamics -- exhibit stronger command as they can reliably control their shot trajectory. Finally, we develop a physics model to identify the range of launch conditions that result in a make and to determine which launch conditions are most robust to small perturbations. This framework reveals "safe" launch regions and explains why certain players, such as Steph Curry, excel at free throws, providing actionable insights for player development. - oai:arXiv.org:2512.08824v1 - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + A Conversation with Mike West + https://arxiv.org/abs/2512.09790 + arXiv:2512.09790v1 Announce Type: new +Abstract: Mike West is currently the Arts & Sciences Distinguished Professor Emeritus of Statistics and Decision Sciences at Duke University. Mike's research in Bayesian analysis spans multiple interlinked areas: theory and methods of dynamic models in time series analysis, foundations of inference and decision analysis, multivariate and latent structure analysis, stochastic computation and optimisation, among others. Inter-disciplinary R&D has ranged across applications in commercial forecasting, dynamic networks, finance, econometrics, signal processing, climatology, systems biology, genomics and neuroscience, among other areas. Among Mike's currently active research areas are forecasting, causal prediction and decision analysis in business, economic policy and finance, as well as in personal decision making. Mike led the development of academic statistics at Duke University from 1990-2002, and has been broadly engaged in professional leadership elsewhere. He is past president of the International Society for Bayesian Analysis (ISBA), and has served in founding roles and as board member for several professional societies, national and international centres and institutes. 
Recipient of numerous awards, Mike has been active in research with various companies, banks, government agencies and academic centres, co-founder of a successful biotechnology company, and board member for several financial and IT companies. He has published 4 books, several edited volumes and over 200 papers. Mike has worked with many undergraduate and Master's research students, and as of 2025 has mentored around 65 primary PhD students and postdoctoral associates who moved to academic, industrial or governmental positions involving advanced statistical and data science research. + oai:arXiv.org:2512.09790v1 + stat.OT + Thu, 11 Dec 2025 00:00:00 -0500 new http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jake McGrath, Amanda Glazer, Vanna Bushong, Michelle Nguyen, Kirk Goldsberry + Hedibert F. Lopes, Filippo Ascolani - Prediction Intervals for Individual Treatment Effects in a Multiple Decision Point Framework using Conformal Inference - https://arxiv.org/abs/2512.08828 - arXiv:2512.08828v1 Announce Type: new -Abstract: Accurately quantifying uncertainty of individual treatment effects (ITEs) across multiple decision points is crucial for personalized decision-making in fields such as healthcare, finance, education, and online marketplaces. Previous work has focused on predicting non-causal longitudinal estimands or constructing prediction bands for ITEs using cross-sectional data based on exchangeability assumptions. We propose a novel method for constructing prediction intervals using conformal inference techniques for time-varying ITEs with weaker assumptions than prior literature. We guarantee a lower bound for coverage, which is dependent on the degree of non-exchangeability in the data. Although our method is broadly applicable across decision-making contexts, we support our theoretical claims with simulations emulating micro-randomized trials (MRTs) -- a sequential experimental design for mobile health (mHealth) studies. 
We demonstrate the practical utility of our method by applying it to a real-world MRT - the Intern Health Study (IHS). - oai:arXiv.org:2512.08828v1 + RECAP Framework v1.0: A Multi-Layer Inheritance Architecture for Evidence Synthesis + https://arxiv.org/abs/2512.09821 + arXiv:2512.09821v1 Announce Type: new +Abstract: Evidence synthesis has advanced through improved reporting standards, bias assessment tools, and analytic methods, but current workflows remain limited by a single-layer structure in which conceptual, methodological, and procedural decisions are made on the same level. This forces each project to rebuild its methodological foundations from scratch, leading to inconsistencies, conceptual drift, and unstable reasoning across projects. RECAP Framework v1.0 introduces a three-layer meta-architecture consisting of methodological laws (Grandparent), domain-level abstractions (Parent), and project-level implementations (Child). The framework defines an inheritance system with strict rules for tiering, routing, and contamination control to preserve construct clarity, enforce inferential discipline, and support reproducibility across multi-project evidence ecosystems. RECAP provides a formal governance layer for evidence synthesis and establishes the foundation for a methodological lineage designed to stabilize reasoning across research programs. + oai:arXiv.org:2512.09821v1 stat.ME - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 new http://creativecommons.org/licenses/by/4.0/ - Swaraj Bose, Walter Dempsey + Hung Kuan Lee - Partially Bayes p-values for large scale inference - https://arxiv.org/abs/2512.08847 - arXiv:2512.08847v1 Announce Type: new -Abstract: We seek to conduct statistical inference for a large collection of primary parameters, each with its own nuisance parameters. 
Our approach is partially Bayesian, in that we treat the primary parameters as fixed while we model the nuisance parameters as random and drawn from an unknown distribution which we endow with a nonparametric prior. We compute partially Bayes p-values by conditioning on nuisance parameter statistics, that is, statistics that are ancillary for the primary parameters and informative about the nuisance parameters. The proposed p-values have a Bayesian interpretation as tail areas computed with respect to the posterior distribution of the nuisance parameters. Similarly to the conditional predictive p-values of Bayarri and Berger, the partially Bayes p-values avoid double use of the data (unlike posterior predictive p-values). A key ingredient of our approach is that we model nuisance parameters hierarchically across problems; the sharing of information across problems leads to improved calibration. We illustrate the proposed partially Bayes p-values in two applications: the normal means problem with unknown variances and a location-scale model with unknown distribution shape. We model the scales via Dirichlet processes in both examples and the distribution shape via P\'olya trees in the second. Our proposed partially Bayes p-values increase power and calibration compared to purely frequentist alternatives. - oai:arXiv.org:2512.08847v1 + Predictor-Informed Bayesian Nonparametric Clustering + https://arxiv.org/abs/2512.09826 + arXiv:2512.09826v1 Announce Type: new +Abstract: In this project we are interested in performing clustering of observations such that the cluster membership is influenced by a set of predictors. To that end, we employ the Bayesian nonparameteric Common Atoms Model, which is a nested clustering algorithm that utilizes a (fixed) group membership for each observation to encourage more similar clustering of members of the same group. 
CAM operates by assuming each group has its own vector of cluster probabilities, which are themselves clustered to allow similar clustering for some groups. We extend this approach by treating the group membership as an unknown latent variable determined as a flexible nonparametric form of the covariate vector. Consequently, observations with similar predictor values will be in the same latent group and are more likely to be clustered together than observations with disparate predictors. We propose a pyramid group model that flexibly partitions the predictor space into these latent group memberships. This pyramid model operates similarly to a Bayesian regression tree process except that it uses the same splitting rule at all nodes at the same tree depth, which facilitates improved mixing. We outline a block Gibbs sampler to perform posterior inference from our model. Our methodology is demonstrated in simulation and real data examples. In the real data application, we utilize the RAND Health and Retirement Study to cluster and predict patient outcomes in terms of the number of overnight hospital stays.
Our results indicate that price changes exhibit persistent behavior and high multifractality, characterized by large fluctuations. Only one of the thirty-five time series analyzed showed an outlier result, suggesting that the funds display very similar behavior. By shuffling the series, we were able to reduce multifractality significantly. These findings suggest that Green Bond funds exhibit multifractal behavior typical of other financial assets. - oai:arXiv.org:2512.08886v1 - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Supervised learning pays attention + https://arxiv.org/abs/2512.09912 + arXiv:2512.09912v1 Announce Type: new +Abstract: In-context learning with attention enables large neural networks to make context-specific predictions by selectively focusing on relevant examples. Here, we adapt this idea to supervised learning procedures such as lasso regression and gradient boosting, for tabular data. Our goals are to (1) flexibly fit personalized models for each prediction point and (2) retain model simplicity and interpretability. + Our method fits a local model for each test observation by weighting the training data according to attention, a supervised similarity measure that emphasizes features and interactions that are predictive of the outcome. Attention weighting allows the method to adapt to heterogeneous data in a data-driven way, without requiring cluster or similarity pre-specification. Further, our approach is uniquely interpretable: for each test observation, we identify which features are most predictive and which training observations are most relevant. We then show how to use attention weighting for time series and spatial data, and we present a method for adapting pretrained tree-based models to distributional shift using attention-weighted residual corrections. 
Across real and simulated datasets, attention weighting improves predictive performance while preserving interpretability, and theory shows that attention-weighted linear models attain lower mean squared error than the standard linear model under mixture-of-models data-generating processes with known subgroup structure.
- oai:arXiv.org:2512.07842v1 - q-bio.NC - math.DS - math.PR - stat.CO - Wed, 10 Dec 2025 00:00:00 -0500 + Optimizing Algorithms for Mobile Health Interventions with Active Querying Optimization + https://arxiv.org/abs/2512.08950 + arXiv:2512.08950v1 Announce Type: cross +Abstract: Reinforcement learning in mobile health (mHealth) interventions requires balancing intervention efficacy with user burden, particularly when state measurements (for example, user surveys or feedback) are costly yet essential. The Act-Then-Measure (ATM) heuristic addresses this challenge by decoupling control and measurement actions within the Action-Contingent Noiselessly Observable Markov Decision Process (ACNO-MDP) framework. However, the standard ATM algorithm relies on a temporal-difference-inspired Q-learning method, which is prone to instability in sparse and noisy environments. In this work, we propose a Bayesian extension to ATM that replaces standard Q-learning with a Kalman filter-style Bayesian update, maintaining uncertainty-aware estimates of Q-values and enabling more stable and sample-efficient learning. We evaluate our method in both toy environments and clinically motivated testbeds. In small, tabular environments, Bayesian ATM achieves comparable or improved scalarized returns with substantially lower variance and more stable policy behavior. In contrast, in larger and more complex mHealth settings, both the standard and Bayesian ATM variants perform poorly, suggesting a mismatch between ATM's modeling assumptions and the structural challenges of real-world mHealth domains. These findings highlight the value of uncertainty-aware methods in low-data settings while underscoring the need for new RL algorithms that explicitly model causal structure, continuous states, and delayed feedback under observation cost constraints. 
+ oai:arXiv.org:2512.08950v1 + cs.LG + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by/4.0/ - Daniele Avitabile, Gabriel J. Lord, Khadija Meddouni + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Aseel Rawashdeh - Bayesian Optimization for Function-Valued Responses under Min-Max Criteria - https://arxiv.org/abs/2512.07868 - arXiv:2512.07868v1 Announce Type: cross -Abstract: Bayesian optimization is widely used for optimizing expensive black box functions, but most existing approaches focus on scalar responses. In many scientific and engineering settings the response is functional, varying smoothly over an index such as time or wavelength, which makes classical formulations inadequate. Existing methods often minimize integrated error, which captures average performance but neglects worst case deviations. To address this limitation we propose min-max Functional Bayesian Optimization (MM-FBO), a framework that directly minimizes the maximum error across the functional domain. Functional responses are represented using functional principal component analysis, and Gaussian process surrogates are constructed for the principal component scores. Building on this representation, MM-FBO introduces an integrated uncertainty acquisition function that balances exploitation of worst case expected error with exploration across the functional domain. We provide two theoretical guarantees: a discretization bound for the worst case objective, and a consistency result showing that as the surrogate becomes accurate and uncertainty vanishes, the acquisition converges to the true min-max objective. We validate the method through experiments on synthetic benchmarks and physics inspired case studies involving electromagnetic scattering by metaphotonic devices and vapor phase infiltration. 
Results show that MM-FBO consistently outperforms existing baselines, highlighting the importance of explicitly modeling functional uncertainty in Bayesian optimization.
+ oai:arXiv.org:2512.08956v1 cs.LG - cs.AI stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - Pouya Ahadi, Reza Marzban, Ali Adibi, Kamran Paynabar + Kumarjit Pathak, Karthik K, Sachin Madan, Jitin Kapila - Softly Symbolifying Kolmogorov-Arnold Networks - https://arxiv.org/abs/2512.07875 - arXiv:2512.07875v1 Announce Type: cross -Abstract: Kolmogorov-Arnold Networks (KANs) offer a promising path toward interpretable machine learning: their learnable activations can be studied individually, while collectively fitting complex data accurately. In practice, however, trained activations often lack symbolic fidelity, learning pathological decompositions with no meaningful correspondence to interpretable forms. We propose Softly Symbolified Kolmogorov-Arnold Networks (S2KAN), which integrate symbolic primitives directly into training. Each activation draws from a dictionary of symbolic and dense terms, with learnable gates that sparsify the representation. Crucially, this sparsification is differentiable, enabling end-to-end optimization, and is guided by a principled Minimum Description Length objective. When symbolic terms suffice, S2KAN discovers interpretable forms; when they do not, it gracefully degrades to dense splines. We demonstrate competitive or superior accuracy with substantially smaller models across symbolic benchmarks, dynamical systems forecasting, and real-world prediction tasks, and observe evidence of emergent self-sparsification even without regularization pressure. 
- oai:arXiv.org:2512.07875v1 - cs.LG - cs.NE - physics.data-an - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + BISTRO - A Bi-Fidelity Stochastic Gradient Framework using Trust-Regions for Optimization Under Uncertainty + https://arxiv.org/abs/2512.09055 + arXiv:2512.09055v1 Announce Type: cross +Abstract: Stochastic optimization of engineering systems is often infeasible due to repeated evaluations of a computationally expensive, high-fidelity simulation. Bi-fidelity methods mitigate this challenge by leveraging a cheaper, approximate model to accelerate convergence. Most existing bi-fidelity approaches, however, exploit either design-space curvature or random-space correlation, not both. We present BISTRO - a BI-fidelity Stochastic Trust-Region Optimizer for unconstrained optimization under uncertainty through a stochastic approximation procedure. This approach exploits the curvature information of a low-fidelity objective function to converge within a basin of a local minimum of the high-fidelity model where low-fidelity curvature information is no longer valuable. The method then switches to a variance-reduced stochastic gradient descent procedure. We provide convergence guarantees in expectation under certain regularity assumptions and ensure the best-case $\mathcal{O}(1/n)$ convergence rate for stochastic optimization. On benchmark problems and a 20-dimensional space shuttle reentry case, BISTRO converges faster than adaptive sampling and variance reduction procedures and cuts computational expense by up to 29x. + oai:arXiv.org:2512.09055v1 + math.OC + stat.CO + Thu, 11 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - James Bagrow, Josh Bongard + Thomas O. Dixon, Geoffrey F. Bomarito, James E. Warner, Alex A. 
Gorodetsky - Fourier-Enhanced Recurrent Neural Networks for Electrical Load Time Series Downscaling - https://arxiv.org/abs/2512.07876 - arXiv:2512.07876v1 Announce Type: cross -Abstract: We present a Fourier-enhanced recurrent neural network (RNN) for downscaling electrical loads. The model combines (i) a recurrent backbone driven by low-resolution inputs, (ii) explicit Fourier seasonal embeddings fused in latent space, and (iii) a self-attention layer that captures dependencies among high-resolution components within each period. Across four PJM territories, the approach yields RMSE lower and flatter horizon-wise than classical Prophet baselines (with and without seasonality/LAA) and than RNN ablations without attention or Fourier features. - oai:arXiv.org:2512.07876v1 + Banach neural operator for Navier-Stokes equations + https://arxiv.org/abs/2512.09070 + arXiv:2512.09070v1 Announce Type: cross +Abstract: Classical neural networks are known for their ability to approximate mappings between finite-dimensional spaces, but they fall short in capturing complex operator dynamics across infinite-dimensional function spaces. Neural operators, in contrast, have emerged as powerful tools in scientific machine learning for learning such mappings. However, standard neural operators typically lack mechanisms for mixing or attending to input information across space and time. In this work, we introduce the Banach neural operator (BNO) -- a novel framework that integrates Koopman operator theory with deep neural networks to predict nonlinear, spatiotemporal dynamics from partial observations. The BNO approximates a nonlinear operator between Banach spaces by combining spectral linearization (via Koopman theory) with deep feature learning (via convolutional neural networks and nonlinear activations). This sequence-to-sequence model captures dominant dynamic modes and allows for mesh-independent prediction. 
Numerical experiments on the Navier-Stokes equations demonstrate the method's accuracy and generalization capabilities. In particular, BNO achieves robust zero-shot super-resolution in unsteady flow prediction and consistently outperforms conventional Koopman-based methods and deep learning models. + oai:arXiv.org:2512.09070v1 + cs.NE cs.LG stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qi Chen, Mihai Anitescu + 10.1063/5.0284818 + Bo Zhang - CrowdLLM: Building LLM-Based Digital Populations Augmented with Generative Models - https://arxiv.org/abs/2512.07890 - arXiv:2512.07890v1 Announce Type: cross -Abstract: The emergence of large language models (LLMs) has sparked much interest in creating LLM-based digital populations that can be applied to many applications such as social simulation, crowdsourcing, marketing, and recommendation systems. A digital population can reduce the cost of recruiting human participants and alleviate many concerns related to human subject study. However, research has found that most of the existing works rely solely on LLMs and could not sufficiently capture the accuracy and diversity of a real human population. To address this limitation, we propose CrowdLLM that integrates pretrained LLMs and generative models to enhance the diversity and fidelity of the digital population. We conduct theoretical analysis of CrowdLLM regarding its great potential in creating cost-effective, sufficiently representative, scalable digital populations that can match the quality of a real crowd. Comprehensive experiments are also conducted across multiple domains (e.g., crowdsourcing, voting, user rating) and simulation studies which demonstrate that CrowdLLM achieves promising performance in both accuracy and distributional fidelity to human data. 
- oai:arXiv.org:2512.07890v1 - cs.MA + Beyond the Hype: Comparing Lightweight and Deep Learning Models for Air Quality Forecasting + https://arxiv.org/abs/2512.09076 + arXiv:2512.09076v1 Announce Type: cross +Abstract: Accurate forecasting of urban air pollution is essential for protecting public health and guiding mitigation policies. While Deep Learning (DL) and hybrid pipelines dominate recent research, their complexity and limited interpretability hinder operational use. This study investigates whether lightweight additive models -- Facebook Prophet (FBP) and NeuralProphet (NP) -- can deliver competitive forecasts for particulate matter (PM$_{2.5}$, PM$_{10}$) in Beijing, China. Using multi-year pollutant and meteorological data, we applied systematic feature selection (correlation, mutual information, mRMR), leakage-safe scaling, and chronological data splits. Both models were trained with pollutant and precursor regressors, with NP additionally leveraging lagged dependencies. For context, two machine learning baselines (LSTM, LightGBM) and one traditional statistical model (SARIMAX) were also implemented. Performance was evaluated on a 7-day holdout using MAE, RMSE, and $R^2$. Results show that FBP consistently outperformed NP, SARIMAX, and the learning-based baselines, achieving test $R^2$ above 0.94 for both pollutants. These findings demonstrate that interpretable additive models remain competitive with both traditional and complex approaches, offering a practical balance of accuracy, transparency, and ease of deployment. 
+ oai:arXiv.org:2512.09076v1 cs.LG - stat.ME + cs.AI stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ryan Feng Lin, Keyu Tian, Hanming Zheng, Congjing Zhang, Li Zeng, Shuai Huang - - - Expectations in Expectation Propagation - https://arxiv.org/abs/2512.08034 - arXiv:2512.08034v1 Announce Type: cross -Abstract: Expectation Propagation (EP) is a widely used message-passing algorithm that decomposes a global inference problem into multiple local ones. It approximates marginal distributions (beliefs) using intermediate functions (messages). While beliefs must be proper probability distributions that integrate to one, messages may have infinite integral values. In Gaussian-projected EP, such messages take a Gaussian form and appear as if they have "negative" variances. Although allowed within the EP framework, these negative-variance messages can impede algorithmic progress. - In this paper, we investigate EP in linear models and analyze the relationship between the corresponding beliefs. Based on the analysis, we propose both non-persistent and persistent approaches that prevent the algorithm from being blocked by messages with infinite integral values. - Furthermore, by examining the relationship between the EP messages in linear models, we develop an additional approach that avoids the occurrence of messages with infinite integral values. 
- oai:arXiv.org:2512.08034v1 - cs.IT - eess.SP - math.IT - stat.CO - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Zilu Zhao, Fangqing Xiao, Dirk Slock + Moazzam Umer Gondal, Hamad ul Qudous, Asma Ahmad Farhan - LUNA: Linear Universal Neural Attention with Generalization Guarantees - https://arxiv.org/abs/2512.08061 - arXiv:2512.08061v1 Announce Type: cross -Abstract: Scaling attention faces a critical bottleneck: the $\mathcal{O}(n^2)$ quadratic computational cost of softmax attention, which limits its application in long-sequence domains. While linear attention mechanisms reduce this cost to $\mathcal{O}(n)$, they typically rely on fixed random feature maps, such as random Fourier features or hand-crafted functions. This reliance on static, data-agnostic kernels creates a fundamental trade-off, forcing practitioners to sacrifice significant model accuracy for computational efficiency. We introduce \textsc{LUNA}, a kernelized linear attention mechanism that eliminates this trade-off, retaining linear cost while matching and surpassing the accuracy of quadratic attention. \textsc{LUNA} is built on the key insight that the kernel feature map itself should be learned rather than fixed a priori. By parameterizing the kernel, \textsc{LUNA} learns a feature basis tailored to the specific data and task, overcoming the expressive limitations of fixed-feature methods. \textsc{Luna} implements this with a learnable feature map that induces a positive-definite kernel and admits a streaming form, yielding linear time and memory scaling in the sequence length. Empirical evaluations validate our approach across diverse settings. On the Long Range Arena (LRA), \textsc{Luna} achieves state-of-the-art average accuracy among efficient Transformers under compute parity, using the same parameter count, training steps, and approximate FLOPs. 
\textsc{LUNA} also excels at post-hoc conversion: replacing softmax in fine-tuned BERT and ViT-B/16 checkpoints and briefly fine-tuning recovers most of the original performance, substantially outperforming fixed linearizations.
+ oai:arXiv.org:2512.09094v1 + eess.IV + cs.CV cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Ashkan Shahbazi, Ping He, Ali Abbasi, Yikun Bai, Xinran Liu, Elaheh Akbari, Darian Salehi, Navid NaderiAlizadeh, Soheil Kolouri - - - Cabin Layout, Seat Density, and Passenger Segmentation in Air Transport: Implications for Prices, Ancillary Revenues, and Efficiency - https://arxiv.org/abs/2512.08066 - arXiv:2512.08066v1 Announce Type: cross -Abstract: This study investigates how the layout and density of seats in aircraft cabins influence the pricing of airline tickets on domestic flights. The analysis is based on microdata from boarding passes linked to face-to-face interviews with passengers, allowing us to relate the price paid to the location on the aircraft seat map, as well as market characteristics and flight operations. Econometric models were estimated using the Post-Double-Selection LASSO (PDS-LASSO) procedure, which selects numerous controls for unobservable factors linked to commercial and operational aspects, thus enabling better identification of the effect of variables such as advance purchase, reason for travel, fuel price, market structure, and load factor, among others. The results suggest that a higher density of seat rows is associated with lower prices, reflecting economies of scale with the increase in aircraft size and gains in operational efficiency. An unexpected result was also obtained: in situations where there was no seat selection fee, passengers with more expensive tickets were often allocated middle seats due to purchasing at short notice, when the side alternatives were no longer available. This behavior helps explain the economic logic behind one of the main ancillary revenues of airlines. In addition to quantitative analysis, the study incorporates an exploratory approach to innovative cabin concepts and their possible effects on density and comfort on board. 
- oai:arXiv.org:2512.08066v1 - eess.SY - cs.SY - econ.GN - q-fin.EC - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ - 10.5281/zenodo.17860616 - Communications in Airline Economics Research, 202117818, 2025 - Alessandro V. M. Oliveira, Moises D. Vassallo + Pedro M. Gordaliza, Nataliia Molchanova, Jaume Banus, Thomas Sanchez, Meritxell Bach Cuadra - Subcellular proteome niche discovery using semi-supervised functional clustering - https://arxiv.org/abs/2512.08087 - arXiv:2512.08087v1 Announce Type: cross -Abstract: Intracellular compartmentalization of proteins underpins their function and the metabolic processes they sustain. Various mass spectrometry-based proteomics methods (subcellular spatial proteomics) now allow high throughput subcellular protein localization. Yet, the curation, analysis and interpretation of these data remain challenging, particularly in non-model organisms where establishing reliable marker proteins is difficult, and in contexts where experimental replication and subcellular fractionation are constrained. Here, we develop FSPmix, a semi-supervised functional clustering method implemented as an open-source R package, which leverages partial annotations from a subset of marker proteins to predict protein subcellular localization from subcellular spatial proteomics data. This method explicitly assumes that protein signatures vary smoothly across subcellular fractions, enabling more robust inference under low signal-to-noise data regimes. We applied FSPmix to a subcellular proteomics dataset from a marine diatom, allowing us to assign probabilistic localizations to proteins and uncover potentially new protein functions. Altogether, this work lays the foundation for more robust statistical analysis and interpretation of subcellular proteomics datasets, particularly in understudied organisms. 
- oai:arXiv.org:2512.08087v1 - q-bio.QM - q-bio.SC - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Exploratory Mean-Variance with Jumps: An Equilibrium Approach + https://arxiv.org/abs/2512.09224 + arXiv:2512.09224v1 Announce Type: cross +Abstract: Revisiting the continuous-time Mean-Variance (MV) Portfolio Optimization problem, we model the market dynamics with a jump-diffusion process and apply Reinforcement Learning (RL) techniques to facilitate informed exploration within the control space. We recognize the time-inconsistency of the MV problem and adopt the time-inconsistent control (TIC) approach to analytically solve for an exploratory equilibrium investment policy, which is a Gaussian distribution centered on the equilibrium control of the classical MV problem. Our approach accounts for time-inconsistent preferences and actions, and our equilibrium policy is the best option an investor can take at any given time during the investment period. Moreover, we leverage the martingale properties of the equilibrium policy, design a RL model, and propose an Actor-Critic RL algorithm. All of our RL model parameters converge to the corresponding true values in a simulation study. Our numerical study on 24 years of real market data shows that the proposed RL model is profitable in 13 out of 14 tests, demonstrating its practical applicability in real world investment. + oai:arXiv.org:2512.09224v1 + q-fin.PM + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Ziyue Zheng, Loay J. Jabre, Matthew McIlvin, Mak A. Saito, Sangwon Hyun + http://creativecommons.org/licenses/by/4.0/ + Yuling Max Chen, Bin Li, David Saunders - Complexity of One-Dimensional ReLU DNNs - https://arxiv.org/abs/2512.08091 - arXiv:2512.08091v1 Announce Type: cross -Abstract: We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. 
For randomly initialized, fully connected 1D ReLU networks (He scaling with nonzero bias) in the infinite-width limit, we prove that the expected number of linear regions grows as $\sum_{\ell = 1}^L n_\ell + o\left(\sum_{\ell = 1}^L n_\ell\right) + 1$, where $n_\ell$ denotes the number of neurons in the $\ell$-th hidden layer. We also propose a function-adaptive notion of sparsity that compares the expected regions used by the network to the minimal number needed to approximate a target within a fixed tolerance. - oai:arXiv.org:2512.08091v1 - cs.LG + Debiased Bayesian Inference for High-dimensional Regression Models + https://arxiv.org/abs/2512.09257 + arXiv:2512.09257v1 Announce Type: cross +Abstract: There has been significant progress in Bayesian inference based on sparsity-inducing (e.g., spike-and-slab and horseshoe-type) priors for high-dimensional regression models. The resulting posteriors, however, in general do not possess desirable frequentist properties, and the credible sets thus cannot serve as valid confidence sets even asymptotically. We introduce a novel debiasing approach that corrects the bias for the entire Bayesian posterior distribution. We establish a new Bernstein-von Mises theorem that guarantees the frequentist validity of the debiased posterior. We demonstrate the practical performance of our proposal through Monte Carlo simulations and two empirical applications in economics. + oai:arXiv.org:2512.09257v1 + econ.EM + math.ST + stat.CO + stat.ME stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jonathan Kogan, Hayden Jananthan, Jeremy Kepner + Qihui Chen, Zheng Fang, Ruixuan Liu - Branching Fixed Effects: A Proposal for Communicating Uncertainty - https://arxiv.org/abs/2512.08101 - arXiv:2512.08101v1 Announce Type: cross -Abstract: Economists often rely on estimates of linear fixed effects models developed by other teams of researchers.
Assessing the uncertainty in these estimates can be challenging. I propose a form of sample splitting for network data that breaks two-way fixed effects estimates into statistically independent branches, each of which provides an unbiased estimate of the parameters of interest. These branches facilitate uncertainty quantification, moment estimation, and shrinkage. Algorithms are developed for efficiently extracting branches from large datasets. I illustrate these techniques using a benchmark dataset from Veneto, Italy that has been widely used to study firm wage effects. - oai:arXiv.org:2512.08101v1 - econ.EM - stat.AP - stat.CO - Wed, 10 Dec 2025 00:00:00 -0500 + On asymptotic behavior of solutions to random fractional Riesz-Bessel equations with cyclic long memory initial conditions + https://arxiv.org/abs/2512.09308 + arXiv:2512.09308v1 Announce Type: cross +Abstract: This paper investigates fractional Riesz-Bessel equations with random initial conditions. The spectra of these random initial conditions exhibit singularities both at zero frequency and at non-zero frequencies, which correspond to the cases of classical long-range dependence and cyclic long-range dependence, respectively. Using spectral methods and asymptotic theory, it is shown that the rescaled solutions of the equations converge to spatio-temporal Gaussian random fields. The limit fields are stationary in space and non-stationary in time. The covariance and spectral structures of the resulting asymptotic random fields are provided. The paper further establishes multiscaling limit theorems for the case of regularly varying asymptotics. A numerical example illustrating the theoretical results is also presented. + oai:arXiv.org:2512.09308v1 + math.PR + math.ST + stat.TH + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by/4.0/ - Patrick Kline + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Maha Mosaad A. 
Alghamdi, Andriy Olenko - Any Old Tom, Dick or Harry: The Citation Impact of First Name Genderedness - https://arxiv.org/abs/2512.08219 - arXiv:2512.08219v1 Announce Type: cross -Abstract: This paper attempts a first analysis of citation distributions based on the genderedness of authors' first name. Following the extraction of first name and sex data from all human entity triplets contained in Wikidata, a first name genderedness table is first created based on compiled sex frequencies, then merged with bibliometric data from eponymous, US-affiliated authors. Comparisons of various cumulative distributions show that citation concentrations fluctuations are highest at the opposite ends of the genderedness spectrum, as authors with very feminine and masculine first names respectively get a lower and higher share of citations for every article published, irrespective of their contribution role. - oai:arXiv.org:2512.08219v1 - cs.DL - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Self-Supervised Learning with Gaussian Processes + https://arxiv.org/abs/2512.09322 + arXiv:2512.09322v1 Announce Type: cross +Abstract: Self-supervised learning (SSL) is a machine learning paradigm where models learn to understand the underlying structure of data without explicit supervision from labeled samples. The representations acquired from SSL have proven useful for many downstream tasks, including clustering and linear classification. To ensure smoothness of the representation space, most SSL methods rely on the ability to generate pairs of observations that are similar to a given instance. However, generating these pairs may be challenging for many types of data. Moreover, these methods lack consideration of uncertainty quantification and can perform poorly in out-of-sample prediction settings. To address these limitations, we propose Gaussian process self-supervised learning (GPSSL), a novel approach that utilizes Gaussian process (GP) models for representation learning.
GP priors are imposed on the representations, and we obtain a generalized Bayesian posterior minimizing a loss function that encourages informative representations. The covariance function inherent in GPs naturally pulls representations of similar units together, serving as an alternative to using explicitly defined positive samples. We show that GPSSL is closely related to both kernel PCA and VICReg, a popular neural network-based SSL method, but unlike both allows for posterior uncertainties that can be propagated to downstream tasks. Experiments on various datasets, considering classification and regression tasks, demonstrate that GPSSL outperforms traditional methods in terms of accuracy, uncertainty quantification, and error control. + oai:arXiv.org:2512.09322v1 + cs.LG + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by/4.0/ - Maxime Holmberg Sainte-Marie, Vincent Larivi\`ere + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Yunshan Duan, Sinead Williamson - Low Rank Support Quaternion Matrix Machine - https://arxiv.org/abs/2512.08327 - arXiv:2512.08327v1 Announce Type: cross -Abstract: Input features are conventionally represented as vectors, matrices, or third order tensors in the real field, for color image classification. Inspired by the success of quaternion data modeling for color images in image recovery and denoising tasks, we propose a novel classification method for color image classification, named as the Low-rank Support Quaternion Matrix Machine (LSQMM), in which the RGB channels are treated as pure quaternions to effectively preserve the intrinsic coupling relationships among channels via the quaternion algebra. For the purpose of promoting low-rank structures resulting from strongly correlated color channels, a quaternion nuclear norm regularization term, serving as a natural extension of the conventional matrix nuclear norm to the quaternion domain, is added to the hinge loss in our LSQMM model. 
An Alternating Direction Method of Multipliers (ADMM)-based iterative algorithm is designed to effectively resolve the proposed quaternion optimization model. Experimental results on multiple color image classification datasets demonstrate that our proposed classification approach exhibits advantages in classification accuracy, robustness and computational efficiency, compared to several state-of-the-art methods using support vector machines, support matrix machines, and support tensor machines. - oai:arXiv.org:2512.08327v1 - cs.CV + CFLight: Enhancing Safety with Traffic Signal Control through Counterfactual Learning + https://arxiv.org/abs/2512.09368 + arXiv:2512.09368v1 Announce Type: cross +Abstract: Traffic accidents result in millions of injuries and fatalities globally, with a significant number occurring at intersections each year. Traffic Signal Control (TSC) is an effective strategy for enhancing safety at these urban junctures. Despite the growing popularity of Reinforcement Learning (RL) methods in optimizing TSC, these methods often prioritize driving efficiency over safety, thus failing to address the critical balance between these two aspects. Additionally, these methods usually lack interpretability. CounterFactual (CF) learning is a promising approach for various causal analysis fields. In this study, we introduce a novel framework to improve RL for safety aspects in TSC. This framework introduces a novel method based on CF learning to address the question: ``What if, when an unsafe event occurs, we backtrack to perform alternative actions, and will this unsafe event still occur in the subsequent period?'' To answer this question, we propose a new structural causal model to predict the result after executing different actions, and we propose a new CF module that integrates with additional ``X'' modules to promote safe RL practices.
Our new algorithm, CFLight, which is derived from this framework, effectively tackles challenging safety events and significantly improves safety at intersections through a near-zero collision control strategy. Through extensive numerical experiments on both real-world and synthetic datasets, we demonstrate that CFLight reduces collisions and improves overall traffic performance compared to conventional RL methods and the recent safe RL model. Moreover, our method represents a generalized and safe framework for RL methods, opening possibilities for applications in other domains. The data and code are available on GitHub at https://github.com/MJLee00/CFLight-Enhancing-Safety-with-Traffic-Signal-Control-through-Counterfactual-Learning. + oai:arXiv.org:2512.09368v1 cs.LG - math.OC - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wang Chen, Ziyan Luo, Shuangyue Wang + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Mingyuan Li, Chunyu Liu, Zhuojun Li, Xiao Liu, Guangsheng Yu, Bo Du, Jun Shen, Qiang Wu - A Multivariate Bernoulli-Based Sampling Method for Multi-Label Data with Application to Meta-Research - https://arxiv.org/abs/2512.08371 - arXiv:2512.08371v1 Announce Type: cross -Abstract: Datasets may contain observations with multiple labels. If the labels are not mutually exclusive, and if the labels vary greatly in frequency, obtaining a sample that includes sufficient observations with scarcer labels to make inferences about those labels, and which deviates from the population frequencies in a known manner, creates challenges. In this paper, we consider a multivariate Bernoulli distribution as our underlying distribution of a multi-label problem. We present a novel sampling algorithm that takes label dependencies into account.
It uses observed label frequencies to estimate multivariate Bernoulli distribution parameters and calculate weights for each label combination. This approach ensures the weighted sampling acquires target distribution characteristics while accounting for label dependencies. We applied this approach to a sample of research articles from Web of Science labeled with 64 biomedical topic categories. We aimed to preserve category frequency order, reduce frequency differences between most and least common categories, and account for category dependencies. This approach produced a more balanced sub-sample, enhancing the representation of minority categories. - oai:arXiv.org:2512.08371v1 + Drawback of Enforcing Equivariance and its Compensation via the Lens of Expressive Power + https://arxiv.org/abs/2512.09673 + arXiv:2512.09673v1 Announce Type: cross +Abstract: Equivariant neural networks encode symmetry as an inductive bias and have achieved strong empirical performance in wide domains. However, their expressive power remains not well understood. Focusing on 2-layer ReLU networks, this paper investigates the impact of equivariance constraints on the expressivity of equivariant and layer-wise equivariant networks. By examining the boundary hyperplanes and the channel vectors of ReLU networks, we construct an example showing that equivariance constraints could strictly limit expressive power. However, we demonstrate that this drawback can be compensated via enlarging the model size. Furthermore, we show that despite a larger model size, the resulting architecture could still correspond to a hypothesis space with lower complexity, implying superior generalizability for equivariant networks. + oai:arXiv.org:2512.09673v1 cs.LG + cs.AI + cs.NE stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Simon Chung, Colby J. Vorland, Donna L. Maney, Andrew W. 
Brown + http://creativecommons.org/licenses/by/4.0/ + Yuzhu Chen, Tian Qin, Xinmei Tian, Fengxiang He, Dacheng Tao - A Distribution Testing Approach to Clustering Distributions - https://arxiv.org/abs/2512.08376 - arXiv:2512.08376v1 Announce Type: cross -Abstract: We study the following distribution clustering problem: Given a hidden partition of $k$ distributions into two groups, such that the distributions within each group are the same, and the two distributions associated with the two clusters are $\varepsilon$-far in total variation, the goal is to recover the partition. We establish upper and lower bounds on the sample complexity for two fundamental cases: (1) when one of the cluster's distributions is known, and (2) when both are unknown. Our upper and lower bounds characterize the sample complexity's dependence on the domain size $n$, number of distributions $k$, size $r$ of one of the clusters, and distance $\varepsilon$. In particular, we achieve tightness with respect to $(n,k,r,\varepsilon)$ (up to an $O(\log k)$ factor) for all regimes. - oai:arXiv.org:2512.08376v1 - cs.DS - cs.IT - math.IT - math.ST - stat.ML - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Innovation ARIMA models application to predict pressure variations in water supply networks with open-loop control. Case study in Noja (Cantabria, Spain) + https://arxiv.org/abs/2512.09717 + arXiv:2512.09717v1 Announce Type: cross +Abstract: Water utilities are increasingly concerned about losses, leaks, and illegal connections in their distribution networks. Pressure control is typically managed through pressure reducing valves with electrically controlled actuators based on predefined tables according to the pressure at the critical point control. This open-loop control method lacks direct feedback between the PRV and CPC, making it challenging to distinguish whether pressure variations originate from normal head losses or abnormal network conditions.
Unlike traditional applications of ARIMA focused on water demand forecasting, this study explores its novel use in pressure management within distribution networks, aiming to predict P3 pressure based on head losses across a defined hydraulic sector. To achieve this objective, a predictive model based on the Box-Jenkins methodology and its variations is implemented to analyse time series data. An action path is established to determine the optimal model (ARIMA, ARMA, ARMAX, etc.), which is subsequently validated using real operational data from Noja, a coastal town in northern Spain characterized by significant seasonal population fluctuations. By accurately forecasting CPC pressure, this system enhances the detection of anomalous patterns, enabling more efficient network pressure management. The study demonstrates the potential of advanced modelling techniques in optimizing water distribution networks, providing valuable insights to improve system efficiency, reliability, and sustainability in urban environments. + oai:arXiv.org:2512.09717v1 + physics.app-ph + stat.AP + Thu, 11 Dec 2025 00:00:00 -0500 cross http://creativecommons.org/licenses/by/4.0/ + 10.1016/j.nexus.2025.100423 + Energy Nexus 18 (2025) 100423 + David Munoz-Rodriguez, Manuel J. Gonzalez-Ortega, Maria-Jesus Aguilera-Urena, Andres Ortega-Ballesteros, Alberto-Jesus Perea-Moreno - Minimax and Bayes Optimal Adaptive Experimental Design for Treatment Choice - https://arxiv.org/abs/2512.08513 - arXiv:2512.08513v1 Announce Type: cross -Abstract: We consider an adaptive experiment for treatment choice and design a minimax and Bayes optimal adaptive experiment with respect to regret. Given binary treatments, the experimenter's goal is to choose the treatment with the highest expected outcome through an adaptive experiment, in order to maximize welfare.
We consider adaptive experiments that consist of two phases, the treatment allocation phase and the treatment choice phase. The experiment starts with the treatment allocation phase, where the experimenter allocates treatments to experimental subjects to gather observations. During this phase, the experimenter can adaptively update the allocation probabilities using the observations obtained in the experiment. After the allocation phase, the experimenter proceeds to the treatment choice phase, where one of the treatments is selected as the best. For this adaptive experimental procedure, we propose an adaptive experiment that splits the treatment allocation phase into two stages, where we first estimate the standard deviations and then allocate each treatment proportionally to its standard deviation. We show that this experiment, often referred to as Neyman allocation, is minimax and Bayes optimal in the sense that its regret upper bounds exactly match the lower bounds that we derive. To show this optimality, we derive minimax and Bayes lower bounds for the regret using change-of-measure arguments. Then, we evaluate the corresponding upper bounds using the central limit theorem and large deviation bounds. - oai:arXiv.org:2512.08513v1 + New Approximation Results and Optimal Estimation for Fully Connected Deep Neural Networks + https://arxiv.org/abs/2512.09853 + arXiv:2512.09853v1 Announce Type: cross +Abstract: \citet{farrell2021deep} establish non-asymptotic high-probability bounds for general deep feedforward neural network (with rectified linear unit activation function) estimators, with \citet[Theorem 1]{farrell2021deep} achieving a suboptimal convergence rate for fully connected feedforward networks. The authors suggest that improved approximation of fully connected networks could yield sharper versions of \citet[Theorem 1]{farrell2021deep} without altering the theoretical framework. 
By deriving approximation bounds specifically for a narrower fully connected deep neural network, this note demonstrates that \citet[Theorem 1]{farrell2021deep} can be improved to achieve an optimal rate (up to a logarithmic factor). Furthermore, this note briefly shows that deep neural network estimators can mitigate the curse of dimensionality for functions with compositional structure and functions defined on manifolds. + oai:arXiv.org:2512.09853v1 econ.EM - cs.LG - math.ST - stat.ME stat.ML - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Masahiro Kato + http://creativecommons.org/licenses/by/4.0/ + Zhaoji Tang - DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning - https://arxiv.org/abs/2512.08671 - arXiv:2512.08671v1 Announce Type: cross -Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor. 
- oai:arXiv.org:2512.08671v1 + HPM-KD: Hierarchical Progressive Multi-Teacher Framework for Knowledge Distillation and Efficient Model Compression + https://arxiv.org/abs/2512.09886 + arXiv:2512.09886v1 Announce Type: cross +Abstract: Knowledge Distillation (KD) has emerged as a promising technique for model compression but faces critical limitations: (1) sensitivity to hyperparameters requiring extensive manual tuning, (2) capacity gap when distilling from very large teachers to small students, (3) suboptimal coordination in multi-teacher scenarios, and (4) inefficient use of computational resources. We present \textbf{HPM-KD}, a framework that integrates six synergistic components: (i) Adaptive Configuration Manager via meta-learning that eliminates manual hyperparameter tuning, (ii) Progressive Distillation Chain with automatically determined intermediate models, (iii) Attention-Weighted Multi-Teacher Ensemble that learns dynamic per-sample weights, (iv) Meta-Learned Temperature Scheduler that adapts temperature throughout training, (v) Parallel Processing Pipeline with intelligent load balancing, and (vi) Shared Optimization Memory for cross-experiment reuse. Experiments on CIFAR-10, CIFAR-100, and tabular datasets demonstrate that HPM-KD: achieves 10x-15x compression while maintaining 85% accuracy retention, eliminates the need for manual tuning, and reduces training time by 30-40% via parallelization. Ablation studies confirm independent contribution of each component (0.10-0.98 pp). HPM-KD is available as part of the open-source DeepBridge library. 
+ oai:arXiv.org:2512.09886v1 cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + stat.AP + Thu, 11 Dec 2025 00:00:00 -0500 cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Huzaifa Arif + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Gustavo Coelho Haase, Paulo Henrique Dourado da Silva - Unsupervised Learning of Density Estimates with Topological Optimization - https://arxiv.org/abs/2512.08895 - arXiv:2512.08895v1 Announce Type: cross -Abstract: Kernel density estimation is a key component of a wide variety of algorithms in machine learning, Bayesian inference, stochastic dynamics and signal processing. However, the unsupervised density estimation technique requires tuning a crucial hyperparameter: the kernel bandwidth. The choice of bandwidth is critical as it controls the bias-variance trade-off by over- or under-smoothing the topological features. Topological data analysis provides methods to mathematically quantify topological characteristics, such as connected components, loops, voids et cetera, even in high dimensions where visualization of density estimates is impossible. In this paper, we propose an unsupervised learning approach using a topology-based loss function for the automated and unsupervised selection of the optimal bandwidth and benchmark it against classical techniques -- demonstrating its potential across different dimensions. - oai:arXiv.org:2512.08895v1 + Provably Learning from Modern Language Models via Low Logit Rank + https://arxiv.org/abs/2512.09892 + arXiv:2512.09892v1 Announce Type: cross +Abstract: While modern language models and their inner workings are incredibly complex, recent work (Golowich, Liu & Shetty; 2025) has proposed a simple and potentially tractable abstraction for them through the observation that empirically, these language models all seem to have approximately low logit rank. 
Roughly, this means that a matrix formed by the model's log probabilities of various tokens conditioned on certain sequences of tokens is well approximated by a low rank matrix. + In this paper, our focus is on understanding how this structure can be exploited algorithmically for obtaining provable learning guarantees. Since low logit rank models can encode hard-to-learn distributions such as noisy parities, we study a query learning model with logit queries that reflects the access model for common APIs. Our main result is an efficient algorithm for learning any approximately low logit rank model from queries. We emphasize that our structural assumption closely reflects the behavior that is empirically observed in modern language models. Thus, our result gives what we believe is the first end-to-end learning guarantee for a generative model that plausibly captures modern language models. + oai:arXiv.org:2512.09892v1 cs.LG + cs.AI + cs.DS stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Suina Tanweer, Firas A. Khasawneh + Noah Golowich, Allen Liu, Abhishek Shetty + + + Analytic queueing model for ambulance services + https://arxiv.org/abs/1602.06579 + arXiv:1602.06579v2 Announce Type: replace +Abstract: We present predictive tools to calculate the number of ambulances needed according to demand of entrance calls and time of service. Our analysis discriminates between emergency and non-urgent calls. First, we consider the nonstationary regime where we apply previous results of first-passage time of one dimensional random walks. Then, we reconsider the stationary regime with a detailed discussion of the conditional probabilities and we discuss the key performance indicators. + oai:arXiv.org:1602.06579v2 + stat.AP + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-sa/4.0/ + Pedro A. 
Pury + Detecting and Localizing Anomalous Cliques in Inhomogeneous Networks using Egonets + https://arxiv.org/abs/1807.08925 + arXiv:1807.08925v3 Announce Type: replace +Abstract: Cliques, or fully connected subgraphs, are among the most important and well-studied graph motifs in network science. We consider the problem of finding a statistically anomalous clique hidden in a large network. There are two parts to this problem: (1) detection, i.e., determining whether an anomalous clique is present, and (2) localization, i.e., determining which vertices of the network constitute the detected clique. While this problem has been extensively studied under the homogeneous Erdos-Renyi model, little progress has been made beyond this simple setting, and no existing method can perform detection and localization in inhomogeneous networks within finite time. To address this gap, we first show that in homogeneous networks, the anomalousness of a clique depends solely on its size. This property does not carry over to inhomogeneous networks, where the identity of the vertices forming the clique plays a critical role, and a smaller clique can be more anomalous than a larger one. Building on this insight, we propose a unified method for clique detection and localization based on a class of subgraphs called egonets. The proposed method generalizes to a wide variety of inhomogeneous network models and is naturally amenable to parallel computing. We establish the theoretical properties of the proposed method and demonstrate its empirical performance through simulation studies and application to two real-world networks.
+ oai:arXiv.org:1807.08925v3 + stat.ME + stat.ML + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Subhankar Bhadra, Srijan Sengupta - Limit results for distributed estimation of invariant subspaces in multiple networks inference and PCA - https://arxiv.org/abs/2206.04306 - arXiv:2206.04306v5 Announce Type: replace -Abstract: Several statistical problems, such as multiple heterogeneous graph analysis, distributed PCA, integrative data analysis, and simultaneous dimension reduction of images, can involve a collection of $m$ matrices whose leading subspaces $U^{(i)}$ consist of a shared subspace $U_c$ and individual subspaces $U_s^{(i)}$. We consider a distributed estimation procedure that first obtains $\hat U^{(i)}$ as the leading singular vectors for each observed noisy matrix, then computes the leading left singular vectors of the concatenated matrix $[\hat U^{(1)}|\hat U^{(2)}|\dots|\hat U^{(m)}]$ as $\hat U_c$, and finally computes the leading singular vectors of the projection of each $\hat U^{(i)}$ onto the orthogonal complement of $\hat U_c$ as $\hat U_s^{(i)}$. In this paper, we provide a framework for deriving limit results for such distributed estimation procedures, including expansions of estimation errors in both common and individual subspaces and their asymptotically normal approximations. We apply this framework specifically to (1) parameter estimation for multiple heterogeneous random graphs with shared subspaces, and (2) distributed PCA for independent sub-Gaussian random vectors with spiked covariance structures. Leveraging these results, we also consider a two-sample test for the null hypothesis that a pair of random graphs have the same edge probabilities, and present a test statistic whose limiting distribution converges to a central (resp., non-central) $\chi^2$ distribution under the null (resp., local alternative) hypothesis. 
- oai:arXiv.org:2206.04306v5 + Quasi Model-Assisted Estimators under Nonresponse in Sample Surveys + https://arxiv.org/abs/2208.04621 + arXiv:2208.04621v2 Announce Type: replace +Abstract: In the presence of auxiliary information, model-assisted estimators rely on a working model linking the variable of interest to the auxiliary variables in order to improve the efficiency of the Horvitz-Thompson estimator. Model-assisted estimators cannot be directly computed with nonresponse since the values of the variable of interest are missing for part of the sample units. In this article, we present and study a class of quasi-model-assisted estimators that extend model-assisted estimators to settings with non-ignorable nonresponse. These estimators combine a working model and a response model. The former is used to improve the efficiency, the latter to reweight the nonrespondents. A wide range of statistical learning methods can be used to estimate either of these models. We show that several well-known existing estimators are particular cases of quasi-model-assisted estimators. We examine the behavior of these estimators through a simulation study. The results illustrate how these estimators remain competitive in terms of bias and variance when one of the two models is poorly specified. + oai:arXiv.org:2208.04621v2 + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by-nc-nd/4.0/ + Caren Hasler, Esther Eustache + Nonparametric estimation of the job-size distribution for an M/G/1 queue with Poisson sampling + https://arxiv.org/abs/2307.10116 + arXiv:2307.10116v4 Announce Type: replace +Abstract: This work presents a non-parametric estimator for the cumulative distribution function (CDF) of the job-size distribution for a queue with compound Poisson input. The workload process is observed according to an independent Poisson sampling process.
The nonparametric estimator is constructed by first estimating the characteristic function (CF) and then applying an inversion formula. The convergence rate of the CF estimator at $s$ is shown to be of the order of $s^2/n$, where $n$ is the sample size. This convergence rate is leveraged to explore the bias-variance tradeoff of the inversion estimator. It is demonstrated that within a certain class of continuous distributions, the risk, in terms of MSE, is uniformly bounded by $C n^{-\frac{\eta}{1+\eta}}$, where $C$ is a positive constant and the parameter $\eta>0$ depends on the smoothness of the underlying class of distributions. A heuristic method is further developed to address the case of an unknown rate of the compound Poisson input process. + oai:arXiv.org:2307.10116v4 math.ST + math.PR stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Liron Ravner + + + High-dimensional Newey-Powell Test Via Approximate Message Passing + https://arxiv.org/abs/2311.05056 + arXiv:2311.05056v2 Announce Type: replace +Abstract: We propose a high-dimensional extension of the heteroscedasticity test proposed in Newey and Powell (1987). Our test is based on expectile regression in the proportional asymptotic regime where $n/p \to \delta \in (0,1]$. The asymptotic analysis of the test statistic uses the Approximate Message Passing (AMP) algorithm, from which we obtain the limiting distribution of the test and establish its asymptotic power. The numerical performance of the test is validated through an extensive simulation study. As real-data applications, we present the analysis based on ``international economic growth'' data (Belloni et al., 2011), which is found to be homoscedastic, and ``supermarket'' data (Lan et al., 2016), which is found to be heteroscedastic. 
+ oai:arXiv.org:2311.05056v2 + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Runbing Zheng, Minh Tang + Jing Zhou, Hui Zou - Solving the Poisson equation using coupled Markov chains - https://arxiv.org/abs/2206.05691 - arXiv:2206.05691v5 Announce Type: replace -Abstract: This article shows how coupled Markov chains that meet exactly after a random number of iterations can be used to generate unbiased estimators of the solutions of the Poisson equation. Through this connection, we re-derive known unbiased estimators of expectations with respect to the stationary distribution of a Markov chain and provide conditions for the finiteness of their moments. We further construct unbiased estimators of the asymptotic variance of Markov chain ergodic averages, and provide conditions for the finiteness of the estimators' moments of any order. If their second moment is finite, the average of independent copies of such estimators converges to the asymptotic variance at the Monte Carlo rate, comparing favorably to known rates for batch means and spectral variance estimators. The results are illustrated with numerical experiments. - oai:arXiv.org:2206.05691v5 + LASPATED: A Library for the Analysis of Spatio-Temporal Discrete Data (User Manual) + https://arxiv.org/abs/2407.13889 + arXiv:2407.13889v3 Announce Type: replace +Abstract: This is the User Manual of the LASPATED library. This library is available on GitHub (at https://github.com/vguigues/LASPATED) and provides a set of tools to analyze spatio-temporal data. A video tutorial for this library is available on YouTube. It consists of a Python package for time and space discretizations and two packages (one in Matlab and one in C++) implementing the calibration of the probabilistic models for stochastic spatio-temporal data proposed in the companion paper arXiv:2203.16371v2. 
+ oai:arXiv.org:2407.13889v3 stat.CO - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by/4.0/ - Randal Douc, Pierre E. Jacob, Anthony Lee, Dootika Vats + http://creativecommons.org/licenses/by-sa/4.0/ + Vincent Guigues, Anton J. Kleywegt, Giovanni Amorim, Andre Krauss, Victor Hugo Nascimento - Parsimonious Generative Machine Learning for Non-Gaussian Tail Modeling - https://arxiv.org/abs/2402.14368 - arXiv:2402.14368v3 Announce Type: replace -Abstract: The presence of non-Gaussian tails is a prevalent characteristic in many financial modeling scenarios, necessitating the use of complex non-Gaussian distributions such as the generalized beta of the second kind (GB2) and the skewed generalized $t$ (SGT). The approach we propose for modeling heavy-tailed data differs significantly from traditional methods. We utilize generative machine learning, which offers an entirely different paradigm for modeling distributions. A parsimonious nonlinear transformation is applied to a simple base random variable such as Gaussian. The parameters can be estimated effectively, and the theoretical heavy-tail properties are derived. Robust performance is observed with this approach when compared to traditional distributions. More importantly, this method is broadly useful for machine learning due to its mathematical elegance and numerical convenience. - oai:arXiv.org:2402.14368v3 + Bayesian Statistical Modeling in Action for Estimation and Forecasting in Low- and Middle-income Countries: The Case of the Family Planning Estimation Tool + https://arxiv.org/abs/2501.00007 + arXiv:2501.00007v2 Announce Type: replace +Abstract: The Family Planning Estimation Tool (FPET) is used in low- and middle-income countries to produce estimates and short-term forecasts of family planning indicators, such as modern contraceptive use and unmet need for contraceptives. 
Estimates are obtained via a Bayesian statistical model that is fitted to country-specific data from surveys and service statistics data. The model has evolved over the last decade based on user inputs. + In this paper we summarize the main features of the statistical model used in FPET and introduce recent updates related to capturing contraceptive transitions, fitting to survey data that may be error prone, and the use of service statistics data. We assess model performance through a validation exercise and find that FPET is reasonably well calibrated. + We use our experience with FPET to briefly discuss lessons learned and open challenges related to the broader field of statistical modeling for monitoring of demographic and global health indicators. + oai:arXiv.org:2501.00007v2 stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Xing Yan, Yue Zhao, Qi Wu, Wenxuan Ma + http://creativecommons.org/licenses/by-sa/4.0/ + Leontine Alkema, Herbert Susmann, Evan Ray, Shauna Mooney, Niamh Cahill, Kristin Bietsch, A. A. Jayachandran, Rogers Kagimu, Priya Emmart, Zenon Mujani, Khan Muhammad, Brighton Muzavazi, Rebecca Rosenberg, John Stover, Emily Sonneveldt - Minimax optimal seriation in polynomial time - https://arxiv.org/abs/2405.08747 - arXiv:2405.08747v3 Announce Type: replace -Abstract: We consider the seriation problem, whose goal is to recover a hidden ordering from a noisy observation of a permuted Robinson matrix. We establish sharp minimax rates under average-Lipschitz conditions that strictly extend the bi-Lipschitz framework of [Giraud et al., 2023]. We further design a polynomial-time algorithm that attains these optimal rates, thereby resolving two open questions raised in [Giraud et al., 2023]. Finally, our analysis extends to a broader class of matrices beyond those generated by exact permutations. 
- oai:arXiv.org:2405.08747v3 - math.ST - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Sampling from density power divergence-based generalized posterior distribution via stochastic optimization + https://arxiv.org/abs/2501.07790 + arXiv:2501.07790v2 Announce Type: replace +Abstract: Robust Bayesian inference using density power divergence (DPD) has emerged as a promising approach for handling outliers in statistical estimation. Although the DPD-based posterior offers theoretical guarantees of robustness, its practical implementation faces significant computational challenges, particularly for general parametric models with intractable integral terms. These challenges are specifically pronounced in high-dimensional settings, where traditional numerical integration methods are inadequate and computationally expensive. Herein, we propose a novel approximate sampling methodology that addresses these limitations by integrating the loss-likelihood bootstrap with a stochastic gradient descent algorithm specifically designed for DPD-based estimation. Our approach enables efficient and scalable sampling from DPD-based posteriors for a broad class of parametric models, including those with intractable integrals. We further extend it to accommodate generalized linear models. Through comprehensive simulation studies, we demonstrate that our method efficiently samples from DPD-based posteriors, offering superior computational scalability compared to conventional methods, specifically in high-dimensional settings. The results also highlight its ability to handle complex parametric models with intractable integral terms. 
+ oai:arXiv.org:2501.07790v2 + stat.ME + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yann Issartel, Christophe Giraud, Nicolas Verzelen + Naruki Sonobe, Tomotaka Momozaki, Tomoyuki Nakagawa - Survey of Data-driven Newsvendor: Unified Analysis and Spectrum of Achievable Regrets - https://arxiv.org/abs/2409.03505 - arXiv:2409.03505v4 Announce Type: replace -Abstract: In the Newsvendor problem, the goal is to guess the number that will be drawn from some distribution, with asymmetric consequences for guessing too high vs. too low. In the data-driven version, the distribution is unknown, and one must work with samples from the distribution. Data-driven Newsvendor has been studied under many variants: additive vs. multiplicative regret, high probability vs. expectation bounds, and different distribution classes. This paper studies all combinations of these variants, filling in many gaps in the literature and simplifying many proofs. In particular, we provide a unified analysis based on the notion of clustered distributions, which in conjunction with our new lower bounds, shows that the entire spectrum of regrets between $1/\sqrt{n}$ and $1/n$ can be possible. Simulations on commonly-used distributions demonstrate that our notion is the "correct" predictor of empirical regret across varying data sizes. - oai:arXiv.org:2409.03505v4 + Dynamic Pricing in the Linear Valuation Model using Shape Constraints + https://arxiv.org/abs/2502.05776 + arXiv:2502.05776v4 Announce Type: replace +Abstract: We propose a shape-constrained approach to dynamic pricing for censored data in the linear valuation model, eliminating the need for tuning parameters commonly required by existing methods. 
Previous works have addressed the challenge of unknown market noise distribution $F_0$ using strategies ranging from kernel methods to reinforcement learning algorithms, such as bandit techniques and upper confidence bounds (UCB), under the assumption that $F_0$ satisfies Lipschitz (or stronger) conditions. In contrast, our method relies on isotonic regression under the weaker assumption that $F_0$ is $\alpha$-H\"older continuous for some $\alpha \in (0,1]$, for which we derive a regret upper bound. Simulations and experiments with real-world data obtained by Welltower Inc (a major healthcare Real Estate Investment Trust) consistently demonstrate that our method attains lower empirical regret in comparison to several existing methods in the literature while offering the advantage of being tuning-parameter free. + oai:arXiv.org:2502.05776v4 stat.ML cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Zhuoxin Chen, Will Ma + Daniele Bracale, Moulinath Banerjee, Yuekai Sun, Kevin Stoll, Salam Turki - Efficient Analysis of Latent Spaces in Heterogeneous Networks - https://arxiv.org/abs/2412.02151 - arXiv:2412.02151v4 Announce Type: replace -Abstract: This work proposes a unified framework for efficient estimation under latent space modeling of heterogeneous networks. We consider a class of latent space models that decompose latent vectors into shared and network-specific components across networks. We develop a novel procedure that first identifies the shared latent vectors and further refines estimates through efficient score equations to achieve statistical efficiency. Oracle error rates for estimating the shared and heterogeneous latent vectors are established simultaneously. The analysis framework offers remarkable flexibility, accommodating various types of edge weights under general distributions. 
- oai:arXiv.org:2412.02151v4 + Estimation of Treatment Effects based on Kernel Matching + https://arxiv.org/abs/2502.10958 + arXiv:2502.10958v2 Announce Type: replace +Abstract: Kernel matching is a widely used technique for estimating treatment effects, particularly valuable in observational studies where randomized controlled trials are not feasible. While kernel-matching approaches have demonstrated practical advantages in exploiting similarities between treated and control units, their theoretical properties have remained only partially explored. In this paper, we make a key contribution by establishing the asymptotic normality and consistency of kernel-matching estimators for both the average treatment effect (ATE) and the average treatment effect on the treated (ATT) through influence function techniques, thereby providing a rigorous theoretical foundation for their use in causal inference. Furthermore, we derive the asymptotic distributions of the ATE and ATT estimators when the propensity score is estimated rather than known, extending the theoretical guarantees to the practically relevant cases. Through extensive Monte Carlo simulations, the estimators exhibit consistently improved performance over standard treatment-effect estimators. We further illustrate the method by analyzing the National Supported Work Demonstration job-training data with the kernel-matching estimator. + oai:arXiv.org:2502.10958v2 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Chong Ding, Zheng Li, Hon Keung Tony Ng, Wei Gao + + + Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning + https://arxiv.org/abs/2502.16816 + arXiv:2502.16816v4 Announce Type: replace +Abstract: We present the first finite-sample analysis of policy evaluation in robust average-reward Markov Decision Processes (MDPs). 
Prior work in this setting has established only asymptotic convergence guarantees, leaving open the question of sample complexity. In this work, we address this gap by showing that the robust Bellman operator is a contraction under a carefully constructed semi-norm, and developing a stochastic approximation framework with controlled bias. Our approach builds upon Multi-Level Monte Carlo (MLMC) techniques to estimate the robust Bellman operator efficiently. To overcome the infinite expected sample complexity inherent in standard MLMC, we introduce a truncation mechanism based on a geometric distribution, ensuring a finite expected sample complexity while maintaining a small bias that decays exponentially with the truncation level. Our method achieves the order-optimal sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ for robust policy evaluation and robust average reward estimation, marking a significant advancement in robust reinforcement learning theory. + oai:arXiv.org:2502.16816v4 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xinwei Ma, Jingshen Wang, Waverly Wei + Yang Xu, Washim Uddin Mondal, Vaneet Aggarwal - Simple proof of robustness for Bayesian heavy-tailed linear regression models - https://arxiv.org/abs/2501.06349 - arXiv:2501.06349v3 Announce Type: replace -Abstract: In the Bayesian literature, a line of research called resolution of conflict is about the characterization of robustness against outliers of statistical models. The robustness characterization of a model is achieved by establishing the limiting behaviour of the posterior distribution under an asymptotic framework in which the outliers move away from the bulk of the data. The proofs of the robustness characterization results, especially the recent ones for regression models, are technical and not intuitive, limiting the accessibility and preventing the development of theory in that line of research. 
In this paper, we highlight that the proof complexity is due to the generality of the assumptions on the prior distribution. To address the issue of accessibility, we present a significantly simpler proof for a linear regression model with a specific class of prior distributions, among which we find typically used prior distributions. The class of prior distributions is such that each regression coefficient has a sub-exponential distribution, which allows to exploit a tail bound, contrarily to previous approaches. The proof is intuitive and uses classical results of probability theory. The generality of the assumption on the error distribution is also appealing; essentially, it can be any distribution with regularly varying or log-regularly varying tails. So far, there does not exist a result in such generality for models with regularly varying distributions. We also investigate the necessity of the assumptions. To promote the development of theory in resolution of conflict, we highlight how the key steps of the proof can be adapted for other models and present an application of the proof technique in the context of generalized linear models. - oai:arXiv.org:2501.06349v3 + Revenue Maximization Under Sequential Price Competition Via The Estimation Of s-Concave Demand Functions + https://arxiv.org/abs/2503.16737 + arXiv:2503.16737v5 Announce Type: replace +Abstract: We consider price competition among multiple sellers over a selling horizon of $T$ periods. In each period, sellers simultaneously offer their prices (which are made public) and subsequently observe their respective demand (not made public). The demand function of each seller depends on all sellers' prices through a private, unknown, and nonlinear relationship. 
We propose a dynamic pricing policy that uses semi-parametric least-squares estimation and show that when the sellers employ our policy, their prices converge at a rate of $O(T^{-1/7})$ to the Nash equilibrium prices that sellers would reach if they were fully informed. Each seller incurs a regret of $O(T^{5/7})$ relative to a dynamic benchmark policy. A theoretical contribution of our work is proving the existence of equilibrium under shape-constrained demand functions via the concept of $s$-concavity and establishing regret bounds of our proposed policy. Technically, we also establish new concentration results for the least squares estimator under shape constraints. Our findings offer significant insights into dynamic competition-aware pricing and contribute to the broader study of non-parametric learning in strategic decision-making. + oai:arXiv.org:2503.16737v5 + stat.ML + cs.LG + math.PR math.ST stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Philippe Gagnon + Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun - Covariate-Adjusted Response-Adaptive Design with Delayed Outcomes - https://arxiv.org/abs/2502.01062 - arXiv:2502.01062v3 Announce Type: replace -Abstract: Covariate-adjusted response-adaptive (CARA) designs have gained widespread adoption for their clear benefits in enhancing experimental efficiency and participant welfare. These designs dynamically adjust treatment allocations during interim analyses based on participant responses and covariates collected during the experiment. However, delayed responses can significantly compromise the effectiveness of CARA designs, as they hinder timely adjustments to treatment assignments when certain participant outcomes are not immediately observed. 
In this paper, we propose a fully forward-looking CARA design that dynamically updates treatment assignments throughout the experiment as response delay mechanisms are progressively estimated. Our design strategy is informed by novel semiparametric efficiency calculations that explicitly account for outcome delays in a multi-stage setting. Through both theoretical investigations and simulation studies, we demonstrate that our proposed design offers a robust solution for handling delayed outcomes in CARA designs, yielding significant improvements in both statistical power and participant welfare. - oai:arXiv.org:2502.01062v3 - stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems + https://arxiv.org/abs/2503.18309 + arXiv:2503.18309v4 Announce Type: replace +Abstract: Gaussian process state-space models (GPSSMs) offer a principled framework for learning and inference in nonlinear dynamical systems with uncertainty quantification. However, existing GPSSMs are limited by the use of multiple independent stationary Gaussian processes (GPs), leading to prohibitive computational and parametric complexity in high-dimensional settings and restricted modeling capacity for non-stationary dynamics. To address these challenges, we propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive non-stationary implicit process prior that can capture complex transition dynamics while significantly reducing model complexity. For the inference of the implicit process, we develop a variational inference algorithm that jointly approximates the posterior over the underlying GP and the neural network parameters defining the normalizing flows. 
To avoid explicit variational parameterization of the latent states, we further incorporate the ensemble Kalman filter (EnKF) into the variational framework, enabling accurate and efficient state estimation. Extensive empirical evaluations on synthetic and real-world datasets demonstrate the superior performance of our ETGPSSM in system dynamics learning, high-dimensional state estimation, and time-series forecasting, outperforming existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy. + oai:arXiv.org:2503.18309v4 + stat.ML + cs.LG + eess.SP + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xinwei Ma, Jingshen Wang, Waverly Wei + Zhidi Lin, Ying Li, Feng Yin, Juan Maro\~nas, Alexandre H. Thi\'ery + + + Exact identifiability analysis for a class of partially observed near-linear stochastic differential equation models + https://arxiv.org/abs/2503.19241 + arXiv:2503.19241v3 Announce Type: replace +Abstract: Stochasticity plays a key role in many biological systems, necessitating the calibration of stochastic mathematical models to interpret associated data. For model parameters to be estimated reliably, it is typically the case that they must be structurally identifiable. Yet, while theory underlying structural identifiability analysis for deterministic differential equation models is highly developed, there are currently no tools for the general assessment of stochastic models. In this work, we present a differential algebra-based framework for the structural identifiability analysis of linear and a class of near-linear partially observed stochastic differential equation (SDE) models. Our framework is based on a deterministic recurrence relation that describes the dynamics of the statistical moments of the system of SDEs. 
From this relation, we iteratively form a series of necessarily satisfied equations involving only the observed moments, from which we are able to establish structurally identifiable parameter combinations. We demonstrate our framework for a suite of linear (two- and $n$-dimensional) and non-linear (two-dimensional) models. Most importantly, we define the notion of structural identifiability for SDE models and establish the effect of the initial condition on identifiability. We conclude with a discussion on the applicability and limitations of our approach, and potential future research directions in this understudied area. + oai:arXiv.org:2503.19241v3 + stat.ME + q-bio.QM + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Alexander P Browning, Michael J Chappell, Hamid Rahkooy, Torkel E Loman, Ruth E Baker - Multivariable Behavioral Change Modeling of Epidemics in the Presence of Undetected Infections - https://arxiv.org/abs/2503.00982 - arXiv:2503.00982v3 Announce Type: replace -Abstract: Epidemic models are invaluable tools to understand and implement strategies to control the spread of infectious diseases, as well as to inform public health policies and resource allocation. However, current modeling approaches have limitations that reduce their practical utility, such as the exclusion of human behavioral change in response to the epidemic or ignoring the presence of undetected infectious individuals in the population. These limitations became particularly evident during the COVID-19 pandemic, underscoring the need for more accurate and informative models. To address these challenges, we develop a novel Bayesian epidemic modeling framework to better capture the complexities of disease spread by incorporating behavioral responses and undetected infections. 
In particular, our framework makes three contributions: 1) leveraging additional data on hospitalizations and deaths in modeling the disease dynamics, 2) accounting for data uncertainty arising from the large presence of asymptomatic and undetected infections, and 3) allowing the population behavioral change to be dynamically influenced by multiple data sources (cases and deaths). We thoroughly investigate the properties of the proposed model via simulation, and illustrate its utility on COVID-19 data from Montreal and Miami. - oai:arXiv.org:2503.00982v3 + A Restricted Latent Class Hidden Markov Model for Polytomous Responses, Polytomous Attributes, and Covariates: Identifiability and Application + https://arxiv.org/abs/2503.20940 + arXiv:2503.20940v4 Announce Type: replace +Abstract: We introduce a restricted latent class exploratory model for longitudinal data with ordinal attributes and respondent-specific covariates. Responses follow a time inhomogeneous hidden Markov model where the probability of a particular latent state at a time point is conditional on values at the previous time point of the respondent's covariates and latent state. We prove that the model is identifiable, state a Bayesian formulation, and demonstrate its efficacy in a variety of scenarios through two simulation studies. We apply the model to response data from a mathematics examination, comparing the results to a previously published confirmatory analysis, and also apply it to emotional state response data which was measured over a several-day period. + oai:arXiv.org:2503.20940v4 stat.ME - physics.soc-ph - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Caitlin Ward, Rob Deardon, Alexandra M. 
Schmidt + Eric Alan Wayman, Steven Andrew Culpepper, Jeff Douglas, Jesse Bowers - Median Consensus Embedding for Dimensionality Reduction - https://arxiv.org/abs/2503.08103 - arXiv:2503.08103v2 Announce Type: replace -Abstract: This study proposes median consensus embedding (MCE) to address variability in low-dimensional embeddings caused by random initialization in nonlinear dimensionality reduction techniques such as $t$-distributed stochastic neighbor embedding. MCE is defined as the geometric median of multiple embeddings. By assuming multiple embeddings as independent and identically distributed random samples and applying large deviation theory, we prove that MCE achieves consistency at an exponential rate. Furthermore, we develop a practical algorithm to implement MCE by constructing a distance function between embeddings based on the Frobenius norm of the pairwise distance matrix of data points. Application to actual data demonstrates that MCE converges rapidly and effectively reduces instability. We further combine MCE with multiple imputation to address missing values and consider multiscale hyperparameters. Results confirm that MCE effectively mitigates instability issues in embedding methods arising from random initialization and other sources. - oai:arXiv.org:2503.08103v2 - stat.ML - cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Evaluation of clinical utility in emulated clinical trials + https://arxiv.org/abs/2506.03991 + arXiv:2506.03991v2 Announce Type: replace +Abstract: Dynamic treatment regimes have been proposed to personalize treatment decisions by utilizing historical patient data, but they may not always improve on the current standard of care. It is thus meaningful to integrate the standard of care into the evaluation of treatment strategies, and previous works have suggested doing so through the concept of clinical utility. 
Here we will focus on the comparative component of clinical utility, defined as the average outcome had the full population received treatment based on the proposed dynamic treatment regime, in comparison to the full population receiving the ``standard'' treatment assignment mechanism, such as a physician's choice. Clinical trials to evaluate clinical utility are rarely conducted, and thus, previous works have proposed an emulated clinical trial framework using observational data. However, only one simple estimator was previously suggested, and the practical details of how one would conduct this emulated trial were not spelled out. Here, we illuminate these details and propose several estimators of clinical utility based on estimators proposed in the dynamic treatment regime literature. We illustrate the considerations and the estimators in a real data example investigating treatment rules for rheumatoid arthritis, where we highlight that in addition to the standard of care, the current medical guidelines should also be compared to any estimated ``optimal'' decision rule. + oai:arXiv.org:2506.03991v2 + stat.AP + Thu, 11 Dec 2025 00:00:00 -0500 replace http://creativecommons.org/licenses/by/4.0/ - Yui Tomo, Daisuke Yoneoka + Johannes Hruza, Arvid Sj\"olander, Erin Gabriel, Samir Bhatt, Michael Sachs - Sufficient digits and density estimation: A Bayesian nonparametric approach using generalized finite P\'olya trees - https://arxiv.org/abs/2506.09437 - arXiv:2506.09437v3 Announce Type: replace -Abstract: This paper proposes a novel approach for statistical modelling of a continuous random variable $X$ on $[0, 1)$, based on its digit representation $X=.X_1X_2\ldots$. In general, $X$ can be coupled with a latent random variable $N$ so that $(X_1,\ldots,X_N)$ becomes a sufficient statistics and $.X_{N+1}X_{N+2}\ldots$ is uniformly distributed. 
In line with this fact, and focusing on binary digits for simplicity, we propose a family of generalized finite P{\'o}lya trees that induces a random density for a sample, which becomes a flexible tool for density estimation. Here, the digit system may be random and learned from the data. We provide a detailed Bayesian analysis, including closed form expression for the posterior distribution. We analyse the frequentist properties as the sample size increases, and provide sufficient conditions for consistency of the posterior distributions of the random density and $N$. We consider an extension to data spanning multiple orders of magnitude, and propose a prior distribution that encodes the so-called extended Newcomb-Benford law. Such a model shows promising results for density estimation of human-activity data. Our methodology is illustrated on several synthetic and real datasets. - oai:arXiv.org:2506.09437v3 - stat.ME - math.PR - math.ST - stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Diffusion Secant Alignment for Score-Based Density Ratio Estimation + https://arxiv.org/abs/2509.04852 + arXiv:2509.04852v3 Announce Type: replace +Abstract: Estimating density ratios has become increasingly important with the recent rise of score-based and diffusion-inspired methods. However, current tangent-based approaches rely on a high-variance learning objective, which leads to unstable training and costly numerical integration during inference. We propose \textit{Interval-annealed Secant Alignment Density Ratio Estimation (ISA-DRE)}, a score-based framework along diffusion interpolants that replaces the instantaneous tangent with its interval integral, the secant, as the learning target. We show theoretically that the secant is a provably lower variance and smoother target for neural approximation, and also a strictly more general representation that contains the tangent as the infinitesimal limit. 
To make secant learning feasible, we introduce the \textit{Secant Alignment Identity (SAI)} to enforce self consistency between secant and tangent representations, and \textit{Contraction Interval Annealing (CIA)} to ensure stable convergence. Empirically, this stability-first formulation produces high efficiency and accuracy. ISA-DRE achieves comparable or superior results with fewer function evaluations, demonstrating robustness under large distribution discrepancies and effectively mitigating the density-chasm problem. + oai:arXiv.org:2509.04852v3 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mario Beraha, Jesper M{\o}ller + http://creativecommons.org/licenses/by/4.0/ + Wei Chen, Shigui Li, Jiacheng Li, Jian Xu, Zhiqi Lin, Junmei Yang, Delu Zeng, John Paisley, Qibin Zhao - A General Approach to Visualizing Uncertainty in Statistical Graphics - https://arxiv.org/abs/2508.00937 - arXiv:2508.00937v3 Announce Type: replace -Abstract: We present a general approach to visualizing uncertainty in static 2-D statistical graphics. If we treat a visualization as a function of its underlying quantities, uncertainty in those quantities induces a distribution over images. We show how to aggregate these images into a single visualization that represents the uncertainty. The approach can be viewed as a generalization of sample-based approaches that use overlay. Notably, standard representations, such as confidence intervals and bands, emerge with their usual coverage guarantees without being explicitly quantified or visualized. As a proof of concept, we implement our approach in the IID setting using resampling, provided as an open-source Python library. Because the approach operates directly on images, the user needs only to supply the data and the code for visualizing the quantities of interest without uncertainty. 
Through several examples, we show how both familiar and novel forms of uncertainty visualization can be created. The implementation is not only a practical validation of the underlying theory but also an immediately usable tool that can complement existing uncertainty-visualization libraries. - oai:arXiv.org:2508.00937v3 - stat.ME - cs.GR + Next-Generation Reservoir Computing for Dynamical Inference + https://arxiv.org/abs/2509.11338 + arXiv:2509.11338v2 Announce Type: replace +Abstract: We present a simple and scalable implementation of next-generation reservoir computing (NGRC) for modeling dynamical systems from time-series data. The method uses a pseudorandom nonlinear projection of time-delay embedded inputs, allowing the feature-space dimension to be chosen independently of the observation size and offering a flexible alternative to polynomial-based NGRC projections. We demonstrate the approach on benchmark tasks, including attractor reconstruction and bifurcation diagram estimation, using partial and noisy measurements. We further show that small amounts of measurement noise during training act as an effective regularizer, improving long-term autonomous stability compared to standard regression alone. Across all tests, the models remain stable over long rollouts and generalize beyond the training data. The framework offers explicit control of system state during prediction, and these properties make NGRC a natural candidate for applications such as surrogate modeling and digital-twin applications. + oai:arXiv.org:2509.11338v2 + stat.ML cs.LG - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bernarda Petek, David Nabergoj, Erik \v{S}trumbelj + http://creativecommons.org/licenses/by/4.0/ + Rok Cestnik, Erik A. 
Martens - Gaussian Approximation for Two-Timescale Linear Stochastic Approximation - https://arxiv.org/abs/2508.07928 - arXiv:2508.07928v2 Announce Type: replace -Abstract: In this paper, we establish non-asymptotic bounds for accuracy of normal approximation for linear two-timescale stochastic approximation (TTSA) algorithms driven by martingale difference or Markov noise. Focusing on both the last iterate and Polyak-Ruppert averaging regimes, we derive bounds for normal approximation in terms of the convex distance between probability distributions. Our analysis reveals a non-trivial interaction between the fast and slow timescales: the normal approximation rate for the last iterate improves as the timescale separation increases, while it decreases in the Polyak-Ruppert averaged setting. We also provide the high-order moment bounds for the error of linear TTSA algorithm, which may be of independent interest. - oai:arXiv.org:2508.07928v2 + Generalized Guarantees for Variational Inference in the Presence of Even and Elliptical Symmetry + https://arxiv.org/abs/2511.01064 + arXiv:2511.01064v2 Announce Type: replace +Abstract: We extend several recent results providing symmetry-based guarantees for variational inference (VI) with location-scale families. VI approximates a target density $p$ by the best match $q^*$ in a family $Q$ of tractable distributions that in general does not contain $p$. It is known that VI can recover key properties of $p$, such as its mean and correlation matrix, when $p$ and $Q$ exhibit certain symmetries and $q^*$ is found by minimizing the reverse Kullback-Leibler divergence. We extend these guarantees in two important directions. First, we provide symmetry-based guarantees for $f$-divergences, a broad class that includes the reverse and forward Kullback-Leibler divergences and the $\alpha$-divergences. We highlight properties specific to the reverse Kullback-Leibler divergence under which we obtain our strongest guarantees. 
Second, we obtain further guarantees for VI when the target density $p$ exhibits even and elliptical symmetries in some but not all of its coordinates. These partial symmetries arise naturally in Bayesian hierarchical models, where the prior induces a challenging geometry but still possesses axes of symmetry. We illustrate these theoretical results in a number of experimental settings. + oai:arXiv.org:2511.01064v2 stat.ML cs.LG - math.OC - math.PR + stat.CO + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Charles C. Margossian, Lawrence K. Saul + + + Statistical Properties of Rectified Flow + https://arxiv.org/abs/2511.03193 + arXiv:2511.03193v3 Announce Type: replace +Abstract: Rectified flow (Liu et al., 2022; Liu, 2022; Wu et al., 2023) is a method for defining a transport map between two distributions, and enjoys popularity in machine learning, although theoretical results supporting the validity of these methods are scant. The rectified flow can be regarded as an approximation to optimal transport, but in contrast to other transport methods that require optimization over a function space, computing the rectified flow only requires standard statistical tools such as regression or density estimation, which we leverage to develop empirical versions of transport maps. We study some structural properties of the rectified flow, including existence, uniqueness, and regularity, as well as the related statistical properties, such as rates of convergence and central limit theorems, for some selected estimators. To do so, we analyze the bounded and unbounded cases separately as each presents unique challenges. In both cases, we are able to establish convergence at faster rates than those for the usual nonparametric regression and density estimation. 
+ oai:arXiv.org:2511.03193v3 math.ST + cs.LG + stat.ME + stat.ML stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 + replace + http://creativecommons.org/licenses/by/4.0/ + Gonzalo Mena, Arun Kumar Kuchibhotla, Larry Wasserman + + + Function-on-Function Bayesian Optimization + https://arxiv.org/abs/2511.12783 + arXiv:2511.12783v2 Announce Type: replace +Abstract: Bayesian optimization (BO) has been widely used to optimize expensive and gradient-free objective functions across various domains. However, existing BO methods have not addressed the objective where both inputs and outputs are functions, which increasingly arise in complex systems as advanced sensing technologies. To fill this gap, we propose a novel function-on-function Bayesian optimization (FFBO) framework. Specifically, we first introduce a function-on-function Gaussian process (FFGP) model with a separable operator-valued kernel to capture the correlations between function-valued inputs and outputs. Compared to existing Gaussian process models, FFGP is modeled directly in the function space. Based on FFGP, we define a scalar upper confidence bound (UCB) acquisition function using a weighted operator-based scalarization strategy. Then, a scalable functional gradient ascent algorithm (FGA) is developed to efficiently identify the optimal function-valued input. We further analyze the theoretical properties of the proposed method. Extensive experiments on synthetic and real-world data demonstrate the superior performance of FFBO over existing approaches. 
+ oai:arXiv.org:2511.12783v2 + stat.ML + cs.LG + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Bogdan Butyrin, Artemy Rubtsov, Alexey Naumov, Vladimir Ulyanov, Sergey Samsonov + Jingru Huang, Haijie Xu, Manrui Jiang, Chen Zhang - A Case for a "Refutations and Critiques" Track in Statistics Journals - https://arxiv.org/abs/2509.03702 - arXiv:2509.03702v3 Announce Type: replace -Abstract: The statistics community, which has traditionally lacked a transparent and open peer-review system, faces a challenge of inconsistent paper quality, with some published work containing substantial errors. This problem resonates with concerns raised by Schaeffer et al. (2025) regarding the rapid growth of machine learning research. They argue that peer review has proven insufficient to prevent the publication of ``misleading, incorrect, flawed or perhaps even fraudulent studies'' and that a ``dynamic self-correcting research ecosystem'' is needed. This note provides a concrete illustration of this problem by examining two published papers, Wang, Zhou and Lin (2025) and Liu et al. (2023), and exposing striking and critical errors in their proofs. The presence of such errors in major journals raises a fundamental question about the importance and verification of mathematical proofs in our field. Echoing the proposal from Schaeffer et al. (2025), we argue that reforming the peer-review system itself is likely impractical. Instead, we propose a more viable path forward: the creation of a high-profile, reputable platform, such as a ``Refutations and Critiques'' track on arXiv, to provide visibility to vital research that critically challenges prior work. Such a mechanism would be crucial for enhancing the reliability and credibility of statistical research. 
- oai:arXiv.org:2509.03702v3 - stat.ME + Solving a Research Problem in Mathematical Statistics with AI Assistance + https://arxiv.org/abs/2511.18828 + arXiv:2511.18828v2 Announce Type: replace +Abstract: Over the last few months, AI models including large language models have improved greatly. There are now several documented examples where they have helped professional mathematical scientists prove new results, sometimes even helping resolve known open problems. In this short note, we add another example to the list, by documenting how we were able to solve a previously unsolved research problem in robust mathematical statistics with crucial help from GPT-5. Our problem concerns robust density estimation, where the observations are perturbed by Wasserstein-bounded contaminations. In a previous preprint (Chao and Dobriban, 2023, arxiv:2308.01853v2), we have obtained upper and lower bounds on the minimax optimal estimation error; which were, however, not sharp. + Starting in October 2025, making significant use of GPT-5 Pro, we were able to derive the minimax optimal error rate (reported in version 3 of the above arxiv preprint). GPT-5 provided crucial help along the way, including by suggesting calculations that we did not think of, and techniques that were not familiar to us, such as the dynamic Benamou-Brenier formulation, for key steps in the analysis. Working with GPT-5 took a few weeks of effort, and we estimate that it could have taken several months to get the same results otherwise. At the same time, there are still areas where working with GPT-5 was challenging: it sometimes provided incorrect references, and glossed over details that sometimes took days of work to fill in. We outline our workflow and steps taken to mitigate issues. Overall, our work can serve as additional documentation for a new age of human-AI collaborative work in mathematical science. 
+ oai:arXiv.org:2511.18828v2 math.ST + cs.AI + cs.LG stat.TH - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Zhen Li + http://creativecommons.org/licenses/by/4.0/ + Edgar Dobriban - Bayes Factor Hypothesis Testing in Meta-Analyses: Practical Advantages and Methodological Considerations - https://arxiv.org/abs/2511.22535 - arXiv:2511.22535v2 Announce Type: replace -Abstract: Bayesian hypothesis testing via Bayes factors offers a principled alternative to classical p-value methods in meta-analysis, particularly suited to its cumulative and sequential nature. Unlike commonly reported p-values for standard null hypothesis significance testing, Bayes factors allow for quantifying support both for and against the existence of an effect, facilitate ongoing evidence monitoring, and maintain coherent long-run behavior as additional studies are incorporated. Recent theoretical developments further show how Bayes factors can flexibly control Type I error rates through connections to e-value theory. Despite these advantages, their use remains limited in the meta-analytic literature. This paper provides a critical overview of their theoretical properties, methodological considerations, such as prior sensitivity, and practical advantages for evidence synthesis. Two illustrative applications are provided: one on statistical learning in individuals with language impairments, and another on seroma incidence following post-operative exercise in breast cancer patients. New tools supporting these methods are available in the open-source R package BFpack. 
- oai:arXiv.org:2511.22535v2 + Two-stage Estimation for Causal Inference Involving a Semi-continuous Exposure + https://arxiv.org/abs/2511.20985 + arXiv:2511.20985v3 Announce Type: replace +Abstract: Methods for causal inference are well developed for binary and continuous exposures, but in many settings, the exposure has a substantial mass at zero-such exposures are called semi-continuous. We propose a general causal framework for such semi-continuous exposures, together with a novel two-stage estimation strategy. A two-part propensity structure is introduced for the semi-continuous exposure, with one component for exposure status (exposed vs unexposed) and another for the exposure level among those exposed, and incorporates both into a marginal structural model that disentangles the effects of exposure status and dose. The two-stage procedure sequentially targets the causal dose-response among exposed individuals and the causal effect of exposure status at a reference dose, allowing flexibility in the choice of propensity score methods in the second stage. We establish consistency and asymptotic normality for the resulting estimators, and characterise their limiting values under misspecification of the propensity score models. Simulation studies evaluate finite sample performance and robustness, and an application to a study of prenatal alcohol exposure and child cognition demonstrates how the proposed methods can be used to address a range of scientific questions about both exposure status and exposure intensity. + oai:arXiv.org:2511.20985v3 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Joris Mulder, Robbie C. M. van Aert + http://creativecommons.org/licenses/by/4.0/ + Xiaoya Wang, Richard J. Cook, Yeying Zhu, Tugba Akkaya-Hocagil, R. Colin Carter, Sandra W. Jacobson, Joseph L. Jacobson, Louise M. 
Ryan - Assumption-Lean Differential Variance Inference for Heterogeneous Treatment Effect Detection - https://arxiv.org/abs/2512.03254 - arXiv:2512.03254v3 Announce Type: replace -Abstract: The conditional average treatment effect (CATE) is frequently estimated to refute the homogeneous treatment effect assumption. Under this assumption, all units making up the population under study experience identical benefit from a given treatment. Uncovering heterogeneous treatment effects through inference about the CATE, however, requires that covariates truly modifying the treatment effect be reliably collected at baseline. CATE-based techniques will necessarily fail to detect violations when effect modifiers are omitted from the data due to, for example, resource constraints. Severe measurement error has a similar impact. To address these limitations, we prove that the homogeneous treatment effect assumption can be gauged through inference about contrasts of the potential outcomes' variances. We derive causal machine learning estimators of these contrasts and study their asymptotic properties. We establish that these estimators are doubly robust and asymptotically linear under mild conditions, permitting formal hypothesis testing about the homogeneous treatment effect assumption even when effect modifiers are missing or mismeasured. Numerical experiments demonstrate that these estimators' asymptotic guarantees are approximately achieved in experimental and observational data alike. These inference procedures are then used to detect heterogeneous treatment effects in the re-analysis of randomized controlled trials investigating targeted temperature management in cardiac arrest patients. 
- oai:arXiv.org:2512.03254v3
 stat.ME
 Wed, 10 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Philippe A. Boileau, Hani Zaki, Gabriele Lileikyte, Niklas Nielsen, Patrick R. Lawler, Mireille E. Schnitzer


 + Sequential Randomization Tests Using e-values: Applications for trial monitoring
 + https://arxiv.org/abs/2512.04366
 + arXiv:2512.04366v3 Announce Type: replace
+Abstract: Sequential monitoring of randomized trials traditionally relies on parametric assumptions or asymptotic approximations. We discuss a nonparametric sequential test and its application to continuous and time-to-event endpoints that derives validity solely from the randomization mechanism. Using a betting framework, these tests construct a test martingale by sequentially wagering on treatment assignments given observed outcomes. Under the null hypothesis of no treatment effect, the expected wealth cannot grow, guaranteeing anytime-valid Type I error control regardless of stopping rule. We prove validity and present simulation studies demonstrating calibration and power. These methods provide a conservative, assumption-free complement to model-based sequential analyses.
 + oai:arXiv.org:2512.04366v3
 stat.ME
 stat.AP
 Thu, 11 Dec 2025 00:00:00 -0500
 replace
 http://creativecommons.org/licenses/by/4.0/
 Fernando G Zampieri


 - Controlling the False Discovery Proportion in Matched Observational Studies
 - https://arxiv.org/abs/2512.06601
 - arXiv:2512.06601v2 Announce Type: replace
-Abstract: We provide an approach to exploratory data analysis in matched observational studies with a single intervention and multiple endpoints. In such settings, the researcher would like to explore evidence for actual treatment effects among these variables while accounting not only for the possibility of false discoveries, but also for the potential impact of unmeasured confounding. For any candidate subset of hypotheses about these outcomes, we provide sensitivity sets for the proportion of the hypotheses within the subset which are actually true. 
The resulting sensitivity statements are valid simultaneously over all possible choices for the rejected set, allowing the researcher to search for promising subsets of hypotheses that maintain a large estimated fraction of true discoveries even if hidden bias is present. The approach is well suited to sensitivity analysis, as conclusions that some fraction of outcomes are affected by the treatment exhibit larger robustness to unmeasured confounding than findings that any particular outcome is affected. We show how a sequence of integer programs, in tandem with screening steps, facilitate the efficient computation of the required sensitivity sets. We illustrate the practical utility of our method through both simulation studies and a data example on the long-term impacts of childhood abuse. - oai:arXiv.org:2512.06601v2 + ADOPT: Additive Optimal Transport Regression + https://arxiv.org/abs/2512.08118 + arXiv:2512.08118v2 Announce Type: replace +Abstract: Regression analysis for responses taking values in general metric spaces has received increasing attention, particularly for settings with Euclidean predictors $X \in \mathbb{R}^p$ and non-Euclidean responses $Y$ in metric spaces. While additive regression is a powerful tool for enhancing interpretability and mitigating the curse of dimensionality in the presence of multivariate predictors, its direct extension is hindered by the absence of vector space operations in general metric spaces. We propose a novel framework for additive optimal transport regression, which incorporates additive structure through optimal geodesic transports. A key idea is to extend the notion of optimal transports in Wasserstein spaces to general geodesic metric spaces. This unified approach accommodates a wide range of responses, including probability distributions, symmetric positive definite (SPD) matrices with various metrics and spherical data. 
The practical utility of the method is illustrated with correlation matrices derived from resting state fMRI brain imaging data. + oai:arXiv.org:2512.08118v2 stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Mengqi Lin, Colin Fogarty + Wookyeong Song, Hans-Georg M\"uller - Discovering Influential Factors in Variational Autoencoders - https://arxiv.org/abs/1809.01804 - arXiv:1809.01804v3 Announce Type: replace-cross -Abstract: In the field of machine learning, it is still a critical issue to identify and supervise the learned representation without manually intervening or intuition assistance to extract useful knowledge or serve for the downstream tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder(VAE). The VAE is proposed to learn independent low dimension representation while facing the problem that sometimes pre-set factors are ignored. We argue that the mutual information of the input and each learned factor of the representation plays a necessary indicator of discovering the influential factors. We find the VAE objective inclines to induce mutual information sparsity in factor dimension over the data intrinsic dimension and therefore result in some non-influential factors whose function on data reconstruction could be ignored. We show mutual information also influences the lower bound of the VAE's reconstruction error and downstream classification task. To make such indicator applicable, we design an algorithm for calculating the mutual information for the VAE and prove its consistency. Experimental results on MNIST, CelebA and DEAP datasets show that mutual information can help determine influential factors, of which some are interpretable and can be used to further generation and classification tasks, and help discover the variant that connects with emotion on DEAP dataset. 
- oai:arXiv.org:1809.01804v3 + Information-Theoretic Active Correlation Clustering + https://arxiv.org/abs/2402.03587 + arXiv:2402.03587v3 Announce Type: replace-cross +Abstract: Correlation clustering is a flexible framework for partitioning data based solely on pairwise similarity or dissimilarity information, without requiring the number of clusters as input. However, in many practical scenarios, these pairwise similarities are not available a priori and must be obtained through costly measurements or human feedback. This motivates the use of active learning to query only the most informative pairwise comparisons, enabling effective clustering under budget constraints. In this work, we develop a principled active learning approach for correlation clustering by introducing several information-theoretic acquisition functions that prioritize queries based on entropy and expected information gain. These strategies aim to reduce uncertainty about the clustering structure as efficiently as possible. We evaluate our methods across a range of synthetic and real-world settings and show that they significantly outperform existing baselines in terms of clustering accuracy and query efficiency. Our results highlight the benefits of combining active learning with correlation clustering in settings where similarity information is costly or limited. 
+ oai:arXiv.org:2402.03587v3 cs.LG stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shiqi Liu, Jingxin Liu, Qian Zhao, Xiangyong Cao, Huibin Li, Deyu Meng, Hongying Meng, Sheng Liu + IEEE International Conference on Data Mining (ICDM), 2025 + Linus Aronsson, Morteza Haghir Chehreghani - Identifying Treatment and Spillover Effects Using Exposure Contrasts - https://arxiv.org/abs/2403.08183 - arXiv:2403.08183v4 Announce Type: replace-cross -Abstract: To report spillover effects, a common practice is to regress outcomes on statistics summarizing neighbors' treatments. This paper studies nonparametric analogs of these estimands, which we refer to as exposure contrasts. We demonstrate that a contrast may have the opposite sign of the unit-level effects of interest even under unconfoundedness. We then provide interpretable conditions on interference and the assignment mechanism under which exposure contrasts can be represented as convex averages of the unit-level effects and therefore avoid sign reversals. These conditions encompass cluster-randomized trials, network experiments, and observational settings with peer effects in selection into treatment. - oai:arXiv.org:2403.08183v4 + Kernel Three Pass Regression Filter + https://arxiv.org/abs/2405.07292 + arXiv:2405.07292v4 Announce Type: replace-cross +Abstract: We forecast a single time series using a high-dimensional set of predictors. When these predictors share common underlying dynamics, an approximate latent factor model provides a powerful characterization of their co-movements Bai(2003). These latent factors succinctly summarize the data and can also be used for prediction, alleviating the curse of dimensionality in high-dimensional prediction exercises, see Stock & Watson (2002a). However, forecasting using these latent factors suffers from two potential drawbacks. 
First, not all pervasive factors among the set of predictors may be relevant, and using all of them can lead to inefficient forecasts. The second shortcoming is the assumption of linear dependence of predictors on the underlying factors. The first issue can be addressed by using some form of supervision, which leads to the omission of irrelevant information. One example is the three-pass regression filter proposed by Kelly & Pruitt (2015). We extend their framework to cases where the form of dependence might be nonlinear by developing a new estimator, which we refer to as the Kernel Three-Pass Regression Filter (K3PRF). This alleviates the aforementioned second shortcoming. The estimator is computationally efficient and performs well empirically. The short-term performance matches or exceeds that of established models, while the long-term performance shows significant improvement. + oai:arXiv.org:2405.07292v4 econ.EM + q-fin.ST stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Michael P. Leung - - - Explosive neural networks via higher-order interactions in curved statistical manifolds - https://arxiv.org/abs/2408.02326 - arXiv:2408.02326v3 Announce Type: replace-cross -Abstract: Higher-order interactions underlie complex phenomena in systems such as biological and artificial neural networks, but their study is challenging due to the scarcity of tractable models. By leveraging a generalisation of the maximum entropy principle, we introduce curved neural networks as a class of models with a limited number of parameters that are particularly well-suited for studying higher-order phenomena. Through exact mean-field descriptions, we show that these curved neural networks implement a self-regulating annealing process that can accelerate memory retrieval, leading to explosive order-disorder phase transitions with multi-stability and hysteresis effects. 
Moreover, by analytically exploring their memory-retrieval capacity using the replica trick, we demonstrate that these networks can enhance memory capacity and robustness of retrieval over classical associative-memory networks. Overall, the proposed framework provides parsimonious models amenable to analytical study, revealing higher-order phenomena in complex networks. - oai:arXiv.org:2408.02326v3 - cond-mat.dis-nn - cond-mat.stat-mech - cs.IT - math.IT - nlin.AO - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - 10.1038/s41467-025-61475-w - Aguilera, M., Morales, P.A., Rosas, F.E. et al. Explosive neural networks via higher-order interactions in curved statistical manifolds. Nature Communications 16, 6511 (2025) - Miguel Aguilera, Pablo A. Morales, Fernando E. Rosas, Hideaki Shimazaki - - - GLL: A Differentiable Graph Learning Layer for Neural Networks - https://arxiv.org/abs/2412.08016 - arXiv:2412.08016v2 Announce Type: replace-cross -Abstract: Standard deep learning architectures used for classification generate label predictions with a projection head and softmax activation function. Although successful, these methods fail to leverage the relational information between samples for generating label predictions. In recent works, graph-based learning techniques, namely Laplace learning, have been heuristically combined with neural networks for both supervised and semi-supervised learning (SSL) tasks. However, prior works approximate the gradient of the loss function with respect to the graph learning algorithm or decouple the processes; end-to-end integration with neural networks is not achieved. In this work, we derive backpropagation equations, via the adjoint method, for inclusion of a general family of graph learning layers into a neural network. 
The resulting method, distinct from graph neural networks, allows us to precisely integrate similarity graph construction and graph Laplacian-based label propagation into a neural network layer, replacing a projection head and softmax activation function for general classification task. Our experimental results demonstrate smooth label transitions across data, improved generalization and robustness to adversarial attacks, and improved training dynamics compared to a standard softmax-based approach. - oai:arXiv.org:2412.08016v2 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jason Brown, Bohan Chen, Harris Hardiman-Mostow, Jeff Calder, Andrea L. Bertozzi - - - Flow-based Conformal Prediction for Multi-dimensional Time Series - https://arxiv.org/abs/2502.05709 - arXiv:2502.05709v2 Announce Type: replace-cross -Abstract: Time series prediction underpins a broad range of downstream tasks across many scientific domains. Recent advances and increasing adoption of black-box machine learning models for time series prediction highlight the critical need for reliable uncertainty quantification. While conformal prediction has gained attention as a reliable uncertainty quantification method, conformal prediction for time series faces two key challenges: (1) adaptively leveraging correlations in features and non-conformity scores to overcome the exchangeability assumption, and (2) constructing prediction sets for multi-dimensional outcomes. To address these challenges jointly, we propose a novel conformal prediction method for time series using flow with classifier-free guidance. We provide coverage guarantees by establishing exact non-asymptotic marginal coverage and a finite-sample bound on conditional coverage for the proposed method. 
Evaluations on real-world time series datasets demonstrate that our method constructs significantly smaller prediction sets than existing conformal prediction methods while maintaining target coverage. - oai:arXiv.org:2502.05709v2 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Junghwan Lee, Chen Xu, Yao Xie + http://creativecommons.org/licenses/by-nc-sa/4.0/ + Rajveer Jat, Daanish Padha - Proper Learnability and the Role of Unlabeled Data - https://arxiv.org/abs/2502.10359 - arXiv:2502.10359v2 Announce Type: replace-cross -Abstract: Proper learning refers to the setting in which learners must emit predictors in the underlying hypothesis class $H$, and often leads to learners with simple algorithmic forms (e.g. empirical risk minimization (ERM), structural risk minimization (SRM)). The limitation of proper learning, however, is that there exist problems which can only be learned improperly, e.g. in multiclass classification. Thus, we ask: Under what assumptions on the hypothesis class or the information provided to the learner is a problem properly learnable? We first demonstrate that when the unlabeled data distribution is given, there always exists an optimal proper learner governed by distributional regularization, a randomized generalization of regularization. We refer to this setting as the distribution-fixed PAC model, and continue to evaluate the learner on its worst-case performance over all distributions. Our result holds for all metric loss functions and any finite learning problem (with no dependence on its size). Further, we demonstrate that sample complexities in the distribution-fixed PAC model can shrink by only a logarithmic factor from the classic PAC model, strongly refuting the role of unlabeled data in PAC learning (from a worst-case perspective). 
- We complement this with impossibility results which obstruct any characterization of proper learnability in the realizable PAC model. First, we observe that there are problems whose proper learnability is logically undecidable, i.e., independent of the ZFC axioms. We then show that proper learnability is not a monotone property of the underlying hypothesis class, and that it is not a local property (in a precise sense). Our impossibility results all hold even for the fundamental setting of multiclass classification, and go through a reduction of EMX learning (Ben-David et al., 2019) to proper classification which may be of independent interest. - oai:arXiv.org:2502.10359v2 + Spectral Analysis of Diffusion Models with Application to Schedule Design + https://arxiv.org/abs/2502.00180 + arXiv:2502.00180v3 Announce Type: replace-cross +Abstract: Diffusion models (DMs) have emerged as powerful tools for modeling complex data distributions and generating realistic new samples. Over the years, advanced architectures and sampling methods have been developed to make these models practically usable. However, certain synthesis process decisions still rely on heuristics without a solid theoretical foundation. In our work, we offer a novel analysis of the DM's inference process, introducing a comprehensive frequency response perspective. Specifically, by relying on Gaussianity assumption, we present the inference process as a closed-form spectral transfer function, capturing how the generated signal evolves in response to the initial noise. We demonstrate how the proposed analysis can be leveraged to design a noise schedule that aligns effectively with the characteristics of the data. The spectral perspective also provides insights into the underlying dynamics and sheds light on the relationship between spectral properties and noise schedule structure. 
Our results lead to scheduling curves that are dependent on the spectral content of the data, offering a theoretical justification for some of the heuristics taken by practitioners. + oai:arXiv.org:2502.00180v3 cs.LG stat.ML Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by-sa/4.0/ Roi Benita, Michael Elad, Joseph Keshet - Representation Retrieval Learning for Heterogeneous Data Integration - https://arxiv.org/abs/2503.09494 - arXiv:2503.09494v3 Announce Type: replace-cross -Abstract: In the era of big data, large-scale, multi-source, multi-modality datasets are increasingly ubiquitous, offering unprecedented opportunities for predictive modeling and scientific discovery. However, these datasets often exhibit complex heterogeneity, such as covariate shift, posterior drift, and blockwise missingness, which worsen the predictive performance of existing supervised learning algorithms. To address these challenges simultaneously, we propose a novel Representation Retrieval (R2) framework, which integrates a dictionary of representation learning modules (representer dictionary) with data source-specific sparsity-induced machine learning models (learners). Under the R2 framework, we introduce the notion of integrativeness for each representer, and propose a novel Selective Integration Penalty (SIP) to explicitly encourage more integrative representers to improve predictive performance. Theoretically, we show that the excess risk bound of the R2 framework is characterized by the integrativeness of representers, and SIP effectively improves the excess risk. Extensive simulation studies validate the superior performance of the R2 framework and the effect of SIP. We further apply our method to two real-world datasets to confirm its empirical success.
- oai:arXiv.org:2503.09494v3 - cs.LG + Automatic Inference for Value-Added Regressions + https://arxiv.org/abs/2503.19178 + arXiv:2503.19178v2 Announce Type: replace-cross +Abstract: A large empirical literature regresses outcomes on empirical Bayes shrinkage estimates of value-added, yet little is known about whether this approach leads to unbiased estimates and valid inference for the downstream regression coefficients. We study a general class of empirical Bayes estimators and the properties of the resulting regression coefficients. We show that estimators can be asymptotically biased and inference can be invalid if the shrinkage estimator does not account for heteroskedasticity in the noise when estimating value added. By contrast, shrinkage estimators properly constructed to model this heteroskedasticity perform an automatic bias correction: the associated regression estimator is asymptotically unbiased, asymptotically normal, and efficient in the sense that it is asymptotically equivalent to regressing on the true (latent) value-added. Further, OLS standard errors from regressing on shrinkage estimates are consistent in this case. As such, efficient inference is easy for practitioners to implement: simply regress outcomes on shrinkage estimates of value-added that account for noise heteroskedasticity. + oai:arXiv.org:2503.19178v2 + econ.EM stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Qi Xu, Annie Qu + Tian Xie - Rethinking Few-Shot Image Fusion: Granular Ball Priors Enable General-Purpose Deep Fusion - https://arxiv.org/abs/2504.08937 - arXiv:2504.08937v4 Announce Type: replace-cross -Abstract: In image fusion tasks, the absence of real fused images as priors forces most deep learning approaches to rely on large-scale paired datasets to extract global weighting features or to generate pseudo-supervised images through algorithmic constructions. 
Unlike previous methods, this work re-examines prior-guided learning under few-shot conditions by introducing rough set theory. We regard the traditional algorithm as a prior generator, while the network re-infers and adaptively optimizes the prior through a dynamic loss function, reducing the inference burden of the network and enabling effective few-shot learning. To provide the prior, we propose the Granular Ball Pixel Computation (GBPC) algorithm. GBPC models pixel pairs in a luminance subspace using meta-granular balls and mines intra-ball information at multiple granular levels. At the fine-grained level, sliding granular balls assign adaptive weights to individual pixels to produce pixel-level prior fusion. At the coarse-grained level, the algorithm performs split computation within a single image to estimate positive and boundary domain distributions, enabling modality awareness and prior confidence estimation, which dynamically guide the loss weighting. The network and the algorithmic prior are coupled through the loss function to form an integrated framework. Thanks to the dynamic weighting mechanism, the network can adaptively adjust to different priors during training, enhancing its perception and fusion capability across modalities. We name this framework GBFF (Granular Ball Fusion Framework). Experiments on four fusion tasks demonstrate that even with only ten training image pairs per task, GBFF achieves superior performance in both visual quality and model compactness. Code is available at: https://github.com/DMinjie/GBFF - oai:arXiv.org:2504.08937v4 - cs.GR - cs.CV - cs.LG - eess.IV - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Inference on effect size after multiple hypothesis testing + https://arxiv.org/abs/2503.22369 + arXiv:2503.22369v3 Announce Type: replace-cross +Abstract: Significant treatment effects are often emphasized when interpreting and summarizing empirical findings in studies that estimate multiple, possibly many, treatment effects.
Under this kind of selective reporting, conventional treatment effect estimates may be biased and their corresponding confidence intervals may undercover the true effect sizes. We propose new estimators and confidence intervals that provide valid inferences on the effect sizes of the significant effects after multiple hypothesis testing. Our methods are based on the principle of selective conditional inference and complement a wide range of tests, including step-up tests and bootstrap-based step-down tests. Our approach is scalable, allowing us to study an application with over 370 estimated effects. We justify our procedure for asymptotically normal treatment effect estimators. We provide two empirical examples that demonstrate bias correction and confidence interval adjustments for significant effects. The magnitude and direction of the bias correction depend on the correlation structure of the estimated effects and whether the interpretation of the significant effects depends on the (in)significance of other effects. + oai:arXiv.org:2503.22369v3 econ.EM math.ST stat.TH Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by-sa/4.0/ Andreas Dzemski, Ryo Okui, Wenjie Wang - SSRCA: a novel machine learning pipeline to perform sensitivity analysis for agent-based models - https://arxiv.org/abs/2506.00168 - arXiv:2506.00168v3 Announce Type: replace-cross -Abstract: Agent-based models (ABMs) are widely used in biology to understand how individual actions scale into emergent population behavior. Modelers employ sensitivity analysis (SA) algorithms to quantify input parameters' impact on model outputs; however, it is hard to perform SA for ABMs due to their computationally intensive and complex nature.
In this work, we develop the Simulate, Summarize, Reduce, Cluster, and Analyze (SSRCA) methodology, a machine-learning based pipeline designed to facilitate SA for ABMs. In particular, SSRCA can achieve the following tasks for ABMs: 1) identify sensitive model parameters, 2) reveal common output model patterns, and 3) determine which input parameter values generate these patterns. We use an example ABM of tumor spheroid growth to showcase how SSRCA identifies four common patterns from the ABM and the parameter regions that generate these outputs. Additionally, we compare the SA results between SSRCA and the popular Sobol' Method and find that SSRCA's identified sensitive parameters are robust to the choice of model descriptors while Sobol's are not. This analysis could streamline data-driven tasks, such as parameter estimation, for ABMs by reducing parameter space. While we highlight these results with an ABM on tumor spheroid formation, the SSRCA Methodology is broadly applicable to biological ABMs. - oai:arXiv.org:2506.00168v3 - q-bio.QM - q-bio.CB + Adversarially Pretrained Transformers May Be Universally Robust In-Context Learners + https://arxiv.org/abs/2505.14042 + arXiv:2505.14042v2 Announce Type: replace-cross +Abstract: Adversarial training is one of the most effective adversarial defenses, but it incurs a high computational cost. In this study, we present the first theoretical analysis suggesting that adversarially pretrained transformers can serve as universally robust foundation models -- models that can robustly adapt to diverse downstream tasks with only lightweight tuning. Specifically, we demonstrate that single-layer linear transformers, after adversarial pretraining across a variety of classification tasks, can robustly generalize to unseen classification tasks through in-context learning from clean demonstrations (i.e., without requiring additional adversarial training or examples).
This universal robustness stems from the model's ability to adaptively focus on robust features within given tasks. We also identify two open challenges for attaining robustness: the accuracy--robustness trade-off and sample-hungry training. This study initiates the discussion on the utility of universally robust foundation models. While their training is expensive, the investment would prove worthwhile as downstream tasks can enjoy free adversarial robustness. The code is available at https://github.com/s-kumano/universally-robust-in-context-learner. + oai:arXiv.org:2505.14042v2 cs.LG cs.CV stat.ML Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by-nc-sa/4.0/ Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki - Knowledge Adaptation as Posterior Correction - https://arxiv.org/abs/2506.14262 - arXiv:2506.14262v3 Announce Type: replace-cross -Abstract: Adaptation is the holy grail of intelligence, but even the best AI models lack the adaptability of toddlers. In spite of great progress, little is known about the mechanisms by which machines can learn to adapt as fast as humans and animals. Here, we cast adaptation as `correction' of old posteriors and show that a wide variety of existing adaptation methods follow this very principle, including those used for continual learning, federated learning, unlearning, and model merging. In all these settings, more accurate posteriors often lead to smaller corrections and can enable faster adaptation. Posterior correction is derived by using the dual representation of the Bayesian Learning Rule of Khan and Rue (2023), where the interference between the old representation and new information is quantified by using the natural-gradient mismatch. We present many examples demonstrating how machines can learn to adapt quickly by using posterior correction.
- oai:arXiv.org:2506.14262v3 cs.LG cs.AI stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ Mohammad Emtiyaz Khan + A Framework for Controllable Multi-objective Learning with Annealed Stein Variational Hypernetworks + https://arxiv.org/abs/2506.06715 + arXiv:2506.06715v3 Announce Type: replace-cross +Abstract: Pareto Set Learning (PSL) is popular as an efficient approach to obtaining the complete optimal solution in Multi-objective Learning (MOL). A set of optimal solutions approximates the Pareto set, and its mapping is a set of dense points in the Pareto front in objective space. However, some current methods face a challenge: how to make the Pareto solutions diverse while maximizing the hypervolume value. In this paper, we propose a novel method to address this challenge, which employs Stein Variational Gradient Descent (SVGD) to approximate the entire Pareto set. SVGD pushes a set of particles towards the Pareto set by applying a form of functional gradient descent, which helps to converge and diversify optimal solutions. Additionally, we employ diverse gradient direction strategies to thoroughly investigate a unified framework for SVGD in multi-objective optimization and adapt this framework with an annealing schedule to promote stability. We introduce our method, SVH-MOL, and validate its effectiveness through extensive experiments on multi-objective problems and multi-task learning, demonstrating its superior performance. + oai:arXiv.org:2506.06715v3 cs.LG stat.ML Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ Minh-Duc Nguyen, Dung D.
Le - Elucidated Rolling Diffusion Models for Probabilistic Forecasting of Complex Dynamics - https://arxiv.org/abs/2506.20024 - arXiv:2506.20024v3 Announce Type: replace-cross -Abstract: Diffusion models are a powerful tool for probabilistic forecasting, yet most applications in high-dimensional complex systems predict future states individually. This approach struggles to model complex temporal dependencies and fails to explicitly account for the progressive growth of uncertainty inherent to the systems. While rolling diffusion frameworks, which apply increasing noise to forecasts at longer lead times, have been proposed to address this, their integration with state-of-the-art, high-fidelity diffusion techniques remains a significant challenge. We tackle this problem by introducing Elucidated Rolling Diffusion Models (ERDM), the first framework to successfully unify a rolling forecast structure with the principled, performant design of Elucidated Diffusion Models (EDM). To do this, we adapt the core EDM components (its noise schedule, network preconditioning, and Heun sampler) to the rolling forecast setting. The success of this integration is driven by three key contributions: (i) a novel loss weighting scheme that focuses model capacity on the mid-range forecast horizons where determinism gives way to stochasticity; (ii) an efficient initialization strategy using a pre-trained EDM for the initial window; and (iii) a bespoke hybrid sequence architecture for robust spatiotemporal feature extraction under progressive denoising. On 2D Navier-Stokes simulations and ERA5 global weather forecasting at 1.5-degree resolution, ERDM consistently outperforms key diffusion-based baselines, including conditional autoregressive EDM. ERDM offers a flexible and powerful general framework for tackling diffusion-based dynamics forecasting problems where modeling uncertainty propagation is paramount.
- oai:arXiv.org:2506.20024v3 + Efficient $Q$-Learning and Actor-Critic Methods for Robust Average Reward Reinforcement Learning + https://arxiv.org/abs/2506.07040 + arXiv:2506.07040v3 Announce Type: replace-cross +Abstract: We present a non-asymptotic convergence analysis of $Q$-learning and actor-critic algorithms for robust average-reward Markov Decision Processes (MDPs) under contamination, total-variation (TV) distance, and Wasserstein uncertainty sets. A key ingredient of our analysis is showing that the optimal robust $Q$ operator is a strict contraction with respect to a carefully designed semi-norm (with constant functions quotiented out). This property enables a stochastic approximation update that learns the optimal robust $Q$-function using $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We also provide an efficient routine for robust $Q$-function estimation, which in turn facilitates robust critic estimation. Building on this, we introduce an actor-critic algorithm that learns an $\epsilon$-optimal robust policy within $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. We provide numerical simulations to evaluate the performance of our algorithms. + oai:arXiv.org:2506.07040v3 cs.LG cs.AI - physics.ao-ph stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Advances in Neural Information Processing Systems (NeurIPS), 2025 - Salva R\"uhling Cachay, Miika Aittala, Karsten Kreis, Noah Brenowitz, Arash Vahdat, Morteza Mardani, Rose Yu + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Yang Xu, Swetha Ganesh, Vaneet Aggarwal - Hebbian Physics Networks: A Self-Organizing Computational Architecture Based on Local Physical Laws - https://arxiv.org/abs/2507.00641 - arXiv:2507.00641v2 Announce Type: replace-cross -Abstract: Physical transport processes organize through local interactions that redistribute imbalance while preserving conservation. 
Classical solvers enforce this organization by applying fixed discrete operators on rigid grids. We introduce the Hebbian Physics Network (HPN), a computational framework that replaces this rigid scaffolding with a plastic transport geometry. An HPN is a coupled dynamical system of physical states on nodes and constitutive weights on edges in a graph. Residuals--local violations of continuity, momentum balance, or energy conservation--act as thermodynamic forces that drive the joint evolution of both the state and the operator (i.e. the adaptive weights). The weights adapt through a three-factor Hebbian rule, which we prove constitutes a strictly local gradient descent on the residual energy. This mechanism ensures thermodynamic stability: near equilibrium, the learned operator naturally converges to a symmetric, positive-definite form, rigorously reproducing Onsager's reciprocal relations without explicit enforcement. Far from equilibrium, the system undergoes a self-organizing search for a transport topology that restores global coercivity. Unlike optimization-based approaches that impose physics through global loss functions, HPNs embed conservation intrinsically: transport is restored locally by the evolving operator itself, without a global Poisson solve or backpropagated objective. We demonstrate the framework on scalar diffusion and incompressible lid-driven cavity flow, showing that physically consistent transport geometries and flow structures emerge from random initial conditions solely through residual-driven local adaptation. HPNs thus reframe computation not as the solution of a fixed equation, but as a thermodynamic relaxation process where the constitutive geometry and physical state co-evolve.
- oai:arXiv.org:2507.00641v2 - nlin.AO - cs.LG + Bayesian power spectral density estimation for LISA noise based on P-splines with a parametric boost + https://arxiv.org/abs/2510.00533 + arXiv:2510.00533v2 Announce Type: replace-cross +Abstract: Flexible and accurate noise characterization is crucial for the precise estimation of gravitational-wave parameters. We introduce a Bayesian method for estimating the power spectral density (PSD) of long, stationary time series, explicitly tailored for LISA data analysis. Our approach models the PSD as the geometric mean of a parametric and a nonparametric component, combining the knowledge from parametric models with the flexibility to capture deviations from theoretical expectations. The nonparametric component is expressed by a mixture of penalized B-splines. Adaptive, data-driven knot placement, performed once at initialization, removes the need for reversible-jump Markov chain Monte Carlo, while hierarchical roughness-penalty priors prevent overfitting. Validation on simulated autoregressive AR(4) data demonstrates estimator consistency and shows that well-matched parametric components reduce the integrated absolute error compared to an uninformative baseline, requiring fewer spline knots to achieve comparable accuracy. Applied to one year of simulated LISA X-channel (univariate) noise, our method achieves relative integrated absolute errors of $\mathcal{O}(10^{-2})$, making it suitable for iterative analysis pipelines and multi-year mission data sets. 
+ oai:arXiv.org:2510.00533v2 + gr-qc + astro-ph.IM + physics.comp-ph stat.CO - stat.ME - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Gunjan Auti, Hirofumi Daiguji, Gouhei Tanaka + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Nazeela Aimen, Patricio Maturana-Russel, Avi Vajpeyi, Nelson Christensen, Renate Meyer - Amortized Bayesian Meta-Learning for Low-Rank Adaptation of Large Language Models - https://arxiv.org/abs/2508.14285 - arXiv:2508.14285v2 Announce Type: replace-cross -Abstract: Fine-tuning large language models (LLMs) with low-rank adaptation (LoRA) is a cost-effective way to incorporate information from a specific dataset. However, it is often unclear how well the fine-tuned LLM will generalize, i.e., how well it will perform on unseen datasets. Methods have been proposed to improve generalization by optimizing in-context prompts, or by using meta-learning to fine-tune LLMs. However, these methods are expensive in memory and computation, requiring either long-context prompts or saving copies of parameters and using second-order gradient updates. To address these challenges, we propose Amortized Bayesian Meta-Learning for LoRA (ABMLL). This method builds on amortized Bayesian meta-learning for smaller models, adapting this approach to LLMs while maintaining its computational efficiency. We reframe task-specific and global parameters in the context of LoRA and use a new hyperparameter to balance reconstruction accuracy and the fidelity of task-specific parameters to the global ones. ABMLL provides effective generalization and scales to large models such as LLAMA3-8B. Furthermore, as a result of using a Bayesian framework, ABMLL provides improved uncertainty quantification. We test ABMLL on CrossFit and Unified-QA datasets and find that it outperforms existing methods on these benchmarks in terms of both accuracy and expected calibration error. 
- oai:arXiv.org:2508.14285v2 + A Generic Machine Learning Framework for Radio Frequency Fingerprinting + https://arxiv.org/abs/2510.09775 + arXiv:2510.09775v2 Announce Type: replace-cross +Abstract: Fingerprinting radio frequency (RF) emitters typically involves finding unique characteristics that are featured in their received signal. These fingerprints are nuanced, but sufficiently detailed, motivating the pursuit of methods that can successfully extract them. The downstream task that requires the most meticulous RF fingerprinting (RFF) is known as specific emitter identification (SEI), which entails recognising each individual transmitter. RFF and SEI have a long history, with numerous defence and civilian applications such as signal intelligence, electronic surveillance, physical-layer authentication of wireless devices, to name a few. In recent years, data-driven RFF approaches have become popular due to their ability to automatically learn intricate fingerprints. They generally deliver superior performance when compared to traditional RFF techniques that are often labour-intensive, inflexible, and only applicable to a particular emitter type or transmission scheme. In this paper, we present a generic and versatile machine learning (ML) framework for data-driven RFF with several popular downstream tasks such as SEI, data association (EDA) and RF emitter clustering (RFEC). It is emitter-type agnostic. We then demonstrate the introduced framework for several tasks using real RF datasets for spaceborne surveillance, signal intelligence and countering drones applications. + oai:arXiv.org:2510.09775v2 cs.LG - cs.AI + cs.CR stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Liyi Zhang, Jake Snell, Thomas L. 
Griffiths - - - A statistical test for network similarity - https://arxiv.org/abs/2508.14399 - arXiv:2508.14399v2 Announce Type: replace-cross -Abstract: In this article, we revisit and expand our prior work on graph similarity. As with our earlier work, we focus on a view of similarity which does not require node correspondence between graphs under comparison. Our work is suited to the temporal study of networks, change-point and anomaly detection and simple comparisons of static graphs. It provides a similarity metric for the study of (weakly) connected graphs. Our work proposes a metric designed to compare networks and assess the (dis)similarity between them. For example, given three different graphs with possibly different numbers of nodes, $G_1$, $G_2$ and $G_3$, we aim to answer two questions: a) "How different is $G_1 $ from $G_2$?" and b) "Is graph $G_3$ more similar to $G_1$ or to $G_2$?". We illustrate the value of our test and its accuracy through several new experiments, using synthetic and real-world graphs. - oai:arXiv.org:2508.14399v2 - cs.DM - stat.AP - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by-nc-nd/4.0/ - Pierre Miasnikof, Alexander Y. Shetopaloff + Alex Hiles, Bashar I. Ahmad - Contractive kinetic Langevin samplers beyond global Lipschitz continuity - https://arxiv.org/abs/2509.12031 - arXiv:2509.12031v2 Announce Type: replace-cross -Abstract: In this paper, we examine the problem of sampling from log-concave distributions with (possibly) superlinear gradient growth under kinetic (underdamped) Langevin algorithms. Using a carefully tailored taming scheme, we propose two novel discretizations of the kinetic Langevin SDE, and we show that they are both contractive and satisfy a log-Sobolev inequality. Building on this, we establish a series of non-asymptotic bounds in $2$-Wasserstein distance between the law reached by each algorithm and the underlying target measure. 
- oai:arXiv.org:2509.12031v2 - math.PR - cs.NA - math.NA - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Online Price Competition under Generalized Linear Demands + https://arxiv.org/abs/2511.10718 + arXiv:2511.10718v3 Announce Type: replace-cross +Abstract: We study sequential price competition among $N$ sellers, each influenced by the pricing decisions of their rivals. Specifically, the demand function for each seller $i$ follows the single index model $\lambda_i(\mathbf{p}) = \mu_i(\langle \boldsymbol{\theta}_{i,0}, \mathbf{p} \rangle)$, with known increasing link $\mu_i$ and unknown parameter $\boldsymbol{\theta}_{i,0}$, where the vector $\mathbf{p}$ denotes the vector of prices offered by all the sellers simultaneously at a given instant. Each seller observes only their own realized demand -- unobservable to competitors -- and the prices set by rivals. Our framework generalizes existing approaches that focus solely on linear demand models. We propose a novel decentralized policy, PML-GLUCB, that combines penalized MLE with an upper-confidence pricing rule, removing the need for coordinated exploration phases across sellers -- which is integral to previous linear models -- and accommodating both binary and real-valued demand observations. Relative to a dynamic benchmark policy, each seller achieves $O(N^{2}\sqrt{T}\log(T))$ regret, which essentially matches the optimal rate known in the linear setting. A significant technical contribution of our work is the development of a variant of the elliptical potential lemma -- typically applied in single-agent systems -- adapted to our competitive multi-agent environment. 
+ oai:arXiv.org:2511.10718v3 cs.GT math.ST stat.ME stat.TH Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by/4.0/ Daniele Bracale, Moulinath Banerjee, Cong Shi, Yuekai Sun - Graph Coloring for Multi-Task Learning - https://arxiv.org/abs/2509.16959 - arXiv:2509.16959v4 Announce Type: replace-cross -Abstract: When different objectives conflict with each other in multi-task learning, gradients begin to interfere and slow convergence, thereby potentially reducing the final model's performance. To address this, we introduce SON-GOKU, a scheduler that computes gradient interference, constructs an interference graph, and then applies greedy graph-coloring to partition tasks into groups that align well with each other. At each training step, only one group (color class) of tasks is activated, and the grouping partition is constantly recomputed as task relationships evolve throughout training. By ensuring that each mini-batch contains only tasks that pull the model in the same direction, our method improves the effectiveness of any underlying multi-task learning optimizer without additional tuning. Since tasks within these groups will update in compatible directions, multi-task learning will improve model performance rather than impede it. Empirical results on six different datasets show that this interference-aware graph-coloring approach consistently outperforms baselines and state-of-the-art multi-task optimizers. We provide extensive theory showing why grouping and sequential updates improve multi-task learning, with guarantees on descent, convergence, and accurate identification of which tasks conflict or align.
- oai:arXiv.org:2509.16959v4 + Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory + https://arxiv.org/abs/2511.23083 + arXiv:2511.23083v2 Announce Type: replace-cross +Abstract: High-capacity kernel Hopfield networks exhibit a \textit{Ridge of Optimization} characterized by extreme stability. While previously linked to \textit{Spectral Concentration}, its origin remains elusive. Here, we analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the Edge of Stability, a critical boundary where the Fisher Information Matrix becomes singular. We demonstrate that the apparent Euclidean force antagonism is a manifestation of \textit{Dual Equilibrium} in the Riemannian space. This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality. + oai:arXiv.org:2511.23083v2 cs.LG - cs.AI cs.NE stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Santosh Patapati - - - CLAPS: Posterior-Aware Conformal Intervals via Last-Layer Laplace - https://arxiv.org/abs/2512.01384 - arXiv:2512.01384v2 Announce Type: replace-cross -Abstract: We present CLAPS, a posterior-aware conformal regression method that pairs a Last-Layer Laplace Approximation with split-conformal calibration. From the resulting Gaussian posterior, CLAPS defines a simple two-sided posterior CDF score that aligns the conformity metric with the full predictive shape, not just a point estimate. This alignment yields narrower prediction intervals at the same target coverage, especially on small to medium tabular datasets where data are scarce and uncertainty modeling matters. We also provide a lightweight diagnostic suite that separates aleatoric and epistemic components and visualizes posterior behavior, helping practitioners understand why intervals shrink when they do. 
Across multiple benchmarks using the same MLP backbone, CLAPS consistently attains nominal coverage with improved efficiency and minimal overhead, offering a clear, practical upgrade to residual-based conformal baselines. - oai:arXiv.org:2512.01384v2 - cs.LG - stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 replace-cross http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh - Mitigating the Curse of Detail: Scaling Arguments for Feature Learning and Sample Complexity - https://arxiv.org/abs/2512.04165 - arXiv:2512.04165v3 Announce Type: replace-cross -Abstract: Two pressing topics in the theory of deep learning are the interpretation of feature learning mechanisms and the determination of implicit bias of networks in the rich regime. Current theories of rich feature learning often appear in the form of high-dimensional non-linear equations, which require computationally intensive numerical solutions. Given the many details that go into defining a deep learning problem, this complexity is a significant and often unavoidable challenge. Here, we propose a powerful heuristic route for predicting the data and width scales at which various patterns of feature learning emerge. This form of scale analysis is considerably simpler than exact theories and reproduces the scaling exponents of various known results. In addition, we make novel predictions on complex toy architectures, such as three-layer non-linear networks and attention heads, thus extending the scope of first-principle theories of deep learning. - oai:arXiv.org:2512.04165v3 + A Multivariate Bernoulli-Based Sampling Method for Multi-Label Data with Application to Meta-Research + https://arxiv.org/abs/2512.08371 + arXiv:2512.08371v2 Announce Type: replace-cross +Abstract: Datasets may contain observations with multiple labels. 
If the labels are not mutually exclusive and vary greatly in frequency, it is challenging to obtain a sample that includes sufficient observations with scarcer labels to support inferences about those labels, while deviating from the population frequencies in a known manner. In this paper, we consider a multivariate Bernoulli distribution as our underlying distribution of a multi-label problem. We present a novel sampling algorithm that takes label dependencies into account. It uses observed label frequencies to estimate multivariate Bernoulli distribution parameters and calculate weights for each label combination. This approach ensures the weighted sampling acquires target distribution characteristics while accounting for label dependencies. We applied this approach to a sample of research articles from Web of Science labeled with 64 biomedical topic categories. We aimed to preserve category frequency order, reduce frequency differences between most and least common categories, and account for category dependencies. This approach produced a more balanced sub-sample, enhancing the representation of minority categories. + oai:arXiv.org:2512.08371v2 cs.LG stat.ML Thu, 11 Dec 2025 00:00:00 -0500 replace-cross http://creativecommons.org/licenses/by-nc-nd/4.0/ Simon Chung, Colby J. Vorland, Donna L. Maney, Andrew W. Brown - Uncertainty Quantification for Scientific Machine Learning using Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KAN) - https://arxiv.org/abs/2512.05306 - arXiv:2512.05306v2 Announce Type: replace-cross -Abstract: Kolmogorov-Arnold Networks have emerged as interpretable alternatives to traditional multi-layer perceptrons. However, standard implementations lack principled uncertainty quantification capabilities essential for many scientific applications. 
We present a framework integrating sparse variational Gaussian process inference with the Kolmogorov-Arnold topology, enabling scalable Bayesian inference with computational complexity quasi-linear in sample size. Through analytic moment matching, we propagate uncertainty through deep additive structures while maintaining interpretability. We use three example studies to demonstrate the framework's ability to distinguish aleatoric from epistemic uncertainty: calibration of heteroscedastic measurement noise in fluid flow reconstruction, quantification of prediction confidence degradation in multi-step forecasting of advection-diffusion dynamics, and out-of-distribution detection in convolutional autoencoders. These results suggest Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KANs) are a promising architecture for uncertainty-aware learning in scientific machine learning. - oai:arXiv.org:2512.05306v2 + DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning + https://arxiv.org/abs/2512.08671 + arXiv:2512.08671v2 Announce Type: replace-cross +Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). 
Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor. + oai:arXiv.org:2512.08671v2 cs.LG stat.ML - Wed, 10 Dec 2025 00:00:00 -0500 + Thu, 11 Dec 2025 00:00:00 -0500 replace-cross - http://creativecommons.org/licenses/by/4.0/ - Y. Sungtaek Ju + http://arxiv.org/licenses/nonexclusive-distrib/1.0/ + Huzaifa Arif