diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
@@ -7,2050 +7,1174 @@
http://www.rssboard.org/rss-specification
en-us
- Tue, 09 Dec 2025 05:00:02 +0000
+ Wed, 10 Dec 2025 05:00:17 +0000
+ rss-help@arxiv.org
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ Saturday
+ Sunday
- From Tail Universality to Bernstein-von Mises: A Unified Statistical Theory of Semi-Implicit Variational Inference
- https://arxiv.org/abs/2512.06107
- arXiv:2512.06107v1 Announce Type: new
-Abstract: Semi-implicit variational inference (SIVI) constructs approximate posteriors of the form $q(\theta) = \int k(\theta | z) r(dz)$, where the conditional kernel is parameterized and the mixing base is fixed and tractable. This paper develops a unified "approximation-optimization-statistics" theory for such families.
- On the approximation side, we show that under compact L1-universality and a mild tail-dominance condition, semi-implicit families are dense in L1 and can achieve arbitrarily small forward Kullback-Leibler (KL) error. We also identify two sharp obstructions to global approximation: (i) an Orlicz tail-mismatch condition that induces a strictly positive forward-KL gap, and (ii) structural restrictions, such as non-autoregressive Gaussian kernels, that force "branch collapse" in conditional distributions. For each obstruction we give a minimal structural modification that restores approximability.
- On the optimization side, we establish finite-sample oracle inequalities and prove that the empirical SIVI objectives L(K,n) $\Gamma$-converge to their population limit as n and K tend to infinity. These results give consistency of empirical maximizers, quantitative control of finite-K surrogate bias, and stability of the resulting variational posteriors.
- Combining the approximation and optimization analyses yields the first general end-to-end statistical theory for SIVI: we characterize precisely when SIVI can recover the target distribution, when it cannot, and how architectural and algorithmic choices govern the attainable asymptotic behavior.
- oai:arXiv.org:2512.06107v1
+ Mixed Exponential Statistical Structures and Their Approximation Operators
+ https://arxiv.org/abs/2512.07870
+ arXiv:2512.07870v1 Announce Type: new
+Abstract: The paper examines the construction and analysis of a new class of mixed exponential statistical structures that combine the properties of stochastic models and linear positive operators. The relevance of the topic is driven by the growing need to develop a unified theoretical framework capable of describing both continuous and discrete random structures that possess approximation properties. The aim of the study is to introduce and analyze a generalized family of mixed exponential statistical structures and their corresponding linear positive operators, which include known operators as particular cases. We define auxiliary statistical structures B and H through differential relations between their elements, and construct the main Phillips-type structure. Recurrence relations for the central moments are obtained, their properties are established, and the convergence and approximation accuracy of the constructed operators are investigated. The proposed approach allows mixed exponential structures to be viewed as a generalization of known statistical systems, providing a unified analytical and stochastic description. The results demonstrate that mixed exponential statistical structures can be used to develop new classes of positive operators with controllable preservation and approximation properties. The proposed methodology forms a basis for further research in constructing multidimensional statistical structures, analyzing operators in weighted spaces, and studying their asymptotic characteristics.
+ oai:arXiv.org:2512.07870v1
+ math.ST
- stat.ML
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sean Plummer
+ http://creativecommons.org/licenses/by/4.0/
+ Yurii Volkov, Oleksandr Volkov
- Spatial Analysis for AI-segmented Histopathology Images: Methods and Implementation
- https://arxiv.org/abs/2512.06116
- arXiv:2512.06116v1 Announce Type: new
-Abstract: Quantitatively characterizing the spatial organization of cells and their interaction is essential for understanding cancer progression and immune response. Recent advances in machine intelligence have enabled large-scale segmentation and classification of cell nuclei from digitized histopathology slides, generating massive point pattern and marked point pattern datasets. However, accessible tools for quantitative analysis of such complex cellular spatial organization remain limited. In this paper, we first review 27 traditional spatial summary statistics, areal indices, and topological features applicable to point pattern data. Then, we introduce SASHIMI (Spatial Analysis for Segmented Histopathology Images using Machine Intelligence), a browser-based tool for real-time spatial analysis of artificial intelligence (AI)-segmented histopathology images. SASHIMI computes a comprehensive suite of mathematically grounded descriptors, including spatial statistics, proximity-based measures, grid-level similarity indices, spatial autocorrelation measures, and topological descriptors, to quantify cellular abundance and cell-cell interaction. Applied to two cancer datasets, oral potentially malignant disorders (OPMD) and non-small-cell lung cancer (NSCLC), SASHIMI identified multiple spatial features significantly associated with patient survival outcomes. SASHIMI provides an accessible and reproducible platform for single-cell-level spatial profiling of tumor morphological architecture, offering a robust framework for quantitative exploration of tissue organization across cancer types.
- oai:arXiv.org:2512.06116v1
+ Functional Random Forest with Adaptive Cost-Sensitive Splitting for Imbalanced Functional Data Classification
+ https://arxiv.org/abs/2512.07888
+ arXiv:2512.07888v1 Announce Type: new
+Abstract: Classification of functional data, where observations are curves or trajectories, poses unique challenges, particularly under severe class imbalance. Traditional Random Forest algorithms, while robust for tabular data, often fail to capture the intrinsic structure of functional observations and struggle with minority class detection. This paper introduces Functional Random Forest with Adaptive Cost-Sensitive Splitting (FRF-ACS), a novel ensemble framework designed for imbalanced functional data classification. The proposed method leverages basis expansions and Functional Principal Component Analysis (FPCA) to represent curves efficiently, enabling trees to operate on low-dimensional functional features. To address imbalance, we incorporate a dynamic cost-sensitive splitting criterion that adjusts class weights locally at each node, combined with a hybrid sampling strategy integrating functional SMOTE and weighted bootstrapping. Additionally, curve-specific similarity metrics replace traditional Euclidean measures to preserve functional characteristics during leaf assignment. Extensive experiments on synthetic and real-world datasets, including biomedical signals and sensor trajectories, demonstrate that FRF-ACS significantly improves minority class recall and overall predictive performance compared to existing functional classifiers and imbalance-handling techniques. This work provides a scalable, interpretable solution for high-dimensional functional data analysis in domains where minority class detection is critical.
+ oai:arXiv.org:2512.07888v1
+ stat.ML
+ cs.AI
+ cs.LG
- stat.AP
- q-bio.QM
- Tue, 09 Dec 2025 00:00:00 -0500
+ stat.CO
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Y. Park, F. Wu, X. Feng, S. Yang, E. H. Wang, B. Yao, C. Moon, G. Xiao, Q. Li
+ http://creativecommons.org/licenses/by/4.0/
+ Fahad Mostafa, Hafiz Khan
- Mode Choice Heterogeneity Among Zero-Vehicle Households: A Latent Class Cluster Approach
- https://arxiv.org/abs/2512.06127
- arXiv:2512.06127v1 Announce Type: new
-Abstract: In transportation planning, Zero-Vehicle Households (ZVHs) are often treated as a uniform group with limited mobility options and assumed to rely heavily on walking or public transit. However, such assumptions overlook the diverse travel strategies ZVHs employ in response to varying trip needs and sociodemographic factors. This study addresses this gap by applying a weighted Latent Class Cluster Analysis (LCCA) to data from the 2022 National Household Travel Survey (NHTS) to uncover distinct mobility patterns within the ZVH population. Using travel mode and trip purpose as indicators and demographic, economic, and built environment variables as covariates, we identified three latent classes: shared-mobility errand workers (36.3%), who primarily use transit and ridehailing for commuting and essential activities; car-based shoppers (29.9%), who depend on informal vehicle access for longer discretionary trips; and active-travel shoppers (33.8%), who rely on walking or cycling for short, local shopping-oriented travel. These behavioral findings enable policymakers to develop differentiated planning solutions tailored to the specific needs of each segment of the ZVH population across varied geographic and demographic settings.
- oai:arXiv.org:2512.06127v1
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Bayesian Semiparametric Joint Dynamic Model for Multitype Recurrent Events and a Terminal Event
+ https://arxiv.org/abs/2512.07973
+ arXiv:2512.07973v1 Announce Type: new
+Abstract: In many biomedical studies, recurrent events such as myocardial infarction, stroke, and heart failure often result in a terminal outcome such as death. Understanding the relationship among the multi-type recurrent events and the terminal event is essential for developing interventions to delay the terminal event. This study introduces a Bayesian semiparametric joint dynamic model for type-specific hazards that quantifies how the type-specific event history dynamically changes the intensities of each recurrent event type and the terminal event over calendar time. The framework jointly captures unmeasured heterogeneity through a shared frailty term, cumulative effects of past recurrent events on themselves and terminal events, and the effects of covariates. Gamma process priors (GPP) are used as nonparametric priors for the baseline cumulative hazard functions (CHFs), and parametric priors are used for the covariate effects and frailty. For a more accurate risk assessment, this model provides analytical closed-form estimators of the CHFs and frailties. The Breslow-Aalen-type estimators of CHFs are special cases of our estimators when the precision parameters are set to zero. We evaluate the performance of the model through extensive simulations and apply the method to the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). The analysis offers a practical, past-event-effect-based risk assessment for acute and chronic cardiovascular recurrent events with death as a terminal endpoint, and provides clinicians with new information to support the prevention and treatment of cardiovascular disease.
+ oai:arXiv.org:2512.07973v1
+ stat.ME
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Nancy Kasamala, Arthur Mukwaya, Nana Kankam Gyimah, Judith Mwakalonge, Gurcan Comert, Saidi Siuhi, Akinbobola Jegede
+ Mithun Kumar Acharjee, AKM Fazlur Rahman
- Forests of Uncertaint(r)ees: Using tree-based ensembles to estimate probability distributions of future conflict
- https://arxiv.org/abs/2512.06210
- arXiv:2512.06210v1 Announce Type: new
-Abstract: Predictions of fatalities from violent conflict on the PRIO-GRID-month (pgm) level are characterized by high levels of uncertainty, limiting their usefulness in practical applications. We discuss the two main sources of uncertainty for this prediction task, the nature of violent conflict and data limitations, embedding this in the wider literature on uncertainty quantification in machine learning. We develop a strategy to quantify uncertainty in conflict forecasting, shifting from traditional point predictions to full predictive distributions. Our approach compares and combines multiple tree-based classifiers and distributional regressors in a custom auto-ML setup, estimating distributions for each pgm individually. We also test the integration of regional models in spatial ensembles as a potential avenue to reduce uncertainty. The models are able to consistently outperform a suite of benchmarks derived from conflict history in predictions up to one year in advance, with performance driven by regions where conflict was observed. With our evaluation, we emphasize the need to understand how a metric behaves for a given prediction problem, in our case characterized by extremely high zero-inflatedness. While not resulting in better predictions, the integration of smaller models does not decrease performance for this prediction task, opening avenues to integrate data sources with less spatial coverage in the future.
- oai:arXiv.org:2512.06210v1
+ The limit joint distributions of some statistics used in testing the quality of random number generators
+ https://arxiv.org/abs/2512.08002
+ arXiv:2512.08002v1 Announce Type: new
+Abstract: The limit joint distribution of statistics that are generalizations of some statistics from the NIST STS, TestU01, and other packages is found under the following hypotheses $H_0$ and $H_1$. Hypothesis $H_0$ states that the tested sequence is a sequence of independent random vectors with a known distribution, and the simple alternative hypothesis $H_1$ converges in some sense to $H_0$ with increasing sample size. In addition, an analogue of the Berry-Esseen inequality is obtained for the statistics under consideration, and conditions for their asymptotic independence are found.
+ oai:arXiv.org:2512.08002v1
+ math.ST
- stat.AP
- cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ stat.TH
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Daniel Mittermaier, Tobias Bohne, Martin Hofer, Daniel Racek
+ M. P. Savelov
- Contextual Strongly Convex Simulation Optimization: Optimize then Predict with Inexact Solutions
- https://arxiv.org/abs/2512.06270
- arXiv:2512.06270v1 Announce Type: new
-Abstract: In this work, we study contextual strongly convex simulation optimization and adopt an "optimize then predict" (OTP) approach for real-time decision making. In the offline stage, simulation optimization is conducted across a set of covariates to approximate the optimal-solution function; in the online stage, decisions are obtained by evaluating this approximation at the observed covariate. The central theoretical challenge is to understand how the inexactness of solutions generated by simulation-optimization algorithms affects the optimality gap, which is overlooked in existing studies. To address this, we develop a unified analysis framework that explicitly accounts for both solution bias and variance. Using Polyak-Ruppert averaging SGD as an illustrative simulation-optimization algorithm, we analyze the optimality gap of OTP under four representative smoothing techniques: $k$-nearest neighbors, kernel smoothing, linear regression, and kernel ridge regression. We establish convergence rates, derive the optimal allocation of the computational budget $\Gamma$ between the number of design covariates and the per-covariate simulation effort, and demonstrate that the convergence rate can approximately achieve $\Gamma^{-1}$ under an appropriate smoothing technique and sample-allocation rule. Finally, through a numerical study, we validate the theoretical findings and demonstrate the effectiveness and practical value of the proposed approach.
- oai:arXiv.org:2512.06270v1
+ Provable Diffusion Posterior Sampling for Bayesian Inversion
+ https://arxiv.org/abs/2512.08022
+ arXiv:2512.08022v1 Announce Type: new
+Abstract: This paper proposes a novel diffusion-based posterior sampling method within a plug-and-play (PnP) framework. Our approach constructs a probability transport from an easy-to-sample terminal distribution to the target posterior, using a warm-start strategy to initialize the particles. To approximate the posterior score, we develop a Monte Carlo estimator in which particles are generated using Langevin dynamics, avoiding the heuristic approximations commonly used in prior work. The score governing the Langevin dynamics is learned from data, enabling the model to capture rich structural features of the underlying prior distribution. On the theoretical side, we provide non-asymptotic error bounds, showing that the method converges even for complex, multi-modal target posterior distributions. These bounds explicitly quantify the errors arising from posterior score estimation, the warm-start initialization, and the posterior sampling procedure. Our analysis further clarifies how the prior score-matching error and the condition number of the Bayesian inverse problem influence overall performance. Finally, we present numerical experiments demonstrating the effectiveness of the proposed method across a range of inverse problems.
+ oai:arXiv.org:2512.08022v1
+ stat.ML
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Nifei Lin, Heng Luo, L. Jeff Hong
-
-
- The Bag-and-Whisker Plot: A New Bagplot for Bivariate Data
- https://arxiv.org/abs/2512.06314
- arXiv:2512.06314v1 Announce Type: new
-Abstract: The bagplot, also known as the "bag-and-bolster plot", is a notable extension of the boxplot from univariate to bivariate data. Although widely used, its practical application is hindered by two key limitations: the fixed inflation factor for outlier detection that does not adapt to the sample size, and the unstable convex hull used to visualize its fence. In this paper, we propose a new bagplot, namely the "bag-and-whisker plot", an improved method that addresses these limitations. Our framework recasts outlier detection as a multiple testing problem, yielding a data-adaptive fence that controls statistical error rates and enhances the reliability of outlier identification. To further resolve graphical instability, we introduce a refined visualization that replaces the convex hull (the bolster) with a direct rendering of the statistical fence, complemented by granular whiskers that effectively illustrate the data's spread. Extensive simulations and real-world data analyses demonstrate that our new bagplot exhibits superior adaptivity and robustness compared to the existing standard, and thus can be highly recommended for practical use.
- oai:arXiv.org:2512.06314v1
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shenghao Qin, Bowen Gang, Tiejun Tong, Hengjian Cui
-
-
- Subsampling Confidence Bound for Persistent Diagram via Time-delay Embedding
- https://arxiv.org/abs/2512.06324
- arXiv:2512.06324v1 Announce Type: new
-Abstract: Time-delay embedding is a fundamental technique in Topological Data Analysis (TDA) for reconstructing the phase space dynamics of time-series data. While persistent homology effectively identifies topological features, such as cycles associated with periodicity, a rigorous statistical framework for quantifying the uncertainty of these features has been lacking in this context. In this paper, we propose a subsampling based method to construct confidence sets for persistence diagrams derived from time-delay embeddings. We establish finite sample guarantees for the validity of these confidence bounds under regularity conditions specifically for $C^{1,1}$ functions with positive reach and prove their asymptotic convergence as the embedding dimension tends to infinity. This framework provides a principled statistical test for periodicity, enabling the distinction between true periodic signals and non-periodic approximations. Simulation studies demonstrate that our method achieves detection performance comparable to the Generalized Lomb-Scargle periodogram on periodic data while exhibiting superior robustness in distinguishing non-periodic signals with time-varying frequencies, such as chirp signals.
- oai:arXiv.org:2512.06324v1
+ cs.NA
+ math.NA
+ math.PR
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by/4.0/
- Donghyun Park, Junhyun An, Taehyoung Kim, Jisu Kim
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Jinyuan Chang, Chenguang Duan, Yuling Jiao, Ruoxuan Li, Jerry Zhijian Yang, Cheng Yuan
- Modeling Spatio-temporal Extremes via Conditional Variational Autoencoders
- https://arxiv.org/abs/2512.06348
- arXiv:2512.06348v1 Announce Type: new
-Abstract: Extreme weather events are widely studied in fields such as agriculture, ecology, and meteorology. The spatio-temporal co-occurrence of extreme events can strengthen or weaken under changing climate conditions. In this paper, we propose a novel approach to model spatio-temporal extremes by integrating climate indices via a conditional variational autoencoder (cXVAE). A convolutional neural network (CNN) is embedded in the decoder to convolve climatological indices with the spatial dependence within the latent space, thereby allowing the decoder to be dependent on the climate variables. There are three main contributions here. First, we demonstrate through extensive simulations that the proposed conditional XVAE accurately emulates spatial fields and recovers spatially and temporally varying extremal dependence with very low computational cost post training. Second, we provide a simple, scalable approach to detecting condition-driven shifts and whether the dependence structure is invariant to the conditioning variable. Third, when dependence is found to be condition-sensitive, the conditional XVAE supports counterfactual experiments allowing intervention on the climate covariate and propagating the associated change through the learned decoder to quantify differences in joint tail risk, co-occurrence ranges, and return metrics. To demonstrate the practical utility and performance of the model in real-world scenarios, we apply our method to analyze the monthly maximum Fire Weather Index (FWI) over eastern Australia from 2014 to 2024 conditioned on the El Ni\~{n}o/Southern Oscillation (ENSO) index.
- oai:arXiv.org:2512.06348v1
- stat.ML
- cs.LG
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Defining 3-dimensional marine provinces with phytoplankton compositions
+ https://arxiv.org/abs/2512.08035
+ arXiv:2512.08035v1 Announce Type: new
+Abstract: Marine provinces rarely include fine-resolution biological data, and are often defined spatially across only latitude and longitude. Therefore, we aimed to determine how phytoplankton distributions define marine provinces across three dimensions (i.e., latitude, longitude, and depth). To do this, we developed a new algorithm called \texttt{bioprovince} which can be applied to compositional biological data. The algorithm first clusters compositional samples to identify spatially coherent groups of samples, then makes flexible province predictions in the broader 3-dimensional spatial grid based on environmental similarity. We applied \texttt{bioprovince} to phytoplankton Amplicon Sequencing Variants (ASVs) from five depth-resolved ocean transects spanning north-south in the Pacific Ocean. In the surface layer of the ocean, our method agreed well with traditional Longhurst provinces. In some cases, the method revealed that with the more granular taxonomic resolution afforded by ASVs, traditional Longhurst provinces were divided into smaller zones. Also, one of the major advances of this method is its ability to incorporate a third dimension, depth. Indeed, our analysis found significant depth-wise partitions throughout the Pacific, which in the equatorial region agreed remarkably well with the base of the euphotic zone. Our algorithm's ability to delineate 3-dimensional bioprovinces will enable scientists to discover new ecological interpretations of marine phytoplankton ecology and biogeography. Furthermore, as compositional biological data inherently exists in three spatial dimensions in nature, bioprovince is broadly applicable beyond marine plankton, offering a more holistic perspective on biological provinces across diverse environments.
+ oai:arXiv.org:2512.08035v1
+ stat.AP
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by-sa/4.0/
- Xiaoyu Ma, Likun Zhang, Christopher K. Wikle
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Rafael Catoia Pulgrossi, Nathan L R Williams, Yubin Raut, Jed Fuhrman, Sangwon Hyun
- Goodness-of-fit Tests for Heavy-tailed Random Fields
- https://arxiv.org/abs/2512.06412
- arXiv:2512.06412v1 Announce Type: new
-Abstract: We develop goodness-of-fit tests for max-stable random fields, which are used to model heavy-tailed spatial data. The test statistics are constructed based on the Fourier transforms of the indicators of extreme values in the heavy-tailed spatial data, whose asymptotic distribution is a Gaussian random field under a hypothesized max-stable random field. Since the covariance structure of the limiting Gaussian random field lacks an explicit expression, we propose a stationary bootstrap procedure for spatial fields to approximate critical values. Simulation studies confirm the theoretical distributional results, and applications to PM2.5 and temperature data illustrate the practical utility of the proposed method for model assessment.
- oai:arXiv.org:2512.06412v1
+ ADOPT: Additive Optimal Transport Regression
+ https://arxiv.org/abs/2512.08118
+ arXiv:2512.08118v1 Announce Type: new
+Abstract: Regression analysis for responses taking values in general metric spaces has received increasing attention, particularly for settings with Euclidean predictors $X \in \mathbb{R}^p$ and non-Euclidean responses $Y \in ( \mathcal{M}, d)$. While additive regression is a powerful tool for enhancing interpretability and mitigating the curse of dimensionality in the presence of multivariate predictors, its direct extension is hindered by the absence of vector space operations in general metric spaces. We propose a novel framework for additive optimal transport regression, which incorporates additive structure through optimal geodesic transports. A key idea is to extend the notion of optimal transports in Wasserstein spaces to general geodesic metric spaces. This unified approach accommodates a wide range of responses, including probability distributions, symmetric positive definite (SPD) matrices with various metrics and spherical data. The practical utility of the method is illustrated with correlation matrices derived from resting state fMRI brain imaging data.
+ oai:arXiv.org:2512.08118v1
+ stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ying Niu, Zhao Chen, Christina Dan Wang, Yuwei Zhao
+ Wookyeong Song, Hans-Georg M\"uller
- Community detection in heterogeneous signed networks
- https://arxiv.org/abs/2512.06428
- arXiv:2512.06428v1 Announce Type: new
-Abstract: Network data has attracted growing interest across scientific domains, prompting the development of various network models. Existing network analysis methods mainly focus on unsigned networks, whereas signed networks, consisting of both positive and negative edges, have been frequently encountered in practice but much less investigated. In this paper, we formally define strong and weak balance in signed networks, and propose a signed block $\beta$-model, which is capable of modeling strong- and weak-balanced signed networks simultaneously. We establish the identifiability of the proposed model by leveraging properties of bipartite graphs, and develop an efficient alternating updating algorithm to optimize the resulting log-likelihood function. More importantly, we establish the asymptotic consistencies of the proposed model in terms of both probability estimation and community detection. Its advantages are also demonstrated through extensive numerical experiments and the application to a real-world international relationship network.
- oai:arXiv.org:2512.06428v1
+ deepspat: An R package for modeling nonstationary spatial and spatio-temporal Gaussian and extremes data through deep deformations
+ https://arxiv.org/abs/2512.08137
+ arXiv:2512.08137v1 Announce Type: new
+Abstract: Nonstationarity in spatial and spatio-temporal processes is ubiquitous in environmental datasets, but is not often addressed in practice, due to a scarcity of statistical software packages that implement nonstationary models. In this article, we introduce the R software package deepspat, which allows for modeling, fitting and prediction with nonstationary spatial and spatio-temporal models applied to Gaussian and extremes data. The nonstationary models in our package are constructed using a deep multi-layered deformation of the original spatial or spatio-temporal domain, and are straightforward to implement. Model parameters are estimated using gradient-based optimization of customized loss functions with tensorflow, which implements automatic differentiation. The functionalities of the package are illustrated through simulation studies and an application to Nepal temperature data.
+ oai:arXiv.org:2512.08137v1
+ stat.CO
+ stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by/4.0/
- Yuwen Wang, Shiwen Ye, Jingnan Zhang, Junhui Wang
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Quan Vu, Xuanjie Shao, Rapha\"el Huser, Andrew Zammit-Mangion
- Canonical Tail Dependence for Soft Extremal Clustering of Multichannel Brain Signals
- https://arxiv.org/abs/2512.06435
- arXiv:2512.06435v1 Announce Type: new
-Abstract: We develop a novel characterization of extremal dependence between two cortical regions of the brain when its signals display extremely large amplitudes. We show that connectivity in the tails of the distribution reveals unique features of extreme events (e.g., seizures) that can help to identify their occurrence. Numerous studies have established that connectivity-based features are effective for discriminating brain states. Here, we demonstrate the advantage of the proposed approach: that tail connectivity provides additional discriminatory power, enabling more accurate identification of extreme-related events and improved seizure risk management. Common approaches in tail dependence modeling use pairwise summary measures or parametric models. However, these approaches do not identify channels that drive the maximal tail dependence between two groups of signals -- information that is useful when analyzing electroencephalography of epileptic patients, where specific channels are responsible for seizure occurrences. A familiar approach in traditional signal processing is canonical correlation, which we extend to the tails to develop a visualization of extremal channel contributions. Through the tail pairwise dependence matrix (TPDM), we develop a computationally efficient estimator for our canonical tail dependence measure. Our method is then used for accurate frequency-based soft clustering of neonates, distinguishing those with seizures from those without.
- oai:arXiv.org:2512.06435v1
- stat.ML
- cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ Non-parametric assessment of the calibration of individualized treatment effects
+ https://arxiv.org/abs/2512.08140
+ arXiv:2512.08140v1 Announce Type: new
+Abstract: An important aspect of the performance of algorithms that predict individualized treatment effects (ITE) is moderate calibration, i.e., that among individuals with a predicted treatment effect of z, the average treatment effect equals z. The assessment of moderate calibration is a challenging task on two fronts: counterfactual responses are unobserved, and quantifying the conditional response function for models that generate continuous predicted values requires regularization or parametric modeling. Perhaps because of these challenges, there is currently no inferential method for the null hypothesis that an ITE model is moderately calibrated in a population. In this work, we propose non-parametric methods for the assessment of moderate calibration of ITE models for binary outcomes using data from a randomized trial. These methods simultaneously resolve both challenges, resulting in novel numerical, graphical, and inferential methods for the assessment of moderate calibration. The key idea is to formulate a stochastic process for the cumulative prediction errors that obeys a functional central limit theorem, enabling the use of the properties of Brownian motion for asymptotic inference. We propose two approaches to construct this process from a sample: a conditional approach that relies on predicted risks (often an output of ITE models), and a marginal approach based on replacing the cumulative conditional expected value and variance terms with their marginal counterparts. Numerical simulations confirm the desirable properties of both approaches and their ability to detect miscalibration of different forms. We use a case study to provide practical suggestions on graphical presentation and the interpretation of results. Moderate calibration of predicted ITEs can be assessed without requiring regularization techniques or making assumptions about the functional form of treatment response.
+ oai:arXiv.org:2512.08140v1
+ stat.ME
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Mara Sherlin Talento, Jordan Richards, Raphael Huser, Hernando Ombao
+ Mohsen Sadatsafavi, Jeroen Hoogland, Thomas P. A. Debray, John Petkau
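As a rough illustration of the cumulative prediction-error process described in the abstract above: the following Python sketch assumes a 1:1 randomized trial with a binary outcome, uses the IPW-transformed outcome as an unbiased proxy for the individual effect, and compares the scaled cumulative error to a Brownian-motion band. The function name, the proxy construction, the crude variance scaling, and the 2.24 cutoff (roughly the 95% quantile of sup |B(t)| on [0,1]) are illustrative assumptions, not the authors' exact estimator.

    import numpy as np

    def cumulative_ite_errors(tau_hat, y, w, p=0.5):
        # Index the process by the prediction scale: sort by predicted ITE.
        order = np.argsort(tau_hat)
        # IPW-transformed outcome, unbiased for the individual effect
        # under randomization with known P(W=1) = p.
        proxy = y[order] * (w[order] - p) / (p * (1.0 - p))
        err = proxy - tau_hat[order]
        # Scaled partial sums behave approximately like Brownian motion
        # on [0, 1] under moderate calibration (heuristic scaling).
        s = np.cumsum(err) / np.sqrt(len(err) * err.var())
        reject = bool(np.abs(s).max() > 2.24)  # approx. 95% band for sup|B|
        return s, reject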
- Spatio-temporal Shared-Field Modeling of Beluga and Bowhead Whale Sightings Using a Joint Marked Log-Gaussian Cox Process
- https://arxiv.org/abs/2512.06450
- arXiv:2512.06450v1 Announce Type: new
-Abstract: We analyze a decade of aerial survey whale sighting data (2010-2019) to model the spatio-temporal distributions and group sizes of beluga (Delphinapterus leucas) and bowhead (Balaena mysticetus) whales in the United States Arctic. To jointly model these species, we develop a multi-species Log-Gaussian Cox Process (LGCP) in which species-specific intensity surfaces are linked through a shared latent spatial Gaussian field. This structure allows the model to capture broad spatial patterns common to both species while still accommodating species-level responses to environmental covariates and seasonal variation. The latent field is represented using the Stochastic Partial Differential Equation (SPDE) approach with an anisotropic Mat\'ern covariance, implemented on an ocean-constrained triangulated mesh so that spatial dependence aligns with marine geography. Whale group size is incorporated through a marked point process extension with species-specific negative binomial marks, allowing occurrence and group sizes to be jointly analyzed within a unified framework. Inference is carried out using the Integrated Nested Laplace Approximation (INLA), enabling efficient model fitting over a decade of survey effort. The results highlight persistent multi-species hotspots and distinct environmental associations for each species, demonstrating the value of shared-field LGCPs for joint species distribution modeling in data-sparse and heterogeneous survey settings.
- oai:arXiv.org:2512.06450v1
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Propensity score adjustment when errors in achievement measures inform treatment assignment
+ https://arxiv.org/abs/2512.08144
+ arXiv:2512.08144v1 Announce Type: new
+Abstract: U.S. state education agencies mark schools displaying achievement gaps between demographic subgroups as needing improvement. Some schools may have few students in these subgroups, such that average end-of-year test scores only noisily measure the average "true" score--the score one would expect if students took the test many times. This, in addition to the masking of small subgroup averages in publicly available assessment data, poses challenges for evaluating interventions aimed at closing achievement gaps. We introduce propensity score estimates designed to achieve balance on subgroup average true scores. These estimates are available even when noisy measurements are not and improve overlap compared to those that ignore measurement error, leading to greater bias reduction of matching estimators. We demonstrate our methods through simulation and an application to a statewide initiative in Texas for curbing summer learning loss.
+ oai:arXiv.org:2512.08144v1
+ stat.ME
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Mauli Pant, Linda Fernandez, Indranil Sahoo
+ Joshua Wasserman, Michael R. Elliott, Ben B. Hansen
- Simultaneous Heterogeneity and Reduced-rank Learning for Multivariate Response Regression
- https://arxiv.org/abs/2512.06514
- arXiv:2512.06514v1 Announce Type: new
-Abstract: Heterogeneous data are now ubiquitous in many applications in which correctly identifying the subgroups from a heterogeneous population is critical. Although there is an increasing body of literature on subgroup detection, existing methods mainly focus on the univariate response setting. In this paper, we propose a joint heterogeneity and reduced-rank learning framework to simultaneously identify the subgroup structure and estimate the covariate effects for heterogeneous multivariate response regression. In particular, our approach uses rank-constrained pairwise fusion penalization and conducts the subgroup analysis without requiring prior knowledge regarding the individual subgroup memberships. We implement the proposed approach by an alternating direction method of multipliers (ADMM) algorithm and show its convergence. We also establish the asymptotic properties for the resulting estimators under mild and interpretable conditions. A predictive information criterion is proposed to select the rank of the coefficient matrix with theoretical support. The effectiveness of the proposed approach is demonstrated through simulation studies and a real data application.
- oai:arXiv.org:2512.06514v1
+ Uncertainty quantification for mixed membership in multilayer networks with degree heterogeneity using Gaussian variational inference
+ https://arxiv.org/abs/2512.08146
+ arXiv:2512.08146v1 Announce Type: new
+Abstract: Analyzing multilayer networks is central to understanding complex relational measurements collected across multiple conditions or over time. A pivotal task in this setting is to quantify uncertainty in community structure while appropriately pooling information across layers and accommodating layer-specific heterogeneity. Building on the multilayer degree-corrected mixed-membership (ML-DCMM) model, which captures both stable community membership profiles and layer-specific vertex activity levels, we propose a Bayesian inference framework based on a spectral-assisted likelihood. We then develop a computationally efficient Gaussian variational inference algorithm implemented via stochastic gradient descent. Our theoretical analysis establishes a variational Bernstein--von Mises theorem, which provides a frequentist guarantee for using the variational posterior to construct confidence sets for mixed memberships. We demonstrate the utility of the method on a U.S. airport longitudinal network, where the procedure yields robust estimates, natural uncertainty quantification, and competitive performance relative to state-of-the-art methods.
+ oai:arXiv.org:2512.08146v1
+ stat.ME
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ math.ST
+ stat.CO
+ stat.TH
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1016/j.jmva.2025.105578
- Journal of Multivariate Analysis, Volume 213, May 2026, 105578
- Jie Wu, Bo Zhang, Daoji Li, Zemin Zheng
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Fangzheng Xie, Hsin-Hsiung Huang
- Hierarchical Clustering With Confidence
- https://arxiv.org/abs/2512.06522
- arXiv:2512.06522v1 Announce Type: new
-Abstract: Agglomerative hierarchical clustering is one of the most widely used approaches for exploring how observations in a dataset relate to each other. However, its greedy nature makes it highly sensitive to small perturbations in the data, often producing different clustering results and making it difficult to separate genuine structure from spurious patterns. In this paper, we show how randomizing hierarchical clustering can be useful not just for measuring stability but also for designing valid hypothesis testing procedures based on the clustering results.
- We propose a simple randomization scheme together with a method for constructing a valid p-value at each node of the hierarchical clustering dendrogram that quantifies evidence against performing the greedy merge. Our test controls the Type I error rate, works with any hierarchical linkage without case-specific derivations, and simulations show it is substantially more powerful than existing selective inference approaches. To demonstrate the practical utility of our p-values, we develop an adaptive $\alpha$-spending procedure that estimates the number of clusters, with a probabilistic guarantee on overestimation. Experiments on simulated and real data show that this estimate yields powerful clustering and can be used, for example, to assess clustering stability across multiple runs of the randomized algorithm.
- oai:arXiv.org:2512.06522v1
+ Bayesian Semiparametric Mixture Cure (Frailty) Models
+ https://arxiv.org/abs/2512.08173
+ arXiv:2512.08173v1 Announce Type: new
+Abstract: In recent years, mixture cure models have gained increasing popularity in survival analysis as an alternative to the Cox proportional hazards model, particularly in settings where a subset of patients is considered cured. The proportional hazards mixture cure model is especially advantageous when the presence of a cured fraction can be reasonably assumed, providing a more accurate representation of long-term survival dynamics. In this study, we propose a novel hierarchical Bayesian framework for the semiparametric mixture cure model, which accommodates both the inclusion and exclusion of a frailty component, allowing for greater flexibility in capturing unobserved heterogeneity among patients. Samples from the posterior distribution are obtained using a Markov chain Monte Carlo method, leveraging a hierarchical structure inspired by Bayesian Lasso. Comprehensive simulation studies are conducted across diverse scenarios to evaluate the performance and robustness of the proposed models. Bayesian model comparison and assessment are performed using various criteria. Finally, the proposed approaches are applied to two well-known datasets in the cure model literature: the E1690 melanoma trial and a colon cancer clinical trial.
+ oai:arXiv.org:2512.08173v1
+ stat.ME
+ math.ST
+ stat.CO
+ stat.ML
+ stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Di Wu, Jacob Bien, Snigdha Panigrahi
+ http://creativecommons.org/licenses/by/4.0/
+ Fatih K{\i}z{\i}laslan, Valeria Vitelli
- A Latent Variable Framework for Scaling Laws in Large Language Models
- https://arxiv.org/abs/2512.06553
- arXiv:2512.06553v1 Announce Type: new
-Abstract: We propose a statistical framework built on latent variable modeling for scaling laws of large language models (LLMs). Our work is motivated by the rapid emergence of numerous new LLM families with distinct architectures and training strategies, evaluated on an increasing number of benchmarks. This heterogeneity makes a single global scaling curve inadequate for capturing how performance varies across families and benchmarks. To address this, we propose a latent variable modeling framework in which each LLM family is associated with a latent variable that captures the common underlying features in that family. An LLM's performance on different benchmarks is then driven by its latent skills, which are jointly determined by the latent variable and the model's own observable features. We develop an estimation procedure for this latent variable model and establish its statistical properties. We also design efficient numerical algorithms that support estimation and various downstream tasks. Empirically, we evaluate the approach on 12 widely used benchmarks from the Open LLM Leaderboard (v1/v2).
- oai:arXiv.org:2512.06553v1
- stat.AP
+ Worst-case generation via minimax optimization in Wasserstein space
+ https://arxiv.org/abs/2512.08176
+ arXiv:2512.08176v1 Announce Type: new
+Abstract: Worst-case generation plays a critical role in evaluating robustness and stress-testing systems under distribution shifts, in applications ranging from machine learning models to power grids and medical prediction systems. We develop a generative modeling framework for worst-case generation for a pre-specified risk, based on min-max optimization over continuous probability distributions, namely the Wasserstein space. Unlike traditional discrete distributionally robust optimization approaches, which often suffer from scalability issues, limited generalization, and costly worst-case inference, our framework exploits the Brenier theorem to characterize the least favorable (worst-case) distribution as the pushforward of a transport map from a continuous reference measure, enabling a continuous and expressive notion of risk-induced generation beyond classical discrete DRO formulations. Based on the min-max formulation, we propose a Gradient Descent Ascent (GDA)-type scheme that updates the decision model and the transport map in a single loop, establishing global convergence guarantees under mild regularity assumptions and possibly without convexity-concavity. We also propose to parameterize the transport map using a neural network that can be trained simultaneously with the GDA iterations by matching the transported training samples, thereby achieving a simulation-free approach. The efficiency of the proposed method as a risk-induced worst-case generator is validated by numerical experiments on synthetic and image data.
+ oai:arXiv.org:2512.08176v1
+ stat.ML
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ math.OC
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Peiyao Cai, Chengyu Cui, Felipe Maia Polo, Seamus Somerstep, Leshem Choshen, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun, Kean Ming Tan, Gongjun Xu
+ Xiuyuan Cheng, Yao Xie, Linglingzhi Zhu, Yunqin Zhu
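The single-loop GDA scheme described in the abstract above can be demonstrated on a toy one-dimensional problem. In this Python sketch the transport map is a free particle per sample rather than a neural network, the loss is quadratic, and lam is the transport-cost penalty; the inner maximization is concave only when lam > 1, and all names and constants are illustrative choices rather than the paper's formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, 500)  # reference (training) sample
    z = x.copy()                   # transported worst-case particles
    theta = 0.0                    # decision variable
    lam = 5.0                      # transport-cost penalty (> 1)
    eta = 0.05                     # step size for both players

    for _ in range(2000):
        # Descent on the decision: gradient of mean((z - theta)^2).
        theta -= eta * np.mean(-2.0 * (z - theta))
        # Ascent on the transport: gradient of (z - theta)^2 - lam*(z - x)^2.
        z += eta * (2.0 * (z - theta) - 2.0 * lam * (z - x))

    # z now approximates the least-favorable sample for the risk at theta.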
- Controlling the False Discovery Proportion in Matched Observational Studies
- https://arxiv.org/abs/2512.06601
- arXiv:2512.06601v1 Announce Type: new
-Abstract: We provide an approach to exploratory data analysis in matched observational studies with a single intervention and multiple endpoints. In such settings, the researcher would like to explore evidence for actual treatment effects among these variables while accounting not only for the possibility of false discoveries, but also for the potential impact of unmeasured confounding. For any candidate subset of hypotheses about these outcomes, we provide sensitivity sets for the proportion of the hypotheses within the subset which are actually true. The resulting sensitivity statements are valid simultaneously over all possible choices for the rejected set, allowing the researcher to search for promising subsets of hypotheses that maintain a large estimated fraction of true discoveries even if hidden bias is present. The approach is well suited to sensitivity analysis, as conclusions that some fraction of outcomes are affected by the treatment exhibit larger robustness to unmeasured confounding than findings that any particular outcome is affected. We show how a sequence of integer programs, in tandem with screening steps, facilitate the efficient computation of the required sensitivity sets. We illustrate the practical utility of our method through both simulation studies and a data example on the long-term impacts of childhood abuse.
- oai:arXiv.org:2512.06601v1
+ Distributional Random Forests for Complex Survey Designs on Reproducing Kernel Hilbert Spaces
+ https://arxiv.org/abs/2512.08179
+ arXiv:2512.08179v1 Announce Type: new
+Abstract: We study estimation of the conditional law $P(Y|X=\mathbf{x})$ and continuous functionals $\Psi(P(Y|X=\mathbf{x}))$ when $Y$ takes values in a locally compact Polish space, $X \in \mathbb{R}^p$, and the observations arise from a complex survey design. We propose a survey-calibrated distributional random forest (SDRF) that incorporates complex-design features via a pseudo-population bootstrap, PSU-level honesty, and a Maximum Mean Discrepancy (MMD) split criterion computed from kernel mean embeddings of H\'{a}jek-type (design-weighted) node distributions. We provide a framework for analyzing forest-style estimators under survey designs; establish design consistency for the finite-population target and model consistency for the super-population target under explicit conditions on the design, kernel, resampling multipliers, and tree partitions. As far as we are aware, these are the first results on model-free estimation of conditional distributions under survey designs. Simulations under a stratified two-stage cluster design provide finite sample performance and demonstrate the statistical error price of ignoring the survey design. The broad applicability of SDRF is demonstrated using NHANES: We estimate the tolerance regions of the conditional joint distribution of two diabetes biomarkers, illustrating how distributional heterogeneity can support subgroup-specific risk profiling for diabetes mellitus in the U.S. population.
+ oai:arXiv.org:2512.08179v1
+ stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Mengqi Lin, Colin Fogarty
+ Yating Zou, Marcos Matabuena, Michael R. Kosorok
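One ingredient of the abstract above, the design-weighted MMD split criterion, can be sketched directly. The Python snippet below computes the squared MMD between the H\'ajek (weight-normalized) response distributions of two candidate child nodes; the Gaussian kernel and the names rbf, weighted_mmd2, and gamma are illustrative choices, not the method's actual API.

    import numpy as np

    def rbf(a, b, gamma=1.0):
        # Gaussian kernel matrix between rows of a (n, d) and b (m, d).
        diff = a[:, None, :] - b[None, :, :]
        return np.exp(-gamma * np.sum(diff * diff, axis=-1))

    def weighted_mmd2(yL, wL, yR, wR, gamma=1.0):
        # Hajek-normalize the survey weights within each child node.
        pL, pR = wL / wL.sum(), wR / wR.sum()
        # Squared MMD between the two weighted empirical embeddings;
        # a split is preferred when this heterogeneity measure is large.
        return (pL @ rbf(yL, yL, gamma) @ pL
                - 2.0 * pL @ rbf(yL, yR, gamma) @ pR
                + pR @ rbf(yR, yR, gamma) @ pR)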
- Latent Nonlinear Denoising Score Matching for Enhanced Learning of Structured Distributions
- https://arxiv.org/abs/2512.06615
- arXiv:2512.06615v1 Announce Type: new
-Abstract: We present latent nonlinear denoising score matching (LNDSM), a novel training objective for score-based generative models that integrates nonlinear forward dynamics with the VAE-based latent SGM framework. This combination is achieved by reformulating the cross-entropy term using the approximate Gaussian transition induced by the Euler-Maruyama scheme. To ensure numerical stability, we identify and remove two zero-mean but variance exploding terms arising from small time steps. Experiments on variants of the MNIST dataset demonstrate that the proposed method achieves faster synthesis and enhanced learning of inherently structured distributions. Compared to benchmark structure-agnostic latent SGMs, LNDSM consistently attains superior sample quality and variability.
- oai:arXiv.org:2512.06615v1
- stat.ML
- cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ Nonparametric inference with massive data via grouped empirical likelihood
+ https://arxiv.org/abs/2512.08182
+ arXiv:2512.08182v1 Announce Type: new
+Abstract: To address the computational issue in empirical likelihood methods with massive data, this paper proposes a grouped empirical likelihood (GEL) method. It divides $N$ observations into $n$ groups, and assigns the same probability weight to all observations within the same group. GEL estimates the $n\ (\ll N)$ weights by maximizing the empirical likelihood ratio. The dimensionality of the optimization problem is thus reduced from $N$ to $n$, thereby lowering the computational complexity. We prove that GEL possesses the same first-order asymptotic properties as the conventional empirical likelihood method under the estimating equation settings and the classical two-sample mean problem. A distributed GEL method involving several servers is also proposed. Numerical simulations and real data analysis demonstrate that GEL can keep the same inferential accuracy as the conventional empirical likelihood method, and achieves substantial computational acceleration compared to the divide-and-conquer empirical likelihood method. With GEL, we can analyze a billion observations in tens of seconds on a single PC.
+ oai:arXiv.org:2512.08182v1
+ stat.ME
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
- Kaichen Shen, Wei Zhu
+ Yongda Wang, Shifeng Xiong
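To make the grouping idea in the abstract above concrete: with equal group sizes, GEL for a mean is first-order equivalent to ordinary empirical likelihood applied to the $n$ group averages. The Python sketch below tests $H_0$: $E[X] = \mu_0$ in that reduced form; the contiguous grouping, the unsafeguarded Newton iteration, and the name gel_stat are illustrative simplifications rather than the paper's algorithm.

    import numpy as np

    def gel_stat(x, mu0, n_groups=200):
        # Group means of n contiguous blocks (grouping scheme assumed).
        d = np.array([g.mean() for g in np.array_split(x, n_groups)]) - mu0
        # Newton iteration for the Lagrange multiplier solving
        # sum(d / (1 + lam * d)) = 0 (no safeguards, for illustration).
        lam = 0.0
        for _ in range(50):
            t = 1.0 + lam * d
            lam -= np.sum(d / t) / (-np.sum((d / t) ** 2))
        # EL log-ratio statistic on the group means, ~ chi^2_1 under H0.
        return 2.0 * np.sum(np.log1p(lam * d))

Under the null, values above the chi-squared cutoff 3.84 would reject at the 5% level; the grouping leaves this first-order calibration intact while the optimization involves only n_groups weights.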
- Monotone data augmentation algorithm for longitudinal continuous, binary and ordinal outcomes: a unifying approach
- https://arxiv.org/abs/2512.06621
- arXiv:2512.06621v1 Announce Type: new
-Abstract: The monotone data augmentation (MDA) algorithm has been widely used to impute missing data for longitudinal continuous outcomes. Compared to a full data augmentation approach, the MDA scheme accelerates the mixing of the Markov chain, reduces computational costs per iteration, and aids in missing data imputation under nonignorable dropouts. We extend the MDA algorithm to the multivariate probit (MVP) model for longitudinal binary and ordinal outcomes. The MVP model assumes the categorical outcomes are discretized versions of underlying longitudinal latent Gaussian outcomes modeled by a mixed effects model for repeated measures. A parameter expansion strategy is employed to facilitate the posterior sampling, and expedite the convergence of the Markov chain in MVP. The method enables the sampling of the regression coefficients and covariance matrix for longitudinal continuous, binary and ordinal outcomes in a unified manner. This property aids in understanding the algorithm and developing computer codes for MVP. We also introduce independent Metropolis-Hastings samplers to handle complex priors, and evaluate how the choice between flat and diffuse normal priors for regression coefficients influences parameter estimation and missing data imputation. Numerical examples are used to illustrate the methodology.
- oai:arXiv.org:2512.06621v1
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ A multivariate generalization of Hall's theorem for Edgeworth expansions of bootstrap distributions
+ https://arxiv.org/abs/2512.08200
+ arXiv:2512.08200v1 Announce Type: new
+Abstract: Theorem 5.1 in the monograph by Hall (1992) provides rigorous in-probability justification of Edgeworth expansions of bootstrap distributions. Proving this result was rather challenging because bootstrap distributions do not satisfy the classical Cram\'er condition and therefore classical methods for justifying Edgeworth expansions, e.g. Bhattacharya and Rao (1976) and Bhattacharya and Ghosh (1978), are not available. Hall's (1992) theorem is for a univariate statistic which can be expressed as a smooth function of means, though the underlying population can be multivariate. However, there are a number of applications where a multivariate version of Hall's theorem is needed, and generalizing the proof from the univariate case to the multivariate case is not immediate. Our primary purpose in this article is to fill this gap by stating a multivariate version of the theorem and sketching the modifications to the proof of Hall's (1992) Theorem 5.1 that are needed.
+ oai:arXiv.org:2512.08200v1
+ math.ST
+ stat.TH
+ Wed, 10 Dec 2025 00:00:00 -0500
+ new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yongqiang Tang
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Andrew T. A. Wood
- Disentangling the Mediation Pathways of Depression in Asian Students and Workers
- https://arxiv.org/abs/2512.06654
- arXiv:2512.06654v1 Announce Type: new
-Abstract: Depression is a major global mental health issue shaped by cultural, demographic, and occupational factors. This study compares predictors of depression across student and worker populations using datasets from India, Malaysia, and China. The India dataset was split into student and worker groups, while the Malaysia dataset includes only students and the China (CHARLS) dataset includes only workers. After harmonizing variables, we applied logistic regression, random forest, and causal forest models to identify key predictors and subgroup-specific effects, and conducted causal mediation analysis (CMA) to assess whether variables operate through intermediaries such as perceived pressure. Among students, pressure, age, workload, financial stress, mental health history, and satisfaction were significant predictors; similar factors emerged for workers. Notably, age showed opposite effects across groups: younger students were more likely to experience depression, whereas older workers showed higher risk. Model performance showed moderate internal accuracy but weaker external generalizability across countries, with random forest outperforming logistic regression. Causal forest results indicated limited heterogeneity in the effect of pressure, while CMA showed that pressure does not mediate the effect of age but operates more directly, and satisfaction influences depression partly through pressure. Overall, pressure consistently emerged as the strongest predictor, suggesting that interventions targeting academic and occupational stress may help reduce depressive symptoms.
- oai:arXiv.org:2512.06654v1
+ Wishart kernel density estimation for strongly mixing time series on the cone of positive definite matrices
+ https://arxiv.org/abs/2512.08232
+ arXiv:2512.08232v1 Announce Type: new
+Abstract: A Wishart kernel density estimator (KDE) is introduced for density estimation in the cone of positive definite matrices. The estimator is boundary-aware and mitigates the boundary bias suffered by conventional KDEs, while remaining simple to implement. Its mean squared error, uniform strong consistency on expanding compact sets, and asymptotic normality are established under the Lebesgue measure and suitable mixing conditions. This work represents the first study of density estimation on this space under any metric. For independent observations, an asymptotic upper bound on the mean absolute error is also derived. A simulation study compares the performance of the Wishart KDE to another boundary-aware KDE that relies on the matrix-variate lognormal distribution proposed by Schwartzman [Int. Stat. Rev., 2016, 84(3), 456-486]. Results suggest that the Wishart KDE is superior for a selection of autoregressive coefficient matrices and innovation covariance matrices when estimating the stationary marginal density of a Wishart autoregressive process. To illustrate the practical utility of the Wishart KDE, an application to finance is made by estimating the marginal density function of a time series of realized covariance matrices, calculated from 5-minute intra-day returns, between the share prices of Amazon Corp. and the Standard & Poor's 500 exchange-traded fund over a one-year period. All code is publicly available via the R package ksm to facilitate implementation of the method and reproducibility of the findings.
+ oai:arXiv.org:2512.08232v1
+ stat.ME
+ math.PR
+      math.ST
+      stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ stat.TH
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhaojin Nan, Ran Chen
+ L\'eo R. Belzile, Christian Genest, Fr\'ed\'eric Ouimet, Donald Richards
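For the flavor of the estimator (the paper's implementation is the R package ksm; this Python sketch is ours): each observed SPD matrix S_i defines a Wishart kernel with mean S_i, and a degrees-of-freedom parameter nu plays the role of an inverse bandwidth. The data and nu below are illustrative:

import numpy as np
from scipy.stats import wishart

def wishart_kde(X, sample, nu):
    """Density estimate at SPD matrix X from a sample of SPD matrices."""
    # Wishart(df=nu, scale=S/nu) has mean S; larger nu concentrates the kernel.
    return np.mean([wishart.pdf(X, df=nu, scale=S / nu) for S in sample])

rng = np.random.default_rng(0)
sample = [np.cov(rng.standard_normal((2, 50))) for _ in range(200)]  # toy 2x2 SPD data
print(wishart_kde(np.eye(2), sample, nu=50.0))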
- Partially Observable Markov Decision Process Framework for Operating Condition Optimization Using Real-Time Degradation Signals
- https://arxiv.org/abs/2512.06682
- arXiv:2512.06682v1 Announce Type: new
-Abstract: In many engineering systems, proper predictive maintenance and operational control are essential to increase efficiency and reliability while reducing maintenance costs. However, one of the major challenges is that many sensors are used for system monitoring. Analyzing these sensors simultaneously for better predictive maintenance optimization is often very challenging. In this paper, we propose a systematic decision-making framework to improve the system performance in manufacturing practice, considering the real-time degradation signals generated by multiple sensors. Specifically, we propose a partially observed Markov decision process (POMDP) model to generate the optimal capacity and predictive maintenance policies, given the fact that the observation of the system state is imperfect. Such work provides a systematic approach that focuses on jointly controlling the operating conditions and preventive maintenance utilizing the real-time machine deterioration signals by incorporating the degradation constraint and non-observable states. We apply this technique to the bearing degradation data and NASA aircraft turbofan engine dataset, demonstrating the effectiveness of the proposed method.
- oai:arXiv.org:2512.06682v1
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Causal inference under interference: computational barriers and algorithmic solutions
+ https://arxiv.org/abs/2512.08252
+ arXiv:2512.08252v1 Announce Type: new
+Abstract: We study causal effect estimation under interference from network data. We work under the chain-graph formulation pioneered in Tchetgen Tchetgen et al. (2021). Our first result shows that polynomial time evaluation of treatment effects is computationally hard in this framework without additional assumptions on the underlying chain graph. Subsequently, we assume that the interactions among the study units are governed either by (i) a dense graph or (ii) an i.i.d. Gaussian matrix. In each case, we show that the treatment effects have well-defined limits as the population size diverges to infinity. Additionally, we develop polynomial time algorithms to consistently evaluate the treatment effects in each case. Finally, we estimate the unknown parameters from the observed data using maximum pseudo-likelihood estimates, and establish the stability of our causal effect estimators under this perturbation. Our algorithms provably approximate the causal effects in polynomial time even in low-temperature regimes where the canonical MCMC samplers are slow mixing. For dense graphs, our results use the notion of regularity partitions; for Gaussian interactions, our approach uses ideas from spin glass theory and Approximate Message Passing.
+ oai:arXiv.org:2512.08252v1
+ math.ST
+ math.PR
+ stat.ME
+ stat.TH
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://creativecommons.org/licenses/by/4.0/
- Boyang Xu, Yunyi Kang, Xinyu Zhao, Hao Yan, Feng Ju
+ Sohom Bhattacharya, Subhabrata Sen
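The maximum pseudo-likelihood step mentioned in the abstract has a compact generic form for Ising-type chain-graph models: each unit's conditional law given its neighbours is logistic, so estimation reduces to a logistic regression of each spin on its neighbour field. A toy sketch under our own assumptions (a sparse random graph, a single coupling parameter beta, one Gibbs-sampled configuration), not the paper's estimator:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
n, beta = 500, 0.2
A = (rng.random((n, n)) < 0.02).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric random interaction graph
s = np.where(rng.random(n) < 0.5, 1.0, -1.0)
for _ in range(20 * n):                           # single-site Gibbs updates
    i = rng.integers(n)
    field = beta * (A[i] @ s)
    s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1.0

m = (A @ s)[:, None]                              # neighbour field of each unit
fit = LogisticRegression(C=1e6).fit(m, (s > 0).astype(int))  # effectively unpenalized
print(fit.coef_[0, 0] / 2.0)                      # pseudo-likelihood estimate of beta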
- Evidence and Elimination: A Bayesian Interpretation of Falsification in Scientific Practice
- https://arxiv.org/abs/2512.06777
- arXiv:2512.06777v1 Announce Type: new
-Abstract: The classical conception of falsification presents scientific theories as entities that are decisively refuted when their predictions fail. This picture has long been challenged by both philosophical analysis and scientific practice, yet the relationship between Popper's eliminative view of theory testing and Bayesian model comparison remains insufficiently articulated. This paper develops a unified account in which falsification is reinterpreted as a Bayesian process of model elimination. A theory is not rejected because it contradicts an observation in a logical sense; it is eliminated because it assigns vanishing integrated probability to the data in comparison with an alternative model. This reinterpretation resolves the difficulties raised by the Duhem-Quine thesis, clarifies the status of auxiliary hypotheses, and explains why ad hoc modifications reduce rather than increase theoretical credibility. The analysis is illustrated through two classical episodes in celestial mechanics, the discovery of Neptune and the anomalous precession of Mercury. In the Neptune case, an auxiliary hypothesis internal to Newtonian gravity dramatically increases the marginal likelihood of the theory, preserving it from apparent refutation. In the Mercury case, no permissible auxiliary modification can rescue the Newtonian model, while general relativity assigns high probability to the anomaly without adjustable parameters. The resulting posterior collapse provides a quantitative realisation of Popper's eliminative criterion. Bayesian model comparison therefore supplies the mathematical structure that Popper's philosophy lacked and offers a coherent account of scientific theory change as a process of successive eliminations within a space of competing models.
- oai:arXiv.org:2512.06777v1
- stat.OT
- Tue, 09 Dec 2025 00:00:00 -0500
+ Perturbation-based Inference for Extreme Value Index
+ https://arxiv.org/abs/2512.08258
+ arXiv:2512.08258v1 Announce Type: new
+Abstract: The extreme value index (EVI) characterizes the tail behavior of a distribution and is crucial for extreme value theory. Inference on the EVI is challenging due to data scarcity in the tail region. We propose a novel method for constructing confidence intervals for the EVI using synthetic exceedances generated via perturbation. Rather than perturbing the entire sample, we add noise to exceedances above a high threshold and apply the generalized Pareto distribution (GPD) approximation. Confidence intervals are derived by simulating the distribution of pivotal statistics from the perturbed data. We show that the pivotal statistic is consistent, ensuring the proposed method provides consistent intervals for the EVI. Additionally, we demonstrate that the perturbed data is differentially private. When the GPD approximation is inadequate, we introduce a refined perturbation method. Simulation results show that our approach outperforms existing methods, providing robust and reliable inference.
+ oai:arXiv.org:2512.08258v1
+ stat.ME
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tommaso Costa
+ Yiwei Tang, Judy Huixia Wang, Deyuan Li
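The perturbation idea can be mocked up in a few lines. In this sketch the multiplicative lognormal noise, its scale, and the replicate count are our illustrative choices; the paper's validity and differential-privacy analysis is not reproduced here:

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
x = rng.pareto(2.0, size=5000) + 1.0            # heavy-tailed sample, true EVI = 0.5
u = np.quantile(x, 0.95)                        # high threshold
exc = x[x > u] - u                              # exceedances
xi_hat = genpareto.fit(exc, floc=0.0)[0]        # GPD shape = EVI estimate

xi_star = np.array([
    genpareto.fit(exc * rng.lognormal(0.0, 0.05, exc.size), floc=0.0)[0]
    for _ in range(200)                         # synthetic exceedances via perturbation
])
lo, hi = np.quantile(xi_star - xi_hat, [0.025, 0.975])
print(f"EVI interval: [{xi_hat - hi:.3f}, {xi_hat - lo:.3f}]")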
- ADAM Optimization with Adaptive Batch Selection
- https://arxiv.org/abs/2512.06795
- arXiv:2512.06795v1 Announce Type: new
-Abstract: Adam is a widely used optimizer in neural network training due to its adaptive learning rate. However, because different data samples influence model updates to varying degrees, treating them equally can lead to inefficient convergence. To address this, a prior work proposed adapting the sampling distribution using a bandit framework to select samples adaptively. While promising, the bandit-based variant of Adam suffers from limited theoretical guarantees. In this paper, we introduce Adam with Combinatorial Bandit Sampling (AdamCB), which integrates combinatorial bandit techniques into Adam to resolve these issues. AdamCB is able to fully utilize feedback from multiple samples at once, enhancing both theoretical guarantees and practical performance. Our regret analysis shows that AdamCB achieves faster convergence than Adam-based methods including the previous bandit-based variant. Numerical experiments demonstrate that AdamCB consistently outperforms existing methods.
- oai:arXiv.org:2512.06795v1
- stat.ML
- cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ Distribution of Gaps in Multi-lane Orderly and Disorderly Traffic Streams
+ https://arxiv.org/abs/2512.08585
+ arXiv:2512.08585v1 Announce Type: new
+Abstract: To study gap acceptance behaviour one needs the distribution (or probability density function) of gaps in the opposing stream. Further, with the widespread availability of large-scale computing power, traffic simulation has emerged as a popular analysis and design tool. Such simulations rely on randomly generating the arriving vehicles in a way that statistically resembles real-world streams. The generation process for disorderly streams requires information on gap distributions. A review of past literature reveals that very little work has been done to determine the distribution of gaps in multi-lane orderly and disorderly streams. This study aims to develop an analytical framework to specify the distribution of gaps for such streams. The framework is built using renewal process theory. A maximum-likelihood procedure for estimating the parameters of the analytically derived distribution is also described. Finally, real-world gap data from three different sites covering orderly and disorderly streams are used to show how well the derived distribution function (obtained with the proposed method) describes the observed gap distributions.
+ oai:arXiv.org:2512.08585v1
+ stat.AP
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Proc. The Thirteenth International Conference on Learning Representations (ICLR), 2025
- Gyu Yeol Kim, Min-hwan Oh
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ankita Sharma, Partha Chakroborty, Pranamesh Chakraborty
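As a generic illustration of the estimation workflow (the paper's multi-lane distribution derived from renewal theory is not reproduced here), the classical displaced-exponential headway model tau + Exp(lambda) has closed-form maximum-likelihood estimates:

import numpy as np

def fit_displaced_exponential(gaps):
    tau = gaps.min()                  # MLE of the minimum headway
    lam = 1.0 / (gaps.mean() - tau)   # MLE of the rate given tau
    return tau, lam

rng = np.random.default_rng(2)
gaps = 0.8 + rng.exponential(scale=2.5, size=1000)   # synthetic headways in seconds
print(fit_displaced_exponential(gaps))               # approximately (0.8, 0.4)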
- Double Local-to-Unity: Estimation under Nearly Nonstationary Volatility
- https://arxiv.org/abs/2512.06823
- arXiv:2512.06823v1 Announce Type: new
-Abstract: This article develops a moderate-deviation limit theory for autoregressive models with jointly persistent mean and volatility dynamics. The autoregressive coefficient is allowed to drift toward unity slower than the classical 1/n rate, while the volatility persistence parameter also converges to one at an even slower, logarithmic order, so that the conditional variance process is itself nearly nonstationary and its unconditional moments may diverge. This double localization allows the variance process to be nearly nonstationary and to evolve slowly, as observed in financial data and during asset price bubble episodes. Under standard regularity conditions, we establish consistency and distributional limits for the OLS estimator of the autoregressive coefficient that remain valid in the presence of highly persistent stochastic volatility. We show that the effective normalization for least squares inference is governed by an average volatility scale, and we derive martingale limit theorems for the OLS estimator under joint drift and volatility dynamics. In a mildly stationary regime (where the autoregressive root approaches one from below), the OLS estimator is asymptotically normal. In a mildly explosive regime (where the root approaches one from above), an OLS-based self-normalized statistic converges to a Cauchy limit. Strikingly, in both regimes, the limiting laws of our statistics are invariant to the detailed specification of the volatility process, even though the conditional variance is itself nearly nonstationary. Overall, the results extend moderate-deviation asymptotics to settings with drifting volatility persistence, unify local-to-unity inference with nearly nonstationary stochastic volatility, and deliver practically usable volatility-robust statistics for empirical work in settings approaching instability and exhibiting bubbles.
- oai:arXiv.org:2512.06823v1
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Heuristics for Combinatorial Optimization via Value-based Reinforcement Learning: A Unified Framework and Analysis
+ https://arxiv.org/abs/2512.08601
+ arXiv:2512.08601v1 Announce Type: new
+Abstract: Since the 1990s, considerable empirical work has been carried out to train statistical models, such as neural networks (NNs), as learned heuristics for combinatorial optimization (CO) problems. When successful, such an approach eliminates the need for experts to design heuristics per problem type. Due to their structure, many hard CO problems are amenable to treatment through reinforcement learning (RL). Indeed, we find a wealth of literature training NNs using value-based, policy gradient, or actor-critic approaches, with promising results, both in terms of empirical optimality gaps and inference runtimes. Nevertheless, there has been a paucity of theoretical work undergirding the use of RL for CO problems. To this end, we introduce a unified framework to model CO problems through Markov decision processes (MDPs) and solve them using RL techniques. We provide easy-to-test assumptions under which CO problems can be formulated as equivalent undiscounted MDPs that provide optimal solutions to the original CO problems. Moreover, we establish conditions under which value-based RL techniques converge to approximate solutions of the CO problem with a guarantee on the associated optimality gap. Our convergence analysis provides: (1) a sufficient rate of increase in batch size and projected gradient descent steps at each RL iteration; (2) the resulting optimality gap in terms of problem parameters and targeted RL accuracy; and (3) the importance of a choice of state-space embedding. Together, our analysis illuminates the success (and limitations) of the celebrated deep Q-learning algorithm in this problem context.
+ oai:arXiv.org:2512.08601v1
+ stat.ML
+ cs.LG
+ math.OC
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abir Sarkar, Martin T. Wells
+ http://creativecommons.org/licenses/by/4.0/
+ Orit Davidovich, Shimrit Shtern, Segev Wasserkrug, Nimrod Megiddo
- SLOACI: Surrogate-Leveraged Online Adaptive Causal Inference
- https://arxiv.org/abs/2512.06872
- arXiv:2512.06872v1 Announce Type: new
-Abstract: Adaptive experimental designs have gained increasing attention across a range of domains. In this paper, we propose a new methodological framework, surrogate-leveraged online adaptive causal inference (SLOACI), which integrates predictive surrogate outcomes into adaptive designs to enhance efficiency. For downstream analysis, we construct the adaptive augmented inverse probability weighting estimator for the average treatment effect using collected data. Our procedure remains robust even when surrogates are noisy or weak. We provide a comprehensive theoretical foundation for SLOACI. Under the asymptotic regime, we show that the proposed estimator attains the semiparametric efficiency bound. From a non-asymptotic perspective, we derive a regret bound to provide practical insights. We also develop a toolbox of sequential testing procedures that accommodates both asymptotic and non-asymptotic regimes, allowing experimenters to choose the perspective that best aligns with their practical needs. Extensive simulations and a synthetic case study are conducted to showcase the superior finite-sample performance of our method.
- oai:arXiv.org:2512.06872v1
+ A Persistent Homology Pipeline for the Analysis of Neural Spike Train Data
+ https://arxiv.org/abs/2512.08637
+ arXiv:2512.08637v1 Announce Type: new
+Abstract: In this article, we introduce a Topological Data Analysis (TDA) pipeline for neural spike train data. Understanding how the brain transforms sensory information into perception and behavior requires analyzing coordinated neural population activity. Modern electrophysiology enables simultaneous recording of spike train ensembles, but extracting meaningful information from these datasets remains a central challenge in neuroscience. A fundamental question is how ensembles of neurons discriminate between different stimuli or behavioral states, particularly when individual neurons exhibit weak or no stimulus selectivity, yet their coordinated activity may still contribute to network-level encoding. We describe a TDA framework that identifies stimulus-discriminative structure in spike train ensembles recorded from the mouse insular cortex during presentation of deionized water stimuli at distinct non-nociceptive temperatures. We show that population-level topological signatures effectively differentiate oral thermal stimuli even when individual neurons provide little or no discrimination. These findings demonstrate that ensemble organization can carry perceptually relevant information that standard single-unit analysis may miss. The framework builds on a mathematical representation of spike train ensembles that enables persistent homology to be applied to collections of point processes. At its core is the widely-used Victor-Purpura (VP) distance. Using this metric, we construct persistence-based descriptors that capture multiscale topological features of ensemble geometry. Two key theoretical results support the method: a stability theorem establishing robustness of persistent homology to perturbations in the VP metric parameter, and a probabilistic stability theorem ensuring robustness of topological signatures.
+      oai:arXiv.org:2512.08637v1
+      stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yingying Fan, Zihan Wang, Waverly Wei
+ http://creativecommons.org/publicdomain/zero/1.0/
+ Cagatay Ayhan, Audrey N. Nash, Roberto Vincis, Martin Bauer, Richard Bertram, Tom Needham
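The Victor-Purpura metric at the core of the pipeline is a small dynamic program: deleting or inserting a spike costs 1, and moving a spike by dt costs q * dt. A self-contained sketch (the persistence machinery built on top of it is not shown):

import numpy as np

def victor_purpura(s1, s2, q):
    """VP distance between two sorted spike-time lists, cost parameter q."""
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)        # delete every spike of s1
    D[0, :] = np.arange(m + 1)        # insert every spike of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1.0,                                  # delete
                          D[i, j - 1] + 1.0,                                  # insert
                          D[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))   # shift
    return D[n, m]

print(victor_purpura([0.1, 0.5, 0.9], [0.12, 0.52], q=2.0))   # 0.04 + 0.04 + 1 = 1.08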
- Symmetric Aggregation of Conformity Scores for Efficient Uncertainty Sets
- https://arxiv.org/abs/2512.06945
- arXiv:2512.06945v1 Announce Type: new
-Abstract: Access to multiple predictive models trained for the same task, whether in regression or classification, is increasingly common in many applications. Aggregating their predictive uncertainties to produce reliable and efficient uncertainty quantification is therefore a critical but still underexplored challenge, especially within the framework of conformal prediction (CP). While CP methods can generate individual prediction sets from each model, combining them into a single, more informative set remains a challenging problem. To address this, we propose SACP (Symmetric Aggregated Conformal Prediction), a novel method that aggregates nonconformity scores from multiple predictors. SACP transforms these scores into e-values and combines them using any symmetric aggregation function. This flexible design enables a robust, data-driven framework for selecting aggregation strategies that yield sharper prediction sets. We also provide theoretical insights that help justify the validity and performance of the SACP approach. Extensive experiments on diverse datasets show that SACP consistently improves efficiency and often outperforms state-of-the-art model aggregation baselines.
- oai:arXiv.org:2512.06945v1
- stat.ML
- cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ Exhausting the type I error level in event-driven group-sequential designs with a closed testing procedure for progression-free and overall survival
+ https://arxiv.org/abs/2512.08658
+ arXiv:2512.08658v1 Announce Type: new
+Abstract: In oncological clinical trials, overall survival (OS) is the gold-standard endpoint, but long follow-up and treatment switching can delay or dilute detectable effects. Progression-free survival (PFS) often provides earlier evidence and is therefore frequently used together with OS as multiple primary endpoints. Since, in certain scenarios, trial success may be declared when at least one of the two hypotheses involved is rejected, a correction for multiple testing is necessary. Because PFS and OS are generally highly dependent, their test statistics are typically correlated. Ignoring this dependency (e.g. via a simple Bonferroni correction) is not power optimal. We develop a group-sequential testing procedure for the multiple primary endpoints PFS and OS that fully exhausts the family-wise error rate (FWER) by exploiting their dependence. Specifically, we characterize the joint asymptotic distribution of log-rank statistics across endpoints and multiple event-driven analysis cutoffs. Furthermore, we show that we can consistently estimate the covariance structure. Embedding these results in a closed testing procedure, we can recalculate critical values of the test statistics in order to spend the available type I error optimally. An important extension to the current literature is that we allow both the interim and final analyses to be event-driven. Simulations based on illness-death multi-state models empirically confirm FWER control for moderate to large sample sizes. Compared with a simple Bonferroni correction, the proposed methods recover roughly two-thirds of the power loss for OS, increase disjunctive and conjunctive power, and enable meaningful early stopping. In planning, these gains translate into about 5% fewer OS events required to reach the targeted power. We also discuss practical issues in the implementation of such designs and possible extensions of the introduced method.
+ oai:arXiv.org:2512.08658v1
+ stat.ME
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Nabil Alami, Jad Zakharia, Souhaib Ben Taieb
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Moritz Fabian Danzer, Kaspar Rufibach, Jan Beyersmann, Ren\'e Schmidt
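The source of the power gain over Bonferroni is easy to see numerically: with correlated endpoint statistics, the common critical value solving P(max(|Z1|, |Z2|) > c) = alpha is smaller than the Bonferroni one. A sketch with an assumed known correlation rho (the paper estimates the covariance structure and embeds this in a group-sequential closed testing procedure):

import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def adjusted_critical_value(rho, alpha=0.05):
    F = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf
    def fwer(c):
        # rectangle probability P(|Z1| <= c, |Z2| <= c) by inclusion-exclusion
        p_in = F([c, c]) - F([c, -c]) - F([-c, c]) + F([-c, -c])
        return (1.0 - p_in) - alpha
    return brentq(fwer, 1.0, 4.0)

print(norm.ppf(1 - 0.05 / 4))             # two-sided Bonferroni cutoff: about 2.241
print(adjusted_critical_value(rho=0.7))   # smaller cutoff, hence more power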
- PARIS: Pruning Algorithm via the Representer theorem for Imbalanced Scenarios
- https://arxiv.org/abs/2512.06950
- arXiv:2512.06950v1 Announce Type: new
-Abstract: The challenge of \textbf{imbalanced regression} arises when standard Empirical Risk Minimization (ERM) biases models toward high-frequency regions of the data distribution, causing severe degradation on rare but high-impact ``tail'' events. Existing strategies such as loss re-weighting or synthetic over-sampling often introduce noise, distort the underlying distribution, or add substantial algorithmic complexity.
- We introduce \textbf{PARIS} (Pruning Algorithm via the Representer theorem for Imbalanced Scenarios), a principled framework that mitigates imbalance by \emph{optimizing the training set itself}. PARIS leverages the representer theorem for neural networks to compute a \textbf{closed-form representer deletion residual}, which quantifies the exact change in validation loss caused by removing a single training point \emph{without retraining}. Combined with an efficient Cholesky rank-one downdating scheme, PARIS performs fast, iterative pruning that eliminates uninformative or performance-degrading samples.
- We use a real-world space weather example, where PARIS reduces the training set by up to 75\% while preserving or improving overall RMSE, outperforming re-weighting, synthetic oversampling, and boosting baselines. Our results demonstrate that representer-guided dataset pruning is a powerful, interpretable, and computationally efficient approach to rare-event regression.
- oai:arXiv.org:2512.06950v1
- stat.ML
- cs.LG
- physics.space-ph
- Tue, 09 Dec 2025 00:00:00 -0500
+ Matrix Completion Survey: Theory, Algorithms, and Empirical Evaluation
+ https://arxiv.org/abs/2512.08689
+ arXiv:2512.08689v1 Announce Type: new
+Abstract: We present a concise survey of matrix completion methods and associated implementations of several fundamental algorithms. Our study covers both passive and adaptive strategies. We further illustrate the behavior of a simple adaptive sampling scheme through controlled synthetic experiments.
+ oai:arXiv.org:2512.08689v1
+ stat.CO
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://creativecommons.org/licenses/by/4.0/
- Enrico Camporeale
+ Connor Panish, Leo Villani
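One fundamental algorithm any such survey covers is Soft-Impute (Mazumder, Hastie and Tibshirani, 2010): alternate between filling the missing entries with the current reconstruction and soft-thresholding the singular values. A minimal sketch; lam and the iteration count are illustrative:

import numpy as np

def soft_impute(X, mask, lam, n_iters=100):
    """X has zeros at missing entries; mask is True where observed."""
    Z = np.zeros_like(X)
    for _ in range(n_iters):
        filled = np.where(mask, X, Z)                # keep observed, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt      # singular-value soft-thresholding
    return Z

rng = np.random.default_rng(3)
M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))   # rank-5 ground truth
mask = rng.random(M.shape) < 0.5                                  # observe half the entries
M_hat = soft_impute(np.where(mask, M, 0.0), mask, lam=1.0)
print(np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask]))  # relative error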
- Statistical analysis of Inverse Entropy-regularized Reinforcement Learning
- https://arxiv.org/abs/2512.06956
- arXiv:2512.06956v1 Announce Type: new
-Abstract: Inverse reinforcement learning aims to infer the reward function that explains expert behavior observed through trajectories of state--action pairs. A long-standing difficulty in classical IRL is the non-uniqueness of the recovered reward: many reward functions can induce the same optimal policy, rendering the inverse problem ill-posed. In this paper, we develop a statistical framework for Inverse Entropy-regularized Reinforcement Learning that resolves this ambiguity by combining entropy regularization with a least-squares reconstruction of the reward from the soft Bellman residual. This combination yields a unique and well-defined so-called least-squares reward consistent with the expert policy. We model the expert demonstrations as a Markov chain with the invariant distribution defined by an unknown expert policy $\pi^\star$ and estimate the policy by a penalized maximum-likelihood procedure over a class of conditional distributions on the action space. We establish high-probability bounds for the excess Kullback--Leibler divergence between the estimated policy and the expert policy, accounting for statistical complexity through covering numbers of the policy class. These results lead to non-asymptotic minimax optimal convergence rates for the least-squares reward function, revealing the interplay between smoothing (entropy regularization), model complexity, and sample size. Our analysis bridges the gap between behavior cloning, inverse reinforcement learning, and modern statistical learning theory.
- oai:arXiv.org:2512.06956v1
- stat.ML
- cs.LG
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Stationary Point Constrained Inference via Diffeomorphisms
+ https://arxiv.org/abs/2512.08735
+ arXiv:2512.08735v1 Announce Type: new
+Abstract: Stationary points or derivative zero crossings of a regression function correspond to points where a trend reverses, making their estimation scientifically important. Existing approaches to uncertainty quantification for stationary points cannot deliver valid joint inference when multiple extrema are present, an essential capability in applications where the relative locations of peaks and troughs carry scientific significance. We develop a principled framework for functions with multiple regions of monotonicity by constraining the number of stationary points. We represent each function in the diffeomorphic formulation as the composition of a simple template and a smooth bijective transformation, and show that this parameterization enables coherent joint inference on the extrema. This construction guarantees a prespecified number of stationary points and provides a direct, interpretable parameterization of their locations. We derive non-asymptotic confidence bounds and establish approximate normality for the maximum likelihood estimators, with parallel results in the Bayesian setting. Simulations and an application to brain signal estimation demonstrate the method's accuracy and interpretability.
+ oai:arXiv.org:2512.08735v1
+ stat.ME
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
- http://creativecommons.org/licenses/by/4.0/
- Denis Belomestny, Alexey Naumov, Sergey Samsonov
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Michael Price, Debdeep Pati, Ning Ning
- Learning Conditional Independence Differential Graphs From Time-Dependent Data
- https://arxiv.org/abs/2512.06960
- arXiv:2512.06960v1 Announce Type: new
-Abstract: Estimation of differences in conditional independence graphs (CIGs) of two time series Gaussian graphical models (TSGGMs) is investigated where the two TSGGMs are known to have similar structure. The TSGGM structure is encoded in the inverse power spectral density (IPSD) of the time series. In several existing works, one is interested in estimating the difference in two precision matrices to characterize underlying changes in conditional dependencies of two sets of data consisting of independent and identically distributed (i.i.d.) observations. In this paper we consider estimation of the difference in two IPSDs to characterize the underlying changes in conditional dependencies of two sets of time-dependent data. Our approach accounts for data time dependencies unlike past work. We analyze a penalized D-trace loss function approach in the frequency domain for differential graph learning, using Wirtinger calculus. We consider both convex (group lasso) and non-convex (log-sum and SCAD group penalties) penalty/regularization functions. An alternating direction method of multipliers (ADMM) algorithm is presented to optimize the objective function. We establish sufficient conditions in a high-dimensional setting for consistency (convergence of the inverse power spectral density to true value in the Frobenius norm) and graph recovery. Both synthetic and real data examples are presented in support of the proposed approaches. In synthetic data examples, our log-sum-penalized differential time-series graph estimator significantly outperformed our lasso based differential time-series graph estimator which, in turn, significantly outperformed an existing lasso-penalized i.i.d. modeling approach, with $F_1$ score as the performance metric.
- oai:arXiv.org:2512.06960v1
- stat.ML
- cs.LG
- eess.SP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Genetic Regression Analysis of Human Brain Connectivity Using an Efficient Estimator of Genetic Covariance
+ https://arxiv.org/abs/2512.08756
+ arXiv:2512.08756v1 Announce Type: new
+Abstract: Non-invasive measurements of the human brain using magnetic resonance imaging (MRI) have significantly improved our understanding of the brain's network organization by enabling measurement of anatomical connections between brain regions (structural connectivity) and their coactivation (functional connectivity). Heritability analyses have established that genetics account for considerable intersubject variability in structural and functional connectivity. However, characterizing how genetics shape the relationship between structural and functional connectomes remains challenging, since this association is obscured by unique environmental exposures in observed data. To address this, we develop a regression analysis framework that enables characterization of the relationship between latent genetic contributions to structural and functional connectivity. Implementing the proposed framework requires estimating genetic covariance matrices in multivariate random effects models, which is computationally intractable for high-dimensional connectome data using existing methods. We introduce a constrained method-of-moments estimator that is several orders of magnitude faster than existing methods without sacrificing estimation accuracy. For the genetic regression analysis, we develop regularized estimation approaches, including ridge, lasso, and tensor regression. Applying our method to Human Connectome Project data, we find that functional connectivity is moderately predictable from structure at the genetic level (max R^2 = 0.34), though it is not directly predictable in the observed data (max R^2 = 0.03). This stark contrast suggests that unique environmental factors mask strong genetically-encoded structure-function relationships.
+ oai:arXiv.org:2512.08756v1
+ stat.AP
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://creativecommons.org/licenses/by/4.0/
- 10.1109/ACCESS.2025.3639399
- Jitendra K Tugnait
+ Keshav Motwani, Ali Shojaie, Ariel Rokem, Eardi Lila
- On-line Pick-Freeze Mirror algorithm for Sensitivity Analysis
- https://arxiv.org/abs/2512.06974
- arXiv:2512.06974v1 Announce Type: new
-Abstract: The main objective of this paper is to propose a new approach for estimating the entire collection of Sobol' indices simultaneously.
- Our approach exploits the fact that Sobol' indices can be rewritten as solutions to an optimization problem over the simplex of $\mathbb{R}^d$ to construct an online sequence of estimators using a stochastic mirror descent algorithm. We prove that our estimation procedure is consistent and provide a non-asymptotic upper bound for its rate of convergence. Furthermore, we demonstrate the numerical accuracy of our method and compare it with other classical estimation procedures.
- oai:arXiv.org:2512.06974v1
+ Point and interval estimators of a changepoint in stochastic dominance between two distributions
+ https://arxiv.org/abs/2512.08823
+ arXiv:2512.08823v1 Announce Type: new
+Abstract: For differences between means of continuous data from independent groups, the customary scale-free measure of effect is the standardized mean difference (SMD). To justify use of SMD, one should be reasonably confident that the group-level variances are equal. Empirical evidence often contradicts this assumption. Thus, we have investigated an alternate approach, based on stochastic ordering of the treatment and control distributions, that takes into account means and variances. For applying stochastic ordering, our development yields a key quantity, $\mathsf{A}$, the outcome value at which the direction of the ordering of the treatment and control distributions changes.
+ Using an extensive simulation, we studied relative bias of point estimators of $\mathsf{A}$ and coverage and relative width of bootstrap confidence intervals.
+      oai:arXiv.org:2512.08823v1
+      math.ST
+      stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Manon Costa, S\'ebastien Gadat, Xavier Gendre, Thierry Klein
+ http://creativecommons.org/licenses/by/4.0/
+ Elena Kulinskaya, David C. Hoaglin
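In the simplest concrete case, two normal distributions with unequal variances, the quantity A has a closed form: the CDFs cross where (x - m1)/s1 = (x - m2)/s2, i.e. A = (m1*s2 - m2*s1)/(s2 - s1). A plug-in estimator with a percentile bootstrap interval, as a rough sketch of the workflow the simulation study evaluates (the normal model and sample sizes are illustrative assumptions):

import numpy as np

def crossing_point(t, c):
    m1, s1 = t.mean(), t.std(ddof=1)
    m2, s2 = c.mean(), c.std(ddof=1)
    return (m1 * s2 - m2 * s1) / (s2 - s1)   # assumes s1 != s2

rng = np.random.default_rng(4)
treat = rng.normal(0.5, 1.5, 200)
ctrl = rng.normal(0.0, 1.0, 200)
A_hat = crossing_point(treat, ctrl)          # true value here is -1
boot = [crossing_point(rng.choice(treat, 200), rng.choice(ctrl, 200))
        for _ in range(999)]
print(A_hat, np.quantile(boot, [0.025, 0.975]))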
- Latency-Response Theory Model: Evaluating Large Language Models via Response Accuracy and Chain-of-Thought Length
- https://arxiv.org/abs/2512.07019
- arXiv:2512.07019v1 Announce Type: new
-Abstract: The proliferation of Large Language Models (LLMs) necessitates valid evaluation methods to provide guidance for both downstream applications and actionable future improvements. The Item Response Theory (IRT) model with Computerized Adaptive Testing has recently emerged as a promising framework for evaluating LLMs via their response accuracy. Beyond simple response accuracy, LLMs' chain of thought (CoT) lengths serve as a vital indicator of their reasoning ability. To leverage the CoT length information to assist the evaluation of LLMs, we propose the Latency-Response Theory (LaRT) model, which jointly models both the response accuracy and CoT length by introducing a key correlation parameter between the latent ability and the latent speed. We derive an efficient stochastic approximation Expectation-Maximization algorithm for parameter estimation. We establish rigorous identifiability results for the latent ability and latent speed parameters to ensure the statistical validity of their estimation. Through both theoretical asymptotic analyses and simulation studies, we demonstrate LaRT's advantages over IRT in terms of superior estimation accuracy and shorter confidence intervals for latent trait estimation. To evaluate LaRT in real data, we collect responses from diverse LLMs on popular benchmark datasets. We find that LaRT yields different LLM rankings than IRT and outperforms IRT across multiple key evaluation metrics including predictive power, item efficiency, ranking validity, and LLM evaluation efficiency. Code and data are available at https://github.com/Toby-X/Latency-Response-Theory-Model.
- oai:arXiv.org:2512.07019v1
- stat.ME
- cs.AI
+ Commanding the Foul Shot: A New Ensemble of Free Throw Metrics
+ https://arxiv.org/abs/2512.08824
+ arXiv:2512.08824v1 Announce Type: new
+Abstract: With the NBA's adoption of in-game limb tracking in 2023, Sony's Hawk-Eye system now captures high-resolution, 3D poses of players and the ball 60 times per second. Linking these data to key events such as shots, passes, and rebounds opens a new era in NBA analytics. Here, we leverage Hawk-Eye tracking to introduce a novel ensemble of metrics for evaluating free-throw shooting and demonstrate that our framework captures skill more effectively than traditional make-or-miss statistics. Inspired by baseball analytics, we introduce command, which quantifies the quality of a free throw by measuring a shooter's accuracy and precision near the basket's bullseye. This metric recognizes that some makes (or misses) are better than others and captures a player's ability to execute quality attempts consistently. To identify what drives command, we define launch-based metrics assessing consistency in release velocity, angle, and 3D position. Players with greater touch -- i.e., more consistent launch dynamics -- exhibit stronger command as they can reliably control their shot trajectory. Finally, we develop a physics model to identify the range of launch conditions that result in a make and to determine which launch conditions are most robust to small perturbations. This framework reveals "safe" launch regions and explains why certain players, such as Steph Curry, excel at free throws, providing actionable insights for player development.
+      oai:arXiv.org:2512.08824v1
+      stat.AP
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhiyu Xu, Jia Liu, Yixin Wang, Yuqi Gu
+ Jake McGrath, Amanda Glazer, Vanna Bushong, Michelle Nguyen, Kirk Goldsberry
- Easy-to-Implement Two-Way Effect Decomposition for Any Outcome Variable with Endogenous Mediator
- https://arxiv.org/abs/2512.07058
- arXiv:2512.07058v1 Announce Type: new
-Abstract: Given a binary treatment D and a binary mediator M, mediation analysis decomposes the total effect of D on an outcome Y into the direct and indirect effects. Typically, both D and M are assumed to be exogenous, but this paper allows M to be endogenous while maintaining the exogeneity of D, which certainly holds if D is randomized. The endogeneity problem of M is then overcome using a binary instrumental variable Z. We derive a nonparametric "causal reduced form (CRF)" for Y with either (D,Z,DZ) or (D,M,DZ) as the regressors. The CRF enables estimating the direct and indirect effects easily with ordinary least squares or an instrumental variable estimator, instead of matching or inverse probability weighting, which have difficulties in finding the asymptotic distribution or in dealing with near-zero denominators. Beyond this ease of implementation, our approach is applicable to any Y (binary, count, continuous, etc.). Simulation and empirical studies illustrate our approach.
- oai:arXiv.org:2512.07058v1
+ Prediction Intervals for Individual Treatment Effects in a Multiple Decision Point Framework using Conformal Inference
+ https://arxiv.org/abs/2512.08828
+ arXiv:2512.08828v1 Announce Type: new
+Abstract: Accurately quantifying uncertainty of individual treatment effects (ITEs) across multiple decision points is crucial for personalized decision-making in fields such as healthcare, finance, education, and online marketplaces. Previous work has focused on predicting non-causal longitudinal estimands or constructing prediction bands for ITEs using cross-sectional data based on exchangeability assumptions. We propose a novel method for constructing prediction intervals using conformal inference techniques for time-varying ITEs with weaker assumptions than prior literature. We guarantee a lower bound for coverage, which is dependent on the degree of non-exchangeability in the data. Although our method is broadly applicable across decision-making contexts, we support our theoretical claims with simulations emulating micro-randomized trials (MRTs) -- a sequential experimental design for mobile health (mHealth) studies. We demonstrate the practical utility of our method by applying it to a real-world MRT - the Intern Health Study (IHS).
+      oai:arXiv.org:2512.08828v1
+      stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Bora Kim, Myoung-jae Lee
-
-
- Machine Learning-based Unfolding for Cross Section Measurements in the Presence of Nuisance Parameters
- https://arxiv.org/abs/2512.07074
- arXiv:2512.07074v1 Announce Type: new
-Abstract: Statistically correcting measured cross sections for detector effects is an important step across many applications. In particle physics, this inverse problem is known as \textit{unfolding}. In cases with complex instruments, the distortions they introduce are often known only implicitly through simulations of the detector. Modern machine learning has enabled efficient simulation-based approaches for unfolding high-dimensional data. Among these, one of the first methods successfully deployed on experimental data is the \textsc{OmniFold} algorithm, a classifier-based Expectation-Maximization procedure. In practice, however, the forward model is only approximately specified, and the corresponding uncertainty is encoded through nuisance parameters. Building on the well-studied \textsc{OmniFold} algorithm, we show how to extend machine learning-based unfolding to incorporate nuisance parameters. Our new algorithm, called Profile \textsc{OmniFold}, is demonstrated using a Gaussian example as well as a particle physics case study using simulated data from the CMS Experiment at the Large Hadron Collider.
- oai:arXiv.org:2512.07074v1
- stat.AP
- hep-ex
- hep-ph
-      physics.data-an
-      stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Huanbiao Zhu, Krish Desai, Mikael Kuusela, Vinicius Mikuni, Benjamin Nachman, Larry Wasserman
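The step that makes OmniFold-style unfolding work is classifier-based reweighting: train a classifier to separate simulation from data at detector level and convert its output into likelihood-ratio weights w = p/(1-p). A one-iteration sketch on one-dimensional toy events; the full algorithm iterates this between detector and particle level, and Profile OmniFold adds the nuisance parameters, none of which is shown here:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
sim = rng.normal(0.0, 1.0, (5000, 1))     # simulated detector-level events
data = rng.normal(0.3, 1.0, (5000, 1))    # observed detector-level events

X = np.vstack([sim, data])
z = np.r_[np.zeros(5000), np.ones(5000)]  # 0 = simulation, 1 = data
clf = LogisticRegression().fit(X, z)
p = clf.predict_proba(sim)[:, 1]
w = p / (1 - p)                           # weights pushing simulation toward data
print(np.average(sim.ravel(), weights=w)) # reweighted mean moves toward 0.3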
-
-
- Big shells, bigger data: cohort analysis of Chesapeake Bay Crassostrea virginica reefs
- https://arxiv.org/abs/2512.07080
- arXiv:2512.07080v1 Announce Type: new
-Abstract: Oysters in Virginia's Chesapeake Bay reefs are "age-truncated", possibly due to a combination of historical overfishing, disease epizootics, environmental degradation, and climate change. Research has suggested that oysters exhibit resilience to environmental stressors; however, that evidence is based on the current limited understanding of oyster lifespan. Until this paper, the Virginia Oyster Stock Assessment and Replenishment Archive (VOSARA), a spatially and temporally expansive dataset (222 reefs across 2003-2023) of shell lengths (SL, mm), had yet to be examined comprehensively in the context of resilience. We develop a novel method using Gaussian mixture modeling (GMM) to identify the age groups in each reef using yearly SL data and then link those age groups over time to identify cohorts and estimate their lifespan. Sixty-four reefs (29%) are deemed to have sufficient data (at least 300 oysters sampled for a minimum of 8 consecutive years) for this analysis. We fit univariate GMMs for each year ($t$) and reef ($r$) for each of the seven river strata ($R$) to estimate 1) the mean and standard deviation of SL for each $a_{Rrt}$th age group, and 2) the mixture percentage of each $a_{Rrt}$th age group. We link age groups across time to infer age cohorts by developing a mechanistic algorithm that prevents the shrinking of shell length when an $a_{Rrt}$th group becomes an ($a_{R,r,t+1}$)th group. Our method shows promise in identifying oyster cohorts and estimating lifespan solely using SL data. Our results show signals of resiliency in almost all river systems: oyster cohorts live longer and grow larger in the mid-to-late 2010s compared to the early 2000s.
- oai:arXiv.org:2512.07080v1
- stat.AP
- q-bio.QM
- stat.CO
- Tue, 09 Dec 2025 00:00:00 -0500
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://creativecommons.org/licenses/by/4.0/
- Madison D. Griffin, Grace S. Chiu, Roger L. Mann, Melissa J. Southworth, John K. Thomas
+ Swaraj Bose, Walter Dempsey
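Stripped of the time-varying and non-exchangeability machinery that is this paper's contribution, the underlying split-conformal construction fits in a dozen lines. A sketch under a plain exchangeability assumption, with an off-the-shelf regressor standing in for the treatment-effect model:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.standard_normal((2000, 3))
y = X[:, 0] + rng.standard_normal(2000)

train, calib = slice(0, 1000), slice(1000, 2000)
model = RandomForestRegressor(random_state=0).fit(X[train], y[train])
scores = np.abs(y[calib] - model.predict(X[calib]))    # absolute-residual scores
alpha = 0.1
q = np.quantile(scores, np.ceil((1000 + 1) * (1 - alpha)) / 1000)  # conformal quantile
pred = model.predict(np.zeros((1, 3)))[0]
print(pred - q, pred + q)   # marginal ~90% prediction interval at x = 0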
- Finite-Sample Failures and Condition-Number Diagnostics in Double Machine Learning
- https://arxiv.org/abs/2512.07083
- arXiv:2512.07083v1 Announce Type: new
-Abstract: Standard Double Machine Learning (DML; Chernozhukov et al., 2018) confidence intervals can exhibit substantial finite-sample coverage distortions when the underlying score equations are ill-conditioned, even if nuisance functions are estimated with state-of-the-art methods. Focusing on the partially linear regression (PLR) model, we show that a simple, easily computed condition number for the orthogonal score, denoted kappa_DML := 1 / |J_theta|, largely determines when DML inference is reliable. Our first result derives a nonasymptotic, Berry-Esseen-type bound showing that the coverage error of the usual DML t-statistic is of order n^{-1/2} + sqrt(n) * r_n, where r_n is the standard DML remainder term summarizing nuisance estimation error. Our second result provides a refined linearization in which both estimation error and confidence interval length scale as kappa_DML / sqrt(n) + kappa_DML * r_n, so that ill-conditioning directly inflates both variance and bias. These expansions yield three conditioning regimes - well-conditioned, moderately ill-conditioned, and severely ill-conditioned - and imply that informative, shrinking confidence sets require kappa_DML = o_p(sqrt(n)) and kappa_DML * r_n -> 0. We conduct Monte Carlo experiments across overlap levels, nuisance learners (OLS, Lasso, random forests), and both low- and high-dimensional (p > n) designs. Across these designs, kappa_DML is highly predictive of finite-sample performance: well-conditioned designs with kappa_DML < 1 deliver near-nominal coverage with short intervals, whereas severely ill-conditioned designs can exhibit large bias and coverage around 40% for nominal 95% intervals, despite flexible nuisance fitting. We propose reporting kappa_DML alongside DML estimates as a routine diagnostic of score conditioning, in direct analogy to condition-number checks and weak-instrument diagnostics in IV settings.
- oai:arXiv.org:2512.07083v1
+ Partially Bayes p-values for large scale inference
+ https://arxiv.org/abs/2512.08847
+ arXiv:2512.08847v1 Announce Type: new
+Abstract: We seek to conduct statistical inference for a large collection of primary parameters, each with its own nuisance parameters. Our approach is partially Bayesian, in that we treat the primary parameters as fixed while we model the nuisance parameters as random and drawn from an unknown distribution which we endow with a nonparametric prior. We compute partially Bayes p-values by conditioning on nuisance parameter statistics, that is, statistics that are ancillary for the primary parameters and informative about the nuisance parameters. The proposed p-values have a Bayesian interpretation as tail areas computed with respect to the posterior distribution of the nuisance parameters. Similarly to the conditional predictive p-values of Bayarri and Berger, the partially Bayes p-values avoid double use of the data (unlike posterior predictive p-values). A key ingredient of our approach is that we model nuisance parameters hierarchically across problems; the sharing of information across problems leads to improved calibration. We illustrate the proposed partially Bayes p-values in two applications: the normal means problem with unknown variances and a location-scale model with unknown distribution shape. We model the scales via Dirichlet processes in both examples and the distribution shape via P\'olya trees in the second. Our proposed partially Bayes p-values increase power and calibration compared to purely frequentist alternatives.
+      oai:arXiv.org:2512.08847v1
+      stat.ME
- econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
+      http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gabriel Saco
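The diagnostic proposed above is cheap to compute alongside any PLR fit: in the partialling-out score, J_theta is the mean squared treatment residual, so kappa_DML is just its reciprocal. A sketch on simulated data with cross-fitted linear nuisances; the learners and data-generating choices are illustrative:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(9)
n = 2000
X = rng.standard_normal((n, 5))
D = X[:, 0] + 0.3 * rng.standard_normal(n)    # little residual variation in D
Y = 0.5 * D + X[:, 0] + rng.standard_normal(n)

d_res = D - cross_val_predict(LinearRegression(), X, D, cv=5)   # D - m_hat(X)
y_res = Y - cross_val_predict(LinearRegression(), X, Y, cv=5)   # Y - g_hat(X)
theta_hat = (d_res @ y_res) / (d_res @ d_res)                   # DML PLR estimate
kappa = 1.0 / np.abs(np.mean(d_res ** 2))                       # kappa_DML
print(theta_hat, kappa)   # a large kappa flags an ill-conditioned design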
-
-
- Asymptotic theory and statistical inference for one- and two-sample problems with heavy-tailed data using the functional empirical process
- https://arxiv.org/abs/2512.07088
- arXiv:2512.07088v1 Announce Type: new
-Abstract: This paper introduces the Trimmed Functional Empirical Process (TFEP) as a robust framework for statistical inference when dealing with heavy-tailed or skewed distributions, where classical moments such as the mean or variance may be infinite or undefined. Standard approaches, including the classical Functional Empirical Process (FEP), break down under such conditions, especially for distributions like the Pareto, Cauchy, and low-degrees-of-freedom Student-t, due to their reliance on finite-variance assumptions to guarantee asymptotic convergence. The TFEP approach addresses these limitations by trimming a controlled proportion of extreme order statistics, thereby stabilizing the empirical process and restoring asymptotic Gaussian behavior. We establish the weak convergence of the TFEP under mild regularity conditions and derive new asymptotic distributions for one-sample and two-sample problems. These theoretical developments lead to robust confidence intervals for truncated means, variances, and their differences or ratios. The efficiency and reliability of the TFEP are supported by extensive Monte Carlo experiments and an empirical application to Senegalese income data. In all scenarios, the TFEP provides accurate inference where both Gaussian-based methods and the classical FEP break down. The methodology thus offers a powerful and flexible tool for statistical analysis in heavy-tailed and non-standard environments.
- oai:arXiv.org:2512.07088v1
- stat.ME
- math.PR
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Abdoulaye Camara, Saliou Diouf, Moumouni Diallo, Gane Samb Lo
+ Nikolaos Ignatiadis, Li Ma
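The trimming device that powers the TFEP entry above is worth seeing in its simplest form: discard a small fraction of extreme order statistics and moment-based summaries become stable again for heavy tails. A two-line illustration (the functional-process theory itself is not sketched here):

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = stats.cauchy.rvs(size=10000, random_state=rng)   # no finite mean
print(np.mean(x))                                    # erratic across runs
print(stats.trim_mean(x, proportiontocut=0.05))      # stable, near the median 0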
- Facilitating Conditions as an Enabler, Not a Direct Motivator: A Robustness and Mediation Analysis of E-Learning Adoption
- https://arxiv.org/abs/2512.07185
- arXiv:2512.07185v1 Announce Type: new
-Abstract: Despite substantial institutional investment in e-learning infrastructure, student engagement often fails to meet expectations--a persistent paradox that challenges the established direct relationship between Facilitating Conditions (FC) and behavioral intention within the classic UTAUT framework. To resolve this theoretical puzzle, we reconceptualized the role of FC through an empirical study of 470 Indonesian university students. Our robust, multi-stage analytical approach first confirmed the significant influence of established drivers--Performance Expectancy (beta=0.190), Effort Expectancy (beta=0.198), Social Influence (beta=0.151), and Perceived Enjoyment (beta=0.472)--on Behavioral Intention (BI), which in turn strongly predicted Use Behavior (beta=0.666). Crucially, however, the direct effect of FC on BI proved non-significant (beta=-0.085). A subsequent mediation model revealed FC's true function as a foundational enabling construct that operates indirectly by powerfully enhancing both Performance Expectancy (beta=0.556) and Effort Expectancy (beta=0.419). Our findings demonstrate that the value of technological infrastructure lies not in its mere presence, but in its dynamic capacity to enable learning and optimize user experience. This research advances a refined "enabling pathway" theoretical framework, guiding administrators to shift the focus of technological investment from merely providing tools to strategically crafting learning experiences.
- oai:arXiv.org:2512.07185v1
+ Multifractal behavior of price changes in the Green Bonds funds
+ https://arxiv.org/abs/2512.08886
+ arXiv:2512.08886v1 Announce Type: new
+Abstract: Climate change has driven the market to seek new ways of raising funds to mitigate its effects. One such innovation is the emergence of Green Bonds, financial assets specifically designed to support sustainable projects. This study explores the fractal behavior of daily price changes in thirty-five Green Bond funds using the Multifractal Detrended Fluctuation Analysis (MFDFA) method. Our results indicate that price changes exhibit persistent behavior and high multifractality, characterized by large fluctuations. Only one of the thirty-five time series analyzed showed an outlier result, suggesting that the funds display very similar behavior. By shuffling the series, we were able to reduce multifractality significantly. These findings suggest that Green Bond funds exhibit multifractal behavior typical of other financial assets.
+      oai:arXiv.org:2512.08886v1
+      stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+      Wed, 10 Dec 2025 00:00:00 -0500
+      new
- http://creativecommons.org/licenses/by-sa/4.0/
- Jaka Nugraha, Noyyn Sun, Xinlin Zhao, Vindi Kusuma Wardani, Inna Koblianska, Jiunn-Woei Lian
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Wenderson Gomes Barbosa, Kerolly Kedma Felix do Nascimento, F\'abio Sandro dos Santos, Silvio Fernando Alves Xavier J\'unior, Tiago A. E. Ferreira
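MFDFA itself is short enough to sketch end to end: cumulate the centred series into a profile, split it into segments, detrend each with a local polynomial, and aggregate the squared residuals into a q-th order fluctuation function whose log-log slope against scale is the generalized Hurst exponent. The scales, q, and white-noise test series below are illustrative choices:

import numpy as np

def mfdfa(x, scales, q, order=1):
    y = np.cumsum(x - x.mean())                      # profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                        for seg in segs])            # squared residual per segment
        if q != 0:
            F.append(np.mean(rms ** (q / 2.0)) ** (1.0 / q))
        else:
            F.append(np.exp(0.5 * np.mean(np.log(rms))))
    return np.asarray(F)

rng = np.random.default_rng(8)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
h2 = np.polyfit(np.log(scales), np.log(mfdfa(x, scales, q=2)), 1)[0]
print(h2)   # close to 0.5 for uncorrelated noise; persistence gives h2 > 0.5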
- Estimation of the elasticity for CKLS model from high-frequency observations
- https://arxiv.org/abs/2512.07301
- arXiv:2512.07301v1 Announce Type: new
-Abstract: We investigate parametric estimation of the elasticity parameter in the CKLS diffusion based on high-frequency data. First, we transform the CKLS diffusion to a CIR-type one via a smooth state-space mapping and the general Girsanov change of measure. This transformation enables the application of existing inference tools for CIR processes while making it possible to transfer the resulting limit theorems back to the original probability space. However, because Feller's condition fails, many existing high-frequency likelihood-based procedures cannot be applied directly, since their discretization schemes approximate likelihood terms involving the reciprocal of the process by Riemann sums that are no longer well-defined once the paths are allowed to hit zero. Instead, we estimate the drift coefficient of the transformed CIR-type model via a procedure based on its positive Harris recurrence, which is valid in the high-frequency regime. Exploiting the drift-elasticity relationship implied by the CKLS--CIR transformation, with the help of an initial estimate, we obtain an estimator of the CKLS elasticity from the CIR drift estimator in the transformed model. This yields a closed-form estimator of the elasticity parameter with an explicit asymptotic variance. We establish its $p$-consistency, stable convergence in law, and asymptotic normality. Finally, we show that stable convergence in law is invariant under equivalent changes of measure, thereby guaranteeing that the Gaussian limit remains invariant under the original measure.
- oai:arXiv.org:2512.07301v1
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ State and Parameter Estimation for a Neural Model of Local Field Potentials
+ https://arxiv.org/abs/2512.07842
+ arXiv:2512.07842v1 Announce Type: cross
+Abstract: The study of cortical dynamics during different states such as decision making, sleep and movement, is an important topic in Neuroscience. Modelling efforts aim to relate the neural rhythms present in cortical recordings to the underlying dynamics responsible for their emergence. We present an effort to characterize the neural activity from the cortex of a mouse during natural sleep, captured through local field potential measurements. Our approach relies on using a discretized Wilson--Cowan Amari neural field model for neural activity, along with a data assimilation method that allows the Bayesian joint estimation of the state and parameters. We demonstrate the feasibility of our approach on synthetic measurements before applying it to a dataset available in literature. Our findings suggest the potential of our approach to characterize the stimulus received by the cortex from other brain regions, while simultaneously inferring a state that aligns with the observed signal.
+ oai:arXiv.org:2512.07842v1
+ q-bio.NC
+ math.DS
+ math.PR
+ stat.CO
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Boyuan Ning, Yasutaka Shimizu
+ Daniele Avitabile, Gabriel J. Lord, Khadija Meddouni
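The abstract does not name the specific data assimilation scheme; an augmented-state ensemble Kalman filter (EnKF) is one common choice for Bayesian joint state/parameter estimation, sketched here on a toy one-population rate model that is our own stand-in for the discretized Wilson--Cowan--Amari field, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, n_ens = 0.05, 400, 200
w_true = 1.8                                     # "unknown" coupling to recover

def step(u, w):
    # toy one-population rate model (stand-in for the discretized field)
    return u + dt * (-u + np.tanh(w * u) + 0.5)

# synthetic truth and noisy scalar observations (an LFP-like readout)
u, obs = 0.1, []
for _ in range(T):
    u = step(u, w_true)
    obs.append(u + 0.05 * rng.standard_normal())

# ensemble over the augmented state z = (u, w)
Z = np.column_stack([0.1 + 0.1 * rng.standard_normal(n_ens),
                     1.0 + 0.5 * rng.standard_normal(n_ens)])
R = 0.05 ** 2                                    # observation noise variance
H = np.array([1.0, 0.0])                         # we observe u only
for y in obs:
    # forecast: propagate states; jitter the parameter to avoid collapse
    Z[:, 0] = step(Z[:, 0], Z[:, 1]) + 0.01 * rng.standard_normal(n_ens)
    Z[:, 1] += 0.005 * rng.standard_normal(n_ens)
    # analysis: Kalman update of the augmented ensemble (perturbed observations)
    P = np.cov(Z.T)
    K = P @ H / (H @ P @ H + R)                  # Kalman gain, shape (2,)
    y_pert = y + np.sqrt(R) * rng.standard_normal(n_ens)
    Z += np.outer(y_pert - Z[:, 0], K)

print("posterior mean of w:", Z[:, 1].mean(), " (true value:", w_true, ")")
```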
- Exact Synthetic Populations for Scalable Societal and Market Modeling
- https://arxiv.org/abs/2512.07306
- arXiv:2512.07306v1 Announce Type: new
-Abstract: We introduce a constraint-programming framework for generating synthetic populations that reproduce target statistics with high precision while enforcing full individual consistency. Unlike data-driven approaches that infer distributions from samples, our method directly encodes aggregated statistics and structural relations, enabling exact control of demographic profiles without requiring any microdata. We validate the approach on official demographic sources and study the impact of distributional deviations on downstream analyses. This work is conducted within the Pollitics project developed by Emotia, where synthetic populations can be queried through large language models to model societal behaviors, explore market and policy scenarios, and provide reproducible decision-grade insights without personal data.
- oai:arXiv.org:2512.07306v1
- stat.ML
- cs.AI
+ Bayesian Optimization for Function-Valued Responses under Min-Max Criteria
+ https://arxiv.org/abs/2512.07868
+ arXiv:2512.07868v1 Announce Type: cross
+Abstract: Bayesian optimization is widely used for optimizing expensive black-box functions, but most existing approaches focus on scalar responses. In many scientific and engineering settings the response is functional, varying smoothly over an index such as time or wavelength, which makes classical formulations inadequate. Existing methods often minimize integrated error, which captures average performance but neglects worst-case deviations. To address this limitation we propose min-max Functional Bayesian Optimization (MM-FBO), a framework that directly minimizes the maximum error across the functional domain. Functional responses are represented using functional principal component analysis, and Gaussian process surrogates are constructed for the principal component scores. Building on this representation, MM-FBO introduces an integrated uncertainty acquisition function that balances exploitation of worst-case expected error with exploration across the functional domain. We provide two theoretical guarantees: a discretization bound for the worst-case objective, and a consistency result showing that as the surrogate becomes accurate and uncertainty vanishes, the acquisition converges to the true min-max objective. We validate the method through experiments on synthetic benchmarks and physics-inspired case studies involving electromagnetic scattering by metaphotonic devices and vapor phase infiltration. Results show that MM-FBO consistently outperforms existing baselines and highlight the importance of explicitly modeling functional uncertainty in Bayesian optimization.
+ oai:arXiv.org:2512.07868v1
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ cs.AI
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Thierry Petit, Arnault Pachot
+ Pouya Ahadi, Reza Marzban, Ali Adibi, Kamran Paynabar
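The abstract pins down MM-FBO's main ingredients: an FPCA representation of the response curves, per-score Gaussian process surrogates, and a worst-case acquisition over the functional domain. The following is a drastically simplified sketch; the toy simulator, the lower-confidence-bound acquisition, and the crude curve-level uncertainty propagation are our own guesses at details the abstract leaves open.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 64)                        # functional index (e.g. wavelength)
target = np.sin(2 * np.pi * t)                   # desired response curve

def response(x):
    """Toy 'expensive' simulator returning a curve for a design x in [0, 1]."""
    return np.sin(2 * np.pi * (0.5 + x) * t) + 0.02 * rng.standard_normal(t.size)

X = list(rng.uniform(0, 1, 5))                   # initial designs
Y = [response(x) for x in X]
cand = np.linspace(0, 1, 201)                    # candidate designs

for it in range(15):
    E = np.array(Y) - target                     # observed error curves
    pca = PCA(n_components=3).fit(E)             # FPCA-style representation
    S = pca.transform(E)
    # one independent GP surrogate per principal-component score
    gps = [GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-4, normalize_y=True)
           .fit(np.asarray(X)[:, None], S[:, j]) for j in range(3)]
    mu = np.stack([g.predict(cand[:, None]) for g in gps], axis=1)
    sd = np.stack([g.predict(cand[:, None], return_std=True)[1] for g in gps], axis=1)
    mean_err = pca.inverse_transform(mu)                  # predicted error curves
    unc = (np.abs(pca.components_).T @ sd.T).T            # crude curve-level std
    # optimistic worst-case acquisition: lower bound on max_t |error|
    worst_lcb = np.max(np.clip(np.abs(mean_err) - unc, 0.0, None), axis=1)
    x_next = cand[np.argmin(worst_lcb)]
    X.append(x_next)
    Y.append(response(x_next))

scores = [np.max(np.abs(y - target)) for y in Y]
print("best observed worst-case error:", min(scores), "at x =", X[int(np.argmin(scores))])
```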
- Machine learning in an expectation-maximisation framework for nowcasting
- https://arxiv.org/abs/2512.07335
- arXiv:2512.07335v1 Announce Type: new
-Abstract: Decision making often occurs in the presence of incomplete information, leading to the under- or overestimation of risk. Leveraging the observable information to learn the complete information is called nowcasting. In practice, incomplete information is often a consequence of reporting or observation delays. In this paper, we propose an expectation-maximisation (EM) framework for nowcasting that uses machine learning techniques to model both the occurrence as well as the reporting process of events. We allow for the inclusion of covariate information specific to the occurrence and reporting periods as well as characteristics related to the entity for which events occurred. We demonstrate how the maximisation step and the information flow between EM iterations can be tailored to leverage the predictive power of neural networks and (extreme) gradient boosting machines (XGBoost). With simulation experiments, we show that we can effectively model both the occurrence and reporting of events when dealing with high-dimensional covariate information. In the presence of non-linear effects, we show that our methodology outperforms existing EM-based nowcasting frameworks that use generalised linear models in the maximisation step. Finally, we apply the framework to the reporting of Argentinian Covid-19 cases, where the XGBoost-based approach again is most performant.
- oai:arXiv.org:2512.07335v1
- stat.ML
+ Softly Symbolifying Kolmogorov-Arnold Networks
+ https://arxiv.org/abs/2512.07875
+ arXiv:2512.07875v1 Announce Type: cross
+Abstract: Kolmogorov-Arnold Networks (KANs) offer a promising path toward interpretable machine learning: their learnable activations can be studied individually, while collectively fitting complex data accurately. In practice, however, trained activations often lack symbolic fidelity, learning pathological decompositions with no meaningful correspondence to interpretable forms. We propose Softly Symbolified Kolmogorov-Arnold Networks (S2KAN), which integrate symbolic primitives directly into training. Each activation draws from a dictionary of symbolic and dense terms, with learnable gates that sparsify the representation. Crucially, this sparsification is differentiable, enabling end-to-end optimization, and is guided by a principled Minimum Description Length objective. When symbolic terms suffice, S2KAN discovers interpretable forms; when they do not, it gracefully degrades to dense splines. We demonstrate competitive or superior accuracy with substantially smaller models across symbolic benchmarks, dynamical systems forecasting, and real-world prediction tasks, and observe evidence of emergent self-sparsification even without regularization pressure.
+ oai:arXiv.org:2512.07875v1
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-sa/4.0/
- Paul Wilsens, Katrien Antonio, Gerda Claeskens
-
-
- Nonparametric optimal density estimation for censored circular data
- https://arxiv.org/abs/2512.07380
- arXiv:2512.07380v1 Announce Type: new
-Abstract: We consider the problem of estimating the probability density function of a circular random variable observed under censoring. To this end, we introduce a projection estimator constructed via a regression approach on linear sieves. We first establish a lower bound for the mean integrated squared error in the case of Sobolev densities, thereby identifying the minimax rate of convergence for this estimation problem. We then derive a matching upper bound for the same risk, showing that the proposed estimator attains the minimax rate when the underlying density belongs to a Sobolev class. Finally, we develop a data-driven version of the procedure that preserves this optimal rate, thus yielding an adaptive estimator. The practical performance of the method is demonstrated through simulation studies.
- oai:arXiv.org:2512.07380v1
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ cs.NE
+ physics.data-an
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nicolas Conanec (LAGA), Claire Lacour (LAMA), Thanh Mai Pham Ngoc (LAGA)
+ James Bagrow, Josh Bongard
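A single S2KAN-style activation can be sketched directly from the abstract: a dictionary of symbolic primitives plus a dense term, mixed through learnable gates that sparsify the representation. In this sketch the paper's MDL-guided differentiable sparsification is replaced by plain sigmoid gates with an L1 penalty, so it illustrates the architecture, not the authors' exact objective; the primitive list and all sizes are our own choices.

```python
import torch
import torch.nn as nn

class SoftSymbolicActivation(nn.Module):
    """One KAN-style edge activation: gated symbolic primitives + dense term.
    Plain sigmoid gates with an L1 penalty stand in for the paper's
    MDL-guided differentiable sparsification."""
    def __init__(self):
        super().__init__()
        self.primitives = [torch.sin, torch.tanh, lambda z: z, lambda z: z ** 2]
        n = len(self.primitives)
        self.scales = nn.Parameter(torch.ones(n))           # input scaling per primitive
        self.weights = nn.Parameter(0.1 * torch.randn(n))
        self.gate_logits = nn.Parameter(torch.zeros(n + 1)) # +1 gate for the dense term
        self.dense = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, x):
        g = torch.sigmoid(self.gate_logits)
        sym = sum(g[i] * self.weights[i] * f(self.scales[i] * x)
                  for i, f in enumerate(self.primitives))
        return sym + g[-1] * self.dense(x.unsqueeze(-1)).squeeze(-1)

    def sparsity_penalty(self):
        return torch.sigmoid(self.gate_logits).sum()

torch.manual_seed(0)
x = torch.linspace(-2, 2, 256)
y = torch.sin(3 * x)                     # a faithful decomposition opens the sin gate
act = SoftSymbolicActivation()
opt = torch.optim.Adam(act.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((act(x) - y) ** 2).mean() + 1e-3 * act.sparsity_penalty()
    loss.backward()
    opt.step()
print("gates (sin, tanh, id, square, dense):", torch.sigmoid(act.gate_logits).detach())
```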
- A model-free Screening procedure
- https://arxiv.org/abs/2512.07423
- arXiv:2512.07423v1 Announce Type: new
-Abstract: In this article, we propose a generic screening method for selecting explanatory variables correlated with the response variable Y. We make no assumptions about the existence of a model that could link Y with a subset of explanatory variables, nor about the distribution of the variables. Our procedure can therefore be described as ''model-free'' and can be applied in a wide range of situations. In order to obtain precise theoretical guarantees (Sure Screening Property and control of the False Positive Rate), we establish a Berry-Esseen type inequality for the studentized statistic of the slope estimator. We illustrate our selection procedure using two simulated examples and a real data set.
- oai:arXiv.org:2512.07423v1
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ Fourier-Enhanced Recurrent Neural Networks for Electrical Load Time Series Downscaling
+ https://arxiv.org/abs/2512.07876
+ arXiv:2512.07876v1 Announce Type: cross
+Abstract: We present a Fourier-enhanced recurrent neural network (RNN) for downscaling electrical loads. The model combines (i) a recurrent backbone driven by low-resolution inputs, (ii) explicit Fourier seasonal embeddings fused in latent space, and (iii) a self-attention layer that captures dependencies among high-resolution components within each period. Across four PJM territories, the approach yields RMSE lower and flatter horizon-wise than classical Prophet baselines (with and without seasonality/LAA) and than RNN ablations without attention or Fourier features.
+ oai:arXiv.org:2512.07876v1
+ cs.LG
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- J Dedecker (MAP5 - UMR 8145), M L Taupin (LaMME), A S Tocquet (LaMME)
-
-
- Bridging CORDEX and CMIP6: Machine Learning Downscaling for Wind and Solar Energy Droughts in Central Europe
- https://arxiv.org/abs/2512.07429
- arXiv:2512.07429v1 Announce Type: new
-Abstract: Reliable regional climate information is essential for assessing the impacts of climate change and for planning in sectors such as renewable energy; yet, producing high-resolution projections through coordinated initiatives like CORDEX that run multiple physical regional climate models is both computationally demanding and difficult to organize. Machine learning emulators that learn the mapping between global and regional climate fields offer a promising way to address these limitations. Here we introduce the application of such an emulator: trained on CMIP5 and CORDEX simulations, it reproduces regional climate model data with sufficient accuracy. When applied to CMIP6 simulations not seen during training, it also produces realistic results, indicating stable performance. Using CORDEX data, CMIP5 and CMIP6 simulations, as well as regional data generated by two machine learning models, we analyze the co-occurrence of low wind speed and low solar radiation and find indications that the number of such energy drought days is likely to decrease in the future. Our results highlight that downscaling with machine learning emulators provides an efficient complement to efforts such as CORDEX, supplying the higher-resolution information required for impact assessments.
- oai:arXiv.org:2512.07429v1
- stat.AP
- physics.ao-ph
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Nina Effenberger, Maxim Samarin, Maybritt Schillinger, Reto Knutti
+ Qi Chen, Mihai Anitescu
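The three components the abstract lists (a recurrent backbone driven by low-resolution inputs, Fourier seasonal embeddings fused in latent space, and self-attention over the high-resolution slots of each period) compose naturally. A PyTorch sketch for a daily-to-hourly setting follows; all sizes, the GRU/tanh fusion, and the daily/hourly framing are our assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class FourierRNNDownscaler(nn.Module):
    """Daily-to-hourly load downscaling: GRU backbone on the low-resolution
    series, Fourier embeddings of the within-day position fused in latent
    space, and self-attention over the 24 hourly slots of each day."""
    def __init__(self, hidden=32, n_harmonics=3, slots=24):
        super().__init__()
        self.slots, self.n_harmonics = slots, n_harmonics
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.fuse = nn.Linear(hidden + 2 * n_harmonics, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def fourier(self):
        pos = torch.arange(self.slots).float() / self.slots          # slot phases
        ks = torch.arange(1, self.n_harmonics + 1).float()
        ang = 2 * torch.pi * pos[:, None] * ks[None, :]              # (slots, H)
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)   # (slots, 2H)

    def forward(self, daily):                    # daily: (B, T) low-res inputs
        h, _ = self.rnn(daily.unsqueeze(-1))     # (B, T, hidden)
        B, T, H = h.shape
        emb = self.fourier().to(daily.device)
        z = torch.cat([h.unsqueeze(2).expand(B, T, self.slots, H),
                       emb.expand(B, T, self.slots, -1)], dim=-1)
        z = torch.tanh(self.fuse(z)).reshape(B * T, self.slots, H)
        z, _ = self.attn(z, z, z)                # dependencies among hourly slots
        return self.head(z).reshape(B, T, self.slots)

model = FourierRNNDownscaler()
daily = torch.randn(8, 14)                       # 8 series, 14 days of daily loads
print(model(daily).shape)                        # -> torch.Size([8, 14, 24])
```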
- A multivariate extension of Azadkia-Chatterjee's rank coefficient
- https://arxiv.org/abs/2512.07443
- arXiv:2512.07443v1 Announce Type: new
-Abstract: The Azadkia-Chatterjee coefficient is a rank-based measure of dependence between a random variable $Y \in \mathbb{R}$ and a random vector ${\boldsymbol Z} \in \mathbb{R}^{d_Z}$. This paper proposes a multivariate extension that measures dependence between random vectors ${\boldsymbol Y} \in \mathbb{R}^{d_Y}$ and ${\boldsymbol Z} \in \mathbb{R}^{d_Z}$, based on $n$ i.i.d. samples. The proposed coefficient converges almost surely to a limit with the following properties: i) it lies in $[0, 1]$; ii) it equals zero if and only if ${\boldsymbol Y}$ and ${\boldsymbol Z}$ are independent; and iii) it equals one if and only if ${\boldsymbol Y}$ is almost surely a function of ${\boldsymbol Z}$. Remarkably, the only assumption required by this convergence is that ${\boldsymbol Y}$ is not almost surely a constant. We further prove that under the same mild condition, the coefficient is asymptotically normal when ${\boldsymbol Y}$ and ${\boldsymbol Z}$ are independent and propose a merge sort based algorithm to calculate this coefficient in time complexity $O(n (\log n)^{d_Y})$. Finally, we show that it can be used to measure conditional dependence between ${\boldsymbol Y}$ and ${\boldsymbol Z}$ conditional on a third random vector ${\boldsymbol X}$, and prove that the measure is monotonic with respect to the deviation from an independence distribution under certain model restrictions.
- oai:arXiv.org:2512.07443v1
- math.ST
+ CrowdLLM: Building LLM-Based Digital Populations Augmented with Generative Models
+ https://arxiv.org/abs/2512.07890
+ arXiv:2512.07890v1 Announce Type: cross
+Abstract: The emergence of large language models (LLMs) has sparked much interest in creating LLM-based digital populations that can be applied to many applications such as social simulation, crowdsourcing, marketing, and recommendation systems. A digital population can reduce the cost of recruiting human participants and alleviate many concerns related to human subject studies. However, research has found that most existing works rely solely on LLMs and cannot sufficiently capture the accuracy and diversity of a real human population. To address this limitation, we propose CrowdLLM, which integrates pretrained LLMs and generative models to enhance the diversity and fidelity of the digital population. We conduct a theoretical analysis of CrowdLLM, showing its potential to create cost-effective, sufficiently representative, and scalable digital populations that can match the quality of a real crowd. Comprehensive experiments across multiple domains (e.g., crowdsourcing, voting, user rating) and simulation studies demonstrate that CrowdLLM achieves promising performance in both accuracy and distributional fidelity to human data.
+ oai:arXiv.org:2512.07890v1
+ cs.MA
+ cs.LG
+ stat.ME
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wenjie Huang, Zonghan Li, Yuhao Wang
+ Ryan Feng Lin, Keyu Tian, Hanming Zheng, Congjing Zhang, Li Zeng, Shuai Huang
- Permanent and transitory crime risk in variable-density hot spot analysis
- https://arxiv.org/abs/2512.07467
- arXiv:2512.07467v1 Announce Type: new
-Abstract: Crime prevention measures, aiming for the effective and efficient spending of public resources, rely on the empirical analysis of spatial and temporal data for public safety outcomes. We perform a variable-density cluster analysis on crime incident reports in the City of Chicago for the years 2001--2022 to investigate changes in crime share composition for hot spots of different densities. Contributing to and going beyond the existing wealth of research on criminological applications in the operational research literature, we study the evolution of crime type shares in clusters over the course of two decades and demonstrate particularly notable impacts of the COVID-19 pandemic and its associated social contact avoidance measures, as well as a dependence of these effects on the primary function of city areas. Our results also indicate differences in the relative difficulty to address specific crime types, and an analysis of spatial autocorrelations further shows variations in incident uniformity between clusters and outlier areas at different distance radii. We discuss our findings in the context of the interplay between operational research and criminal justice, the practice of hot spot policing and public safety optimization, and the factors contributing to, and challenges and risks due to, data biases as an often neglected factor in criminological applications.
- oai:arXiv.org:2512.07467v1
- stat.AP
- cs.CY
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Ben Moews
-
-
- Meta-analyses of dietary exposures must consider energy adjustment: recommendations from a meta-scientific review
- https://arxiv.org/abs/2512.07531
- arXiv:2512.07531v1 Announce Type: new
-Abstract: In observational studies of dietary exposures, the energy adjustment strategy has a critical impact on the effect being estimated. Adjusting for total energy intake or expressing the exposure as a percentage of total energy, leads to a substitution effect being estimated. This impacts the interpretation of primary studies and meta-analyses. Unless energy adjustment strategies are considered, meta-analyses may end up pooling estimates for incomparable effects. This meta-scientific review aimed to investigate the extent to which meta-analyses of dietary exposures may be pooling incomparable effects by reviewing the energy adjustment strategies. We identified all meta-analyses examining the relationship between saturated fat and fish and cardiovascular disease. The two most recent and two most cited reviews for each exposure were examined, along with all primary studies. Information on the study aims, targeted effects, and interpretations were summarized. The eight meta-analyses summarised results from 82 primary studies including 144 unique models. Only one meta-analysis explicitly considered the energy adjustment strategy of the primary studies to determine eligibility for a substitution subgroup analysis. None of the meta-analyses acknowledged that they were pooling estimates for different effects. 82% of the models from the primary studies were implicitly estimating substitution effects but this was not explicitly stated in most study aims, interpretation or conclusions. Our meta-scientific review found little evidence that the energy adjustment strategies of the primary studies were being considered in the synthesis or interpretation of evidence. Consequently, the pooled estimates reflect ill-defined quantities with unclear interpretations. We offer recommendations to improve the conduct of future meta-analyses and the quality of evidence that informs nutritional recommendations.
- oai:arXiv.org:2512.07531v1
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Natalia Ortega, Peter WG Tennant, Darren C Greenwood, Octavio Pano, Christina C Dahm, Russell J de Souza, Daniel B Ibsen, Conor J MacDonald, Deirdre K Tobias, Georgia D Tomova
+ Expectations in Expectation Propagation
+ https://arxiv.org/abs/2512.08034
+ arXiv:2512.08034v1 Announce Type: cross
+Abstract: Expectation Propagation (EP) is a widely used message-passing algorithm that decomposes a global inference problem into multiple local ones. It approximates marginal distributions (beliefs) using intermediate functions (messages). While beliefs must be proper probability distributions that integrate to one, messages may have infinite integral values. In Gaussian-projected EP, such messages take a Gaussian form and appear as if they have "negative" variances. Although allowed within the EP framework, these negative-variance messages can impede algorithmic progress.
+ In this paper, we investigate EP in linear models and analyze the relationship between the corresponding beliefs. Based on the analysis, we propose both non-persistent and persistent approaches that prevent the algorithm from being blocked by messages with infinite integral values.
+ Furthermore, by examining the relationship between the EP messages in linear models, we develop an additional approach that avoids the occurrence of messages with infinite integral values.
+ oai:arXiv.org:2512.08034v1
+ cs.IT
+ eess.SP
+ math.IT
+ stat.CO
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zilu Zhao, Fangqing Xiao, Dirk Slock
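The "negative variance" phenomenon the abstract refers to is easy to reproduce numerically: a Gaussian EP message is a ratio of Gaussians, so its precision is a difference of precisions and goes negative whenever moment matching inflates the belief variance past the cavity variance. The bimodal site factor below is our own toy choice that triggers exactly this; it is not taken from the paper's linear-model setting.

```python
import numpy as np

# Work in natural parameters: precision r = 1/v, precision-mean h = m/v.
x = np.linspace(-12, 12, 4801)
dx = x[1] - x[0]

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

cavity_m, cavity_v = 0.0, 1.0                    # cavity from the other factors
# a non-Gaussian site factor whose tilted distribution is wider than the cavity
site = 0.5 * gauss(x, -3, 0.1) + 0.5 * gauss(x, 3, 0.1)

# moment matching: project cavity * site onto a Gaussian belief (by quadrature)
tilted = gauss(x, cavity_m, cavity_v) * site
tilted /= tilted.sum() * dx
belief_m = (x * tilted).sum() * dx
belief_v = ((x - belief_m) ** 2 * tilted).sum() * dx

# message = belief / cavity: subtract natural parameters
r_msg = 1.0 / belief_v - 1.0 / cavity_v
h_msg = belief_m / belief_v - cavity_m / cavity_v
print("belief variance:", belief_v)              # ~7.5, far above the cavity's 1.0
print("message precision r:", r_msg)             # negative: an improper Gaussian
```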
- High-Dimensional Change Point Detection using Graph Spanning Ratio
- https://arxiv.org/abs/2512.07541
- arXiv:2512.07541v1 Announce Type: new
-Abstract: Inspired by graph-based methodologies, we introduce a novel graph-spanning algorithm designed to identify changes in both offline and online data across low to high dimensions. This versatile approach is applicable to Euclidean and graph-structured data with unknown distributions, while maintaining control over error probabilities. Theoretically, we demonstrate that the algorithm achieves high detection power when the magnitude of the change surpasses the lower bound of the minimax separation rate, which scales on the order of $\sqrt{nd}$. Our method outperforms other techniques in terms of accuracy for both Gaussian and non-Gaussian data. Notably, it maintains strong detection power even with small observation windows, making it particularly effective for online environments where timely and precise change detection is critical.
- oai:arXiv.org:2512.07541v1
- stat.ML
+ LUNA: Linear Universal Neural Attention with Generalization Guarantees
+ https://arxiv.org/abs/2512.08061
+ arXiv:2512.08061v1 Announce Type: cross
+Abstract: Scaling attention faces a critical bottleneck: the $\mathcal{O}(n^2)$ quadratic computational cost of softmax attention, which limits its application in long-sequence domains. While linear attention mechanisms reduce this cost to $\mathcal{O}(n)$, they typically rely on fixed random feature maps, such as random Fourier features or hand-crafted functions. This reliance on static, data-agnostic kernels creates a fundamental trade-off, forcing practitioners to sacrifice significant model accuracy for computational efficiency. We introduce \textsc{LUNA}, a kernelized linear attention mechanism that eliminates this trade-off, retaining linear cost while matching and surpassing the accuracy of quadratic attention. \textsc{LUNA} is built on the key insight that the kernel feature map itself should be learned rather than fixed a priori. By parameterizing the kernel, \textsc{LUNA} learns a feature basis tailored to the specific data and task, overcoming the expressive limitations of fixed-feature methods. \textsc{LUNA} implements this with a learnable feature map that induces a positive-definite kernel and admits a streaming form, yielding linear time and memory scaling in the sequence length. Empirical evaluations validate our approach across diverse settings. On the Long Range Arena (LRA), \textsc{LUNA} achieves state-of-the-art average accuracy among efficient Transformers under compute parity, using the same parameter count, training steps, and approximate FLOPs. \textsc{LUNA} also excels at post-hoc conversion: replacing softmax in fine-tuned BERT and ViT-B/16 checkpoints and briefly fine-tuning recovers most of the original performance, substantially outperforming fixed linearizations.
+ oai:arXiv.org:2512.08061v1
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- Youngwen Sun, Katerina Papagiannouli, Vladimir Spokoiny
+ Ashkan Shahbazi, Ping He, Ali Abbasi, Yikun Bai, Xinran Liu, Elaheh Akbari, Darian Salehi, Navid NaderiAlizadeh, Soheil Kolouri
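The core mechanism, as we read the abstract, is linear attention with a learned positive feature map that induces a positive-definite kernel and admits a streaming O(n) evaluation. A PyTorch sketch follows; the map's width and its positivity via ELU+1 are our guesses, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LearnedFeatureLinearAttention(nn.Module):
    """Linear-time attention with a learned positive feature map.
    Architecture details are our assumptions, not LUNA's exact design."""
    def __init__(self, dim, feat_dim=64):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        # learnable kernel feature map phi: R^dim -> R^feat_dim
        self.phi = nn.Sequential(nn.Linear(dim, feat_dim), nn.GELU(),
                                 nn.Linear(feat_dim, feat_dim))
        self.out = nn.Linear(dim, dim)

    def features(self, x):
        # positive features => phi(q) . phi(k) is a positive-definite kernel
        return torch.nn.functional.elu(self.phi(x)) + 1.0

    def forward(self, x):                        # x: (B, n, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        fq, fk = self.features(q), self.features(k)          # (B, n, F)
        # O(n): accumulate K-V statistics once, reuse for every query
        kv = torch.einsum('bnf,bnd->bfd', fk, v)             # (B, F, dim)
        z = fk.sum(dim=1)                                    # (B, F)
        num = torch.einsum('bnf,bfd->bnd', fq, kv)
        den = torch.einsum('bnf,bf->bn', fq, z).unsqueeze(-1)
        return self.out(num / (den + 1e-6))

attn = LearnedFeatureLinearAttention(dim=32)
x = torch.randn(2, 1024, 32)
print(attn(x).shape)                             # (2, 1024, 32), linear in n
```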
- On Conditional Independence Graph Learning From Multi-Attribute Gaussian Dependent Time Series
- https://arxiv.org/abs/2512.07557
- arXiv:2512.07557v1 Announce Type: new
-Abstract: Estimation of the conditional independence graph (CIG) of high-dimensional multivariate Gaussian time series from multi-attribute data is considered. Existing methods for graph estimation for such data are based on single-attribute models where one associates a scalar time series with each node. In multi-attribute graphical models, each node represents a random vector or vector time series. In this paper we provide a unified theoretical analysis of multi-attribute graph learning for dependent time series using a penalized log-likelihood objective function formulated in the frequency domain using the discrete Fourier transform of the time-domain data. We consider both convex (sparse-group lasso) and non-convex (log-sum and SCAD group penalties) penalty/regularization functions. We establish sufficient conditions in a high-dimensional setting for consistency (convergence of the inverse power spectral density to true value in the Frobenius norm), local convexity when using non-convex penalties, and graph recovery. We do not impose any incoherence or irrepresentability condition for our convergence results. We also empirically investigate selection of the tuning parameters based on the Bayesian information criterion, and illustrate our approach using numerical examples utilizing both synthetic and real data.
- oai:arXiv.org:2512.07557v1
- stat.ML
- cs.LG
- eess.SP
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ Cabin Layout, Seat Density, and Passenger Segmentation in Air Transport: Implications for Prices, Ancillary Revenues, and Efficiency
+ https://arxiv.org/abs/2512.08066
+ arXiv:2512.08066v1 Announce Type: cross
+Abstract: This study investigates how the layout and density of seats in aircraft cabins influence the pricing of airline tickets on domestic flights. The analysis is based on microdata from boarding passes linked to face-to-face interviews with passengers, allowing us to relate the price paid to the location on the aircraft seat map, as well as market characteristics and flight operations. Econometric models were estimated using the Post-Double-Selection LASSO (PDS-LASSO) procedure, which selects numerous controls for unobservable factors linked to commercial and operational aspects, thus enabling better identification of the effect of variables such as advance purchase, reason for travel, fuel price, market structure, and load factor, among others. The results suggest that a higher density of seat rows is associated with lower prices, reflecting economies of scale with the increase in aircraft size and gains in operational efficiency. An unexpected result was also obtained: in situations where there was no seat selection fee, passengers with more expensive tickets were often allocated middle seats due to purchasing at short notice, when the side alternatives were no longer available. This behavior helps explain the economic logic behind one of the main ancillary revenues of airlines. In addition to quantitative analysis, the study incorporates an exploratory approach to innovative cabin concepts and their possible effects on density and comfort on board.
+ oai:arXiv.org:2512.08066v1
+ eess.SY
+ cs.SY
+ econ.GN
+ q-fin.EC
+ stat.AP
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- 10.1109/OJSP.2025.3578807
- IEEE Open Journal of Signal Processing, vol. 6, pp. 705-721, 2025
- Jitendra K. Tugnait
+ 10.5281/zenodo.17860616
+ Communications in Airline Economics Research, 202117818, 2025
+ Alessandro V. M. Oliveira, Moises D. Vassallo
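PDS-LASSO itself is a standard procedure: lasso the outcome on the candidate controls, lasso the regressor of interest on the same controls, then run OLS with the union of the selected controls. A minimal sketch on simulated fare/density data (our own simulation, not the paper's boarding-pass microdata):

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(7)
n, p = 2000, 60
X = rng.standard_normal((n, p))                  # candidate controls
# X[:, 0] confounds both seat-row density and the fare paid
density = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.standard_normal(n)
price = -0.4 * density + 1.0 * X[:, 0] + 0.8 * X[:, 2] + rng.standard_normal(n)

# step 1: lasso the outcome on the controls
sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, price).coef_)
# step 2: lasso the regressor of interest on the controls
sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, density).coef_)
# step 3: OLS with the union of the two selected control sets
controls = np.union1d(sel_y, sel_d)
Z = np.column_stack([density, X[:, controls]])
fit = LinearRegression().fit(Z, price)
print("estimated density effect:", fit.coef_[0])  # close to the true -0.4
```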
- Surprisingly-early bias in forecasts for unscheduled events
- https://arxiv.org/abs/2512.07575
- arXiv:2512.07575v1 Announce Type: new
-Abstract: When a dataset contains forecasts on unscheduled events, such as natural catastrophes, outcomes may be censored or ``hidden'' since some events have not yet occurred. This article finds that this can lead to a selection bias which affects the perceived accuracy and calibration of forecasts. This selection bias can be eliminated by excluding forecasts on outcomes which have been verified surprisingly early.
- oai:arXiv.org:2512.07575v1
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Niklas V. Lehmann
+ Subcellular proteome niche discovery using semi-supervised functional clustering
+ https://arxiv.org/abs/2512.08087
+ arXiv:2512.08087v1 Announce Type: cross
+Abstract: Intracellular compartmentalization of proteins underpins their function and the metabolic processes they sustain. Various mass spectrometry-based proteomics methods (subcellular spatial proteomics) now allow high throughput subcellular protein localization. Yet, the curation, analysis and interpretation of these data remain challenging, particularly in non-model organisms where establishing reliable marker proteins is difficult, and in contexts where experimental replication and subcellular fractionation are constrained. Here, we develop FSPmix, a semi-supervised functional clustering method implemented as an open-source R package, which leverages partial annotations from a subset of marker proteins to predict protein subcellular localization from subcellular spatial proteomics data. This method explicitly assumes that protein signatures vary smoothly across subcellular fractions, enabling more robust inference under low signal-to-noise data regimes. We applied FSPmix to a subcellular proteomics dataset from a marine diatom, allowing us to assign probabilistic localizations to proteins and uncover potentially new protein functions. Altogether, this work lays the foundation for more robust statistical analysis and interpretation of subcellular proteomics datasets, particularly in understudied organisms.
+ oai:arXiv.org:2512.08087v1
+ q-bio.QM
+ q-bio.SC
+ stat.AP
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Ziyue Zheng, Loay J. Jabre, Matthew McIlvin, Mak A. Saito, Sangwon Hyun
- $\phi$-test: Global Feature Selection and Inference for Shapley Additive Explanations
- https://arxiv.org/abs/2512.07578
- arXiv:2512.07578v1 Announce Type: new
-Abstract: We propose $\phi$-test, a global feature-selection and significance procedure for black-box predictors that combines Shapley attributions with selective inference. Given a trained model and an evaluation dataset, $\phi$-test performs SHAP-guided screening and fits a linear surrogate on the screened features via a selection rule with a tractable selective-inference form. For each retained feature, it outputs a Shapley-based global score, a surrogate coefficient, and post-selection $p$-values and confidence intervals in a global feature-importance table. Experiments on real tabular regression tasks with tree-based and neural backbones suggest that $\phi$-test can retain much of the predictive ability of the original model while using only a few features and producing feature sets that remain fairly stable across resamples and backbone classes. In these settings, $\phi$-test acts as a practical global explanation layer linking Shapley-based importance summaries with classical statistical inference.
- oai:arXiv.org:2512.07578v1
- stat.ML
+ Complexity of One-Dimensional ReLU DNNs
+ https://arxiv.org/abs/2512.08091
+ arXiv:2512.08091v1 Announce Type: cross
+Abstract: We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. For randomly initialized, fully connected 1D ReLU networks (He scaling with nonzero bias) in the infinite-width limit, we prove that the expected number of linear regions grows as $\sum_{\ell = 1}^{L} n_\ell + o\left(\sum_{\ell = 1}^{L} n_\ell\right) + 1$, where $n_\ell$ denotes the number of neurons in the $\ell$-th hidden layer. We also propose a function-adaptive notion of sparsity that compares the expected regions used by the network to the minimal number needed to approximate a target within a fixed tolerance.
+ oai:arXiv.org:2512.08091v1
+ cs.LG
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh
+ Jonathan Kogan, Hayden Jananthan, Jeremy Kepner
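The claimed expected count, roughly the total number of hidden neurons plus one, can be checked empirically by counting slope changes of a He-initialized 1D ReLU net on a fine grid. Widths, bias scale, interval, and tolerances below are our own choices, and finite widths on a bounded interval only approximate the infinite-width statement.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_relu_net(widths):
    """He-initialized fully connected 1D ReLU net with nonzero biases."""
    layers, fan_in = [], 1
    for w in widths:
        W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(w, fan_in))
        b = rng.normal(0.0, 0.1, size=w)
        layers.append((W, b))
        fan_in = w
    return layers, rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(1, fan_in))

def count_regions(layers, W_out, lo=-5.0, hi=5.0, n_grid=100001):
    """Count linear regions by counting slope changes on a fine grid."""
    h = np.linspace(lo, hi, n_grid)[None, :]
    for W, b in layers:
        h = np.maximum(W @ h + b[:, None], 0.0)
    y = (W_out @ h).ravel()
    slopes = np.diff(y) / ((hi - lo) / (n_grid - 1))
    return 1 + int(np.sum(~np.isclose(slopes[1:], slopes[:-1], atol=1e-8)))

widths = [32, 32, 32]
counts = [count_regions(*random_relu_net(widths)) for _ in range(20)]
print("mean empirical regions:", np.mean(counts))
print("sum of widths + 1     :", sum(widths) + 1)
```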
- Location and scatter halfspace median under {\alpha}-symmetric distributions
- https://arxiv.org/abs/2512.07634
- arXiv:2512.07634v1 Announce Type: new
-Abstract: In a landmark result, Chen et al. (2018) showed that multivariate medians induced by halfspace depth attain the minimax optimal convergence rate under Huber contamination and elliptical symmetry, for both location and scatter estimation. We extend some of these findings to the broader family of {\alpha}-symmetric distributions, which includes both elliptically symmetric and multivariate heavy-tailed distributions. For location estimation, we establish an upper bound on the estimation error of the location halfspace median under the Huber contamination model. An analogous result for the standard scatter halfspace median matrix is feasible only under the assumption of elliptical symmetry, as ellipticity is deeply embedded in the definition of scatter halfspace depth. To address this limitation, we propose a modified scatter halfspace depth that better accommodates {\alpha}-symmetric distributions, and derive an upper bound for the corresponding {\alpha}-scatter median matrix. Additionally, we identify several key properties of scatter halfspace depth for {\alpha}-symmetric distributions.
- oai:arXiv.org:2512.07634v1
- math.ST
- stat.ME
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ Branching Fixed Effects: A Proposal for Communicating Uncertainty
+ https://arxiv.org/abs/2512.08101
+ arXiv:2512.08101v1 Announce Type: cross
+Abstract: Economists often rely on estimates of linear fixed effects models developed by other teams of researchers. Assessing the uncertainty in these estimates can be challenging. I propose a form of sample splitting for network data that breaks two-way fixed effects estimates into statistically independent branches, each of which provides an unbiased estimate of the parameters of interest. These branches facilitate uncertainty quantification, moment estimation, and shrinkage. Algorithms are developed for efficiently extracting branches from large datasets. I illustrate these techniques using a benchmark dataset from Veneto, Italy that has been widely used to study firm wage effects.
+ oai:arXiv.org:2512.08101v1
+ econ.EM
+ stat.AP
+ stat.CO
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
- 10.1080/10485252.2025.2600417
- Filip Bo\v{c}inec, Stanislav Nagy
+ Patrick Kline
- Symmetric Vaccine Efficacy
- https://arxiv.org/abs/2512.07739
- arXiv:2512.07739v1 Announce Type: new
-Abstract: Traditional measures of vaccine efficacy (VE) are inherently asymmetric, constrained above by $1$ but unbounded below. As a result, VE estimates and corresponding confidence intervals can extend far below zero, making interpretation difficult and potentially obscuring whether the apparent effect reflects true harm or simply statistical uncertainty. The proposed symmetric vaccine efficacy (SVE) is a bounded and interpretable alternative to VE that maintains desirable statistical properties while resolving these asymmetries. SVE is defined as a symmetric transformation of infection risks, with possible values within $[-1, 1]$, providing a common scale for both beneficial and harmful vaccine effects. This paper describes the relationship between SVE and traditional VE, considers inference about SVE, and illustrates the utility of the proposed measure by reanalyzing data from a randomized trial of a candidate HIV vaccine. Open-source tools for computing estimates of SVE and corresponding confidence intervals are available in R through the sve package.
- oai:arXiv.org:2512.07739v1
- stat.ME
+ Any Old Tom, Dick or Harry: The Citation Impact of First Name Genderedness
+ https://arxiv.org/abs/2512.08219
+ arXiv:2512.08219v1 Announce Type: cross
+Abstract: This paper attempts a first analysis of citation distributions based on the genderedness of authors' first names. Following the extraction of first name and sex data from all human entity triplets contained in Wikidata, a first name genderedness table is created from the compiled sex frequencies and then merged with bibliometric data from eponymous, US-affiliated authors. Comparisons of various cumulative distributions show that fluctuations in citation concentration are highest at the opposite ends of the genderedness spectrum, as authors with very feminine and very masculine first names respectively receive a lower and a higher share of citations for every article published, irrespective of their contribution role.
+ oai:arXiv.org:2512.08219v1
+ cs.DL
+ stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lucy D'Agostino McGowan, Sarah C. Lotspeich, Michael G. Hudgens
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Maxime Holmberg Sainte-Marie, Vincent Larivi\`ere
- Physics-Informed Neural Networks for Source Inversion and Parameters Estimation in Atmospheric Dispersion
- https://arxiv.org/abs/2512.07755
- arXiv:2512.07755v1 Announce Type: new
-Abstract: Recent studies have shown the success of deep learning in solving forward and inverse problems in engineering and scientific computing domains, such as physics-informed neural networks (PINNs). In the fields of atmospheric science and environmental monitoring, estimating emission source locations is a central task that further relies on multiple model parameters that dictate velocity profiles and diffusion parameters. Estimating these parameters at the same time as emission sources from scarce data is a difficult task. In this work, we achieve this by leveraging the flexibility and generality of PINNs. We use a weighted adaptive method based on the neural tangent kernels to solve a source inversion problem with parameter estimation on the 2D and 3D advection-diffusion equations with unknown velocity and diffusion coefficients that may vary in space and time. Our proposed weighted adaptive method is presented as an extension of PINNs for forward PDE problems to a highly ill-posed source inversion and parameter estimation problem. The key idea behind our methodology is to attempt the joint recovery of the solution, the sources along with the unknown parameters, thereby using the underlying partial differential equation as a constraint that couples multiple unknown functional parameters, leading to more efficient use of the limited information in the measurements. We present various numerical experiments, using different types of measurements that model practical engineering systems, to show that our proposed method is indeed successful and robust to additional noise in the measurements.
- oai:arXiv.org:2512.07755v1
- stat.ML
+ Low Rank Support Quaternion Matrix Machine
+ https://arxiv.org/abs/2512.08327
+ arXiv:2512.08327v1 Announce Type: cross
+Abstract: Input features are conventionally represented as vectors, matrices, or third-order tensors in the real field for color image classification. Inspired by the success of quaternion data modeling for color images in image recovery and denoising tasks, we propose a novel method for color image classification, named the Low-rank Support Quaternion Matrix Machine (LSQMM), in which the RGB channels are treated as pure quaternions to effectively preserve the intrinsic coupling relationships among channels via the quaternion algebra. To promote the low-rank structures resulting from strongly correlated color channels, a quaternion nuclear norm regularization term, serving as a natural extension of the conventional matrix nuclear norm to the quaternion domain, is added to the hinge loss in our LSQMM model. An Alternating Direction Method of Multipliers (ADMM)-based iterative algorithm is designed to solve the proposed quaternion optimization model efficiently. Experimental results on multiple color image classification datasets demonstrate that our proposed approach exhibits advantages in classification accuracy, robustness and computational efficiency, compared to several state-of-the-art methods using support vector machines, support matrix machines, and support tensor machines.
+ oai:arXiv.org:2512.08327v1
+ cs.CV
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ math.OC
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Brenda Anague, Bamdad Hosseini, Issa Karambal, Jean Medard Ngnotchouye
+ Wang Chen, Ziyan Luo, Shuangyue Wang
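The quaternion nuclear norm at the heart of LSQMM is computable through the standard complex adjoint representation, which reduces it to an ordinary complex SVD. A short NumPy sketch follows; the random stand-in image is ours, and the RGB channels enter as the pure quaternion parts, as in the abstract.

```python
import numpy as np

def quaternion_nuclear_norm(Q0, Q1, Q2, Q3):
    """Nuclear norm of the quaternion matrix Q = Q0 + Q1*i + Q2*j + Q3*k,
    via the complex adjoint chi(Q) = [[A, B], [-conj(B), conj(A)]] with
    A = Q0 + Q1*i and B = Q2 + Q3*i; each singular value of Q appears
    twice among the singular values of chi(Q)."""
    A = Q0 + 1j * Q1
    B = Q2 + 1j * Q3
    chi = np.block([[A, B], [-np.conj(B), np.conj(A)]])
    return 0.5 * np.linalg.svd(chi, compute_uv=False).sum()

rng = np.random.default_rng(4)
R, G, B = rng.random((3, 32, 32))                # toy color channels
# pure-quaternion encoding of an RGB image: zero real part, RGB on i, j, k
print("||Q||_* =", quaternion_nuclear_norm(np.zeros((32, 32)), R, G, B))
```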
- Distribution-informed Online Conformal Prediction
- https://arxiv.org/abs/2512.07770
- arXiv:2512.07770v1 Announce Type: new
-Abstract: Conformal prediction provides a pivotal and flexible technique for uncertainty quantification by constructing prediction sets with a predefined coverage rate. Many online conformal prediction methods have been developed to address data distribution shifts in fully adversarial environments, resulting in overly conservative prediction sets. We propose Conformal Optimistic Prediction (COP), an online conformal prediction algorithm incorporating underlying data pattern into the update rule. Through estimated cumulative distribution function of non-conformity scores, COP produces tighter prediction sets when predictable pattern exists, while retaining valid coverage guarantees even when estimates are inaccurate. We establish a joint bound on coverage and regret, which further confirms the validity of our approach. We also prove that COP achieves distribution-free, finite-sample coverage under arbitrary learning rates and can converge when scores are $i.i.d.$. The experimental results also show that COP can achieve valid coverage and construct shorter prediction intervals than other baselines.
- oai:arXiv.org:2512.07770v1
- stat.ML
+ A Multivariate Bernoulli-Based Sampling Method for Multi-Label Data with Application to Meta-Research
+ https://arxiv.org/abs/2512.08371
+ arXiv:2512.08371v1 Announce Type: cross
+Abstract: Datasets may contain observations with multiple labels. If the labels are not mutually exclusive and vary greatly in frequency, it is challenging to obtain a sample that includes enough observations with the scarcer labels to support inferences about those labels while deviating from the population frequencies in a known manner. In this paper, we consider a multivariate Bernoulli distribution as the underlying distribution of a multi-label problem. We present a novel sampling algorithm that takes label dependencies into account. It uses observed label frequencies to estimate the multivariate Bernoulli distribution parameters and calculate weights for each label combination. This approach ensures that the weighted sampling acquires the target distribution characteristics while accounting for label dependencies. We applied this approach to a sample of research articles from Web of Science labeled with 64 biomedical topic categories. We aimed to preserve category frequency order, reduce frequency differences between the most and least common categories, and account for category dependencies. This approach produced a more balanced sub-sample, enhancing the representation of minority categories.
+ oai:arXiv.org:2512.08371v1
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- new
+ stat.ML
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
- Dongjian Hu, Junxi Wu, Shu-Tao Xia, Changliang Zou
+ Simon Chung, Colby J. Vorland, Donna L. Maney, Andrew W. Brown
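The weighting scheme can be sketched from the abstract: estimate the empirical distribution over label combinations, define target probabilities that compress the frequency range while preserving the frequency order, and sample observations with weights proportional to target/empirical for their combination. The square-root tempering below is our stand-in for the paper's target construction, and the synthetic labels are ours.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 20000, 6
# synthetic multi-label data with dependent, very unbalanced labels
base = rng.random((n, d)) < np.array([0.5, 0.3, 0.15, 0.08, 0.04, 0.01])
labels = base | (base[:, [0]] & (rng.random((n, d)) < 0.1))

# empirical multivariate Bernoulli: probabilities of observed label combinations
_, inverse, counts = np.unique(labels, axis=0, return_inverse=True, return_counts=True)
p_emp = counts / n
# target: temper the combination probabilities to compress the frequency
# range while preserving the frequency order (our stand-in construction)
p_target = p_emp ** 0.5
p_target /= p_target.sum()

w = (p_target / p_emp)[inverse]                  # per-observation weights
idx = rng.choice(n, size=2000, replace=False, p=w / w.sum())

print("label freq before:", labels.mean(axis=0).round(3))
print("label freq after :", labels[idx].mean(axis=0).round(3))  # rarer labels boosted
```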
- Non-Asymptotic Error Bounds for Causally Conditioned Directed Information Rates of Gaussian Sequences
- https://arxiv.org/abs/2512.06238
- arXiv:2512.06238v1 Announce Type: cross
-Abstract: Directed information and its causally conditioned variations are often used to measure causal influences between random processes. In practice, these quantities must be measured from data. Non-asymptotic error bounds for these estimates are known for sequences over finite alphabets, but less is known for real-valued data. This paper examines the case in which the data are sequences of Gaussian vectors. We provide an explicit formula for causally conditioned directed information rate based on optimal prediction and define an estimator based on this formula. We show that our estimator gives an error of order $O\left(N^{-1/2}\log(N)\right)$ with high probability, where $N$ is the total sample size.
- oai:arXiv.org:2512.06238v1
+ A Distribution Testing Approach to Clustering Distributions
+ https://arxiv.org/abs/2512.08376
+ arXiv:2512.08376v1 Announce Type: cross
+Abstract: We study the following distribution clustering problem: Given a hidden partition of $k$ distributions into two groups, such that the distributions within each group are the same, and the two distributions associated with the two clusters are $\varepsilon$-far in total variation, the goal is to recover the partition. We establish upper and lower bounds on the sample complexity for two fundamental cases: (1) when one of the cluster's distributions is known, and (2) when both are unknown. Our upper and lower bounds characterize the sample complexity's dependence on the domain size $n$, number of distributions $k$, size $r$ of one of the clusters, and distance $\varepsilon$. In particular, we achieve tightness with respect to $(n,k,r,\varepsilon)$ (up to an $O(\log k)$ factor) for all regimes.
+ oai:arXiv.org:2512.08376v1
+ cs.DS
+ cs.IT
+ math.IT
+ math.ST
+ stat.ML
+ stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Yuping Zheng, Andrew Lamperski
-
-
- Assessing the Information Content of Individual Spikes in Population-Level Models of Neural Spiking Activity
- https://arxiv.org/abs/2512.06280
- arXiv:2512.06280v1 Announce Type: cross
-Abstract: In the last decade, there have been major advances in clusterless decoding algorithms for neural data analysis. These algorithms use the theory of marked point processes to describe the joint activity of many neurons simultaneously, without the need for spike sorting. In this study, we examine information-theoretic metrics to analyze the information extracted from each observed spike under such clusterless models. In an analysis of spatial coding in the rat hippocampus, we compared the entropy reduction between spike-sorted and clusterless models for both individual spikes observed in isolation and when the prior information from all previously observed spikes is accounted for. Our analysis demonstrates that low-amplitude spikes, which are difficult to cluster and often left out of spike sorting, provide reduced information compared to sortable, high-amplitude spikes when considered in isolation, but the two provide similar levels of information when considering all the prior information available from past spiking. These findings demonstrate the value of combining information measures with state-space modeling and yield new insights into the underlying mechanisms of neural computation.
- oai:arXiv.org:2512.06280v1
- q-bio.NC
- q-bio.QM
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Azar Ghahari, Uri T. Eden
+ Gunjan Kumar, Yash Pote, Jonathan Scarlett
- Theoretical Compression Bounds for Wide Multilayer Perceptrons
- https://arxiv.org/abs/2512.06288
- arXiv:2512.06288v1 Announce Type: cross
-Abstract: Pruning and quantization techniques have been broadly successful in reducing the number of parameters needed for large neural networks, yet theoretical justification for their empirical success falls short. We consider a randomized greedy compression algorithm for pruning and quantization post-training and use it to rigorously show the existence of pruned/quantized subnetworks of multilayer perceptrons (MLPs) with competitive performance. We further extend our results to structured pruning of MLPs and convolutional neural networks (CNNs), thus providing a unified analysis of pruning in wide networks. Our results are free of data assumptions, and showcase a tradeoff between compressibility and network width. The algorithm we consider bears some similarities with Optimal Brain Damage (OBD) and can be viewed as a post-training randomized version of it. The theoretical results we derive bridge the gap between theory and application for pruning/quantization, and provide a justification for the empirical success of compression in wide multilayer perceptrons.
- oai:arXiv.org:2512.06288v1
+ Minimax and Bayes Optimal Adaptive Experimental Design for Treatment Choice
+ https://arxiv.org/abs/2512.08513
+ arXiv:2512.08513v1 Announce Type: cross
+Abstract: We consider an adaptive experiment for treatment choice and design a minimax and Bayes optimal adaptive experiment with respect to regret. Given binary treatments, the experimenter's goal is to choose the treatment with the highest expected outcome through an adaptive experiment, in order to maximize welfare. We consider adaptive experiments that consist of two phases, the treatment allocation phase and the treatment choice phase. The experiment starts with the treatment allocation phase, where the experimenter allocates treatments to experimental subjects to gather observations. During this phase, the experimenter can adaptively update the allocation probabilities using the observations obtained in the experiment. After the allocation phase, the experimenter proceeds to the treatment choice phase, where one of the treatments is selected as the best. For this adaptive experimental procedure, we propose an adaptive experiment that splits the treatment allocation phase into two stages, where we first estimate the standard deviations and then allocate each treatment proportionally to its standard deviation. We show that this experiment, often referred to as Neyman allocation, is minimax and Bayes optimal in the sense that its regret upper bounds exactly match the lower bounds that we derive. To show this optimality, we derive minimax and Bayes lower bounds for the regret using change-of-measure arguments. Then, we evaluate the corresponding upper bounds using the central limit theorem and large deviation bounds.
+ oai:arXiv.org:2512.08513v1
+ econ.EM
+ cs.LG
+ math.ST
+ stat.ME
+ stat.ML
+ stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Houssam El Cheairi, David Gamarnik, Rahul Mazumder
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Masahiro Kato
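The proposed two-stage design is concrete enough to simulate: stage one splits the budget evenly across arms to estimate the outcome standard deviations, stage two allocates the remainder proportionally to those estimates (Neyman allocation), and the arm with the larger overall sample mean is chosen. All stage sizes and the Gaussian outcome model below are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
mu = np.array([0.0, 0.2])            # true means; arm 1 is the better treatment
sigma = np.array([1.0, 3.0])         # unequal noise is where Neyman allocation helps
T, T1 = 2000, 400                    # total budget and stage-1 budget

def run_experiment():
    # stage 1: uniform allocation, used to estimate standard deviations
    s1 = [rng.normal(mu[a], sigma[a], T1 // 2) for a in (0, 1)]
    sd_hat = np.array([np.std(s, ddof=1) for s in s1])
    # stage 2: allocate the remaining budget proportionally to sd_hat
    n2 = ((T - T1) * sd_hat / sd_hat.sum()).astype(int)
    s2 = [rng.normal(mu[a], sigma[a], n2[a]) for a in (0, 1)]
    # treatment choice: pick the arm with the larger overall sample mean
    means = [np.concatenate([s1[a], s2[a]]).mean() for a in (0, 1)]
    return int(np.argmax(means))

picks = [run_experiment() for _ in range(2000)]
print("P(choose the better arm):", np.mean(picks))
```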
- Interpretable Neural Approximation of Stochastic Reaction Dynamics with Guaranteed Reliability
- https://arxiv.org/abs/2512.06294
- arXiv:2512.06294v1 Announce Type: cross
-Abstract: Stochastic Reaction Networks (SRNs) are a fundamental modeling framework for systems ranging from chemical kinetics and epidemiology to ecological and synthetic biological processes. A central computational challenge is the estimation of expected outputs across initial conditions and times, a task that is rarely solvable analytically and becomes computationally prohibitive with current methods such as Finite State Projection or the Stochastic Simulation Algorithm. Existing deep learning approaches offer empirical scalability, but provide neither interpretability nor reliability guarantees, limiting their use in scientific analysis and in applications where model outputs inform real-world decisions. Here we introduce DeepSKA, a neural framework that jointly achieves interpretability, guaranteed reliability, and substantial computational gains. DeepSKA yields mathematically transparent representations that generalise across states, times, and output functions, and it integrates this structure with a small number of stochastic simulations to produce unbiased, provably convergent, and dramatically lower-variance estimates than classical Monte Carlo. We demonstrate these capabilities across nine SRNs, including nonlinear and non-mass-action models with up to ten species, where DeepSKA delivers accurate predictions and orders-of-magnitude efficiency improvements. This interpretable and reliable neural framework offers a principled foundation for developing analogous methods for other Markovian systems, including stochastic differential equations.
- oai:arXiv.org:2512.06294v1
- q-bio.MN
+ DS FedProxGrad: Asymptotic Stationarity Without Noise Floor in Fair Federated Learning
+ https://arxiv.org/abs/2512.08671
+ arXiv:2512.08671v1 Announce Type: cross
+Abstract: Recent work \cite{arifgroup} introduced Federated Proximal Gradient \textbf{(\texttt{FedProxGrad})} for solving non-convex composite optimization problems in group fair federated learning. However, the original analysis established convergence only to a \textit{noise-dominated neighborhood of stationarity}, with explicit dependence on a variance-induced noise floor. In this work, we provide an improved asymptotic convergence analysis for a generalized \texttt{FedProxGrad}-type analytical framework with inexact local proximal solutions and explicit fairness regularization. We call this extended analytical framework \textbf{DS \texttt{FedProxGrad}} (Decay Step Size \texttt{FedProxGrad}). Under a Robbins-Monro step-size schedule \cite{robbins1951stochastic} and a mild decay condition on local inexactness, we prove that $\liminf_{r\to\infty} \mathbb{E}[\|\nabla F(\mathbf{x}^r)\|^2] = 0$, i.e., the algorithm is asymptotically stationary and the convergence rate does not depend on a variance-induced noise floor.
+ oai:arXiv.org:2512.08671v1
+ cs.LG
- math.PR
- q-bio.QM
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Quentin Badolle, Arthur Theuer, Zhou Fang, Ankit Gupta, Mustafa Khammash
-
-
- Entropic Confinement and Mode Connectivity in Overparameterized Neural Networks
- https://arxiv.org/abs/2512.06297
- arXiv:2512.06297v1 Announce Type: cross
-Abstract: Modern neural networks exhibit a striking property: basins of attraction in the loss landscape are often connected by low-loss paths, yet optimization dynamics generally remain confined to a single convex basin and rarely explore intermediate points. We resolve this paradox by identifying entropic barriers arising from the interplay between curvature variations along these paths and noise in optimization dynamics. Empirically, we find that curvature systematically rises away from minima, producing effective forces that bias noisy dynamics back toward the endpoints - even when the loss remains nearly flat. These barriers persist longer than energetic barriers, shaping the late-time localization of solutions in parameter space. Our results highlight the role of curvature-induced entropic forces in governing both connectivity and confinement in deep learning landscapes.
- oai:arXiv.org:2512.06297v1
- cs.LG
- cond-mat.dis-nn
- cond-mat.stat-mech
- cs.AI
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Luca Di Carlo, Chase Goddard, David J. Schwab
-
-
- Zero Generalization Error Theorem for Random Interpolators via Algebraic Geometry
- https://arxiv.org/abs/2512.06347
- arXiv:2512.06347v1 Announce Type: cross
-Abstract: We theoretically demonstrate that the generalization error of interpolators for machine learning models under teacher-student settings becomes 0 once the number of training samples exceeds a certain threshold. Understanding the high generalization ability of large-scale models such as deep neural networks (DNNs) remains one of the central open problems in machine learning theory. While recent theoretical studies have attributed this phenomenon to the implicit bias of stochastic gradient descent (SGD) toward well-generalizing solutions, empirical evidence indicates that it primarily stems from properties of the model itself. Specifically, even randomly sampled interpolators, which are parameters that achieve zero training error, have been observed to generalize effectively. In this study, under a teacher-student framework, we prove that the generalization error of randomly sampled interpolators becomes exactly zero once the number of training samples exceeds a threshold determined by the geometric structure of the interpolator set in parameter space. As a proof technique, we leverage tools from algebraic geometry to mathematically characterize this geometric structure.
- oai:arXiv.org:2512.06347v1
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Naoki Yoshida, Isao Ishikawa, Masaaki Imaizumi
-
-
- Optimizing Optimizers for Fast Gradient-Based Learning
- https://arxiv.org/abs/2512.06370
- arXiv:2512.06370v1 Announce Type: cross
-Abstract: We lay the theoretical foundation for automating optimizer design in gradient-based learning. Based on the greedy principle, we formulate the problem of designing optimizers as maximizing the instantaneous decrease in loss. By treating an optimizer as a function that translates loss gradient signals into parameter motions, the problem reduces to a family of convex optimization problems over the space of optimizers. Solving these problems under various constraints not only recovers a wide range of popular optimizers as closed-form solutions, but also produces the optimal hyperparameters of these optimizers with respect to the problems at hand. This enables a systematic approach to design optimizers and tune their hyperparameters according to the gradient statistics that are collected during the training process. Furthermore, this optimization of optimization can be performed dynamically during training.
- oai:arXiv.org:2512.06370v1
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jaerin Lee, Kyoung Mu Lee
-
-
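To make the greedy principle in the "Optimizing Optimizers" abstract above concrete, here is a minimal Python sketch (not the paper's algorithm; the learning rate and the choice of norms are illustrative): maximizing the instantaneous decrease -g.d over a norm ball has closed-form solutions that recover familiar update rules.

    import numpy as np

    # Steepest descent under a norm constraint: argmax_{||d|| <= lr} (-g . d).
    # The L2 ball recovers normalized gradient descent; the L-infinity ball
    # recovers sign descent (the direction used by sign-SGD-style methods).

    def step_l2(g, lr=0.1):
        return -lr * g / (np.linalg.norm(g) + 1e-12)

    def step_linf(g, lr=0.1):
        return -lr * np.sign(g)

    g = np.array([3.0, -4.0])
    print(step_l2(g), step_linf(g))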
- Detrended cross-correlations and their random matrix limit: an example from the cryptocurrency market
- https://arxiv.org/abs/2512.06473
- arXiv:2512.06473v1 Announce Type: cross
-Abstract: Correlations in complex systems are often obscured by nonstationarity, long-range memory, and heavy-tailed fluctuations, which limit the usefulness of traditional covariance-based analyses. To address these challenges, we construct scale- and fluctuation-dependent correlation matrices using the multifractal detrended cross-correlation coefficient $\rho_r$ that selectively emphasizes fluctuations of different amplitudes. We examine the spectral properties of these detrended correlation matrices and compare them to the spectral properties of the matrices calculated in the same way from synthetic Gaussian and $q$-Gaussian signals. Our results show that detrending, heavy tails, and the fluctuation-order parameter $r$ jointly produce spectra which substantially depart from the random case even in the absence of cross-correlations in the time series. Applying this framework to one-minute returns of 140 major cryptocurrencies from 2021-2024 reveals robust collective modes, including a dominant market factor and several sectoral components whose strength depends on the analyzed scale and fluctuation order. After filtering out the market mode, the empirical eigenvalue bulk aligns closely with the limit of random detrended cross-correlations, enabling clear identification of structurally significant outliers. Overall, the study provides a refined spectral baseline for detrended cross-correlations and offers a promising tool for distinguishing genuine interdependencies from noise in complex, nonstationary, heavy-tailed systems.
- oai:arXiv.org:2512.06473v1
- q-fin.ST
- cs.CE
- physics.data-an
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- 10.3390/e27121236
- Entropy 2025, 27(12), 1236
- Stanis{\l}aw Dro\.zd\.z, Pawe{\l} Jarosz, Jaros{\l}aw Kwapie\'n, Maria Skupie\'n, Marcin W\k{a}torek
-
-
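As a point of entry to the methodology above, the following is a minimal sketch of the plain detrended cross-correlation coefficient at a single window scale; the paper's $\rho_r$ additionally selects fluctuations by the order parameter $r$, which this toy omits, and the window size and linear detrending order are assumptions.

    import numpy as np

    def dcca_rho(x, y, s):
        # Detrended cross-correlation coefficient at window scale s.
        X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())  # profiles
        n, t = len(X) // s, np.arange(s)
        fxx = fyy = fxy = 0.0
        for k in range(n):
            xs, ys = X[k*s:(k+1)*s], Y[k*s:(k+1)*s]
            # detrend each window with an order-1 polynomial fit
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            fxx += (rx**2).mean(); fyy += (ry**2).mean(); fxy += (rx*ry).mean()
        return fxy / np.sqrt(fxx * fyy)

    rng = np.random.default_rng(0)
    z = rng.standard_normal(10_000)
    x = z + rng.standard_normal(10_000)
    y = z + rng.standard_normal(10_000)
    print(dcca_rho(x, y, s=100))   # correlated pair -> rho well above 0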
- AI as "Co-founder": GenAI for Entrepreneurship
- https://arxiv.org/abs/2512.06506
- arXiv:2512.06506v1 Announce Type: cross
-Abstract: This paper studies whether, how, and for whom generative artificial intelligence (GenAI) facilitates firm creation. Our identification strategy exploits the November 2022 release of ChatGPT as a global shock that lowered start-up costs and leverages variations across geo-coded grids with differential pre-existing AI-specific human capital. Using high-resolution and universal data on Chinese firm registrations by the end of 2024, we find that grids with stronger AI-specific human capital experienced a sharp surge in new firm formation -- driven entirely by small firms, contributing to 6.0% of overall national firm entry. Large-firm entry declines, consistent with a shift toward leaner ventures. New firms are smaller in capital, shareholder number, and founding team size, especially among small firms. The effects are strongest among firms with potential AI applications, weaker financing needs, and among first-time entrepreneurs. Overall, our results highlight that GenAI serves as a pro-competitive force by disproportionately boosting small-firm entry.
- oai:arXiv.org:2512.06506v1
- econ.GN
- cs.AI
- q-fin.EC
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Junhui Jeff Cai, Xian Gu, Liugang Sheng, Mengjia Xia, Linda Zhao, Wu Zhu
+ Huzaifa Arif
- Diagnosis-based mortality prediction for intensive care unit patients via transfer learning
- https://arxiv.org/abs/2512.06511
- arXiv:2512.06511v1 Announce Type: cross
-Abstract: In the intensive care unit, the underlying causes of critical illness vary substantially across diagnoses, yet prediction models accounting for diagnostic heterogeneity have not been systematically studied. To address this gap, we evaluate transfer learning approaches for diagnosis-specific mortality prediction and apply both GLM- and XGBoost-based models to the eICU Collaborative Research Database. Our results demonstrate that transfer learning consistently outperforms models trained only on diagnosis-specific data and those using a well-known ICU severity-of-illness score, i.e., APACHE IVa, alone, while also achieving better calibration than models trained on the pooled data. Our findings also suggest that the Youden cutoff is a more appropriate decision threshold than the conventional 0.5 for binary outcomes, and that transfer learning maintains consistently high predictive performance across various cutoff criteria.
- oai:arXiv.org:2512.06511v1
- cs.LG
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Mengqi Xu, Subha Maity, Joel Dubin
-
-
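The Youden cutoff mentioned in the ICU abstract above is straightforward to compute from an ROC curve; a minimal sketch with synthetic scores (the labels, scores, and implied model are hypothetical):

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, 500)                 # hypothetical outcomes
    y_score = np.clip(0.35 + 0.3 * y_true
                      + 0.2 * rng.standard_normal(500), 0, 1)

    # Youden's J = TPR - FPR; its maximizer over thresholds is the cutoff
    fpr, tpr, thr = roc_curve(y_true, y_score)
    best = thr[np.argmax(tpr - fpr)]
    print(f"Youden cutoff: {best:.3f} (vs. the conventional 0.5)")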
- Market Reactions and Information Spillovers in Bank Mergers: A Multi-Method Analysis of the Japanese Banking Sector
- https://arxiv.org/abs/2512.06550
- arXiv:2512.06550v1 Announce Type: cross
-Abstract: Major bank mergers and acquisitions (M&A) transform the financial market structure, but their valuation and spillover effects remain open to question. This study examines the market reaction to two M&A events: the 2005 creation of Mitsubishi UFJ Financial Group following the Financial Big Bang in Japan, and the 2018 merger involving Resona Holdings after the global financial crisis. The multi-method analysis in this research combines several distinct methods to explore these M&A events. An event study using the market model, the capital asset pricing model (CAPM), and the Fama-French three-factor model is implemented to estimate cumulative abnormal returns (CAR) for valuation purposes. Vector autoregression (VAR) models are used to test for Granger causality and map dynamic effects using impulse response functions (IRFs) to investigate spillovers. Propensity score matching (PSM) helps provide a causal estimate of the average treatment effect on the treated (ATT). The analysis detected a significant positive market reaction to the mergers. The findings also suggest the presence of prolonged positive spillovers to other banks, which may indicate a synergistic effect among Japanese banks. Combining these methods provides a unique perspective on M&A events in the Japanese banking sector, offering valuable insights for investors, managers, and regulators concerned with market efficiency and systemic stability.
- oai:arXiv.org:2512.06550v1
- q-fin.CP
- econ.EM
- q-fin.PM
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Haibo Wang, Takeshi Tsuyuguchi
-
-
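Of the methods combined in the banking study above, the event-study component is the simplest to sketch; below is a market-model-only cumulative abnormal return (CAR) on simulated returns (the windows and data are illustrative; the paper also uses CAPM and Fama-French factors):

    import numpy as np

    def car_market_model(r_stock, r_market, est_win, event_win):
        # market model r_it = alpha + beta * r_mt + eps, fit by OLS on the
        # estimation window; CAR = cumulated abnormal return in the event window
        beta, alpha = np.polyfit(r_market[est_win], r_stock[est_win], 1)
        abnormal = r_stock[event_win] - (alpha + beta * r_market[event_win])
        return abnormal.cumsum()[-1]

    rng = np.random.default_rng(2)
    rm = 0.01 * rng.standard_normal(300)
    rs = 0.0002 + 1.2 * rm + 0.01 * rng.standard_normal(300)
    rs[260:266] += 0.01                        # stylized merger reaction
    print(car_market_model(rs, rm, slice(0, 250), slice(260, 266)))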
- The Online Discourse of Virtual Reality and Anxiety
- https://arxiv.org/abs/2512.06656
- arXiv:2512.06656v1 Announce Type: cross
-Abstract: VR is increasingly used in the treatment of clinical concerns such as generalized anxiety disorder and social anxiety, and has created additional pathways to support patient well-being and care. Understanding online discussion of what users think about this technology may further support its efficacy. The purpose of this study was to employ a corpus linguistic methodology to identify the words and word networks that shed light on the online discussion of virtual reality and anxiety. Using corpus linguistics, frequently used words in discussion along with collocation were identified by utilizing Sketch Engine software. The results of the study, based upon the English Trends corpus, identified VR, Oculus, and headset as the most frequently discussed within the VR and anxiety subcorpus. These results point to the development of the virtual system, along with the physical apparatus that makes viewing and engaging with the virtual environment possible. Additional results point to collocation of prepositional phrases such as of virtual reality, in virtual reality, and for virtual reality relating to the design, experience, and development, respectively. These findings offer a new perspective on how VR and anxiety together are discussed in general discourse and offer pathways for future opportunities to support counseling needs through development and accessibility. Keywords: anxiety disorders, corpus linguistics, Sketch Engine, virtual reality (VR)
- oai:arXiv.org:2512.06656v1
- cs.CL
- stat.CO
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Kwabena Yamoah, Cass Dykeman
-
-
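Collocation extraction of the kind performed with Sketch Engine in the study above can be approximated with simple window counts; a toy sketch (the two-document corpus is hypothetical, and Sketch Engine itself ranks collocates with association scores such as logDice, which this omits):

    from collections import Counter

    docs = ["the vr headset reduced my social anxiety",
            "oculus vr exposure therapy for anxiety"]    # hypothetical corpus
    window, pairs = 3, Counter()
    for doc in docs:
        toks = doc.split()
        for i, w in enumerate(toks):
            for v in toks[i + 1:i + 1 + window]:         # co-occurrence window
                pairs[(w, v)] += 1
    print(pairs.most_common(3))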
- Application of Time-Controlled Critical Point in Pressure Reducing Valves. A Case Study in North Spain
- https://arxiv.org/abs/2512.06735
- arXiv:2512.06735v1 Announce Type: cross
-Abstract: Potable water utilities are currently making great efforts to reduce leakage rates and assure long-term supply to the population due to the challenges of climate change, growing population and water shortage scenarios that they have faced in recent years. One of the most employed methods to reduce leakage includes the installation of pressure-reducing valves along the water distribution network and the utilization of pressure management schemes. Pressure management includes different types of control models, which are applied according to the requirements of each site. The most advanced and sophisticated scheme is critical point control, which relies on a flow signal from a measuring device or online communication between the critical point and the valve. This paper proposes the utilization of a seasonal autoregressive integrated moving average, or SARIMA, model to correlate pressure at the outlet of the valve and pressure at the critical point of the area supplied, aiming to set a fixed pressure at the critical point. The SARIMA model is developed according to historical data logged in the field and then validated. Later, the SARIMA model is tested on a real location in the village of Noja, Spain. The analysis of the field test results proves that the proposed model is feasible, since there is no significant difference between the target values set at the critical point and the real values measured in the field. The research proves that the SARIMA model can be used as an alternative for critical point control in water distribution networks when no flow signal is available or when communication between the critical point and the pressure reducing valve is not an option.
- oai:arXiv.org:2512.06735v1
- physics.app-ph
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- 10.3390/app13105845
- Appl. Sci. 2023, 13(10), 5845
- Andres Ortega-Ballesteros, David Munoz-Rodriguez, Maria-Jesus Aguilera-Urena, Francisco Javier de los Santos-Zarco, Alberto-Jesus Perea-Moreno
-
-
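A minimal sketch of the modeling step above with statsmodels (the orders, the daily seasonality, and the synthetic pressures are assumptions, not the paper's calibrated model): the critical-point pressure is related to the valve outlet pressure as an exogenous regressor, then forecast ahead.

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(3)
    n, s = 480, 24                               # hourly data, daily season
    outlet = 3.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / s)
    critical = 1.2 + 0.6 * outlet + 0.05 * rng.standard_normal(n)

    model = SARIMAX(critical[:-s], exog=outlet[:-s],
                    order=(1, 0, 1), seasonal_order=(1, 0, 1, s))
    fit = model.fit(disp=False)
    forecast = fit.forecast(steps=s, exog=outlet[-s:])   # next-day pressures
    print(forecast[:3])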
- Geometrical representation and dependence structure of three-dimensional Bernoulli distributions
- https://arxiv.org/abs/2512.06786
- arXiv:2512.06786v1 Announce Type: cross
-Abstract: This paper fully characterizes the geometrical structure of the class of distributions of three-dimensional Bernoulli random variables with equal means, $p$. We find all the geometrical generators in closed form as functions of $p$. This result stems from an algebraic representation of the class that encodes the statistical properties of Bernoulli distributions. We study extremal negative dependence within the class and provide an application example by finding the impact of negative dependence on minimal aggregate risk. The application relies on a game theory approach.
- oai:arXiv.org:2512.06786v1
- math.PR
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Roberto Fontana, Patrizia Semeraro
-
-
- Optimal and Diffusion Transports in Machine Learning
- https://arxiv.org/abs/2512.06797
- arXiv:2512.06797v1 Announce Type: cross
-Abstract: Several problems in machine learning are naturally expressed as the design and analysis of time-evolving probability distributions. This includes sampling via diffusion methods, optimizing the weights of neural networks, and analyzing the evolution of token distributions across layers of large language models. While the targeted applications differ (samples, weights, tokens), their mathematical descriptions share a common structure. A key idea is to switch from the Eulerian representation of densities to their Lagrangian counterpart through vector fields that advect particles. This dual view introduces challenges, notably the non-uniqueness of Lagrangian vector fields, but also opportunities to craft density evolutions and flows with favorable properties in terms of regularity, stability, and computational tractability. This survey presents an overview of these methods, with emphasis on two complementary approaches: diffusion methods, which rely on stochastic interpolation processes and underpin modern generative AI, and optimal transport, which defines interpolation by minimizing displacement cost. We illustrate how both approaches appear in applications ranging from sampling, neural network optimization, to modeling the dynamics of transformers for large language models.
- oai:arXiv.org:2512.06797v1
- math.OC
- cs.AI
+ Unsupervised Learning of Density Estimates with Topological Optimization
+ https://arxiv.org/abs/2512.08895
+ arXiv:2512.08895v1 Announce Type: cross
+Abstract: Kernel density estimation is a key component of a wide variety of algorithms in machine learning, Bayesian inference, stochastic dynamics and signal processing. However, the unsupervised density estimation technique requires tuning a crucial hyperparameter: the kernel bandwidth. The choice of bandwidth is critical as it controls the bias-variance trade-off by over- or under-smoothing the topological features. Topological data analysis provides methods to mathematically quantify topological characteristics, such as connected components, loops, voids et cetera, even in high dimensions where visualization of density estimates is impossible. In this paper, we propose an unsupervised learning approach using a topology-based loss function for the automated and unsupervised selection of the optimal bandwidth and benchmark it against classical techniques -- demonstrating its potential across different dimensions.
+ oai:arXiv.org:2512.08895v1
+ cs.LG
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ cross
- http://creativecommons.org/licenses/by/4.0/
- Gabriel Peyr\'e
-
-
- Prediction with Expert Advice under Local Differential Privacy
- https://arxiv.org/abs/2512.06971
- arXiv:2512.06971v1 Announce Type: cross
-Abstract: We study the classic problem of prediction with expert advice under the constraint of local differential privacy (LDP). In this context, we first show that a classical algorithm naturally satisfies LDP and then design two new algorithms that improve it: RW-AdaBatch and RW-Meta. For RW-AdaBatch, we exploit the limited-switching behavior induced by LDP to provide a novel form of privacy amplification that grows stronger on easier data, analogous to the shuffle model in offline learning. Drawing on the theory of random walks, we prove that this improvement carries essentially no utility cost. For RW-Meta, we develop a general method for privately selecting between experts that are themselves non-trivial learning algorithms, and we show that in the context of LDP this carries no extra privacy cost. In contrast, prior work has only considered data-independent experts. We also derive formal regret bounds that scale inversely with the degree of independence between experts. Our analysis is supplemented by evaluation on real-world data reported by hospitals during the COVID-19 pandemic; RW-Meta outperforms both the classical baseline and a state-of-the-art \textit{central} DP algorithm by 1.5-3$\times$ on the task of predicting which hospital will report the highest density of COVID patients each week.
- oai:arXiv.org:2512.06971v1
- cs.LG
- cs.CR
- cs.DS
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Ben Jacobsen, Kassem Fawaz
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Suina Tanweer, Firas A. Khasawneh
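A toy 1-D proxy for the idea in the topological bandwidth-selection abstract above: peak prominences of the kernel density estimate stand in for 0-dimensional persistence, and the bandwidth is chosen in the longest plateau of the mode count (the actual method uses persistent homology and a topology-based loss; the grid and prominence threshold here are illustrative).

    import numpy as np
    from scipy.signal import find_peaks
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(4)
    data = np.concatenate([rng.normal(-2, 0.4, 300), rng.normal(2, 0.4, 300)])
    grid = np.linspace(-5, 5, 512)
    bws = np.linspace(0.05, 1.0, 20)
    counts = [len(find_peaks(gaussian_kde(data, bw_method=b)(grid),
                             prominence=0.01)[0]) for b in bws]

    # longest run of a constant mode count = most topologically stable scale
    runs, start = [], 0
    for i in range(1, len(counts) + 1):
        if i == len(counts) or counts[i] != counts[start]:
            runs.append((i - start, start)); start = i
    length, start = max(runs)
    print(f"bandwidth ~ {bws[start + length // 2]:.2f}, modes = {counts[start]}")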
- Learning Paths to Multi-Sector Equilibrium: Belief Dynamics Under Uncertain Returns to Scale
- https://arxiv.org/abs/2512.07013
- arXiv:2512.07013v1 Announce Type: cross
-Abstract: This paper explores the dynamics of learning in a multi-sector general equilibrium model where firms operate under incomplete information about their production returns to scale. Firms iteratively update their beliefs using maximum a posteriori estimation, derived from observed production outcomes, to refine their knowledge of their returns to scale. The implications of these learning dynamics for market equilibrium and the conditions under which firms can effectively learn their true returns to scale are the key objects of this study. Our results shed light on how idiosyncratic shocks influence the learning process and demonstrate that input decisions encode all pertinent information for belief updates. Additionally, we show that a long-memory (path-dependent) learning approach which keeps track of all past estimations ends up performing worse than a short-memory (path-independent) approach.
- oai:arXiv.org:2512.07013v1
- cs.GT
- econ.TH
- math.OC
- math.PR
+ Limit results for distributed estimation of invariant subspaces in multiple networks inference and PCA
+ https://arxiv.org/abs/2206.04306
+ arXiv:2206.04306v5 Announce Type: replace
+Abstract: Several statistical problems, such as multiple heterogeneous graph analysis, distributed PCA, integrative data analysis, and simultaneous dimension reduction of images, can involve a collection of $m$ matrices whose leading subspaces $U^{(i)}$ consist of a shared subspace $U_c$ and individual subspaces $U_s^{(i)}$. We consider a distributed estimation procedure that first obtains $\hat U^{(i)}$ as the leading singular vectors for each observed noisy matrix, then computes the leading left singular vectors of the concatenated matrix $[\hat U^{(1)}|\hat U^{(2)}|\dots|\hat U^{(m)}]$ as $\hat U_c$, and finally computes the leading singular vectors of the projection of each $\hat U^{(i)}$ onto the orthogonal complement of $\hat U_c$ as $\hat U_s^{(i)}$. In this paper, we provide a framework for deriving limit results for such distributed estimation procedures, including expansions of estimation errors in both common and individual subspaces and their asymptotically normal approximations. We apply this framework specifically to (1) parameter estimation for multiple heterogeneous random graphs with shared subspaces, and (2) distributed PCA for independent sub-Gaussian random vectors with spiked covariance structures. Leveraging these results, we also consider a two-sample test for the null hypothesis that a pair of random graphs have the same edge probabilities, and present a test statistic whose limiting distribution converges to a central (resp., non-central) $\chi^2$ distribution under the null (resp., local alternative) hypothesis.
+ oai:arXiv.org:2206.04306v5
+ math.ST
+ stat.TH
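The three-step distributed estimation procedure described in the abstract above translates almost line-for-line into linear algebra; a minimal numpy sketch (the dimensions d and d_c are assumed known):

    import numpy as np

    def distributed_subspaces(mats, d, d_c):
        # Step 1: leading d left singular vectors of each noisy matrix
        U_hat = [np.linalg.svd(M)[0][:, :d] for M in mats]
        # Step 2: shared subspace = leading left singular vectors of the
        # concatenated matrix [U1 | U2 | ... | Um]
        U_c = np.linalg.svd(np.hstack(U_hat))[0][:, :d_c]
        # Step 3: individual subspaces from projecting each U_hat onto the
        # orthogonal complement of the shared estimate
        P = np.eye(U_c.shape[0]) - U_c @ U_c.T
        U_s = [np.linalg.svd(P @ U)[0][:, :d - d_c] for U in U_hat]
        return U_c, U_s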
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Stefano Nasini, Rabia Nessah, Bertrand Wigniolle
-
-
- Ideal Attribution and Faithful Watermarks for Language Models
- https://arxiv.org/abs/2512.07038
- arXiv:2512.07038v1 Announce Type: cross
-Abstract: We introduce ideal attribution mechanisms, a formal abstraction for reasoning about attribution decisions over strings. At the core of this abstraction lies the ledger, an append-only log of the prompt-response interaction history between a model and its user. Each mechanism produces deterministic decisions based on the ledger and an explicit selection criterion, making it well-suited to serve as a ground truth for attribution. We frame the design goal of watermarking schemes as faithful representation of ideal attribution mechanisms. This novel perspective brings conceptual clarity, replacing piecemeal probabilistic statements with a unified language for stating the guarantees of each scheme. It also enables precise reasoning about desiderata for future watermarking schemes, even when no current construction achieves them, since the ideal functionalities are specified first. In this way, the framework provides a roadmap that clarifies which guarantees are attainable in an idealized setting and worth pursuing in practice.
- oai:arXiv.org:2512.07038v1
- cs.CR
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Min Jae Song, Kameron Shahabi
-
-
- DeepSVM: Learning Stochastic Volatility Models with Physics-Informed Deep Operator Networks
- https://arxiv.org/abs/2512.07162
- arXiv:2512.07162v1 Announce Type: cross
-Abstract: Real-time calibration of stochastic volatility models (SVMs) is computationally bottlenecked by the need to repeatedly solve coupled partial differential equations (PDEs). In this work, we propose DeepSVM, a physics-informed Deep Operator Network (PI-DeepONet) designed to learn the solution operator of the Heston model across its entire parameter space. Unlike standard data-driven deep learning (DL) approaches, DeepSVM requires no labelled training data. Rather, we employ a hard-constrained ansatz that enforces terminal payoffs and static no-arbitrage conditions by design. Furthermore, we use Residual-based Adaptive Refinement (RAR) to stabilize training in difficult regions subject to high gradients. Overall, DeepSVM achieves a final training loss of $10^{-5}$ and predicts highly accurate option prices across a range of typical market dynamics. While pricing accuracy is high, we find that the model's derivatives (Greeks) exhibit noise in the at-the-money (ATM) regime, highlighting the specific need for higher-order regularization in physics-informed operator learning.
- oai:arXiv.org:2512.07162v1
- q-fin.CP
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kieran A. Malandain, Selim Kalici, Hakob Chakhoyan
-
-
- Variational Regularized Bilevel Estimation for Exponential Random Graph Models
- https://arxiv.org/abs/2512.07176
- arXiv:2512.07176v1 Announce Type: cross
-Abstract: I propose an estimation algorithm for Exponential Random Graph Models (ERGM), a popular statistical network model for estimating the structural parameters of strategic network formation in economics and finance. Existing methods often produce unreliable estimates of parameters for the triangle, a key network structure that captures the tendency of two individuals with friends in common to connect. Such unreliable estimates may lead to untrustworthy policy recommendations for networks with triangles. Through a variational mean-field approach, my algorithm addresses the two well-known difficulties when estimating the ERGM, the intractability of its normalizing constant and model degeneracy. In addition, I introduce $\ell_2$ regularization that ensures a unique solution to the mean-field approximation problem under suitable conditions. I provide a non-asymptotic optimization convergence rate analysis for my proposed algorithm under mild regularity conditions. Through Monte Carlo simulations, I demonstrate that my method achieves a perfect sign recovery rate for triangle parameters for small and mid-sized networks under perturbed initialization, compared to a 50% rate for existing algorithms. I provide the sensitivity analysis of estimates of ERGM parameters to hyperparameter choices, offering practical insights for implementation.
- oai:arXiv.org:2512.07176v1
- econ.EM
- stat.CO
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yoon Choi
-
-
- Parallel Algorithms for Combined Regularized Support Vector Machines: Application in Music Genre Classification
- https://arxiv.org/abs/2512.07463
- arXiv:2512.07463v1 Announce Type: cross
-Abstract: In the era of rapid development of artificial intelligence, its applications span across diverse fields, relying heavily on effective data processing and model optimization. Combined Regularized Support Vector Machines (CR-SVMs) can effectively handle the structural information among data features, but there is a lack of efficient algorithms in distributed-stored big data. To address this issue, we propose a unified optimization framework based on consensus structure. This framework is not only applicable to various loss functions and combined regularization terms but can also be effectively extended to non-convex regularization terms, showing strong scalability. Based on this framework, we develop a distributed parallel alternating direction method of multipliers (ADMM) algorithm to efficiently compute CR-SVMs when data is stored in a distributed manner. To ensure the convergence of the algorithm, we also introduce the Gaussian back-substitution method. Meanwhile, for the integrity of the paper, we introduce a new model, the sparse group lasso support vector machine (SGL-SVM), and apply it to music information retrieval. Theoretical analysis confirms that the computational complexity of the proposed algorithm is not affected by different regularization terms and loss functions, highlighting the universality of the parallel algorithm. Experiments on synthetic and Free Music Archive datasets demonstrate the reliability, stability, and efficiency of the algorithm.
- oai:arXiv.org:2512.07463v1
- cs.LG
- stat.AP
- stat.CO
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Rongmei Liang, Zizheng Liu, Xiaofei Wu, Jingwen Tu
-
-
- A Bootstrap Perspective on Stochastic Gradient Descent
- https://arxiv.org/abs/2512.07676
- arXiv:2512.07676v1 Announce Type: cross
-Abstract: Machine learning models trained with \emph{stochastic} gradient descent (SGD) can generalize better than those trained with deterministic gradient descent (GD). In this work, we study SGD's impact on generalization through the lens of the statistical bootstrap: SGD uses gradient variability under batch sampling as a proxy for solution variability under the randomness of the data collection process. We use empirical results and theoretical analysis to substantiate this claim. In idealized experiments on empirical risk minimization, we show that SGD is drawn to parameter choices that are robust under resampling and thus avoids spurious solutions even if they lie in wider and deeper minima of the training loss. We prove rigorously that by implicitly regularizing the trace of the gradient covariance matrix, SGD controls the algorithmic variability. This regularization leads to solutions that are less sensitive to sampling noise, thereby improving generalization. Numerical experiments on neural network training show that explicitly incorporating the estimate of the algorithmic variability as a regularizer improves test performance. This fact supports our claim that bootstrap estimation underpins SGD's generalization advantages.
- oai:arXiv.org:2512.07676v1
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hongjian Lan, Yucong Liu, Florian Sch\"afer
+ Runbing Zheng, Minh Tang
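For the "Bootstrap Perspective on Stochastic Gradient Descent" entry above, the quantity the abstract argues SGD implicitly regularizes is the trace of the per-example gradient covariance; a minimal sketch on least squares (the data and zero initial weights are hypothetical):

    import numpy as np

    def grad_cov_trace(G):
        # tr Cov(g) = E ||g - E g||^2, estimated over per-example gradients
        return ((G - G.mean(0)) ** 2).sum(1).mean()

    rng = np.random.default_rng(9)
    X, y = rng.standard_normal((100, 5)), rng.standard_normal(100)
    w = np.zeros(5)
    per_example = (X @ w - y)[:, None] * X   # least-squares per-example grads
    print(grad_cov_trace(per_example))       # candidate explicit regularizer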
- Bounds on inequality with incomplete data
- https://arxiv.org/abs/2512.07709
- arXiv:2512.07709v1 Announce Type: cross
-Abstract: We develop a unified, nonparametric framework for sharp partial identification and inference on inequality indices when income or wealth are only coarsely observed -- for example via grouped tables or individual interval reports -- possibly together with linear restrictions such as known means or subgroup totals. First, for a broad class of Schur-convex inequality measures, we characterize extremal allocations and show that sharp bounds are attained by distributions with simple, finite support, reducing the underlying infinite-dimensional problem to finite-dimensional optimization. Second, for indices that admit linear-fractional representations after suitable ordering of the data (including the Gini coefficient, quantile ratios, and the Hoover index), we recast the bound problems as linear or quadratic programs, yielding fast computation of numerically sharp bounds. Third, we establish $\sqrt{n}$ inference for bound endpoints using a uniform directional delta method and a bootstrap procedure for standard errors. In ELSA wealth data with mixed point and interval observations, we obtain sharp Gini bounds of 0.714--0.792 for liquid savings and 0.686--0.767 for a broad savings measure; historical U.S. income tables deliver time-series bounds for the Gini, quantile ratios, and Hoover index under grouped information.
- oai:arXiv.org:2512.07709v1
- econ.EM
+ Solving the Poisson equation using coupled Markov chains
+ https://arxiv.org/abs/2206.05691
+ arXiv:2206.05691v5 Announce Type: replace
+Abstract: This article shows how coupled Markov chains that meet exactly after a random number of iterations can be used to generate unbiased estimators of the solutions of the Poisson equation. Through this connection, we re-derive known unbiased estimators of expectations with respect to the stationary distribution of a Markov chain and provide conditions for the finiteness of their moments. We further construct unbiased estimators of the asymptotic variance of Markov chain ergodic averages, and provide conditions for the finiteness of the estimators' moments of any order. If their second moment is finite, the average of independent copies of such estimators converges to the asymptotic variance at the Monte Carlo rate, comparing favorably to known rates for batch means and spectral variance estimators. The results are illustrated with numerical experiments.
+ oai:arXiv.org:2206.05691v5
+ stat.CO
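A sketch of the key ingredient of the coupled-chains entry above, on a finite state space: lag-1 chains advanced by a maximal coupling meet exactly, and the Glynn-Rhee-type estimator built from them is unbiased for stationary expectations (the paper connects such estimators to solutions of the Poisson equation; the chain and the choice k=5 are illustrative).

    import numpy as np

    rng = np.random.default_rng(6)

    def maximal_coupling(p, q):
        # draw (x, y) with x ~ p, y ~ q while maximizing P(x == y)
        overlap = np.minimum(p, q)
        w = overlap.sum()
        if rng.random() < w:
            k = rng.choice(len(p), p=overlap / w)
            return k, k
        x = rng.choice(len(p), p=(p - overlap) / (1.0 - w))
        y = rng.choice(len(q), p=(q - overlap) / (1.0 - w))
        return x, y

    def unbiased_estimate(P, h, k=5):
        # lag-1 coupled chains: x sits at time t, y at time t-1; once they
        # meet exactly they stay merged forever
        y = rng.integers(len(P))            # X_0 = Y_0
        x = rng.choice(len(P), p=P[y])      # X_1
        t, hx_k, corr = 1, (h(x) if k == 1 else None), 0.0
        while x != y or t < k:
            if x == y:                      # merged: advance a single chain
                x = y = rng.choice(len(P), p=P[x])
            else:
                x, y = maximal_coupling(P[x], P[y])
            t += 1
            if t == k:
                hx_k = h(x)
            elif t > k:
                corr += h(x) - h(y)         # vanishes once the chains meet
        return hx_k + corr                  # unbiased for E_pi[h], no burn-in

    P = np.array([[0.5, 0.5, 0.0], [0.1, 0.6, 0.3], [0.2, 0.3, 0.5]])
    h = lambda s: float(s == 0)
    print(np.mean([unbiased_estimate(P, h) for _ in range(2000)]))  # ~ pi(0)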
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- James Banks, Thomas Glinnan, Tatiana Komarova
-
-
- Provable Long-Range Benefits of Next-Token Prediction
- https://arxiv.org/abs/2512.07818
- arXiv:2512.07818v1 Announce Type: cross
-Abstract: Why do modern language models, trained to do well on next-word prediction, appear to generate coherent documents and capture long-range structure? Here we show that next-token prediction is provably powerful for learning longer-range structure, even with common neural network architectures. Specifically, we prove that optimizing next-token prediction over a Recurrent Neural Network (RNN) yields a model that closely approximates the training distribution: for held-out documents sampled from the training distribution, no algorithm of bounded description length limited to examining the next $k$ tokens, for any $k$, can distinguish between $k$ consecutive tokens of such documents and $k$ tokens generated by the learned language model following the same prefix. We provide polynomial bounds (in $k$, independent of the document length) on the model size needed to achieve such $k$-token indistinguishability, offering a complexity-theoretic explanation for the long-range coherence observed in practice.
- oai:arXiv.org:2512.07818v1
- cs.LG
- cs.AI
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Xinyuan Cao, Santosh S. Vempala
-
-
- AR-sieve Bootstrap for High-dimensional Time Series
- https://arxiv.org/abs/2112.00414
- arXiv:2112.00414v2 Announce Type: replace
-Abstract: This paper proposes a new AR-sieve bootstrap approach to high-dimensional time series. The major challenge of classical bootstrap methods on high-dimensional time series is two-fold: the curse of dimensionality and temporal dependence. To address such a difficulty, we utilize factor modeling to reduce dimension and capture temporal dependence simultaneously. A factor-based bootstrap procedure is constructed, which performs an AR-sieve bootstrap on the extracted low-dimensional common factor time series and then recovers the bootstrap samples for the original data from the factor model. Asymptotic properties for bootstrap mean statistics and extreme eigenvalues are established. Various simulation studies further demonstrate the advantages of the new AR-sieve bootstrap in high-dimensional scenarios. An empirical application on particulate matter (PM) concentration data is studied, where bootstrap confidence intervals for mean vectors and autocovariance matrices are provided.
- oai:arXiv.org:2112.00414v2
- stat.ME
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
- Daning Bi, Han Lin Shang, Yanrong Yang, Huanjun Zhu
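A single bootstrap replicate in the spirit of the factor-based AR-sieve above is short to write down; this sketch extracts principal-component factors, fits an AR(p) to each by least squares, and resamples residuals (the iid resampling of idiosyncratic rows is a simplification of the paper's recovery step, and r and p are assumed):

    import numpy as np

    def ar_sieve_factor_bootstrap(X, r=2, p=2, rng=np.random.default_rng(7)):
        # X: (T, N) panel; returns one bootstrap replicate of the panel
        Xc = X - X.mean(0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        F, L = U[:, :r] * s[:r], Vt[:r]            # factors (T, r), loadings
        T = len(F)
        F_boot = np.empty_like(F)
        for j in range(r):
            f = F[:, j]
            Y = np.column_stack([f[p - i - 1:T - i - 1] for i in range(p)])
            phi, *_ = np.linalg.lstsq(Y, f[p:], rcond=None)  # AR(p) fit
            resid = f[p:] - Y @ phi
            fb = list(f[:p])
            for t in range(p, T):                  # rebuild factor recursively
                fb.append(np.dot(phi, fb[-1:-p-1:-1]) + rng.choice(resid))
            F_boot[:, j] = fb
        idio = Xc - F @ L
        return X.mean(0) + F_boot @ L + idio[rng.integers(T, size=T)]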
+ Randal Douc, Pierre E. Jacob, Anthony Lee, Dootika Vats
- Static and Dynamic BART for Rank-Order Data
- https://arxiv.org/abs/2308.10231
- arXiv:2308.10231v5 Announce Type: replace
-Abstract: Ranking lists are often provided at regular time intervals in a range of applications, including economics, sports, marketing, and politics. Most popular methods for rank-order data postulate a linear specification for the latent scores, which determine the observed ranks, and ignore the temporal dependence of the ranking lists. To address these issues, novel nonparametric static (ROBART) and autoregressive (ARROBART) models are developed, with latent scores defined as nonlinear Bayesian additive regression tree functions of covariates. To make inferences in the dynamic ARROBART model, closed-form filtering, predictive, and smoothing distributions for the latent time-varying scores are derived. These results are applied in a Gibbs sampler with data augmentation for posterior inference. The proposed methods are shown to outperform existing competitors in simulation studies, static data applications to electoral data, stated preferences for sushi and movies, and dynamic data applications to economic complexity rankings of countries and weekly pollster rankings of NCAA football teams.
- oai:arXiv.org:2308.10231v5
- stat.ME
- math.ST
- stat.CO
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Parsimonious Generative Machine Learning for Non-Gaussian Tail Modeling
+ https://arxiv.org/abs/2402.14368
+ arXiv:2402.14368v3 Announce Type: replace
+Abstract: The presence of non-Gaussian tails is a prevalent characteristic in many financial modeling scenarios, necessitating the use of complex non-Gaussian distributions such as the generalized beta of the second kind (GB2) and the skewed generalized $t$ (SGT). The approach we propose for modeling heavy-tailed data differs significantly from traditional methods. We utilize generative machine learning, which offers an entirely different paradigm for modeling distributions. A parsimonious nonlinear transformation is applied to a simple base random variable such as Gaussian. The parameters can be estimated effectively, and the theoretical heavy-tail properties are derived. Robust performance is observed with this approach when compared to traditional distributions. More importantly, this method is broadly useful for machine learning due to its mathematical elegance and numerical convenience.
+ oai:arXiv.org:2402.14368v3
+ stat.AP
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
- Matteo Iacopini, Eoghan O'Neill, Luca Rossini
-
-
- Generalized Fr\'{e}chet means with random minimizing domains and its strong consistency
- https://arxiv.org/abs/2311.10958
- arXiv:2311.10958v2 Announce Type: replace
-Abstract: This paper introduces a novel extension of Fr\'{e}chet means, called \textit{generalized Fr\'{e}chet means}, as a comprehensive framework for characterizing features in probability distributions in general topological spaces. The generalized Fr\'{e}chet means are defined as minimizers of a suitably defined cost function. The framework encompasses various extensions of Fr\'{e}chet means in the literature. The most distinctive difference of the new framework from the previous works is that we allow the domain of minimization of the empirical means to be random and different from that of the population means. This expands the applicability of the Fr\'{e}chet mean framework to diverse statistical scenarios, including dimension reduction for manifold-valued data.
- oai:arXiv.org:2311.10958v2
- math.ST
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Jaesung Park, Sungkyu Jung
+ Xing Yan, Yue Zhao, Qi Wu, Wenxuan Ma
- Theoretical Guarantees for the Subspace-Constrained Tyler's Estimator
- https://arxiv.org/abs/2403.18658
- arXiv:2403.18658v3 Announce Type: replace
-Abstract: This work analyzes the subspace-constrained Tyler's estimator (STE), a method designed to recover a low-dimensional subspace from a dataset that may be heavily corrupted by outliers. The STE has previously been shown to be competitive for fundamental computer vision problems. We assume a weak inlier-outlier model and allow the inlier fraction to fall below the threshold at which robust subspace recovery becomes computationally hard. We show that, in this setting, if the initialization of STE satisfies a certain condition, then STE-which is computationally efficient-can effectively recover the underlying subspace. Furthermore, we establish approximate recovery guarantees for STE in the presence of noisy inliers. Finally, under the asymptotic generalized haystack model, we demonstrate that STE initialized with Tyler's M-estimator (TME) recovers the subspace even when the inlier fraction is too small for TME to succeed on its own.
- oai:arXiv.org:2403.18658v3
+ Minimax optimal seriation in polynomial time
+ https://arxiv.org/abs/2405.08747
+ arXiv:2405.08747v3 Announce Type: replace
+Abstract: We consider the seriation problem, whose goal is to recover a hidden ordering from a noisy observation of a permuted Robinson matrix. We establish sharp minimax rates under average-Lipschitz conditions that strictly extend the bi-Lipschitz framework of [Giraud et al., 2023]. We further design a polynomial-time algorithm that attains these optimal rates, thereby resolving two open questions raised in [Giraud et al., 2023]. Finally, our analysis extends to a broader class of matrices beyond those generated by exact permutations.
+ oai:arXiv.org:2405.08747v3
+ math.ST
- stat.ML
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Gilad Lerman, Teng Zhang
-
-
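Tyler's M-estimator (TME), used in the STE paper above for initialization, is a short fixed-point iteration; a minimal sketch (the iteration count is arbitrary and a convergence check is omitted):

    import numpy as np

    def tyler_m_estimator(X, n_iter=100):
        # X: (n, p) centered data; fixed point of
        # S = (p/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i), trace-normalized
        n, p = X.shape
        S = np.eye(p)
        for _ in range(n_iter):
            w = p / np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
            S = (X * w[:, None]).T @ X / n
            S *= p / np.trace(S)                 # fix the scale
        return S

    # a robust subspace estimate can be read off the leading eigenvectors of S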
- A Latent Variable Approach to Learning High-dimensional Multivariate Longitudinal Data
- https://arxiv.org/abs/2405.15053
- arXiv:2405.15053v3 Announce Type: replace
-Abstract: High-dimensional multivariate longitudinal data, which arise when many outcome variables are measured repeatedly over time, are becoming increasingly common in social, behavioral and health sciences. We propose a latent variable model for drawing statistical inferences on covariate effects and predicting future outcomes based on high-dimensional multivariate longitudinal data. This model introduces unobserved factors to account for the between-variable and across-time dependence and assist the prediction. Statistical inference and prediction tools are developed under a general setting that allows outcome variables to be of mixed types and possibly unobserved for certain time points, for example, due to right censoring. A central limit theorem is established for drawing statistical inferences on regression coefficients. Additionally, an information criterion is introduced to choose the number of factors. The proposed model is applied to customer grocery shopping records to predict and understand shopping behavior.
- oai:arXiv.org:2405.15053v3
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sze Ming Lee, Yunxiao Chen, Tony Sit
+ Yann Issartel, Christophe Giraud, Nicolas Verzelen
- Adaptive Learning with Blockwise Missing and Semi-Supervised Data
- https://arxiv.org/abs/2405.18722
- arXiv:2405.18722v3 Announce Type: replace
-Abstract: Data fusion enables powerful and generalizable analyses across multiple sources. However, different data collection capacities across different sources lead to blockwise missingness (BM), which poses challenges in practice. Meanwhile, the high cost of obtaining gold-standard labels leaves the majority of samples unlabeled, known as the semi-supervised (SS) problem. In this paper, we propose a novel Data-adaptive Estimation approach for data FUsion in the SEmi-supervised setting (DEFUSE) that handles both BM and SS issues in the presence of distributional shifts across data sources under a missing at random (MAR) mechanism. DEFUSE starts with a complete-data-only estimator derived from the primary data source, and uses data-adaptive and distributional-shift-adjusted procedures to successively incorporate the data with BM covariates and the large unlabeled sample to effectively reduce the estimation variance without incurring bias. To further avoid bias due to fusion of misaligned data violating the MAR assumption, a screening method is developed to identify and exclude data sources that are not aligned with the primary source. Compared to existing approaches, DEFUSE offers two main improvements. First, it offers a new data-adaptive control variate approach to handle BM, which achieves intrinsic efficiency and robustness against distributional shifts. Second, it reveals a more essential role for the unlabeled sample in the BM regression problem, leading to improved estimation. These advantages are theoretically guaranteed and empirically supported by simulation studies and two real-world biomedical applications.
- oai:arXiv.org:2405.18722v3
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yiming Li, Ying Wei, Molei Liu
-
-
- Boosting Stochastic Optimisation for High-dimensional Latent Variable Models
- https://arxiv.org/abs/2406.09311
- arXiv:2406.09311v3 Announce Type: replace
-Abstract: Latent variable models are widely used in social and behavioural sciences, including education, psychology, and political science. With the increasing availability of large and complex datasets, high-dimensional latent variable models have become more common. However, estimating such models via marginal maximum likelihood is computationally challenging because it requires evaluating a large number of high-dimensional integrals. Stochastic optimisation, which combines stochastic approximation and sampling techniques, has been shown to be effective. It iterates between sampling latent variables from their posterior distribution under current parameter estimates and updating the model parameters using an approximate stochastic gradient constructed from the latent variable samples. In this paper, we investigate strategies to improve the performance of stochastic optimisation for high-dimensional latent variable models. The improvement is achieved through two strategies: a Metropolis-adjusted Langevin sampler that uses the gradient of the negative complete-data log-likelihood to sample latent variables efficiently, and a minibatch gradient technique that uses only a subset of observations when sampling latent variables and constructing stochastic gradients. Our simulation studies show that combining these strategies yields the best overall performance among competitors. An application to a personality test with 30 latent dimensions further demonstrates that the proposed algorithm scales effectively to high-dimensional settings.
- oai:arXiv.org:2406.09311v3
- stat.CO
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Motonori Oka, Yunxiao Chen, Irini Moustaki
-
-
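The Metropolis-adjusted Langevin sampler used for the latent variables in the entry above is compact to sketch on a generic log-posterior (the step size, iteration count, and standard-normal target in the usage line are illustrative):

    import numpy as np

    def mala(grad_log_post, log_post, x0, eps=0.1, n=1000,
             rng=np.random.default_rng(11)):
        x, lp, g = x0, log_post(x0), grad_log_post(x0)
        out = []
        for _ in range(n):
            prop = x + 0.5 * eps**2 * g + eps * rng.standard_normal(x.shape)
            lp_p, g_p = log_post(prop), grad_log_post(prop)
            # log proposal densities q(x'|x), q(x|x') for the MH correction
            fwd = -np.sum((prop - x - 0.5 * eps**2 * g) ** 2) / (2 * eps**2)
            bwd = -np.sum((x - prop - 0.5 * eps**2 * g_p) ** 2) / (2 * eps**2)
            if np.log(rng.random()) < lp_p - lp + bwd - fwd:
                x, lp, g = prop, lp_p, g_p
            out.append(x)
        return np.array(out)

    # usage on a standard normal target: log p(z) = -||z||^2 / 2
    draws = mala(lambda z: -z, lambda z: -0.5 * z @ z, np.zeros(3))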
- Interpret the estimand framework from a causal inference perspective
- https://arxiv.org/abs/2407.00292
- arXiv:2407.00292v3 Announce Type: replace
-Abstract: The estimand framework proposed by ICH in 2017 has brought fundamental changes in the pharmaceutical industry. It clearly describes how a treatment effect in a clinical question should be precisely defined and estimated, through attributes including treatments, endpoints and intercurrent events. However, ideas around the estimand framework are commonly expressed in text, and different interpretations of this framework may exist. This article aims to interpret the estimand framework through its underlying theories, the causal inference framework based on potential outcomes. The statistical origin and formula of an estimand is given through the causal inference framework, with all attributes translated into statistical terms. We describe how five strategies proposed by ICH to analyze intercurrent events are incorporated in the statistical formula of an estimand, and we also suggest a new strategy to analyze intercurrent events. The roles of target populations and analysis sets in the estimand framework are compared and discussed based on the statistical formula of an estimand. This article recommends continuing studying causal inference theories behind the estimand framework and improving the estimand framework with greater methodological comprehensibility and availability.
- oai:arXiv.org:2407.00292v3
- stat.OT
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Jinghong Zeng
-
-
- Identifying arbitrary transformation between the slopes in scalar-on-function regression
- https://arxiv.org/abs/2407.19502
- arXiv:2407.19502v3 Announce Type: replace
-Abstract: In this article, we study whether the slope functions of two scalar-on-function regression models in two samples are associated with any arbitrary transformation along the vertical axis. The problem is formally stated as a statistical hypothesis test, and the corresponding test statistic is formed based on the estimated second derivative of the unknown transformation. The asymptotic properties of the test statistic are investigated using advanced techniques related to empirical processes. Moreover, to implement the test for small sample size data, a bootstrap algorithm is proposed, and it is shown that the bootstrap version of the test is as good as the original test for sufficiently large sample sizes. Furthermore, the utility of the proposed methodology is shown for simulated datasets, and DTI data are analyzed using the proposed methodology.
- oai:arXiv.org:2407.19502v3
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Pratim Guha Niyogi, Subhra Sankar Dhar
-
-
- Alpha-VI DeepONet: A prior-robust variational Bayesian approach for enhancing DeepONets with uncertainty quantification
- https://arxiv.org/abs/2408.00681
- arXiv:2408.00681v2 Announce Type: replace
-Abstract: We introduce a novel deep operator network (DeepONet) framework that incorporates generalised variational inference (GVI) using R\'enyi's $\alpha$-divergence to learn complex operators while quantifying uncertainty. By incorporating Bayesian neural networks as the building blocks for the branch and trunk networks, our framework endows DeepONet with uncertainty quantification. The use of R\'enyi's $\alpha$-divergence, instead of the Kullback-Leibler divergence (KLD), commonly used in standard variational inference, mitigates issues related to prior misspecification that are prevalent in Variational Bayesian DeepONets. This approach offers enhanced flexibility and robustness. We demonstrate that modifying the variational objective function yields superior results in terms of minimising the mean squared error and improving the negative log-likelihood on the test set. Our framework's efficacy is validated across various mechanical systems, where it outperforms both deterministic and standard KLD-based VI DeepONets in predictive accuracy and uncertainty quantification. The hyperparameter $\alpha$, which controls the degree of robustness, can be tuned to optimise performance for specific problems. We apply this approach to a range of mechanics problems, including gravity pendulum, advection-diffusion, and diffusion-reaction systems. Our findings underscore the potential of $\alpha$-VI DeepONet to advance the field of data-driven operator learning and its applications in engineering and scientific domains.
- oai:arXiv.org:2408.00681v2
+ Survey of Data-driven Newsvendor: Unified Analysis and Spectrum of Achievable Regrets
+ https://arxiv.org/abs/2409.03505
+ arXiv:2409.03505v4 Announce Type: replace
+Abstract: In the Newsvendor problem, the goal is to guess the number that will be drawn from some distribution, with asymmetric consequences for guessing too high vs. too low. In the data-driven version, the distribution is unknown, and one must work with samples from the distribution. Data-driven Newsvendor has been studied under many variants: additive vs. multiplicative regret, high probability vs. expectation bounds, and different distribution classes. This paper studies all combinations of these variants, filling in many gaps in the literature and simplifying many proofs. In particular, we provide a unified analysis based on the notion of clustered distributions, which in conjunction with our new lower bounds, shows that the entire spectrum of regrets between $1/\sqrt{n}$ and $1/n$ can be possible. Simulations on commonly-used distributions demonstrate that our notion is the "correct" predictor of empirical regret across varying data sizes.
+ oai:arXiv.org:2409.03505v4
+ stat.ML
+ cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1016/j.cma.2025.118552
- Computer Methods in Applied Mechanics and Engineering, 449(B), 118552, 2026
- Soban Nasir Lone, Subhayan De, Rajdip Nayek
+ http://creativecommons.org/licenses/by/4.0/
+ Zhuoxin Chen, Will Ma
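The data-driven (sample-average-approximation) solution surveyed in the Newsvendor entry above is a one-liner: order the empirical quantile of demand at the critical fractile; a minimal sketch with hypothetical costs and demand:

    import numpy as np

    rng = np.random.default_rng(8)
    demand = rng.lognormal(mean=3.0, sigma=0.5, size=200)  # hypothetical samples
    b, h = 4.0, 1.0                     # underage / overage costs (assumptions)
    q_saa = np.quantile(demand, b / (b + h))   # empirical critical fractile
    print(f"order quantity: {q_saa:.1f} (fractile {b / (b + h):.2f})")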
- "6 choose 4": A framework to understand and facilitate discussion of strategies for overall survival safety monitoring
- https://arxiv.org/abs/2410.04020
- arXiv:2410.04020v3 Announce Type: replace
-Abstract: Advances in anticancer therapies have significantly contributed to declining death rates in certain disease and clinical settings. However, they have also made it difficult to power a clinical trial in these settings with overall survival (OS) as the primary efficacy endpoint. Therefore, two approaches have been recently proposed for the pre-specified analysis of OS as a safety endpoint (Fleming et al., 2024; Rodriguez et al., 2024). In this paper, we provide a simple, unifying framework that includes the aforementioned approaches (and a couple others) as special cases. By highlighting each approach's focus, priority, tolerance for risk, and strengths or challenges for practical implementation, this framework can help to facilitate discussions between stakeholders on "fit-for-purpose OS data collection and assessment of harm" (American Association for Cancer Research, 2024). We apply this framework to a real clinical trial in large B-cell lymphoma to illustrate its application and value. Several recommendations and open questions are also raised.
- oai:arXiv.org:2410.04020v3
+ Efficient Analysis of Latent Spaces in Heterogeneous Networks
+ https://arxiv.org/abs/2412.02151
+ arXiv:2412.02151v4 Announce Type: replace
+Abstract: This work proposes a unified framework for efficient estimation under latent space modeling of heterogeneous networks. We consider a class of latent space models that decompose latent vectors into shared and network-specific components across networks. We develop a novel procedure that first identifies the shared latent vectors and further refines estimates through efficient score equations to achieve statistical efficiency. Oracle error rates for estimating the shared and heterogeneous latent vectors are established simultaneously. The analysis framework offers remarkable flexibility, accommodating various types of edge weights under general distributions.
+ oai:arXiv.org:2412.02151v4
+ stat.ME
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
- http://creativecommons.org/licenses/by-sa/4.0/
- Godwin Yung, Kaspar Rufibach, Marcel Wolbers, Mark Yan, Jue Wang
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yuang Tian, Jiajin Sun, Yinqiu He
- A Particle Algorithm for Mean-Field Variational Inference
- https://arxiv.org/abs/2412.20385
- arXiv:2412.20385v3 Announce Type: replace
-Abstract: Variational inference is a fast and scalable alternative to Markov chain Monte Carlo and has been widely applied to posterior inference tasks in statistics and machine learning. A traditional approach for implementing mean-field variational inference (MFVI) is coordinate ascent variational inference (CAVI), which relies crucially on parametric assumptions on complete conditionals. We introduce a novel particle-based algorithm for MFVI, named PArticle VI (PAVI), for nonparametric mean-field approximation. We obtain non-asymptotic error bounds for our algorithm. To our knowledge, this is the first end-to-end guarantee for particle-based MFVI.
- oai:arXiv.org:2412.20385v3
+ Simple proof of robustness for Bayesian heavy-tailed linear regression models
+ https://arxiv.org/abs/2501.06349
+ arXiv:2501.06349v3 Announce Type: replace
+Abstract: In the Bayesian literature, a line of research called resolution of conflict is about the characterization of robustness against outliers of statistical models. The robustness characterization of a model is achieved by establishing the limiting behaviour of the posterior distribution under an asymptotic framework in which the outliers move away from the bulk of the data. The proofs of the robustness characterization results, especially the recent ones for regression models, are technical and not intuitive, limiting the accessibility and preventing the development of theory in that line of research. In this paper, we highlight that the proof complexity is due to the generality of the assumptions on the prior distribution. To address the issue of accessibility, we present a significantly simpler proof for a linear regression model with a specific class of prior distributions, among which we find typically used prior distributions. The class of prior distributions is such that each regression coefficient has a sub-exponential distribution, which allows us to exploit a tail bound, in contrast to previous approaches. The proof is intuitive and uses classical results of probability theory. The generality of the assumption on the error distribution is also appealing; essentially, it can be any distribution with regularly varying or log-regularly varying tails. So far, there does not exist a result in such generality for models with regularly varying distributions. We also investigate the necessity of the assumptions. To promote the development of theory in resolution of conflict, we highlight how the key steps of the proof can be adapted for other models and present an application of the proof technique in the context of generalized linear models.
+ oai:arXiv.org:2501.06349v3
+ math.ST
- cs.LG
- math.OC
- stat.ML
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qiang Du, Kaizheng Wang, Edith Zhang, Chenyang Zhong
+ http://creativecommons.org/licenses/by/4.0/
+ Philippe Gagnon
- Treatment Effect Estimation in Causal Survival Analysis: Practical Recommendations
- https://arxiv.org/abs/2501.05836
- arXiv:2501.05836v3 Announce Type: replace
-Abstract: The restricted mean survival time (RMST) difference offers an interpretable causal contrast to estimate the treatment effect for time-to-event outcomes, yet a wide range of available estimators leaves limited guidance for practice. We provide a unified review of RMST estimators for randomized trials and observational studies, establish identification and asymptotic properties, and supply new derivations where needed. Our extensive simulation study compares simple nonparametric methods (such as unweighted Kaplan-Meier estimators) alongside parametric and nonparametric implementations of the G-formula, weighting approaches, Buckley-James transformations, and augmented estimators under diverse censoring mechanisms and model specifications. Across scenarios, classical Kaplan-Meier estimators (weighted when required by the censoring process) and G-formula methods perform well in randomized settings, while in observational data G-formula estimators remain competitive; however, augmented estimators such as AIPTW-AIPCW generally offer robustness to model misspecification and a favorable bias-variance trade-off. Parametric estimators perform best under correct specification, whereas nonparametric methods avoid functional assumptions but require large sample sizes to achieve reliable performance. We offer practical recommendations for estimator choice and provide open-source R code to support reproducibility and application.
- oai:arXiv.org:2501.05836v3
+ Covariate-Adjusted Response-Adaptive Design with Delayed Outcomes
+ https://arxiv.org/abs/2502.01062
+ arXiv:2502.01062v3 Announce Type: replace
+Abstract: Covariate-adjusted response-adaptive (CARA) designs have gained widespread adoption for their clear benefits in enhancing experimental efficiency and participant welfare. These designs dynamically adjust treatment allocations during interim analyses based on participant responses and covariates collected during the experiment. However, delayed responses can significantly compromise the effectiveness of CARA designs, as they hinder timely adjustments to treatment assignments when certain participant outcomes are not immediately observed. In this paper, we propose a fully forward-looking CARA design that dynamically updates treatment assignments throughout the experiment as response delay mechanisms are progressively estimated. Our design strategy is informed by novel semiparametric efficiency calculations that explicitly account for outcome delays in a multi-stage setting. Through both theoretical investigations and simulation studies, we demonstrate that our proposed design offers a robust solution for handling delayed outcomes in CARA designs, yielding significant improvements in both statistical power and participant welfare. (A toy response-adaptive allocation rule under delayed outcomes is sketched after this item.)
+ oai:arXiv.org:2502.01062v3
stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Charlotte Voinot (PREMEDICAL, Sanofi Gentilly), Cl\'ement Berenfeld (Sanofi Gentilly), Imke Mayer (Sanofi Gentilly), Bernard Sebastien (Sanofi Gentilly), Julie Josse (PREMEDICAL)
+ Xinwei Ma, Jingshen Wang, Waverly Wei
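
A concrete, much-simplified illustration of response-adaptive allocation under delayed outcomes: the sketch below implements classical Neyman allocation (assign arm 1 with probability proportional to its estimated outcome standard deviation), where at each interim update only outcomes older than a fixed delay are visible. This is a generic textbook rule, not the forward-looking CARA design of the paper; the constants and the helper name neyman_allocation are invented for the demo.

import numpy as np

rng = np.random.default_rng(0)

def neyman_allocation(y1, y0, eps=0.1):
    # Assign arm 1 with probability s1 / (s1 + s0), clipped away from 0 and 1;
    # fall back to balanced allocation while an arm has fewer than two outcomes.
    s1 = np.std(y1, ddof=1) if len(y1) > 1 else 1.0
    s0 = np.std(y0, ddof=1) if len(y0) > 1 else 1.0
    return float(np.clip(s1 / (s1 + s0), eps, 1 - eps))

n, delay = 500, 40               # outcomes become visible `delay` enrolments later
records = []                     # (enrolment index, arm, outcome)
for t in range(n):
    seen1 = [y for (s, a, y) in records if a == 1 and s <= t - delay]
    seen0 = [y for (s, a, y) in records if a == 0 and s <= t - delay]
    arm = int(rng.random() < neyman_allocation(seen1, seen0))
    y = rng.normal(1.0, 3.0) if arm == 1 else rng.normal(0.0, 1.0)
    records.append((t, arm, y))

share1 = sum(a for (_, a, _) in records) / n
print(f"share assigned to the high-variance arm: {share1:.2f}")  # roughly 3/(3+1)

Once enough delayed outcomes have arrived, the high-variance arm receives about s1/(s1+s0) = 0.75 of assignments; shrinking the delay lets the allocation adapt sooner, which is the phenomenon the paper's design addresses directly.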
- Learning Enhanced Ensemble Filters
- https://arxiv.org/abs/2504.17836
- arXiv:2504.17836v3 Announce Type: replace
-Abstract: The filtering distribution in hidden Markov models evolves according to the law of a mean-field model in state-observation space. The ensemble Kalman filter (EnKF) approximates this mean-field model with an ensemble of interacting particles, employing a Gaussian ansatz for the joint distribution of the state and observation at each observation time. These methods are robust, but the Gaussian ansatz limits accuracy. Here this shortcoming is addressed by using machine learning to map the joint predicted state and observation to the updated state estimate. The derivation of methods from a mean field formulation of the true filtering distribution suggests a single parametrization of the algorithm that can be deployed at different ensemble sizes. And we use a mean field formulation of the ensemble Kalman filter as an inductive bias for our architecture.
- To develop this perspective, in which the mean-field limit of the algorithm and finite interacting ensemble particle approximations share a common set of parameters, a novel form of neural operator is introduced, taking probability distributions as input: a measure neural mapping (MNM). A MNM is used to design a novel approach to filtering, the MNM-enhanced ensemble filter (MNMEF), which is defined in both the mean-field limit and for interacting ensemble particle approximations. The ensemble approach uses empirical measures as input to the MNM and is implemented using the set transformer, which is invariant to ensemble permutation and allows for different ensemble sizes. In practice fine-tuning of a small number of parameters, for specific ensemble sizes, further enhances the accuracy of the scheme. The promise of the approach is demonstrated by its superior root-mean-square-error performance relative to leading methods in filtering the Lorenz '96 and Kuramoto-Sivashinsky models.
- oai:arXiv.org:2504.17836v3
- stat.ML
- cs.LG
- cs.SY
- eess.SY
- physics.comp-ph
- Tue, 09 Dec 2025 00:00:00 -0500
+ Multivariable Behavioral Change Modeling of Epidemics in the Presence of Undetected Infections
+ https://arxiv.org/abs/2503.00982
+ arXiv:2503.00982v3 Announce Type: replace
+Abstract: Epidemic models are invaluable tools to understand and implement strategies to control the spread of infectious diseases, as well as to inform public health policies and resource allocation. However, current modeling approaches have limitations that reduce their practical utility, such as the exclusion of human behavioral change in response to the epidemic or ignoring the presence of undetected infectious individuals in the population. These limitations became particularly evident during the COVID-19 pandemic, underscoring the need for more accurate and informative models. To address these challenges, we develop a novel Bayesian epidemic modeling framework to better capture the complexities of disease spread by incorporating behavioral responses and undetected infections. In particular, our framework makes three contributions: 1) leveraging additional data on hospitalizations and deaths in modeling the disease dynamics, 2) accounting for data uncertainty arising from the large presence of asymptomatic and undetected infections, and 3) allowing the population behavioral change to be dynamically influenced by multiple data sources (cases and deaths). We thoroughly investigate the properties of the proposed model via simulation, and illustrate its utility on COVID-19 data from Montreal and Miami. (A toy behaviour-modulated epidemic simulation follows this item.)
+ oai:arXiv.org:2503.00982v3
+ stat.ME
+ physics.soc-ph
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Eviatar Bach, Ricardo Baptista, Edoardo Calvello, Bohan Chen, Andrew Stuart
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Caitlin Ward, Rob Deardon, Alexandra M. Schmidt
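
To make the behavioural-feedback mechanism concrete, here is a minimal deterministic SIR toy in which the transmission rate decays with the most recently observed cases and deaths. It only illustrates the idea of behaviour driven by multiple data sources, not the paper's Bayesian model with undetected infections; beta0, gamma, ifr, alpha1 and alpha2 are invented constants.

import numpy as np

N, T = 1_000_000, 300
beta0, gamma, ifr = 0.35, 0.15, 0.01
alpha1, alpha2 = 2e-5, 2e-3          # behavioural sensitivity to cases and deaths
S, I, R = N - 100.0, 100.0, 0.0
cases = deaths = 0.0
infectious = []
for t in range(T):
    # Transmission drops as recently observed cases and deaths rise.
    beta_t = beta0 * np.exp(-alpha1 * cases - alpha2 * deaths)
    new_inf = beta_t * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    cases, deaths = new_inf, ifr * new_rec    # today's observations feed back
    infectious.append(I)
print(f"peak infectious: {max(infectious):,.0f} on day {int(np.argmax(infectious))}")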
- Generative Machine Learning for Multivariate Angular Simulation
- https://arxiv.org/abs/2504.21505
- arXiv:2504.21505v2 Announce Type: replace
-Abstract: With the recent development of new geometric and angular-radial frameworks for multivariate extremes, reliably simulating from angular variables in moderate-to-high dimensions is of increasing importance. Empirical approaches have the benefit of simplicity, and work reasonably well in low dimensions, but as the number of variables increases, they can lack the required flexibility and scalability. Classical parametric models for angular variables, such as the von Mises--Fisher distribution (vMF), provide an alternative. Exploiting finite mixtures of vMF distributions increases their flexibility, but there are cases where, without letting the number of mixture components grow considerably, a mixture model with a fixed number of components is not sufficient to capture the intricate features that can arise in data. Owing to their flexibility, generative deep learning methods are able to capture complex data structures; they therefore have the potential to be useful in the simulation of multivariate angular variables. In this paper, we introduce a range of deep learning approaches for this task, including generative adversarial networks, normalizing flows and flow matching. We assess their performance via a range of metrics, and make comparisons to the more classical approach of using a finite mixture of vMF distributions. The methods are also applied to a metocean data set, with diagnostics indicating strong performance, demonstrating the applicability of such techniques to real-world, complex data structures.
- oai:arXiv.org:2504.21505v2
+ Median Consensus Embedding for Dimensionality Reduction
+ https://arxiv.org/abs/2503.08103
+ arXiv:2503.08103v2 Announce Type: replace
+Abstract: This study proposes median consensus embedding (MCE) to address variability in low-dimensional embeddings caused by random initialization in nonlinear dimensionality reduction techniques such as $t$-distributed stochastic neighbor embedding. MCE is defined as the geometric median of multiple embeddings. By treating multiple embeddings as independent and identically distributed random samples and applying large deviation theory, we prove that MCE achieves consistency at an exponential rate. Furthermore, we develop a practical algorithm to implement MCE by constructing a distance function between embeddings based on the Frobenius norm of the pairwise distance matrix of data points. Applications to real data demonstrate that MCE converges rapidly and effectively reduces instability. We further combine MCE with multiple imputation to address missing values and consider multiscale hyperparameters. Results confirm that MCE effectively mitigates instability issues in embedding methods arising from random initialization and other sources. (A minimal numerical sketch of MCE follows this item.)
+ oai:arXiv.org:2503.08103v2
stat.ML
cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
http://creativecommons.org/licenses/by/4.0/
- Jakob Benjamin Wessel, Callum J. R. Murphy-Barltrop, Emma S. Simpson
+ Yui Tomo, Daisuke Yoneoka
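
The two ingredients of MCE named in the abstract above are directly implementable: a distance between embeddings given by the Frobenius norm of the difference of their pairwise-distance matrices, and a geometric median, computed here with Weiszfeld's algorithm. The Python sketch below is a minimal reading of that recipe rather than the authors' implementation: it takes the geometric median of the pairwise-distance matrices themselves (on which the Frobenius norm is exactly Euclidean) and returns the run closest to that median as a medoid-style consensus; random perturbations of a fixed point cloud stand in for repeated t-SNE runs.

import numpy as np

def pairwise_dist(E):
    # n x n matrix of Euclidean distances between the rows of embedding E.
    return np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1)

def weiszfeld(mats, n_iter=200, tol=1e-9):
    # Geometric median under the Frobenius norm via Weiszfeld iterations.
    X = np.stack([m.ravel() for m in mats])
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), tol)
        mu_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu.reshape(mats[0].shape)

# Ten noisy 2-D embeddings of the same 50 points stand in for t-SNE runs
# started from different random initialisations.
rng = np.random.default_rng(1)
base = rng.normal(size=(50, 2))
runs = [base + 0.15 * rng.normal(size=base.shape) for _ in range(10)]

D_median = weiszfeld([pairwise_dist(E) for E in runs])   # consensus geometry
# Medoid-style consensus embedding: the run whose geometry is closest.
scores = [np.linalg.norm(pairwise_dist(E) - D_median) for E in runs]
print("consensus run:", int(np.argmin(scores)))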
- Estimating Covariate-balanced Survival Curve in Distributed Data Environment using Data Collaboration Quasi-Experiment
- https://arxiv.org/abs/2505.06035
- arXiv:2505.06035v3 Announce Type: replace
-Abstract: The sharing of patient-level data necessary for covariate-adjusted survival analysis between medical institutions is difficult due to privacy protection restrictions. We propose a privacy-preserving framework that estimates balanced Kaplan-Meier curves from distributed observational data without exchanging raw data. Each institution sends only the low-dimensional representation obtained through dimensionality reduction of the covariate matrix. Analysts reconstruct the aggregated dataset, perform propensity score matching, and estimate survival curves. Experiments using simulation datasets and five publicly available medical datasets showed that the proposed method consistently outperformed single-site analyses. This method can handle both horizontal and vertical data distribution scenarios and enables the collaborative acquisition of reliable survival curves with minimal communication and no disclosure of raw data.
- oai:arXiv.org:2505.06035v3
+ Sufficient digits and density estimation: A Bayesian nonparametric approach using generalized finite P\'olya trees
+ https://arxiv.org/abs/2506.09437
+ arXiv:2506.09437v3 Announce Type: replace
+Abstract: This paper proposes a novel approach for statistical modelling of a continuous random variable $X$ on $[0, 1)$, based on its digit representation $X=.X_1X_2\ldots$. In general, $X$ can be coupled with a latent random variable $N$ so that $(X_1,\ldots,X_N)$ becomes a sufficient statistic and $.X_{N+1}X_{N+2}\ldots$ is uniformly distributed. In line with this fact, and focusing on binary digits for simplicity, we propose a family of generalized finite P{\'o}lya trees that induces a random density for a sample, which becomes a flexible tool for density estimation. Here, the digit system may be random and learned from the data. We provide a detailed Bayesian analysis, including a closed-form expression for the posterior distribution. We analyse the frequentist properties as the sample size increases, and provide sufficient conditions for consistency of the posterior distributions of the random density and $N$. We consider an extension to data spanning multiple orders of magnitude, and propose a prior distribution that encodes the so-called extended Newcomb-Benford law. Such a model shows promising results for density estimation of human-activity data. Our methodology is illustrated on several synthetic and real datasets.
+ oai:arXiv.org:2506.09437v3
stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Akihiro Toyoda, Yuji Kawamata, Tomoru Nakayama, Akira Imakura, Tetsuya Sakurai, Yukihiko Okada
-
-
- Detecting non-uniform patterns on high-dimensional hyperspheres
- https://arxiv.org/abs/2506.00444
- arXiv:2506.00444v2 Announce Type: replace
-Abstract: We propose a new probabilistic characterization of the uniform distribution on the hypersphere in terms of the distribution of inner products, extending the ideas of \citep{cuesta2009projection,cuesta2007sharp} in a data-driven manner. Using this characterization, we define a new distance that quantifies the deviation of an arbitrary distribution from uniformity.
- As an application, we construct a novel nonparametric test for the problem of testing uniformity, namely the task of determining whether a set of \(n\) i.i.d.\ random points on the \(p\)-dimensional hypersphere is approximately uniformly distributed. The proposed test is asymptotically a Brownian bridge and it can detect any alternative lying outside a ball of radius \(1/n\) with respect to the proposed distance, in both high and low-dimensional settings.
- We then prove a matching lower bound with respect to this distance and study its behavior when restricted to parametric models. In particular, we show that the minimax detection thresholds with respect to this distance coincide with the usual minimax thresholds in two important families: (i) the class of Fisher--von Mises--Langevin (FvML) alternatives, and (ii) a class of low-rank uniform distributions. Thus, the proposed test is optimal in these models. We also derive the limiting distributions of the test under the corresponding local alternatives.
- As a byproduct of our analysis, we determine the detection threshold in the high-dimensional regime for testing the intrinsic dimension of the uniform distribution on $\mathbb{S}^{p-1}$; that is, for testing whether the distribution is uniformly supported on $\mathbb{S}^{p-1}$ against the alternative that it is uniformly distributed on \[ \mathbb{S}^{p-1} \cap H, \] for some $k$-dimensional linear subspace $H \subset \mathbb{R}^p$.
- oai:arXiv.org:2506.00444v2
+ math.PR
math.ST
stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
- http://creativecommons.org/publicdomain/zero/1.0/
- Tiefeng Jiang, Tuan Pham
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Mario Beraha, Jesper M{\o}ller
A General Approach to Visualizing Uncertainty in Statistical Graphics
https://arxiv.org/abs/2508.00937
- arXiv:2508.00937v2 Announce Type: replace
+ arXiv:2508.00937v3 Announce Type: replace
Abstract: We present a general approach to visualizing uncertainty in static 2-D statistical graphics. If we treat a visualization as a function of its underlying quantities, uncertainty in those quantities induces a distribution over images. We show how to aggregate these images into a single visualization that represents the uncertainty. The approach can be viewed as a generalization of sample-based approaches that use overlay. Notably, standard representations, such as confidence intervals and bands, emerge with their usual coverage guarantees without being explicitly quantified or visualized. As a proof of concept, we implement our approach in the IID setting using resampling, provided as an open-source Python library. Because the approach operates directly on images, the user needs only to supply the data and the code for visualizing the quantities of interest without uncertainty. Through several examples, we show how both familiar and novel forms of uncertainty visualization can be created. The implementation is not only a practical validation of the underlying theory but also an immediately usable tool that can complement existing uncertainty-visualization libraries. (An independent toy reproduction of the image-averaging idea follows this item.)
- oai:arXiv.org:2508.00937v2
+ oai:arXiv.org:2508.00937v3
stat.ME
cs.GR
cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bernarda Petek, David Nabergoj, Erik \v{S}trumbelj
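
The recipe in the abstract above, treating a visualization as a function of the data so that resampling induces a distribution over images, can be reproduced in a few lines. The paper ships an open-source Python library; the standalone matplotlib sketch below is an independent toy, not that library. It bootstraps a simple regression fit, rasterizes each refit, and averages the pixel arrays, so an implicit confidence band emerges where the resampled lines disagree.

import numpy as np
import matplotlib
matplotlib.use("Agg")                      # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 2.0 * x + rng.normal(0, 0.3, size=x.size)

def render(xs, ys):
    # Draw one fitted-line visualization and return it as an RGB array.
    fig, ax = plt.subplots(figsize=(3, 3), dpi=100)
    b, a = np.polyfit(xs, ys, 1)           # slope, intercept
    ax.plot(x, a + b * x, color="black")
    ax.set_xlim(0, 1); ax.set_ylim(-1, 3)
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3].astype(float)
    plt.close(fig)
    return img

# Bootstrap the data, render each refit, and average the images: dark bands
# appear where the resampled fits agree, fading where they do not.
imgs = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)
    imgs.append(render(x[idx], y[idx]))
mean_img = np.mean(imgs, axis=0).astype(np.uint8)
plt.imsave("uncertainty.png", mean_img)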
- A 4% withdrawal rate for American retirement spending, derived from a discrete-time model of stochastic returns on assets and their sample moments
- https://arxiv.org/abs/2508.10273
- arXiv:2508.10273v2 Announce Type: replace
-Abstract: What grounds the rule of thumb that a(n American) retiree can safely withdraw 4% of their initial retirement wealth in their first year of retirement, then increase that rate of consumption with inflation? I address that question with a discrete-time model of returns to a retirement portfolio consumed at a rate that grows by $s$ per period. The model's key parameter is $\gamma$, an $s$-adjusted rate of return to wealth, derived from the first 2-4 moments of the portfolio's probability distribution of returns; for a retirement lasting $t$ periods the model recommends a rate of consumption of $\gamma / (1 - (1 - \gamma)^t)$. Estimation of $\gamma$ (and hence of the implied rate of spending in retirement) reveals that the 4% rule emerges from adjusting high expected rates of return down for: consumption growth, the variance in (and kurtosis of) returns to wealth, the longevity risk of a retiree potentially underestimating $t$, and the inclusion of bonds in retirement portfolios without leverage. The model supports leverage of retirement portfolios dominated by the S&P 500, with leverage ratios $> 1.6$ having been historically optimal under the model's approximations. Historical simulations of 30-year retirements suggest that the model proposes withdrawal rates having roughly even odds of success, that leverage greatly improves those odds for stocks-heavy portfolios, and that investing on margin could have allowed safe withdrawal rates $> 6$% per year.
- oai:arXiv.org:2508.10273v2
- stat.AP
- q-fin.PM
- q-fin.ST
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/publicdomain/zero/1.0/
- Drew M. Thomas
-
-
- A self-supervised learning approach for denoising autoregressive models with additive noise: finite and infinite variance cases
- https://arxiv.org/abs/2508.12970
- arXiv:2508.12970v2 Announce Type: replace
-Abstract: The autoregressive time series model is a popular second-order stationary process, modeling a wide range of real phenomena. However, in applications, autoregressive signals are often corrupted by additive noise. Further, the autoregressive process and the corruptive noise may be highly impulsive, stemming from an infinite-variance distribution. The model estimation techniques that account for additional noise tend to show reduced efficacy when there is very strong noise present in the data, especially when the noise is heavy-tailed. In this paper, we propose a novel self-supervised learning method to denoise the additive noise-corrupted autoregressive model. Our approach is motivated by recent work in computer vision and does not require full knowledge of the noise distribution. We use the proposed method to recover exemplary finite- and infinite-variance autoregressive signals, namely, Gaussian and alpha-stable distributed signals, respectively, from their noise-corrupted versions. The simulation study conducted on both synthetic and semi-synthetic data demonstrates strong denoising performance of our method compared to several baseline methods, particularly when the corruption is significant and impulsive in nature. Finally, we apply the presented methodology to forecast the pure autoregressive signal from the noise-corrupted data.
- oai:arXiv.org:2508.12970v2
- stat.ME
- stat.CO
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Sayantan Banerjee, Agnieszka Wylomanska, Sundar S
-
-
- SADA: Safe and Adaptive Aggregation of Multiple Black-Box Predictions in Semi-Supervised Learning
- https://arxiv.org/abs/2509.21707
- arXiv:2509.21707v2 Announce Type: replace
-Abstract: Semi-supervised learning (SSL) arises in practice when labeled data are scarce or expensive to obtain, while large quantities of unlabeled data are readily available.
- With the growing adoption of machine learning techniques, it has become increasingly feasible to generate multiple predicted labels using a variety of models and algorithms, including deep learning, large language models, and generative AI. In this paper, we propose a novel approach that safely and adaptively aggregates multiple black-box predictions of uncertain quality for both inference and prediction tasks. Our method provides two key guarantees: (i) it never performs worse than using the labeled data alone, regardless of the quality of the predictions; and (ii) if any one of the predictions (without knowing which one) perfectly fits the ground truth, the algorithm adaptively exploits this to achieve either a faster convergence rate or the semiparametric efficiency bound. We demonstrate the effectiveness of the proposed algorithm through small-scale simulations and two real-data analyses with distinct scientific goals. A user-friendly R package, sada, is provided to facilitate practical implementation.
- oai:arXiv.org:2509.21707v2
- stat.ML
- cs.LG
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiawei Shan, Zhifeng Chen, Yiming Dong, Yazhen Wang, Jiwei Zhao
-
-
- Deep Hedging Under Non-Convexity: Limitations and a Case for AlphaZero
- https://arxiv.org/abs/2510.01874
- arXiv:2510.01874v2 Announce Type: replace
-Abstract: This paper examines replication portfolio construction in incomplete markets - a key problem in financial engineering with applications in pricing, hedging, balance sheet management, and energy storage planning. We model this as a two-player game between an investor and the market, where the investor makes strategic bets on future states while the market reveals outcomes. Inspired by the success of Monte Carlo Tree Search in stochastic games, we introduce an AlphaZero-based system and compare its performance to deep hedging - a widely used industry method based on gradient descent. Through theoretical analysis and experiments, we show that deep hedging struggles in environments where the optimal action-value function is not subject to convexity constraints - such as those involving non-convex transaction costs, capital constraints, or regulatory limitations - converging to local optima. We construct specific market environments to highlight these limitations and demonstrate that AlphaZero consistently finds near-optimal replication strategies. On the theoretical side, we establish a connection between deep hedging and convex optimization, suggesting that its effectiveness is contingent on convexity assumptions. Our experiments further suggest that AlphaZero is more sample-efficient - an important advantage in data-scarce, overfitting-prone derivative markets.
- oai:arXiv.org:2510.01874v2
- stat.ML
- cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Matteo Maggiolo, Giuseppe Nuti, Miroslav \v{S}trupl, Oleg Szehr
-
-
- Model Monitoring: A General Framework with an Application to Non-life Insurance Pricing
- https://arxiv.org/abs/2510.04556
- arXiv:2510.04556v2 Announce Type: replace
-Abstract: Maintaining the predictive performance of pricing models is challenging when insurance portfolios and data-generating mechanisms evolve over time. Focusing on non-life insurance, we adopt the concept-drift terminology from machine learning and distinguish virtual drift from real concept drift in an actuarial setting. Methodologically, we (i) formalize deviance loss and Murphy's score decomposition to assess global and local auto-calibration; (ii) study the Gini score as a rank-based performance measure, derive its asymptotic distribution, and develop a consistent bootstrap estimator of its asymptotic variance; and (iii) combine these results into a statistically grounded, model-agnostic monitoring framework that integrates a Gini-based ranking drift test with global and local auto-calibration tests. An application to a modified motor insurance portfolio with controlled concept-drift scenarios illustrates how the framework guides decisions on refitting or recalibrating pricing models.
- oai:arXiv.org:2510.04556v2
+ Gaussian Approximation for Two-Timescale Linear Stochastic Approximation
+ https://arxiv.org/abs/2508.07928
+ arXiv:2508.07928v2 Announce Type: replace
+Abstract: In this paper, we establish non-asymptotic bounds on the accuracy of the normal approximation for linear two-timescale stochastic approximation (TTSA) algorithms driven by martingale-difference or Markov noise. Focusing on both the last-iterate and Polyak-Ruppert averaging regimes, we derive bounds for the normal approximation in terms of the convex distance between probability distributions. Our analysis reveals a non-trivial interaction between the fast and slow timescales: the normal approximation rate for the last iterate improves as the timescale separation increases, while it deteriorates in the Polyak-Ruppert averaged setting. We also provide high-order moment bounds for the error of the linear TTSA algorithm, which may be of independent interest. (A scalar toy instance of the recursion follows this item.)
+ oai:arXiv.org:2508.07928v2
+ stat.ML
+ cs.LG
+ math.OC
+ math.PR
+ math.ST
- q-fin.ST
- stat.AP
stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Alexej Brauer, Paul Menzel, Mario V. W\"uthrich
-
-
- Can language models boost the power of randomized experiments without statistical bias?
- https://arxiv.org/abs/2510.05545
- arXiv:2510.05545v2 Announce Type: replace
-Abstract: Randomized experiments or randomized controlled trials (RCTs) are gold standards for causal inference, yet cost and sample-size constraints limit power. We introduce CALM (Causal Analysis leveraging Language Models), a statistical framework that integrates large language models (LLMs) generated insights of RCTs with established causal estimators to increase precision while preserving statistical validity. In particular, CALM treats LLM-generated outputs as auxiliary prognostic information and corrects their potential bias via a heterogeneous calibration step that residualizes and optimally reweights predictions. We prove that CALM remains consistent even when LLM predictions are biased and achieves efficiency gains over augmented inverse probability weighting estimators for various causal effects. In particular, CALM develops a few-shot variant that aggregates predictions across randomly sampled demonstration sets. The resulting U-statistic-like predictor restores i.i.d. structure and also mitigates prompt-selection variability. Empirically, in simulations calibrated to a mobile-app depression RCT, CALM delivers lower variance relative to other benchmarking methods, is effective in zero- and few-shot settings, and remains stable across prompt designs. By principled use of LLMs to harness unstructured data and external knowledge learned during pretraining, CALM provides a practical path to more precise causal analyses in RCTs.
- oai:arXiv.org:2510.05545v2
- stat.ME
- econ.EM
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Xinrui Ruan, Xinwei Ma, Yingfei Wang, Waverly Wei, Jingshen Wang
-
-
- Multiply Robust Estimation of Conditional Survival Probability with Time-Varying Covariates
- https://arxiv.org/abs/2510.10372
- arXiv:2510.10372v3 Announce Type: replace
-Abstract: It is often of interest to study the association between covariates and the cumulative incidence of a right-censored time-to-event outcome. When time-varying covariates are measured on a fixed discrete time scale, it is desirable to account for these more up-to-date covariates when addressing censoring. For example, in vaccine trials, it is of interest to study the association between immune response levels after administering the vaccine and the cumulative incidence of the endpoint, while accounting for loss to follow-up explained by immune response levels measured after at multiple post-vaccination visits. Existing methods rely on stringent parametric assumptions, do not account for informative censoring due to time-varying covariates when time is continuous, only estimate a marginal survival probability, or do not fully use the discrete-time structure of post-treatment covariates. We propose a nonparametric estimator of the continuous-time survival probability conditional on covariates, accounting for censoring due to time-varying covariates measured on a fixed discrete time scale. We show that the estimator is multiply robust: it is Fisher consistent if, within each time window between adjacent visits, the censoring distribution is correctly specified, or both the time-to-event distribution and a conditional mean probability are correctly specified.
- oai:arXiv.org:2510.10372v3
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hongxiang Qiu, Marco Carone, Alex Luedtke, Peter B. Gilbert
-
-
- In-Context Learning Is Provably Bayesian Inference: A Generalization Theory for Meta-Learning
- https://arxiv.org/abs/2510.10981
- arXiv:2510.10981v2 Announce Type: replace
-Abstract: This paper develops a finite-sample statistical theory for in-context learning (ICL), analyzed within a meta-learning framework that accommodates mixtures of diverse task types. We introduce a principled risk decomposition that separates the total ICL risk into two orthogonal components: Bayes Gap and Posterior Variance. The Bayes Gap quantifies how well the trained model approximates the Bayes-optimal in-context predictor. For a uniform-attention Transformer, we derive a non-asymptotic upper bound on this gap, which explicitly clarifies the dependence on the number of pretraining prompts and their context length. The Posterior Variance is a model-independent risk representing the intrinsic task uncertainty. Our key finding is that this term is determined solely by the difficulty of the true underlying task, while the uncertainty arising from the task mixture vanishes exponentially fast with only a few in-context examples. Together, these results provide a unified view of ICL: the Transformer selects the optimal meta-algorithm during pretraining and rapidly converges to the optimal algorithm for the true task at test time.
- oai:arXiv.org:2510.10981v2
- stat.ML
- cs.LG
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tomoya Wakayama, Taiji Suzuki
-
-
- Multimodal Fusion and Interpretability in Human Activity Recognition: A Reproducible Framework for Sensor-Based Modeling
- https://arxiv.org/abs/2510.22410
- arXiv:2510.22410v2 Announce Type: replace
-Abstract: The research introduces a reproducible framework for transforming raw, heterogeneous sensor streams into aligned, semantically meaningful representations for multimodal human activity recognition. Grounded in the Carnegie Mellon University Multi-Modal Activity Database (CMU-MMAC) database and focused on the naturalistic Subject 07 Brownie session, the study traces the full pipeline from data ingestion to modeling and interpretation. Unlike black box preprocessing, a unified preprocessing workflow is proposed that temporally aligns video, audio, and RFID through resampling, grayscale conversion, sliding-window segmentation, and modality-specific normalization, producing standardized fused tensors suitable for downstream learning. Building on this foundation, the work systematically compares early, late, and hybrid fusion strategies using LSTM-based models implemented with PyTorch and TensorFlow, showing that late fusion consistently achieves the highest validation accuracy, with hybrid fusion outperforming early fusion. To evaluate interpretability and modality contribution, PCA and t-SNE visualizations reveal coherent temporal structure and confirm that the video carries stronger discriminative power than audio, while their combination yields substantial performance gains. Incorporating sparse, asynchronous RFID signals further improves accuracy by over 50% and boosts macro-averaged ROC-AUC, demonstrating the added value of object-interaction cues. Overall, the framework contributes a modular, empirically validated approach to multimodal fusion that links preprocessing design, fusion architecture, and interpretability, offering a transferable template for intelligent systems operating in complex, real-world activity settings.
- oai:arXiv.org:2510.22410v2
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yiyao Yang, Yasemin Gulbahar
-
-
- Unifiedly Efficient Inference on All-Dimensional Targets for Large-Scale GLMs
- https://arxiv.org/abs/2511.06070
- arXiv:2511.06070v2 Announce Type: replace
-Abstract: The scalability of Generalized Linear Models (GLMs) for large-scale, high-dimensional data often forces a trade-off between computational feasibility and statistical accuracy, particularly for inference on pre-specified parameters. While subsampling methods mitigate computational costs, existing estimators are typically constrained by a suboptimal $r^{-1/2}$ convergence rate, where $r$ is the subsample size. This paper introduces a unified framework that systematically breaks this barrier, enabling efficient and precise inference regardless of the dimension of the target parameters. To overcome the accuracy loss and enhance computational efficiency, we propose three estimators tailored to different scenarios. For low-dimensional targets, we propose a de-variance subsampling (DVS) estimator that achieves a sharply improved convergence rate of $\max\{r^{-1}, n^{-1/2}\}$, permitting valid inference even with very small subsamples. As $r$ grows, a multi-step refinement of our estimator is proven to be asymptotically normal and semiparametric efficient when $r/\sqrt{n} \to \infty$, matching the performance of the full-sample estimator-a property confirmed by its Bahadur representation. Critically, we provide an improved principle to high-dimensional targets, developing a novel decorrelated score function that facilitates simultaneous inference for a diverging number of pre-specified parameters. Comprehensive numerical experiments demonstrate that our framework delivers a superior balance of computational efficiency and statistical accuracy across both low- and high-dimensional inferential tasks in large-scale GLM, thereby realizing the promise of unifiedly efficient inference for large-scale GLMs.
- oai:arXiv.org:2511.06070v2
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Bo Fu, Dandan Jiang
+ Bogdan Butyrin, Artemy Rubtsov, Alexey Naumov, Vladimir Ulyanov, Sergey Samsonov
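
A scalar toy instance of the linear TTSA recursion analysed above, with invented coefficients, Gaussian martingale-difference noise, and a tail (Polyak-Ruppert) average of the slow iterate. The step-size exponents 0.6 and 0.9 are one standard choice enforcing timescale separation, not the paper's prescription.

import numpy as np

# Fast iterate w_k tracks w*(theta) = (b2 - a21*theta)/a22; the slow iterate
# theta_k then solves a11*theta + a12*w*(theta) = b1.
a11, a12, a21, a22, b1, b2 = 2.0, 1.0, 1.0, 2.0, 1.0, 0.0
theta_star = b1 / (a11 - a12 * a21 / a22)        # = 2/3
rng = np.random.default_rng(3)

n = 200_000
theta, w = 0.0, 0.0
theta_bar, count = 0.0, 0
for k in range(1, n + 1):
    alpha = 1.0 / k ** 0.6                        # fast step
    beta = 1.0 / k ** 0.9                         # slow step (beta << alpha)
    xi, eta = rng.normal(size=2)                  # martingale-difference noise
    w += alpha * (b2 - a21 * theta - a22 * w + xi)
    theta += beta * (b1 - a11 * theta - a12 * w + eta)
    if k > n // 2:                                # Polyak-Ruppert tail average
        count += 1
        theta_bar += (theta - theta_bar) / count

print(f"last iterate: {theta:.4f}, averaged: {theta_bar:.4f}, target: {theta_star:.4f}")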
- High-Performance Variance-Covariance Matrix Construction Using an Uncentered Gram Formulation
- https://arxiv.org/abs/2511.08223
- arXiv:2511.08223v2 Announce Type: replace
-Abstract: Reichel (2025) defined the bariance as a pairwise-difference measure that can be rewritten in linear time using only scalar sums. We extend this idea to the covariance matrix by showing that the standard matrix expression involving the uncentered Gram matrix and a correction term is algebraically identical to the pairwise-difference definition while avoiding explicit centering. The computation then reduces to one outer product of dimension p-by-p and a single subtraction. Benchmarks in Python show clear runtime gains, especially when BLAS optimizations are absent. Optionally faster Gram-matrix routines such as RXTX (Rybin et al., 2025) further reduce overall cost.
- oai:arXiv.org:2511.08223v2
- stat.CO
- cs.LG
- cs.NA
- math.NA
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Felix Reichel
-
-
- Interpolated stochastic interventions based on propensity scores, target policies and treatment-specific costs
- https://arxiv.org/abs/2511.11353
- arXiv:2511.11353v2 Announce Type: replace
-Abstract: We introduce families of stochastic interventions for discrete treatments that connect causal modeling to cost-sensitive decision making. The interventions arise from a cost-penalized information projection of the independent product of the organic propensity and a user-specified target, yielding closed-form Boltzmann-Gibbs couplings. The induced marginals define modified stochastic policies that interpolate smoothly, via a single tilt parameter, from the organic law or from the target distribution toward a product-of-experts limit when all destination costs are strictly positive. One of these families recovers and extends incremental propensity score interventions, retaining identification without global positivity. For inference, we derive efficient influence functions under a nonparametric model for the expected outcomes after these policies and construct one-step estimators with uniform confidence bands. In simulations, the proposed estimators improve stability and robustness to nuisance misspecification relative to plug-in baselines. The framework can operationalize graded scientific hypotheses under realistic constraints: because inputs are modular, analysts can sweep feasible policy spaces, prototype candidates, and align interventions with budgets and logistics before committing experimental resources. This could help close the loop between observational evidence and resource-aware experimental design.
- oai:arXiv.org:2511.11353v2
+ A Case for a "Refutations and Critiques" Track in Statistics Journals
+ https://arxiv.org/abs/2509.03702
+ arXiv:2509.03702v3 Announce Type: replace
+Abstract: The statistics community, which has traditionally lacked a transparent and open peer-review system, faces a challenge of inconsistent paper quality, with some published work containing substantial errors. This problem resonates with concerns raised by Schaeffer et al. (2025) regarding the rapid growth of machine learning research. They argue that peer review has proven insufficient to prevent the publication of ``misleading, incorrect, flawed or perhaps even fraudulent studies'' and that a ``dynamic self-correcting research ecosystem'' is needed. This note provides a concrete illustration of this problem by examining two published papers, Wang, Zhou and Lin (2025) and Liu et al. (2023), and exposing striking and critical errors in their proofs. The presence of such errors in major journals raises a fundamental question about the importance and verification of mathematical proofs in our field. Echoing the proposal from Schaeffer et al. (2025), we argue that reforming the peer-review system itself is likely impractical. Instead, we propose a more viable path forward: the creation of a high-profile, reputable platform, such as a ``Refutations and Critiques'' track on arXiv, to provide visibility to vital research that critically challenges prior work. Such a mechanism would be crucial for enhancing the reliability and credibility of statistical research.
+ oai:arXiv.org:2509.03702v3
stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Johan de Aguas
-
-
- Wilcoxon-Mann-Whitney Test of No Group Discrimination
- https://arxiv.org/abs/2511.20308
- arXiv:2511.20308v2 Announce Type: replace
-Abstract: The traditional WMW null hypothesis $H_0: F = G$ is erroneously too broad. WMW actually tests narrower $H_0: AUC = 0.5$. Asymptotic distribution of the standardized $U$ statistic (i.e., the empirical AUC) under the correct $H_0$ is derived along with finite sample bias corrections. The traditional alternative hypothesis of stochastic dominance is too narrow. WMW is consistent against $H_1: AUC \neq 0.5$, as established by Van Dantzig in 1951.
- oai:arXiv.org:2511.20308v2
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Marian Grendar
-
-
- Univariate-Guided Sparse Regression for Biobank-Scale High-Dimensional -omics Data
- https://arxiv.org/abs/2511.22049
- arXiv:2511.22049v2 Announce Type: replace
-Abstract: We present a scalable framework for computing polygenic risk scores (PRS) in high-dimensional genomic settings using the recently introduced Univariate-Guided Sparse Regression (uniLasso). UniLasso is a two-stage penalized regression procedure that leverages univariate coefficients and magnitudes to stabilize feature selection and enhance interpretability. Building on its theoretical and empirical advantages, we adapt uniLasso for application to the UK Biobank, a population-based repository comprising over one million genetic variants measured on hundreds of thousands of individuals from the United Kingdom. We further extend the framework to incorporate external summary statistics to increase predictive accuracy. Our results demonstrate that the adapted uniLasso attains predictive performance comparable to standard Lasso while selecting substantially fewer variants, yielding sparser and more interpretable models. Moreover, it exhibits superior performance in estimating PRS relative to its competitors, such as PRS-CS. Integrating external scores further improves prediction while maintaining sparsity.
- oai:arXiv.org:2511.22049v2
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Joshua Richland, Tuomo Kiiskinen, William Wang, Sophia Lu, Balasubramanian Narasimhan, Manuel Rivas, Robert Tibshirani
-
-
- Comparing Two Proxy Methods for Causal Identification
- https://arxiv.org/abs/2512.00175
- arXiv:2512.00175v2 Announce Type: replace
-Abstract: Identifying causal effects in the presence of unmeasured variables is a fundamental challenge in causal inference, for which proxy variable methods have emerged as a powerful solution. We contrast two major approaches in this framework: (1) bridge equation methods, which leverage solutions to integral equations to recover causal targets, and (2) array decomposition methods, which recover latent factors composing counterfactual quantities by exploiting unique determination of eigenspaces. We compare the model restrictions underlying these two approaches and provide insight into implications of the underlying assumptions, clarifying the scope of applicability for each method.
- oai:arXiv.org:2512.00175v2
- stat.ME
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Helen Guo, Elizabeth L. Ogburn, Ilya Shpitser
-
-
- Difference-in-differences with stochastic policy shifts of continuous treatments
- https://arxiv.org/abs/2512.00296
- arXiv:2512.00296v2 Announce Type: replace
-Abstract: Treatment effects under stochastic policy shifts quantify differences in outcomes across counterfactual scenarios with varying treatment distributions. Stochastic policy shifts generalize common notions of treatment effects since they include deterministic interventions (e.g., all individuals treated versus none treated) as a special case. While stochastic policy effects have been examined under causal exchangeability, they have not been integrated into the difference-in-differences (DiD) framework, which relies on parallel trends rather than exchangeability. In this paper, nonparametric efficient estimators of stochastic intervention effects are developed under a DiD setup with continuous treatments. The proposed causal estimand is the average stochastic dose effect among the treated, where the stochastic dose effect is the contrast between potential outcomes under a counterfactual dose distribution and no treatment. Several possible stochastic interventions are discussed, including those that do and do not depend on the observed data distribution. For generic stochastic interventions, the causal estimand is identified under standard conditions and estimators are proposed. Then, we focus on a specific stochastic policy shift, the exponential tilt, that increments the conditional density function of the continuous dose. For the exponential tilt intervention, a nonparametric estimator is proposed that allows for data-adaptive, machine learning nuisance function estimation. Under mild convergence rate conditions, the estimator is shown to be root-$n$ consistent and asymptotically normal with variance attaining the nonparametric efficiency bound. The proposed method is used to study the effect of hydraulic fracturing activity on employment and income.
- oai:arXiv.org:2512.00296v2
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ math.ST
+ stat.TH
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Michael Jetsupphasuk, Chenwei Fang, Didong Li, Michael G. Hudgens
+ Zhen Li
- Estimation of Semiparametric Factor Models with Missing Data
- https://arxiv.org/abs/2512.03235
- arXiv:2512.03235v2 Announce Type: replace
-Abstract: We study semiparametric factor models in high-dimensional panels where the factor loadings consist of a nonparametric component explained by observed covariates and an idiosyncratic component capturing unobserved heterogeneity. A key challenge in empirical applications is the presence of missing observations, which can distort both factor recovery and loading estimation. To address this issue, we develop a projected principal component analysis (PPCA) procedure that accommodates general missing-at-random mechanisms through inverse-probability weighting. We establish consistency and derive the asymptotic distributions of the estimated factors and loading functions, allowing the sieve dimension to diverge and permitting the time dimension to be either fixed or growing. Unlike classical PCA, PPCA achieves consistent factor estimation even when T is fixed, and the limiting distributions under missing data exhibit mixture normality with enlarged asymptotic variances. Theoretical results are supported by simulations and an empirical application. Our findings demonstrate that PPCA provides an effective and robust framework for estimating semiparametric factor models in the presence of missing data.
- oai:arXiv.org:2512.03235v2
+ Bayes Factor Hypothesis Testing in Meta-Analyses: Practical Advantages and Methodological Considerations
+ https://arxiv.org/abs/2511.22535
+ arXiv:2511.22535v2 Announce Type: replace
+Abstract: Bayesian hypothesis testing via Bayes factors offers a principled alternative to classical p-value methods in meta-analysis, particularly suited to its cumulative and sequential nature. Unlike commonly reported p-values for standard null hypothesis significance testing, Bayes factors allow for quantifying support both for and against the existence of an effect, facilitate ongoing evidence monitoring, and maintain coherent long-run behavior as additional studies are incorporated. Recent theoretical developments further show how Bayes factors can flexibly control Type I error rates through connections to e-value theory. Despite these advantages, their use remains limited in the meta-analytic literature. This paper provides a critical overview of their theoretical properties, methodological considerations, such as prior sensitivity, and practical advantages for evidence synthesis. Two illustrative applications are provided: one on statistical learning in individuals with language impairments, and another on seroma incidence following post-operative exercise in breast cancer patients. New tools supporting these methods are available in the open-source R package BFpack.
+ oai:arXiv.org:2511.22535v2
stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sijie Zheng
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Joris Mulder, Robbie C. M. van Aert
Assumption-Lean Differential Variance Inference for Heterogeneous Treatment Effect Detection
https://arxiv.org/abs/2512.03254
- arXiv:2512.03254v2 Announce Type: replace
+ arXiv:2512.03254v3 Announce Type: replace
Abstract: The conditional average treatment effect (CATE) is frequently estimated to refute the homogeneous treatment effect assumption. Under this assumption, all units making up the population under study experience identical benefit from a given treatment. Uncovering heterogeneous treatment effects through inference about the CATE, however, requires that covariates truly modifying the treatment effect be reliably collected at baseline. CATE-based techniques will necessarily fail to detect violations when effect modifiers are omitted from the data due to, for example, resource constraints. Severe measurement error has a similar impact. To address these limitations, we prove that the homogeneous treatment effect assumption can be gauged through inference about contrasts of the potential outcomes' variances. We derive causal machine learning estimators of these contrasts and study their asymptotic properties. We establish that these estimators are doubly robust and asymptotically linear under mild conditions, permitting formal hypothesis testing about the homogeneous treatment effect assumption even when effect modifiers are missing or mismeasured. Numerical experiments demonstrate that these estimators' asymptotic guarantees are approximately achieved in experimental and observational data alike. These inference procedures are then used to detect heterogeneous treatment effects in the re-analysis of randomized controlled trials investigating targeted temperature management in cardiac arrest patients. (A naive randomized-experiment version of the variance contrast is sketched after this item.)
- oai:arXiv.org:2512.03254v2
+ oai:arXiv.org:2512.03254v3
stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
http://creativecommons.org/licenses/by/4.0/
Philippe A. Boileau, Hani Zaki, Gabriele Lileikyte, Niklas Nielsen, Patrick R. Lawler, Mireille E. Schnitzer
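
In a randomized experiment, the variance contrast targeted above can be estimated naively from the two arms. The sketch below does exactly that, using the delta-method approximation Var(s^2) ~ (m4 - s^4)/n for a normal test; it is a stand-in for, not an implementation of, the paper's assumption-lean doubly robust estimators, which additionally handle observational confounding and machine-learned nuisances.

import numpy as np
from scipy import stats

def variance_contrast_test(y1, y0):
    # Normal-approximation test of Var(Y(1)) = Var(Y(0)) in a randomized
    # experiment, via the delta method: Var(s^2) ~ (m4 - s^4) / n.
    def var_of_s2(y):
        s2 = np.var(y, ddof=1)
        m4 = np.mean((y - y.mean()) ** 4)
        return s2, (m4 - s2 ** 2) / len(y)
    s2_1, v1 = var_of_s2(y1)
    s2_0, v0 = var_of_s2(y0)
    z = (s2_1 - s2_0) / np.sqrt(v1 + v0)
    return s2_1 - s2_0, 2 * stats.norm.sf(abs(z))

# Toy example: equal means in both arms, but treatment spreads outcomes out --
# a signature of effect heterogeneity that a mean contrast would miss.
rng = np.random.default_rng(4)
y0 = rng.normal(0.0, 1.0, 2000)
y1 = rng.normal(0.0, 1.5, 2000)
diff, p = variance_contrast_test(y1, y0)
print(f"variance contrast: {diff:.3f}, p-value: {p:.2e}")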
- SSLfmm: An R Package for Semi-Supervised Learning with a Mixed-Missingness Mechanism in Finite Mixture Models
- https://arxiv.org/abs/2512.03322
- arXiv:2512.03322v2 Announce Type: replace
-Abstract: Semi-supervised learning (SSL) constructs classifiers from datasets in which only a subset of observations is labelled, a situation that naturally arises because obtaining labels often requires expert judgement or costly manual effort. This motivates methods that integrate labelled and unlabelled data within a learning framework. Most SSL approaches assume that label absence is harmless, typically treated as missing completely at random or ignored, but in practice, the missingness process can be informative, as the chances of an observation being unlabelled may depend on the ambiguity of its feature vector. In such cases, the missingness indicators themselves provide additional information that, if properly modelled, may improve estimation efficiency. The \textbf{SSLfmm} package for R is designed to capture this behaviour by estimating the Bayes' classifier under a finite mixture model in which each component corresponding to a class follows a multivariate normal distribution. It incorporates a mixed-missingness mechanism that combines a missing completely at random (MCAR) component with a (non-ignorable) missing at random (MAR) component, the latter modelling the probability of label missingness as a logistic function of the entropy based on the features. Parameters are estimated via an Expectation--Conditional Maximisation algorithm. In the two-class Gaussian setting with arbitrary covariance matrices, the resulting classifier trained on partially labelled data may, in some cases, achieve a lower misclassification rate than the supervised version in the case where all the labels are known. The package includes a practical tool for modelling and illustrates its performance through simulated examples.
- oai:arXiv.org:2512.03322v2
- stat.CO
+ Controlling the False Discovery Proportion in Matched Observational Studies
+ https://arxiv.org/abs/2512.06601
+ arXiv:2512.06601v2 Announce Type: replace
+Abstract: We provide an approach to exploratory data analysis in matched observational studies with a single intervention and multiple endpoints. In such settings, the researcher would like to explore evidence for actual treatment effects among these variables while accounting not only for the possibility of false discoveries, but also for the potential impact of unmeasured confounding. For any candidate subset of hypotheses about these outcomes, we provide sensitivity sets for the proportion of the hypotheses within the subset which are actually true. The resulting sensitivity statements are valid simultaneously over all possible choices for the rejected set, allowing the researcher to search for promising subsets of hypotheses that maintain a large estimated fraction of true discoveries even if hidden bias is present. The approach is well suited to sensitivity analysis, as conclusions that some fraction of outcomes are affected by the treatment exhibit larger robustness to unmeasured confounding than findings that any particular outcome is affected. We show how a sequence of integer programs, in tandem with screening steps, facilitate the efficient computation of the required sensitivity sets. We illustrate the practical utility of our method through both simulation studies and a data example on the long-term impacts of childhood abuse.
+ oai:arXiv.org:2512.06601v2
stat.ME
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
replace
- http://creativecommons.org/licenses/by/4.0/
- Geoffrey J. McLachlan, Jinran Wu
-
-
- The Optimal Approximation Factor in Density Estimation
- https://arxiv.org/abs/1902.05876
- arXiv:1902.05876v4 Announce Type: replace-cross
-Abstract: Consider the following problem: given two arbitrary densities $q_1,q_2$ and a sample-access to an unknown target density $p$, find which of the $q_i$'s is closer to $p$ in total variation.
- A remarkable result due to Yatracos shows that this problem is tractable in the following sense: there exists an algorithm that uses $O(\epsilon^{-2})$ samples from $p$ and outputs~$q_i$ such that with high probability, $TV(q_i,p) \leq 3\cdot\mathsf{opt} + \epsilon$, where $\mathsf{opt}= \min\{TV(q_1,p),TV(q_2,p)\}$. Moreover, this result extends to any finite class of densities $\mathcal{Q}$: there exists an algorithm that outputs the best density in $\mathcal{Q}$ up to a multiplicative approximation factor of 3.
- We complement and extend this result by showing that: (i) the factor 3 can not be improved if one restricts the algorithm to output a density from $\mathcal{Q}$, and (ii) if one allows the algorithm to output arbitrary densities (e.g.\ a mixture of densities from $\mathcal{Q}$), then the approximation factor can be reduced to 2, which is optimal. In particular this demonstrates an advantage of improper learning over proper in this setup.
- We develop two approaches to achieve the optimal approximation factor of 2: an adaptive one and a static one. Both approaches are based on a geometric point of view of the problem and rely on estimating surrogate metrics to the total variation. Our sample complexity bounds exploit techniques from {\it Adaptive Data Analysis}.
- oai:arXiv.org:1902.05876v4
- cs.LG
- cs.CC
- cs.IT
- math.IT
- math.PR
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Olivier Bousquet, Daniel Kane, Shay Moran
-
-
- A Unified Framework for Estimation of High-dimensional Conditional Factor Models
- https://arxiv.org/abs/2209.00391
- arXiv:2209.00391v2 Announce Type: replace-cross
-Abstract: This paper presents a general framework for estimating high-dimensional conditional latent factor models via constrained nuclear norm regularization. We establish large sample properties of the estimators and provide efficient algorithms for their computation. To improve practical applicability, we propose a cross-validation procedure for selecting the regularization parameter. Our framework unifies the estimation of various conditional factor models, enabling the derivation of new asymptotic results while addressing limitations of existing methods, which are often model-specific or restrictive. Empirical analyses of the cross section of individual US stock returns suggest that imposing homogeneity improves the model's out-of-sample predictability, with our new method outperforming existing alternatives.
- oai:arXiv.org:2209.00391v2
- econ.EM
- stat.AP
- stat.ME
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qihui Chen
+ Mengqi Lin, Colin Fogarty
- Hidden Minima in Two-Layer ReLU Networks
- https://arxiv.org/abs/2312.16819
- arXiv:2312.16819v3 Announce Type: replace-cross
-Abstract: We consider the optimization problem arising from fitting two-layer ReLU networks with $d$ inputs under the square loss, where labels are generated by a target network. Two infinite families of spurious minima have recently been identified: one whose loss vanishes as $d \to \infty$, and another whose loss remains bounded away from zero. The latter are nevertheless avoided by vanilla SGD, and thus hidden, motivating the search for analytic properties distinguishing the two types. Perhaps surprisingly, the Hessian spectra of hidden and non-hidden minima agree up to terms of order $O(d^{-1/2})$, providing limited explanatory power. Consequently, our analysis of hidden minima proceeds instead via curves along which the loss is minimized or maximized. The main result is that arcs emanating from hidden minima differ, characteristically, by their structure and symmetry, precisely on account of the $O(d^{-1/2})$-eigenvalue terms absent from previous analyses.
- oai:arXiv.org:2312.16819v3
+ Discovering Influential Factors in Variational Autoencoders
+ https://arxiv.org/abs/1809.01804
+ arXiv:1809.01804v3 Announce Type: replace-cross
+Abstract: In machine learning, it remains a critical challenge to identify and supervise the learned representation, without manual intervention or reliance on intuition, in order to extract useful knowledge or serve downstream tasks. In this work, we focus on supervising the influential factors extracted by the variational autoencoder (VAE). The VAE is designed to learn independent low-dimensional representations, but it faces the problem that pre-set factors are sometimes ignored. We argue that the mutual information between the input and each learned factor of the representation serves as a necessary indicator for discovering the influential factors. We find that the VAE objective tends to induce mutual-information sparsity when the number of factor dimensions exceeds the intrinsic dimension of the data, resulting in some non-influential factors whose role in data reconstruction can be ignored. We show that mutual information also influences the lower bound of the VAE's reconstruction error and the downstream classification task. To make this indicator applicable, we design an algorithm for calculating the mutual information for the VAE and prove its consistency. Experimental results on the MNIST, CelebA, and DEAP datasets show that mutual information can help determine influential factors, some of which are interpretable and can be used for further generation and classification tasks, and can help discover the variant connected with emotion in the DEAP dataset.
+ oai:arXiv.org:1809.01804v3
+ cs.LG
- math.OC
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yossi Arjevani
+ Shiqi Liu, Jingxin Liu, Qian Zhao, Xiangyong Cao, Huibin Li, Deyu Meng, Hongying Meng, Sheng Liu
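
One closely related diagnostic can be sketched concretely: for a Gaussian encoder, the average per-dimension KL to the prior upper-bounds the mutual information a factor carries, so near-zero values flag candidate non-influential factors. This is a generic illustration, not necessarily the paper's exact estimator.

import numpy as np

def per_factor_kl(mu, logvar):
    """mu, logvar: (batch, dim) Gaussian encoder outputs; returns (dim,) with
    the batch-averaged KL(q(z_i|x) || N(0,1)) per latent dimension."""
    kl = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)
    return kl.mean(axis=0)

mu = np.random.default_rng(0).normal(size=(256, 8)) * np.array([1.0]*4 + [0.0]*4)
lv = np.zeros((256, 8))  # unit variance; last 4 dims match the prior exactly
print(per_factor_kl(mu, lv).round(2))  # collapsed dims show ~0 KL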
- Covariate-Elaborated Robust Partial Information Transfer with Conditional Spike-and-Slab Prior
- https://arxiv.org/abs/2404.03764
- arXiv:2404.03764v3 Announce Type: replace-cross
-Abstract: The popularity of transfer learning stems from the fact that it can borrow information from useful auxiliary datasets. Existing statistical transfer learning methods usually adopt a global similarity measure between the source data and the target data, which may lead to inefficiency when only partial information is shared. In this paper, we propose a novel Bayesian transfer learning method named ``CONCERT'' to allow robust partial information transfer for high-dimensional data analysis. A conditional spike-and-slab prior is introduced in the joint distribution of target and source parameters for information transfer. By incorporating covariate-specific priors, we can characterize partial similarities and integrate source information collaboratively to improve the performance on the target. In contrast to existing work, CONCERT is a one-step procedure which achieves variable selection and information transfer simultaneously. We establish variable selection consistency, as well as estimation and prediction error bounds for CONCERT. Our theory demonstrates the covariate-specific benefit of transfer learning. To ensure the scalability of the algorithm, we adopt the variational Bayes framework to facilitate implementation. Extensive experiments and two real data applications showcase the validity and advantages of CONCERT over existing cutting-edge transfer learning methods.
- oai:arXiv.org:2404.03764v3
- cs.LG
+ Identifying Treatment and Spillover Effects Using Exposure Contrasts
+ https://arxiv.org/abs/2403.08183
+ arXiv:2403.08183v4 Announce Type: replace-cross
+Abstract: To report spillover effects, a common practice is to regress outcomes on statistics summarizing neighbors' treatments. This paper studies nonparametric analogs of these estimands, which we refer to as exposure contrasts. We demonstrate that a contrast may have the opposite sign of the unit-level effects of interest even under unconfoundedness. We then provide interpretable conditions on interference and the assignment mechanism under which exposure contrasts can be represented as convex averages of the unit-level effects and therefore avoid sign reversals. These conditions encompass cluster-randomized trials, network experiments, and observational settings with peer effects in selection into treatment.
+ oai:arXiv.org:2403.08183v4
+ econ.EM
+ stat.ME
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1080/01621459.2025.2591232
- Ruqian Zhang, Yijiao Zhang, Annie Qu, Zhongyi Zhu, Juan Shen
-
-
- Evaluating Model Performance Under Worst-case Subpopulations
- https://arxiv.org/abs/2407.01316
- arXiv:2407.01316v2 Announce Type: replace-cross
-Abstract: The performance of ML models degrades when the training population is different from that seen under operation. Towards assessing distributional robustness, we study the worst-case performance of a model over all subpopulations of a given size, defined with respect to core attributes Z. This notion of robustness can consider arbitrary (continuous) attributes Z, and automatically accounts for complex intersectionality in disadvantaged groups. We develop a scalable yet principled two-stage estimation procedure that can evaluate the robustness of state-of-the-art models. We prove that our procedure enjoys several finite-sample convergence guarantees, including dimension-free convergence. Instead of overly conservative notions based on Rademacher complexities, our evaluation error depends on the dimension of Z only through the out-of-sample error in estimating the performance conditional on Z. On real datasets, we demonstrate that our method certifies the robustness of a model and prevents deployment of unreliable models.
- oai:arXiv.org:2407.01316v2
- cs.LG
- cs.CY
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Mike Li, Daksh Mittal, Hongseok Namkoong, Shangzhou Xia
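
The two-stage idea above admits a simple plug-in sketch: regress the loss on the attributes Z, then average the worst alpha-fraction of the fitted conditional losses, a CVaR-style estimate of the worst subpopulation of size alpha. The regressor choice here is arbitrary, and the sketch omits the paper's finite-sample corrections.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def worst_subpop_loss(losses, Z, alpha=0.3):
    """Average the estimated conditional loss over its worst alpha-tail."""
    cond = GradientBoostingRegressor().fit(Z, losses).predict(Z)
    k = max(1, int(np.ceil(alpha * len(cond))))
    return np.sort(cond)[-k:].mean()

rng = np.random.default_rng(0)
Z = rng.normal(size=(2000, 2))
losses = (Z[:, 0] > 1) * 0.8 + 0.1 + 0.05 * rng.normal(size=2000)
print(round(worst_subpop_loss(losses, Z, alpha=0.2), 3))  # well above the mean loss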
+ Michael P. Leung
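
A minimal sketch of one exposure contrast (generic, not the paper's full nonparametric treatment): compare mean outcomes between units whose fraction of treated neighbors exceeds a threshold and the remaining units.

import numpy as np

def exposure_contrast(Y, A, adj, threshold=0.5):
    """Y: outcomes (n,), A: binary treatments (n,), adj: adjacency (n, n)."""
    deg = adj.sum(axis=1)
    frac = np.divide(adj @ A, deg, out=np.zeros(len(Y)), where=deg > 0)
    exposed = frac > threshold
    return Y[exposed].mean() - Y[~exposed].mean()

rng = np.random.default_rng(0)
adj = (rng.random((200, 200)) < 0.05).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T                 # symmetric network
A = rng.integers(0, 2, 200).astype(float)
Y = (adj @ A) / np.maximum(adj.sum(1), 1) + rng.normal(size=200)
print(round(exposure_contrast(Y, A, adj), 3))            # positive spillover signal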
- Modeling diffusion in networks with communities: a multitype branching process approach
- https://arxiv.org/abs/2408.04456
- arXiv:2408.04456v2 Announce Type: replace-cross
-Abstract: The dynamics of diffusion in complex networks are widely studied to understand how entities, such as information, diseases, or behaviors, spread in an interconnected environment. Complex networks often present community structure, and tools to analyze diffusion processes on networks with communities are needed. In this paper, we develop theoretical tools using multi-type branching processes to model and analyze diffusion processes, following a simple contagion mechanism, across a broad class of networks with community structure. We show how, by using limited information about the network -- the degree distribution within and between communities -- we can calculate standard statistical characteristics of propagation dynamics, such as the extinction probability, hazard function, and cascade size distribution. These properties can be estimated not only for the entire network but also for each community separately.
- Furthermore, we estimate the probability of spread crossing from one community to another where it is not currently spreading. We demonstrate the accuracy of our framework by applying it to two specific examples: the Stochastic Block Model and a log-normal network with community structure. We show how the initial seeding location affects the observed cascade size distribution on a heavy-tailed network and that our framework accurately captures this effect.
- oai:arXiv.org:2408.04456v2
- physics.soc-ph
- stat.OT
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- 10.1103/PhysRevE.111.034310
- Alina Dubovskaya, Caroline B. Pena, David J. P. O'Sullivan
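
The extinction-probability computation central to such frameworks reduces to a fixed point of the offspring generating function. A sketch for a two-type (two-community) process, assuming Poisson offspring counts with mean matrix M; this is a generic computation in the spirit of the framework, not the paper's code.

import numpy as np

def extinction_probs(M, iters=1000):
    """q_i = P(extinction | one initial type-i individual); iterate q = f(q)
    with f_i(q) = exp(sum_j M[i, j] * (q_j - 1)) for Poisson offspring."""
    q = np.zeros(M.shape[0])
    for _ in range(iters):
        q = np.exp(M @ (q - 1.0))          # converges to the minimal fixed point
    return q

M = np.array([[1.6, 0.2],                  # mean offspring within / between
              [0.2, 1.6]])                 # two communities
print(extinction_probs(M).round(3))        # < 1: the process is supercritical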
-
-
- A Data Envelopment Analysis Approach for Assessing Fairness in Resource Allocation: Application to Kidney Exchange Programs
- https://arxiv.org/abs/2410.02799
- arXiv:2410.02799v2 Announce Type: replace-cross
-Abstract: Kidney exchange programs have substantially increased transplantation rates but also raise critical concerns about fairness in organ allocation. We propose a novel framework leveraging Data Envelopment Analysis (DEA) to evaluate multiple dimensions of fairness-Priority, Access, and Outcome-within a unified model. This approach captures complexities often missed in single-metric analyses. Using data from the United Network for Organ Sharing, we separately quantify fairness across these dimensions: Priority fairness through waitlist durations, Access fairness via the Living Kidney Donor Profile Index (LKDPI) scores, and Outcome fairness based on graft lifespan. We then apply our conditional DEA model with covariate adjustment to demonstrate significant disparities in kidney allocation efficiency across ethnic groups. To quantify uncertainty, we employ conformal prediction within a novel Reference Frontier Mapping (RFM) framework, yielding group-conditional prediction intervals with finite-sample coverage guarantees. Our findings show notable differences in efficiency distributions between ethnic groups. Our study provides a rigorous framework for evaluating fairness in complex resource allocation systems with resource scarcity and mutual compatibility constraints.
- oai:arXiv.org:2410.02799v2
- cs.CY
- cs.LG
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Ali Kaazempur-Mofrad, Xiaowu Dai
-
-
- Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning
- https://arxiv.org/abs/2410.09101
- arXiv:2410.09101v2 Announce Type: replace-cross
-Abstract: Dataset ownership verification, the process of determining if a dataset is used in a model's training data, is necessary for detecting unauthorized data usage and data contamination. Existing approaches, such as backdoor watermarking, rely on inducing a detectable behavior into the trained model on a part of the data distribution. However, these approaches have limitations, as they can be harmful to the model's performance or require impractical access to the model's internals. Most importantly, previous approaches lack guarantees against false positives. This paper introduces data taggants, a novel non-backdoor dataset ownership verification technique. Our method uses pairs of out-of-distribution samples and random labels as secret keys, and leverages clean-label targeted data poisoning to subtly alter a dataset, so that models trained on it respond to the key samples with the corresponding key labels. The keys are built so as to allow for statistical certificates with black-box access only to the model. We validate our approach through comprehensive and realistic experiments on ImageNet1k using ViT and ResNet models with state-of-the-art training recipes. Our findings demonstrate that data taggants can reliably detect models trained on the protected dataset with high confidence, without compromising validation accuracy, and show their superiority over backdoor watermarking. We demonstrate the stealthiness and robustness of our method against various defense mechanisms.
- oai:arXiv.org:2410.09101v2
- cs.CR
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Wassim Bouaziz, Nicolas Usunier, El-Mahdi El-Mhamdi
-
-
- FIT-GNN: Faster Inference Time for GNNs that 'FIT' in Memory Using Coarsening
- https://arxiv.org/abs/2410.15001
- arXiv:2410.15001v4 Announce Type: replace-cross
-Abstract: Scalability of Graph Neural Networks (GNNs) remains a significant challenge. To tackle this, methods like coarsening, condensation, and computation trees are used to train on a smaller graph, resulting in faster computation. Nonetheless, prior research has not adequately addressed the computational costs during the inference phase. This paper presents a novel approach to improve the scalability of GNNs by reducing computational burden during the inference phase using graph coarsening. We demonstrate two different methods -- Extra Nodes and Cluster Nodes. Our study extends the application of graph coarsening for graph-level tasks, including graph classification and graph regression. We conduct extensive experiments on multiple benchmark datasets to evaluate the performance of our approach. Our results show that the proposed method achieves orders of magnitude improvements in single-node inference time compared to traditional approaches. Furthermore, it significantly reduces memory consumption for node and graph classification and regression tasks, enabling efficient training and inference on low-resource devices where conventional methods are impractical. Notably, these computational advantages are achieved while maintaining competitive performance relative to baseline models.
- oai:arXiv.org:2410.15001v4
- cs.LG
+ Explosive neural networks via higher-order interactions in curved statistical manifolds
+ https://arxiv.org/abs/2408.02326
+ arXiv:2408.02326v3 Announce Type: replace-cross
+Abstract: Higher-order interactions underlie complex phenomena in systems such as biological and artificial neural networks, but their study is challenging due to the scarcity of tractable models. By leveraging a generalisation of the maximum entropy principle, we introduce curved neural networks as a class of models with a limited number of parameters that are particularly well-suited for studying higher-order phenomena. Through exact mean-field descriptions, we show that these curved neural networks implement a self-regulating annealing process that can accelerate memory retrieval, leading to explosive order-disorder phase transitions with multi-stability and hysteresis effects. Moreover, by analytically exploring their memory-retrieval capacity using the replica trick, we demonstrate that these networks can enhance memory capacity and robustness of retrieval over classical associative-memory networks. Overall, the proposed framework provides parsimonious models amenable to analytical study, revealing higher-order phenomena in complex networks.
+ oai:arXiv.org:2408.02326v3
+ cond-mat.dis-nn
+ cond-mat.stat-mech
+ cs.IT
+ math.IT
+ nlin.AO
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Shubhajit Roy, Hrriday Ruparel, Kishan Ved, Anirban Dasgupta
+ 10.1038/s41467-025-61475-w
+ Aguilera, M., Morales, P.A., Rosas, F.E. et al. Explosive neural networks via higher-order interactions in curved statistical manifolds. Nature Communications 16, 6511 (2025)
+ Miguel Aguilera, Pablo A. Morales, Fernando E. Rosas, Hideaki Shimazaki
- Flatten Graphs as Sequences: Transformers are Scalable Graph Generators
- https://arxiv.org/abs/2502.02216
- arXiv:2502.02216v3 Announce Type: replace-cross
-Abstract: We introduce AutoGraph, a scalable autoregressive model for attributed graph generation using decoder-only transformers. By flattening graphs into random sequences of tokens through a reversible process, AutoGraph enables modeling graphs as sequences without relying on additional node features that are expensive to compute, in contrast to diffusion-based approaches. This results in sampling complexity and sequence lengths that scale optimally linearly with the number of edges, making it scalable and efficient for large, sparse graphs. A key success factor of AutoGraph is that its sequence prefixes represent induced subgraphs, creating a direct link to sub-sentences in language modeling. Empirically, AutoGraph achieves state-of-the-art performance on synthetic and molecular benchmarks, with up to 100x faster generation and 3x faster training than leading diffusion models. It also supports substructure-conditioned generation without fine-tuning and shows promising transferability, bridging language modeling and graph generation to lay the groundwork for graph foundation models. Our code is available at https://github.com/BorgwardtLab/AutoGraph.
- oai:arXiv.org:2502.02216v3
+ GLL: A Differentiable Graph Learning Layer for Neural Networks
+ https://arxiv.org/abs/2412.08016
+ arXiv:2412.08016v2 Announce Type: replace-cross
+Abstract: Standard deep learning architectures used for classification generate label predictions with a projection head and softmax activation function. Although successful, these methods fail to leverage the relational information between samples for generating label predictions. In recent works, graph-based learning techniques, namely Laplace learning, have been heuristically combined with neural networks for both supervised and semi-supervised learning (SSL) tasks. However, prior works approximate the gradient of the loss function with respect to the graph learning algorithm or decouple the processes; end-to-end integration with neural networks is not achieved. In this work, we derive backpropagation equations, via the adjoint method, for inclusion of a general family of graph learning layers into a neural network. The resulting method, distinct from graph neural networks, allows us to precisely integrate similarity graph construction and graph Laplacian-based label propagation into a neural network layer, replacing a projection head and softmax activation function for general classification tasks. Our experimental results demonstrate smooth label transitions across data, improved generalization and robustness to adversarial attacks, and improved training dynamics compared to a standard softmax-based approach.
+ oai:arXiv.org:2412.08016v2
+ cs.LG
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Dexiong Chen, Markus Krimmel, Karsten Borgwardt
+ Jason Brown, Bohan Chen, Harris Hardiman-Mostow, Jeff Calder, Andrea L. Bertozzi
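
The Laplace-learning step that the layer builds on can be sketched standalone; this is the classical propagation solve, not the paper's differentiable layer or its adjoint backpropagation.

import numpy as np

def laplace_learning(W, Y_l, labeled_idx):
    """W: symmetric similarity matrix (n, n); Y_l: one-hot labels (m, k),
    rows ordered like labeled_idx. Solves L[u,u] F_u = -L[u,l] Y_l for the
    label scores on unlabeled nodes."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                       # graph Laplacian
    u = np.setdiff1d(np.arange(n), labeled_idx)
    F_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled_idx)] @ Y_l)
    return u, F_u.argmax(axis=1)                         # class per unlabeled node

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, .3, (30, 2)), rng.normal(2, .3, (30, 2))])
W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(laplace_learning(W, np.eye(2), np.array([0, 30]))[1])  # cluster 0s then 1s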
- Stein Discrepancy for Unsupervised Domain Adaptation
- https://arxiv.org/abs/2502.03587
- arXiv:2502.03587v4 Announce Type: replace-cross
-Abstract: Unsupervised domain adaptation (UDA) aims to improve model performance on an unlabeled target domain using a related, labeled source domain. A common approach aligns source and target feature distributions by minimizing a distance between them, often using symmetric measures such as maximum mean discrepancy (MMD). However, these methods struggle when target data is scarce. We propose a novel UDA framework that leverages Stein discrepancy, an asymmetric measure that depends on the target distribution only through its score function, making it particularly suitable for low-data target regimes. Our proposed method has kernelized and adversarial forms and supports flexible modeling of the target distribution via Gaussian, GMM, or VAE models. We derive a generalization bound on the target error and a convergence rate for the empirical Stein discrepancy in the two-sample setting. Empirically, our method consistently outperforms prior UDA approaches under limited target data across multiple benchmarks.
- oai:arXiv.org:2502.03587v4
+ Flow-based Conformal Prediction for Multi-dimensional Time Series
+ https://arxiv.org/abs/2502.05709
+ arXiv:2502.05709v2 Announce Type: replace-cross
+Abstract: Time series prediction underpins a broad range of downstream tasks across many scientific domains. Recent advances and increasing adoption of black-box machine learning models for time series prediction highlight the critical need for reliable uncertainty quantification. While conformal prediction has gained attention as a reliable uncertainty quantification method, conformal prediction for time series faces two key challenges: (1) adaptively leveraging correlations in features and non-conformity scores to overcome the exchangeability assumption, and (2) constructing prediction sets for multi-dimensional outcomes. To address these challenges jointly, we propose a novel conformal prediction method for time series using flow with classifier-free guidance. We provide coverage guarantees by establishing exact non-asymptotic marginal coverage and a finite-sample bound on conditional coverage for the proposed method. Evaluations on real-world time series datasets demonstrate that our method constructs significantly smaller prediction sets than existing conformal prediction methods while maintaining target coverage.
+ oai:arXiv.org:2502.05709v2
+ cs.LG
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Anneke von Seeger, Dongmian Zou, Gilad Lerman
-
-
- Recorded Versus Synthetic Spectral-compatible Ground Motions: A Comparative Analysis of Structural Seismic Responses
- https://arxiv.org/abs/2502.19549
- arXiv:2502.19549v2 Announce Type: replace-cross
-Abstract: This paper presents a comparative analysis of structural seismic responses under two types of ground motion inputs: (i) synthetic motions generated by stochastic spectral-compatible ground motion models and (ii) recorded motions from an earthquake database. Both ground motion datasets are calibrated to a shared target response spectrum to ensure consistent spectral median, variance, and correlation structure. Five key stochastic response metrics-probability distributions, statistical moments, correlations, tail indices, and variance-based global sensitivity indices-are systematically evaluated for two representative structures: a medium-period building and a limiting case of a long-period tower. The comparison accounts for uncertainties both from ground motion and structural parameters. The results reveal that synthetic motions closely replicate recorded motions in terms of global response behavior-including distributions, mean and variance, correlation structure, and dominant uncertainty sources-indicating their suitability for routine seismic design and parametric studies. However, substantial differences emerge in response extremes for long-period structures, particularly in metrics governed by rare events, such as higher-order moments and tail behavior. These differences, which often exceed 50%, can be attributed to the non-Gaussian features and complex characteristics inherent in recorded motions, which are less pronounced in synthetic datasets. The findings support the use of synthetic ground motions for evaluating global seismic response characteristics, while highlighting their limitations in capturing rare-event behavior and long-period structural dynamics.
- oai:arXiv.org:2502.19549v2
- physics.geo-ph
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
- Jungho Kim, Maijia Su, Ziqi Wang, Marco Broccardo
+ Junghwan Lee, Chen Xu, Yao Xie
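
As a point of reference, the plain split-conformal baseline that such methods improve on looks as follows; this is the textbook construction, not the paper's flow-based method.

import numpy as np

def split_conformal(resid_cal, y_pred, alpha=0.1):
    """resid_cal: |y - yhat| on a held-out calibration set; y_pred: new point
    predictions. Returns intervals with finite-sample marginal coverage
    under exchangeability."""
    n = len(resid_cal)
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(resid_cal, level, method="higher")
    return y_pred - q, y_pred + q

rng = np.random.default_rng(0)
resid = np.abs(rng.normal(size=500))
lo, hi = split_conformal(resid, y_pred=np.array([1.2, -0.4]), alpha=0.1)
print(lo.round(2), hi.round(2))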
- Beyond Markovian: Reflective Exploration via Bayes-Adaptive RL for LLM Reasoning
- https://arxiv.org/abs/2505.20561
- arXiv:2505.20561v2 Announce Type: replace-cross
-Abstract: Large Language Models (LLMs) trained via Reinforcement Learning (RL) have exhibited strong reasoning capabilities and emergent reflective behaviors, such as rethinking and error correction, as a form of in-context exploration. However, the Markovian policy obtained from conventional RL training does not give rise to reflective exploration behaviors since the policy depends on the history only through the state and therefore has no incentive to enrich identical states with additional context. Instead, RL exploration is only useful during training to learn the optimal policy in a trial-and-error manner. Therefore, it remains unclear whether reflective reasoning will emerge during RL, or why it is beneficial. To remedy this, we recast reflective exploration within a Bayesian RL framework, which optimizes the expected return under a posterior distribution over Markov decision processes induced by the training data. This Bayesian formulation admits uncertainty-adaptive policies that, through belief updates, naturally incentivize information-gathering actions and induce self-reflection behaviors. Our resulting algorithm, BARL, instructs the LLM to stitch and switch strategies based on the observed outcomes, offering principled guidance on when and how the model should reflectively explore. Empirical results on both synthetic and mathematical reasoning tasks demonstrate that BARL outperforms conventional RL approaches, achieving superior test-time performance and token efficiency. Our code is available at https://github.com/shenao-zhang/BARL.
- oai:arXiv.org:2505.20561v2
+ Proper Learnability and the Role of Unlabeled Data
+ https://arxiv.org/abs/2502.10359
+ arXiv:2502.10359v2 Announce Type: replace-cross
+Abstract: Proper learning refers to the setting in which learners must emit predictors in the underlying hypothesis class $H$, and often leads to learners with simple algorithmic forms (e.g. empirical risk minimization (ERM), structural risk minimization (SRM)). The limitation of proper learning, however, is that there exist problems which can only be learned improperly, e.g. in multiclass classification. Thus, we ask: Under what assumptions on the hypothesis class or the information provided to the learner is a problem properly learnable? We first demonstrate that when the unlabeled data distribution is given, there always exists an optimal proper learner governed by distributional regularization, a randomized generalization of regularization. We refer to this setting as the distribution-fixed PAC model, and continue to evaluate the learner on its worst-case performance over all distributions. Our result holds for all metric loss functions and any finite learning problem (with no dependence on its size). Further, we demonstrate that sample complexities in the distribution-fixed PAC model can shrink by only a logarithmic factor from the classic PAC model, strongly refuting the role of unlabeled data in PAC learning (from a worst-case perspective).
+ We complement this with impossibility results which obstruct any characterization of proper learnability in the realizable PAC model. First, we observe that there are problems whose proper learnability is logically undecidable, i.e., independent of the ZFC axioms. We then show that proper learnability is not a monotone property of the underlying hypothesis class, and that it is not a local property (in a precise sense). Our impossibility results all hold even for the fundamental setting of multiclass classification, and go through a reduction of EMX learning (Ben-David et al., 2019) to proper classification which may be of independent interest.
+ oai:arXiv.org:2502.10359v2
+ cs.LG
- cs.AI
- cs.CL
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Shenao Zhang, Yaqing Wang, Yinxiao Liu, Tianqi Liu, Peter Grabowski, Eugene Ie, Zhaoran Wang, Yunxuan Li
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Julian Asilis, Siddartha Devic, Shaddin Dughmi, Vatsal Sharan, Shang-Hua Teng
- Learning where to learn: Training data distribution optimization for scientific machine learning
- https://arxiv.org/abs/2505.21626
- arXiv:2505.21626v3 Announce Type: replace-cross
-Abstract: In scientific machine learning, models are routinely deployed with parameter values or boundary conditions far from those used in training. This paper studies the learning-where-to-learn problem of designing a training data distribution that minimizes average prediction error across a family of deployment regimes. A theoretical analysis shows how the training distribution shapes deployment accuracy. This motivates two adaptive algorithms based on bilevel or alternating optimization in the space of probability measures. Discretized implementations using parametric distribution classes or nonparametric particle-based gradient flows deliver optimized training distributions that outperform nonadaptive designs. Once trained, the resulting models exhibit improved sample complexity and robustness to distribution shift. This framework unlocks the potential of principled data acquisition for learning functions and solution operators of partial differential equations.
- oai:arXiv.org:2505.21626v3
+ Representation Retrieval Learning for Heterogeneous Data Integration
+ https://arxiv.org/abs/2503.09494
+ arXiv:2503.09494v3 Announce Type: replace-cross
+Abstract: In the era of big data, large-scale, multi-source, multi-modality datasets are increasingly ubiquitous, offering unprecedented opportunities for predictive modeling and scientific discovery. However, these datasets often exhibit complex heterogeneity, such as covariate shift, posterior drift, and blockwise missingness, which worsen the predictive performance of existing supervised learning algorithms. To address these challenges simultaneously, we propose a novel Representation Retrieval (R2) framework, which integrates a dictionary of representation learning modules (representer dictionary) with data-source-specific sparsity-induced machine learning models (learners). Under the R2 framework, we introduce the notion of integrativeness for each representer, and propose a novel Selective Integration Penalty (SIP) to explicitly encourage more integrative representers to improve predictive performance. Theoretically, we show that the excess risk bound of the R2 framework is characterized by the integrativeness of representers, and that SIP effectively improves the excess risk. Extensive simulation studies validate the superior performance of the R2 framework and the effect of SIP. We further apply our method to two real-world datasets to confirm its empirical success.
+ oai:arXiv.org:2503.09494v3
+ cs.LG
- math.OC
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ stat.ME
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Nicolas Guerra, Nicholas H. Nelsen, Yunan Yang
-
-
- LLMs Judging LLMs: A Simplex Perspective
- https://arxiv.org/abs/2505.21972
- arXiv:2505.21972v2 Announce Type: replace-cross
-Abstract: Given the challenge of automatically evaluating free-form outputs from large language models (LLMs), an increasingly common solution is to use LLMs themselves as the judging mechanism, without any gold-standard scores. Implicitly, this practice accounts for only sampling variability (aleatoric uncertainty) and ignores uncertainty about judge quality (epistemic uncertainty). While this is justified if judges are perfectly accurate, it is unclear when such an approach is theoretically valid and practically robust. We study these questions for the task of ranking LLM candidates from a novel geometric perspective: for $M$-level scoring systems, both LLM judges and candidates can be represented as points on an $(M-1)$-dimensional probability simplex, where geometric concepts (e.g., triangle areas) correspond to key ranking concepts. This perspective yields intuitive theoretical conditions and visual proofs for when rankings are identifiable; for instance, we provide a formal basis for the ``folk wisdom'' that LLM judges are more effective for two-level scoring ($M=2$) than multi-level scoring ($M>2$). Leveraging the simplex, we design geometric Bayesian priors that encode epistemic uncertainty about judge quality and vary the priors to conduct sensitivity analyses. Experiments on LLM benchmarks show that rankings based solely on LLM judges are robust in many but not all datasets, underscoring both their widespread success and the need for caution. Our Bayesian method achieves substantially higher coverage rates than existing procedures, highlighting the importance of modeling epistemic uncertainty.
- oai:arXiv.org:2505.21972v2
- cs.LG
- cs.AI
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Patrick Vossler, Fan Xia, Yifan Mai, Adarsh Subbaswamy, Jean Feng
+ Qi Xu, Annie Qu
- Emergent Granger Causality in Neural Networks: Can Prediction Alone Reveal Structure?
- https://arxiv.org/abs/2506.20347
- arXiv:2506.20347v2 Announce Type: replace-cross
-Abstract: Granger Causality (GC) offers an elegant statistical framework to study the association between multivariate time series data. Vector autoregressive models (VAR) are simple and easy to fit, but have limited application because of their inherent inability to capture more complex (e.g., non-linear) associations. Numerous attempts have already been made in the literature that exploit the functional approximation power of deep neural networks (DNNs) for GC. However, these methods treat GC as a variable selection problem. We present a novel paradigm for investigating the learned GC from a single neural network used for joint modeling of all components of multivariate time series data, which is essentially linked with prediction and assessing the distribution shift in residuals. A deep learning model, with proper regularization, may learn the true GC structure when jointly used for all components of the time series when there is sufficient training data. We propose to uncover the learned GC structure by comparing the model uncertainty or distribution of the residuals when the past of everything is used as compared to the one where a specific time series component is dropped from the model. We also compare the effect of input layer dropout on the ability of a neural network to learn GC. We show that a well-regularized model can learn the true GC structure from the data without explicitly adding terms in the loss function that guide the model to select variables or perform sparse regression under specific settings. We also provide a comparison of deep learning architectures such as CNN, LSTM and transformer models on their ability to discover Granger Causality. The numerical experiments demonstrate that, compared to sparse regression models, a simple joint model is a strong baseline for learning the true GC which has the advantage that it does not require tuning of many extra hyper-parameters.
- oai:arXiv.org:2506.20347v2
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Malik Shahid Sultan, Hernando Ombao, Maurizio Filippone
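
The 'drop one component' comparison can be sketched with a simple stand-in predictor (ridge regression here rather than a deep network, and held-out error rather than the paper's residual-distribution diagnostics).

import numpy as np
from sklearn.linear_model import Ridge

def gc_gap(X, target, candidate, lag=2):
    """X: (T, d) multivariate series. Returns heldout-MSE(without candidate's
    lags) - MSE(with); a clearly positive gap suggests Granger causality."""
    T, d = X.shape
    feats = np.hstack([X[t:T - lag + t] for t in range(lag)])  # lagged stack
    y = X[lag:, target]
    split = len(y) // 2
    keep = [t * d + j for t in range(lag) for j in range(d) if j != candidate]
    def mse(cols):
        m = Ridge().fit(feats[:split][:, cols], y[:split])
        return np.mean((m.predict(feats[split:][:, cols]) - y[split:]) ** 2)
    return mse(keep) - mse(list(range(d * lag)))

rng = np.random.default_rng(0)
e = rng.normal(size=(600, 2)); X = e.copy()
for t in range(1, 600):                    # series 1 drives series 0 with one lag
    X[t, 0] = 0.5 * X[t - 1, 0] + 0.8 * X[t - 1, 1] + e[t, 0]
print(gc_gap(X, target=0, candidate=1) > 0)  # True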
-
-
- Stochastic Approximation with Block Coordinate Optimal Stepsizes
- https://arxiv.org/abs/2507.08963
- arXiv:2507.08963v2 Announce Type: replace-cross
-Abstract: We consider stochastic approximation with block-coordinate stepsizes and propose adaptive stepsize rules that aim to minimize the expected distance from the next iterate to an (unknown) target point. These stepsize rules employ online estimates of the second moment of the search direction along each block coordinate. The popular Adam algorithm can be interpreted as a variant with a specific estimator. By leveraging a simple conditional estimator, we derive a new method that obtains competitive performance against Adam but requires less memory and fewer hyper-parameters. We prove that this family of methods converges almost surely to a small neighborhood of the target point, and the radius of the neighborhood depends on the bias and variance of the second-moment estimator. Our analysis relies on a simple aiming condition that assumes neither convexity nor smoothness, thus has broad applicability.
- oai:arXiv.org:2507.08963v2
- math.OC
+ Rethinking Few-Shot Image Fusion: Granular Ball Priors Enable General-Purpose Deep Fusion
+ https://arxiv.org/abs/2504.08937
+ arXiv:2504.08937v4 Announce Type: replace-cross
+Abstract: In image fusion tasks, the absence of real fused images as priors forces most deep learning approaches to rely on large-scale paired datasets to extract global weighting features or to generate pseudo-supervised images through algorithmic constructions. Unlike previous methods, this work re-examines prior-guided learning under few-shot conditions by introducing rough set theory. We regard the traditional algorithm as a prior generator, while the network re-infers and adaptively optimizes the prior through a dynamic loss function, reducing the inference burden of the network and enabling effective few-shot learning. To provide the prior, we propose the Granular Ball Pixel Computation (GBPC) algorithm. GBPC models pixel pairs in a luminance subspace using meta-granular balls and mines intra-ball information at multiple granular levels. At the fine-grained level, sliding granular balls assign adaptive weights to individual pixels to produce pixel-level prior fusion. At the coarse-grained level, the algorithm performs split computation within a single image to estimate positive and boundary domain distributions, enabling modality awareness and prior confidence estimation, which dynamically guide the loss weighting. The network and the algorithmic prior are coupled through the loss function to form an integrated framework. Thanks to the dynamic weighting mechanism, the network can adaptively adjust to different priors during training, enhancing its perception and fusion capability across modalities. We name this framework GBFF (Granular Ball Fusion Framework). Experiments on four fusion tasks demonstrate that even with only ten training image pairs per task, GBFF achieves superior performance in both visual quality and model compactness. Code is available at: https://github.com/DMinjie/GBFF
+ oai:arXiv.org:2504.08937v4
+ cs.GR
+ cs.CV
+ cs.LG
+ eess.IV
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Tao Jiang, Lin Xiao
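
A generic block-coordinate variant of the Adam-style update described above can be sketched as follows, with one running second-moment scalar per parameter block; this is an illustration, not the paper's exact estimator or stepsize rule.

import numpy as np

def block_step(params, grads, state, lr=1e-3, beta=0.999, eps=1e-8):
    """params, grads: dicts of arrays keyed by block name; state keeps one
    online second-moment estimate per block, which scales that block's step."""
    for name, g in grads.items():
        v = beta * state.get(name, 0.0) + (1 - beta) * float(np.mean(g * g))
        state[name] = v
        params[name] = params[name] - lr * g / (np.sqrt(v) + eps)
    return params, state

params = {"w": np.ones(3)}
params, st = block_step(params, {"w": np.array([0.1, -0.2, 0.3])}, {})
print(params["w"])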
+ Minjie Deng, Yan Wei, An Wu, Yuncan Ouyang, Hao Zhai, Qianyao Peng
- A Simple Approximate Bayesian Inference Neural Surrogate for Stochastic Petri Net Models
- https://arxiv.org/abs/2507.10714
- arXiv:2507.10714v2 Announce Type: replace-cross
-Abstract: Stochastic Petri Nets (SPNs) are an increasingly popular tool of choice for modeling discrete-event dynamics in areas such as epidemiology and systems biology, yet their parameter estimation remains challenging in general, and in particular when transition rates depend on external covariates and explicit likelihoods are unavailable. We introduce a neural-surrogate (neural-network-based approximation of the posterior distribution) framework that predicts the coefficients of known covariate-dependent rate functions directly from noisy, partially observed token trajectories. Our model employs a lightweight 1D Convolutional Residual Network trained end-to-end on Gillespie-simulated SPN realizations, learning to invert system dynamics under realistic conditions of event dropout. During inference, Monte Carlo dropout provides calibrated uncertainty bounds together with point estimates. On synthetic SPNs with $10\%$ missing events, our surrogate recovers rate-function coefficients with an RMSE of $0.043$ and runs substantially faster than traditional Bayesian approaches. These results demonstrate that data-driven, likelihood-free surrogates can enable accurate, robust, and real-time parameter recovery in complex, partially observed discrete-event systems.
- oai:arXiv.org:2507.10714v2
- cs.LG
+ SSRCA: a novel machine learning pipeline to perform sensitivity analysis for agent-based models
+ https://arxiv.org/abs/2506.00168
+ arXiv:2506.00168v3 Announce Type: replace-cross
+Abstract: Agent-based models (ABMs) are widely used in biology to understand how individual actions scale into emergent population behavior. Modelers employ sensitivity analysis (SA) algorithms to quantify input parameters' impact on model outputs; however, it is hard to perform SA for ABMs due to their computationally intensive and complex nature. In this work, we develop the Simulate, Summarize, Reduce, Cluster, and Analyze (SSRCA) methodology, a machine-learning-based pipeline designed to facilitate SA for ABMs. In particular, SSRCA can achieve the following tasks for ABMs: 1) identify sensitive model parameters, 2) reveal common output model patterns, and 3) determine which input parameter values generate these patterns. We use an example ABM of tumor spheroid growth to showcase how SSRCA identifies four common patterns from the ABM and the parameter regions that generate these outputs. Additionally, we compare the SA results between SSRCA and the popular Sobol' Method and find that SSRCA's identified sensitive parameters are robust to the choice of model descriptors while Sobol's are not. This analysis could streamline data-driven tasks, such as parameter estimation, for ABMs by reducing parameter space. While we highlight these results with an ABM on tumor spheroid formation, the SSRCA methodology is broadly applicable to biological ABMs.
+ oai:arXiv.org:2506.00168v3
+ q-bio.QM
+ q-bio.CB
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Bright Kwaku Manu, Trevor Reckell, Beckett Sterner, Petar Jevtic
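
The inference-time uncertainty mechanism, Monte Carlo dropout, is generic and easy to sketch; `stochastic_forward` below is a hypothetical callable standing in for a network pass that keeps dropout active at prediction time.

import numpy as np

def mc_dropout_predict(stochastic_forward, x, n_samples=100):
    """Repeat stochastic forward passes; report the mean as the point
    estimate and a simple normal-approximation 95% band from the spread."""
    draws = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    mean, std = draws.mean(axis=0), draws.std(axis=0)
    return mean, (mean - 1.96 * std, mean + 1.96 * std)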
-
-
- Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety
- https://arxiv.org/abs/2507.11473
- arXiv:2507.11473v2 Announce Type: replace-cross
-Abstract: AI systems that "think" in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave. Like all other known AI oversight methods, CoT monitoring is imperfect and allows some misbehavior to go unnoticed. Nevertheless, it shows promise and we recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods. Because CoT monitorability may be fragile, we recommend that frontier model developers consider the impact of development decisions on CoT monitorability.
- oai:arXiv.org:2507.11473v2
- cs.AI
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Tomek Korbak, Mikita Balesni, Elizabeth Barnes, Yoshua Bengio, Joe Benton, Joseph Bloom, Mark Chen, Alan Cooney, Allan Dafoe, Anca Dragan, Scott Emmons, Owain Evans, David Farhi, Ryan Greenblatt, Dan Hendrycks, Marius Hobbhahn, Evan Hubinger, Geoffrey Irving, Erik Jenner, Daniel Kokotajlo, Victoria Krakovna, Shane Legg, David Lindner, David Luan, Aleksander M\k{a}dry, Julian Michael, Neel Nanda, Dave Orr, Jakub Pachocki, Ethan Perez, Mary Phuong, Fabien Roger, Joshua Saxe, Buck Shlegeris, Mart\'in Soto, Eric Steinberger, Jasmine Wang, Wojciech Zaremba, Bowen Baker, Rohin Shah, Vlad Mikulik
-
-
- Order-Flow Filtration and Directional Association with Short-Horizon Returns
- https://arxiv.org/abs/2507.22712
- arXiv:2507.22712v2 Announce Type: replace-cross
-Abstract: Electronic markets generate dense order flow with many transient orders, which degrade directional signals derived from the limit order book (LOB). We study whether simple structural filters on order lifetime, modification count, and modification timing sharpen the association between order book imbalance (OBI) and short-horizon returns in BankNifty index futures, where unfiltered OBI is already known to be a strong short-horizon directional indicator. The efficacy of each filter is evaluated using a three-step diagnostic ladder: contemporaneous correlations, linear association between discretised regimes, and Hawkes event-time excitation between OBI and return regimes. Our results indicate that filtration of the aggregate order flow produces only modest changes relative to the unfiltered benchmark. By contrast, when filters are applied on the parent orders of executed trades, the resulting OBI series exhibits systematically stronger directional association. Motivated by recent regulatory initiatives to curb noisy order flow, we treat the association between OBI and short-horizon returns as a policy-relevant diagnostic of market quality. We then compare unfiltered and filtered OBI series, using tick-by-tick data from the National Stock Exchange of India, to infer how structural filters on the order flow affect OBI-return dynamics in an emerging market setting.
- oai:arXiv.org:2507.22712v2
- q-fin.TR
- q-fin.CP
- q-fin.GN
- q-fin.ST
- stat.ME
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Aditya Nittur Anantha, Shashi Jain, Prithwish Maiti
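
The unfiltered quantity under study, top-of-book order book imbalance, has a standard one-line definition; the paper's structural filters then restrict which resting orders enter the bid and ask quantities.

def order_book_imbalance(bid_qty, ask_qty):
    """Imbalance in [-1, 1]: values near +1 indicate buy-side pressure,
    values near -1 sell-side pressure."""
    return (bid_qty - ask_qty) / (bid_qty + ask_qty)

print(order_book_imbalance(bid_qty=1200, ask_qty=400))  # 0.5: buy pressure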
+ Edward H. Rohr, John T. Nardini
- Density Operator Expectation Maximization
- https://arxiv.org/abs/2507.22786
- arXiv:2507.22786v2 Announce Type: replace-cross
-Abstract: Machine learning with density operators, the mathematical foundation of quantum mechanics, is gaining prominence with rapid advances in quantum computing. Generative models based on density operators cannot yet handle tasks that are routinely handled by probabilistic models. The progress of latent variable models, a broad and influential class of probabilistic unsupervised models, was driven by the Expectation-Maximization framework. Deriving such a framework for density operators is challenging due to the non-commutativity of operators. To tackle this challenge, an inequality arising from the monotonicity of relative entropy is demonstrated to serve as an evidence lower bound for density operators. A minorant-maximization perspective on this bound leads to Density Operator Expectation Maximization (DO-EM), a general framework for training latent variable models defined through density operators. Through an information-geometric argument, the Expectation step in DO-EM is shown to be the Petz recovery map. The DO-EM algorithm is applied to Quantum Restricted Boltzmann Machines, adapting Contrastive Divergence to approximate the Maximization step gradient. Quantum interleaved Deep Boltzmann Machines and Quantum Gaussian-Bernoulli Restricted Boltzmann Machines, new models introduced in this work, outperform their probabilistic counterparts on generative tasks when trained with similar computational resources and identical hyperparameters.
- oai:arXiv.org:2507.22786v2
- cs.LG
- quant-ph
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Adit Vishnu, Abhay Shastry, Dhruva Kashyap, Chiranjib Bhattacharyya
-
-
- Efficient Approximate Posterior Sampling with Annealed Langevin Monte Carlo
- https://arxiv.org/abs/2508.07631
- arXiv:2508.07631v3 Announce Type: replace-cross
-Abstract: We study the problem of posterior sampling in the context of score based generative models. We have a trained score network for a prior $p(x)$, a measurement model $p(y|x)$, and are tasked with sampling from the posterior $p(x|y)$. Prior work has shown this to be intractable in KL (in the worst case) under well-accepted computational hardness assumptions. Despite this, popular algorithms for tasks such as image super-resolution, stylization, and reconstruction enjoy empirical success. Rather than establishing distributional assumptions or restricted settings under which exact posterior sampling is tractable, we view this as a more general "tilting" problem of biasing a distribution towards a measurement. Under minimal assumptions, we show that one can tractably sample from a distribution that is simultaneously close to the posterior of a noised prior in KL divergence and the true posterior in Fisher divergence. Intuitively, this combination ensures that the resulting sample is consistent with both the measurement and the prior. To the best of our knowledge these are the first formal results for (approximate) posterior sampling in polynomial time.
- oai:arXiv.org:2508.07631v3
+ Knowledge Adaptation as Posterior Correction
+ https://arxiv.org/abs/2506.14262
+ arXiv:2506.14262v3 Announce Type: replace-cross
+Abstract: Adaptation is the holy grail of intelligence, but even the best AI models lack the adaptability of toddlers. In spite of great progress, little is known about the mechanisms by which machines can learn to adapt as fast as humans and animals. Here, we cast adaptation as `correction' of old posteriors and show that a wide variety of existing adaptation methods follow this very principle, including those used for continual learning, federated learning, unlearning, and model merging. In all these settings, more accurate posteriors often lead to smaller corrections and can enable faster adaptation. Posterior correction is derived by using the dual representation of the Bayesian Learning Rule of Khan and Rue (2023), where the interference between the old representation and new information is quantified by using the natural-gradient mismatch. We present many examples demonstrating how machines can learn to adapt quickly by using posterior correction.
+ oai:arXiv.org:2506.14262v3
+ cs.LG
+ cs.AI
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Advait Parulekar, Litu Rout, Karthikeyan Shanmugam, Sanjay Shakkottai
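
The general mechanism, Langevin dynamics whose drift adds a measurement term to the prior score, can be sketched as follows (unadjusted Langevin on a tilted target; an illustration of the setup, not the paper's algorithm or its guarantees).

import numpy as np

def guided_langevin(score, grad_log_lik, x0, step=1e-3, n_steps=2000, seed=0):
    """x <- x + step * (score(x) + grad log p(y|x)) + sqrt(2*step) * noise."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x + step * (score(x) + grad_log_lik(x)) \
              + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Gaussian prior N(0,1) and Gaussian measurement y = x + noise with y = 2.0:
sample = guided_langevin(score=lambda x: -x,
                         grad_log_lik=lambda x: 2.0 - x,
                         x0=np.zeros(1))
print(sample)  # fluctuates around the posterior mean 1.0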
+ http://creativecommons.org/licenses/by/4.0/
+ Mohammad Emtiyaz Khan
- Multi-head Transformers Provably Learn Symbolic Multi-step Reasoning via Gradient Descent
- https://arxiv.org/abs/2508.08222
- arXiv:2508.08222v2 Announce Type: replace-cross
-Abstract: Transformers have demonstrated remarkable capabilities in multi-step reasoning tasks. However, understanding of the underlying mechanisms by which they acquire these abilities through training remains limited, particularly from a theoretical standpoint. This work investigates how transformers learn to solve symbolic multi-step reasoning problems through chain-of-thought processes, focusing on path-finding in trees. We analyze two intertwined tasks: a backward reasoning task, where the model outputs a path from a goal node to the root, and a more complex forward reasoning task, where the model implements two-stage reasoning by first identifying the goal-to-root path and then reversing it to produce the root-to-goal path. Our theoretical analysis, grounded in the dynamics of gradient descent, shows that trained one-layer transformers can provably solve both tasks with generalization guarantees to unseen trees. In particular, our multi-phase training dynamics for forward reasoning elucidate how different attention heads learn to specialize and coordinate autonomously to solve the two subtasks in a single autoregressive path. These results provide a mechanistic explanation of how trained transformers can implement sequential algorithmic procedures. Moreover, they offer insights into the emergence of reasoning abilities, suggesting that when tasks are structured to take intermediate chain-of-thought steps, even shallow multi-head transformers can effectively solve problems that would otherwise require deeper architectures.
- oai:arXiv.org:2508.08222v2
+ Elucidated Rolling Diffusion Models for Probabilistic Forecasting of Complex Dynamics
+ https://arxiv.org/abs/2506.20024
+ arXiv:2506.20024v3 Announce Type: replace-cross
+Abstract: Diffusion models are a powerful tool for probabilistic forecasting, yet most applications in high-dimensional complex systems predict future states individually. This approach struggles to model complex temporal dependencies and fails to explicitly account for the progressive growth of uncertainty inherent to the systems. While rolling diffusion frameworks, which apply increasing noise to forecasts at longer lead times, have been proposed to address this, their integration with state-of-the-art, high-fidelity diffusion techniques remains a significant challenge. We tackle this problem by introducing Elucidated Rolling Diffusion Models (ERDM), the first framework to successfully unify a rolling forecast structure with the principled, performant design of Elucidated Diffusion Models (EDM). To do this, we adapt the core EDM components-its noise schedule, network preconditioning, and Heun sampler-to the rolling forecast setting. The success of this integration is driven by three key contributions: (i) a novel loss weighting scheme that focuses model capacity on the mid-range forecast horizons where determinism gives way to stochasticity; (ii) an efficient initialization strategy using a pre-trained EDM for the initial window; and (iii) a bespoke hybrid sequence architecture for robust spatiotemporal feature extraction under progressive denoising. On 2D Navier-Stokes simulations and ERA5 global weather forecasting at 1.5-degree resolution, ERDM consistently outperforms key diffusion-based baselines, including conditional autoregressive EDM. ERDM offers a flexible and powerful general framework for tackling diffusion-based dynamics forecasting problems where modeling uncertainty propagation is paramount.
+ oai:arXiv.org:2506.20024v3
+ cs.LG
+ cs.AI
- cs.IT
- math.IT
- math.OC
+ physics.ao-ph
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Tong Yang, Yu Huang, Yingbin Liang, Yuejie Chi
+ Advances in Neural Information Processing Systems (NeurIPS), 2025
+ Salva R\"uhling Cachay, Miika Aittala, Karsten Kreis, Noah Brenowitz, Arash Vahdat, Morteza Mardani, Rose Yu
- Staying on the Manifold: Geometry-Aware Noise Injection
- https://arxiv.org/abs/2509.20201
- arXiv:2509.20201v2 Announce Type: replace-cross
-Abstract: It has been shown that perturbing the input during training implicitly regularises the gradient of the learnt function, leading to smoother models and enhancing generalisation. However, previous research mostly considered the addition of ambient noise in the input space, without considering the underlying structure of the data. In this work, we propose several strategies of adding geometry-aware input noise that accounts for the lower dimensional manifold the input space inhabits. We start by projecting ambient Gaussian noise onto the tangent space of the manifold. In a second step, the noise sample is mapped on the manifold via the associated geodesic curve. We also consider Brownian motion noise, which moves in random steps along the manifold. We show that geometry-aware noise leads to improved generalisation and robustness to hyperparameter selection on highly curved manifolds, while performing at least as well as training without noise on simpler manifolds. Our proposed framework extends to data manifolds approximated by generative models and we observe similar trends on the MNIST digits dataset.
- oai:arXiv.org:2509.20201v2
- cs.LG
- math.DG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Albert Kj{\o}ller Jacobsen, Johanna Marie Gegenfurtner, Georgios Arvanitidis
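
The first strategy, projecting ambient Gaussian noise onto the tangent space, is easy to sketch on the unit sphere, where the tangent projector at x is I - xx^T; the subsequent geodesic mapping step is omitted here.

import numpy as np

def tangent_noise(x, projector, sigma, rng):
    """Sample ambient Gaussian noise and keep only its tangential component."""
    return x + projector @ (sigma * rng.standard_normal(x.shape))

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0, 1.0])            # point on the unit sphere
P = np.eye(3) - np.outer(x, x)           # tangent-space projector at x
print(tangent_noise(x, P, sigma=0.1, rng=rng))  # perturbed within the plane z = 1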
-
-
- Closed-form $\ell_r$ norm scaling with data for overparameterized linear regression and diagonal linear networks under $\ell_p$ bias
- https://arxiv.org/abs/2509.21181
- arXiv:2509.21181v3 Announce Type: replace-cross
-Abstract: For overparameterized linear regression with isotropic Gaussian design and minimum-$\ell_p$ interpolator, $p\in(1,2]$, we give a unified, high-probability characterization for the scaling of the family of parameter norms $\{ \lVert \widehat{w_p} \rVert_r \}_{r \in [1,p]}$ with sample size.
- We solve this basic, but unresolved question through a simple dual-ray analysis, which reveals a competition between a signal *spike* and a *bulk* of null coordinates in $X^\top Y$, yielding closed-form predictions for (i) a data-dependent transition $n_\star$ (the "elbow"), and (ii) a universal threshold $r_\star=2(p-1)$ that separates $\lVert \widehat{w_p} \rVert_r$'s which plateau from those that continue to grow with an explicit exponent.
- This unified solution resolves the scaling of *all* $\ell_r$ norms within the family $r\in [1,p]$ under $\ell_p$-biased interpolation, and explains in one picture which norms saturate and which increase as $n$ grows.
- We then study diagonal linear networks (DLNs) trained by gradient descent. By calibrating the initialization scale $\alpha$ to an effective $p_{\mathrm{eff}}(\alpha)$ via the DLN separable potential, we show empirically that DLNs inherit the same elbow/threshold laws, providing a predictive bridge between explicit and implicit bias.
- Given that many generalization proxies depend on $\lVert \widehat{w_p} \rVert_r$, our results suggest that their predictive power will depend sensitively on which $\ell_r$ norm is used.
- oai:arXiv.org:2509.21181v3
+ Hebbian Physics Networks: A Self-Organizing Computational Architecture Based on Local Physical Laws
+ https://arxiv.org/abs/2507.00641
+ arXiv:2507.00641v2 Announce Type: replace-cross
+Abstract: Physical transport processes organize through local interactions that redistribute imbalance while preserving conservation. Classical solvers enforce this organization by applying fixed discrete operators on rigid grids. We introduce the Hebbian Physics Network (HPN), a computational framework that replaces this rigid scaffolding with a plastic transport geometry. An HPN is a coupled dynamical system of physical states on nodes and constitutive weights on edges in a graph. Residuals--local violations of continuity, momentum balance, or energy conservation--act as thermodynamic forces that drive the joint evolution of both the state and the operator (i.e. the adaptive weights). The weights adapt through a three-factor Hebbian rule, which we prove constitutes a strictly local gradient descent on the residual energy. This mechanism ensures thermodynamic stability: near equilibrium, the learned operator naturally converges to a symmetric, positive-definite form, rigorously reproducing Onsager's reciprocal relations without explicit enforcement. Far from equilibrium, the system undergoes a self-organizing search for a transport topology that restores global coercivity. Unlike optimization-based approaches that impose physics through global loss functions, HPNs embed conservation intrinsically: transport is restored locally by the evolving operator itself, without a global Poisson solve or backpropagated objective. We demonstrate the framework on scalar diffusion and incompressible lid-driven cavity flow, showing that physically consistent transport geometries and flow structures emerge from random initial conditions solely through residual-driven local adaptation. HPNs thus reframe computation not as the solution of a fixed equation, but as a thermodynamic relaxation process where the constitutive geometry and physical state co-evolve.
+ oai:arXiv.org:2507.00641v2
+ nlin.AO
+ cs.LG
- math.ST
- stat.ML
- stat.TH
- Tue, 09 Dec 2025 00:00:00 -0500
+ stat.CO
+ stat.ME
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Shuofeng Zhang, Ard Louis
+ Gunjan Auti, Hirofumi Daiguji, Gouhei Tanaka
- AuON: A Linear-time Alternative to Orthogonal Momentum Updates
- https://arxiv.org/abs/2509.24320
- arXiv:2509.24320v3 Announce Type: replace-cross
-Abstract: Orthogonal momentum gradient updates have emerged to overcome the limitations of vector-based optimizers like Adam, which suffer from high memory costs and ill-conditioned momentum updates. However, traditional orthogonalization approaches, such as SVD/QR decomposition, incur high computational and memory costs and underperform well-tuned SGD with momentum. Recent advances, such as Muon, improve efficiency by applying momentum before orthogonalization and approximating orthogonal matrices via Newton-Schulz iterations, which improves GPU utilization, sustains high TFLOPS, and reduces memory usage by up to 3x. Nevertheless, vanilla Muon suffers from exploding attention logits and has cubic computational complexity. In this paper, we take a deep dive into orthogonal momentum updates to identify the main properties that allow Muon to achieve its remarkable performance. We propose \textbf{AuON} (Alternative Unit-norm momentum updates by Normalized nonlinear scaling), a linear-time optimizer that achieves strong performance without approximating orthogonal matrices, while preserving structural alignment and reconditioning ill-posed updates. AuON has an automatic \textbf{"emergency brake"} to handle exploding attention logits. We further introduce a hybrid variant (\textbf{Hybrid-AuON}) that applies linear transformations with Newton-Schulz iterations and outperforms Muon on language modeling tasks. Code is available at: https://github.com/ryyzn9/AuON
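The abstract does not spell out the normalization, so the sketch below only gestures at the idea: an O(mn) unit-scale momentum update with a saturating nonlinearity standing in for the "emergency brake". The function name, the RMS normalization, and the tanh choice are all our assumptions, not the AuON transform.

```python
# Hedged sketch of the *idea* of unit-norm momentum updates with nonlinear
# scaling; details are assumptions, not the authors' code.
import numpy as np

def auon_like_step(W, grad, state, lr=0.02, beta=0.95, eps=1e-8):
    m = state.get("m")
    m = grad.copy() if m is None else beta * m + (1 - beta) * grad
    state["m"] = m
    rms = np.sqrt(np.mean(m * m)) + eps   # O(mn) normalization, no SVD/QR
    W -= lr * np.tanh(m / rms)            # bounded update ("emergency brake")
    return W

W, state = np.zeros((4, 4)), {}
rng = np.random.default_rng(0)
for _ in range(3):
    W = auon_like_step(W, rng.standard_normal((4, 4)), state)
```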
- oai:arXiv.org:2509.24320v3
+ Amortized Bayesian Meta-Learning for Low-Rank Adaptation of Large Language Models
+ https://arxiv.org/abs/2508.14285
+ arXiv:2508.14285v2 Announce Type: replace-cross
+Abstract: Fine-tuning large language models (LLMs) with low-rank adaptation (LoRA) is a cost-effective way to incorporate information from a specific dataset. However, it is often unclear how well the fine-tuned LLM will generalize, i.e., how well it will perform on unseen datasets. Methods have been proposed to improve generalization by optimizing in-context prompts, or by using meta-learning to fine-tune LLMs. However, these methods are expensive in memory and computation, requiring either long-context prompts or saving copies of parameters and using second-order gradient updates. To address these challenges, we propose Amortized Bayesian Meta-Learning for LoRA (ABMLL). This method builds on amortized Bayesian meta-learning for smaller models, adapting this approach to LLMs while maintaining its computational efficiency. We reframe task-specific and global parameters in the context of LoRA and use a new hyperparameter to balance reconstruction accuracy and the fidelity of task-specific parameters to the global ones. ABMLL provides effective generalization and scales to large models such as LLAMA3-8B. Furthermore, as a result of using a Bayesian framework, ABMLL provides improved uncertainty quantification. We test ABMLL on CrossFit and Unified-QA datasets and find that it outperforms existing methods on these benchmarks in terms of both accuracy and expected calibration error.
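The balance the abstract describes, reconstruction accuracy versus fidelity of task-specific parameters to global ones, has the familiar KL-regularized shape. A minimal sketch under Gaussian mean-field assumptions (the names, the factorization, and the placement of the hyperparameter are ours, not the paper's):

```python
# Hedged sketch of an ABMLL-style objective shape: a per-task LoRA posterior
# N(mu_t, sig_t^2) is pulled toward a global posterior N(mu_g, sig_g^2).
import numpy as np

def gaussian_kl(mu_q, sig_q, mu_p, sig_p):
    # KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ), elementwise then summed
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2 * sig_p**2) - 0.5)

def abmll_like_loss(recon_nll, mu_t, sig_t, mu_g, sig_g, lam=0.1):
    # lam balances data fit against fidelity of task-specific LoRA
    # parameters to the global ones
    return recon_nll + lam * gaussian_kl(mu_t, sig_t, mu_g, sig_g)
```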
+ oai:arXiv.org:2508.14285v2
+ cs.LG
+ cs.AI
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Dipan Maity
+ http://creativecommons.org/licenses/by/4.0/
+ Liyi Zhang, Jake Snell, Thomas L. Griffiths
- EnScale: Temporally-consistent multivariate generative downscaling via proper scoring rules
- https://arxiv.org/abs/2509.26258
- arXiv:2509.26258v2 Announce Type: replace-cross
-Abstract: The practical use of future climate projections from global circulation models (GCMs) is often limited by their coarse spatial resolution, requiring downscaling to generate high-resolution data. Regional climate models (RCMs) provide this refinement, but are computationally expensive. To address this issue, machine learning models can learn the downscaling function, mapping coarse GCM outputs to high-resolution fields. Among these, generative approaches aim to capture the full conditional distribution of RCM data given coarse-scale GCM data, which is characterized by large variability and thus challenging to model accurately. We introduce EnScale, a generative machine learning framework that emulates the full GCM-to-RCM map by training on multiple pairs of GCM and corresponding RCM data. It first adjusts large-scale mismatches between GCM and coarsened RCM data, followed by a super-resolution step to generate high-resolution fields. Both steps employ generative models optimized with the energy score, a proper scoring rule. Compared to state-of-the-art ML downscaling approaches, our setup reduces computational cost by about one order of magnitude. EnScale jointly emulates multiple variables -- temperature, precipitation, solar radiation, and wind -- in a spatially consistent manner over an area in Central Europe. In addition, we propose a variant, EnScale-t, that enables temporally consistent downscaling. We establish a comprehensive evaluation framework across various categories including calibration, spatial structure, extremes, and multivariate dependencies. Comparison with diverse benchmarks demonstrates EnScale's strong performance and computational efficiency. EnScale offers a promising approach for accurate and temporally consistent RCM emulation.
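The energy score that both training steps optimize has a standard sample-based estimator. A minimal sketch (ensemble-shaped inputs assumed; this is not EnScale's training code):

```python
# Monte Carlo energy score from an ensemble of m draws; lower is better.
import numpy as np

def energy_score(samples, y):
    """samples: (m, d) ensemble draws; y: (d,) observation.
    ES = mean ||X_i - y|| - 0.5 * mean ||X_i - X_j||."""
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=-1))
    return term1 - 0.5 * term2

rng = np.random.default_rng(0)
print(energy_score(rng.standard_normal((64, 3)), np.zeros(3)))
```

Averaging the pairwise term over all index pairs, including the zero diagonal, makes this a V-statistic-style estimate, which is fine for illustration.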
- oai:arXiv.org:2509.26258v2
- physics.ao-ph
- physics.data-an
+ A statistical test for network similarity
+ https://arxiv.org/abs/2508.14399
+ arXiv:2508.14399v2 Announce Type: replace-cross
+Abstract: In this article, we revisit and expand our prior work on graph similarity. As with our earlier work, we focus on a view of similarity which does not require node correspondence between graphs under comparison. Our work is suited to the temporal study of networks, change-point and anomaly detection, and simple comparisons of static graphs. It provides a similarity metric for the study of (weakly) connected graphs. Our work proposes a metric designed to compare networks and assess the (dis)similarity between them. For example, given three different graphs with possibly different numbers of nodes, $G_1$, $G_2$ and $G_3$, we aim to answer two questions: a) "How different is $G_1$ from $G_2$?" and b) "Is graph $G_3$ more similar to $G_1$ or to $G_2$?". We illustrate the value of our test and its accuracy through several new experiments, using synthetic and real-world graphs.
+ oai:arXiv.org:2508.14399v2
+ cs.DM
+ stat.AP
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by-sa/4.0/
- Maybritt Schillinger, Maxim Samarin, Xinwei Shen, Reto Knutti, Nicolai Meinshausen
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Pierre Miasnikof, Alexander Y. Shetopaloff
- AdaDetectGPT: Adaptive Detection of LLM-Generated Text with Statistical Guarantees
- https://arxiv.org/abs/2510.01268
- arXiv:2510.01268v4 Announce Type: replace-cross
-Abstract: We study the problem of determining whether a piece of text has been authored by a human or by a large language model (LLM). Existing state-of-the-art logits-based detectors make use of statistics derived from the log-probability of the observed text evaluated under the distribution function of a given source LLM. However, relying solely on log-probabilities can be sub-optimal. In response, we introduce AdaDetectGPT -- a novel classifier that adaptively learns a witness function from training data to enhance the performance of logits-based detectors. We provide statistical guarantees on its true positive rate, false positive rate, true negative rate, and false negative rate. Extensive numerical studies show that AdaDetectGPT nearly uniformly improves on the state-of-the-art method across various combinations of datasets and LLMs, with improvements of up to 37\%. A Python implementation of our method is available at https://github.com/Mamba413/AdaDetectGPT.
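To make "learning a witness function on top of logits-based statistics" concrete, here is a deliberately simplified stand-in: logistic regression over per-token log-probability summaries. AdaDetectGPT's actual witness class and its guarantees are in the paper; every name below is hypothetical.

```python
# Illustration only: a logits-based detector with a learned "witness"
# over per-token log-probability features (not the paper's estimator).
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(token_logprobs):
    lp = np.asarray(token_logprobs)
    return np.array([lp.mean(), lp.std(), lp.min(), np.median(lp)])

def fit_witness(train_lps, labels):
    # train_lps: list of per-token log-prob arrays; labels: 1 = LLM-generated
    X = np.stack([features(lp) for lp in train_lps])
    return LogisticRegression().fit(X, labels)

def detect(clf, token_logprobs, threshold=0.5):
    return clf.predict_proba(features(token_logprobs)[None, :])[0, 1] > threshold
```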
- oai:arXiv.org:2510.01268v4
- cs.CL
- cs.AI
- cs.LG
+ Contractive kinetic Langevin samplers beyond global Lipschitz continuity
+ https://arxiv.org/abs/2509.12031
+ arXiv:2509.12031v2 Announce Type: replace-cross
+Abstract: In this paper, we examine the problem of sampling from log-concave distributions with (possibly) superlinear gradient growth under kinetic (underdamped) Langevin algorithms. Using a carefully tailored taming scheme, we propose two novel discretizations of the kinetic Langevin SDE, and we show that they are both contractive and satisfy a log-Sobolev inequality. Building on this, we establish a series of non-asymptotic bounds in $2$-Wasserstein distance between the law reached by each algorithm and the underlying target measure.
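A tamed discretization caps the drift so that a superlinearly growing gradient cannot blow up the chain. A minimal 1-D sketch with a quartic potential (the paper's two schemes and constants differ; the taming denominator below is one common choice):

```python
# Sketch of a tamed Euler scheme for kinetic (underdamped) Langevin
# dynamics; taming keeps steps stable when grad U grows superlinearly.
import numpy as np

def grad_U(x):                 # quartic potential U(x) = x^4 / 4
    return x ** 3

def tamed_kinetic_langevin(x0, v0, h=0.01, gamma=1.0, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x, v, xs = x0, v0, []
    for _ in range(n_steps):
        g = grad_U(x)
        g_tamed = g / (1.0 + h * np.abs(g))       # bounded ("tamed") drift
        v = v - h * (gamma * v + g_tamed) + np.sqrt(2 * gamma * h) * rng.standard_normal()
        x = x + h * v                              # semi-implicit position step
        xs.append(x)
    return np.array(xs)

samples = tamed_kinetic_langevin(0.0, 0.0)
print("post-burn-in mean/var:", samples[10000:].mean(), samples[10000:].var())
```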
+ oai:arXiv.org:2509.12031v2
+ math.PR
+ cs.NA
+ math.NA
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Hongyi Zhou, Jin Zhu, Pingfan Su, Kai Ye, Ying Yang, Shakeel A O B Gavioli-Akilagun, Chengchun Shi
+ Iosif Lytras, Panayotis Mertikopoulos
- K-DAREK: Distance Aware Error for Kurkova Kolmogorov Networks
- https://arxiv.org/abs/2510.22021
- arXiv:2510.22021v2 Announce Type: replace-cross
-Abstract: Neural networks are powerful parametric function approximators, while Gaussian processes (GPs) are nonparametric probabilistic models that place distributions over functions via kernel-defined correlations but become computationally expensive for large-scale problems. Kolmogorov-Arnold networks (KANs), semi-parametric neural architectures, model complex functions efficiently using spline layers. Kurkova Kolmogorov-Arnold networks (KKANs) extend KANs by replacing the early spline layers with multi-layer perceptrons that map inputs into higher-dimensional spaces before applying spline-based transformations, which yields more stable training and provides robust architectures for system modeling. By enhancing the KKAN architecture, we develop a novel learning algorithm, distance-aware error for Kurkova-Kolmogorov networks (K-DAREK), for efficient and interpretable function approximation with uncertainty quantification. Our approach establishes robust error bounds that are distance-aware; this means they reflect the proximity of a test point to its nearest training points. In safe control case studies, we demonstrate that K-DAREK is about four times faster and ten times more computationally efficient than an ensemble of KANs, 8.6 times more scalable than GPs as data size increases, and 7.2% safer than our previous work, distance-aware error for Kolmogorov networks (DAREK). Moreover, on real data (e.g., Real Estate Valuation), K-DAREK's error bound achieves zero coverage violations.
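The generic shape of a distance-aware error bound is easy to state: it is smallest near training data and grows with the distance to the nearest training input. A sketch under an assumed Lipschitz constant and training-fit tolerance (illustrative, not the K-DAREK derivation):

```python
# Generic distance-aware bound: |f(x) - f_hat(x)| <= eps + L * d(x, data).
# L (Lipschitz constant) and eps (training-fit tolerance) are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def distance_aware_bound(x_query, X_train, L=2.0, eps=0.05):
    tree = cKDTree(X_train)
    dist, _ = tree.query(x_query)      # distance to nearest training point
    return eps + L * dist

X_train = np.random.default_rng(0).standard_normal((100, 2))
print(distance_aware_bound(np.array([0.1, 0.2]), X_train))   # tight in-sample
print(distance_aware_bound(np.array([5.0, 5.0]), X_train))   # loose far away
```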
- oai:arXiv.org:2510.22021v2
+ Graph Coloring for Multi-Task Learning
+ https://arxiv.org/abs/2509.16959
+ arXiv:2509.16959v4 Announce Type: replace-cross
+Abstract: When different objectives conflict with each other in multi-task learning, gradients begin to interfere and slow convergence, thereby potentially reducing the final model's performance. To address this, we introduce SON-GOKU, a scheduler that computes gradient interference, constructs an interference graph, and then applies greedy graph-coloring to partition tasks into groups that align well with each other. At each training step, only one group (color class) of tasks is activated, and the grouping partition is constantly recomputed as task relationships evolve throughout training. By ensuring that each mini-batch contains only tasks that pull the model in the same direction, our method improves the effectiveness of any underlying multi-task learning optimizer without additional tuning. Since tasks within these groups update in compatible directions, multi-task learning improves model performance rather than impeding it. Empirical results on six different datasets show that this interference-aware graph-coloring approach consistently outperforms baselines and state-of-the-art multi-task optimizers. We provide extensive theory showing why grouping and sequential updates improve multi-task learning, with guarantees on descent, convergence, and accurate identification of which tasks conflict or align.
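The scheduler's two mechanical pieces, an interference graph from pairwise gradient cosines and a greedy coloring of it, can be sketched directly; the conflict threshold and the example gradients below are assumptions, not the paper's settings.

```python
# Sketch: edge between tasks whose gradients conflict (negative cosine),
# then greedy coloring so each color class contains only aligned tasks.
import numpy as np

def conflict_graph(task_grads, tol=0.0):
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in task_grads])
    cos = G @ G.T
    n = len(task_grads)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if cos[i, j] < tol]

def greedy_coloring(n, edges):
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j); adj[j].add(i)
    colors = [-1] * n
    for v in range(n):
        taken = {colors[u] for u in adj[v] if colors[u] >= 0}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors   # tasks sharing a color are activated in the same step

grads = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.2])]
print(greedy_coloring(3, conflict_graph(grads)))   # -> [0, 0, 1]
```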
+ oai:arXiv.org:2509.16959v4
+ cs.LG
- eess.SP
+ cs.AI
+ cs.NE
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Masoud Ataei, Vikas Dhiman, Mohammad Javad Khojasteh
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Santosh Patapati
- LAD-BNet: Lag-Aware Dual-Branch Networks for Real-Time Energy Forecasting on Edge Devices
- https://arxiv.org/abs/2511.10680
- arXiv:2511.10680v2 Announce Type: replace-cross
-Abstract: Real-time energy forecasting on edge devices represents a major challenge for smart grid optimization and intelligent buildings. We present LAD-BNet (Lag-Aware Dual-Branch Network), an innovative neural architecture optimized for edge inference with the Google Coral TPU. Our hybrid approach combines a branch dedicated to explicit exploitation of temporal lags with a Temporal Convolutional Network (TCN) featuring dilated convolutions, enabling simultaneous capture of short- and long-term dependencies. Tested on real energy consumption data with 10-minute temporal resolution, LAD-BNet achieves 14.49% MAPE at a 1-hour horizon with only 18ms inference time on the Edge TPU, representing an 8-12x acceleration compared to CPU. The multi-scale architecture enables predictions up to 12 hours ahead with controlled performance degradation. Our model demonstrates a 2.39% improvement over LSTM baselines and 3.04% over pure TCN architectures, while maintaining a 180MB memory footprint suitable for embedded device constraints. These results pave the way for industrial applications in real-time energy optimization, demand management, and operational planning.
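A structural sketch of a lag-aware dual branch, one branch reading explicit lags and one running dilated convolutions, is below; the layer sizes, lag set, and padding scheme are placeholders rather than the published architecture.

```python
# Hedged sketch of a dual-branch forecaster: explicit-lag branch + dilated
# conv branch, fused by a linear head (dimensions are assumptions).
import torch
import torch.nn as nn

class DualBranch(nn.Module):
    def __init__(self, lags=(1, 6, 144), hidden=32, horizon=6):
        super().__init__()
        self.lags = lags
        self.lag_mlp = nn.Sequential(nn.Linear(len(lags), hidden), nn.ReLU())
        self.tcn = nn.Sequential(   # dilated convs; a real TCN would pad causally
            nn.Conv1d(1, hidden, 3, dilation=1, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, dilation=2, padding=4), nn.ReLU(),
        )
        self.head = nn.Linear(2 * hidden, horizon)

    def forward(self, x):                # x: (batch, seq_len)
        lag_feats = torch.stack([x[:, -l] for l in self.lags], dim=-1)
        a = self.lag_mlp(lag_feats)                  # explicit-lag branch
        b = self.tcn(x.unsqueeze(1))[:, :, -1]       # conv branch, last step
        return self.head(torch.cat([a, b], dim=-1))

model = DualBranch()
print(model(torch.randn(4, 200)).shape)  # torch.Size([4, 6])
```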
- oai:arXiv.org:2511.10680v2
+ CLAPS: Posterior-Aware Conformal Intervals via Last-Layer Laplace
+ https://arxiv.org/abs/2512.01384
+ arXiv:2512.01384v2 Announce Type: replace-cross
+Abstract: We present CLAPS, a posterior-aware conformal regression method that pairs a Last-Layer Laplace Approximation with split-conformal calibration. From the resulting Gaussian posterior, CLAPS defines a simple two-sided posterior CDF score that aligns the conformity metric with the full predictive shape, not just a point estimate. This alignment yields narrower prediction intervals at the same target coverage, especially on small to medium tabular datasets where data are scarce and uncertainty modeling matters. We also provide a lightweight diagnostic suite that separates aleatoric and epistemic components and visualizes posterior behavior, helping practitioners understand why intervals shrink when they do. Across multiple benchmarks using the same MLP backbone, CLAPS consistently attains nominal coverage with improved efficiency and minimal overhead, offering a clear, practical upgrade to residual-based conformal baselines.
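The two-sided posterior-CDF score has a particularly clean split-conformal form when the predictive is Gaussian, as in a last-layer Laplace approximation. A hedged sketch (the score definition and finite-sample correction are as we understand them; the paper's exact recipe and diagnostics may differ):

```python
# Sketch: conformity score s = |F(y|x) - 1/2| under N(mu(x), sigma(x)^2),
# calibrated on a held-out split, then inverted to an interval.
import numpy as np
from scipy.stats import norm

def calibrate(mu_cal, sig_cal, y_cal, alpha=0.1):
    s = np.abs(norm.cdf(y_cal, loc=mu_cal, scale=sig_cal) - 0.5)
    n = len(s)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    return np.quantile(s, level)

def interval(mu, sig, q):
    # all y with |F(y|x) - 1/2| <= q
    return (norm.ppf(0.5 - q, loc=mu, scale=sig),
            norm.ppf(0.5 + q, loc=mu, scale=sig))

rng = np.random.default_rng(0)
mu, sig = rng.standard_normal(200), np.full(200, 1.0)
y = mu + sig * rng.standard_normal(200)   # well-specified toy calibration set
q = calibrate(mu, sig, y)
print(interval(0.0, 1.0, q))              # roughly a 90% central interval
```

Because the score is a monotone function of the predictive CDF, the resulting interval adapts to the full predictive shape rather than to a point estimate alone.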
+ oai:arXiv.org:2512.01384v2
+ cs.LG
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Jean-Philippe Lignier
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Dongseok Kim, Hyoungsun Choi, Mohamed Jismy Aashik Rasool, Gisung Oh
- An Adaptive Resonance Theory-based Topological Clustering Algorithm with a Self-Adjusting Vigilance Parameter
- https://arxiv.org/abs/2511.17983
- arXiv:2511.17983v5 Announce Type: replace-cross
-Abstract: Clustering in stationary and nonstationary settings, where data distributions remain static or evolve over time, requires models that can adapt to distributional shifts while preserving previously learned cluster structures. This paper proposes an Adaptive Resonance Theory (ART)-based topological clustering algorithm that autonomously adjusts its recalculation interval and vigilance threshold through a diversity-driven adaptation mechanism. This mechanism enables hyperparameter-free learning that maintains cluster stability and continuity in dynamic environments. Experiments on 24 real-world datasets demonstrate that the proposed algorithm outperforms state-of-the-art methods in both clustering performance and continual learning capability. These results highlight the effectiveness of the proposed parameter adaptation in mitigating catastrophic forgetting and maintaining consistent clustering in evolving data streams. Source code is available at https://github.com/Masuyama-lab/IDAT
- oai:arXiv.org:2511.17983v5
+ Mitigating the Curse of Detail: Scaling Arguments for Feature Learning and Sample Complexity
+ https://arxiv.org/abs/2512.04165
+ arXiv:2512.04165v3 Announce Type: replace-cross
+Abstract: Two pressing topics in the theory of deep learning are the interpretation of feature learning mechanisms and the determination of implicit bias of networks in the rich regime. Current theories of rich feature learning often appear in the form of high-dimensional non-linear equations, which require computationally intensive numerical solutions. Given the many details that go into defining a deep learning problem, this complexity is a significant and often unavoidable challenge. Here, we propose a powerful heuristic route for predicting the data and width scales at which various patterns of feature learning emerge. This form of scale analysis is considerably simpler than exact theories and reproduces the scaling exponents of various known results. In addition, we make novel predictions on complex toy architectures, such as three-layer non-linear networks and attention heads, thus extending the scope of first-principle theories of deep learning.
+ oai:arXiv.org:2512.04165v3
+ cs.LG
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Naoki Masuyama, Yuichiro Toda, Yusuke Nojima, Hisao Ishibuchi
-
-
- Correlation-Weighted Communicability Curvature as a Structural Driver of Dengue Spread: A Bayesian Spatial Analysis of Recife (2015-2024)
- https://arxiv.org/abs/2512.00315
- arXiv:2512.00315v2 Announce Type: replace-cross
-Abstract: We investigate whether the structural connectivity of urban road networks helps explain dengue incidence in Recife, Brazil (2015--2024). For each neighborhood, we compute the average \emph{communicability curvature}, a graph-theoretic measure capturing the ability of a locality to influence others through multiple network paths. We integrate this metric into Negative Binomial models, fixed-effects regressions, SAR/SAC spatial models, and a hierarchical INLA/BYM2 specification. Across all frameworks, curvature is the strongest and most stable predictor of dengue risk. In the BYM2 model, the structured spatial component collapses ($\phi \approx 0$), indicating that functional network connectivity explains nearly all spatial dependence typically attributed to adjacency-based CAR terms. The results show that dengue spread in Recife is driven less by geographic contiguity and more by network-mediated structural flows.
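The communicability ingredient is the classical one: entries of $e^A$ aggregate all walks between two nodes with $1/k!$ weights. The paper's correlation-weighted curvature is more specific; the sketch below computes only a per-node average communicability as a starting point.

```python
# Sketch of the communicability building block (not the paper's curvature):
# G = expm(A) sums walks i -> j of length k with weight 1/k!.
import numpy as np
from scipy.linalg import expm

def avg_communicability(A):
    """A: symmetric adjacency matrix; returns each node's mean ability to
    influence the others through all network paths."""
    return expm(A).mean(axis=1)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(avg_communicability(A))   # hub node 2 scores highest
```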
- oai:arXiv.org:2512.00315v2
- physics.soc-ph
- math.PR
- q-bio.PE
- stat.AP
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Marcílio Ferreira dos Santos, Cleiton de Lima Ricardo, Andreza dos Santos Rodrigues de Melo
-
-
- Calibrating Geophysical Predictions under Constrained Probabilistic Distributions
- https://arxiv.org/abs/2512.03081
- arXiv:2512.03081v2 Announce Type: replace-cross
-Abstract: Machine learning (ML) has shown significant promise in studying complex geophysical dynamical systems, including turbulence and climate processes. Such systems often display sensitive dependence on initial conditions, reflected in positive Lyapunov exponents, where even small perturbations in short-term forecasts can lead to large deviations in long-term outcomes. Thus, meaningful inference requires not only accurate short-term predictions, but also consistency with the system's long-term attractor, which is captured by the marginal distribution of state variables. Existing approaches attempt to address this challenge by incorporating spatial and temporal dependence, but these strategies become impractical when data are extremely sparse. In this work, we show that prior knowledge of marginal distributions offers valuable complementary information to short-term observations, motivating a distribution-informed learning framework. We introduce a calibration algorithm based on normalization and the Kernelized Stein Discrepancy (KSD) to enhance ML predictions. The method employs KSD within a reproducing kernel Hilbert space to calibrate model outputs, improving their fidelity to known physical distributions. This not only sharpens pointwise predictions but also enforces consistency with non-local statistical structures rooted in physical principles. Through synthetic experiments -- spanning offline climatological CO2 fluxes and online quasi-geostrophic flow simulations -- we demonstrate the robustness and broad utility of the proposed framework.
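For readers who want the KSD piece in code: a 1-D V-statistic estimate with an RBF kernel and a standard-normal target (score $s(x) = -x$). The bandwidth, kernel, and dimension are illustrative; the paper's RKHS calibration of model outputs is not shown here.

```python
# Sketch: KSD V-statistic in 1-D with RBF kernel k(x,y)=exp(-(x-y)^2/(2h^2)).
# Stein kernel: u(x,y) = s(x)s(y)k + s(x) dk/dy + s(y) dk/dx + d2k/dxdy.
import numpy as np

def ksd_vstat(x, score, h=1.0):
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    dkx = -d / h**2 * k                  # d/dx k(x, y)
    dky = d / h**2 * k                   # d/dy k(x, y)
    dkxy = (1 / h**2 - d**2 / h**4) * k  # d2/dxdy k(x, y)
    s = score(x)
    u = s[:, None] * s[None, :] * k + s[:, None] * dky + s[None, :] * dkx + dkxy
    return u.mean()

rng = np.random.default_rng(0)
score = lambda x: -x                                    # score of N(0, 1)
print(ksd_vstat(rng.standard_normal(500), score))       # near 0: matched
print(ksd_vstat(2 + rng.standard_normal(500), score))   # larger: shifted
```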
- oai:arXiv.org:2512.03081v2
- physics.ao-ph
- cs.LG
- stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhewen Hou, Jiajin Sun, Subashree Venkatasubramanian, Peter Jin, Shuolin Li, Tian Zheng
+ Noa Rubin, Orit Davidovich, Zohar Ringel
- Mitigating the Curse of Detail: Scaling Arguments for Feature Learning and Sample Complexity
- https://arxiv.org/abs/2512.04165
- arXiv:2512.04165v2 Announce Type: replace-cross
-Abstract: Two pressing topics in the theory of deep learning are the interpretation of feature learning mechanisms and the determination of implicit bias of networks in the rich regime. Current theories of rich feature learning often appear in the form of high-dimensional non-linear equations, which require computationally intensive numerical solutions; even under such limiting settings, predictions remain computationally demanding. Given the many details that go into defining a deep learning problem, this analytical complexity is a significant and often unavoidable challenge. Here, we propose a powerful heuristic route for predicting the data and width scales at which various patterns of feature learning emerge. This form of scale analysis is considerably simpler than such exact theories and reproduces the scaling exponents of various known results. In addition, we make novel predictions on complex toy architectures, such as three-layer non-linear networks and attention heads, thus extending the scope of first-principle theories of deep learning.
- oai:arXiv.org:2512.04165v2
+ Uncertainty Quantification for Scientific Machine Learning using Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KAN)
+ https://arxiv.org/abs/2512.05306
+ arXiv:2512.05306v2 Announce Type: replace-cross
+Abstract: Kolmogorov-Arnold Networks have emerged as interpretable alternatives to traditional multi-layer perceptrons. However, standard implementations lack principled uncertainty quantification capabilities essential for many scientific applications. We present a framework integrating sparse variational Gaussian process inference with the Kolmogorov-Arnold topology, enabling scalable Bayesian inference with computational complexity quasi-linear in sample size. Through analytic moment matching, we propagate uncertainty through deep additive structures while maintaining interpretability. We use three example studies to demonstrate the framework's ability to distinguish aleatoric from epistemic uncertainty: calibration of heteroscedastic measurement noise in fluid flow reconstruction, quantification of prediction confidence degradation in multi-step forecasting of advection-diffusion dynamics, and out-of-distribution detection in convolutional autoencoders. These results suggest that Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KANs) are a promising architecture for uncertainty-aware learning in scientific machine learning.
+ oai:arXiv.org:2512.05306v2
+ cs.LG
+ stat.ML
- Tue, 09 Dec 2025 00:00:00 -0500
+ Wed, 10 Dec 2025 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
- Noa Rubin, Orit Davidovich, Zohar Ringel
+ Y. Sungtaek Ju