diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
@@ -7,1722 +7,12 @@
http://www.rssboard.org/rss-specification
en-us
- Thu, 01 Jan 2026 05:00:18 +0000
+ Fri, 02 Jan 2026 05:00:04 +0000
rss-help@arxiv.org
- Thu, 01 Jan 2026 00:00:00 -0500
+ Fri, 02 Jan 2026 00:00:00 -0500
- Saturday
- Sunday
+ Saturday
-
- Marked point processes intensity estimation using sparse group Lasso method applied to locations of lucrative and cooperative banks in mainland France
- https://arxiv.org/abs/2512.23772
- arXiv:2512.23772v1 Announce Type: new
-Abstract: In this paper, we model the locations of five major banks in mainland France, two lucrative and three cooperative institutions based on socio-economic considerations. Locations of banks are collected using web scraping and constitute a bivariate spatial point process for which we nonparametrically estimate summary functions (intensity, Ripley and cross-Ripley's K functions). This shows that the pattern is highly inhomogeneous and exhibits a clustering effect especially at small scales, and thus a significant departure from the bivariate (inhomogeneous) Poisson point process is pointed out. We also collect socio-economic datasets (at the living area level) from INSEE and propose a parametric modelling of the intensity function using these covariates. We propose a group-penalized bivariate composite likelihood method to estimate the model parameters, and we establish its asymptotic properties. The application of the methodology to the banking dataset provides new insights into the specificity of the cooperative model within the sector, particularly in relation to the theories of institutional isomorphism.
- oai:arXiv.org:2512.23772v1
- stat.ME
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Am\'elie Artis (PACTE), Achmad Choiruddin (SVH), Jean-Fran\c{c}ois Coeurjolly (SVH), Fr\'ed\'erique Letu\'e (SVH)
-
-
- Fitted Q Evaluation Without Bellman Completeness via Stationary Weighting
- https://arxiv.org/abs/2512.23805
- arXiv:2512.23805v1 Announce Type: new
-Abstract: Fitted Q-evaluation (FQE) is a central method for off-policy evaluation in reinforcement learning, but it generally requires Bellman completeness: that the hypothesis class is closed under the evaluation Bellman operator. This requirement is challenging because enlarging the hypothesis class can worsen completeness. We show that the need for this assumption stems from a fundamental norm mismatch: the Bellman operator is gamma-contractive under the stationary distribution of the target policy, whereas FQE minimizes Bellman error under the behavior distribution. We propose a simple fix: reweight each regression step using an estimate of the stationary density ratio, thereby aligning FQE with the norm in which the Bellman operator contracts. This enables strong evaluation guarantees in the absence of realizability or Bellman completeness, avoiding the geometric error blow-up of standard FQE in this setting while maintaining the practicality of regression-based evaluation.
- oai:arXiv.org:2512.23805v1
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Lars van der Laan, Nathan Kallus
-
-
- Energy-Tweedie: Score meets Score, Energy meets Energy
- https://arxiv.org/abs/2512.23818
- arXiv:2512.23818v1 Announce Type: new
-Abstract: Denoising and score estimation have long been known to be linked via the classical Tweedie's formula. In this work, we first extend the latter to a wider range of distributions often called "energy models" and denoted elliptical distributions in this work. Next, we examine an alternative view: we consider the denoising posterior $P(X|Y)$ as the optimizer of the energy score (a scoring rule) and derive a fundamental identity that connects the (path-) derivative of a (possibly) non-Euclidean energy score to the score of the noisy marginal. This identity can be seen as an analog of Tweedie's identity for the energy score, and allows for several interesting applications; for example, score estimation, noise distribution parameter estimation, as well as using energy score models in the context of "traditional" diffusion model samplers with a wider array of noising distributions.
- oai:arXiv.org:2512.23818v1
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Andrej Leban
-
-
- A Fuzzy Approach for Randomized Confidence Intervals
- https://arxiv.org/abs/2512.23866
- arXiv:2512.23866v1 Announce Type: new
-Abstract: We propose randomized confidence intervals based on the Neyman-Pearson lemma, in order to make them more broadly applicable to distributions that do not satisfy regularity conditions. This is achieved by using the definition of fuzzy confidence intervals. These intervals are compared with methods described in the literature for well-known distributions such as normal, binomial, and Poisson. The results show that in high-variance situations, the new intervals provide better performance. Furthermore, through these intervals, it is possible to compute a lower bound for the expected length, demonstrating that they achieve the minimal maximum expected length for a Bernoulli trial observation.
- oai:arXiv.org:2512.23866v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Carlos Henrique Trigo Nasser Felix, Nancy Lopes Garcia, Alex Rodrigo dos Santos Sousa
-
-
- Forecasting the Term Structure of Interest Rates with SPDE-Based Models
- https://arxiv.org/abs/2512.23910
- arXiv:2512.23910v1 Announce Type: new
-Abstract: The Dynamic Nelson--Siegel (DNS) model is a widely used framework for term structure forecasting. We propose a novel extension that models DNS residuals as a Gaussian random field, capturing dependence across both time and maturity. The residual field is represented via a stochastic partial differential equation (SPDE), enabling flexible covariance structures and scalable Bayesian inference through sparse precision matrices. We consider a range of SPDE specifications, including stationary, non-stationary, anisotropic, and nonseparable models. The SPDE--DNS model is estimated in a Bayesian framework using the integrated nested Laplace approximation (INLA), jointly inferring latent DNS factors and the residual field. Empirical results show that the SPDE-based extensions improve both point and probabilistic forecasts relative to standard benchmarks. When applied in a mean--variance bond portfolio framework, the forecasts generate economically meaningful utility gains, measured as performance fees relative to a Bayesian DNS benchmark under monthly rebalancing. Importantly, incorporating the structured SPDE residual substantially reduces cross-maturity and intertemporal dependence in the remaining measurement error, bringing it closer to white noise. These findings highlight the advantages of combining DNS with SPDE-driven residual modeling for flexible, interpretable, and computationally efficient yield curve forecasting.
- oai:arXiv.org:2512.23910v1
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Qihao Duan, Alexandre B. Simas, David Bolin, Rapha\"el Huser
-
-
- Stationary Reweighting Yields Local Convergence of Soft Fitted Q-Iteration
- https://arxiv.org/abs/2512.23927
- arXiv:2512.23927v1 Announce Type: new
-Abstract: Fitted Q-iteration (FQI) and its entropy-regularized variant, soft FQI, are central tools for value-based model-free offline reinforcement learning, but can behave poorly under function approximation and distribution shift. In the entropy-regularized setting, we show that the soft Bellman operator is locally contractive in the stationary norm of the soft-optimal policy, rather than in the behavior norm used by standard FQI. This geometric mismatch explains the instability of soft Q-iteration with function approximation in the absence of Bellman completeness. To restore contraction, we introduce stationary-reweighted soft FQI, which reweights each regression update using the stationary distribution of the current policy. We prove local linear convergence under function approximation with geometrically damped weight-estimation errors, assuming approximate realizability. Our analysis further suggests that global convergence may be recovered by gradually reducing the softmax temperature, and that this continuation approach can extend to the hardmax limit under a mild margin condition.
- oai:arXiv.org:2512.23927v1
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Lars van der Laan, Nathan Kallus
-
-
- Implicit geometric regularization in flow matching via density weighted Stein operators
- https://arxiv.org/abs/2512.23956
- arXiv:2512.23956v1 Announce Type: new
-Abstract: Flow Matching (FM) has emerged as a powerful paradigm for continuous normalizing flows, yet standard FM implicitly performs an unweighted $L^2$ regression over the entire ambient space. In high dimensions, this leads to a fundamental inefficiency: the vast majority of the integration domain consists of low-density ``void'' regions where the target velocity fields are often chaotic or ill-defined. In this paper, we propose {$\gamma$-Flow Matching ($\gamma$-FM)}, a density-weighted variant that aligns the regression geometry with the underlying probability flow. While density weighting is desirable, naive implementations would require evaluating the intractable target density. We circumvent this by introducing a Dynamic Density-Weighting strategy that estimates the \emph{target} density directly from training particles. This approach allows us to dynamically downweight the regression loss in void regions without compromising the simulation-free nature of FM. Theoretically, we establish that $\gamma$-FM minimizes the transport cost on a statistical manifold endowed with the $\gamma$-Stein metric. Spectral analysis further suggests that this geometry induces an implicit Sobolev regularization, effectively damping high-frequency oscillations in void regions. Empirically, $\gamma$-FM significantly improves vector field smoothness and sampling efficiency on high-dimensional latent datasets, while demonstrating intrinsic robustness to outliers.
- oai:arXiv.org:2512.23956v1
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shinto Eguchi
-
-
- Fundamental limits for weighted empirical approximations of tilted distributions
- https://arxiv.org/abs/2512.23979
- arXiv:2512.23979v1 Announce Type: new
-Abstract: Consider the task of generating samples from a tilted distribution of a random vector whose underlying distribution is unknown, but samples from it are available. This finds applications in fields such as finance and climate science, and in rare event simulation. In this article, we discuss the asymptotic efficiency of a self-normalized importance sampler of the tilted distribution. We provide a sharp characterization of its accuracy, given the number of samples and the degree of tilt. Our findings reveal a surprising dichotomy: while the number of samples needed to accurately tilt a bounded random vector increases polynomially in the tilt amount, it increases at a super polynomial rate for unbounded distributions.
- oai:arXiv.org:2512.23979v1
- math.ST
- cs.LG
- math.PR
- stat.ML
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Sarvesh Ravichandran Iyer, Himadri Mandal, Dhruman Gupta, Rushil Gupta, Agniv Bandhyopadhyay, Achal Bassamboo, Varun Gupta, Sandeep Juneja
-
-
- Completing and studentising Spearman's correlation in the presence of ties
- https://arxiv.org/abs/2512.23993
- arXiv:2512.23993v1 Announce Type: new
-Abstract: Non-parametric correlation coefficients have been widely used for analysing arbitrary random variables upon common populations, when requiring an explicit error distribution to be known is an unacceptable assumption. We examine an \(\ell_{2}\) representation of a correlation coefficient (Emond and Mason, 2002) from the perspective of a statistical estimator upon random variables, and verify a number of interesting and highly desirable mathematical properties, mathematically similar to the Whitney embedding of a Hilbert space into the \(\ell_{2}\)-norm space. In particular, we show here that, in comparison to the traditional Spearman (1904) \(\rho\), the proposed Kemeny \(\rho_{\kappa}\) correlation coefficient satisfies Gauss-Markov conditions in the presence or absence of ties, thereby allowing both discrete and continuous marginal random variables. We also prove under standard regularity conditions a number of desirable scenarios, including the construction of a null hypothesis distribution which is Student-t distributed, parallel to standard practice with Pearson's r, but without requiring either continuous random variables or particular Gaussian errors. Simulations in particular focus upon highly kurtotic data, with near-nominal empirical coverage consistent with theoretical expectation.
- oai:arXiv.org:2512.23993v1
- stat.ME
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Landon Hurley
-
-
- Least Square Estimation: SDEs Perturbed by L\'evy Noise with Sparse Sample Paths
- https://arxiv.org/abs/2512.24005
- arXiv:2512.24005v1 Announce Type: new
-Abstract: This article investigates the least squares estimators (LSE) for the unknown parameters in stochastic differential equations (SDEs) that are affected by L\'evy noise, particularly when the sample paths are sparse. Specifically, given $n$ sparsely observed curves related to this model, we derive the least squares estimators for the unknown parameters: the drift coefficient, the diffusion coefficient, and the jump-diffusion coefficient. We also establish the asymptotic rate of convergence for the proposed LSEs. Additionally, in the supplementary materials, the proposed methodology is applied to a benchmark dataset of functional data/curves, and a small simulation study is conducted to illustrate the findings.
- oai:arXiv.org:2512.24005v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Brijesh Kumar Jha, Subhra Sankar Dhar, Akash Ashirbad Panda
-
-
- An exact unbiased semi-parametric maximum quasi-likelihood framework which is complete in the presence of ties
- https://arxiv.org/abs/2512.24009
- arXiv:2512.24009v1 Announce Type: new
-Abstract: This paper introduces a novel quasi-likelihood extension of the generalised Kendall \(\tau_{a}\) estimator, together with an extension of the Kemeny metric and its associated covariance and correlation forms. The central contribution is to show that the U-statistic structure of the proposed coefficient \(\tau_{\kappa}\) naturally induces a quasi-maximum likelihood estimation (QMLE) framework, yielding consistent Wald and likelihood ratio test statistics. The development builds on the uncentred correlation inner-product (Hilbert space) formulation of Emond and Mason (2002) and resolves the associated sub-Gaussian likelihood optimisation problem under the \(\ell_{2}\)-norm via an Edgeworth expansion of higher-order moments. The Kemeny covariance coefficient \(\tau_{\kappa}\) is derived within a novel likelihood framework for pairwise comparison-continuous random variables, enabling direct inference on population-level correlation between ranked or weakly ordered datasets. Unlike existing approaches that focus on marginal or pairwise summaries, the proposed framework supports sample-observed weak orderings and accommodates ties without information loss. Drawing parallels with Thurstone's Case V latent ordering model, we derive a quasi-likelihood-based tie model with analytic standard errors, generalising classical U-statistics. The framework applies to general continuous and discrete random variables and establishes formal equivalence to Bradley-Terry and Thurstone models, yielding a uniquely identified linear representation with both analytic and likelihood-based estimators.
- oai:arXiv.org:2512.24009v1
- stat.ME
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Landon Hurley
-
-
- Exposed: Shedding Blacklight on Online Privacy
- https://arxiv.org/abs/2512.24041
- arXiv:2512.24041v1 Announce Type: new
-Abstract: To what extent are users surveilled on the web, by what technologies, and by whom? We answer these questions by combining passively observed, anonymized browsing data of a large, representative sample of Americans with domain-level data on tracking from Blacklight. We find that nearly all users ($ > 99\%$) encounter at least one ad tracker or third-party cookie over the observation window. More invasive techniques like session recording, keylogging, and canvas fingerprinting are less widespread, but over half of the users visited a site employing at least one of these within the first 48 hours of the start of tracking. Linking trackers to their parent organizations reveals that a single organization, usually Google, can track over $50\%$ of web activity of more than half the users. Demographic differences in exposure are modest and often attenuate when we account for browsing volume. However, disparities by age and race remain, suggesting that what users browse, not just how much, shapes their surveillance risk.
- oai:arXiv.org:2512.24041v1
- stat.AP
- cs.CR
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Lucas Shen, Gaurav Sood
-
-
- Local Asymptotic Normality for Mixed Fractional Brownian Motion with $0<H<3/4$
- https://arxiv.org/abs/2512.24042
- arXiv:2512.24042v1 Announce Type: new
-Abstract: This paper establishes the Local Asymptotic Normality (LAN) property for the mixed fractional Brownian motion under high-frequency observations with Hurst index $H \in (0, 3/4)$. The simultaneous estimation of the volatility and the Hurst index encounters a degeneracy problem in the Fisher information matrix.
- oai:arXiv.org:2512.24042v1
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Chunhao Cai
-
-
- A persistent-homology-based Bayesian prior to identify Robin coefficient in parabolic problems
- https://arxiv.org/abs/2512.24046
- arXiv:2512.24046v1 Announce Type: new
-Abstract: We adopt a Bayesian inference approach with a persistent-homology-based prior to estimate a temporally dependent Robin coefficient arising in the analysis of convective heat transfer. We also discuss the use of a hierarchical Bayesian method for automatic selection of the regularization parameter. Numerical results demonstrate that the PH prior shows consistent improvement compared to the Gaussian and the total variation priors.
- oai:arXiv.org:2512.24046v1
- stat.CO
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Xiaomei Yang, Jiaying Jia
-
-
- Constructive Approximation of Random Process via Stochastic Interpolation Neural Network Operators
- https://arxiv.org/abs/2512.24106
- arXiv:2512.24106v1 Announce Type: new
-Abstract: In this paper, we construct a class of stochastic interpolation neural network operators (SINNOs) with random coefficients activated by sigmoidal functions. We establish their boundedness, interpolation accuracy, and approximation capabilities in the mean square sense, in probability, as well as path-wise within the space of second-order stochastic (random) processes \( L^2(\Omega, \mathcal{F},\mathbb{P}) \). Additionally, we provide quantitative error estimates using the modulus of continuity of the processes. These results highlight the effectiveness of SINNOs for approximating stochastic processes with potential applications in COVID-19 case prediction.
- oai:arXiv.org:2512.24106v1
- stat.ML
- cs.LG
- math.PR
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Sachin Saini, Uaday Singh
-
-
- A goodness-of-fit test for the Zeta distribution with unknown parameter
- https://arxiv.org/abs/2512.24128
- arXiv:2512.24128v1 Announce Type: new
-Abstract: We introduce a new goodness-of-fit test for count data on $\mathbb{N}$ for the Zeta distribution with unknown parameter. The test is built on a Stein-type characterization that uses, as Stein operator, the infinitesimal generator of a birth-death process whose stationary distribution is Zeta. The resulting $L^2$-type statistic is shown to be omnibus consistent, and we establish the limit null behavior as well as the validity of the associated parametric bootstrap procedure. In a Monte Carlo simulation study, we compare the proposed test with the only existing Zeta-specific procedure of Meintanis (2009), as well as with more general competitors based on empirical distribution functions, kernel Stein discrepancies and other Stein-type characterizations.
- oai:arXiv.org:2512.24128v1
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Bruno Ebner, Daniel Hlubinka
-
-
- Score-based sampling without diffusions: Guidance from a simple and modular scheme
- https://arxiv.org/abs/2512.24152
- arXiv:2512.24152v1 Announce Type: new
-Abstract: Sampling based on score diffusions has led to striking empirical results, and has attracted considerable attention from various research communities. It depends on the availability of (approximate) Stein score functions for various levels of additive noise. We describe and analyze a modular scheme that reduces score-based sampling to solving a short sequence of ``nice'' sampling problems, for which high-accuracy samplers are known. We show how to design forward trajectories such that both (a) the terminal distribution, and (b) each of the backward conditional distributions is defined by a strongly log concave (SLC) distribution. This modular reduction allows us to exploit \emph{any} SLC sampling algorithm in order to traverse the backwards path, and we establish novel guarantees with short proofs for both uni-modal and multi-modal densities. The use of high-accuracy routines yields $\varepsilon$-accurate answers, in either KL or Wasserstein distances, with polynomial dependence on $\log(1/\varepsilon)$ and $\sqrt{d}$ dependence on the dimension.
- oai:arXiv.org:2512.24152v1
- math.ST
- cs.LG
- stat.ML
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- M. J. Wainwright
-
-
- The Malaysian Election Corpus (MECo): Electoral Maps and Cartograms from 1954 to 2025
- https://arxiv.org/abs/2512.24211
- arXiv:2512.24211v1 Announce Type: new
-Abstract: Electoral boundaries in Malaysia are not publicly available in machine-readable form. This prevents rigorous analysis of geography-centric issues such as malapportionment and gerrymandering, and constrains spatial perspectives on electoral outcomes. We present the second component of the Malaysian Election Corpus (MECo), an open-access collection of digital electoral boundaries covering all 19 approved delimitation exercises in Malaysia's history, from the first set of Malayan boundaries in 1954 until the 2019 Sabah delimitation. We also auto-generate election-time maps for all federal and state elections up to 2025, and include equal-area and electorate-weighted cartograms to support deeper geospatial analysis. This is the first complete, publicly-available, and machine-readable record of Malaysia's electoral boundaries, and fills a critical gap in the country's electoral data infrastructure.
- oai:arXiv.org:2512.24211v1
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/publicdomain/zero/1.0/
- Thevesh Thevananthan, Danesh Prakash Chacko
-
-
- A Robust Persistent Homology: Trimming Approach
- https://arxiv.org/abs/2512.24222
- arXiv:2512.24222v1 Announce Type: new
-Abstract: This article studies a robust version of persistent homology based on a trimming methodology to capture the geometric features through the support of the data in the presence of outliers. Precisely speaking, the proposed methodology works when the outliers lie outside the main data cloud as well as inside the data cloud. In the course of the theoretical study, it is established that the Bottleneck distance between the proposed robust version of persistent homology and its population analogue can be made arbitrarily small with a certain rate for a sufficiently large sample size. The practicability of the methodology is shown for various simulated data and benchmark real data associated with cellular biology.
- oai:arXiv.org:2512.24222v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Tuhin Subhra Mahato, Subhra Sankar Dhar
-
-
- Valid and Efficient Two-Stage Latent Subgroup Analysis with Observational Data
- https://arxiv.org/abs/2512.24223
- arXiv:2512.24223v1 Announce Type: new
-Abstract: Subgroup analysis evaluates treatment effects across multiple sub-populations. When subgroups are defined by latent memberships inferred from imperfect measurements, the analysis typically involves two inter-connected models, a latent class model and a subgroup outcome model. The classical one-stage framework, which models the joint distribution of the two models, may be infeasible with observational data containing many confounders. The two-stage framework, which first estimates the latent class model and then performs subgroup analysis using estimated latent memberships, can accommodate potential confounders but may suffer from bias issues due to misclassification of latent subgroup memberships. This paper focuses on latent subgroups inferred from binary item responses and addresses when and how a valid two-stage latent subgroup analysis can be made with observational data. We investigate the maximum misclassification rate that a valid two-stage framework can tolerate. Introducing a spectral method perspective, we propose a two-stage approach to achieve the desired misclassification rate with the blessing of many item responses. Our method accommodates high-dimensional confounders, is computationally efficient and robust to noninformative items. In observational studies, our methods lead to consistent estimation and valid inference on latent subgroup effects. We demonstrate its merit through simulation studies and an application to educational assessment data.
- oai:arXiv.org:2512.24223v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yuanhui Luo, Xinzhou Guo, Yuqi Gu
-
-
- Topological Spatial Graph Coarsening
- https://arxiv.org/abs/2512.24327
- arXiv:2512.24327v1 Announce Type: new
-Abstract: Spatial graphs are particular graphs for which the nodes are localized in space (e.g., public transport networks, molecules, branching biological structures). In this work, we consider the problem of spatial graph reduction, which aims to find a smaller spatial graph (i.e., with fewer nodes) with the same overall structure as the initial one. In this context, performing the graph reduction while preserving the main topological features of the initial graph is particularly relevant, due to the additional spatial information. Thus, we propose a topological spatial graph coarsening approach based on a new framework that finds a trade-off between the graph reduction and the preservation of the topological characteristics. The coarsening is realized by collapsing short edges. In order to capture the topological information required to calibrate the reduction level, we adapt the construction of classical topological descriptors made for point clouds (the so-called persistence diagrams) to spatial graphs. This construction relies on the introduction of a new filtration called the triangle-aware graph filtration. Our coarsening approach is parameter-free and we prove that it is equivariant under rotations, translations and scaling of the initial spatial graph. We evaluate the performance of our method on synthetic and real spatial graphs, and show that it significantly reduces the graph sizes while preserving the relevant topological information.
- oai:arXiv.org:2512.24327v1
- stat.ML
- cs.CG
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Anna Calissano, Etienne Lasalle
-
-
- A Novel Approach for Data Integration with Multiple Heterogeneous Data Sources
- https://arxiv.org/abs/2512.24342
- arXiv:2512.24342v1 Announce Type: new
-Abstract: The integration of data from multiple sources is increasingly used to achieve larger sample sizes and enhance population diversity. Our previous work established that, under random sampling from the same underlying population, integrating large incomplete datasets with summary-level data produces unbiased parameter estimates. In this study, we develop a novel statistical framework that enables the integration of summary-level data with information from heterogeneous data sources by leveraging auxiliary information. The proposed approach estimates study-specific sampling weights using this auxiliary information and calibrates the estimating equations to obtain the full set of model parameters. We evaluate the performance of the proposed method through simulation studies under various sampling designs and illustrate its application by reanalyzing U.S. cancer registry data combined with summary-level odds ratio estimates for selected colorectal cancer (CRC) risk factors, while relaxing the random sampling assumption.
- oai:arXiv.org:2512.24342v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Farimah Shamsi, Andriy Derkach
-
-
- Bayesian inference for functional extreme events defined via partially unobserved processes
- https://arxiv.org/abs/2512.24356
- arXiv:2512.24356v1 Announce Type: new
-Abstract: In order to describe the extremal behaviour of some stochastic process $X$, approaches from univariate extreme value theory are typically generalized to the spatial domain. In particular, generalized peaks-over-threshold approaches allow for the consideration of single extreme events. These can be flexibly defined as exceedances of a risk functional $r$, such as a spatial average, applied to $X$. Inference for the resulting limit process, the so-called $r$-Pareto process, requires the evaluation of $r(X)$ and thus the knowledge of the whole process $X$. In many practical applications, however, observations of $X$ are only available at scattered sites. To overcome this issue, we propose a two-step MCMC-algorithm in a Bayesian framework. In a first step, we sample from $X$ conditionally on the observations in order to evaluate which observations lead to $r$-exceedances. In a second step, we use these exceedances to sample from the posterior distribution of the parameters of the limiting $r$-Pareto process. Alternating these steps results in a full Bayesian model for the extremes of $X$. We show that, under appropriate assumptions, the probability of classifying an observation as $r$-exceedance in the first step converges to the desired probability. Furthermore, given the first step, the distribution of the Markov chain constructed in the second step converges to the posterior distribution of interest. The procedure is compared to the Bayesian version of the standard procedure in a simulation study.
- oai:arXiv.org:2512.24356v1
- stat.ME
- stat.CO
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Max Thannheimer, Marco Oesting
-
-
- Implicit score matching meets denoising score matching: improved rates of convergence and log-density Hessian estimation
- https://arxiv.org/abs/2512.24378
- arXiv:2512.24378v1 Announce Type: new
-Abstract: We study the problem of estimating the score function using both implicit score matching and denoising score matching. Assuming that the data distribution exhibits a low-dimensional structure, we prove that implicit score matching is able not only to adapt to the intrinsic dimension, but also to achieve the same rates of convergence as denoising score matching in terms of the sample size. Furthermore, we demonstrate that both methods allow us to estimate log-density Hessians without the curse of dimensionality by simple differentiation. This justifies convergence of ODE-based samplers for generative diffusion models. Our approach is based on Gagliardo-Nirenberg-type inequalities relating weighted $L^2$-norms of smooth functions and their derivatives.
- oai:arXiv.org:2512.24378v1
- math.ST
- cs.LG
- stat.ML
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Konstantin Yakovlev, Anna Markovich, Nikita Puchkin
-
-
- Geometric criteria for identifying extremal dependence and flexible modeling via additive mixtures
- https://arxiv.org/abs/2512.24392
- arXiv:2512.24392v1 Announce Type: new
-Abstract: The framework of geometric extremes is based on the convergence of scaled sample clouds onto a limit set, characterized by a gauge function, with the shape of the limit set determining extremal dependence structures. While it is known that a blunt limit set implies asymptotic independence, the absence of bluntness can be linked to both asymptotic dependence and independence. Focusing on the bivariate case, under a truncated gamma modeling assumption with bounded angular density, we show that a ``pointy'' limit set implies asymptotic dependence, thus offering practical geometric criteria for identifying extremal dependence classes. Suitable models for the gauge function offer the ability to capture asymptotically independent or dependent data structures, without requiring prior knowledge of the true extremal dependence structure. The geometric approach thus offers a simple alternative to various parametric copula models that have been developed for this purpose in recent years. We consider two types of additively mixed gauge functions that provide a smooth interpolation between asymptotic dependence and asymptotic independence. We derive their explicit forms, explore their properties, and establish connections to the developed geometric criteria. Through a simulation study, we evaluate the effectiveness of the geometric approach with additively mixed gauge functions, comparing its performance to existing methodologies that account for both asymptotic dependence and asymptotic independence. The methodology is computationally efficient and yields reliable performance across various extremal dependence scenarios.
- oai:arXiv.org:2512.24392v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jeongjin Lee, Jennifer Wadsworth
-
-
- Demystifying Proximal Causal Inference
- https://arxiv.org/abs/2512.24413
- arXiv:2512.24413v1 Announce Type: new
-Abstract: Proximal causal inference (PCI) has emerged as a promising framework for identifying and estimating causal effects in the presence of unobserved confounders. While many traditional causal inference methods rely on the assumption of no unobserved confounding, this assumption is likely often violated. PCI mitigates this challenge by relying on an alternative set of assumptions regarding the relationships between treatment, outcome, and auxiliary variables that serve as proxies for unmeasured confounders. We review existing identification results, discuss the assumptions necessary for valid causal effect estimation via PCI, and compare different PCI estimation methods. We offer practical guidance on operationalizing PCI, with a focus on selecting and evaluating proxy variables using domain knowledge, measurement error perspectives, and negative control analogies. Through conceptual examples, we demonstrate tensions in proxy selection and discuss the importance of clearly defining the unobserved confounding mechanism. By bridging formal results with applied considerations, this work aims to demystify PCI, encourage thoughtful use in practice, and identify open directions for methodological development and empirical research.
- oai:arXiv.org:2512.24413v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Grace V. Ringlein, Trang Quynh Nguyen, Peter P. Zandi, Elizabeth A. Stuart, Harsh Parikh
-
-
- Exact finite mixture representations for species sampling processes
- https://arxiv.org/abs/2512.24414
- arXiv:2512.24414v1 Announce Type: new
-Abstract: Random probability measures, together with their constructions, representations, and associated algorithms, play a central role in modern Bayesian inference. A key class is that of proper species sampling processes, which offer a relatively simple yet versatile framework that extends naturally to non-exchangeable settings. We revisit this class from a computational perspective and show that they admit exact finite mixture representations. In particular, we prove that any proper species sampling process can be written, at the prior level, as a finite mixture with a latent truncation variable and reweighted atoms, while preserving its distributional features exactly. These finite formulations can be used as drop-in replacements in Bayesian mixture models, recasting posterior computation in terms of familiar finite-mixture machinery. This yields straightforward MCMC implementations and tractable expressions, while avoiding ad hoc truncations and model-specific constructions. The resulting representation preserves the full generality of the original infinite-dimensional priors while enabling practical gains in algorithm design and implementation.
- oai:arXiv.org:2512.24414v1
- stat.ME
- math.ST
- stat.CO
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Rams\'es H. Mena, Christos Merkatas, Theodoros Nicoleris, Carlos E. Rodr\'iguez
-
-
- Model-Assisted Bayesian Estimators of Transparent Population Level Summary Measures for Ordinal Outcomes in Randomized Controlled Trials
- https://arxiv.org/abs/2512.24442
- arXiv:2512.24442v1 Announce Type: new
-Abstract: In randomized controlled trials, ordinal outcomes typically improve statistical efficiency over binary outcomes. The treatment effect on an ordinal outcome is usually described by the odds ratio from a proportional odds model, but this summary measure lacks transparency with respect to its emphasis on the components of the ordinal outcome when proportional odds is violated. We propose various summary measures for ordinal outcomes that are fully transparent in this regard, including 'weighted geometric mean' odds ratios and relative risks, and 'weighted mean' risk differences. We also develop and evaluate efficient model-assisted Bayesian estimators for these population level summary measures based on non-proportional odds models that facilitate covariate adjustment with marginalization via the Bayesian bootstrap. We propose a weighting scheme that engenders appealing invariance properties, including to whether the ordinal outcome is ordered from best to worst versus worst to best. Using computer simulation, we show that comparative testing based on the proposed population level summary measures performs well relative to the conventional proportional odds approach. We also report an analysis of the COVID-OUT trial, which exhibits evidence of non-proportional odds.
- oai:arXiv.org:2512.24442v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lindsey E. Turner, Carolyn T. Bramante, Thomas A. Murray
-
-
- Robust reduced rank regression under heavy-tailed noise and missing data via non-convex penalization
- https://arxiv.org/abs/2512.24450
- arXiv:2512.24450v1 Announce Type: new
-Abstract: Reduced rank regression (RRR) is a fundamental tool for modeling multiple responses through low-dimensional latent structures, offering both interpretability and strong predictive performance in high-dimensional settings. Classical RRR methods, however, typically rely on squared loss and Gaussian noise assumptions, rendering them sensitive to heavy-tailed errors, outliers, and data contamination. Moreover, the presence of missing data--common in modern applications--further complicates reliable low-rank estimation. In this paper, we propose a robust reduced rank regression framework that simultaneously addresses heavy-tailed noise, outliers, and missing data. Our approach combines a robust Huber loss with nonconvex spectral regularization, specifically the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD). Unlike convex nuclear-norm regularization, the proposed nonconvex penalties alleviate excessive shrinkage and enable more accurate recovery of the underlying low-rank structure. The method also accommodates missing data in the response matrix without requiring imputation. We develop an efficient proximal gradient algorithm based on alternating updates and tailored spectral thresholding. Extensive simulation studies demonstrate that the proposed methods substantially outperform nuclear-norm-based and non-robust alternatives under heavy-tailed noise and contamination. An application to a cancer cell line data set further illustrates the practical advantages of the proposed robust RRR framework.
- Our method is implemented in the R package rrpackrobust, available at https://github.com/tienmt/rrpackrobust.
- oai:arXiv.org:2512.24450v1
- stat.ME
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- The Tien Mai
-
-
- Improving the stability of the covariance-controlled adaptive Langevin thermostat for large-scale Bayesian sampling
- https://arxiv.org/abs/2512.24515
- arXiv:2512.24515v1 Announce Type: new
-Abstract: Stochastic gradient Langevin dynamics and its variants approximate the likelihood of an entire dataset, via random (and typically much smaller) subsets, in the setting of Bayesian sampling. Due to the (often substantial) improvement of the computational efficiency, they have been widely used in large-scale machine learning applications. It has been demonstrated that the so-called covariance-controlled adaptive Langevin (CCAdL) thermostat, which incorporates an additional term involving the covariance matrix of the noisy force, outperforms popular alternative methods. A moving average is used in CCAdL to estimate the covariance matrix of the noisy force, in which case the covariance matrix will converge to a constant matrix in the long-time limit. Moreover, it appears in our numerical experiments that the use of a moving average could reduce the stability of the numerical integrators, thereby limiting the largest usable stepsize. In this article, we propose a modified CCAdL (i.e., mCCAdL) thermostat that uses the scaling part of the scaling and squaring method together with a truncated Taylor series approximation to the exponential to numerically approximate the exact solution to the subsystem involving the additional term proposed in CCAdL. We also propose a symmetric splitting method for mCCAdL, instead of an Euler-type discretisation used in the original CCAdL thermostat. We demonstrate in our numerical experiments that the newly proposed mCCAdL thermostat achieves a substantial improvement in the numerical stability over the original CCAdL thermostat, while significantly outperforming popular alternative stochastic gradient methods in terms of the numerical accuracy for large-scale machine learning applications.
- oai:arXiv.org:2512.24515v1
- stat.ML
- cs.LG
- stat.CO
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Jiani Wei, Xiaocheng Shang
-
-
- Power Analysis is Essential: High-Powered Tests Suggest Minimal to No Effect of Rounded Shapes on Click-Through Rates
- https://arxiv.org/abs/2512.24521
- arXiv:2512.24521v1 Announce Type: new
-Abstract: Underpowered studies (power below 50%) suffer from the winner's curse: a statistically significant result must exaggerate the true treatment effect to meet the significance threshold. A study by Dipayan Biswas, Annika Abell, and Roger Chacko published in the Journal of Consumer Research (2023) reported that in an A/B test simply rounding the corners of square buttons increased the online click-through rate by 55% (p-value 0.037), a striking finding with potentially wide-ranging implications for the digital industry that is seeking to enhance consumer engagement. Drawing on our experience with tens of thousands of A/B tests, many involving similar user interface modifications, we found this dramatic claim implausibly large. To evaluate the claim, we conducted three high-powered A/B tests, each involving over two thousand times more users than the original study. All three experiments yielded effect size estimates that were approximately two orders of magnitude smaller than initially reported, with 95% confidence intervals that include zero, that is, not statistically significant at the 0.05 level. Two additional independent replications by Evidoo found similarly small effects. These findings underscore the critical importance of power analysis and experimental design to increase trust and reproducibility of results.
- oai:arXiv.org:2512.24521v1
- stat.ME
- cs.HC
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Ron Kohavi, Jakub Linowski, Lukas Vermeer, Fabrice Boisseranc, Joachim Furuseth, Andrew Gelman, Guido Imbens, Ravikiran Rajagopal
-
-
- Dimension-free estimators of gradients of functions with(out) non-independent variables
- https://arxiv.org/abs/2512.24527
- arXiv:2512.24527v1 Announce Type: new
-Abstract: This study proposes a unified stochastic framework for approximating and computing the gradient of every smooth function evaluated at non-independent variables, using $\ell_p$-spherical distributions on $\mathbb{R}^d$ with $d, p\geq 1$. The upper bounds of the bias of the gradient surrogates do not suffer from the curse of dimensionality for any $p\geq 1$. Also, the mean squared errors (MSEs) of the gradient estimators are bounded by $K_0 N^{-1} d$ for any $p \in [1, 2]$, and by $K_1 N^{-1} d^{2/p}$ when $2 \leq p \ll d$, with $N$ the sample size and $K_0, K_1$ some constants. Taking $\max\left\{2, \log(d) \right\} < p \ll d$ allows for achieving dimension-free upper bounds on the MSEs. In the case where $d\ll p< +\infty$, the upper bound $K_2 N^{-1} d^{2-2/p}/ (d+2)^2$ is reached, with $K_2$ a constant. Such results lead to dimension-free MSEs of the proposed estimators, which boil down to estimators of the traditional gradient when the variables are independent. Numerical comparisons show the efficiency of the proposed approach.
- oai:arXiv.org:2512.24527v1
- math.ST
- math.OC
- math.PR
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Matieyendou Lamboni
-
-
- MultiRisk: Multiple Risk Control via Iterative Score Thresholding
- https://arxiv.org/abs/2512.24587
- arXiv:2512.24587v1 Announce Type: new
-Abstract: As generative AI systems are increasingly deployed in real-world applications, regulating multiple dimensions of model behavior has become essential. We focus on test-time filtering: a lightweight mechanism for behavior control that compares performance scores to estimated thresholds, and modifies outputs when these bounds are violated. We formalize the problem of enforcing multiple risk constraints with user-defined priorities, and introduce two efficient dynamic programming algorithms that leverage this sequential structure. The first, MULTIRISK-BASE, provides a direct finite-sample procedure for selecting thresholds, while the second, MULTIRISK, leverages data exchangeability to guarantee simultaneous control of the risks. Under mild assumptions, we show that MULTIRISK achieves nearly tight control of all constraint risks. The analysis requires an intricate iterative argument, upper bounding the risks by introducing several forms of intermediate symmetrized risk functions, and carefully lower bounding the risks by recursively counting jumps in symmetrized risk functions between appropriate risk levels. We evaluate our framework on a three-constraint Large Language Model alignment task using the PKU-SafeRLHF dataset, where the goal is to maximize helpfulness subject to multiple safety constraints, and where scores are generated by a Large Language Model judge and a perplexity filter. Our experimental results show that our algorithm can control each individual risk at close to the target level.
- oai:arXiv.org:2512.24587v1
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Sunay Joshi, Yan Sun, Hamed Hassani, Edgar Dobriban
-
-
- Multiple Testing of One-Sided Hypotheses with Conservative $p$-values
- https://arxiv.org/abs/2512.24588
- arXiv:2512.24588v1 Announce Type: new
-Abstract: We study a large-scale one-sided multiple testing problem in which test statistics follow normal distributions with unit variance, and the goal is to identify signals with positive mean effects. A common approach is to compute $p$-values under the assumption that all null means are exactly zero and then apply standard multiple testing procedures such as the Benjamini--Hochberg (BH) or Storey--BH method. However, because the null hypothesis is composite, some null means may be strictly negative. In this case, the resulting $p$-values are conservative, leading to a substantial loss of power. Existing methods address this issue by modifying the multiple testing procedure itself, for example through conditioning strategies or discarding rules. In contrast, we focus on correcting the $p$-values so that they are exact under the null. Specifically, we estimate the marginal null distribution of the test statistics within an empirical Bayes framework and construct refined $p$-values based on this estimated distribution. These refined $p$-values can then be directly used in standard multiple testing procedures without modification. Extensive simulation studies show that the proposed method substantially improves power when $p$-values are conservative, while achieving comparable performance to existing methods when $p$-values are exact. An application to phosphorylation data further demonstrates the practical effectiveness of our approach.
- oai:arXiv.org:2512.24588v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kwangok Seo, Johan Lim, Hyungwon Choi, Jaesik Jeong
-
-
- Generalized Poisson Matrix Factorization for Overdispersed Count Data
- https://arxiv.org/abs/2512.24604
- arXiv:2512.24604v1 Announce Type: new
-Abstract: Non-negative matrix factorization (NMF) is widely used as a feature extraction technique for matrices with non-negative entries, such as image data, purchase histories, and other types of count data. In NMF, a non-negative matrix is decomposed into the product of two non-negative matrices, and the approximation accuracy is evaluated by a loss function. If the Kullback-Leibler divergence is chosen as the loss function, the estimation coincides with maximum likelihood under the assumption that the data entries are distributed according to a Poisson distribution. To address overdispersion, negative binomial matrix factorization has recently been proposed as an extension of the Poisson-based model. However, the negative binomial distribution often generates an excessive number of zeros, which limits its expressive capacity. In this study, we propose a non-negative matrix factorization based on the generalized Poisson distribution, which can flexibly accommodate overdispersion, and we introduce a maximum likelihood approach for parameter estimation. This methodology provides a more versatile framework than existing models, thereby extending the applicability of NMF to a broader class of count data.
- oai:arXiv.org:2512.24604v1
- stat.CO
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Ryo Ohashi, Hiroyasu Abe, Fumitake Sakaori
-
-
- Empirical Bayes Method for Large Scale Multiple Testing with Heteroscedastic Errors
- https://arxiv.org/abs/2512.24611
- arXiv:2512.24611v1 Announce Type: new
-Abstract: In this paper, we address the normal mean inference problem, which involves testing multiple means of normal random variables with heteroscedastic variances. Most existing empirical Bayes methods for this setting are developed under restrictive assumptions, such as the scaled inverse-chi-squared prior for variances and unimodality for the non-null mean distribution. However, when either of these assumptions is violated, these methods often fail to control the false discovery rate (FDR) at the target level or suffer from a substantial loss of power. To overcome these limitations, we propose a new empirical Bayes method, gg-Mix, which assumes only independence between the normal means and variances, without imposing any structural restrictions on their distributions. We thoroughly evaluate the FDR control and power of gg-Mix through extensive numerical studies and demonstrate its superior performance compared to existing methods. Finally, we apply gg-Mix to three real data examples to further illustrate the practical advantages of our approach.
- oai:arXiv.org:2512.24611v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kwangok Seo, Johan Lim, Kaiwen Wang, Dohwan Park, Shota Katayama, Xinlei Wang
-
-
- $\ell_0$-Regularized Item Response Theory Model for Robust Ideal Point Estimation
- https://arxiv.org/abs/2512.24642
- arXiv:2512.24642v1 Announce Type: new
-Abstract: Ideal point estimation methods face a significant challenge when legislators engage in protest voting -- strategically voting against their party to express dissatisfaction. Such votes introduce attenuation bias, making ideologically extreme legislators appear artificially moderate. We propose a novel statistical framework that extends the fast EM-based estimation approach of \cite{Imai2016} using an $\ell_0$ regularization method to handle protest votes. Through simulation studies, we demonstrate that our proposed method maintains estimation accuracy even with high proportions of protest votes, while being substantially faster than MCMC-based methods. Applying our method to the 116th and 117th U.S. House of Representatives, we successfully recover the extreme liberal positions of ``the Squad'', whose protest votes had caused conventional methods to misclassify them as moderates. While conventional methods rank Ocasio-Cortez as more conservative than 69\% of Democrats, our method places her firmly in the progressive wing, aligning with her documented policy positions. This approach provides both robust ideal point estimates and systematic identification of protest votes, facilitating deeper analysis of strategic voting behavior in legislatures.
- oai:arXiv.org:2512.24642v1
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kwangok Seo, Johan Lim, Seokho Lee, Jong Hee Park
-
-
- Nonparametric Bandits with Single-Index Rewards: Optimality and Adaptivity
- https://arxiv.org/abs/2512.24669
- arXiv:2512.24669v1 Announce Type: new
-Abstract: Contextual bandits are a central framework for sequential decision-making, with applications ranging from recommendation systems to clinical trials. While nonparametric methods can flexibly model complex reward structures, they suffer from the curse of dimensionality. We address this challenge using a single-index model, which projects high-dimensional covariates onto a one-dimensional subspace while preserving nonparametric flexibility.
- We first develop a nonasymptotic theory for offline single-index regression for each arm, combining maximum rank correlation for index estimation with local polynomial regression. Building on this foundation, we propose a single-index bandit algorithm and establish its convergence rate. We further derive a matching lower bound, showing that the algorithm achieves minimax-optimal regret independent of the ambient dimension $d$, thereby overcoming the curse of dimensionality.
- We also establish an impossibility result for adaptation: without additional assumptions, no policy can adapt to unknown smoothness levels. Under a standard self-similarity condition, however, we construct a policy that remains minimax-optimal while automatically adapting to the unknown smoothness. Finally, as the dimension $d$ increases, our algorithm continues to achieve minimax-optimal regret, revealing a phase transition that characterizes the fundamental limits of single-index bandit learning.
- oai:arXiv.org:2512.24669v1
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Wanteng Ma, T. Tony Cai
-
-
- Reformulating Confidence as Extended Likelihood
- https://arxiv.org/abs/2512.24701
- arXiv:2512.24701v1 Announce Type: new
-Abstract: Fisher's fiducial probability has recently received renewed attention under the name confidence. In this paper, we reformulate it within an extended-likelihood framework, a representation that helps to resolve many long-standing controversies. The proposed formulation accommodates multi-dimensional parameters and shows how higher-order approximations can be used to refine standard asymptotic confidence statements.
- oai:arXiv.org:2512.24701v1
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Youngjo Lee
-
-
- Quasi-Maximum Likelihood Estimation for a Genuinely Unbalanced Dynamic Network Panel Data Model
- https://arxiv.org/abs/2512.24748
- arXiv:2512.24748v1 Announce Type: new
-Abstract: This paper develops a quasi-maximum likelihood estimator for genuinely unbalanced dynamic network panel data models with individual fixed effects. We propose a model that accommodates contemporaneous and lagged network spillovers, temporal dependence, and a listing effect that activates upon a unit's first appearance in the panel. We establish the consistency of the QMLE as both $N$ and $T$ go to infinity, derive its asymptotic distribution, and identify an asymptotic bias arising from incidental parameters when $N$ is asymptotically large relative to $T$. Based on the asymptotic bias expression, we propose a bias-corrected estimator that is asymptotically unbiased and normally distributed under appropriate regularity conditions. Monte Carlo experiments examine the finite sample performance of the bias-corrected estimator across different criteria, including bias, RMSE, coverage probability, and the normality of the estimator. The empirical application to Airbnb listings from New Zealand and New York City reveals region-specific patterns in spatial and temporal price transmission, illustrating the importance of modeling genuine unbalancedness in dynamic network settings.
- oai:arXiv.org:2512.24748v1
- stat.ME
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zhijian Wang, Xingbai Xu, Tuo Liu
-
-
- Sparse Offline Reinforcement Learning with Corruption Robustness
- https://arxiv.org/abs/2512.24768
- arXiv:2512.24768v1 Announce Type: new
-Abstract: We investigate robustness to strong data corruption in offline sparse reinforcement learning (RL). In our setting, an adversary may arbitrarily perturb a fraction of the collected trajectories from a high-dimensional but sparse Markov decision process, and our goal is to estimate a near optimal policy. The main challenge is that, in the high-dimensional regime where the number of samples $N$ is smaller than the feature dimension $d$, exploiting sparsity is essential for obtaining non-vacuous guarantees but has not been systematically studied in offline RL. We analyse the problem under uniform coverage and sparse single-concentrability assumptions. While Least Square Value Iteration (LSVI), a standard approach for robust offline RL, performs well under uniform coverage, we show that integrating sparsity into LSVI is unnatural, and its analysis may break down due to overly pessimistic bonuses. To overcome this, we propose actor-critic methods with sparse robust estimator oracles, which avoid the use of pointwise pessimistic bonuses and provide the first non-vacuous guarantees for sparse offline RL under single-policy concentrability coverage. Moreover, we extend our results to the contaminated setting and show that our algorithm remains robust under strong contamination. Our results provide the first non-vacuous guarantees in high-dimensional sparse MDPs with single-policy concentrability coverage and corruption, showing that learning a near-optimal policy remains possible in regimes where traditional robust offline RL techniques may fail.
- oai:arXiv.org:2512.24768v1
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Nam Phuong Tran, Andi Nika, Goran Radanovic, Long Tran-Thanh, Debmalya Mandal
-
-
- Approximate Computation via Le Cam Simulability
- https://arxiv.org/abs/2512.24860
- arXiv:2512.24860v1 Announce Type: new
-Abstract: We propose a decision-theoretic framework for computational complexity, complementary to classical theory: moving from syntactic exactness (Turing / Shannon) to semantic simulability (Le Cam). While classical theory classifies problems by the cost of exact solution, modern computation often seeks only decision-valid approximations. We introduce a framework where "computation" is viewed as the efficient simulation of a target statistical experiment within a bounded risk distortion (Le Cam deficiency).
- We formally define computational deficiency ($\delta_{\text{poly}}$) and use it to construct the complexity class LeCam-P (Decision-Robust Polynomial Time), characterizing problems that may be syntactically hard but semantically easy to approximate. We show that classical Karp reductions can be viewed as zero-deficiency simulations, and that approximate reductions correspond to bounded deficiency. Furthermore, we establish the No-Free-Transfer Inequality, showing that strictly invariant representations inevitably destroy decision-relevant information. This framework offers a statistical perspective on approximation theory, bridging the gap between algorithmic complexity and decision theory.
- oai:arXiv.org:2512.24860v1
- math.ST
- cs.CC
- cs.IT
- math.IT
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Deniz Akdemir
-
-
- Are First-Order Diffusion Samplers Really Slower? A Fast Forward-Value Approach
- https://arxiv.org/abs/2512.24927
- arXiv:2512.24927v1 Announce Type: new
-Abstract: Higher-order ODE solvers have become a standard tool for accelerating diffusion probabilistic model (DPM) sampling, motivating the widespread view that first-order methods are inherently slower and that increasing discretization order is the primary path to faster generation. This paper challenges this belief and revisits acceleration from a complementary angle: beyond solver order, the placement of DPM evaluations along the reverse-time dynamics can substantially affect sampling accuracy in the low-neural function evaluation (NFE) regime.
- We propose a novel training-free, first-order sampler whose leading discretization error has the opposite sign to that of DDIM. Algorithmically, the method approximates the forward-value evaluation via a cheap one-step lookahead predictor. We provide theoretical guarantees showing that the resulting sampler provably approximates the ideal forward-value trajectory while retaining first-order convergence. Empirically, across standard image generation benchmarks (CIFAR-10, ImageNet, FFHQ, and LSUN), the proposed sampler consistently improves sample quality under the same NFE budget and can be competitive with, and sometimes outperform, state-of-the-art higher-order samplers. Overall, the results suggest that the placement of DPM evaluations provides an additional and largely independent design angle for accelerating diffusion sampling.
- oai:arXiv.org:2512.24927v1
- stat.ML
- cs.LG
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Yuchen Jiao, Na Li, Changxiao Cai, Gen Li
-
-
- Basic Inequalities for First-Order Optimization with Applications to Statistical Risk Analysis
- https://arxiv.org/abs/2512.24999
- arXiv:2512.24999v1 Announce Type: new
-Abstract: We introduce \textit{basic inequalities} for first-order iterative optimization algorithms, forming a simple and versatile framework that connects implicit and explicit regularization. While related inequalities appear in the literature, we isolate and highlight a specific form and develop it as a well-rounded tool for statistical analysis. Let $f$ denote the objective function to be optimized. Given a first-order iterative algorithm initialized at $\theta_0$ with current iterate $\theta_T$, the basic inequality upper bounds $f(\theta_T)-f(z)$ for any reference point $z$ in terms of the accumulated step sizes and the distances between $\theta_0$, $\theta_T$, and $z$. The bound translates the number of iterations into an effective regularization coefficient in the loss function. We demonstrate this framework through analyses of training dynamics and prediction risk bounds. In addition to revisiting and refining known results on gradient descent, we provide new results for mirror descent with Bregman divergence projection, for generalized linear models trained by gradient descent and exponentiated gradient descent, and for randomized predictors. We illustrate and supplement these theoretical findings with experiments on generalized linear models.
- oai:arXiv.org:2512.24999v1
- math.ST
- cs.LG
- cs.NA
- math.NA
- math.OC
- stat.ML
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Seunghoon Paik, Kangjie Zhou, Matus Telgarsky, Ryan J. Tibshirani
-
-
- Modewise Additive Factor Model for Matrix Time Series
- https://arxiv.org/abs/2512.25025
- arXiv:2512.25025v1 Announce Type: new
-Abstract: We introduce a Modewise Additive Factor Model (MAFM) for matrix-valued time series that captures row-specific and column-specific latent effects through an additive structure, offering greater flexibility than multiplicative frameworks such as Tucker and CP factor models. In MAFM, each observation decomposes into a row-factor component, a column-factor component, and noise, allowing distinct sources of variation along different modes to be modeled separately. We develop a computationally efficient two-stage estimation procedure: Modewise Inner-product Eigendecomposition (MINE) for initialization, followed by Complement-Projected Alternating Subspace Estimation (COMPAS) for iterative refinement. The key methodological innovation is that orthogonal complement projections completely eliminate cross-modal interference when estimating each loading space. We establish convergence rates for the estimated factor loading matrices under proper conditions. We further derive asymptotic distributions for the loading matrix estimators and develop consistent covariance estimators, yielding a data-driven inference framework that enables confidence interval construction and hypothesis testing. As a technical contribution of independent interest, we establish matrix Bernstein inequalities for quadratic forms of dependent matrix time series. Numerical experiments on synthetic and real data demonstrate the advantages of the proposed method over existing approaches.
- oai:arXiv.org:2512.25025v1
- stat.ME
- econ.EM
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Elynn Chen, Yuefeng Han, Jiayu Li, Ke Xu
-
-
- Bayesian Elastic Net Regression with Structured Prior Dependence
- https://arxiv.org/abs/2512.25045
- arXiv:2512.25045v1 Announce Type: new
-Abstract: Many regularization priors for Bayesian regression assume the regression coefficients are a priori independent. In particular this is the case for standard Bayesian treatments of the lasso and the elastic net. While independence may be reasonable in some data-analytic settings, incorporating dependence in these prior distributions provides greater modeling flexibility. This paper introduces the orthant normal distribution in its general form and shows how it can be used to structure prior dependence in the Bayesian elastic net regression model. An L1-regularized version of Zellner's g prior is introduced as a special case, creating a new link between the literature on penalized optimization and an important class of regression priors. Computation is challenging due to an intractable normalizing constant in the prior. We avoid this issue by modifying slightly a standard prior of convenience for the hyperparameters in such a way to enable simple and fast Gibbs sampling of the posterior distribution. The benefit of including structured prior dependence in the Bayesian elastic net regression model is demonstrated through simulation and a near-infrared spectroscopy data example.
- oai:arXiv.org:2512.25045v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Christopher M. Hans, Ningyi Liu
-
-
- Sequential Bayesian parameter-state estimation in dynamical systems with noisy and incomplete observations via a variational framework
- https://arxiv.org/abs/2512.25056
- arXiv:2512.25056v1 Announce Type: new
-Abstract: Online joint estimation of unknown parameters and states in a dynamical system with uncertainty quantification is crucial in many applications. For example, digital twins (DTs) dynamically update their knowledge of model parameters and states to support prediction and decision-making. Reliability and computational speed are vital for DTs. Online parameter-state estimation ensures computational efficiency, while uncertainty quantification is essential for making reliable predictions and decisions. In parameter-state estimation, the joint distribution of the state and model parameters conditioned on the data, termed the joint posterior, provides accurate uncertainty quantification. Because the joint posterior is generally intractable to compute, this paper presents an online variational inference framework to compute its approximation at each time step. The approximation is factorized into a marginal distribution over the model parameters and a state distribution conditioned on the parameters. This factorization enables recursive updates through a two-stage procedure: first, the parameter posterior is approximated via variational inference; second, the state distribution conditioned on the parameters is computed using Gaussian filtering based on the estimated parameter posterior. The algorithmic design is supported by a theorem establishing upper bounds on the joint posterior approximation error. Numerical experiments demonstrate that the proposed method (i) matches the performance of the joint particle filter in low-dimensional problems, accurately inferring both unobserved states and unknown parameters of dynamical and observation models; (ii) remains robust under noisy, partial observations and model discrepancies in a chaotic Lorenz 96 system; and (iii) scales effectively to a high-dimensional convection-diffusion system, where it outperforms the joint ensemble Kalman filter.
- oai:arXiv.org:2512.25056v1
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- new
- http://creativecommons.org/licenses/by/4.0/
- Liliang Wang, Alex Gorodetsky
-
-
- Overflow-Avoiding Memory AMP
- https://arxiv.org/abs/2407.03898
- arXiv:2407.03898v1 Announce Type: cross
-Abstract: Approximate Message Passing (AMP) type algorithms are widely used for signal recovery in high-dimensional noisy linear systems. Recently, a principle called Memory AMP (MAMP) was proposed. Leveraging this principle, the gradient descent MAMP (GD-MAMP) algorithm was designed, inheriting the strengths of AMP and OAMP/VAMP. In this paper, we first provide an overflow-avoiding GD-MAMP (OA-GD-MAMP) to address the overflow problem that arises from some intermediate variables exceeding the range of floating point numbers. Second, we develop a complexity-reduced GD-MAMP (CR-GD-MAMP) to reduce the number of matrix-vector products per iteration by 1/3 (from 3 to 2) with little to no impact on the convergence speed.
- oai:arXiv.org:2407.03898v1
- cs.IT
- eess.SP
- math.IT
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shunqi Huang, Lei Liu, Brian M. Kurkoski
-
-
- New Exam Security Questions in the AI Era: Comparing AI-Generated Item Similarity Between Naive and Detail-Guided Prompting Approaches
- https://arxiv.org/abs/2512.23729
- arXiv:2512.23729v1 Announce Type: cross
-Abstract: Large language models (LLMs) have emerged as powerful tools for generating domain-specific multiple-choice questions (MCQs), offering efficiency gains for certification boards but raising new concerns about examination security. This study investigated whether LLM-generated items created with proprietary guidance differ meaningfully from those generated using only publicly available resources. Four representative clinical activities from the American Board of Family Medicine (ABFM) blueprint were mapped to corresponding Entrustable Professional Activities (EPAs), and three LLMs (GPT-4o, Claude 4 Sonnet, Gemini 2.5 Flash) produced items under a naive strategy using only public EPA descriptors, while GPT-4o additionally produced items under a guided strategy that incorporated proprietary blueprints, item-writing guidelines, and exemplar items, yielding 160 total items. Question stems and options were encoded using PubMedBERT and BioBERT, and intra- and inter-strategy cosine similarity coefficients were calculated. Results showed high internal consistency within each prompting strategy, while cross-strategy similarity was lower overall. However, several domain model pairs, particularly in narrowly defined areas such as viral pneumonia and hypertension, exceeded the 0.65 threshold, indicating convergence between naive and guided pipelines. These findings suggest that while proprietary resources impart distinctiveness, LLMs prompted only with public information can still generate items closely resembling guided outputs in constrained clinical domains, thereby heightening risks of item exposure. Safeguarding the integrity of high stakes examinations will require human-first, AI-assisted item development, strict separation of formative and summative item pools, and systematic similarity surveillance to balance innovation with security.
- oai:arXiv.org:2512.23729v1
- cs.CY
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Ting Wang, Caroline Prendergast, Susan Lottridge
-
-
- A Review of Diffusion-based Simulation-Based Inference: Foundations and Applications in Non-Ideal Data Scenarios
- https://arxiv.org/abs/2512.23748
- arXiv:2512.23748v1 Announce Type: cross
-Abstract: For complex simulation problems, inferring parameters of scientific interest often precludes the use of classical likelihood-based techniques due to intractable likelihood functions. Simulation-based inference (SBI) methods forego the need for explicit likelihoods by directly utilizing samples from the simulator to learn posterior distributions over parameters $\mathbf{\theta}$ given observed data $\mathbf{x}_{\text{o}}$. Recent work has brought attention to diffusion models -- a type of generative model rooted in score matching and reverse-time stochastic dynamics -- as a flexible framework for SBI tasks. This article reviews diffusion-based SBI from first principles to applications in practice. We first recall the mathematical foundations of diffusion modeling (forward noising, reverse-time SDE/ODE, probability flow, and denoising score matching) and explain how conditional scores enable likelihood-free posterior sampling. We then examine where diffusion models address pain points of normalizing flows in neural posterior/likelihood estimation and where they introduce new trade-offs (e.g., iterative sampling costs). The key theme of this review is robustness of diffusion-based SBI in non-ideal conditions common to scientific data: misspecification (mismatch between simulated training data and reality), unstructured or infinite-dimensional observations, and missingness. We synthesize methods spanning foundations drawing from Schrodinger-bridge formulations, conditional and sequential posterior samplers, amortized architectures for unstructured data, and inference-time prior adaptation. Throughout, we adopt consistent notation and emphasize conditions and caveats required for accurate posteriors. The review closes with a discussion of open problems with an eye toward applications of uncertainty quantification for probabilistic geophysical models that may benefit from diffusion-based SBI.
- oai:arXiv.org:2512.23748v1
- cs.LG
- math.PR
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Haley Rosso, Talea Mayo
-
-
- Learning Coupled System Dynamics under Incomplete Physical Constraints and Missing Data
- https://arxiv.org/abs/2512.23761
- arXiv:2512.23761v1 Announce Type: cross
-Abstract: Advances in data acquisition and computational methods have accelerated the use of differential equation based modelling for complex systems. Such systems are often described by coupled (or more) variables, yet a governing equation is typically available for only one variable, while the remaining variable can be accessed only through data. This mismatch between known physics and observed data poses a fundamental challenge for existing physics-informed machine learning approaches, which generally assume either complete knowledge of the governing equations or full data availability across all variables. In this paper, we introduce MUSIC (Multitask Learning Under Sparse and Incomplete Constraints), a sparsity induced multitask neural network framework that integrates partial physical constraints with data-driven learning to recover full-dimensional solutions of coupled systems when physics-constrained and data-informed variables are mutually exclusive. MUSIC employs mesh-free (random) sampling of training data and sparsity regularization, yielding highly compressed models with improved training and evaluation efficiency. We demonstrate that MUSIC accurately learns solutions (shock wave solutions, discontinuous solutions, pattern formation solutions) to complex coupled systems under data-scarce and noisy conditions, consistently outperforming non-sparse formulations. These results highlight MUSIC as a flexible and effective approach for modeling partially observed systems with incomplete physical knowledge.
- oai:arXiv.org:2512.23761v1
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Esha Saha, Hao Wang
-
-
- Neural Optimal Design of Experiment for Inverse Problems
- https://arxiv.org/abs/2512.23763
- arXiv:2512.23763v1 Announce Type: cross
-Abstract: We introduce Neural Optimal Design of Experiments (NODE), a learning-based framework for optimal experimental design in inverse problems that avoids classical bilevel optimization and indirect sparsity regularization. NODE jointly trains a neural reconstruction model and a fixed-budget set of continuous design variables representing sensor locations, sampling times, or measurement angles, within a single optimization loop. By optimizing measurement locations directly rather than weighting a dense grid of candidates, the proposed approach enforces sparsity by design, eliminates the need for l1 tuning, and substantially reduces computational complexity. We validate NODE on an analytically tractable exponential growth benchmark, on MNIST image sampling, and illustrate its effectiveness on a real-world sparse-view X-ray CT example. In all cases, NODE outperforms baseline approaches, demonstrating improved reconstruction accuracy and task-specific performance.
- oai:arXiv.org:2512.23763v1
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- John E. Darges, Babak Maboudi Afkham, Matthias Chung
-
-
- Exploring Cumulative Effects in Survival Data Using Deep Learning Networks
- https://arxiv.org/abs/2512.23764
- arXiv:2512.23764v1 Announce Type: cross
-Abstract: In epidemiological research, modeling the cumulative effects of time-dependent exposures on survival outcomes presents a challenge due to their intricate temporal dynamics. Conventional spline-based statistical methods, though effective, require repeated data transformation for each spline parameter tuning, with survival analysis computations relying on the entire dataset, posing difficulties for large datasets. Meanwhile, existing neural network-based survival analysis methods focus on accuracy but often overlook the interpretability of cumulative exposure patterns. To bridge this gap, we introduce CENNSurv, a novel deep learning approach that captures dynamic risk relationships from time-dependent data. Evaluated on two diverse real-world datasets, CENNSurv revealed a multi-year lagged association between chronic environmental exposure and a critical survival outcome, as well as a critical short-term behavioral shift prior to subscription lapse. This demonstrates CENNSurv's ability to model complex temporal patterns with improved scalability. CENNSurv provides researchers studying cumulative effects a practical tool with interpretable insights.
- oai:arXiv.org:2512.23764v1
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Kang-Chung Yang, Shinsheng Yuan
-
-
- TabMixNN: A Unified Deep Learning Framework for Structural Mixed Effects Modeling on Tabular Data
- https://arxiv.org/abs/2512.23787
- arXiv:2512.23787v1 Announce Type: cross
-Abstract: We present TabMixNN, a flexible PyTorch-based deep learning framework that synthesizes classical mixed-effects modeling with modern neural network architectures for tabular data analysis. TabMixNN addresses the growing need for methods that can handle hierarchical data structures while supporting diverse outcome types including regression, classification, and multitask learning. The framework implements a modular three-stage architecture: (1) a mixed-effects encoder with variational random effects and flexible covariance structures, (2) backbone architectures including Generalized Structural Equation Models (GSEM) and spatial-temporal manifold networks, and (3) outcome-specific prediction heads supporting multiple outcome families. Key innovations include an R-style formula interface for accessibility, support for directed acyclic graph (DAG) constraints for causal structure learning, Stochastic Partial Differential Equation (SPDE) kernels for spatial modeling, and comprehensive interpretability tools including SHAP values and variance decomposition. We demonstrate the framework's flexibility through applications to longitudinal data analysis, genomic prediction, and spatial-temporal modeling. TabMixNN provides a unified interface for researchers to leverage deep learning while maintaining the interpretability and theoretical grounding of classical mixed-effects models.
- oai:arXiv.org:2512.23787v1
- cs.LG
- stat.CO
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Deniz Akdemir
-
-
- Interactive Machine Learning: From Theory to Scale
- https://arxiv.org/abs/2512.23924
- arXiv:2512.23924v1 Announce Type: cross
-Abstract: Machine learning has achieved remarkable success across a wide range of applications, yet many of its most effective methods rely on access to large amounts of labeled data or extensive online interaction. In practice, acquiring high-quality labels and making decisions through trial-and-error can be expensive, time-consuming, or risky, particularly in large-scale or high-stakes settings. This dissertation studies interactive machine learning, in which the learner actively influences how information is collected or which actions are taken, using past observations to guide future interactions. We develop new algorithmic principles and establish fundamental limits for interactive learning along three dimensions: active learning with noisy data and rich model classes, sequential decision making with large action spaces, and model selection under partial feedback. Our results include the first computationally efficient active learning algorithms achieving exponential label savings without low-noise assumptions; the first efficient, general-purpose contextual bandit algorithms whose guarantees are independent of the size of the action space; and the first tight characterizations of the fundamental cost of model selection in sequential decision making. Overall, this dissertation advances the theoretical foundations of interactive learning by developing algorithms that are statistically optimal and computationally efficient, while also providing principled guidance for deploying interactive learning methods in large-scale, real-world settings.
- oai:arXiv.org:2512.23924v1
- cs.LG
- cs.AI
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yinglun Zhu
-
-
- Statistical Guarantees in the Search for Less Discriminatory Algorithms
- https://arxiv.org/abs/2512.23943
- arXiv:2512.23943v1 Announce Type: cross
-Abstract: Recent scholarship has argued that firms building data-driven decision systems in high-stakes domains like employment, credit, and housing should search for "less discriminatory algorithms" (LDAs) (Black et al., 2024). That is, for a given decision problem, firms considering deploying a model should make a good-faith effort to find equally performant models with lower disparate impact across social groups. Evidence from the literature on model multiplicity shows that randomness in training pipelines can lead to multiple models with the same performance, but meaningful variations in disparate impact. This suggests that developers can find LDAs simply by randomly retraining models. Firms cannot continue retraining forever, though, which raises the question: What constitutes a good-faith effort? In this paper, we formalize LDA search via model multiplicity as an optimal stopping problem, where a model developer with limited information wants to produce strong evidence that they have sufficiently explored the space of models. Our primary contribution is an adaptive stopping algorithm that yields a high-probability upper bound on the gains achievable from a continued search, allowing the developer to certify (e.g., to a court) that their search was sufficient. We provide a framework under which developers can impose stronger assumptions about the distribution of models, yielding correspondingly stronger bounds. We validate the method on real-world credit, employment and housing datasets.
- oai:arXiv.org:2512.23943v1
- cs.CY
- cs.LG
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Chris Hays, Ben Laufer, Solon Barocas, Manish Raghavan
-
-
- Improved Balanced Classification with Theoretically Grounded Loss Functions
- https://arxiv.org/abs/2512.23947
- arXiv:2512.23947v1 Announce Type: cross
-Abstract: The balanced loss is a widely adopted objective for multi-class classification under class imbalance. By assigning equal importance to all classes, regardless of their frequency, it promotes fairness and ensures that minority classes are not overlooked. However, directly minimizing the balanced classification loss is typically intractable, which makes the design of effective surrogate losses a central question. This paper introduces and studies two advanced surrogate loss families: Generalized Logit-Adjusted (GLA) loss functions and Generalized Class-Aware weighted (GCA) losses. GLA losses generalize Logit-Adjusted losses, which shift logits based on class priors, to the broader general cross-entropy loss family. GCA loss functions extend the standard class-weighted losses, which scale losses inversely by class frequency, by incorporating class-dependent confidence margins and extending them to the general cross-entropy family. We present a comprehensive theoretical analysis of consistency for both loss families. We show that GLA losses are Bayes-consistent, but only $H$-consistent for complete (i.e., unbounded) hypothesis sets. Moreover, their $H$-consistency bounds depend inversely on the minimum class probability, scaling at least as $1/\mathsf p_{\min}$. In contrast, GCA losses are $H$-consistent for any hypothesis set that is bounded or complete, with $H$-consistency bounds that scale more favorably as $1/\sqrt{\mathsf p_{\min}}$, offering significantly stronger theoretical guarantees in imbalanced settings. We report the results of experiments demonstrating that, empirically, both the GCA losses with calibrated class-dependent confidence margins and GLA losses can greatly outperform straightforward class-weighted losses as well as the LA losses. GLA generally performs slightly better in common benchmarks, whereas GCA exhibits a slight edge in highly imbalanced settings.
- oai:arXiv.org:2512.23947v1
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Corinna Cortes, Mehryar Mohri, Yutao Zhong
-
-
- A Community-Aware Framework for Influence Maximization with Explicit Accounting for Inter-Community Influence
- https://arxiv.org/abs/2512.23973
- arXiv:2512.23973v1 Announce Type: cross
-Abstract: Influence Maximization (IM) seeks to identify a small set of seed nodes in a social network to maximize expected information spread under a diffusion model. While community-based approaches improve scalability by exploiting modular structure, they typically assume independence between communities, overlooking inter-community influence$\unicode{x2014}$a limitation that reduces effectiveness in real-world networks. We introduce Community-IM++, a scalable framework that explicitly models cross-community diffusion through a principled heuristic based on community-based diffusion degree (CDD) and a progressive budgeting strategy. The algorithm partitions the network, computes CDD to prioritize bridging nodes, and allocates seeds adaptively across communities using lazy evaluation to minimize redundant computations. Experiments on large real-world social networks under different edge weight models show that Community-IM++ achieves near-greedy influence spread at up to 100 times lower runtime, while outperforming Community-IM and degree heuristics across budgets and structural conditions. These results demonstrate the practicality of Community-IM++ for large-scale applications such as viral marketing, misinformation control, and public health campaigns, where efficiency and cross-community reach are critical.
- oai:arXiv.org:2512.23973v1
- cs.SI
- cs.AI
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Eliot W. Robson, Abhishek K. Umrawal
-
-
- Assured Autonomy: How Operations Research Powers and Orchestrates Generative AI Systems
- https://arxiv.org/abs/2512.23978
- arXiv:2512.23978v1 Announce Type: cross
-Abstract: Generative artificial intelligence (GenAI) is shifting from conversational assistants toward agentic systems -- autonomous decision-making systems that sense, decide, and act within operational workflows. This shift creates an autonomy paradox: as GenAI systems are granted greater operational autonomy, they should, by design, embody more formal structure, more explicit constraints, and stronger tail-risk discipline. We argue stochastic generative models can be fragile in operational domains unless paired with mechanisms that provide verifiable feasibility, robustness to distribution shift, and stress testing under high-consequence scenarios. To address this challenge, we develop a conceptual framework for assured autonomy grounded in operations research (OR), built on two complementary approaches. First, flow-based generative models frame generation as deterministic transport characterized by an ordinary differential equation, enabling auditability, constraint-aware generation, and connections to optimal transport, robust optimization, and sequential decision control. Second, operational safety is formulated through an adversarial robustness lens: decision rules are evaluated against worst-case perturbations within uncertainty or ambiguity sets, making unmodeled risks part of the design. This framework clarifies how increasing autonomy shifts OR's role from solver to guardrail to system architect, with responsibility for control logic, incentive protocols, monitoring regimes, and safety boundaries. These elements define a research agenda for assured autonomy in safety-critical, reliability-sensitive operational domains.
- oai:arXiv.org:2512.23978v1
- cs.LG
- math.OC
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Tinglong Dai, David Simchi-Levi, Michelle Xiao Wu, Yao Xie
-
-
- Random Multiplexing
- https://arxiv.org/abs/2512.24087
- arXiv:2512.24087v1 Announce Type: cross
-Abstract: As wireless communication applications evolve from traditional multipath environments to high-mobility scenarios like unmanned aerial vehicles, multiplexing techniques have advanced accordingly. Traditional single-carrier frequency-domain equalization (SC-FDE) and orthogonal frequency-division multiplexing (OFDM) have given way to emerging orthogonal time-frequency space (OTFS) and affine frequency-division multiplexing (AFDM). These approaches exploit specific channel structures to diagonalize or sparsify the effective channel, thereby enabling low-complexity detection. However, their reliance on these structures significantly limits their robustness in dynamic, real-world environments. To address these challenges, this paper studies a random multiplexing technique that is decoupled from the physical channels, enabling its application to arbitrary norm-bounded and spectrally convergent channel matrices. Random multiplexing achieves statistical fading-channel ergodicity for transmitted signals by constructing an equivalent input-isotropic channel matrix in the random transform domain. It guarantees the asymptotic replica MAP bit-error rate (BER) optimality of AMP-type detectors for linear systems with arbitrary norm-bounded, spectrally convergent channel matrices and signaling configurations, under the unique fixed point assumption. A low-complexity cross-domain memory AMP (CD-MAMP) detector is considered, leveraging the sparsity of the time-domain channel and the randomness of the equivalent channel. Optimal power allocations are derived to minimize the replica MAP BER and maximize the replica constrained capacity of random multiplexing systems. The optimal coding principle and replica constrained-capacity optimality of CD-MAMP detector are investigated for random multiplexing systems. Additionally, the versatility of random multiplexing in diverse wireless applications is explored.
- oai:arXiv.org:2512.24087v1
- cs.IT
- cs.AI
- cs.LG
- eess.SP
- math.IT
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lei Liu, Yuhao Chi, Shunqi Huang, Zhaoyang Zhang
-
-
- Colorful Pinball: Density-Weighted Quantile Regression for Conditional Guarantee of Conformal Prediction
- https://arxiv.org/abs/2512.24139
- arXiv:2512.24139v1 Announce Type: cross
-Abstract: While conformal prediction provides robust marginal coverage guarantees, achieving reliable conditional coverage for specific inputs remains challenging. Although exact distribution-free conditional coverage is impossible with finite samples, recent work has focused on improving the conditional coverage of standard conformal procedures. Distinct from approaches that target relaxed notions of conditional coverage, we directly minimize the mean squared error of conditional coverage by refining the quantile regression components that underpin many conformal methods. Leveraging a Taylor expansion, we derive a sharp surrogate objective for quantile regression: a density-weighted pinball loss, where the weights are given by the conditional density of the conformity score evaluated at the true quantile. We propose a three-headed quantile network that estimates these weights via finite differences using auxiliary quantile levels at \(1-\alpha \pm \delta\), subsequently fine-tuning the central quantile by optimizing the weighted loss. We provide a theoretical analysis with exact non-asymptotic guarantees characterizing the resulting excess risk. Extensive experiments on diverse high-dimensional real-world datasets demonstrate remarkable improvements in conditional coverage performance.
- oai:arXiv.org:2512.24139v1
- cs.LG
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qianyi Chen, Bo Li
-
-
- Paired Seed Evaluation: Statistical Reliability for Learning-Based Simulators
- https://arxiv.org/abs/2512.24145
- arXiv:2512.24145v1 Announce Type: cross
-Abstract: Machine learning systems appear stochastic but are deterministically random, as seeded pseudorandom number generators produce identical realisations across executions. Learning-based simulators are widely used to compare algorithms, design choices, and interventions under such dynamics, yet evaluation outcomes often exhibit high variance due to random initialisation and learning stochasticity. We analyse the statistical structure of comparative evaluation in these settings and show that standard independent evaluation designs fail to exploit shared sources of randomness across alternatives. We formalise a paired seed evaluation design in which competing systems are evaluated under identical random seeds, inducing matched realisations of stochastic components and strict variance reduction whenever outcomes are positively correlated at the seed level. This yields tighter confidence intervals, higher statistical power, and effective sample size gains at fixed computational budgets. Empirically, seed-level correlations are typically large and positive, producing order-of-magnitude efficiency gains. Paired seed evaluation is weakly dominant in practice, improving statistical reliability when correlation is present and reducing to independent evaluation without loss of validity when it is not.
- oai:arXiv.org:2512.24145v1
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Udit Sharma
-
-
- A density-based framework for community detection in attributed networks
- https://arxiv.org/abs/2512.24336
- arXiv:2512.24336v1 Announce Type: cross
-Abstract: Community structure in social and collaborative networks often emerges from a complex interplay between structural mechanisms, such as degree heterogeneity and leader-driven attraction, and homophily on node attributes. Existing community detection methods typically focus on these dimensions in isolation, limiting their ability to recover interpretable communities in the presence of such mechanisms. In this paper, we propose AttDeCoDe, an attribute-driven extension of a density-based community detection framework, developed to analyse networks where node characteristics play a central role in group formation. Instead of defining density purely from network topology, AttDeCoDe estimates node-wise density in the attribute space, allowing communities to form around attribute-based community representatives while preserving structural connectivity constraints. This approach naturally captures homophily-driven aggregation while remaining sensitive to leader influence. We evaluate the proposed method through a simulation study based on a novel generative model that extends the degree-corrected stochastic block model by incorporating attribute-driven leader attraction, reflecting key features of collaborative research networks. We perform an empirical application to research collaboration data from the Horizon programmes, where organisations are characterised by project-level thematic descriptors. Both analyses show that AttDeCoDe offers a flexible and interpretable framework for community detection in attributed networks, achieving competitive performance relative to topology-based and attribute-assisted benchmarks.
- oai:arXiv.org:2512.24336v1
- cs.SI
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-sa/4.0/
- Sara Geremia, Michael Fop, Domenico De Stefano
-
-
- Efficient Inference for Inverse Reinforcement Learning and Dynamic Discrete Choice Models
- https://arxiv.org/abs/2512.24407
- arXiv:2512.24407v1 Announce Type: cross
-Abstract: Inverse reinforcement learning (IRL) and dynamic discrete choice (DDC) models explain sequential decision-making by recovering reward functions that rationalize observed behavior. Flexible IRL methods typically rely on machine learning but provide no guarantees for valid inference, while classical DDC approaches impose restrictive parametric specifications and often require repeated dynamic programming. We develop a semiparametric framework for debiased inverse reinforcement learning that yields statistically efficient inference for a broad class of reward-dependent functionals in maximum entropy IRL and Gumbel-shock DDC models. We show that the log-behavior policy acts as a pseudo-reward that point-identifies policy value differences and, under a simple normalization, the reward itself. We then formalize these targets, including policy values under known and counterfactual softmax policies and functionals of the normalized reward, as smooth functionals of the behavior policy and transition kernel, establish pathwise differentiability, and derive their efficient influence functions. Building on this characterization, we construct automatic debiased machine-learning estimators that allow flexible nonparametric estimation of nuisance components while achieving $\sqrt{n}$-consistency, asymptotic normality, and semiparametric efficiency. Our framework extends classical inference for DDC models to nonparametric rewards and modern machine-learning tools, providing a unified and computationally tractable approach to statistical inference in IRL.
- oai:arXiv.org:2512.24407v1
- cs.LG
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Lars van der Laan, Aurelien Bibaut, Nathan Kallus
-
-
- The non-backtracking transition probability matrix and its usage for node clustering
- https://arxiv.org/abs/2512.24434
- arXiv:2512.24434v1 Announce Type: cross
-Abstract: The relation between the real eigenvalues of the non-backtracking matrix and those of the non-backtracking Laplacian is considered with respect to node clustering. For this purpose we use the real eigenvalues of the transition probability matrix (when the random walk goes through the oriented edges with the rule of ``not going back in the next step''), which have a linear relation to those of the non-backtracking Laplacian of Jost and Mulas. ``Inflation--deflation'' techniques are also developed for clustering the nodes of the non-backtracking graph. With further processing, this leads to a clustering of the nodes of the original graph, which usually comes from a sparse stochastic block model of Bordenave and Decelle.
- oai:arXiv.org:2512.24434v1
- math.CO
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Marianna Bolla
-
-
- Bayesian Subspace Identification in the MIMO Case
- https://arxiv.org/abs/2512.24435
- arXiv:2512.24435v1 Announce Type: cross
-Abstract: This report investigates the extension of the Bayesian Subspace System Identification method proposed in our previous work to the Multiple-Input Multiple-Output (MIMO) case. We derive new equivariant priors and posterior distributions specifically suited for the MIMO framework. Numerical results utilizing the DAISY dataset are reported to validate the approach.
- oai:arXiv.org:2512.24435v1
- eess.SY
- cs.SY
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Alexandre Rodrigues Mesquita
-
-
- Sparse classification with positive-confidence data in high dimensions
- https://arxiv.org/abs/2512.24443
- arXiv:2512.24443v1 Announce Type: cross
-Abstract: High-dimensional learning problems, where the number of features exceeds the sample size, often require sparse regularization for effective prediction and variable selection. While established for fully supervised data, these techniques remain underexplored in weak-supervision settings such as Positive-Confidence (Pconf) classification. Pconf learning utilizes only positive samples equipped with confidence scores, thereby avoiding the need for negative data. However, existing Pconf methods are ill-suited for high-dimensional regimes. This paper proposes a novel sparse-penalization framework for high-dimensional Pconf classification. We introduce estimators using convex (Lasso) and non-convex (SCAD, MCP) penalties to address shrinkage bias and improve feature recovery. Theoretically, we establish estimation and prediction error bounds for the L1-regularized Pconf estimator, proving it achieves near minimax-optimal sparse recovery rates under the Restricted Strong Convexity condition. To solve the resulting composite objective, we develop an efficient proximal gradient algorithm. Extensive simulations demonstrate that our proposed methods achieve predictive performance and variable selection accuracy comparable to fully supervised approaches, effectively bridging the gap between weak supervision and high-dimensional statistics.
- oai:arXiv.org:2512.24443v1
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- The Tien Mai, Mai Anh Nguyen, Trung Nghia Nguyen
-
-
- HOLOGRAPH: Active Causal Discovery via Sheaf-Theoretic Alignment of Large Language Model Priors
- https://arxiv.org/abs/2512.24478
- arXiv:2512.24478v1 Announce Type: cross
-Abstract: Causal discovery from observational data remains fundamentally limited by identifiability constraints. Recent work has explored leveraging Large Language Models (LLMs) as sources of prior causal knowledge, but existing approaches rely on heuristic integration that lacks theoretical grounding. We introduce HOLOGRAPH, a framework that formalizes LLM-guided causal discovery through sheaf theory--representing local causal beliefs as sections of a presheaf over variable subsets. Our key insight is that coherent global causal structure corresponds to the existence of a global section, while topological obstructions manifest as non-vanishing sheaf cohomology. We propose the Algebraic Latent Projection to handle hidden confounders and Natural Gradient Descent on the belief manifold for principled optimization. Experiments on synthetic and real-world benchmarks demonstrate that HOLOGRAPH provides rigorous mathematical foundations while achieving competitive performance on causal discovery tasks with 50-100 variables. Our sheaf-theoretic analysis reveals that while Identity, Transitivity, and Gluing axioms are satisfied to numerical precision (<10^{-6}), the Locality axiom fails for larger graphs, suggesting fundamental non-local coupling in latent variable projections. Code is available at [https://github.com/hyunjun1121/holograph](https://github.com/hyunjun1121/holograph).
- oai:arXiv.org:2512.24478v1
- cs.LG
- cs.AI
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Hyunjun Kim
-
-
- What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?
- https://arxiv.org/abs/2512.24497
- arXiv:2512.24497v1 Announce Type: cross
-Abstract: A long-standing challenge in AI is to develop agents capable of solving a wide range of physical tasks and generalizing to new, unseen tasks and environments. A popular recent approach involves training a world model from state-action trajectories and subsequently using it with a planning algorithm to solve new tasks. Planning is commonly performed in the input space, but a recent family of methods has introduced planning algorithms that optimize in the learned representation space of the world model, with the promise that abstracting irrelevant details yields more efficient planning. In this work, we characterize models from this family as JEPA-WMs and investigate the technical choices that make algorithms from this class work. We propose a comprehensive study of several key components with the objective of finding the optimal approach within the family. We conducted experiments using both simulated environments and real-world robotic data, and studied how the model architecture, the training objective, and the planning algorithm affect planning success. We combine our findings to propose a model that outperforms two established baselines, DINO-WM and V-JEPA-2-AC, in both navigation and manipulation tasks. Code, data and checkpoints are available at https://github.com/facebookresearch/jepa-wms.
- oai:arXiv.org:2512.24497v1
- cs.AI
- cs.LG
- cs.RO
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Basile Terver, Tsung-Yen Yang, Jean Ponce, Adrien Bardes, Yann LeCun
-
-
- More Than Bits: Multi-Envelope Double Binary Factorization for Extreme Quantization
- https://arxiv.org/abs/2512.24545
- arXiv:2512.24545v1 Announce Type: cross
-Abstract: For extreme low-bit quantization of large language models (LLMs), Double Binary Factorization (DBF) is attractive as it enables efficient inference without sacrificing accuracy. However, the scaling parameters of DBF are too restrictive; after factoring out signs, all rank components share the same magnitude profile, resulting in performance saturation. We propose Multi-envelope DBF (MDBF), which retains a shared pair of 1-bit sign bases but replaces the single envelope with a rank-$l$ envelope. By sharing sign matrices among envelope components, MDBF effectively maintains a binary carrier and utilizes the limited memory budget for magnitude expressiveness. We also introduce a closed-form initialization and an alternating refinement method to optimize MDBF. Across the LLaMA and Qwen families, MDBF enhances perplexity and zero-shot accuracy over previous binary formats at matched bits per weight while preserving the same deployment-friendly inference primitive.
- oai:arXiv.org:2512.24545v1
- cs.LG
- cs.AI
- cs.CL
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yuma Ichikawa, Yoshihiko Fujisawa, Yudai Fujimoto, Akira Sakai, Katsuki Fujisawa
-
-
- When Does the Silhouette Score Work? A Comprehensive Study in Network Clustering
- https://arxiv.org/abs/2512.24841
- arXiv:2512.24841v1 Announce Type: cross
-Abstract: Selecting the number of communities is a fundamental challenge in network clustering. The silhouette score offers an intuitive, model-free criterion that balances within-cluster cohesion and between-cluster separation. Despite its widespread use in clustering analysis, its performance in network-based community detection remains insufficiently characterized. In this study, we comprehensively evaluate the performance of the silhouette score across unweighted, weighted, and fully connected networks, examining how network size, separation strength, and community size imbalance influence its performance. Simulation studies show that the silhouette score accurately identifies the true number of communities when clusters are well separated and balanced, but it tends to underestimate under strong imbalance or weak separation and to overestimate in sparse networks. Extending the evaluation to a real airline reachability network, we demonstrate that the silhouette-based clustering can recover geographically interpretable and market-oriented clusters. These findings provide empirical guidance for applying the silhouette score in network clustering and clarify the conditions under which its use is most reliable.
- oai:arXiv.org:2512.24841v1
- cs.SI
- stat.CO
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zongyue Teng, Jun Yan, Dandan Liu, Panpan Zhang
-
-
- Triangulation as an Acceptance Rule for Multilingual Mechanistic Interpretability
- https://arxiv.org/abs/2512.24842
- arXiv:2512.24842v1 Announce Type: cross
-Abstract: Multilingual language models achieve strong aggregate performance yet often behave unpredictably across languages, scripts, and cultures. We argue that mechanistic explanations for such models should satisfy a \emph{causal} standard: claims must survive causal interventions and must \emph{cross-reference} across environments that perturb surface form while preserving meaning. We formalize \emph{reference families} as predicate-preserving variants and introduce \emph{triangulation}, an acceptance rule requiring necessity (ablating the circuit degrades the target behavior), sufficiency (patching activations transfers the behavior), and invariance (both effects remain directionally stable and of sufficient magnitude across the reference family). To supply candidate subgraphs, we adopt automatic circuit discovery and \emph{accept or reject} those candidates by triangulation. We ground triangulation in causal abstraction by casting it as an approximate transformation score over a distribution of interchange interventions, connect it to the pragmatic interpretability agenda, and present a comparative experimental protocol across multiple model families, language pairs, and tasks. Triangulation provides a falsifiable standard for mechanistic claims that filters spurious circuits passing single-environment tests but failing cross-lingual invariance.
- oai:arXiv.org:2512.24842v1
- cs.CL
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Yanan Long
-
-
- Constraints on the perfect phylogeny mixture model and their effect on reducing degeneracy
- https://arxiv.org/abs/2512.24930
- arXiv:2512.24930v1 Announce Type: cross
-Abstract: The perfect phylogeny mixture (PPM) model is useful due to its simplicity and applicability in scenarios where mutations can be assumed to accumulate monotonically over time. It is the underlying model in many tools that have been used, for example, to infer phylogenetic trees for tumor evolution and reconstruction. Unfortunately, the PPM model gives rise to substantial ambiguity -- in that many different phylogenetic trees can explain the same observed data -- even in the idealized setting where data are observed perfectly, i.e. fully and without noise. This ambiguity has been studied in this perfect setting by Pradhan et al. 2018, which proposed a procedure to bound the number of solutions given a fixed instance of observation data. Beyond this, studies have been primarily empirical. Recent work (Myers et al. 2019) proposed adding extra constraints to the PPM model to tackle ambiguity. In this paper, we first show that the extra constraints of Myers et al. 2019, called longitudinal constraints (LC), often fail to reduce the number of distinct trees that explain the observations. We then propose novel alternative constraints to limit solution ambiguity and study their impact when the data are observed perfectly. Unlike the analysis in Pradhan et al. 2018, our theoretical results regarding both the inefficacy of the LC and the extent to which our new constraints reduce ambiguity are not tied to a single observation instance. Rather, our theorems hold over large ensembles of possible inference problems. To the best of our knowledge, we are the first to study degeneracy in the PPM model in this ensemble-based theoretical framework.
- oai:arXiv.org:2512.24930v1
- q-bio.PE
- stat.OT
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- John Marangola, Azadeh Sheikholeslami, Jos\'e Bento
-
-
- The Impact of LLMs on Online News Consumption and Production
- https://arxiv.org/abs/2512.24968
- arXiv:2512.24968v1 Announce Type: cross
-Abstract: Large language models (LLMs) change how consumers acquire information online; their bots also crawl news publishers' websites for training data and to answer consumer queries; and they provide tools that can lower the cost of content creation. These changes lead to predictions of adverse impact on news publishers in the form of lowered consumer demand, reduced demand for newsroom employees, and an increase in news "slop." Consequently, some publishers strategically responded by blocking LLM access to their websites using the robots.txt file standard.
- Using high-frequency granular data, we document four effects related to the predicted shifts in news publishing following the introduction of generative AI (GenAI). First, we find a consistent and moderate decline in traffic to news publishers occurring after August 2024. Second, using a difference-in-differences approach, we find that blocking GenAI bots can have adverse effects on large publishers by reducing total website traffic by 23% and real consumer traffic by 14% compared to not blocking. Third, on the hiring side, we do not find evidence that LLMs are replacing editorial or content-production jobs yet. The share of new editorial and content-production job listings increases over time. Fourth, regarding content production, we find no evidence that large publishers increased text volume; instead, they significantly increased rich content and use more advertising and targeting technologies.
- Together, these findings provide early evidence of some unforeseen impacts of the introduction of LLMs on news production and consumption.
- oai:arXiv.org:2512.24968v1
- econ.GN
- cs.AI
- cs.CY
- q-fin.EC
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Hangcheng Zhao, Ron Berman
-
-
- Convergence of the generalization error for deep gradient flow methods for PDEs
- https://arxiv.org/abs/2512.25017
- arXiv:2512.25017v1 Announce Type: cross
-Abstract: The aim of this article is to provide a firm mathematical foundation for the application of deep gradient flow methods (DGFMs) for the solution of (high-dimensional) partial differential equations (PDEs). We decompose the generalization error of DGFMs into an approximation and a training error. We first show that the solution of PDEs that satisfy reasonable and verifiable assumptions can be approximated by neural networks, thus the approximation error tends to zero as the number of neurons tends to infinity. Then, we derive the gradient flow that the training process follows in the ``wide network limit'' and analyze the limit of this flow as the training time tends to infinity. These results combined show that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity.
- oai:arXiv.org:2512.25017v1
- math.NA
- cs.LG
- cs.NA
- q-fin.CP
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Chenguang Liu, Antonis Papapantoleon, Jasper Rou
-
-
- Testing Monotonicity in a Finite Population
- https://arxiv.org/abs/2512.25032
- arXiv:2512.25032v1 Announce Type: cross
-Abstract: We consider the extent to which we can learn from a completely randomized experiment whether everyone has treatment effects that are weakly of the same sign, a condition we call monotonicity. From a classical sampling perspective, it is well-known that monotonicity is untestable. By contrast, we show from the design-based perspective -- in which the units in the population are fixed and only treatment assignment is stochastic -- that the distribution of treatment effects in the finite population (and hence whether monotonicity holds) is formally identified. We argue, however, that the usual definition of identification is unnatural in the design-based setting because it imagines knowing the distribution of outcomes over different treatment assignments for the same units. We thus evaluate the informativeness of the data by the extent to which it enables frequentist testing and Bayesian updating. We show that frequentist tests can have nontrivial power against some alternatives, but power is generically limited. Likewise, we show that there exist (non-degenerate) Bayesian priors that never update about whether monotonicity holds. We conclude that, despite the formal identification result, the ability to learn about monotonicity from data in practice is severely limited.
- oai:arXiv.org:2512.25032v1
- econ.EM
- math.ST
- stat.ME
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Jiafeng Chen, Jonathan Roth, Jann Spiess
-
-
- Compound Estimation for Binomials
- https://arxiv.org/abs/2512.25042
- arXiv:2512.25042v1 Announce Type: cross
-Abstract: Many applications involve estimating the mean of multiple binomial outcomes as a common problem -- assessing intergenerational mobility of census tracts, estimating prevalence of infectious diseases across countries, and measuring click-through rates for different demographic groups. The most standard approach is to report the plain average of each outcome. Despite simplicity, the estimates are noisy when the sample sizes or mean parameters are small. In contrast, the Empirical Bayes (EB) methods are able to boost the average accuracy by borrowing information across tasks. Nevertheless, the EB methods require a Bayesian model where the parameters are sampled from a prior distribution which, unlike the commonly-studied Gaussian case, is unidentified due to discreteness of binomial measurements. Even if the prior distribution is known, the computation is difficult when the sample sizes are heterogeneous as there is no simple joint conjugate prior for the sample size and mean parameter.
- In this paper, we consider the compound decision framework which treats the sample size and mean parameters as fixed quantities. We develop an approximate Stein's Unbiased Risk Estimator (SURE) for the average mean squared error given any class of estimators. For a class of machine learning-assisted linear shrinkage estimators, we establish asymptotic optimality, regret bounds, and valid inference. Unlike existing work, we work with the binomials directly without resorting to Gaussian approximations. This allows us to work with small sample sizes and/or mean parameters in both one-sample and two-sample settings. We demonstrate our approach using three datasets on firm discrimination, education outcomes, and innovation rates.
- oai:arXiv.org:2512.25042v1
- econ.EM
- math.ST
- stat.ME
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- cross
- http://creativecommons.org/licenses/by/4.0/
- Yan Chen, Lihua Lei
-
-
- Efficient Active Learning with Abstention
- https://arxiv.org/abs/2204.00043
- arXiv:2204.00043v3 Announce Type: replace
-Abstract: The goal of active learning is to achieve the same accuracy achievable by passive learning, while using much fewer labels. Exponential savings in terms of label complexity have been proved in very special cases, but fundamental lower bounds show that such improvements are impossible in general. This suggests a need to explore alternative goals for active learning. Learning with abstention is one such alternative. In this setting, the active learning algorithm may abstain from prediction and incur an error that is marginally smaller than random guessing. We develop the first computationally efficient active learning algorithm with abstention. Our algorithm provably achieves $\mathsf{polylog}(\frac{1}{\varepsilon})$ label complexity, without any low noise conditions. Such a performance guarantee reduces the label complexity by an exponential factor, relative to passive learning and active learning that is not allowed to abstain. Furthermore, our algorithm is guaranteed to only abstain on hard examples (where the true label distribution is close to a fair coin), a novel property we term proper abstention that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting, and avoiding the undesirable "noise-seeking" behavior often seen in active learning). We also provide novel extensions of our algorithm that achieve constant label complexity and deal with model misspecification.
- oai:arXiv.org:2204.00043v3
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yinglun Zhu, Robert Nowak
-
-
- Hypothesis testing for partial tail correlation in multivariate extremes
- https://arxiv.org/abs/2210.02048
- arXiv:2210.02048v3 Announce Type: replace
-Abstract: Statistical modeling of high dimensional extremes remains challenging and has generally been limited to moderate dimensions. Understanding structural relationships among variables at their extreme levels is crucial both for constructing simplified models and for identifying sparsity in extremal dependence. In this paper, we introduce the notion of partial tail correlation to characterize structural relationships between pairs of variables in their tails. To this end, we propose a tail regression approach for nonnegative regularly varying random vectors and define partial tail correlation based on the regression residuals. Using an extreme analogue of the covariance matrix, we show that the resulting regression coefficients and partial tail correlations take the same form as in classical non-extreme settings. For inference, we develop a hypothesis test to explore sparsity in extremal dependence structures, and demonstrate its effectiveness through simulations and an application to the Danube river network.
- oai:arXiv.org:2210.02048v3
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Mihyun Kim, Jeongjin Lee
-
-
- Modeling Spatio-Temporal Transport: From Rigid Advection to Realistic Dynamics
- https://arxiv.org/abs/2303.02756
- arXiv:2303.02756v4 Announce Type: replace
-Abstract: Stochastic models for spatio-temporal transport face a critical trade-off between physical realism and interpretability. The advection model with a single constant velocity is interpretable but physically limited by its perfect correlation over time. This work aims to bridge the gap between this simple framework and its physically realistic extensions. Our guiding principle is to introduce a spatial correlation structure that vanishes over time. To achieve this, we present two distinct approaches. The first constructs complex velocity structures, either through superpositions of advection components or by allowing the velocity to vary locally. The second is a spectral technique that replaces the singular spectrum of rigid advection with a more flexible form, introducing temporal decorrelation controlled by parameters. We accompany these models with efficient simulation algorithms and demonstrate their success in replicating complex dynamics, such as tropical cyclones and the solutions of partial differential equations. Finally, we illustrate the practical utility of the proposed framework by comparing its simulations to real-world precipitation data from Hurricane Florence.
- oai:arXiv.org:2303.02756v4
- stat.CO
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Maria Laura Battagliola, Sofia Charlotta Olhede
-
-
- Maximum Likelihood Estimates of Parameters in Generalized Gamma Distribution with SeLF Algorithm
- https://arxiv.org/abs/2306.16419
- arXiv:2306.16419v2 Announce Type: replace
-Abstract: This undergraduate thesis develops maximum likelihood estimation of the parameters of the generalized Gamma distribution using the SeLF algorithm. As an extension of the Gamma distribution, the generalized Gamma distribution fits real data better and has been widely applied. The thesis first reviews the definition of the generalized Gamma distribution and its similarities and differences from the traditional Gamma distribution, then discusses the SeLF and US algorithms in detail. The SeLF algorithm, built on the Minorization-Maximization algorithm, reaches a local optimum in few iterations and offers fast computation, high accuracy, and good convergence. The US algorithm is a method for finding the zeros of a function, which operates at a higher level than the SeLF algorithm and improves convergence speed and stability. The thesis proposes a method for computing maximum likelihood estimates of the parameters in the generalized Gamma distribution using the SeLF and US algorithms, presents the practical implementation of the algorithms, and evaluates the proposed methods through simulations and data analysis. The results demonstrate that, compared to traditional Newton's method, the SeLF algorithm achieves more stable and accurate estimates of the parameters more quickly, which can be useful in various applications, contributing to the development of statistical methods for parameter estimation in complex models.
- oai:arXiv.org:2306.16419v2
- stat.ME
- stat.CO
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yufei Cai
-
-
- Studentising Kendall's Tau: U-Statistic Estimators and Bias Correction for a Generalised Rank Variance-Covariance framework
- https://arxiv.org/abs/2307.10973
- arXiv:2307.10973v2 Announce Type: replace
-Abstract: Kemeny (1959) introduced a topologically complete metric space to study ordinal random variables, particularly in the context of Condorcet's paradox and the measurability of ties. Building on this, Emond & Mason (2002) reformulated Kemeny's framework into a rank correlation coefficient by embedding the metric space into a Hilbert structure. This transformation enables the analysis of data under weak order-preserving transformations (monotonically non-decreasing) within a linear probabilistic framework. However, the statistical properties of this rank correlation estimator, such as bias, estimation variance, and Type I error rates, have not been thoroughly evaluated.
- In this paper, we derive and prove a complete U-statistic estimator in the presence of ties for Kemeny's \(\tau_{\kappa}\), addressing the positive bias introduced by tied ranks. We also introduce a consistent population standard error estimator. The null distribution of the test statistic is shown to follow a \(t_{(N-2)}\)-distribution. Simulation results demonstrate that the proposed method outperforms Kendall's \(\tau_{b}\), offering a more accurate and robust measure of ordinal association which is topologically complete upon standard linear models.
- oai:arXiv.org:2307.10973v2
- stat.ME
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Landon Hurley
-
-
- The Population Resemblance Statistic: A Chi-Square Measure of Fit for Banking
- https://arxiv.org/abs/2307.11878
- arXiv:2307.11878v4 Announce Type: replace
-Abstract: The Population Stability Index (PSI) is a widely used measure in credit risk modeling and monitoring within the banking industry. Its purpose is to monitor for changes in the population underlying a model, such as a scorecard, to ensure that the current population closely resembles the one used during model development. If substantial differences between populations are detected, model reconstruction may be necessary. Despite its widespread use, the origins and properties of the PSI are not well documented. Previous literature has suggested using arbitrary constants as a rule-of-thumb to assess resemblance (or "stability"), regardless of sample size. However, this approach too often calls for model reconstruction in small sample sizes while not detecting the need often enough in large sample sizes.
- This paper introduces an alternative discrepancy measure, the Population Resemblance statistic (PRS), based on the Pearson chi-square statistic. Properties of the PRS follow from the non-central chi-square distribution. Specifically, the PRS allows for critical values that are configured according to sample size and the number of risk categories. Implementation relies on the specification of a set of parameters, enabling practitioners to calibrate the procedure with their risk tolerance and sensitivity to population shifts. The PRS is demonstrated to be universally competent in a simulation study and with real-world examples.
- oai:arXiv.org:2307.11878v4
- stat.AP
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Nelis Potgieter, Corli van Zyl, WD Schutte, Fred Lombard
-
-
- Generative Modelling of L\'evy Area for High Order SDE Simulation
- https://arxiv.org/abs/2308.02452
- arXiv:2308.02452v2 Announce Type: replace
-Abstract: It is well understood that, when numerically simulating SDEs with general noise, achieving a strong convergence rate better than $O(\sqrt{h})$ (where h is the step size) requires the use of certain iterated integrals of Brownian motion, commonly referred to as its "L\'evy areas". However, these stochastic integrals are difficult to simulate due to their non-Gaussian nature and for a $d$-dimensional Brownian motion with $d > 2$, no fast almost-exact sampling algorithm is known.
- In this paper, we propose L\'evyGAN, a deep-learning-based model for generating approximate samples of L\'evy area conditional on a Brownian increment. Due to our "Bridge-flipping" operation, the output samples match all joint and conditional odd moments exactly. Our generator employs a tailored GNN-inspired architecture, which enforces the correct dependency structure between the output distribution and the conditioning variable. Furthermore, we incorporate a mathematically principled characteristic-function based discriminator. Lastly, we introduce a novel training mechanism termed "Chen-training", which circumvents the need for expensive-to-generate training data-sets. This new training procedure is underpinned by our two main theoretical results.
- For 4-dimensional Brownian motion, we show that L\'evyGAN exhibits state-of-the-art performance across several metrics which measure both the joint and marginal distributions. We conclude with a numerical experiment on the log-Heston model, a popular SDE in mathematical finance, demonstrating that high-quality synthetic L\'evy area can lead to high order weak convergence and variance reduction when using multilevel Monte Carlo (MLMC).
- oai:arXiv.org:2308.02452v2
- stat.ML
- cs.LG
- cs.NA
- math.NA
- math.PR
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.1137/23M161077X
- SIAM Journal on Mathematics of Data Science, vol. 7, no. 4, pp. 1541-1567, 2025
- Andra\v{z} Jelin\v{c}i\v{c}, Jiajie Tao, William F. Turner, Thomas Cass, James Foster, Hao Ni
-
-
- Are Ensembles Getting Better all the Time?
- https://arxiv.org/abs/2311.17885
- arXiv:2311.17885v3 Announce Type: replace
-Abstract: Ensemble methods combine the predictions of several base models. We study whether or not including more models always improves their average performance. This question depends on the kind of ensemble considered, as well as the predictive metric chosen. We focus on situations where all members of the ensemble are a priori expected to perform equally well, which is the case of several popular methods such as random forests or deep ensembles. In this setting, we show that ensembles are getting better all the time if, and only if, the considered loss function is convex. More precisely, in that case, the loss of the ensemble is a decreasing function of the number of models. When the loss function is nonconvex, we show a series of results that can be summarised as: ensembles of good models keep getting better, and ensembles of bad models keep getting worse. To this end, we prove a new result on the monotonicity of tail probabilities that may be of independent interest. We illustrate our results on a medical problem (diagnosing melanomas using neural nets) and a "wisdom of crowds" experiment (guessing the ratings of upcoming movies).
- oai:arXiv.org:2311.17885v3
- stat.ML
- cs.LG
- math.ST
- stat.ME
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Journal of Machine Learning Research, vol. 26 (201), 1-46, 2025
- Pierre-Alexandre Mattei, Damien Garreau
-
-
- Distribution-Dependent Rates for Multi-Distribution Learning
- https://arxiv.org/abs/2312.13130
- arXiv:2312.13130v2 Announce Type: replace
-Abstract: To address the needs of modeling uncertainty in sensitive machine learning applications, the setup of distributionally robust optimization (DRO) seeks good performance uniformly across a variety of tasks. The recent multi-distribution learning (MDL) framework tackles this objective in a dynamic interaction with the environment, where the learner has sampling access to each target distribution. Drawing inspiration from the field of pure-exploration multi-armed bandits, we provide distribution-dependent guarantees in the MDL regime, that scale with suboptimality gaps and result in superior dependence on the sample size when compared to the existing distribution-independent analyses. We investigate two non-adaptive strategies, uniform and non-uniform exploration, and present non-asymptotic regret bounds using novel tools from empirical process theory. Furthermore, we devise an adaptive optimistic algorithm, LCB-DR, that showcases enhanced dependence on the gaps, mirroring the contrast between uniform and optimistic allocation in the multi-armed bandit literature. We also conduct a small synthetic experiment illustrating the comparative strengths of each strategy.
- oai:arXiv.org:2312.13130v2
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Rafael Hanashiro, Patrick Jaillet
-
-
- Stochastic Gradient Descent for Nonparametric Additive Regression
- https://arxiv.org/abs/2401.00691
- arXiv:2401.00691v5 Announce Type: replace
-Abstract: This paper introduces an iterative algorithm for training nonparametric additive models that enjoys favorable memory storage and computational requirements. The algorithm can be viewed as the functional counterpart of stochastic gradient descent, applied to the coefficients of a truncated basis expansion of the component functions. We show that the resulting estimator satisfies an oracle inequality that allows for model mis-specification. In the well-specified setting, by choosing the learning rate carefully across three distinct stages of training, we demonstrate that its risk is minimax optimal in terms of the dependence on both the dimensionality of the data and the size of the training sample. Unlike past work, we also provide polynomial convergence rates even when the covariates do not have full support on their domain.
- oai:arXiv.org:2401.00691v5
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Xin Chen, Jason M. Klusowski
-
-
- Symmetric Linear Bandits with Hidden Symmetry
- https://arxiv.org/abs/2405.13899
- arXiv:2405.13899v3 Announce Type: replace
-Abstract: High-dimensional linear bandits with low-dimensional structure have received considerable attention in recent studies due to their practical significance. The most common structure in the literature is sparsity. However, it may not be available in practice. Symmetry, where the reward is invariant under certain groups of transformations on the set of arms, is another important inductive bias in the high-dimensional case that covers many standard structures, including sparsity. In this work, we study high-dimensional symmetric linear bandits where the symmetry is hidden from the learner, and the correct symmetry needs to be learned in an online setting. We examine the structure of a collection of hidden symmetries and provide a method based on model selection within the collection of low-dimensional subspaces. Our algorithm achieves a regret bound of $ O(d_0^{2/3} T^{2/3} \log(d))$, where $d$ is the ambient dimension which is potentially very large, and $d_0$ is the dimension of the true low-dimensional subspace such that $d_0 \ll d$. With an extra assumption on well-separated models, we can further improve the regret to $ O(d_0\sqrt{T\log(d)} )$.
- oai:arXiv.org:2405.13899v3
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Nam Phuong Tran, The Anh Ta, Debmalya Mandal, Long Tran-Thanh
-
-
- Extremile scalar-on-function regression
- https://arxiv.org/abs/2405.20817
- arXiv:2405.20817v2 Announce Type: replace
-Abstract: Extremiles provide a generalization of quantiles which are not only robust, but also have an intrinsic link with extreme value theory. This paper introduces an extremile regression model tailored for functional covariate spaces. The estimation procedure turns out to be a weighted version of local linear scalar-on-function regression, where now a double kernel approach plays a crucial role. Asymptotic expressions for the bias and variance are established, applicable to both decreasing bandwidth sequences and automatically selected bandwidths. The methodology is then investigated in detail through a simulation study. Furthermore, we illustrate the method's applicability with an analysis of the Berkeley Growth data, showcasing its performance in a real-world functional data setting.
- oai:arXiv.org:2405.20817v2
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Maria Laura Battagliola, Martin Bladt
-
-
- Inference at the data's edge: Gaussian processes for modeling and inference under model-dependency, poor overlap, and extrapolation
- https://arxiv.org/abs/2407.10442
- arXiv:2407.10442v2 Announce Type: replace
-Abstract: Many inferential tasks involve fitting models to observed data and predicting outcomes at new covariate values, requiring interpolation or extrapolation. Conventional methods select a single best-fitting model, discarding fits that were similarly plausible in-sample but would yield sharply different predictions out-of-sample. Gaussian Processes (GPs) offer a principled alternative. Rather than committing to one conditional expectation function, GPs deliver a posterior distribution over outcomes at any covariate value. This posterior effectively retains the range of models consistent with the data, widening uncertainty intervals where extrapolation magnifies divergence. In this way, the GP's uncertainty estimates reflect the implications of extrapolation on our predictions, helping to tame the "dangers of extreme counterfactuals" (King & Zeng, 2006). The approach requires (i) specifying a covariance function linking outcome similarity to covariate similarity, and (ii) assuming Gaussian noise around the conditional expectation. We provide an accessible introduction to GPs with emphasis on this property, along with a simple, automated procedure for hyperparameter selection implemented in the R package gpss. We illustrate the value of GPs for capturing counterfactual uncertainty in three settings: (i) treatment effect estimation with poor overlap, (ii) interrupted time series requiring extrapolation beyond pre-intervention data, and (iii) regression discontinuity designs where estimates hinge on boundary behavior.
- oai:arXiv.org:2407.10442v2
- stat.ME
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Soonhong Cho, Doeun Kim, Chad Hazlett
-
-
- Functional Extreme-PLS
- https://arxiv.org/abs/2410.05517
- arXiv:2410.05517v2 Announce Type: replace
-Abstract: We propose an extreme dimension reduction method extending the Extreme-PLS approach to the case where the covariate lies in a possibly infinite-dimensional Hilbert space. The ideas are partly borrowed from both Partial Least-Squares and Sliced Inverse Regression techniques. As such, the method relies on the projection of the covariate onto a subspace and maximizes the covariance between its projection and the response conditionally to an extreme event driven by a random threshold to capture the tail-information. The covariate and the heavy-tailed response are supposed to be linked through a non-linear inverse single-index model and our goal is to infer the index in this regression framework. We propose a new family of estimators and show its asymptotic consistency with convergence rates under the model. Assuming mild conditions on the noise, most of the assumptions are stated in terms of regular variation unlike the standard literature on SIR and single-index regression. Finally, our results are illustrated on a finite-sample study with synthetic functional data as well as on real data from the financial realm, highlighting the effectiveness of the dimension reduction for estimating extreme risk measures.
- oai:arXiv.org:2410.05517v2
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- St\'ephane Girard, Cambyse Pakzad
-
-
- A Particle Algorithm for Mean-Field Variational Inference
- https://arxiv.org/abs/2412.20385
- arXiv:2412.20385v4 Announce Type: replace
-Abstract: Variational inference is a fast and scalable alternative to Markov chain Monte Carlo and has been widely applied to posterior inference tasks in statistics and machine learning. A traditional approach for implementing mean-field variational inference (MFVI) is coordinate ascent variational inference (CAVI), which relies crucially on parametric assumptions on complete conditionals. We introduce a novel particle-based algorithm for MFVI, named PArticle VI (PAVI), for nonparametric mean-field approximation. We obtain non-asymptotic error bounds for our algorithm. To our knowledge, this is the first end-to-end guarantee for particle-based MFVI.
- oai:arXiv.org:2412.20385v4
- math.ST
- cs.LG
- math.OC
- stat.ML
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Qiang Du, Kaizheng Wang, Edith Zhang, Chenyang Zhong
-
-
- NeuroPMD: Neural Fields for Density Estimation on Product Manifolds
- https://arxiv.org/abs/2501.02994
- arXiv:2501.02994v2 Announce Type: replace
-Abstract: We propose a novel deep neural network methodology for density estimation on product Riemannian manifold domains. In our approach, the network directly parameterizes the unknown density function and is trained using a penalized maximum likelihood framework, with a penalty term formed using manifold differential operators. The network architecture and estimation algorithm are carefully designed to handle the challenges of high-dimensional product manifold domains, effectively mitigating the curse of dimensionality that limits traditional kernel and basis expansion estimators, as well as overcoming the convergence issues encountered by non-specialized neural network methods. Extensive simulations and a real-world application to brain structural connectivity data highlight the clear advantages of our method over the competing alternatives.
- oai:arXiv.org:2501.02994v2
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- William Consagra, Zhiling Gu, Zhengwu Zhang
-
-
- Subtype-Aware Registration of Longitudinal Electronic Health Records
- https://arxiv.org/abs/2501.07336
- arXiv:2501.07336v2 Announce Type: replace
-Abstract: Electronic Health Records (EHRs) contain extensive patient information that can inform downstream clinical decisions, such as mortality prediction, disease phenotyping, and disease onset prediction. A key challenge in EHR data analysis is the temporal gap between when a condition is first recorded and its actual onset time. Such timeline misalignment can lead to artificially distinct biomarker trends among patients with similar disease progression, undermining the reliability of downstream analyses and complicating tasks such as disease subtyping and outcome prediction. To address this challenge, we provide a subtype-aware timeline registration method that leverages data projection and discrete optimization to correct timeline misalignment. Through simulation and real-world data analyses, we demonstrate that the proposed method effectively aligns distorted observed records with the true disease progression patterns, enhancing subtyping clarity and improving performance in downstream clinical analyses.
- oai:arXiv.org:2501.07336v2
- stat.AP
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Xin Gai, Shiyi Jiang, Anru R. Zhang
-
-
- coverforest: Conformal Predictions with Random Forest in Python
- https://arxiv.org/abs/2501.14570
- arXiv:2501.14570v3 Announce Type: replace
-Abstract: Conformal prediction provides a framework for uncertainty quantification, specifically in the forms of prediction intervals and sets with distribution-free guaranteed coverage. While recent cross-conformal techniques such as CV+ and Jackknife+-after-bootstrap achieve better data efficiency than traditional split conformal methods, they incur substantial computational costs due to required pairwise comparisons between training and test samples' out-of-bag scores. Observing that these methods naturally extend from ensemble models, particularly random forests, we leverage existing optimized random forest implementations to enable efficient cross-conformal predictions.
- We present coverforest, a Python package that implements efficient conformal prediction methods specifically optimized for random forests. coverforest supports both regression and classification tasks through various conformal prediction methods, including split conformal, CV+, Jackknife+-after-bootstrap, and adaptive prediction sets. Our package leverages parallel computing and Cython optimizations to speed up out-of-bag calculations. Our experiments demonstrate that coverforest's predictions achieve the desired level of coverage. In addition, its training and prediction times can be faster than an existing implementation by 2--9 times. The source code for the coverforest is hosted on GitHub at https://github.com/donlap/coverforest.
- oai:arXiv.org:2501.14570v3
- stat.ML
- cs.LG
- stat.CO
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- 10.1016/j.neucom.2025.132362
- Neurocomputing. 668, (Mar. 2026), 132362
- Panisara Meehinkong, Donlapark Ponnoprat
-
-
- Sample complexity and weak limits of nonsmooth multimarginal Schr\"{o}dinger system with application to optimal transport barycenter
- https://arxiv.org/abs/2502.02726
- arXiv:2502.02726v2 Announce Type: replace
-Abstract: Multimarginal optimal transport (MOT) has emerged as a useful framework for many applied problems. However, compared to the well-studied classical two-marginal optimal transport theory, analysis of MOT is far more challenging and remains much less developed. In this paper, we study the statistical estimation and inference problems for the entropic MOT (EMOT), whose optimal solution is characterized by the multimarginal Schr\"{o}dinger system. Assuming only boundedness of the cost function, we derive sharp sample complexity for estimating several key quantities pertaining to EMOT (cost functional and Schr\"{o}dinger coupling) from point clouds that are randomly sampled from the input marginal distributions. Moreover, with substantially weaker smoothness assumption on the cost function than the existing literature, we derive distributional limits and bootstrap validity of various key EMOT objects. As an application, we propose the multimarginal Schr\"{o}dinger barycenter as a new and natural way to regularize the exact Wasserstein barycenter and demonstrate its statistical optimality.
- oai:arXiv.org:2502.02726v2
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Pengtao Li, Xiaohui Chen
-
-
- Concentration Inequalities for Stochastic Optimization of Unbounded Objective Functions with Application to Denoising Score Matching
- https://arxiv.org/abs/2502.08628
- arXiv:2502.08628v2 Announce Type: replace
-Abstract: We derive novel concentration inequalities that bound the statistical error for a large class of stochastic optimization problems, focusing on the case of unbounded objective functions. Our derivations utilize the following key tools: 1) A new form of McDiarmid's inequality that is based on sample-dependent one-component mean-difference bounds and which leads to a novel uniform law of large numbers result for unbounded functions. 2) A new Rademacher complexity bound for families of functions that satisfy an appropriate sample-dependent Lipschitz property, which allows for application to a large class of distributions with unbounded support. As an application of these results, we derive statistical error bounds for denoising score matching (DSM), an application that inherently requires one to consider unbounded objective functions and distributions with unbounded support, even in cases where the data distribution has bounded support. In addition, our results quantify the benefit of sample-reuse in algorithms that employ easily-sampled auxiliary random variables in addition to the training data, e.g., as in DSM, which uses auxiliary Gaussian random variables.
- oai:arXiv.org:2502.08628v2
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jeremiah Birrell
-
-
- A Practical Guide to Estimating Conditional Marginal Effects: Modern Approaches
- https://arxiv.org/abs/2504.01355
- arXiv:2504.01355v2 Announce Type: replace
-Abstract: This Element offers a practical guide to estimating conditional marginal effects-how treatment effects vary with a moderating variable-using modern statistical methods. Commonly used approaches, such as linear interaction models, often suffer from unclarified estimands, limited overlap, and restrictive functional forms. This guide begins by clearly defining the estimand and presenting the main identification results. It then reviews and improves upon existing solutions, such as the semiparametric kernel estimator, and introduces robust estimation strategies, including augmented inverse propensity score weighting with Lasso selection (AIPW-Lasso) and double machine learning (DML) with modern algorithms. Each method is evaluated through simulations and empirical examples, with practical recommendations tailored to sample size and research context. All tools are implemented in the accompanying \texttt{interflex} package for \texttt{R}.
- oai:arXiv.org:2504.01355v2
- stat.ME
- econ.EM
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Jiehan Liu, Ziyi Liu, Yiqing Xu
-
-
- Discovery and inference beyond linearity by integrating Bayesian regression, tree ensembles and Shapley values
- https://arxiv.org/abs/2505.00571
- arXiv:2505.00571v2 Announce Type: replace
-Abstract: Machine Learning (ML) is gaining popularity for hypothesis-free discovery of risk and protective factors in healthcare studies. ML is strong at discovering nonlinearities and interactions, but this power is compromised by a lack of reliable inference. Although Shapley values provide local measures of features' effects, valid uncertainty quantification for these effects is typically lacking, thus precluding statistical inference. We propose RuleSHAP, a framework that addresses this limitation by combining a dedicated Bayesian sparse regression model with a new tree-based rule generator and Shapley value attribution. RuleSHAP provides detection of nonlinear and interaction effects with uncertainty quantification at the individual level. We derive an efficient formula for computing marginal Shapley values within this framework. We demonstrate the validity of our framework on simulated data. Finally, we apply RuleSHAP to data from an epidemiological cohort to detect and infer several effects for high cholesterol and blood pressure, such as nonlinear interaction effects between features like age, sex, ethnicity, BMI and glucose level.
- oai:arXiv.org:2505.00571v2
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Giorgio Spadaccini, Marjolein Fokkema, Mark A. van de Wiel
-
-
- New affine invariant ensemble samplers and their dimensional scaling
- https://arxiv.org/abs/2505.02987
- arXiv:2505.02987v3 Announce Type: replace
-Abstract: We introduce new affine invariant ensemble Markov chain Monte Carlo (MCMC) samplers that are easy to construct and improve upon existing methods, especially for high-dimensional problems. We first propose a simple derivative-free side move sampler that improves upon popular samplers in the \texttt{emcee} package by generating more effective proposal directions. We then develop a class of derivative-based affine invariant ensemble Hamiltonian Monte Carlo (HMC) samplers based on antisymmetric preconditioning using complementary ensembles, which outperform standard, non-affine-invariant HMC when sampling highly anisotropic distributions. We provide asymptotic scaling analysis for high-dimensional Gaussian targets to further elucidate the properties of these affine invariant ensemble samplers. In particular, with derivative information, the affine invariant ensemble HMC can scale much better with dimension compared to derivative-free ensemble samplers.
- oai:arXiv.org:2505.02987v3
- stat.CO
- cs.LG
- math.ST
- stat.ML
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yifan Chen
-
-
- Value of Information-based assessment of strain-based thickness loss monitoring in ship hull structures
- https://arxiv.org/abs/2505.07427
- arXiv:2505.07427v2 Announce Type: replace
-Abstract: Recent advances in Structural Health Monitoring (SHM) have attracted industry interest, yet real-world applications, such as in ship structures remain scarce. Despite SHM's potential to optimise maintenance, its adoption in ships is limited due to the lack of clearly quantifiable benefits for hull maintenance. This study employs a Bayesian pre-posterior decision analysis to quantify the value of information (VoI) from SHM systems monitoring corrosion-induced thickness loss (CITL) in ship hulls, in a first-of-its-kind analysis for ship structures. We define decision-making consequence cost functions based on exceedance probabilities relative to a target CITL threshold, which can be set by the decision-maker. This introduces a practical aspect to our framework, that enables implicitly modelling the decision-maker's risk perception. We apply this framework to a large-scale, high-fidelity numerical model of a commercial vessel and examine the relative benefits of different CITL monitoring strategies, including strain-based SHM and traditional on-site inspections.
- oai:arXiv.org:2505.07427v2
- stat.AP
- cs.CE
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-nd/4.0/
- Nicholas E. Silionis, Konstantinos N. Anyfantis
-
-
- Measuring the Impact of Missingness in Traffic Stop Data
- https://arxiv.org/abs/2505.18281
- arXiv:2505.18281v2 Announce Type: replace
-Abstract: In this article we explore the data available through the Stanford Open Policing Project. The data consist of information on millions of traffic stops across close to 100 different cities and highway patrols. Using a variety of metrics, we identify that the data is not missing completely at random. Furthermore, we develop ways of quantifying and visualizing missingness trends for different variables across the datasets. We follow up by performing a sensitivity analysis to extend work done on the outcome test as well as to extend work done on sharp bounds on the average treatment effect. We demonstrate that bias calculations can fundamentally shift depending on the assumptions made about the observations for which the race variable has not been recorded. We suggest ways that our missingness sensitivity analysis can be extended to myriad different contexts.
- oai:arXiv.org:2505.18281v2
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Saatvik Kher, Johanna Hardin
-
-
- Consistent line clustering using geometric hypergraphs
- https://arxiv.org/abs/2505.24868
- arXiv:2505.24868v2 Announce Type: replace
-Abstract: Many datasets are naturally modeled as graphs, where vertices denote entities and edges encode pairwise interactions. However, some problems exhibit higher-order structure that lies beyond this framework. Among the simplest examples is line clustering, in which points in a Euclidean space are grouped into clusters well approximated by line segments. As any two points trivially determine a line, the relevant structure emerges only when considering higher-order tuples. To capture this, we construct a 3-uniform hypergraph by treating sets of three points as hyperedges whenever they are approximately collinear. This geometric hypergraph encodes information about the underlying line segments, which can be extracted using community recovery algorithms. We characterize the fundamental limits of line clustering and establish the near-optimality of hypergraph-based methods. In particular, we derive information-theoretic thresholds for exact and almost exact recovery for noisy observations from intersecting lines in the plane. Finally, we introduce a polynomial-time spectral algorithm that succeeds up to polylogarithmic factors of the information-theoretic bounds.
- oai:arXiv.org:2505.24868v2
- math.ST
- stat.ML
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Kalle Alaluusua, Konstantin Avrachenkov, B. R. Vinay Kumar, Lasse Leskel\"a
-
-
- Learning quadratic neural networks in high dimensions: SGD dynamics and scaling laws
- https://arxiv.org/abs/2508.03688
- arXiv:2508.03688v3 Announce Type: replace
-Abstract: We study the optimization and sample complexity of gradient-based training of a two-layer neural network with quadratic activation function in the high-dimensional regime, where the data is generated as $f_*(\boldsymbol{x}) \propto \sum_{j=1}^{r}\lambda_j \sigma\left(\langle \boldsymbol{\theta_j}, \boldsymbol{x}\rangle\right), \boldsymbol{x} \sim N(0,\boldsymbol{I}_d)$, $\sigma$ is the 2nd Hermite polynomial, and $\lbrace\boldsymbol{\theta}_j \rbrace_{j=1}^{r} \subset \mathbb{R}^d$ are orthonormal signal directions. We consider the extensive-width regime $r \asymp d^\beta$ for $\beta \in [0, 1)$, and assume a power-law decay on the (non-negative) second-layer coefficients $\lambda_j\asymp j^{-\alpha}$ for $\alpha \geq 0$. We present a sharp analysis of the SGD dynamics in the feature learning regime, for both the population limit and the finite-sample (online) discretization, and derive scaling laws for the prediction risk that highlight the power-law dependencies on the optimization time, sample size, and model width. Our analysis combines a precise characterization of the associated matrix Riccati differential equation with novel matrix monotonicity arguments to establish convergence guarantees for the infinite-dimensional effective dynamics.
- oai:arXiv.org:2508.03688v3
- stat.ML
- cs.LG
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- G\'erard Ben Arous, Murat A. Erdogdu, Nuri Mert Vural, Denny Wu
-
-
- Reframing Three-Dimensional Morphometrics Through Functional Data Innovations
- https://arxiv.org/abs/2509.00650
- arXiv:2509.00650v2 Announce Type: replace
-Abstract: This study innovates geometric morphometrics by incorporating functional data analysis, the square-root velocity function (SRVF), and arc-length parameterisation for 3D morphometric data, leading to the development of seven new pipelines in addition to the standard geometric morphometrics (GM) approach. This enables three-dimensional images to be examined from perspectives that do not neglect curvature, through the combined use of arc-length parameterisation, soft-alignment, and elastic-alignment. A simulation study was conducted to demonstrate the general effectiveness of eight pipelines: geometric morphometrics (GM, baseline), arc-GM, functional data morphometrics (FDM), arc-FDM, soft-SRV-FDM, arc-soft-SRV-FDM, elastic-SRV-FDM, and arc-elastic-SRV-FDM. These pipelines were also applied to distinguish dietary categories of kangaroos (omnivores, mixed feeders, browsers, and grazers) using cranial landmarks obtained from 41 extant species. Principal component analysis was conducted, followed by classification analysis using linear discriminant analysis, multinomial regression and support vector machines with a linear kernel. The results highlight the effectiveness of functional data analysis, together with arc-length and SRVF-based approaches, in opening the door to more robust perspectives for analysing three-dimensional morphometrics, while establishing geometric morphometrics as the baseline for comparison.
- oai:arXiv.org:2509.00650v2
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Aneesha Balachandran Pillay, Issa-Mbenard Dabo, Sophie Dabo-Niang, Dharini Pathmanathan
-
-
- Lipschitz-Guided Design of Interpolation Schedules in Generative Models
- https://arxiv.org/abs/2509.01629
- arXiv:2509.01629v2 Announce Type: replace
-Abstract: We study the design of interpolation schedules in the stochastic interpolants framework for flow and diffusion-based generative models. We show that while all scalar interpolation schedules achieve identical statistical efficiency under Kullback-Leibler divergence in path space after optimal diffusion coefficient tuning, their numerical efficiency can differ substantially. This motivates focusing on numerical properties of the resulting drift fields rather than purely statistical criteria for schedule design. We propose averaged squared Lipschitzness minimization as a principled criterion for numerical optimization, providing an alternative to kinetic energy minimization used in optimal transport approaches. A transfer formula is derived that enables conversion between different schedules at inference time without retraining neural networks. For Gaussian distributions, the optimized schedules achieve exponential improvements in Lipschitz constants over standard linear schedules, while for Gaussian mixtures, they reduce mode collapse in few-step sampling. We also validate our approach on high-dimensional invariant distributions from stochastic Allen-Cahn equations and Navier-Stokes equations, demonstrating robust performance improvements across resolutions.
- oai:arXiv.org:2509.01629v2
- stat.ML
- cs.LG
- cs.NA
- math.NA
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Yifan Chen, Eric Vanden-Eijnden, Jiawei Xu
-
-
- Robustified Gaussian quasi-likelihood inference for volatility
- https://arxiv.org/abs/2510.02666
- arXiv:2510.02666v3 Announce Type: replace
-Abstract: We consider statistical inference for a class of continuous semimartingale regression models based on high-frequency observations subject to contamination by finite-activity jumps and spike noise. By employing density-power weighting and H\"{o}lder-inequality-based normalization, we propose easy-to-implement, robustified versions of the conventional Gaussian quasi-maximum-likelihood estimator that require only a single tuning parameter. We prove their asymptotic mixed normality at the standard rate of $\sqrt{n}$. It is theoretically shown that these estimators are simultaneously robust against contamination in both the covariate and response processes. Additionally, under suitable conditions on the selection of the tuning parameter, the proposed estimators achieve the same asymptotic distribution as the conventional estimator in the contamination-free case. Illustrative simulation results highlight the estimators' insensitivity to the choice of the tuning parameter.
- oai:arXiv.org:2510.02666v3
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Shoichi Eguchi, Hiroki Masuda
-
-
- Bayesian spatio-temporal weighted regression for integrating missing and misaligned environmental data
- https://arxiv.org/abs/2511.02149
- arXiv:2511.02149v2 Announce Type: replace
-Abstract: Estimating environmental exposures from multi-source data is central to public health research and policy. Integrating data from satellite products and ground monitors is increasingly used to produce exposure surfaces. However, spatio-temporal misalignment, often induced by missing data, introduces substantial uncertainty and reduces predictive accuracy. We propose a Bayesian weighted predictor regression framework that models spatio-temporal relationships when predictors are observed on irregular supports or have substantial missing data, and are not concurrent with the outcome. The key feature of our model is a spatio-temporal kernel that aggregates the predictor over local space-time neighborhoods, built directly into the likelihood, eliminating any separate gap-filling or forced data alignment stage. We introduce a numerical approximation using a Voronoi-based spatial quadrature combined with irregular temporal increments for estimation under data missingness and misalignment. We show that misspecification of the spatial and temporal lags induces bias in the mean and parameter estimates, indicating the need for principled parameter selection. Simulation studies confirmed these findings, where careful tuning was critical to control bias and achieve accurate prediction, while the proposed quadrature performed well under severe missingness. As an illustrative application, we estimated fine particulate matter (PM$_{2.5}$) in northern California using satellite-derived aerosol optical depth (AOD) and wildfire smoke plume indicators. Relative to a traditional collocated linear model, our approach improved out-of-sample predictive performance, reduced uncertainty, and yielded robust temporal predictions and spatial surface estimation. Our framework is extensible to additional spatio-temporally varying covariates and other kernel families.
- oai:arXiv.org:2511.02149v2
- stat.ME
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Yovna Junglee, Vianey Leos Barajas, Meredith Franklin
-
-
- On The Hidden Biases of Flow Matching Samplers
- https://arxiv.org/abs/2512.16768
- arXiv:2512.16768v2 Announce Type: replace
-Abstract: We study the implicit bias of flow matching (FM) samplers via the lens of empirical flow matching. Although population FM may produce gradient-field velocities resembling optimal transport (OT), we show that the empirical FM minimizer is generally not a gradient field, even when each conditional flow is. Consequently, empirical FM is intrinsically not OT-optimal in the Benamou-Brenier sense. In view of this, we analyze the kinetic energy of generated samples. With Gaussian sources, both instantaneous and integrated kinetic energies exhibit exponential concentration, while heavy-tailed sources lead to polynomial tails. These behaviors are governed primarily by the choice of source distribution rather than the data. Overall, these notes provide a concise mathematical account of the structural and energetic biases arising in empirical FM.
- oai:arXiv.org:2512.16768v2
- stat.ML
- cs.LG
- math.PR
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Soon Hoe Lim
-
-
- Estimation and Inference for Causal Explainability
- https://arxiv.org/abs/2512.20219
- arXiv:2512.20219v4 Announce Type: replace
-Abstract: Understanding how much each variable contributes to an outcome is a central question across disciplines. A causal view of explainability is favorable for its ability in uncovering underlying mechanisms and generalizing to new contexts. Based on a family of causal explainability quantities, we develop methods for their estimation and inference. In particular, we construct a one-step correction estimator using semi-parametric efficiency theory, which explicitly leverages the independence structure of variables to reduce the asymptotic variance. For a null hypothesis on the boundary, i.e., zero explainability, we show its equivalence to Fisher's sharp null, which motivates a randomization-based inference procedure. Finally, we illustrate the empirical efficacy of our approach through simulations as well as an immigration experiment dataset, where we investigate how features and their interactions shape public opinion toward admitting immigrants.
- oai:arXiv.org:2512.20219v4
- stat.ME
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://creativecommons.org/licenses/by/4.0/
- Weihan Zhang, Zijun Gao
-
-
- A Sieve M-Estimator for Entropic Optimal Transport
- https://arxiv.org/abs/2512.21981
- arXiv:2512.21981v2 Announce Type: replace
-Abstract: The entropically regularized optimal transport problem between probability measures on compact Euclidean subsets can be represented as an information projection with moment inequality constraints. This allows its Fenchel dual to be approximated by a sequence of convex, finite-dimensional problems using sieve methods, enabling tractable estimation of the primal value and dual optimizers from samples. Assuming only continuity of the cost function, I establish almost sure consistency of these estimators. I derive a finite-sample convergence rate for the primal value estimator, showing logarithmic dependence on sieve complexity, and quantify uncertainty for the dual optimal value estimator via matching stochastic bounds involving suprema of centered Gaussian processes. These results provide the first statistical guarantees for sieve-based estimators of entropic optimal transport, extending beyond the empirical Sinkhorn approach.
- oai:arXiv.org:2512.21981v2
- math.ST
- stat.TH
- Thu, 01 Jan 2026 00:00:00 -0500
- replace
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Rami V. Tabri
-
-
- To ArXiv or not to ArXiv: A Study Quantifying Pros and Cons of Posting Preprints Online
- https://arxiv.org/abs/2203.17259
- arXiv:2203.17259v4 Announce Type: replace-cross
-Abstract: Double-blind conferences have engaged in debates over whether to allow authors to post their papers online on arXiv or elsewhere during the review process. Independently, some authors of research papers face the dilemma of whether to put their papers on arXiv due to its pros and cons. We conduct a study to substantiate this debate and dilemma via quantitative measurements. Specifically, we conducted surveys of reviewers in two top-tier double-blind computer science conferences -- ICML 2021 (5361 submissions and 4699 reviewers) and EC 2021 (498 submissions and 190 reviewers). Our three main findings are as follows. First, more than a third of the reviewers self-report searching online for a paper they are assigned to review. Second, conference policies restricting authors from publicising their work on social media or posting preprints before the review process may have only limited effectiveness in maintaining anonymity. Third, outside the review process, we find that preprints from better-ranked institutions experience a very small increase in visibility compared to preprints from other institutions.
- oai:arXiv.org:2203.17259v4
- cs.DL
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Charvi Rastogi, Ivan Stelmakh, Xinwei Shen, Marina Meila, Federico Echenique, Shuchi Chawla, Nihar B. Shah
-
-
- Active Learning with Neural Networks: Insights from Nonparametric Statistics
- https://arxiv.org/abs/2210.08367
- arXiv:2210.08367v2 Announce Type: replace-cross
-Abstract: Deep neural networks have great representation power, but typically require large numbers of training examples. This motivates deep active learning methods that can significantly reduce the amount of labeled training data. Empirical successes of deep active learning have been recently reported in the literature, however, rigorous label complexity guarantees of deep active learning have remained elusive. This constitutes a significant gap between theory and practice. This paper tackles this gap by providing the first near-optimal label complexity guarantees for deep active learning. The key insight is to study deep active learning from the nonparametric classification perspective. Under standard low noise conditions, we show that active learning with neural networks can provably achieve the minimax label complexity, up to disagreement coefficient and other logarithmic terms. When equipped with an abstention option, we further develop an efficient deep active learning algorithm that achieves $\mathsf{polylog}(\frac{1}{\epsilon})$ label complexity, without any low noise assumptions. We also provide extensions of our results beyond the commonly studied Sobolev/H\"older spaces and develop label complexity guarantees for learning in Radon $\mathsf{BV}^2$ spaces, which have recently been proposed as natural function spaces associated with neural networks.
- oai:arXiv.org:2210.08367v2
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Yinglun Zhu, Robert Nowak
-
-
- The Power of Preconditioning in Overparameterized Low-Rank Matrix Sensing
- https://arxiv.org/abs/2302.01186
- arXiv:2302.01186v4 Announce Type: replace-cross
-Abstract: We propose $\textsf{ScaledGD($\lambda$)}$, a preconditioned gradient descent method to tackle the low-rank matrix sensing problem when the true rank is unknown, and when the matrix is possibly ill-conditioned. Using overparameterized factor representations, $\textsf{ScaledGD($\lambda$)}$ starts from a small random initialization, and proceeds by gradient descent with a specific form of damped preconditioning to combat bad curvatures induced by overparameterization and ill-conditioning. At the expense of light computational overhead incurred by preconditioners, $\textsf{ScaledGD($\lambda$)}$ is remarkably robust to ill-conditioning compared to vanilla gradient descent ($\textsf{GD}$) even with overparameterization. Specifically, we show that, under the Gaussian design, $\textsf{ScaledGD($\lambda$)}$ converges to the true low-rank matrix at a constant linear rate after a small number of iterations that scales only logarithmically with respect to the condition number and the problem dimension. This significantly improves over the convergence rate of vanilla $\textsf{GD}$ which suffers from a polynomial dependency on the condition number. Our work provides evidence on the power of preconditioning in accelerating the convergence without hurting generalization in overparameterized learning.
- oai:arXiv.org:2302.01186v4
- cs.LG
- eess.SP
- math.OC
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Xingyu Xu, Yandi Shen, Yuejie Chi, Cong Ma
-
-
- Multi-fidelity Bayesian Optimization: A Review
- https://arxiv.org/abs/2311.13050
- arXiv:2311.13050v3 Announce Type: replace-cross
-Abstract: Residing at the intersection of multi-fidelity optimization (MFO) and Bayesian optimization (BO), MF BO has found a niche in solving expensive engineering design optimization problems, thanks to its advantages in incorporating physical and mathematical understandings of the problems, saving resources, addressing the exploitation-exploration trade-off, considering uncertainty, and supporting parallel computing. The increasing number of works dedicated to MF BO suggests the need for a comprehensive review of this advanced optimization technique. In this paper, we survey recent developments of two essential ingredients of MF BO: Gaussian process (GP) based MF surrogates and acquisition functions. We first categorize the existing MF modeling methods and MFO strategies to locate MF BO in a large family of surrogate-based optimization and MFO algorithms. We then exploit the common properties shared between the methods from each ingredient of MF BO to describe important GP-based MF surrogate models and review various acquisition functions. By doing so, we expect to provide a structured understanding of MF BO. Finally, we attempt to reveal important aspects that require further research for applications of MF BO in solving intricate yet important design optimization problems, including constrained optimization, high-dimensional optimization, optimization under uncertainty, and multi-objective optimization.
- oai:arXiv.org:2311.13050v3
- cs.CE
- cs.LG
- math.OC
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- 10.2514/1.J063812
- AIAA Journal 63:6 (2025) 2286-2322
- Bach Do, Ruda Zhang
-
-
- Minibatch Optimal Transport and Perplexity Bound Estimation in Discrete Flow Matching
- https://arxiv.org/abs/2411.00759
- arXiv:2411.00759v3 Announce Type: replace-cross
-Abstract: Discrete flow matching, a recent framework for modeling categorical data, has shown competitive performance with autoregressive models. However, unlike continuous flow matching, the rectification strategy cannot be applied due to the stochasticity of discrete paths, necessitating alternative methods to minimize state transitions. We propose a dynamic-optimal-transport-like minimization objective and derive its Kantorovich formulation for discrete flows with convex interpolants, where transport cost depends solely on inter-state similarity and can be optimized via minibatch strategies. In the case of bag-of-words (BoW) sourced flows, we show that such methods can reduce the number of transitions up to 8 times (1024 to 128) to reach the same generative perplexity without compromising diversity. Additionally, path nondeterminism in discrete flows precludes an instantaneous change-of-variables analogue, preventing precise probability estimation available to continuous flows. We therefore propose two upper bounds on perplexity, enabling principled training, evaluation and model comparison. Finally, we introduce Multimask Flows which outperform masked flows in generative perplexity, particularly when utilizing minibatch Optimal Transport, without sacrificing diversity.
- oai:arXiv.org:2411.00759v3
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Etrit Haxholli, Yeti Z. G\"urb\"uz, O\u{g}ul Can, Eli Waxman
-
-
- Mathematical artificial data for operator learning
- https://arxiv.org/abs/2507.06752
- arXiv:2507.06752v2 Announce Type: replace-cross
-Abstract: Machine learning has emerged as a transformative tool for solving differential equations (DEs), yet prevailing methodologies remain constrained by dual limitations: data-driven methods demand costly labeled datasets while model-driven techniques face efficiency-accuracy trade-offs. We present the Mathematical Artificial Data (MAD) framework, a new paradigm that integrates physical laws with data-driven learning to facilitate large-scale operator discovery. By exploiting DEs' intrinsic mathematical structure to generate physics-embedded analytical solutions and associated synthetic data, MAD fundamentally eliminates dependence on experimental or simulated training data. This enables computationally efficient operator learning across multi-parameter systems while maintaining mathematical rigor. Through numerical demonstrations spanning 2D parametric problems where both the boundary values and source term are functions, we showcase MAD's generalizability and superior efficiency/accuracy across various DE scenarios. This physics-embedded-data-driven framework and its capacity to handle complex parameter spaces gives it the potential to become a universal paradigm for physics-informed machine intelligence in scientific computing.
- oai:arXiv.org:2507.06752v2
- cs.LG
- cs.NA
- math.NA
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by-nc-sa/4.0/
- Heng Wu, Benzhuo Lu
-
-
- Sampling from Gaussian Processes: A Tutorial and Applications in Global Sensitivity Analysis and Optimization
- https://arxiv.org/abs/2507.14746
- arXiv:2507.14746v2 Announce Type: replace-cross
-Abstract: High-fidelity simulations and physical experiments are essential for engineering analysis and design, yet their high cost often makes two critical tasks--global sensitivity analysis (GSA) and optimization--prohibitively expensive. This limitation motivates the common use of Gaussian processes (GPs) as proxy regression models that provide uncertainty-aware predictions from a limited number of high-quality observations. GPs naturally enable efficient sampling strategies that support informed decision-making under uncertainty by extracting information from a subset of possible functions for the model of interest. However, direct sampling from GPs is inefficient due to their infinite-dimensional nature and the high cost associated with large covariance matrix operations. Despite their popularity in machine learning and statistics communities, sampling from GPs has received little attention in the community of engineering optimization. In this paper, we present the formulation and detailed implementation of two notable sampling methods--random Fourier features and pathwise conditioning--for generating posterior samples from GPs at reduced computational cost. Alternative approaches are briefly described. Importantly, we detail how the generated samples can be applied in GSA, single-objective optimization, and multi-objective optimization. We show successful applications of these sampling methods through a series of numerical examples.
- oai:arXiv.org:2507.14746v2
- cs.LG
- math.OC
- stat.AP
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Bach Do, Nafeezat A. Ajenifuja, Taiwo A. Adebiyi, Ruda Zhang
-
-
- Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications
- https://arxiv.org/abs/2508.07473
- arXiv:2508.07473v2 Announce Type: replace-cross
-Abstract: In Online Convex Optimization (OCO), when the stochastic gradient has a finite variance, many algorithms provably work and guarantee a sublinear regret. However, limited results are known if the gradient estimate has a heavy tail, i.e., the stochastic gradient only admits a finite $\mathsf{p}$-th central moment for some $\mathsf{p}\in\left(1,2\right]$. Motivated by it, this work examines different old algorithms for OCO (e.g., Online Gradient Descent) in the more challenging heavy-tailed setting. Under the standard bounded domain assumption, we establish new regrets for these classical methods without any algorithmic modification. Remarkably, these regret bounds are fully optimal in all parameters (can be achieved even without knowing $\mathsf{p}$), suggesting that OCO with heavy tails can be solved effectively without any extra operation (e.g., gradient clipping). Our new results have several applications. A particularly interesting one is the first provable and optimal convergence result for nonsmooth nonconvex optimization under heavy-tailed noise without gradient clipping. Furthermore, we explore broader settings (e.g., smooth OCO) and extend our ideas to optimistic algorithms to handle different cases simultaneously.
- oai:arXiv.org:2508.07473v2
- cs.LG
- math.OC
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Zijian Liu
-
-
- Closing the Evidence Gap: reddemcee, a Fast Adaptive Parallel Tempering Sampler
- https://arxiv.org/abs/2509.24870
- arXiv:2509.24870v2 Announce Type: replace-cross
-Abstract: Markov Chain Monte Carlo (MCMC) excels at sampling complex posteriors but traditionally lags behind nested sampling in accurate evidence estimation, which is crucial for model comparison in astrophysical problems. We introduce reddemcee, an Adaptive Parallel Tempering Ensemble Sampler, aiming to close this gap by simultaneously presenting next-generation automated temperature-ladder adaptation techniques and robust, low-bias evidence estimators. reddemcee couples an affine-invariant stretch move with five interchangeable ladder-adaptation objectives, Uniform Swap Acceptance Rate, Swap Mean Distance, Gaussian-Area Overlap, Small Gaussian Gap, and Equalised Thermodynamic Length, implemented through a common differential update rule. Three evidence estimators are provided: Curvature-aware Thermodynamic Integration (TI+), Geometric-Bridge Stepping Stones (SS+), and a novel Hybrid algorithm that blends both approaches (H+). Performance and accuracy are benchmarked on n-dimensional Gaussian Shells, Gaussian Egg-box, Rosenbrock Functions, and exoplanet radial-velocity time-series of HD 20794. Across Shells up to 15 dimensions, reddemcee presents roughly 7 times the effective sampling speed of the best dynamic nested sampling configuration. The TI+, SS+ and H+ estimators recover estimates under 3 percent error and supply realistic uncertainties with as few as six temperatures. In the HD 20794 case study, reddemcee reproduces literature model rankings and yields tighter yet consistent planetary parameters compared with dynesty, with evidence errors that track run-to-run dispersion. By unifying fast ladder adaptation with reliable evidence estimators, reddemcee delivers strong throughput and accurate evidence estimates, often matching, and occasionally surpassing, dynamic nested sampling, while preserving the rich posterior information which makes MCMC indispensable for modern Bayesian inference.
- oai:arXiv.org:2509.24870v2
- astro-ph.IM
- stat.AP
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Pablo A. Pe\~na, James S. Jenkins
-
-
- Enhancing Diffusion-Based Sampling with Molecular Collective Variables
- https://arxiv.org/abs/2510.11923
- arXiv:2510.11923v2 Announce Type: replace-cross
-Abstract: Diffusion-based samplers learn to sample complex, high-dimensional distributions using energies or log densities alone, without training data. Yet, they remain impractical for molecular sampling because they are often slower than molecular dynamics and miss thermodynamically relevant modes. Inspired by enhanced sampling, we encourage exploration by introducing a sequential bias along bespoke, information-rich, low-dimensional projections of atomic coordinates known as collective variables (CVs). We introduce a repulsive potential centered on the CVs from recent samples, which pushes future samples towards novel CV regions and effectively increases the temperature in the projected space. Our resulting method improves efficiency, mode discovery, enables the estimation of free energy differences, and retains independent sampling from the approximate Boltzmann distribution via reweighting by the bias. On standard peptide conformational sampling benchmarks, the method recovers diverse conformational states and accurate free energy profiles. We are the first to demonstrate reactive sampling using a diffusion-based sampler, capturing bond breaking and formation with universal interatomic potentials at near-first-principles accuracy. The approach resolves reactive energy landscapes at a fraction of the wall-clock time of standard sampling methods, advancing diffusion-based sampling towards practical use in molecular sciences.
- oai:arXiv.org:2510.11923v2
- physics.chem-ph
- cs.LG
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Juno Nam, B\'alint M\'at\'e, Artur P. Toshev, Manasa Kaniselvan, Rafael G\'omez-Bombarelli, Ricky T. Q. Chen, Brandon Wood, Guan-Horng Liu, Benjamin Kurt Miller
-
-
- Human- vs. AI-generated tests: dimensionality and information accuracy in latent trait evaluation
- https://arxiv.org/abs/2510.24739
- arXiv:2510.24739v2 Announce Type: replace-cross
-Abstract: Artificial Intelligence (AI) and large language models (LLMs) are increasingly used in social and psychological research. Among potential applications, LLMs can be used to generate, customise, or adapt measurement instruments. This study presents a preliminary investigation of AI-generated questionnaires by comparing two ChatGPT-based adaptations of the Body Awareness Questionnaire (BAQ) with the validated human-developed version. The AI instruments were designed with different levels of explicitness in content and instructions on construct facets, and their psychometric properties were assessed using a Bayesian Graded Response Model. Results show that although surface wording between AI and original items was similar, differences emerged in dimensionality and in the distribution of item and test information across latent traits. These findings illustrate the importance of applying statistical measures of accuracy to ensure the validity and interpretability of AI-driven tools.
- oai:arXiv.org:2510.24739v2
- cs.HC
- cs.IT
- math.IT
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Mario Angelelli, Morena Oliva, Serena Arima, Enrico Ciavolino
-
-
- Deep sequence models tend to memorize geometrically; it is unclear why
- https://arxiv.org/abs/2510.26745
- arXiv:2510.26745v2 Announce Type: replace-cross
-Abstract: Deep sequence models are said to store atomic facts predominantly in the form of associative memory: a brute-force lookup of co-occurring entities. We identify a dramatically different form of storage of atomic facts that we term as geometric memory. Here, the model has synthesized embeddings encoding novel global relationships between all entities, including ones that do not co-occur in training. Such storage is powerful: for instance, we show how it transforms a hard reasoning task involving an $\ell$-fold composition into an easy-to-learn $1$-step navigation task.
- From this phenomenon, we extract fundamental aspects of neural embedding geometries that are hard to explain. We argue that the rise of such a geometry, as against a lookup of local associations, cannot be straightforwardly attributed to typical supervisory, architectural, or optimizational pressures. Counterintuitively, a geometry is learned even when it is more complex than the brute-force lookup.
- Then, by analyzing a connection to Node2Vec, we demonstrate how the geometry stems from a spectral bias that -- in contrast to prevailing theories -- indeed arises naturally despite the lack of various pressures. This analysis also points practitioners to visible headroom for making Transformer memory more strongly geometric. We hope the geometric view of parametric memory encourages revisiting the default intuitions that guide researchers in areas like knowledge acquisition, capacity, discovery, and unlearning.
- oai:arXiv.org:2510.26745v2
- cs.LG
- cs.AI
- cs.CL
- stat.ML
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Shahriar Noroozizadeh, Vaishnavh Nagarajan, Elan Rosenfeld, Sanjiv Kumar
-
-
- Energy-Efficient Routing Protocol in Vehicular Opportunistic Networks: A Dynamic Cluster-based Routing Using Deep Reinforcement Learning
- https://arxiv.org/abs/2511.19026
- arXiv:2511.19026v3 Announce Type: replace-cross
-Abstract: Opportunistic Networks (OppNets) employ the Store-Carry-Forward (SCF) paradigm to maintain communication during intermittent connectivity. However, routing performance suffers due to dynamic topology changes, unpredictable contact patterns, and resource constraints including limited energy and buffer capacity. These challenges compromise delivery reliability, increase latency, and reduce node longevity in highly dynamic environments. This paper proposes Cluster-based Routing using Deep Reinforcement Learning (CR-DRL), an adaptive routing approach that integrates an Actor-Critic learning framework with a heuristic function. CR-DRL enables real-time optimal relay selection and dynamic cluster overlap adjustment to maintain connectivity while minimizing redundant transmissions and enhancing routing efficiency. Simulation results demonstrate significant improvements over state-of-the-art baselines. CR-DRL extends node lifetimes by up to 21%, reduces overall energy use by 17%, and keeps nodes active for 15% longer. Communication performance also improves, with up to 10% higher delivery ratio, 28.5% lower delay, 7% higher throughput, and data requiring 30% fewer transmission steps across the network.
- oai:arXiv.org:2511.19026v3
- cs.NI
- stat.ME
- Thu, 01 Jan 2026 00:00:00 -0500
- replace-cross
- http://creativecommons.org/licenses/by/4.0/
- Meisam Sahrifi Sani, Saeid Iranmanesh, Raad Raad, Faisel Tubbal
-