diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" --- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" +++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml" @@ -7,1230 +7,12 @@ http://www.rssboard.org/rss-specification en-us - Thu, 06 Nov 2025 05:00:11 +0000 + Sat, 08 Nov 2025 05:00:03 +0000 rss-help@arxiv.org - Thu, 06 Nov 2025 00:00:00 -0500 + Sat, 08 Nov 2025 00:00:00 -0500 - Saturday Sunday + Saturday - - Curvature of high-dimensional data - https://arxiv.org/abs/2511.02873 - arXiv:2511.02873v1 Announce Type: new -Abstract: We consider the problem of estimating curvature where the data can be viewed as a noisy sample from an underlying manifold. For manifolds of dimension greater than one there are multiple definitions of local curvature, each suggesting a different estimation process for a given data set. Recently, there has been progress in proving that estimates of ``local point cloud curvature" converge to the related smooth notion of local curvature as the density of the point cloud approaches infinity. Herein we investigate practical limitations of such convergence theorems and discuss the significant impact of bias in such estimates as reported in recent literature. We provide theoretical arguments for the fact that bias increases drastically in higher dimensions, so much so that in high dimensions, the probability that a naive curvature estimate lies in a small interval near the true curvature could be near zero. We present a probabilistic framework that enables the construction of more accurate estimators of curvature for arbitrary noise models. The efficacy of our technique is supported with experiments on spheres of dimension as large as twelve. - oai:arXiv.org:2511.02873v1 - math.ST - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jiayi Chen, Mohammad Javad Latifi Jebelli, Daniel N. Rockmore - - - From Hume to Jaynes: Induction as the Logic of Plausible Reasoning - https://arxiv.org/abs/2511.02881 - arXiv:2511.02881v1 Announce Type: new -Abstract: The problem of induction has persisted since Hume exposed the logical gap between repeated observation and universal inference. Traditional attempts to resolve it have oscillated between two extremes: the probabilistic optimism of Laplace and Jeffreys, who sought to quantify belief through probability, and the critical skepticism of Popper, who replaced confirmation with falsification. Both approaches, however, assume that induction must deliver certainty or its negation. In this paper, I argue that the problem of induction dissolves when recast in terms of logical coherence (understood as internal consistency of credences under updating) rather than truth. Following E. T. Jaynes, probability is interpreted not as frequency or decision rule but as the extension of deductive logic to incomplete information. Under this interpretation, Bayes's theorem is not an empirical statement but a consistency condition that constrains rational belief updating. Induction thus emerges as the special case of deductive reasoning applied to uncertain premises. Falsification appears as the limiting form of Bayesian updating when new data drive posterior plausibility toward zero, while the Bayes Factor quantifies the continuous spectrum of evidential strength. 
Through analytical examples, including Laplace's sunrise problem, Jeffreys's mixed prior, and confidence-based reformulations, I show that only the logic of plausible reasoning unifies these perspectives without contradiction. Induction, properly understood, is not the leap from past to future but the discipline of maintaining coherence between evidence, belief, and information. - oai:arXiv.org:2511.02881v1 - stat.OT - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tommaso Costa - - - Optimal transport with a density-dependent cost function - https://arxiv.org/abs/2511.02929 - arXiv:2511.02929v1 Announce Type: new -Abstract: A new pairwise cost function is proposed for the optimal transport barycenter problem, adopting the form of the minimal action between two points, with a Lagrangian that takes into account an underlying probability distribution. Under this notion of distance, two points can only be close if there exist paths joining them that do not traverse areas of small probability. A framework is proposed and developed for the numerical solution of the corresponding data-driven optimal transport problem. The procedure parameterizes the paths of minimal action through path dependent Chebyshev polynomials and enforces the agreement between the paths' endpoints and the given source and target distributions through an adversarial penalization. The methodology and its application to clustering and matching problems is illustrated through synthetic examples. - oai:arXiv.org:2511.02929v1 - stat.CO - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Zichu Wang, Esteban G. Tabak - - - Adaptive Orthogonalization for Stable Estimation of the Effects of Time-Varying Treatments - https://arxiv.org/abs/2511.02971 - arXiv:2511.02971v1 Announce Type: new -Abstract: Inferring the causal effects of time-varying treatments is often hindered by highly variable inverse propensity weights, particularly in settings with limited covariate overlap. Building on the key framework of Imai and Ratkovic (2015), we establish sufficient balancing conditions for identification in longitudinal studies of treatment effects and propose a novel estimator that directly targets features of counterfactual or potential covariates. Instead of balancing observed covariates, our method balances the components of covariates that are orthogonal to their history, thereby isolating the new information at each time point. This strategy directly targets the joint distribution of potential covariates and prioritizes features that are most relevant to the outcome. We prove that the resulting estimator for the mean potential outcome is consistent and asymptotically normal, even in settings where standard inverse propensity weighting fails. Extensive simulations show that our estimator attains efficiency comparable to that of g-computation while providing superior robustness to model misspecification. We apply our method to a longitudinal study of private versus public schooling in Chile, demonstrating its stability and interpretability in estimating their effects on university admission scores. - oai:arXiv.org:2511.02971v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Yige Li, Mar\'ia de los Angeles Resa, Jos\'e R. 
Zubizarreta - - - Detecting Conflicts in Evidence Synthesis Models Using Score Discrepancies - https://arxiv.org/abs/2511.02977 - arXiv:2511.02977v1 Announce Type: new -Abstract: Evidence synthesis models combine multiple data sources to estimate latent quantities of interest, enabling reliable inference on parameters that are difficult to measure directly. However, shared parameters across data sources can induce conflicts both among the data and with the assumed model structure. Detecting and quantifying such conflicts remains a challenge in model criticism. Here we propose a general framework for conflict detection in evidence synthesis models based on score discrepancies, extending prior-data conflict diagnostics to more general conflict checks in the latent space of hierarchical models. Simulation studies in an exchangeable model demonstrate that the proposed approach effectively detects between-data inconsistencies. Application to an influenza severity model illustrates its use, complementary to traditional deviance-based diagnostics, in complex real-world hierarchical settings. The proposed framework thus provides a flexible and broadly applicable tool for consistency assessment in Bayesian evidence synthesis. - oai:arXiv.org:2511.02977v1 - stat.ME - stat.CO - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Fuming Yang, David J. Nott, Anne M. Presanis - - - Constructing Large Orthogonal Minimally Aliased Response Surface Designs by Concatenating Two Definitive Screening Designs - https://arxiv.org/abs/2511.02984 - arXiv:2511.02984v1 Announce Type: new -Abstract: Orthogonal minimally aliased response surface (OMARS) designs permit the study of quantitative factors at three levels using an economical number of runs. In these designs, the linear effects of the factors are neither aliased with each other nor with the quadratic effects and the two-factor interactions. Complete catalogs of OMARS designs with up to five factors have been obtained using an enumeration algorithm. However, the algorithm is computationally demanding for designs with many factors and runs. To overcome this issue, we propose a construction method for large OMARS designs that concatenates two definitive screening designs and improves the statistical features of its parent designs. The concatenation employs an algorithm that minimizes the aliasing among the second-order effects using foldover techniques and column permutations for one of the parent designs. We study the properties of the new OMARS designs and compare them with alternative designs in the literature. - oai:arXiv.org:2511.02984v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Alan R. Vazquez, Peter Goos, Eric D. Schoen - - - Scalable Single-Cell Gene Expression Generation with Latent Diffusion Models - https://arxiv.org/abs/2511.02986 - arXiv:2511.02986v1 Announce Type: new -Abstract: Computational modeling of single-cell gene expression is crucial for understanding cellular processes, but generating realistic expression profiles remains a major challenge. This difficulty arises from the count nature of gene expression data and complex latent dependencies among genes. Existing generative models often impose artificial gene orderings or rely on shallow neural network architectures. We introduce a scalable latent diffusion model for single-cell gene expression data, which we refer to as scLDM, that respects the fundamental exchangeability property of the data. 
Our VAE uses fixed-size latent variables leveraging a unified Multi-head Cross-Attention Block (MCAB) architecture, which serves dual roles: permutation-invariant pooling in the encoder and permutation-equivariant unpooling in the decoder. We enhance this framework by replacing the Gaussian prior with a latent diffusion model using Diffusion Transformers and linear interpolants, enabling high-quality generation with multi-conditional classifier-free guidance. We show its superior performance in a variety of experiments for both observational and perturbational single-cell data, as well as downstream tasks like cell-level classification. - oai:arXiv.org:2511.02986v1 - stat.ML - cs.LG - q-bio.GN - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Giovanni Palla, Sudarshan Babu, Payam Dibaeinia, James D. Pearce, Donghui Li, Aly A. Khan, Theofanis Karaletsos, Jakub M. Tomczak - - - Unifying Information-Theoretic and Pair-Counting Clustering Similarity - https://arxiv.org/abs/2511.03000 - arXiv:2511.03000v1 Announce Type: new -Abstract: Comparing clusterings is central to evaluating unsupervised models, yet the many existing similarity measures can produce widely divergent, sometimes contradictory, evaluations. Clustering similarity measures are typically organized into two principal families, pair-counting and information-theoretic, reflecting whether they quantify agreement through element pairs or aggregate information across full cluster contingency tables. Prior work has uncovered parallels between these families and applied empirical normalization or chance-correction schemes, but their deeper analytical connection remains only partially understood. Here, we develop an analytical framework that unifies these families through two complementary perspectives. First, both families are expressed as weighted expansions of observed versus expected co-occurrences, with pair-counting arising as a quadratic, low-order approximation and information-theoretic measures as higher-order, frequency-weighted extensions. Second, we generalize pair-counting to $k$-tuple agreement and show that information-theoretic measures can be viewed as systematically accumulating higher-order co-assignment structure beyond the pairwise level. We illustrate the approaches analytically for the Rand index and Mutual Information, and show how other indices in each family emerge as natural extensions. Together, these views clarify when and why the two regimes diverge, relating their sensitivities directly to weighting and approximation order, and provide a principled basis for selecting, interpreting, and extending clustering similarity measures across applications. - oai:arXiv.org:2511.03000v1 - stat.ML - cs.IT - cs.LG - math.IT - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Alexander J. Gates - - - New sampling approaches for Shrinkage Inverse-Wishart distribution - https://arxiv.org/abs/2511.03044 - arXiv:2511.03044v1 Announce Type: new -Abstract: In this paper, we propose new sampling approaches for the Shrinkage Inverse-Wishart (SIW) distribution, a generalized family of the Inverse-Wishart distribution originally proposed by Berger et al. (2020, Annals of Statistics). It offers a flexible prior for covariance matrices and remains conjugate to the Gaussian likelihood, similar to the classical Inverse-Wishart. Despite these advantages, sampling from SIW remains challenging. 
The existing algorithm relies on a nested Gibbs sampler, which is slow and lacks rigorous theoretical analysis of its convergence. We propose a new algorithm based on the Sampling Importance Resampling (SIR) method, which is significantly faster and comes with theoretical guarantees on convergence rates. A known issue with SIR methods is the large discrepancy in importance weights, which occurs when the proposal distribution has thinner tails than the target. In the case of SIW, certain parameter settings can lead to such discrepancies, reducing the robustness of the output samples. To sample from such SIW distributions, we robustify the proposed algorithm by including a clipping step to the SIR framework which transforms large importance weights. We provide theoretical results on the convergence behavior in terms of the clipping size, and discuss strategies for choosing this parameter via simulation studies. The robustified version retains the computational efficiency of the original algorithm. - oai:arXiv.org:2511.03044v1 - stat.ME - stat.CO - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Yiye Jiang - - - Precise asymptotic analysis of Sobolev training for random feature models - https://arxiv.org/abs/2511.03050 - arXiv:2511.03050v1 Announce Type: new -Abstract: Gradient information is widely useful and available in applications, and is therefore natural to include in the training of neural networks. Yet little is known theoretically about the impact of Sobolev training -- regression with both function and gradient data -- on the generalization error of highly overparameterized predictive models in high dimensions. In this paper, we obtain a precise characterization of this training modality for random feature (RF) models in the limit where the number of trainable parameters, input dimensions, and training data tend proportionally to infinity. Our model for Sobolev training reflects practical implementations by sketching gradient data onto finite dimensional subspaces. By combining the replica method from statistical physics with linearizations in operator-valued free probability theory, we derive a closed-form description for the generalization errors of the trained RF models. For target functions described by single-index models, we demonstrate that supplementing function data with additional gradient data does not universally improve predictive performance. Rather, the degree of overparameterization should inform the choice of training method. More broadly, our results identify settings where models perform optimally by interpolating noisy function and gradient data. - oai:arXiv.org:2511.03050v1 - stat.ML - cond-mat.dis-nn - cs.LG - math.PR - math.ST - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Katharine E Fisher, Matthew TC Li, Youssef Marzouk, Timo Schorlepp - - - Beyond Maximum Likelihood: Variational Inequality Estimation for Generalized Linear Models - https://arxiv.org/abs/2511.03087 - arXiv:2511.03087v1 Announce Type: new -Abstract: Generalized linear models (GLMs) are fundamental tools for statistical modeling, with maximum likelihood estimation (MLE) serving as the classical method for parameter inference. While MLE performs well in canonical GLMs, it can become computationally inefficient near the true parameter value. 
In more general settings with non-canonical or fully general link functions, the resulting optimization landscape is often non-convex, non-smooth, and numerically unstable. To address these challenges, we investigate an alternative estimator based on solving the variational inequality (VI) formulation of the GLM likelihood equations, originally proposed by Juditsky and Nemirovski as an alternative for solving nonlinear least-squares problems. Unlike their focus on algorithmic convergence in monotone settings, we analyze the VI approach from a statistical perspective, comparing it systematically with the MLE. We also extend the theory of VI estimators to a broader class of link functions, including non-monotone cases satisfying a strong Minty condition, and show that it admits weaker smoothness requirements than MLE, enabling faster, more stable, and less locally trapped optimization. Theoretically, we establish both non-asymptotic estimation error bounds and asymptotic normality for the VI estimator, and further provide convergence guarantees for fixed-point and stochastic approximation algorithms. Numerical experiments show that the VI framework preserves the statistical efficiency of MLE while substantially extending its applicability to more challenging GLM settings. - oai:arXiv.org:2511.03087v1 - stat.ME - math.OC - math.ST - stat.ML - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Linglingzhi Zhu, Jonghyeok Lee, Yao Xie - - - Provable Accelerated Bayesian Optimization with Knowledge Transfer - https://arxiv.org/abs/2511.03125 - arXiv:2511.03125v1 Announce Type: new -Abstract: We study how Bayesian optimization (BO) can be accelerated on a target task with historical knowledge transferred from related source tasks. Existing works on BO with knowledge transfer either do not have theoretical guarantees or achieve the same regret as BO in the non-transfer setting, $\tilde{\mathcal{O}}(\sqrt{T \gamma_f})$, where $T$ is the number of evaluations of the target function and $\gamma_f$ denotes its information gain. In this paper, we propose the DeltaBO algorithm, in which a novel uncertainty-quantification approach is built on the difference function $\delta$ between the source and target functions, which are allowed to belong to different reproducing kernel Hilbert spaces (RKHSs). Under mild assumptions, we prove that the regret of DeltaBO is of order $\tilde{\mathcal{O}}(\sqrt{T (T/N + \gamma_\delta)})$, where $N$ denotes the number of evaluations from source tasks and typically $N \gg T$. In many applications, source and target tasks are similar, which implies that $\gamma_\delta$ can be much smaller than $\gamma_f$. Empirical studies on both real-world hyperparameter tuning tasks and synthetic functions show that DeltaBO outperforms other baseline methods and support our theoretical claims. 
- oai:arXiv.org:2511.03125v1 - stat.ML - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Haitao Lin, Boxin Zhao, Mladen Kolar, Chong Liu - - - Modeling Headway in Heterogeneous and Mixed Traffic Flow: A Statistical Distribution Based on a General Exponential Function - https://arxiv.org/abs/2511.03154 - arXiv:2511.03154v1 Announce Type: new -Abstract: The ability of existing headway distributions to accurately reflect the diverse behaviors and characteristics in heterogeneous traffic (different types of vehicles) and mixed traffic (human-driven vehicles with autonomous vehicles) is limited, leading to unsatisfactory goodness of fit. To address these issues, we modified the exponential function to obtain a novel headway distribution. Rather than employing Euler's number (e) as the base of the exponential function, we utilized a real number base to provide greater flexibility in modeling the observed headway. However, the proposed is not a probability function. We normalize it to calculate the probability and derive the closed-form equation. In this study, we utilized a comprehensive experiment with five open datasets: highD, exiD, NGSIM, Waymo, and Lyft to evaluate the performance of the proposed distribution and compared its performance with six existing distributions under mixed and heterogeneous traffic flow. The results revealed that the proposed distribution not only captures the fundamental characteristics of headway distribution but also provides physically meaningful parameters that describe the distribution shape of observed headways. Under heterogeneous flow on highways (i.e., uninterrupted traffic flow), the proposed distribution outperforms other candidate distributions. Under urban road conditions (i.e., interrupted traffic flow), including heterogeneous and mixed traffic, the proposed distribution still achieves decent results. - oai:arXiv.org:2511.03154v1 - stat.AP - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Natchaphon Leungbootnak, Zihao Li, Zihang Wei, Dominique Lord, Yunlong Zhang - - - On Ignorability of Preferential Sampling in Geostatistics - https://arxiv.org/abs/2511.03158 - arXiv:2511.03158v1 Announce Type: new -Abstract: Preferential sampling has attracted considerable attention in geostatistics since the pioneering work of Diggle et al. (2010). A variety of likelihood-based approaches have been developed to correct estimation bias by explicitly modelling the sampling mechanism. While effective in many applications, these methods are often computationally expensive and can be susceptible to model misspecification. In this paper, we present a surprising finding: some existing non-likelihood-based methods that ignore preferential sampling can still produce unbiased and consistent estimators under the widely used framework of Diggle et al. (2010) and its extensions. We investigate the conditions under which preferential sampling can be ignored and develop relevant estimators for both regression and covariance parameters without specifying the sampling mechanism parametrically. Simulation studies demonstrate clear advantages of our approach, including reduced estimation error, improved confidence interval coverage, and substantially lower computational cost. To show the practical utility, we further apply it to a tropical forest data set. 
- oai:arXiv.org:2511.03158v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Changqing Lu, Ganggang Xu, Junho Yang, Yongtao Guan - - - Statistical Properties of Rectified Flow - https://arxiv.org/abs/2511.03193 - arXiv:2511.03193v1 Announce Type: new -Abstract: Rectified flow (Liu et al., 2022; Liu, 2022; Wu et al., 2023) is a method for defining a transport map between two distributions, and enjoys popularity in machine learning, although theoretical results supporting the validity of these methods are scant. The rectified flow can be regarded as an approximation to optimal transport, but in contrast to other transport methods that require optimization over a function space, computing the rectified flow only requires standard statistical tools such as regression or density estimation. Because of this, one can leverage standard data analysis tools for regression and density estimation to develop empirical versions of transport maps. We study some structural properties of the rectified flow, including existence, uniqueness, and regularity, as well as the related statistical properties, such as rates of convergence and central limit theorems, for some selected estimators. To do so, we analyze separately the bounded and unbounded cases as each presents unique challenges. In both cases, we are able to establish convergence at faster rates than the ones for the usual nonparametric regression and density estimation. - oai:arXiv.org:2511.03193v1 - stat.TH - cs.LG - math.ST - stat.ME - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Gonzalo Mena, Arun Kumar Kuchibhotla, Larry Wasserman - - - Provable Separations between Memorization and Generalization in Diffusion Models - https://arxiv.org/abs/2511.03202 - arXiv:2511.03202v1 Announce Type: new -Abstract: Diffusion models have achieved remarkable success across diverse domains, but they remain vulnerable to memorization -- reproducing training data rather than generating novel outputs. This not only limits their creative potential but also raises concerns about privacy and safety. While empirical studies have explored mitigation strategies, theoretical understanding of memorization remains limited. We address this gap through developing a dual-separation result via two complementary perspectives: statistical estimation and network approximation. From the estimation side, we show that the ground-truth score function does not minimize the empirical denoising loss, creating a separation that drives memorization. From the approximation side, we prove that implementing the empirical score function requires network size to scale with sample size, spelling a separation compared to the more compact network representation of the ground-truth score function. Guided by these insights, we develop a pruning-based method that reduces memorization while maintaining generation quality in diffusion transformers. - oai:arXiv.org:2511.03202v1 - stat.ML - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Zeqi Ye, Qijie Zhu, Molei Tao, Minshuo Chen - - - RKUM: An R Package for Robust Kernel Unsupervised Methods - https://arxiv.org/abs/2511.03216 - arXiv:2511.03216v1 Announce Type: new -Abstract: RKUM is an R package developed for implementing robust kernel-based unsupervised methods. 
It provides functions for estimating the robust kernel covariance operator (CO) and the robust kernel cross-covariance operator (CCO) using generalized loss functions instead of the conventional quadratic loss. These operators form the foundation of robust kernel learning and enable reliable analysis under contaminated or noisy data conditions. The package includes implementations of robust kernel canonical correlation analysis (Kernel CCA), as well as the influence function (IF) for both standard and multiple kernel CCA frameworks. The influence function quantifies sensitivity and helps detect influential or outlying observations across two-view and multi-view datasets. Experiments using synthesized two-view and multi-view data demonstrate that the IF of the standard kernel CCA effectively identifies outliers, while the robust kernel methods implemented in RKUM exhibit reduced sensitivity to contamination. Overall, RKUM provides an efficient and extensible platform for robust kernel-based analysis in high-dimensional data applications. - oai:arXiv.org:2511.03216v1 - stat.ML - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Md Ashad Alam - - - Comment on: "Model uncertainty and missing data: An Objective Bayesian Perspective" - https://arxiv.org/abs/2511.03395 - arXiv:2511.03395v1 Announce Type: new -Abstract: We give a contributed discussion on "Model uncertainty and missing data: An Objective Bayesian Perspective", where we discuss frequentist perspectives on the proposed methodology. - oai:arXiv.org:2511.03395v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Stefan Franssen - - - Bayesian Causal Effect Estimation for Categorical Data using Staged Tree Models - https://arxiv.org/abs/2511.03399 - arXiv:2511.03399v1 Announce Type: new -Abstract: We propose a fully Bayesian approach for causal inference with multivariate categorical data based on staged tree models, a class of probabilistic graphical models capable of representing asymmetric and context-specific dependencies. To account for uncertainty in both structure and parameters, we introduce a flexible family of prior distributions over staged trees. These include product partition models to encourage parsimony, a novel distance-based prior to promote interpretable dependence patterns, and an extension that incorporates continuous covariates into the learning process. Posterior inference is achieved via a tailored Markov Chain Monte Carlo algorithm with split-and-merge moves, yielding posterior samples of staged trees from which average treatment effects and uncertainty measures are derived. Posterior summaries and uncertainty measures are obtained via techniques from the Bayesian nonparametrics literature. Two case studies on electronic fetal monitoring and cesarean delivery and on anthracycline therapy and cardiac dysfunction in breast cancer illustrate the methods. - oai:arXiv.org:2511.03399v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Andrea Cremaschi, Manuele Leonelli, Gherardo Varando - - - Multi-layer dissolution exponential-family models for weighted signed networks - https://arxiv.org/abs/2511.03420 - arXiv:2511.03420v1 Announce Type: new -Abstract: Understanding the structure of weighted signed networks is essential for analysing social systems in which relationships vary both in sign and strength. 
Despite significant advances in statistical network analysis, there is still a lack of statistical models that can jointly and rigorously account for both the sign and strength of relationships in networks. We introduce a multi-layer dissolution exponential random graph modelling framework that jointly captures the signed and weighted processes, conditional on the observed interaction structure. The framework enables rigorous assessment of structural balance effects while fully accounting for edge weights. To enhance inference, we adopt a fully-probabilistic Bayesian hierarchical approach that partially pools information across layers, with parameters estimated via an adaptive approximate exchange algorithm. We demonstrate the flexibility and explanatory power of the proposed methodology by applying it to bill sponsorship data from the 108th US Senate, revealing complex patterns of signed and weighted interactions and structural balance effects that traditional approaches are unable to capture. - oai:arXiv.org:2511.03420v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Alberto Caimo, Isabella Gollini - - - The Bradley-Terry Stochastic Block Model - https://arxiv.org/abs/2511.03467 - arXiv:2511.03467v1 Announce Type: new -Abstract: The Bradley-Terry model is widely used for the analysis of pairwise comparison data and, in essence, produces a ranking of the items under comparison. We embed the Bradley-Terry model within a stochastic block model, allowing items to cluster. The resulting Bradley-Terry SBM (BT-SBM) ranks clusters so that items within a cluster share the same tied rank. We develop a fully Bayesian specification in which all quantities-the number of blocks, their strengths, and item assignments-are jointly learned via a fast Gibbs sampler derived through a Thurstonian data augmentation. Despite its efficiency, the sampler yields coherent and interpretable posterior summaries for all model components. Our motivating application analyzes men's tennis results from ATP tournaments over the seasons 2000-2022. We find that the top 100 players can be broadly partitioned into three or four tiers in most seasons. Moreover, the size of the strongest tier was small from the mid-2000s to 2018 and has increased since, providing evidence that men's tennis has become more competitive in recent years. - oai:arXiv.org:2511.03467v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Lapo Santi, Nial Friel - - - Asymptotics of the maximum likelihood estimator of the location parameter of Pearson Type VII distribution - https://arxiv.org/abs/2511.03535 - arXiv:2511.03535v1 Announce Type: new -Abstract: We study the maximum likelihood estimator of the location parameter of the Pearson Type VII distribution with known scale. We rigorously establish precise asymptotic properties such as strong consistency, asymptotic normality, Bahadur efficiency and asymptotic variance of the maximum likelihood estimator. Our focus is the heavy-tailed case, including the Cauchy distribution. The main difficulty lies in the fact that the likelihood equation may have multiple roots; nevertheless, the maximum likelihood estimator performs well for large samples. 
- oai:arXiv.org:2511.03535v1 - math.ST - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Kazuki Okamura - - - The Structure of Cross-Validation Error: Stability, Covariance, and Minimax Limits - https://arxiv.org/abs/2511.03554 - arXiv:2511.03554v1 Announce Type: new -Abstract: Despite ongoing theoretical research on cross-validation (CV), many theoretical questions about CV remain widely open. This motivates our investigation into how properties of algorithm-distribution pairs can affect the choice for the number of folds in $k$-fold cross-validation. - Our results consist of a novel decomposition of the mean-squared error of cross-validation for risk estimation, which explicitly captures the correlations of error estimates across overlapping folds and includes a novel algorithmic stability notion, squared loss stability, that is considerably weaker than the typically required hypothesis stability in other comparable works. - Furthermore, we prove: - 1. For every learning algorithm that minimizes empirical error, a minimax lower bound on the mean-squared error of $k$-fold CV estimating the population risk $L_\mathcal{D}$: \[ \min_{k \mid n}\; \max_{\mathcal{D}}\; \mathbb{E}\!\left[\big(\widehat{L}_{\mathrm{CV}}^{(k)} - L_{\mathcal{D}}\big)^{2}\right] \;=\; \Omega\!\big(\sqrt{k}/n\big), \] where $n$ is the sample size and $k$ the number of folds. This shows that even under idealized conditions, for large values of $k$, CV cannot attain the optimum of order $1/n$ achievable by a validation set of size $n$, reflecting an inherent penalty caused by dependence between folds. - 2. Complementing this, we exhibit learning rules for which \[ - \max_{\mathcal{D}}\; \mathbb{E}\!\left[\big(\widehat{L}_{\mathrm{CV}}^{(k)} - L_{\mathcal{D}}\big)^{2}\right] \;=\; \Omega(k/n), \] matching (up to constants) the accuracy of a hold-out estimator of a single fold of size $n/k$. - Together these results delineate the fundamental trade-off in resampling-based risk estimation: CV cannot fully exploit all $n$ samples for unbiased risk evaluation, and its minimax performance is pinned between the $k/n$ and $\sqrt{k}/n$ regimes. - oai:arXiv.org:2511.03554v1 - math.ST - cs.LG - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ido Nachum, R\"udiger Urbanke, Thomas Weinberger - - - Post-2024 U.S. Presidential Election Analysis of Election and Poll Data: Real-life Validation of Prediction via Small Area Estimation and Uncertainty Quantification - https://arxiv.org/abs/2511.03555 - arXiv:2511.03555v1 Announce Type: new -Abstract: We carry out a post-election analysis of the 2024 U.S. Presidential Election (USPE) using a prediction model derived from the Small Area Estimation (SAE) methodology. With pollster data obtained one week prior to the election day, retrospectively, our SAE-based prediction model can perfectly predict the Electoral College election results in all 44 states where polling data were available. In addition to such desirable prediction accuracy, we introduce the probability of incorrect prediction (PoIP) to rigorously analyze prediction uncertainty. Since the standard bootstrap method appears inadequate for estimating PoIP, we propose a conformal inference method that yields reliable uncertainty quantification. 
We further investigate potential pollster biases by the means of sensitivity analyses and conclude that swing states are particularly vulnerable to polling bias in the prediction of the 2024 USPE. - oai:arXiv.org:2511.03555v1 - stat.AP - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Zheshi Zheng, Yuanyuan Li, Peter X. K. Song, Jiming Jiang - - - Adjusting for Heavy Censoring and Double-Dipping to Compare Risk Stratification Abilities of Existing Models for Time to Diagnosis of Huntington Disease - https://arxiv.org/abs/2511.03596 - arXiv:2511.03596v1 Announce Type: new -Abstract: Huntington disease (HD) is a genetically inherited neurodegenerative disease with progressively worsening symptoms. Accurately modeling time to HD diagnosis is essential for clinical trial design and treatment planning. Langbehn's model, the CAG-Age Product (CAP) model, the Prognostic Index Normed (PIN) model, and the Multivariate Risk Score (MRS) model have all been proposed for this task. However, differing in methodology, assumptions, and accuracy, these models may yield conflicting predictions. Few studies have systematically compared these models' performance, and those that have could be misleading due to (i) testing the models on the same data used to train them and (ii) failing to account for high rates of right censoring (80%+) in performance metrics. We discuss the theoretical foundations of the four most common models of time to HD diagnosis, offering intuitive comparisons about their practical feasibility. Further, we externally validate their risk stratification abilities using data from the ENROLL-HD study and performance metrics that adjust for censoring. Our findings guide the selection of a model for HD clinical trial design. The MRS model, which incorporates the most covariates, performed the best. However, the simpler CAP and PIN models were not far behind and may be logistically simpler to adopt. We also show how these models can be used to estimate sample sizes for an HD clinical trial, emphasizing that previous estimates would lead to underpowered trials. - oai:arXiv.org:2511.03596v1 - stat.AP - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Kyle F. Grosser, Abigail G. Foes, Stellen Li, Vraj Parikh, Tanya P. Garcia, Sarah C. Lotspeich - - - Bayesian Topological Analysis of Functional Brain Networks - https://arxiv.org/abs/2511.03605 - arXiv:2511.03605v1 Announce Type: new -Abstract: Subtle alterations in brain network topology often evade detection by traditional statistical methods. To address this limitation, we introduce a Bayesian inference framework for topological comparison of brain networks that probabilistically models within- and between-group dissimilarities. The framework employs Markov chain Monte Carlo sampling to estimate posterior distributions of test statistics and Bayes factors, enabling graded evidence assessment beyond binary significance testing. Simulations confirmed statistical consistency to permutation testing. Applied to fMRI data from the Duke-UNC Alzheimer's Disease Research Center, the framework detected topology-based network differences that conventional permutation tests failed to reveal, highlighting its enhanced sensitivity to early or subtle brain network alterations in clinical neuroimaging. 
- oai:arXiv.org:2511.03605v1 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Xukun Zhu, Michael W Lutz, Tananun Songdechakraiwut - - - Vector-valued self-normalized concentration inequalities beyond sub-Gaussianity - https://arxiv.org/abs/2511.03606 - arXiv:2511.03606v1 Announce Type: new -Abstract: The study of self-normalized processes plays a crucial role in a wide range of applications, from sequential decision-making to econometrics. While the behavior of self-normalized concentration has been widely investigated for scalar-valued processes, vector-valued processes remain comparatively underexplored, especially outside of the sub-Gaussian framework. In this contribution, we provide concentration bounds for self-normalized processes with light tails beyond sub-Gaussianity (such as Bennett or Bernstein bounds). We illustrate the relevance of our results in the context of online linear regression, with applications in (kernelized) linear bandits. - oai:arXiv.org:2511.03606v1 - stat.ML - cs.LG - math.ST - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Diego Martinez-Taboada, Tomas Gonzalez, Aaditya Ramdas - - - Colorectal Cancer Histopathological Grading using Multi-Scale Federated Learning - https://arxiv.org/abs/2511.03693 - arXiv:2511.03693v1 Announce Type: new -Abstract: Colorectal cancer (CRC) grading is a critical prognostic factor but remains hampered by inter-observer variability and the privacy constraints of multi-institutional data sharing. While deep learning offers a path to automation, centralized training models conflict with data governance regulations and neglect the diagnostic importance of multi-scale analysis. In this work, we propose a scalable, privacy-preserving federated learning (FL) framework for CRC histopathological grading that integrates multi-scale feature learning within a distributed training paradigm. Our approach employs a dual-stream ResNetRS50 backbone to concurrently capture fine-grained nuclear detail and broader tissue-level context. This architecture is integrated into a robust FL system stabilized using FedProx to mitigate client drift across heterogeneous data distributions from multiple hospitals. Extensive evaluation on the CRC-HGD dataset demonstrates that our framework achieves an overall accuracy of 83.5%, outperforming a comparable centralized model (81.6%). Crucially, the system excels in identifying the most aggressive Grade III tumors with a high recall of 87.5%, a key clinical priority to prevent dangerous false negatives. Performance further improves with higher magnification, reaching 88.0% accuracy at 40x. These results validate that our federated multi-scale approach not only preserves patient privacy but also enhances model performance and generalization. The proposed modular pipeline, with built-in preprocessing, checkpointing, and error handling, establishes a foundational step toward deployable, privacy-aware clinical AI for digital pathology. 
- oai:arXiv.org:2511.03693v1 - stat.ML - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Md Ahasanul Arafath, Abhijit Kumar Ghosh, Md Rony Ahmed, Sabrin Afroz, Minhazul Hosen, Md Hasan Moon, Md Tanzim Reza, Md Ashad Alam - - - Robust Global Fr'echet Regression via Weight Regularization - https://arxiv.org/abs/2511.03694 - arXiv:2511.03694v1 Announce Type: new -Abstract: The Fr\'echet regression is a useful method for modeling random objects in a general metric space given Euclidean covariates. However, the conventional approach could be sensitive to outlying objects in the sense that the distance from the regression surface is large compared to the other objects. In this study, we develop a robust version of the global Fr\'echet regression by incorporating weight parameters into the objective function. We then introduce the Elastic net regularization, favoring a sparse vector of robust parameters to control the influence of outlying objects. We provide a computational algorithm to iteratively estimate the regression function and weight parameters, with providing a linear convergence property. We also propose the Bayesian information criterion to select the tuning parameters for regularization, which gives adaptive robustness along with observed data. The finite sample performance of the proposed method is demonstrated through numerical studies on matrix and distribution responses. - oai:arXiv.org:2511.03694v1 - stat.CO - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hao Li, Shonosuke Sugasawa, Shota Katayama - - - The Adaptivity Barrier in Batched Nonparametric Bandits: Sharp Characterization of the Price of Unknown Margin - https://arxiv.org/abs/2511.03708 - arXiv:2511.03708v1 Announce Type: new -Abstract: We study batched nonparametric contextual bandits under a margin condition when the margin parameter $\alpha$ is unknown. To capture the statistical price of this ignorance, we introduce the regret inflation criterion, defined as the ratio between the regret of an adaptive algorithm and that of an oracle knowing $\alpha$. We show that the optimal regret inflation grows polynomial with the horizon $T$, with exponent precisely given by the value of a convex optimization problem involving the dimension, smoothness, and batch budget. Moreover, the minimizers of this optimization problem directly prescribe the batch allocation and exploration strategy of a rate-optimal algorithm. Building on this principle, we develop RoBIN (RObust batched algorithm with adaptive BINning), which achieves the optimal regret inflation up to logarithmic factors. These results reveal a new adaptivity barrier: under batching, adaptation to an unknown margin parameter inevitably incurs a polynomial penalty, sharply characterized by a variational problem. Remarkably, this barrier vanishes when the number of batches exceeds $\log \log T$; with only a doubly logarithmic number of updates, one can recover the oracle regret rate up to polylogarithmic factors. 
- oai:arXiv.org:2511.03708v1 - math.ST - cs.LG - stat.ML - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - new - http://creativecommons.org/licenses/by/4.0/ - Rong Jiang, Cong Ma - - - Power Constrained Nonstationary Bandits with Habituation and Recovery Dynamics - https://arxiv.org/abs/2511.02944 - arXiv:2511.02944v1 Announce Type: cross -Abstract: A common challenge for decision makers is selecting actions whose rewards are unknown and evolve over time based on prior policies. For instance, repeated use may reduce an action's effectiveness (habituation), while inactivity may restore it (recovery). These nonstationarities are captured by the Reducing or Gaining Unknown Efficacy (ROGUE) bandit framework, which models real-world settings such as behavioral health interventions. While existing algorithms can compute sublinear regret policies to optimize these settings, they may not provide sufficient exploration due to overemphasis on exploitation, limiting the ability to estimate population-level effects. This is a challenge of particular interest in micro-randomized trials (MRTs) that aid researchers in developing just-in-time adaptive interventions that have population-level effects while still providing personalized recommendations to individuals. In this paper, we first develop ROGUE-TS, a Thompson Sampling algorithm tailored to the ROGUE framework, and provide theoretical guarantees of sublinear regret. We then introduce a probability clipping procedure to balance personalization and population-level learning, with quantified trade-off that balances regret and minimum exploration probability. Validation on two MRT datasets concerning physical activity promotion and bipolar disorder treatment shows that our methods both achieve lower regret than existing approaches and maintain high statistical power through the clipping procedure without significantly increasing regret. This enables reliable detection of treatment effects while accounting for individual behavioral dynamics. For researchers designing MRTs, our framework offers practical guidance on balancing personalization with statistical validity. - oai:arXiv.org:2511.02944v1 - cs.LG - cs.AI - math.OC - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fengxu Li, Stephanie M. Carpenter, Matthew P. Buman, Yonatan Mintz - - - Discrete Bayesian Sample Inference for Graph Generation - https://arxiv.org/abs/2511.03015 - arXiv:2511.03015v1 Announce Type: cross -Abstract: Generating graph-structured data is crucial in applications such as molecular generation, knowledge graphs, and network analysis. However, their discrete, unordered nature makes them difficult for traditional generative models, leading to the rise of discrete diffusion and flow matching models. In this work, we introduce GraphBSI, a novel one-shot graph generative model based on Bayesian Sample Inference (BSI). Instead of evolving samples directly, GraphBSI iteratively refines a belief over graphs in the continuous space of distribution parameters, naturally handling discrete structures. Further, we state BSI as a stochastic differential equation (SDE) and derive a noise-controlled family of SDEs that preserves the marginal distributions via an approximation of the score function. Our theoretical analysis further reveals the connection to Bayesian Flow Networks and Diffusion models. 
Finally, in our empirical evaluation, we demonstrate state-of-the-art performance on molecular and synthetic graph generation, outperforming existing one-shot graph generative models on the standard benchmarks Moses and GuacaMol. - oai:arXiv.org:2511.03015v1 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Ole Petersen, Marcel Kollovieh, Marten Lienen, Stephan G\"unnemann - - - Quantifying Power Systems Resilience Using Statistical Analysis and Bayesian Learning - https://arxiv.org/abs/2511.03043 - arXiv:2511.03043v1 Announce Type: cross -Abstract: The increasing frequency and intensity of extreme weather events is significantly affecting the power grid, causing large-scale outages and impacting power system resilience. Yet limited work has been done on systematically modeling the impacts of weather parameters to quantify resilience. This study presents a framework using statistical and Bayesian learning approaches to quantitatively model the relationship between weather parameters and power system resilience metrics. By leveraging real-world publicly available outage and weather data, we identify key weather variables of wind speed, temperature, and precipitation influencing a particular region's resilience metrics. A case study of Cook County, Illinois, and Miami-Dade County, Florida, reveals that these weather parameters are critical factors in resiliency analysis and risk assessment. Additionally, we find that these weather variables have combined effects when studied jointly compared to their effects in isolation. This framework provides valuable insights for understanding how weather events affect power distribution system performance, supporting decision-makers in developing more effective strategies for risk mitigation, resource allocation, and adaptation to changing climatic conditions. - oai:arXiv.org:2511.03043v1 - eess.SY - cs.SY - stat.AP - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Apsara Adhikari, Charlotte Wertz, Anamika Dubey, Arslan Ahmad, Ian Dobson - - - Epidemiology of Large Language Models: A Benchmark for Observational Distribution Knowledge - https://arxiv.org/abs/2511.03070 - arXiv:2511.03070v1 Announce Type: cross -Abstract: Artificial intelligence (AI) systems hold great promise for advancing various scientific disciplines, and are increasingly used in real-world applications. Despite their remarkable progress, further capabilities are expected in order to achieve more general types of intelligence. A critical distinction in this context is between factual knowledge, which can be evaluated against true or false answers (e.g., "what is the capital of England?"), and probabilistic knowledge, reflecting probabilistic properties of the real world (e.g., "what is the sex of a computer science graduate in the US?"). In this paper, our goal is to build a benchmark for understanding the capabilities of LLMs in terms of knowledge of probability distributions describing the real world. Given that LLMs are trained on vast amounts of text, it may be plausible that they internalize aspects of these distributions. Indeed, LLMs are touted as powerful universal approximators of real-world distributions. At the same time, classical results in statistics, known as curse of dimensionality, highlight fundamental challenges in learning distributions in high dimensions, challenging the notion of universal distributional learning. 
In this work, we develop the first benchmark to directly test this hypothesis, evaluating whether LLMs have access to empirical distributions describing real-world populations across domains such as economics, health, education, and social behavior. Our results demonstrate that LLMs perform poorly overall, and do not seem to internalize real-world statistics naturally. When interpreted in the context of Pearl's Causal Hierarchy (PCH), our benchmark demonstrates that language models do not contain knowledge on observational distributions (Layer 1 of PCH), and thus the Causal Hierarchy Theorem implies that interventional (Layer 2) and counterfactual (Layer 3) knowledge of these models is also limited. - oai:arXiv.org:2511.03070v1 - cs.AI - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Drago Plecko, Patrik Okanovic, Torsten Hoefler, Elias Bareinboim - - - Fast SDE-based Monte Carlo dose calculation for proton therapy validated against Geant4 - https://arxiv.org/abs/2511.03115 - arXiv:2511.03115v1 Announce Type: cross -Abstract: Objective: To validate a newly proposed stochastic differential equation (SDE)-based model for proton beam energy deposition by comparing its predictions with those from Geant4 in simplified phantom scenarios. Approach: Building on previous work in Crossley et al. (2025), where energy deposition from a proton beam was modelled using an SDE framework, we implemented the model with standard approximations to interaction cross sections and mean excitation energies, which makes simulations easily adaptable to new materials and configurations. The model was benchmarked against Geant4 in homogeneous and heterogeneous phantoms. Main results: The SDE-based dose distributions agreed well with Geant4, showing range differences within 0.4 mm and 3D gamma pass rates exceeding 98% under 3%/2 mm criteria with a 1% dose threshold. The model achieved a computational speed-up of approximately fivefold relative to Geant4, consistent across different Geant4 physics lists. Significance: These results demonstrate that the SDE approach can reproduce accuracy comparable to high-fidelity Monte Carlo for proton therapy at a fraction of the computational cost, highlighting its potential for accelerating dose calculations and treatment planning. - oai:arXiv.org:2511.03115v1 - physics.med-ph - stat.AP - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Christopher B. C. Dean, Maria L. P\'erez-Lara, Emma Horton, Matthew Southerby, Jere Koskela, Andreas E. Kyprianou - - - Cross-Modal Alignment via Variational Copula Modelling - https://arxiv.org/abs/2511.03196 - arXiv:2511.03196v1 Announce Type: cross -Abstract: Various data modalities are common in real-world applications (e.g., electronic health records, medical images and clinical notes in healthcare). It is essential to develop multimodal learning methods to aggregate various information from multiple modalities. The main challenge is how to appropriately align and fuse the representations of different modalities into a joint distribution. Existing methods mainly rely on concatenation or the Kronecker product, oversimplifying the interaction structure between modalities and indicating a need to model more complex interactions. Additionally, the joint distribution of latent representations with higher-order interactions is underexplored. 
Copula is a powerful statistical structure for modelling the interactions among variables, as it naturally bridges the joint distribution and marginal distributions of multiple variables. We propose a novel copula-driven multimodal learning framework, which focuses on learning the joint distribution of various modalities to capture the complex interactions among them. The key idea is to interpret the copula model as a tool to align the marginal distributions of the modalities efficiently. By assuming a Gaussian mixture distribution for each modality and a copula model on the joint distribution, our model can generate accurate representations for missing modalities. Extensive experiments on public MIMIC datasets demonstrate the superior performance of our model over other competitors. The code is available at https://github.com/HKU-MedAI/CMCM. - oai:arXiv.org:2511.03196v1 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - published by ICML2025 - Feng Wu, Tsai Hor Chan, Fuying Wang, Guosheng Yin, Lequan Yu - - - Unbiased Regression-Adjusted Estimation of Average Treatment Effects in Randomized Controlled Trials - https://arxiv.org/abs/2511.03236 - arXiv:2511.03236v1 Announce Type: cross -Abstract: This article introduces a leave-one-out regression adjustment estimator (LOORA) for estimating average treatment effects in randomized controlled trials. The method removes the finite-sample bias of conventional regression adjustment and provides exact variance expressions for LOORA versions of the Horvitz-Thompson and difference-in-means estimators under simple and complete random assignment. Ridge regularization limits the influence of high-leverage observations, improving stability and precision in small samples. In large samples, LOORA attains the asymptotic efficiency of regression-adjusted estimator as characterized by Lin (2013, Annals of Applied Statistics), while remaining exactly unbiased. To construct confidence intervals, we rely on asymptotic variance estimates that treat the estimator as a two-step procedure, accounting for both the regression adjustment and the random assignment stages. Two within-subject experimental applications that provide realistic joint distributions of potential outcomes as ground truth show that LOORA eliminates substantial biases and achieves close-to-nominal confidence interval coverage. - oai:arXiv.org:2511.03236v1 - econ.EM - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Alberto Abadie, Mehrdad Ghadiri, Ali Jadbabaie, Mahyar JafariNodeh - - - Topography, climate, land cover, and biodiversity: Explaining endemic richness and management implications on a Mediterranean island - https://arxiv.org/abs/2511.03242 - arXiv:2511.03242v1 Announce Type: cross -Abstract: Island endemism is shaped by complex interactions among environmental, ecological, and evolutionary factors, yet the relative contributions of topography, climate, and land cover remain incompletely quantified. We investigated the drivers of endemic plant richness across Crete, a Mediterranean biodiversity hotspot, using spatially explicit data on species distributions, topographic complexity, climatic variability, land cover, and soil characteristics. Artificial Neural Network models, a machine learning tool, were employed to assess the relative importance of these predictors and to identify hotspots of endemism. 
We found that total species richness, elevation range, and climatic variability were the strongest predictors of endemic richness, reflecting the role of biodiversity, topographic heterogeneity, and climatic gradients in generating diverse habitats and micro-refugia that promote speciation and buffer extinction risk. Endemic hotspots only partially overlapped with areas of high total species richness, indicating that total species richness was the best surrogate among those examined, yet an imperfect one. These environmentally heterogeneous areas also provide critical ecosystem services, including soil stabilization, pollination, and cultural value, which are increasingly threatened by tourism, renewable energy development, land-use change, and climate impacts. Our findings underscore the importance of prioritizing mountainous and climatically variable regions in conservation planning, integrating ecosystem service considerations, and accounting for within-island spatial heterogeneity. By explicitly linking the environmental drivers of endemism to both biodiversity patterns and ecosystem function, this study provides a framework for evidence-based conservation planning in Crete and other Mediterranean islands with similar geological and biogeographic contexts. - oai:arXiv.org:2511.03242v1 - q-bio.PE - cs.LG - stat.OT - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Aristides Moustakas, Ioannis N Vogiatzakis - - Decoupled Entropy Minimization - https://arxiv.org/abs/2511.03256 - arXiv:2511.03256v1 Announce Type: cross -Abstract: Entropy Minimization (EM) is beneficial to reducing class overlap, bridging domain gap, and restricting uncertainty for various tasks in machine learning, yet its potential is limited. To study the internal mechanism of EM, we reformulate and decouple the classical EM into two parts with opposite effects: cluster aggregation driving factor (CADF) rewards dominant classes and prompts a peaked output distribution, while gradient mitigation calibrator (GMC) penalizes high-confidence classes based on predicted probabilities. Furthermore, we reveal the limitations of classical EM caused by its coupled formulation: 1) reward collapse impedes the contribution of high-certainty samples in the learning process, and 2) easy-class bias induces misalignment between output distribution and label distribution. To address these issues, we propose Adaptive Decoupled Entropy Minimization (AdaDEM), which normalizes the reward brought from CADF and employs a marginal entropy calibrator (MEC) to replace GMC. AdaDEM outperforms DEM*, an upper-bound variant of classical EM, and achieves superior performance across various imperfectly supervised learning tasks in noisy and dynamic environments. - oai:arXiv.org:2511.03256v1 - cs.LG - cs.CV - cs.IT - math.IT - math.ST - stat.ML - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Jing Ma, Hanlin Li, Xiang Xiang - - A Probabilistic Approach to Pose Synchronization for Multi-Reference Alignment with Applications to MIMO Wireless Communication Systems - https://arxiv.org/abs/2511.03280 - arXiv:2511.03280v1 Announce Type: cross -Abstract: From molecular imaging to wireless communications, the ability to align and reconstruct signals from multiple misaligned observations is crucial for system performance.
We study the problem of multi-reference alignment (MRA), which arises in many real-world problems, such as cryo-EM, computer vision, and, in particular, wireless communication systems. Using a probabilistic approach to model MRA, we find a new algorithm that uses relative poses as nuisance variables to marginalize out -- thereby removing the global symmetries of the problem and allowing for more direct solutions and improved convergence. The decentralization of this approach enables significant computational savings by avoiding the cubic scaling of centralized methods through cycle consistency. Both proposed algorithms achieve lower reconstruction error across experimental settings. - oai:arXiv.org:2511.03280v1 - cs.LG - stat.AP - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Rob Romijnders, Gabriele Cesa, Christos Louizos, Kumar Pratik, Arash Behboodi - - - Silenced Biases: The Dark Side LLMs Learned to Refuse - https://arxiv.org/abs/2511.03369 - arXiv:2511.03369v1 Announce Type: cross -Abstract: Safety-aligned large language models (LLMs) are becoming increasingly widespread, especially in sensitive applications where fairness is essential and biased outputs can cause significant harm. However, evaluating the fairness of models is a complex challenge, and approaches that do so typically utilize standard question-answer (QA) styled schemes. Such methods often overlook deeper issues by interpreting the model's refusal responses as positive fairness measurements, which creates a false sense of fairness. In this work, we introduce the concept of silenced biases, which are unfair preferences encoded within models' latent space and are effectively concealed by safety-alignment. Previous approaches that considered similar indirect biases often relied on prompt manipulation or handcrafted implicit queries, which present limited scalability and risk contaminating the evaluation process with additional biases. We propose the Silenced Bias Benchmark (SBB), which aims to uncover these biases by employing activation steering to reduce model refusals during QA. SBB supports easy expansion to new demographic groups and subjects, presenting a fairness evaluation framework that encourages the future development of fair models and tools beyond the masking effects of alignment training. We demonstrate our approach over multiple LLMs, where our findings expose an alarming distinction between models' direct responses and their underlying fairness issues. - oai:arXiv.org:2511.03369v1 - cs.CL - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Rom Himelstein, Amit LeVi, Brit Youngmann, Yaniv Nemcovsky, Avi Mendelson - - - A Support-Set Algorithm for Optimization Problems with Nonnegative and Orthogonal Constraints - https://arxiv.org/abs/2511.03443 - arXiv:2511.03443v1 Announce Type: cross -Abstract: In this paper, we investigate optimization problems with nonnegative and orthogonal constraints, where any feasible matrix of size $n \times p$ exhibits a sparsity pattern such that each row accommodates at most one nonzero entry. Our analysis demonstrates that, by fixing the support set, the global solution of the minimization subproblem for the proximal linearization of the objective function can be computed in closed form with at most $n$ nonzero entries. Exploiting this structural property offers a powerful avenue for dramatically enhancing computational efficiency. 
Guided by this insight, we propose a support-set algorithm preserving strictly the feasibility of iterates. A central ingredient is a strategically devised update scheme for support sets that adjusts the placement of nonzero entries. We establish the global convergence of the support-set algorithm to a first-order stationary point, and show that its iteration complexity required to reach an $\epsilon$-approximate first-order stationary point is $O (\epsilon^{-2})$. Numerical results are strongly in favor of our algorithm in real-world applications, including nonnegative PCA, clustering, and community detection. - oai:arXiv.org:2511.03443v1 - math.OC - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Lei Wang, Xin Liu, Xiaojun Chen - - - Why Less is More (Sometimes): A Theory of Data Curation - https://arxiv.org/abs/2511.03492 - arXiv:2511.03492v1 Announce Type: cross -Abstract: This paper introduces a theoretical framework to resolve a central paradox in modern machine learning: When is it better to use less data? This question has become critical as classical scaling laws suggesting ``more is more'' (Sun et al., 2025) are challenged by methods like LIMO (``less is more'') and s1 (Ye et al., 2025; Muenighoff et al., 2025), which achieve superior performance with small, aggressively curated datasets. Here, we study data curation strategies where an imperfect oracle selects the training examples according to their difficulty and correctness. Our results provide exact scaling law curves for test error under both label-agnostic and label-aware curation rules, revealing when and why keeping only a subset of data can improve generalization. In contrast to classical scaling laws, we show that under certain conditions, small curated datasets can outperform full datasets, and we provide analytical conditions for this by deriving precise phase transition curves tied to data size and quality. We validate these theoretical claims with empirical results on ImageNet, confirming our predictions about when curation improves accuracy and can even mitigate model collapse. Furthermore, our framework provides a principled explanation for the contradictory curation strategies recently observed in LLM mathematical reasoning. - oai:arXiv.org:2511.03492v1 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Elvis Dohmatob, Mohammad Pezeshki, Reyhane Askari-Hemmat - - - Towards Formalizing Reinforcement Learning Theory - https://arxiv.org/abs/2511.03618 - arXiv:2511.03618v1 Announce Type: cross -Abstract: In this paper, we formalize the almost sure convergence of $Q$-learning and linear temporal difference (TD) learning with Markovian samples using the Lean 4 theorem prover based on the Mathlib library. $Q$-learning and linear TD are among the earliest and most influential reinforcement learning (RL) algorithms. The investigation of their convergence properties is not only a major research topic during the early development of the RL field but also receives increasing attention nowadays. This paper formally verifies their almost sure convergence in a unified framework based on the Robbins-Siegmund theorem. The framework developed in this work can be easily extended to convergence rates and other modes of convergence. This work thus makes an important step towards fully formalizing convergent RL results. The code is available at https://github.com/ShangtongZhang/rl-theory-in-lean. 
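As a point of reference for the entry above, the update being formalized is the standard tabular Q-learning rule with Robbins-Monro step sizes. The sketch below is a generic, self-contained illustration on a toy two-state MDP; it is not part of the Lean development linked in the abstract, and the names and toy dynamics are illustrative only.

```python
import numpy as np

# Generic tabular Q-learning on a toy 2-state, 2-action MDP. This only makes the
# update rule concrete whose almost sure convergence the entry above formalizes;
# it is not taken from the Lean repository.

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 2, 2, 0.9

# Toy dynamics: P[s, a] gives next-state probabilities, R[s, a] the mean reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

Q = np.zeros((n_states, n_actions))
counts = np.zeros((n_states, n_actions))
s = 0
for t in range(50_000):
    a = rng.integers(n_actions)                      # behaviour policy: uniform exploration
    s_next = rng.choice(n_states, p=P[s, a])
    r = R[s, a] + rng.normal(scale=0.1)              # noisy reward
    counts[s, a] += 1
    alpha = 1.0 / counts[s, a]                       # Robbins-Monro step sizes
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(np.round(Q, 3))   # Q should stabilise near the optimal action values
```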
- oai:arXiv.org:2511.03618v1 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Shangtong Zhang - - Addressing prior dependence in hierarchical Bayesian modeling for PTA data analysis I: Methodology and implementation - https://arxiv.org/abs/2511.03667 - arXiv:2511.03667v1 Announce Type: cross -Abstract: Complex inference tasks, such as those encountered in Pulsar Timing Array (PTA) data analysis, rely on Bayesian frameworks. The high-dimensional parameter space and the strong interdependencies among astrophysical, pulsar noise, and nuisance parameters introduce significant challenges for efficient learning and robust inference. These challenges are emblematic of broader issues in decision science, where model over-parameterization and prior sensitivity can compromise both computational tractability and the reliability of the results. We address these issues in the framework of hierarchical Bayesian modeling by introducing a reparameterization strategy. Our approach employs Normalizing Flows (NFs) to decorrelate the parameters governing hierarchical priors from those of astrophysical interest. The use of NF-based mappings provides both the flexibility to realize the reparametrization and the tractability to preserve proper probability densities. We further adopt i-nessai, a flow-guided nested sampler, to accelerate exploration of complex posteriors. This unified use of NFs improves statistical robustness and computational efficiency, providing a principled methodology for addressing hierarchical Bayesian inference in PTA analysis. - oai:arXiv.org:2511.03667v1 - astro-ph.IM - astro-ph.CO - astro-ph.HE - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - cross - http://creativecommons.org/licenses/by/4.0/ - Luigi D'amico, Eleonora Villa, Fatima Modica Bittordo, Aldo Barca, Francesco Al\`i, Massimo Meneghetti, Luca Naso - - The synthetic instrument: From sparse association to sparse causation - https://arxiv.org/abs/2304.01098 - arXiv:2304.01098v4 Announce Type: replace -Abstract: In many observational studies, researchers are often interested in studying the effects of multiple exposures on a single outcome. Standard approaches for high-dimensional data such as the lasso assume the associations between the exposures and the outcome are sparse. These methods, however, do not estimate the causal effects in the presence of unmeasured confounding. In this paper, we consider an alternative approach that assumes the causal effects in view are sparse. We show that with sparse causation, the causal effects are identifiable even with unmeasured confounding. At the core of our proposal is a novel device, called the synthetic instrument, that, in contrast to standard instrumental variables, can be constructed using the observed exposures directly. We show that under linear structural equation models, the problem of causal effect estimation can be formulated as an $\ell_0$-penalization problem, and hence can be solved efficiently using off-the-shelf software. Simulations show that our approach outperforms state-of-the-art methods in both low-dimensional and high-dimensional settings. We further illustrate our method using a mouse obesity dataset.
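For orientation on the $\ell_0$-penalization step mentioned in the synthetic-instrument abstract above, a brute-force best-subset least squares solver is sketched below. It is a generic illustration of the penalized objective only, feasible for a handful of columns; it is not the authors' estimator and does not construct the synthetic instrument itself.

```python
import numpy as np
from itertools import combinations

def l0_penalized_ls(X, y, lam):
    """Brute-force solution of min_beta ||y - X beta||^2 + lam * ||beta||_0.

    Only feasible for a small number of columns; shown purely to illustrate the
    l0-penalized least squares formulation, not the synthetic-instrument device.
    """
    p = X.shape[1]
    best_beta, best_obj = np.zeros(p), float(y @ y)      # start from the empty support
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            cols = list(S)
            beta_S, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            resid = y - X[:, cols] @ beta_S
            obj = float(resid @ resid) + lam * k
            if obj < best_obj:
                best_obj = obj
                best_beta = np.zeros(p)
                best_beta[cols] = beta_S
    return best_beta

# Toy example: only two of six coefficients are truly nonzero.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = X @ np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0]) + rng.normal(size=200)
print(np.round(l0_penalized_ls(X, y, lam=10.0), 2))
```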
- oai:arXiv.org:2304.01098v4 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Dingke Tang, Dehan Kong, Linbo Wang - - - Variable Selection and Minimax Prediction in High-dimensional Functional Linear Model - https://arxiv.org/abs/2310.14419 - arXiv:2310.14419v5 Announce Type: replace -Abstract: High-dimensional functional data have become increasingly prevalent in modern applications such as high-frequency financial data and neuroimaging data analysis. We investigate a class of high-dimensional linear regression models, where each predictor is a random element in an infinite-dimensional function space, and the number of functional predictors p can potentially be ultra-high. Assuming that each of the unknown coefficient functions belongs to some reproducing kernel Hilbert space (RKHS), we regularize the fitting of the model by imposing a group elastic-net type of penalty on the RKHS norms of the coefficient functions. We show that our loss function is Gateaux sub-differentiable, and our functional elastic-net estimator exists uniquely in the product RKHS. Under suitable sparsity assumptions and a functional version of the irrepresentable condition, we derive a non-asymptotic tail bound for variable selection consistency of our method. Allowing the number of true functional predictors $q$ to diverge with the sample size, we also show a post-selection refined estimator can achieve the oracle minimax optimal prediction rate. The proposed methods are illustrated through simulation studies and a real-data application from the Human Connectome Project. - oai:arXiv.org:2310.14419v5 - stat.ME - math.ST - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.5705/ss.202025.0151 - Statistica Sinica (2028) - Xingche Guo, Yehua Li, Tailen Hsing - - - Variable Selection in Maximum Mean Discrepancy for Interpretable Distribution Comparison - https://arxiv.org/abs/2311.01537 - arXiv:2311.01537v2 Announce Type: replace -Abstract: We study two-sample variable selection: identifying variables that discriminate between the distributions of two sets of data vectors. Such variables help scientists understand the mechanisms behind dataset discrepancies. Although domain-specific methods exist (e.g., in medical imaging, genetics, and computational social science), a general framework remains underdeveloped. We make two separate contributions. (i) We introduce a mathematical notion of the discriminating set of variables: the largest subset containing no variables whose marginals are identical across the two distributions and independent of the remaining variables. We prove this set is uniquely defined and establish further properties, making it a suitable ground truth for theory and evaluation. (ii) We propose two methods for two-sample variable selection that assign weights to variables and optimise them to maximise the power of a kernel two-sample test while enforcing sparsity to downweight redundant variables. To select the regularisation parameter - unknown in practice, as it controls the number of selected variables - we develop two data-driven procedures to balance recall and precision. Synthetic experiments show improved performance over baselines, and we illustrate the approach on two applications using datasets from water-pipe and traffic networks. 
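To make the weighted-kernel idea in the abstract above concrete, the sketch below assigns a nonnegative weight to each variable inside a Gaussian kernel and ascends a penalized MMD^2 objective; variables whose weights stay away from zero are the candidates for selection. It is a minimal illustration under simplifying assumptions (V-statistic estimate, finite-difference gradients, fixed bandwidth), not the authors' test-power criterion or their data-driven regularisation procedure.

```python
import numpy as np

def weighted_mmd2(X, Y, w, bandwidth=1.0):
    """Biased (V-statistic) estimate of MMD^2 with a coordinate-weighted Gaussian kernel."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * w ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

def select_variables(X, Y, lam=0.05, steps=200, lr=0.1):
    """Projected gradient ascent on MMD^2(w) - lam * ||w||_1 with w >= 0.

    Gradients are taken by finite differences to keep the sketch short; the
    weights that remain away from zero indicate discriminating variables.
    """
    w, eps = np.ones(X.shape[1]), 1e-3
    for _ in range(steps):
        grad = np.zeros_like(w)
        base = weighted_mmd2(X, Y, w)
        for j in range(len(w)):
            w_plus = w.copy()
            w_plus[j] += eps
            grad[j] = (weighted_mmd2(X, Y, w_plus) - base) / eps
        w = np.maximum(w + lr * (grad - lam), 0.0)   # lam is the subgradient of lam*||w||_1 on w > 0
    return w

# Toy check: only the first of five coordinates differs between the two samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 5))
Y[:, 0] += 1.0
print(np.round(select_variables(X, Y), 2))   # weight on coordinate 0 should remain largest
```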
- oai:arXiv.org:2311.01537v2 - stat.ML - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-sa/4.0/ - Kensuke Mitsuzawa, Motonobu Kanagawa, Stefano Bortoli, Margherita Grossi, Paolo Papotti - - - Inference of Dependency Knowledge Graph for Electronic Health Records - https://arxiv.org/abs/2312.15611 - arXiv:2312.15611v2 Announce Type: replace -Abstract: The effective analysis of high-dimensional Electronic Health Record (EHR) data, with substantial potential for healthcare research, presents notable methodological challenges. Employing predictive modeling guided by a knowledge graph (KG), which enables efficient feature selection, can enhance both statistical efficiency and interpretability. While various methods have emerged for constructing KGs, existing techniques often lack statistical certainty concerning the presence of links between entities, especially in scenarios where the utilization of patient-level EHR data is limited due to privacy concerns. In this paper, we propose the first inferential framework for deriving a sparse KG with statistical guarantee based on the dynamic log-linear topic model proposed by \cite{arora2016latent}. Within this model, the KG embeddings are estimated by performing singular value decomposition on the empirical pointwise mutual information matrix, offering a scalable solution. We then establish entrywise asymptotic normality for the KG low-rank estimator, enabling the recovery of sparse graph edges with controlled type I error. Our work uniquely addresses the under-explored domain of statistical inference about non-linear statistics under the low-rank temporal dependent models, a critical gap in existing research. We validate our approach through extensive simulation studies and then apply the method to real-world EHR data in constructing clinical KGs and generating clinical feature embeddings. - oai:arXiv.org:2312.15611v2 - stat.ME - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Zhiwei Xu, Ziming Gan, Doudou Zhou, Shuting Shen, Junwei Lu, Tianxi Cai - - - Handling incomplete outcomes and covariates in cluster-randomized trials: doubly-robust estimation, efficiency considerations, and sensitivity analysis - https://arxiv.org/abs/2401.11278 - arXiv:2401.11278v4 Announce Type: replace -Abstract: In cluster-randomized trials (CRTs), missing data can occur in various ways, including missing values in outcomes and baseline covariates at the individual or cluster level, or completely missing information for non-participants. Among the various types of missing data in CRTs, missing outcomes have attracted the most attention. However, no existing methods simultaneously address all aforementioned types of missing data in CRTs. To fill in this gap, we propose a doubly-robust estimator for the average treatment effect on a variety of effect measure scales. The proposed estimator simultaneously handles missing outcomes under missingness at random, missing covariates without constraining the missingness mechanism, and missing cluster-population sizes via a uniform sampling mechanism. Furthermore, we detail key considerations to improve precision by specifying the optimal weights, leveraging machine learning, and modeling the treatment assignment mechanism. Finally, to evaluate the impact of violating missing data assumptions, we contribute a new sensitivity analysis framework tailored to CRTs. 
We assess the performance of the proposed methods through simulations and illustrate their use in a real data application. - oai:arXiv.org:2401.11278v4 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Bingkai Wang, Fan Li, Rui Wang - - On Neighbourhood Cross Validation - https://arxiv.org/abs/2404.16490 - arXiv:2404.16490v4 Announce Type: replace -Abstract: Many varieties of cross validation would be statistically appealing for the estimation of smoothing and other penalized regression hyperparameters, were it not for the high cost of evaluating such criteria. Here it is shown how to efficiently and accurately compute and optimize a broad variety of cross validation criteria for a wide range of models estimated by minimizing a quadratically penalized loss. The leading order computational cost of hyperparameter estimation is made comparable to the cost of a single model fit given hyperparameters. In many cases this represents an $O(n)$ computational saving when modelling $n$ data. This development makes it feasible, for the first time, to use leave-out-neighbourhood cross validation to deal with the widespread problem of un-modelled short range autocorrelation which otherwise leads to underestimation of smoothing parameters. It is also shown how to accurately quantify uncertainty in this case, despite the un-modelled autocorrelation. Practical examples are provided, including smooth quantile regression and generalized additive models for location, scale and shape, focussing particularly on dealing with un-modelled autocorrelation. - oai:arXiv.org:2404.16490v4 - stat.ME - stat.CO - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Simon N. Wood - - Asymptotic inference with flexible covariate adjustment under rerandomization and stratified rerandomization - https://arxiv.org/abs/2406.02834 - arXiv:2406.02834v2 Announce Type: replace -Abstract: Rerandomization is an effective treatment allocation procedure to control for baseline covariate imbalance. For estimating the average treatment effect, rerandomization has been previously shown to improve the precision of the unadjusted and the linearly-adjusted estimators over simple randomization without compromising consistency. However, it remains unclear whether such results apply more generally to the class of M-estimators, including the g-computation formula with generalized linear regression and doubly-robust methods, and more broadly, to efficient estimators with data-adaptive machine learners. In this paper, under a super-population framework, we develop the asymptotic theory for a more general class of covariate-adjusted estimators under rerandomization and its stratified extension. We prove that the asymptotic linearity and the influence function remain identical for any M-estimator under simple randomization and rerandomization, but rerandomization may lead to a non-Gaussian asymptotic distribution. We further explain, drawing examples from several common M-estimators, that asymptotic normality can be achieved if rerandomization variables are appropriately adjusted for in the final estimator. These results are extended to stratified rerandomization. Finally, we study the asymptotic theory for efficient estimators based on data-adaptive machine learners, and prove their efficiency optimality under rerandomization and stratified rerandomization.
Our results are demonstrated via simulations and re-analyses of a cluster-randomized experiment that used stratified rerandomization. - oai:arXiv.org:2406.02834v2 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Bingkai Wang, Fan Li - - - Minimax rates for the linear-in-means model reveal an identifiability-estimability gap - https://arxiv.org/abs/2410.10772 - arXiv:2410.10772v2 Announce Type: replace -Abstract: The linear-in-means model is widely used to study peer influence in social networks. We consider estimation in the linear-in-means model when a randomized treatment is applied to nodes in a network. We show that even when peer effects are identified, they may not be estimable at standard rates, due to near-perfect collinearity. We prove a minimax lower bound on estimation error and show that estimation becomes more difficult as networks grow denser. In sufficiently dense networks, consistent estimation of peer effects is impossible. To address this challenge, we investigate network-dependent treatment assignment. Using random dot product graphs, we show that treatments depending on network structure can prevent asymptotic collinearity when there is sufficient degree heterogeneity. However, such dependence is not a panacea, as different dependence structures must be individually evaluated for estimability. These results suggest caution when using the linear-in-means model to estimate peer effects and highlight the importance of explicitly modeling the relationship between treatments and network structure. - oai:arXiv.org:2410.10772v2 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Alex Hayes, Keith Levin - - - Design of Bayesian Clinical Trials with Clustered Data - https://arxiv.org/abs/2501.13218 - arXiv:2501.13218v3 Announce Type: replace -Abstract: In the design of clinical trials, it is essential to assess the design operating characteristics (e.g., power and the type I error rate). Common practice for the evaluation of operating characteristics in Bayesian clinical trials relies on estimating the sampling distribution of posterior summaries via Monte Carlo simulation. It is computationally intensive to repeat this estimation process for each design configuration considered, particularly for clustered data that are analyzed using complex, high-dimensional models. In this paper, we propose an efficient method to assess operating characteristics and determine sample sizes for Bayesian trials with clustered data. We prove theoretical results that enable posterior probabilities to be modeled as a function of the number of clusters. Using these functions, we assess operating characteristics at a range of sample sizes given simulations conducted at only two cluster counts. These theoretical results are also leveraged to quantify the impact of simulation variability on our sample size recommendations. The applicability of our methodology is illustrated using an example cluster-randomized Bayesian clinical trial. 
- oai:arXiv.org:2501.13218v3 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Luke Hagar, Shirin Golchi - - - Dirichlet kernel density estimation for strongly mixing sequences on the simplex - https://arxiv.org/abs/2506.08816 - arXiv:2506.08816v2 Announce Type: replace -Abstract: This paper investigates the theoretical properties of Dirichlet kernel density estimators for compositional data supported on simplices, for the first time addressing scenarios involving time-dependent observations characterized by strong mixing conditions. We establish rigorous results for the asymptotic normality and mean squared error of these estimators, extending previous findings from the independent and identically distributed (iid) context to the more general setting of strongly mixing processes. To demonstrate its practical utility, the estimator is applied to monthly market-share compositions of several Renault vehicle classes over a twelve-year period, with bandwidth selection performed via leave-one-out least squares cross-validation. Our findings underscore the reliability and strength of Dirichlet kernel techniques when applied to temporally dependent compositional data. - oai:arXiv.org:2506.08816v2 - math.ST - stat.ME - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hanen Daayeb, Salah Khardani, Fr\'ed\'eric Ouimet - - - FARS: Factor Augmented Regression Scenarios in R - https://arxiv.org/abs/2507.10679 - arXiv:2507.10679v5 Announce Type: replace -Abstract: In the context of macroeconomic/financial time series, the FARS package provides a comprehensive framework in R for the construction of conditional densities of the variable of interest based on the factor-augmented quantile regressions (FA-QRs) methodology, with the factors extracted from multi-level dynamic factor models (ML-DFMs) with potential overlapping group-specific factors. Furthermore, the package also allows the construction of measures of risk as well as modeling and designing economic scenarios based on the conditional densities. In particular, the package enables users to: (i) extract global and group-specific factors using a flexible multi-level factor structure; (ii) compute asymptotically valid confidence regions for the estimated factors, accounting for uncertainty in the factor loadings; (iii) obtain estimates of the parameters of the FA-QRs together with their standard deviations; (iv) recover full predictive conditional densities from estimated quantiles; (v) obtain risk measures based on extreme quantiles of the conditional densities; and (vi) estimate the conditional density and the corresponding extreme quantiles when the factors are stressed. - oai:arXiv.org:2507.10679v5 - stat.CO - econ.EM - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Gian Pietro Bellocca, Ignacio Garr\'on, Vladimir Rodr\'iguez-Caballero, Esther Ruiz - - - "Within-trial" prognostic score adjustment is targeted maximum likelihood estimation - https://arxiv.org/abs/2507.23446 - arXiv:2507.23446v2 Announce Type: replace -Abstract: Adjustment for ``super'' or ``prognostic'' composite covariates has become more popular in randomized trials recently. These prognostic covariates are often constructed from historical data by fitting a predictive model of the outcome on the raw covariates. 
A natural question that we have been asked by applied researchers is whether this can be done without the historical data: can the prognostic covariate be constructed or derived from the trial data itself, possibly using different folds of the data, before adjusting for it? Here we clarify that such ``within-trial'' prognostic adjustment is nothing more than a form of targeted maximum likelihood estimation (TMLE), a well-studied procedure for optimal inference. We demonstrate the equivalence with a simulation study and discuss the pros and cons of within-trial prognostic adjustment (standard efficient estimation) relative to standard TMLE and standard prognostic adjustment with historical data. - oai:arXiv.org:2507.23446v2 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Emilie H{\o}jbjerre-Frandsen, Alejandro Schuler - - - Reverse Diffusion Sequential Monte Carlo Samplers - https://arxiv.org/abs/2508.05926 - arXiv:2508.05926v2 Announce Type: replace -Abstract: We propose a novel sequential Monte Carlo (SMC) method for sampling from unnormalized target distributions based on a reverse denoising diffusion process. While recent diffusion-based samplers simulate the reverse diffusion using approximate score functions, they can suffer from accumulating errors due to time discretization and imperfect score estimation. In this work, we introduce a principled SMC framework that formalizes diffusion-based samplers as proposals while systematically correcting for their biases. The core idea is to construct informative intermediate target distributions that progressively steer the sampling trajectory toward the final target distribution. Although ideal intermediate targets are intractable, we develop exact approximations using quantities from the score estimation-based proposal, without requiring additional model training or inference overhead. The resulting sampler, termed Reverse Diffusion Sequential Monte Carlo, enables consistent sampling and unbiased estimation of the target's normalization constant under mild conditions. We demonstrate the effectiveness of our method on a range of synthetic targets and real-world Bayesian inference problems. - oai:arXiv.org:2508.05926v2 - stat.CO - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Luhuan Wu, Yi Han, Christian A. Naesseth, John P. Cunningham - - - Shotgun DNA sequencing evidence: sample-specific and unknown genotyping error probabilities - https://arxiv.org/abs/2509.26112 - arXiv:2509.26112v3 Announce Type: replace -Abstract: Many forensic genetic trace samples are of too low quality to obtain short tandem repeat (STR) DNA profiles as the nuclear DNA they contain is highly degraded (e.g., telogen hairs). Instead, performing shotgun DNA sequencing of such samples can provide valuable information on, e.g., single nucleotide polymorphism (SNP) markers. As a result, shotgun sequencing is starting to gain more attention in forensic genetics and statistical models to correctly interpret such evidence, including properly accounting for sequencing errors, are needed. One such model is the wgsLR model by Andersen et. al. (2025) that enabled evaluating the evidential strength of a comparison between the genotypes in the trace sample and reference sample assuming a single-source contribution to both samples. 
This paper extends the wgsLR model to allow for different (asymmetric) genotyping error probabilities (e.g., from a low quality trace sample and a high quality reference sample). The model was also extended to handle unknown genotyping error probabilities via both maximising profile likelihood and using a prior distribution. The sensitivity of the wgsLR model against overdispersion was also investigated and it was found robust against it. It was also found that handling an unknown genotyping error probability of the trace sample with the methods having a sufficient number of independent markers gave concordant weight of evidence (WoE) under both the hypotheses (same or different individuals being donors of trace and reference sample). It was found more conservative to use a too small trace sample genotyping error probability rather than a too high genotyping error probability as the latter can explain genotype inconsistencies by errors rather than due to two different individuals being the donors of the trace sample and reference sample. The extensions of the model are implemented in the R package wgsLR. - oai:arXiv.org:2509.26112v3 - stat.AP - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Mikkel Meyer Andersen - - - Likelihood-based inference for the Gompertz model with Poisson errors - https://arxiv.org/abs/2510.06787 - arXiv:2510.06787v2 Announce Type: replace -Abstract: Population dynamics models play an important role in a number of fields, such as actuarial science, demography, and ecology, as they help explain past fluctuations and predict future population. The accuracy of these models is often influenced by the uncertainty introduced by sampling error. Statistical inference for these models can be difficult when, in addition to the process' inherent stochasticity, one also needs to account for sampling error. Ignoring the latter can lead to biases in the estimation, which in turn can produce erroneous conclusions about the system's behavior. The Gompertz model is widely used to infer population size dynamics, but a full likelihood approach can be computationally prohibitive when sampling error is accounted for. We close this gap by developing efficient computational tools for statistical inference in the Gompertz model with Poisson sampling error based on the full likelihood. The approach is illustrated in both the Bayesian and frequentist paradigms. Performance is illustrated with simulations and data analysis. - oai:arXiv.org:2510.06787v2 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Paolo Onorati, Sofia Ruiz-Suarez, Radu Craiu - - - Multivariate Bernoulli Hoeffding Decomposition: From Theory to Sensitivity Analysis - https://arxiv.org/abs/2510.07088 - arXiv:2510.07088v3 Announce Type: replace -Abstract: Understanding the behavior of predictive models with random inputs can be achieved through functional decompositions into sub-models that capture interpretable effects of input groups. Building on recent advances in uncertainty quantification, the existence and uniqueness of a generalized Hoeffding decomposition have been established for correlated input variables, using oblique projections onto suitable functional subspaces. This work focuses on the case of Bernoulli inputs and provides a complete analytical characterization of the decomposition. 
We show that, in this discrete setting, the associated subspaces are one-dimensional and that the decomposition admits a closed-form representation. One of the main contributions of this study is to generalize the classical Fourier--Walsh--Hadamard decomposition for pseudo-Boolean functions to the correlated case, yielding an oblique version when the underlying distribution is not a product measure, and recovering the standard orthogonal form when independence holds. This explicit structure offers a fully interpretable framework, clarifying the contribution of each input combination and theoretically enabling model reverse engineering. From this formulation, explicit sensitivity measures-such as Sobol' indices and Shapley effects-can be directly derived. Numerical experiments illustrate the practical interest of the approach for decision-support problems involving binary features. The paper concludes with perspectives on extending the methodology to high-dimensional settings and to models involving inputs with finite, non-binary support. - oai:arXiv.org:2510.07088v3 - stat.ML - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Baptiste Ferrere (EDF R\&D PRISME, IMT, SINCLAIR AI Lab), Nicolas Bousquet (EDF R\&D PRISME, SINCLAIR AI Lab, LPSM), Fabrice Gamboa (IMT, ANITI), Jean-Michel Loubes (IMT, ANITI), Joseph Mur\'e (EDF R\&D PRISME) - - - Batch learning equals online learning in Bayesian supervised learning - https://arxiv.org/abs/2510.16892 - arXiv:2510.16892v3 Announce Type: replace -Abstract: Using functoriality of probabilistic morphisms, we prove that sequential and batch Bayesian inversions coincide in supervised learning models with conditionally independent (possibly non-i.i.d.) data \cite{Le2025}. This equivalence holds without domination or discreteness assumptions on sampling operators. We derive a recursive formula for posterior predictive distributions, which reduces to the Kalman filter in Gaussian process regression. For Polish label spaces $\mathcal{Y}$ and arbitrary input sets $\mathcal{X}$, we characterize probability measures on $\mathcal{P}(\mathcal{Y})^{\mathcal{X}}$ via projective systems, generalizing Orbanz \cite{Orbanz2011}. We revisit MacEachern's Dependent Dirichlet Processes (DDP) \cite{MacEachern2000} using copula-based constructions \cite{BJQ2012} and show how to compute posterior predictive distributions in universal Bayesian supervised models with DDP priors. - oai:arXiv.org:2510.16892v3 - math.ST - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - H\^ong V\^an L\^e - - - Curvature-based rejection sampling - https://arxiv.org/abs/2510.24537 - arXiv:2510.24537v2 Announce Type: replace -Abstract: The present work introduces curvature-based rejection sampling (CURS). This is a method for sampling from a general class of probability densities defined on Riemannian manifolds. It can be used to sample from any probability density which ``depends only on distance". The idea is to combine the statistical principle of rejection sampling with the geometric principle of volume comparison. CURS is an exact sampling method and (assuming the underlying Riemannian manifold satisfies certain technical conditions) it has a particularly moderate computational cost. The aim of the present work is to show that there are many applications where CURS should be the user's method of choice for dealing with relatively low-dimensional scenarios. 
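The accept/reject principle behind the entry above can be illustrated on the simplest Riemannian manifold, the circle, with a density that depends only on geodesic distance to a pole. The sketch below is a generic rejection sampler for such a target; it does not reproduce the volume-comparison ingredient that defines CURS, and the target, proposal, and constants are illustrative only.

```python
import numpy as np

# Generic rejection sampling for a density on the circle S^1 that depends only on
# the geodesic distance to a pole mu: f(x) proportional to exp(kappa * cos(d(x, mu))).
# This illustrates only the accept/reject principle; the volume-comparison step of
# CURS itself is not reproduced here.

rng = np.random.default_rng(0)
kappa, mu = 4.0, np.pi / 3           # concentration and pole (angle in radians)

def sample_circle(n):
    out = []
    while len(out) < n:
        theta = rng.uniform(0.0, 2.0 * np.pi)            # uniform proposal on the circle
        d = np.abs(np.angle(np.exp(1j * (theta - mu))))   # geodesic distance to the pole
        accept_prob = np.exp(kappa * (np.cos(d) - 1.0))   # f / (M * g), with M = sup f / g
        if rng.uniform() < accept_prob:
            out.append(theta)
    return np.array(out)

samples = sample_circle(2_000)
d_samples = np.abs(np.angle(np.exp(1j * (samples - mu))))
print("mean geodesic distance to the pole:", round(float(d_samples.mean()), 3))
```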
- oai:arXiv.org:2510.24537v2 - math.ST - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Isabella Costa Maia, Marco Congedo, Pedro L. C. Rodrigues, Salem Said - - Using latent representations to link disjoint longitudinal data for mixed-effects regression - https://arxiv.org/abs/2510.25531 - arXiv:2510.25531v2 Announce Type: replace -Abstract: Many rare diseases offer limited established treatment options, leading patients to switch therapies when new medications emerge. To analyze the impact of such treatment switches within the low sample size limitations of rare disease trials, it is important to use all available data sources. This, however, is complicated when the usage of measurement instruments changes during the observation period, for example when instruments are adapted to specific age ranges. The resulting disjoint longitudinal data trajectories complicate the application of traditional modeling approaches like mixed-effects regression. We tackle this by mapping observations of each instrument to an aligned low-dimensional temporal trajectory, enabling longitudinal modeling across instruments. Specifically, we employ a set of variational autoencoder architectures to embed item values into a shared latent space for each time point. Temporal disease dynamics and treatment switch effects are then captured through a mixed-effects regression model applied to latent representations. To enable statistical inference, we present a novel statistical testing approach that accounts for the joint parameter estimation of mixed-effects regression and variational autoencoders. The methodology is applied to quantify the impact of treatment switches for patients with spinal muscular atrophy. Here, our approach aligns motor performance items from different measurement instruments for mixed-effects regression and maps estimated effects back to the observed item level to quantify the treatment switch effect. Our approach allows for model selection as well as for assessing effects of treatment switching. The results highlight the potential of modeling in joint latent representations for addressing small data challenges. - oai:arXiv.org:2510.25531v2 - stat.ML - cs.AI - cs.LG - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by/4.0/ - Clemens Sch\"achter, Maren Hackenberg, Michelle Pfaffenlehner, F\'elix B. Tambe-Ndonfack, Thorsten Schmidt, Astrid Pechmann, Janbernd Kirschner, Jan Hasenauer, Harald Binder - - Bridging the Gap between Empirical Welfare Maximization and Conditional Average Treatment Effect Estimation in Policy Learning - https://arxiv.org/abs/2510.26723 - arXiv:2510.26723v2 Announce Type: replace -Abstract: The goal of policy learning is to train a policy function that recommends a treatment given covariates to maximize population welfare. There are two major approaches in policy learning: the empirical welfare maximization (EWM) approach and the plug-in approach. The EWM approach is analogous to a classification problem, where one first builds an estimator of the population welfare, which is a functional of policy functions, and then trains a policy by maximizing the estimated welfare. In contrast, the plug-in approach is based on regression, where one first estimates the conditional average treatment effect (CATE) and then recommends the treatment with the highest estimated outcome.
This study bridges the gap between the two approaches by showing that both are based on essentially the same optimization problem. In particular, we prove an exact equivalence between EWM and least squares over a reparameterization of the policy class. As a consequence, the two approaches are interchangeable in several respects and share the same theoretical guarantees under common conditions. Leveraging this equivalence, we propose a regularization method for policy learning. The reduction to least squares yields a smooth surrogate that is typically easier to optimize in practice. At the same time, for many natural policy classes the inherent combinatorial hardness of exact EWM generally remains, so the reduction should be viewed as an optimization aid rather than a universal bypass of NP-hardness. - oai:arXiv.org:2510.26723v2 - stat.ML - cs.LG - econ.EM - math.ST - stat.ME - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Masahiro Kato - - - Bias correction of satellite and reanalysis products for daily rainfall occurrence and intensity - https://arxiv.org/abs/2510.27456 - arXiv:2510.27456v2 Announce Type: replace -Abstract: Study region: Ghana and Zambia, Africa - Study focus: This study rigorously evaluates a suite of bias correction (BC) methods, including statistical approaches (LOCI, QM), machine learning (SVR, GPR), and hybrid techniques (LOCI-GPR, QM-GPR), applied to seven satellite rainfall estimates (SREs) across 38 stations in Ghana and Zambia, aiming to assess their performance in rainfall detection and intensity estimation. - New hydrological insights for the region: Results indicate that the ENACTS product, which uniquely integrates a large number of station records, was the most corrigible SRE; in Zambia, nearly all BC methods successfully reduced the mean error in daily rainfall amounts at over 70\% of stations. However, this performance requires further validation at independent stations not incorporated into the ENACTS product. Overall, statistical methods (QM and LOCI) generally outperformed other techniques, although QM exhibited a tendency to inflate rainfall values. All SREs corrected with the statistical and hybrid BC methods demonstrated high capability for detecting dry days (POD $\ge$ 0.80). A critical limitation persisted, however, as all SREs (except ENACTS), after correction with BC methods, consistently failed to improve the detection of heavy and violent rainfall events (POD $\leq$ 0.2), highlighting a crucial area for future research. - oai:arXiv.org:2510.27456v2 - stat.AP - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://creativecommons.org/licenses/by-nc-nd/4.0/ - John Bagiliko, David Stern, Francis Feehi Torgbor, Danny Parsons, Samuel Owusu Ansah, Denis Ndanguza - - - Interval Estimation for Binomial Proportions Under Differential Privacy - https://arxiv.org/abs/2511.02227 - arXiv:2511.02227v2 Announce Type: replace -Abstract: When releasing binary proportions computed using sensitive data, several government agencies and other data stewards protect confidentiality of the underlying values by ensuring the released statistics satisfy differential privacy. Typically, this is done by adding carefully chosen noise to the sample proportion computed using the confidential data. In this article, we describe and compare methods for turning this differentially private proportion into an interval estimate for an underlying population probability. 
Specifically, we consider differentially private versions of the Wald and Wilson intervals, Bayesian credible intervals based on denoising the differentially private proportion, and an exact interval motivated by the Clopper-Pearson confidence interval. We examine the repeated sampling performances of the intervals using simulation studies under both the Laplace mechanism and discrete Gaussian mechanism across a range of privacy guarantees. We find that while several methods can offer reasonable performances, the Bayesian credible intervals are the most attractive. - oai:arXiv.org:2511.02227v2 - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Hsuan-Chen Kao, Jerome P. Reiter - - - Privacy-aware identification - https://arxiv.org/abs/2006.14732 - arXiv:2006.14732v3 Announce Type: replace-cross -Abstract: The paper redefines econometric identification under formal privacy constraints, particularly differential privacy (DP). Traditionally, econometrics focuses on point or partial identification, aiming to recover parameters precisely or within a deterministic set. However, DP introduces a fundamental challenge: information asymmetry between researchers and data curators results in DP outputs belonging to a potentially large collection of differentially private statistics, which is naturally described as a random set. Due to the finite-sample nature of the DP notion and mechanisms, identification must be reinterpreted as the ability to recover parameters in the limit of this random set. In the DP setting this limit may remain random which necessitates new theoretical tools, such as random set theory, to characterize parameter properties and practical methods, like proposed decision mappings by data curators, to restore point identification. We argue that privacy constraints push econometrics toward a broader framework where randomness and uncertainty are intrinsic features of identification, moving beyond classical approaches. By integrating DP, identification, and random sets, we offer a privacy-aware identification. - oai:arXiv.org:2006.14732v3 - econ.EM - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tatiana Komarova, Denis Nekipelov - - - Multivariate ordered discrete response models with two layers of dependence - https://arxiv.org/abs/2205.05779 - arXiv:2205.05779v3 Announce Type: replace-cross -Abstract: We develop a class of multivariate ordered discrete response models featuring general rectangular structures, which allow for functionally interdependent thresholds across dimensions, extending beyond traditional (lattice) models that assume threshold independence. The new models incorporate two layers of dependence: one arising from the interdependence of decision rules (capturing broad bracketing behaviors) and another from the correlation of latent utilities conditional on observables. We provide microfoundations, explore semiparametric and parametric specifications, and establish identification conditions under logical consistency in decision-making. An empirical application to health insurance markets demonstrates the advantages of this new framework, showing how it disentangles moral hazard (captured via threshold dependence) from adverse selection (isolated in unobservable correlations), offering insights into behavioral responses obscured by lattice models. 
- oai:arXiv.org:2205.05779v3 - econ.EM - stat.AP - stat.ME - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Tatiana Komarova, William Matcham - - - Uniform-in-time propagation of chaos for mean field Langevin dynamics - https://arxiv.org/abs/2212.03050 - arXiv:2212.03050v4 Announce Type: replace-cross -Abstract: We study the mean field Langevin dynamics and the associated particle system. By assuming the functional convexity of the energy, we obtain the $L^p$-convergence of the marginal distributions towards the unique invariant measure for the mean field dynamics. Furthermore, we prove the uniform-in-time propagation of chaos in both the $L^2$-Wasserstein metric and relative entropy. - oai:arXiv.org:2212.03050v4 - math.PR - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1214/24-AIHP1499 - Ann. Inst. Henri Poincar\'e, Probab. Stat., 61(4):2357-2404, 2025 - Fan Chen, Zhenjie Ren, Songbo Wang - - - How does training shape the Riemannian geometry of neural network representations? - https://arxiv.org/abs/2301.11375 - arXiv:2301.11375v4 Announce Type: replace-cross -Abstract: In machine learning, there is a long history of trying to build neural networks that can learn from fewer example data by baking in strong geometric priors. However, it is not always clear a priori what geometric constraints are appropriate for a given task. Here, we explore the possibility that one can uncover useful geometric inductive biases by studying how training molds the Riemannian geometry induced by unconstrained neural network feature maps. We first show that at infinite width, neural networks with random parameters induce highly symmetric metrics on input space. This symmetry is broken by feature learning: networks trained to perform classification tasks learn to magnify local areas along decision boundaries. This holds in deep networks trained on high-dimensional image classification tasks, and even in self-supervised representation learning. These results begin to elucidate how training shapes the geometry induced by unconstrained neural network feature maps, laying the groundwork for an understanding of this richly nonlinear form of feature learning. - oai:arXiv.org:2301.11375v4 - cs.LG - cond-mat.dis-nn - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Proceedings of the 3rd Workshop on Symmetry and Geometry in Neural Representations (NeurReps) (2025) - Jacob A. Zavatone-Veth, Sheng Yang, Julian A. Rubinfien, Cengiz Pehlevan - - - Contraction of Private Quantum Channels and Private Quantum Hypothesis Testing - https://arxiv.org/abs/2406.18651 - arXiv:2406.18651v3 Announce Type: replace-cross -Abstract: A quantum generalized divergence by definition satisfies the data-processing inequality; as such, the relative decrease in such a divergence under the action of a quantum channel is at most one. This relative decrease is formally known as the contraction coefficient of the channel and the divergence. Interestingly, there exist combinations of channels and divergences for which the contraction coefficient is strictly less than one. Furthermore, understanding the contraction coefficient is fundamental for the study of statistical tasks under privacy constraints. 
To this end, here we establish upper bounds on contraction coefficients for the hockey-stick divergence under privacy constraints, where privacy is quantified with respect to the quantum local differential privacy (QLDP) framework, and we fully characterize the contraction coefficient for the trace distance under privacy constraints. With the machinery developed, we also determine an upper bound on the contraction of both the Bures distance and quantum relative entropy relative to the normalized trace distance, under QLDP constraints. Next, we apply our findings to establish bounds on the sample complexity of quantum hypothesis testing under privacy constraints. Furthermore, we study various scenarios in which the sample complexity bounds are tight, while providing order-optimal quantum channels that achieve those bounds. Lastly, we show how private quantum channels provide fairness and Holevo information stability in quantum learning settings. - oai:arXiv.org:2406.18651v3 - quant-ph - cs.CR - cs.IT - cs.LG - math.IT - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - 10.1109/TIT.2025.3527859 - IEEE Transactions on Information Theory, Volume 71, Issue 3, Pages 1851--1873, March 2025 - Theshani Nuradha, Mark M. Wilde - - - Social feedback amplifies emotional language in online video live chats - https://arxiv.org/abs/2408.05700 - arXiv:2408.05700v4 Announce Type: replace-cross -Abstract: A growing share of human interactions now occurs online, where the expression and perception of emotions are often amplified and distorted. Yet, the interplay between different emotions and the extent to which they are driven by external stimuli or social feedback remains poorly understood. We calibrate a multivariate Hawkes self-exciting point process to model the temporal expression of six basic emotions in YouTube Live chats. This framework captures both temporal and cross-emotional dependencies while allowing us to disentangle the influence of video content (exogenous) from peer interactions (endogenous). We find that emotional expressions are up to four times more strongly driven by peer interaction than by video content. Positivity is more contagious, spreading three times more readily, whereas negativity is more memorable, lingering nearly twice as long. Moreover, we observe asymmetric cross-excitation, with negative emotions frequently triggering positive ones, a pattern consistent with trolling dynamics, but not the reverse. These findings highlight the central role of social interaction in shaping emotional dynamics online and the risks of emotional manipulation as human-chatbot interactions become increasingly realistic. - oai:arXiv.org:2408.05700v4 - cs.SI - cs.HC - stat.AP - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Yishan Luo, Didier Sornette, Sandro Claudio Lera - - - Alleviating Hyperparameter-Tuning Burden in SVM Classifiers for Pulmonary Nodules Diagnosis with Multi-Task Bayesian Optimization - https://arxiv.org/abs/2411.06184 - arXiv:2411.06184v2 Announce Type: replace-cross -Abstract: In the field of non-invasive medical imaging, radiomic features are utilized to measure tumor characteristics. However, these features can be affected by the techniques used to discretize the images, ultimately impacting the accuracy of diagnosis. 
To investigate the influence of various image discretization methods on diagnosis, it is common practice to evaluate multiple discretization strategies individually. This approach often leads to redundant and time-consuming tasks such as training predictive models and fine-tuning hyperparameters separately. This study examines the feasibility of employing multi-task Bayesian optimization to accelerate the hyperparameter search for classifying benign and malignant pulmonary nodules using RBF SVM. Our findings suggest that multi-task Bayesian optimization significantly accelerates the search for hyperparameters in comparison to a single-task approach. To the best of our knowledge, this is the first investigation to utilize multi-task Bayesian optimization in a critical medical context. - oai:arXiv.org:2411.06184v2 - eess.IV - cs.CV - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wenhao Chi, Haiping Liu, Hongqiao Dong, Wenhua Liang, Bo Liu - - - Aspen Open Jets: Unlocking LHC Data for Foundation Models in Particle Physics - https://arxiv.org/abs/2412.10504 - arXiv:2412.10504v2 Announce Type: replace-cross -Abstract: Foundation models are deep learning models, pre-trained on large amounts of data, that are capable of generalizing to multiple datasets and/or downstream tasks. This work demonstrates how data collected by the CMS experiment at the Large Hadron Collider can be useful in pre-training foundation models for HEP. Specifically, we introduce the AspenOpenJets dataset, consisting of approximately 178M high $p_T$ jets derived from CMS 2016 Open Data. We show how pre-training the OmniJet-$\alpha$ foundation model on AspenOpenJets improves performance on generative tasks with significant domain shift: generating boosted top and QCD jets from the simulated JetClass dataset. In addition to demonstrating the power of pre-training a jet-based foundation model on actual proton-proton collision data, we provide the ML-ready derived AspenOpenJets dataset for further public use. - oai:arXiv.org:2412.10504v2 - hep-ph - cs.LG - hep-ex - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - 10.1088/2632-2153/ade58f - Mach.Learn.Sci.Tech. 6 (2025) 3, 030601 - Oz Amram, Luca Anzalone, Joschka Birk, Darius A. Faroughy, Anna Hallin, Gregor Kasieczka, Michael Kr\"amer, Ian Pang, Humberto Reyes-Gonzalez, David Shih - - - Beyond Covariance Matrix: The Statistical Complexity of Private Linear Regression - https://arxiv.org/abs/2502.13115 - arXiv:2502.13115v2 Announce Type: replace-cross -Abstract: We study the statistical complexity of private linear regression under an unknown, potentially ill-conditioned covariate distribution. Somewhat surprisingly, under privacy constraints the intrinsic complexity is \emph{not} captured by the usual covariance matrix but rather by its $L_1$ analogues. Building on this insight, we establish minimax convergence rates for both the central and local privacy models and introduce an Information-Weighted Regression method that attains the optimal rates. - As an application, in private linear contextual bandits, we propose an efficient algorithm that achieves rate-optimal regret bounds of order $\sqrt{T}+\frac{1}{\alpha}$ and $\sqrt{T}/\alpha$ under joint and local $\alpha$-privacy models, respectively. Notably, our results demonstrate that joint privacy comes at almost no additional cost, addressing the open problems posed by Azize and Basu (2024).
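As a point of contrast for the private-regression abstract above, the following numpy sketch shows a common generic baseline for central-DP linear regression, sufficient-statistics perturbation. It is an illustration only: the noise_scale placeholder stands in for a level calibrated to the privacy budget and to assumed bounds on the data, and this is not the paper's Information-Weighted Regression method.
    # Generic baseline sketch (not the paper's method): central-DP linear regression
    # via sufficient-statistics perturbation, i.e. Gaussian noise added to X^T X and
    # X^T y. noise_scale is a placeholder for a scale calibrated to (epsilon, delta)
    # and to the assumed bounds on ||x|| and |y|.
    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 2000, 5
    X = np.clip(rng.normal(size=(n, d)), -1.0, 1.0)     # assume bounded features
    theta_true = rng.normal(size=d)
    y = np.clip(X @ theta_true + 0.1 * rng.normal(size=n), -5.0, 5.0)

    noise_scale = 5.0                                    # placeholder privacy calibration
    E = rng.normal(scale=noise_scale, size=(d, d))
    XtX_noisy = X.T @ X + (E + E.T) / 2                  # symmetric noise on the Gram matrix
    Xty_noisy = X.T @ y + rng.normal(scale=noise_scale, size=d)

    theta_hat = np.linalg.solve(XtX_noisy + 0.1 * np.eye(d), Xty_noisy)  # small ridge for stability
    print("estimation error:", np.linalg.norm(theta_hat - theta_true))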
- oai:arXiv.org:2502.13115v2 - cs.LG - cs.AI - cs.CR - math.ST - stat.ML - stat.TH - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Fan Chen, Jiachun Li, Alexander Rakhlin, David Simchi-Levi - - - Data-Driven Probabilistic Air-Sea Flux Parameterization - https://arxiv.org/abs/2503.03990 - arXiv:2503.03990v2 Announce Type: replace-cross -Abstract: Accurately quantifying air-sea fluxes is important for understanding air-sea interactions and improving coupled weather and climate systems. This study introduces a probabilistic framework to represent the highly variable nature of air-sea fluxes, which is missing in deterministic bulk algorithms. Assuming Gaussian distributions conditioned on the input variables, we use artificial neural networks and eddy-covariance measurement data to estimate the mean and variance by minimizing negative log-likelihood loss. The trained neural networks provide alternative mean flux estimates to existing bulk algorithms, and quantify the uncertainty around the mean estimates. Stochastic parameterization of air-sea turbulent fluxes can be constructed by sampling from the predicted distributions. Tests in a single-column forced upper-ocean model suggest that changes in flux algorithms influence sea surface temperature and mixed layer depth seasonally. The ensemble spread in stochastic runs is most pronounced during spring restratification. - oai:arXiv.org:2503.03990v2 - physics.ao-ph - cs.LG - stat.AP - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Jiarong Wu, Pavel Perezhogin, David John Gagne, Brandon Reichl, Aneesh C. Subramanian, Elizabeth Thompson, Laure Zanna - - - NeuralSurv: Deep Survival Analysis with Bayesian Uncertainty Quantification - https://arxiv.org/abs/2505.11054 - arXiv:2505.11054v2 Announce Type: replace-cross -Abstract: We introduce NeuralSurv, the first deep survival model to incorporate Bayesian uncertainty quantification. Our non-parametric, architecture-agnostic framework captures time-varying covariate-risk relationships in continuous time via a novel two-stage data-augmentation scheme, for which we establish theoretical guarantees. For efficient posterior inference, we introduce a mean-field variational algorithm with coordinate-ascent updates that scale linearly in model size. By locally linearizing the Bayesian neural network, we obtain full conjugacy and derive all coordinate updates in closed form. In experiments, NeuralSurv delivers superior calibration compared to state-of-the-art deep survival models, while matching or exceeding their discriminative performance across both synthetic benchmarks and real-world datasets. Our results demonstrate the value of Bayesian principles in data-scarce regimes by enhancing model calibration and providing robust, well-calibrated uncertainty estimates for the survival function. - oai:arXiv.org:2505.11054v2 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - M\'elodie Monod, Alessandro Micheli, Samir Bhatt - - - Recurrent Self-Attention Dynamics: An Energy-Agnostic Perspective from Jacobians - https://arxiv.org/abs/2505.19458 - arXiv:2505.19458v4 Announce Type: replace-cross -Abstract: The theoretical understanding of self-attention (SA) has been steadily progressing. A prominent line of work studies a class of SA layers that admit an energy function decreased by state updates. 
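The Jacobian-centric analysis described in the remainder of the self-attention abstract above revolves around an object that is easy to compute numerically. The sketch below builds one randomly initialized, single-head, residual self-attention update (no normalization layer) and estimates its Jacobian by finite differences so the eigenvalues can be inspected; the sizes and architecture details are illustrative assumptions, not the paper's experimental setup.
    # Sketch: Jacobian of one residual single-head self-attention update on a
    # flattened token state, estimated by forward differences; complex eigenvalues
    # of this Jacobian correspond to oscillatory modes of the dynamics.
    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 4, 8                                          # tokens, embedding dimension
    Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))

    def softmax(a):
        a = a - a.max(axis=-1, keepdims=True)
        e = np.exp(a)
        return e / e.sum(axis=-1, keepdims=True)

    def sa_update(x_flat):
        X = x_flat.reshape(T, d)
        A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))
        return (X + A @ (X @ Wv)).ravel()                # residual self-attention step

    x0, eps = rng.normal(size=T * d), 1e-5
    f0 = sa_update(x0)
    J = np.empty((T * d, T * d))
    for j in range(T * d):                               # one Jacobian column per coordinate
        e_j = np.zeros(T * d)
        e_j[j] = eps
        J[:, j] = (sa_update(x0 + e_j) - f0) / eps

    eig = np.linalg.eigvals(J)
    print("largest |eigenvalue|:", np.abs(eig).max())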
While it provides valuable insights into inherent biases in signal propagation, it often relies on idealized assumptions or additional constraints not necessarily present in standard SA. Thus, to broaden our understanding, this work aims to relax these energy constraints and provide an energy-agnostic characterization of inference dynamics by dynamical systems analysis. In more detail, we first consider relaxing the symmetry and single-head constraints traditionally required in energy-based formulations. Next, we show that analyzing the Jacobian matrix of the state is highly valuable when investigating more general SA architectures without necessarily admitting an energy function. It reveals that the normalization layer plays an essential role in suppressing the Lipschitzness of SA and the Jacobian's complex eigenvalues, which correspond to the oscillatory components of the dynamics. In addition, the Lyapunov exponents computed from the Jacobians demonstrate that the normalized dynamics lie close to a critical state, and this criticality serves as a strong indicator of high inference performance. Furthermore, the Jacobian perspective also enables us to develop regularization methods for training and a pseudo-energy for monitoring inference dynamics. - oai:arXiv.org:2505.19458v4 - cs.LG - cond-mat.dis-nn - cs.NE - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Akiyoshi Tomihari, Ryo Karakida - - - A Theoretical Framework for Grokking: Interpolation followed by Riemannian Norm Minimisation - https://arxiv.org/abs/2505.20172 - arXiv:2505.20172v2 Announce Type: replace-cross -Abstract: We study the dynamics of gradient flow with small weight decay on general training losses $F: \mathbb{R}^d \to \mathbb{R}$. Under mild regularity assumptions and assuming convergence of the unregularised gradient flow, we show that the trajectory with weight decay $\lambda$ exhibits a two-phase behaviour as $\lambda \to 0$. During the initial fast phase, the trajectory follows the unregularised gradient flow and converges to a manifold of critical points of $F$. Then, at time of order $1/\lambda$, the trajectory enters a slow drift phase and follows a Riemannian gradient flow minimising the $\ell_2$-norm of the parameters. This purely optimisation-based phenomenon offers a natural explanation for the \textit{grokking} effect observed in deep learning, where the training loss rapidly reaches zero while the test loss plateaus for an extended period before suddenly improving. We argue that this generalisation jump can be attributed to the slow norm reduction induced by weight decay, as explained by our analysis. We validate this mechanism empirically on several synthetic regression tasks. - oai:arXiv.org:2505.20172v2 - cs.LG - math.OC - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Etienne Boursier, Scott Pesme, Radu-Alexandru Dragomir - - - Robust and Computation-Aware Gaussian Processes - https://arxiv.org/abs/2505.21133 - arXiv:2505.21133v2 Announce Type: replace-cross -Abstract: Gaussian processes (GPs) are widely used for regression and optimization tasks such as Bayesian optimization (BO) due to their expressiveness and principled uncertainty estimates. However, in settings with large datasets corrupted by outliers, standard GPs and their sparse approximations struggle with computational tractability and robustness. 
We introduce Robust Computation-aware Gaussian Process (RCaGP), a novel GP model that jointly addresses these challenges by combining a principled treatment of approximation-induced uncertainty with robust generalized Bayesian updating. The key insight is that robustness and approximation-awareness are not orthogonal but intertwined: approximations can exacerbate the impact of outliers, and mitigating one without the other is insufficient. Unlike previous work that focuses narrowly on either robustness or approximation quality, RCaGP combines both in a principled and scalable framework, thus effectively managing both outliers and computational uncertainties introduced by approximations such as low-rank matrix multiplications. Our model ensures more conservative and reliable uncertainty estimates, a property we rigorously demonstrate. Additionally, we establish a robustness property and show that the mean function is key to preserving it, motivating a tailored model selection scheme for robust mean functions. Empirical results confirm that solving these challenges jointly leads to superior performance across both clean and outlier-contaminated settings, both on regression and high-throughput Bayesian optimization benchmarks. - oai:arXiv.org:2505.21133v2 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Marshal Arijona Sinaga, Julien Martinelli, Samuel Kaski - - - Why Machine Learning Models Fail to Fully Capture Epistemic Uncertainty - https://arxiv.org/abs/2505.23506 - arXiv:2505.23506v2 Announce Type: replace-cross -Abstract: In recent years various supervised learning methods that disentangle aleatoric and epistemic uncertainty based on second-order distributions have been proposed. We argue that these methods fail to capture critical components of epistemic uncertainty, particularly due to the often-neglected component of model bias. To show this, we make use of a more fine-grained taxonomy of epistemic uncertainty sources in machine learning models, and analyse how the classical bias-variance decomposition of the expected prediction error can be decomposed into different parts reflecting these uncertainties. By using a simulation-based evaluation protocol which encompasses epistemic uncertainty due to both procedural- and data-driven uncertainty components, we illustrate that current methods rarely capture the full spectrum of epistemic uncertainty. Through theoretical insights and synthetic experiments, we show that high model bias can lead to misleadingly low estimates of epistemic uncertainty, and common second-order uncertainty quantification methods systematically blur bias-induced errors into aleatoric estimates, thereby underrepresenting epistemic uncertainty. Our findings underscore that meaningful aleatoric estimates are feasible only if all relevant sources of epistemic uncertainty are properly represented. - oai:arXiv.org:2505.23506v2 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Sebasti\'an Jim\'enez, Mira J\"urgens, Willem Waegeman - - - Model-Informed Flows for Bayesian Inference - https://arxiv.org/abs/2505.24243 - arXiv:2505.24243v2 Announce Type: replace-cross -Abstract: Variational inference often struggles with the posterior geometry exhibited by complex hierarchical Bayesian models. 
Recent advances in flow-based variational families and Variationally Inferred Parameters (VIP) each address aspects of this challenge, but their formal relationship is unexplored. Here, we prove that the combination of VIP and a full-rank Gaussian can be represented exactly as a forward autoregressive flow augmented with a translation term and input from the model's prior. Guided by this theoretical insight, we introduce the Model-Informed Flow (MIF) architecture, which adds the necessary translation mechanism, prior information, and hierarchical ordering. Empirically, MIF delivers tighter posterior approximations and matches or exceeds state-of-the-art performance across a suite of hierarchical and non-hierarchical benchmarks. - oai:arXiv.org:2505.24243v2 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Joohwan Ko, Justin Domke - - - VQC-MLPNet: An Unconventional Hybrid Quantum-Classical Architecture for Scalable and Robust Quantum Machine Learning - https://arxiv.org/abs/2506.10275 - arXiv:2506.10275v2 Announce Type: replace-cross -Abstract: Variational quantum circuits (VQCs) hold promise for quantum machine learning but face challenges in expressivity, trainability, and noise resilience. We propose VQC-MLPNet, a hybrid architecture where a VQC generates the first-layer weights of a classical multilayer perceptron during training, while inference is performed entirely classically. This design preserves scalability, reduces quantum resource demands, and enables practical deployment. We provide a theoretical analysis based on statistical learning and neural tangent kernel theory, establishing explicit risk bounds and demonstrating improved expressivity and trainability compared to purely quantum or existing hybrid approaches. These theoretical insights demonstrate exponential improvements in representation capacity relative to quantum circuit depth and the number of qubits, providing clear computational advantages over standalone quantum circuits and existing hybrid quantum architectures. Empirical results on diverse datasets, including quantum-dot classification and genomic sequence analysis, show that VQC-MLPNet achieves high accuracy and robustness under realistic noise models, outperforming classical and quantum baselines while using significantly fewer trainable parameters. - oai:arXiv.org:2506.10275v2 - quant-ph - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Jun Qi, Chao-Han Yang, Pin-Yu Chen, Min-Hsiu Hsieh - - - TensorHyper-VQC: A Tensor-Train-Guided Hypernetwork for Robust and Scalable Variational Quantum Computing - https://arxiv.org/abs/2508.01116 - arXiv:2508.01116v2 Announce Type: replace-cross -Abstract: Variational Quantum Computing (VQC) faces fundamental scalability barriers, primarily due to the presence of barren plateaus and its sensitivity to quantum noise. To address these challenges, we introduce TensorHyper-VQC, a novel tensor-train (TT)-guided hypernetwork framework that significantly improves the robustness and scalability of VQC. Our framework fully delegates the generation of quantum circuit parameters to a classical TT network, effectively decoupling optimization from quantum hardware. This innovative parameterization mitigates gradient vanishing, enhances noise resilience through structured low-rank representations, and facilitates efficient gradient propagation. 
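The first paragraph of the TensorHyper-VQC abstract above hinges on one mechanism: a classical tensor-train (TT) network, rather than the quantum hardware, produces the circuit parameters. The sketch below shows that mechanism generically in numpy; the core shapes, ranks, and the final squashing into rotation angles are illustrative assumptions, not the paper's actual construction.
    # Generic sketch of a TT "hypernetwork": contract small classical TT cores into
    # a long vector of rotation angles that a variational circuit would consume.
    import numpy as np

    rng = np.random.default_rng(0)
    modes, rank = [2, 2, 2, 2], 3                        # 2*2*2*2 = 16 output parameters
    ranks = [1] + [rank] * (len(modes) - 1) + [1]
    cores = [rng.normal(scale=0.5, size=(ranks[k], modes[k], ranks[k + 1]))
             for k in range(len(modes))]                 # trainable classical parameters

    def tt_to_angles(cores):
        out = cores[0]                                   # shape (1, n_1, r_1)
        for core in cores[1:]:
            out = np.tensordot(out, core, axes=([-1], [0]))   # contract shared TT rank
        return np.pi * np.tanh(out.ravel())              # squash into angles in (-pi, pi)

    angles = tt_to_angles(cores)                         # e.g. 16 single-qubit rotation angles
    print(angles.shape, float(angles.min()), float(angles.max()))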
Grounded in Neural Tangent Kernel and statistical learning theory, our rigorous theoretical analyses establish strong guarantees on approximation capability, optimization stability, and generalization performance. Extensive empirical results across quantum dot classification, Max-Cut optimization, and molecular quantum simulation tasks demonstrate that TensorHyper-VQC consistently achieves superior performance and robust noise tolerance, including hardware-level validation on a 156-qubit IBM Heron processor. These results position TensorHyper-VQC as a scalable and noise-resilient framework for advancing practical quantum machine learning on near-term devices. - oai:arXiv.org:2508.01116v2 - quant-ph - cs.AI - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Jun Qi, Chao-Han Yang, Pin-Yu Chen, Min-Hsiu Hsieh - - - Diagrams-to-Dynamics (D2D): Exploring Causal Loop Diagram Leverage Points under Uncertainty - https://arxiv.org/abs/2508.05659 - arXiv:2508.05659v3 Announce Type: replace-cross -Abstract: Causal loop diagrams (CLDs) are widely used in health and environmental research to represent hypothesized causal structures underlying complex problems. However, as qualitative and static representations, CLDs are limited in their ability to support dynamic analysis and inform intervention strategies. We propose Diagrams-to-Dynamics (D2D), a method for converting CLDs into exploratory system dynamics models (SDMs) in the absence of empirical data. With minimal user input - following a protocol to label variables as stocks, flows or auxiliaries, and constants - D2D leverages the structural information already encoded in CLDs, namely, link existence and polarity, to simulate hypothetical interventions and explore potential leverage points under uncertainty. Results suggest that D2D helps distinguish between high- and low-ranked leverage points. We compare D2D to a data-driven SDM constructed from the same CLD and variable labels. D2D showed greater consistency with the data-driven model compared to static network centrality analysis, while providing uncertainty estimates and guidance for future data collection. The D2D method is implemented in an open-source Python package and a web-based application to support further testing and lower the barrier to dynamic modeling for researchers working with CLDs. We expect that additional validation studies will further establish the approach's utility across a broad range of cases and domains. - oai:arXiv.org:2508.05659v3 - cs.LG - stat.ME - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by-nc-nd/4.0/ - Jeroen F. Uleman, Loes Crielaard, Leonie K. Elsenburg, Guido A. Veldhuis, Naja Hulvej Rod, Rick Quax, V\'itor V. Vasconcelos - - - ADPO: Anchored Direct Preference Optimization - https://arxiv.org/abs/2510.18913 - arXiv:2510.18913v4 Announce Type: replace-cross -Abstract: Direct Preference Optimization (DPO) has become a standard for aligning models with human feedback, yet its reliance on hard, pairwise preferences makes it brittle to annotator noise and distribution shift. We propose Anchored Direct Preference Optimization (ADPO), a theoretically grounded framework that extends preference learning to soft, listwise supervision through reference anchoring. 
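For context on the "hard, pairwise preferences" that the ADPO abstract above generalizes, the standard DPO objective (the baseline being extended, not ADPO itself) scores each annotated pair $(y_w, y_l)$ through log-ratio differences against a frozen reference policy:
$$ \mathcal{L}_{\mathrm{DPO}}(\theta) \;=\; -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]. $$
ADPO, as described above, replaces this hard pairwise comparison with soft, listwise targets anchored to a reference policy, which is what produces the implicit trust region described in the contributions that follow.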
Our key theoretical contributions are threefold: (1) we establish that ADPO unifies major learning paradigms, including supervised fine-tuning, knowledge distillation, maximum-entropy reinforcement learning, and DPO, as special cases through different choices of target distribution, anchor policy, and temperature; (2) we prove that anchoring induces an implicit trust region governed by the softmax Fisher metric; and (3) we formalize the stability of dynamic anchor updates. Empirically, we discover a task-dependent tradeoff: dynamic anchors suit online exploration, while fixed anchors excel at offline distillation, reducing teacher-student KL divergence by two to three orders of magnitude (170 to 5000 times). - oai:arXiv.org:2510.18913v4 - cs.LG - cs.AI - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://arxiv.org/licenses/nonexclusive-distrib/1.0/ - Wang Zixian - - - On Measuring Localization of Shortcuts in Deep Networks - https://arxiv.org/abs/2510.26560 - arXiv:2510.26560v2 Announce Type: replace-cross -Abstract: Shortcuts, spurious rules that perform well during training but fail to generalize, present a major challenge to the reliability of deep networks (Geirhos et al., 2020). However, the impact of shortcuts on feature representations remains understudied, obstructing the design of principled shortcut-mitigation methods. To overcome this limitation, we investigate the layer-wise localization of shortcuts in deep models. Our novel experiment design quantifies the layer-wise contribution to accuracy degradation caused by a shortcut-inducing skew by counterfactual training on clean and skewed datasets. We employ our design to study shortcuts on CIFAR-10, Waterbirds, and CelebA datasets across VGG, ResNet, DeiT, and ConvNeXt architectures. We find that shortcut learning is not localized in specific layers but distributed throughout the network. Different network parts play different roles in this process: shallow layers predominantly encode spurious features, while deeper layers predominantly forget core features that are predictive on clean data. We also analyze the differences in localization and describe its principal axes of variation. Finally, our analysis of layer-wise shortcut-mitigation strategies suggests the hardness of designing general methods, supporting dataset- and architecture-specific approaches instead. - oai:arXiv.org:2510.26560v2 - cs.LG - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by/4.0/ - Nikita Tsoy, Nikola Konstantinov - - - Probabilistic Graph Cuts - https://arxiv.org/abs/2511.02272 - arXiv:2511.02272v2 Announce Type: replace-cross -Abstract: Probabilistic relaxations of graph cuts offer a differentiable alternative to spectral clustering, enabling end-to-end and online learning without eigendecompositions, yet prior work centered on RatioCut and lacked general guarantees and principled gradients. We present a unified probabilistic framework that covers a wide class of cuts, including Normalized Cut. Our framework provides tight analytic upper bounds on expected discrete cuts via integral representations and Gauss hypergeometric functions with closed-form forward and backward. Together, these results deliver a rigorous, numerically stable foundation for scalable, differentiable graph partitioning covering a wide range of clustering and contrastive learning objectives. 
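To see what kind of objective the Probabilistic Graph Cuts abstract above is relaxing, here is a common differentiable soft relaxation of Normalized Cut over row-stochastic assignments, written in numpy. It only illustrates the general setup; the paper's actual contribution, tight analytic upper bounds on the expected discrete cut with closed-form gradients, is not reproduced here, and the toy graph below is an assumption for the example.
    # Common differentiable relaxation of Normalized Cut over soft assignments
    # (illustrative setup only; not the paper's probabilistic bounds).
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 12, 3
    pts = np.concatenate([rng.normal(loc=c, scale=0.3, size=(n // k, 2))
                          for c in ([0, 0], [3, 0], [0, 3])])
    W = np.exp(-np.sum((pts[:, None] - pts[None, :]) ** 2, axis=-1))  # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                                         # graph Laplacian

    def soft_ncut(logits):
        z = logits - logits.max(axis=1, keepdims=True)
        P = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)          # row-wise softmax
        num = np.einsum('ik,ij,jk->k', P, L, P)                       # soft cut of each cluster
        den = np.einsum('ik,ij,jk->k', P, D, P)                       # soft volume of each cluster
        return float((num / den).sum())

    print("soft NCut at a random assignment:", soft_ncut(rng.normal(size=(n, k))))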
- oai:arXiv.org:2511.02272v2 - cs.LG - cs.DS - stat.ML - Thu, 06 Nov 2025 00:00:00 -0500 - replace-cross - http://creativecommons.org/licenses/by-nc-sa/4.0/ - Ayoub Ghriss -