|
|
<?xml version='1.0' encoding='UTF-8'?> |
|
|
<rss xmlns:arxiv="http://arxiv.org/schemas/atom" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0"> |
|
|
<channel> |
|
|
<title>stat updates on arXiv.org</title> |
|
|
<link>http://rss.arxiv.org/rss/stat</link> |
|
|
<description>stat updates on the arXiv.org e-print archive.</description> |
|
|
<atom:link href="http://rss.arxiv.org/rss/stat" rel="self" type="application/rss+xml"/> |
|
|
<docs>http://www.rssboard.org/rss-specification</docs> |
|
|
<language>en-us</language> |
|
|
<lastBuildDate>Mon, 02 Feb 2026 05:00:12 +0000</lastBuildDate> |
|
|
<managingEditor>rss-help@arxiv.org</managingEditor> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<skipDays> |
|
|
<day>Sunday</day> |
|
|
<day>Saturday</day> |
|
|
</skipDays> |
|
|
<item> |
|
|
<title>A Time-Varying Branching Process Approach to Model Self-Renewing Cells</title> |
|
|
<link>https://arxiv.org/abs/2601.22282</link> |
|
|
<description>arXiv:2601.22282v1 Announce Type: new |
|
|
Abstract: Stem cells, through their ability to produce daughter stem cells and differentiate into specialized cells, are essential in the growth, maintenance, and repair of biological tissues. Understanding the dynamics of cell populations in the proliferation process not only uncovers proliferative properties of stem cells, but also offers insight into tissue development under both normal conditions and pathological disruption. In this paper, we develop a continuous-time branching process model with a time-dependent offspring distribution to characterize the stem cell proliferation process. We derive analytical expressions for the mean, variance, and autocovariance of the stem cell counts, and develop likelihood-based inference procedures to estimate model parameters. In particular, we construct a forward-algorithm likelihood to handle situations in which some cell types cannot be directly observed. Simulation results demonstrate that our estimation method recovers the time-dependent division probabilities with good accuracy.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22282v1</guid> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Huyen Nguyen, Haim Bar, Zhiyi Chi, Vladimir Pozdnyakov</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Dependence-Aware Label Aggregation for LLM-as-a-Judge via Ising Models</title> |
|
|
<link>https://arxiv.org/abs/2601.22336</link> |
|
|
<description>arXiv:2601.22336v1 Announce Type: new |
|
|
Abstract: Large-scale AI evaluation increasingly relies on aggregating binary judgments from $K$ annotators, including LLMs used as judges. Most classical methods, e.g., Dawid-Skene or (weighted) majority voting, assume annotators are conditionally independent given the true label $Y\in\{0,1\}$, an assumption often violated by LLM judges due to shared data, architectures, prompts, and failure modes. Ignoring such dependencies can yield miscalibrated posteriors and even confidently incorrect predictions. We study label aggregation through a hierarchy of dependence-aware models based on Ising graphical models and latent factors. For class-dependent Ising models, the Bayes log-odds is generally quadratic in votes; for class-independent couplings, it reduces to a linear weighted vote with correlation-adjusted parameters. We present finite-$K$ examples showing that methods based on conditional independence can flip the Bayes label despite matching per-annotator marginals. We prove separation results demonstrating that these methods remain strictly suboptimal as the number of judges grows, incurring nonvanishing excess risk under latent factors. Finally, we evaluate the proposed method on three real-world datasets, demonstrating improved performance over the classical baselines.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22336v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Krishnakumar Balasubramanian, Aleksandr Podkopaev, Shiva Prasad Kasiviswanathan</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Amortized Simulation-Based Inference in Generalized Bayes via Neural Posterior Estimation</title> |
|
|
<link>https://arxiv.org/abs/2601.22367</link> |
|
|
<description>arXiv:2601.22367v1 Announce Type: new |
|
|
Abstract: Generalized Bayesian Inference (GBI) tempers a loss with a temperature $\beta>0$ to mitigate overconfidence and improve robustness under model misspecification, but existing GBI methods typically rely on costly MCMC or SDE-based samplers and must be re-run for each new dataset and each $\beta$ value. We give the first fully amortized variational approximation to the tempered posterior family $p_\beta(\theta \mid x) \propto \pi(\theta)\,p(x \mid \theta)^\beta$ by training a single $(x,\beta)$-conditioned neural posterior estimator $q_\phi(\theta \mid x,\beta)$ that enables sampling in a single forward pass, without simulator calls or inference-time MCMC. We introduce two complementary training routes: (i) synthesize off-manifold samples $(\theta,x) \sim \pi(\theta)\,p(x \mid \theta)^\beta$ and (ii) reweight a fixed base dataset $\pi(\theta)\,p(x \mid \theta)$ using self-normalized importance sampling (SNIS). We show that the SNIS-weighted objective provides a consistent forward-KL fit to the tempered posterior with finite weight variance. Across four standard simulation-based inference (SBI) benchmarks, including the chaotic Lorenz-96 system, our $\beta$-amortized estimator achieves competitive posterior approximations in standard two-sample metrics, matching non-amortized MCMC-based power-posterior samplers over a wide range of temperatures.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22367v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Shiyi Sun, Geoff K. Nicholls, Jeong Eun Lee</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>It's all the (Exponential) Family: An Equivalence between Maximum Likelihood Estimation and Control Variates for Sketching Algorithms</title> |
|
|
<link>https://arxiv.org/abs/2601.22378</link> |
|
|
<description>arXiv:2601.22378v1 Announce Type: new |
|
|
Abstract: Maximum likelihood estimators (MLEs) and control variate estimators (CVEs) have been used in conjunction with known information across sketching algorithms and applications in machine learning. We prove that, under certain conditions in an exponential family, an optimal CVE achieves the same asymptotic variance as the MLE, yielding an Expectation-Maximization (EM) algorithm for the MLE. Experiments show that the EM algorithm is faster and more numerically stable than other root-finding algorithms for the MLE for the bivariate normal distribution, and we expect this to hold across distributions satisfying these conditions. We show how the EM algorithm leads to reproducibility for algorithms using the MLE / CVE, and demonstrate how it leads to finding the MLE when the CV weights are known.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22378v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Keegan Kang, Kerong Wang, Ding Zhang, Rameshwar Pratap, Bhisham Dev Verma, Benedict H. W. Wong</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Mixed Latent Position Cluster Models for Networks</title> |
|
|
<link>https://arxiv.org/abs/2601.22380</link> |
|
|
<description>arXiv:2601.22380v1 Announce Type: new |
|
|
Abstract: Over the last two decades, the Latent Position Model (LPM) has become a prominent tool for obtaining model-based visualizations of networks. However, the geometric structure of the LPM is inherently symmetric, in the sense that outgoing and incoming edges are assumed to follow the same statistical distribution. As a consequence, the canonical LPM framework is not ideal for the analysis of directed networks. In addition, edges may be weighted to describe the duration or intensity of a connection. This can lead to disassortative patterns and other motifs that cannot be easily captured by the underlying geometry. To address these limitations, we develop a novel extension of the LPM, called the Mixed Latent Position Cluster Model (MLPCM), which can deal with asymmetry and non-Euclidean patterns while providing new interpretations of the latent space. We dissect the directed edges of the network by formally disentangling how a node behaves from how it is perceived by others. This leads to a dual representation of a node's profile, identifying its "overt" and "covert" social positions. In order to efficiently estimate the parameters of our model, we develop a variational Bayes approach to approximate the posterior distribution. Unlike many existing variational frameworks, our algorithm does not require any additional numerical approximations. Model selection is performed by introducing a novel partially integrated complete likelihood criterion, which builds upon the literature on penalized likelihood methods. We demonstrate the accuracy of our proposed methodology using synthetic datasets, and we illustrate its practical utility with an application to a dataset of international arms transfers.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22380v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Chaoyi Lu, Riccardo Rastelli</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Simulation-based Bayesian inference with ameliorative learned summary statistics -- Part I</title> |
|
|
<link>https://arxiv.org/abs/2601.22441</link> |
|
|
<description>arXiv:2601.22441v1 Announce Type: new |
|
|
Abstract: This paper, Part 1 of a two-part series, considers simulation-based inference with learned summary statistics, in which the learned summary statistic serves as an empirical likelihood with ameliorative effects in the Bayesian setting, when the exact likelihood function associated with the observation data and the simulation model is difficult to obtain in closed form or computationally intractable. In particular, a transformation technique that leverages the Cressie-Read discrepancy criterion under moment restrictions is used to summarize the learned statistics between the observation data and the simulation outputs, while preserving the statistical power of the inference. Such a transformation of data to learned summary statistics also allows the simulation outputs to be conditioned on the observation data, so that the inference task can be performed over sample sets of the observation data that are considered empirically relevant or believed to be of particular importance. Moreover, the simulation-based inference framework discussed in this paper can be extended to handle weakly dependent observation data. Finally, we remark that such an inference framework is suitable for implementation in distributed computing: the computational tasks involving both the data-to-learned-summary-statistics transformation and the Bayesian inference problem can be posed as a unified distributed inference problem that exploits distributed optimization and MCMC algorithms to support large datasets associated with complex simulation models.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22441v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Getachew K. Befekadu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Changepoint Detection As Model Selection: A General Framework</title> |
|
|
<link>https://arxiv.org/abs/2601.22481</link> |
|
|
<description>arXiv:2601.22481v1 Announce Type: new |
|
|
Abstract: This dissertation presents a general framework for changepoint detection based on L0 model selection. The core method, Iteratively Reweighted Fused Lasso (IRFL), improves upon the generalized lasso by adaptively reweighting penalties to enhance support recovery and minimize criteria such as the Bayesian Information Criterion (BIC). The approach allows for flexible modeling of seasonal patterns, linear and quadratic trends, and autoregressive dependence in the presence of changepoints. |
|
|
Simulation studies demonstrate that IRFL achieves accurate changepoint detection across a wide range of challenging scenarios, including those involving nuisance factors such as trends, seasonal patterns, and serially correlated errors. The framework is further extended to image data, where it enables edge-preserving denoising and segmentation, with applications spanning medical imaging and high-throughput plant phenotyping. |
|
|
Applications to real-world data demonstrate IRFL's utility. In particular, analysis of the Mauna Loa CO2 time series reveals changepoints that align with volcanic eruptions and ENSO events, yielding a more accurate trend decomposition than ordinary least squares. Overall, IRFL provides a robust, extensible tool for detecting structural change in complex data.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22481v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Michael Grantham, Xueheng Shi, Bertrand Clarke</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Corrected Samplers for Discrete Flow Models</title> |
|
|
<link>https://arxiv.org/abs/2601.22519</link> |
|
|
<description>arXiv:2601.22519v1 Announce Type: new |
|
|
Abstract: Discrete flow models (DFMs) have been proposed to learn the data distribution on a finite state space, offering a flexible framework as an alternative to discrete diffusion models. A line of recent work has studied samplers for discrete diffusion models, such as tau-leaping and the Euler solver. However, these samplers require a large number of iterations to control discretization error, since the transition rates are frozen in time and evaluated at the initial state within each time interval. Moreover, theoretical results for these samplers often require boundedness conditions on the transition rates, or focus on a specific type of source distribution. To address these limitations, we establish non-asymptotic discretization error bounds for these samplers without any restriction on transition rates or source distributions, under the framework of discrete flow models. Furthermore, by analyzing a one-step lower bound of the Euler sampler, we propose two corrected samplers: the \textit{time-corrected sampler} and the \textit{location-corrected sampler}, which reduce the discretization error of tau-leaping and the Euler solver at almost no additional computational cost. We rigorously show that the location-corrected sampler has a lower iteration complexity than existing parallel samplers. We validate the effectiveness of the proposed method by demonstrating improved generation quality and reduced inference time on both simulation and text-to-image generation tasks. Code can be found at https://github.com/WanZhengyan/Corrected-Samplers-for-Discrete-Flow-Models.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22519v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Zhengyan Wan, Yidong Ouyang, Liyan Xie, Fang Fang, Hongyuan Zha, Guang Cheng</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Group Sequential Methods for the Win Ratio</title> |
|
|
<link>https://arxiv.org/abs/2601.22525</link> |
|
|
<description>arXiv:2601.22525v1 Announce Type: new |
|
|
Abstract: The win ratio is increasingly used in randomized trials due to its intuitive clinical interpretation, its ability to incorporate the relative importance of composite endpoints, and its capacity for combining different types of outcomes (e.g., time-to-event, binary, counts). There are open questions, however, about how to implement adaptive design approaches when the primary endpoint is a win ratio, including in group sequential designs. A key requirement allowing for straightforward application of classical group sequential methods is the independence of incremental interim test statistics. This paper derives the covariance structure of incremental U-statistics that evaluate the win ratio under its asymptotic distribution. The derived covariance shows that the independent increments assumption holds for the asymptotic distribution of U-statistics that test the win ratio. Simulations confirm that traditional $\alpha$-spending preserves Type I error across interim looks. A retrospective look at the IN.PACT SFA clinical trial data illustrates the potential for stopping early in a group sequential design using the win ratio. We have demonstrated that straightforward use of Lan-DeMets $\alpha$-spending is possible for randomized trials involving the win ratio under certain common conditions. Thus, existing software capable of computing traditional group sequential boundaries can be employed.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22525v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Tracy Bergemann, Tim Hanson</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Propensity score weighted Cox regression for survival outcomes in observational studies with multiple or factorial treatments</title> |
|
|
<link>https://arxiv.org/abs/2601.22572</link> |
|
|
<description>arXiv:2601.22572v1 Announce Type: new |
|
|
Abstract: In observational studies with survival or time-to-event outcomes, a propensity score weighted marginal Cox proportional hazards model with the treatment variable as the only predictor is commonly used to estimate the causal marginal hazard ratio between two treatments. Observational studies often have more than two treatments, but corresponding analysis methods are limited. In this paper, we combine the propensity score weighting method for multiple treatments with a marginal Cox model with indicators for each treatment to estimate the causal hazard ratios between multiple treatments and a common reference treatment. We illustrate two weighting schemes: inverse probability of treatment weighting and overlap weighting. We prove the consistency of the maximum weighted partial likelihood estimator of the causal marginal hazard ratio and derive a robust sandwich variance estimator. As an important special case of multiple treatments, we elaborate on the Cox model for two-way factorial treatments. We apply the method to evaluate the real-world comparative effectiveness of three types of anti-obesity medications on heart failure. We develop an associated R package 'PSsurvival'.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22572v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Zixian Zhao, Chengxin Yang, Fan Li</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Quadratic robust methods for causal mediation analysis</title> |
|
|
<link>https://arxiv.org/abs/2601.22592</link> |
|
|
<description>arXiv:2601.22592v1 Announce Type: new |
|
|
Abstract: Estimating natural effects is a core task in causal mediation analysis. Existing triply robust (TR) frameworks (Tchetgen Tchetgen & Shpitser 2012) and their extensions have been developed to estimate the natural effects. In this work, we introduce a new quadruply robust (QR) framework that enlarges the model class for unbiased identification. We study two modeling strategies. The first is a nonparametric modeling approach, under which we propose a general QR estimator that supports the use of machine learning methods for nuisance estimation. We also study high-dimensional settings, where the dimensions of covariates and mediators may both be large. In these settings, we adopt a parametric modeling strategy and develop a model quadruply robust (MQR) estimator to limit the impact of model misspecification. Simulation studies and a real data application demonstrate the finite-sample performance of the proposed methods.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22592v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Zhen Qi, Yuqian Zhang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>An Efficient Algorithm for Thresholding Monte Carlo Tree Search</title> |
|
|
<link>https://arxiv.org/abs/2601.22600</link> |
|
|
<description>arXiv:2601.22600v1 Announce Type: new |
|
|
Abstract: We introduce the Thresholding Monte Carlo Tree Search problem, in which, given a tree $\mathcal{T}$ and a threshold $\theta$, a player must answer whether the root node value of $\mathcal{T}$ is at least $\theta$ or not. In the given tree, `MAX' or `MIN' is labeled on each internal node, and the value of a `MAX'-labeled (`MIN'-labeled) internal node is the maximum (minimum) of its child values. The value of a leaf node is the mean reward of an unknown distribution, from which the player can sample rewards. For this problem, we develop a $\delta$-correct sequential sampling algorithm based on the Track-and-Stop strategy that has asymptotically optimal sample complexity. We show that a ratio-based modification of the D-Tracking arm-pulling strategy leads to a substantial improvement in empirical sample complexity, as well as reducing the per-round computational cost from linear to logarithmic in the number of arms.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22600v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Shoma Nameki (Graduate School of Information Science and Technology, Hokkaido University), Atsuyoshi Nakamura (Faculty of Information Science and Technology, Hokkaido University), Junpei Komiyama (Mohamed bin Zayed University of Artificial Intelligence, RIKEN AIP), Koji Tabata (Research Institute for Electronic Science, Hokkaido University)</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A spectral approach for online covariance change point detection</title> |
|
|
<link>https://arxiv.org/abs/2601.22602</link> |
|
|
<description>arXiv:2601.22602v1 Announce Type: new |
|
|
Abstract: Change point detection in covariance structures is a fundamental and crucial problem for sequential data. Under the high-dimensional setting, most existing research has focused on identifying change points in historical data. However, there is a significant lack of studies on the practically relevant online change point problem, that is, promptly detecting change points as they occur. In this paper, applying the limiting theory of linear spectral statistics for random matrices, we propose a class of spectrum-based CUSUM-type statistics. We first construct a martingale from the difference of linear spectral statistics of sequential sample Fisher matrices, which converges to a Brownian motion. Our CUSUM-type statistic is then defined as the maximum of a variant of this process. Finally, we develop our detection procedure based on the invariance principle. Simulation results show that our detection method is highly sensitive to the occurrence of a change point and is able to identify it shortly after it arises, outperforming existing approaches.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22602v1</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/publicdomain/zero/1.0/</dc:rights> |
|
|
<dc:creator>Zhigang Bao, Kha Man Cheong, Yuji Li, Jiaxin Qiu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>RPWithPrior: Label Differential Privacy in Regression</title> |
|
|
<link>https://arxiv.org/abs/2601.22625</link> |
|
|
<description>arXiv:2601.22625v1 Announce Type: new |
|
|
Abstract: With the wide application of machine learning techniques in practice, privacy preservation has gained increasing attention. Protecting user privacy with minimal accuracy loss is a fundamental task in the data analysis and mining community. In this paper, we focus on regression tasks under $\epsilon$-label differential privacy guarantees. Some existing methods for regression with $\epsilon$-label differential privacy, such as the RR-On-Bins mechanism, discretize the output space into finite bins and then apply a randomized response (RR) algorithm. To efficiently determine these finite bins, the authors round the original responses down to integer values. However, such an operation does not align well with real-world scenarios. To overcome these limitations, we model both original and randomized responses as continuous random variables, avoiding discretization entirely. Our novel approach estimates an optimal interval for randomized responses and introduces new algorithms designed for scenarios where a prior is either known or unknown. Additionally, we prove that our algorithm, RPWithPrior, guarantees $\epsilon$-label differential privacy. Numerical results demonstrate that our approach achieves better performance than the Gaussian, Laplace, Staircase, RR-On-Bins, and Unbiased mechanisms on the Communities and Crime, Criteo Sponsored Search Conversion Log, and California Housing datasets.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22625v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Haixia Liu, Ruifan Huang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Generative and Nonparametric Approaches for Conditional Distribution Estimation: Methods, Perspectives, and Comparative Evaluations</title> |
|
|
<link>https://arxiv.org/abs/2601.22650</link> |
|
|
<description>arXiv:2601.22650v1 Announce Type: new |
|
|
Abstract: The inference of conditional distributions is a fundamental problem in statistics, essential for prediction, uncertainty quantification, and probabilistic modeling. A wide range of methodologies have been developed for this task. This article reviews and compares several representative approaches spanning classical nonparametric methods and modern generative models. We begin with the single-index method of Hall and Yao (2005), which estimates the conditional distribution through a dimension-reducing index and nonparametric smoothing of the resulting one-dimensional cumulative conditional distribution function. We then examine the basis-expansion approaches, including FlexCode (Izbicki and Lee, 2017) and DeepCDE (Dalmasso et al., 2020), which convert conditional density estimation into a set of nonparametric regression problems. In addition, we discuss two recent generative simulation-based methods that leverage modern deep generative architectures: the generative conditional distribution sampler (Zhou et al., 2023) and the conditional denoising diffusion probabilistic model (Fu et al., 2024; Yang et al., 2025). A systematic numerical comparison of these approaches is provided using a unified evaluation framework that ensures fairness and reproducibility. The performance metrics used for the estimated conditional distribution include the mean-squared errors of conditional mean and standard deviation, as well as the Wasserstein distance. We also discuss their flexibility and computational costs, highlighting the distinct advantages and limitations of each approach.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22650v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Yen-Shiu Chin, Zhi-Yu Jou, Toshinari Morimoto, Chia-Tse Wang, Ming-Chung Chang, Tso-Jung Yen, Su-Yun Huang, Tailen Hsing</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Spectral Gradient Descent Mitigates Anisotropy-Driven Misalignment: A Case Study in Phase Retrieval</title> |
|
|
<link>https://arxiv.org/abs/2601.22652</link> |
|
|
<description>arXiv:2601.22652v1 Announce Type: new |
|
|
Abstract: Spectral gradient methods, such as the Muon optimizer, modify gradient updates by preserving directional information while discarding scale, and have shown strong empirical performance in deep learning. We investigate the mechanisms underlying these gains through a dynamical analysis of a nonlinear phase retrieval model with anisotropic Gaussian inputs, equivalent to training a two-layer neural network with the quadratic activation and fixed second-layer weights. Focusing on a spiked covariance setting where the dominant variance direction is orthogonal to the signal, we show that gradient descent (GD) suffers from a variance-induced misalignment: during the early escaping stage, the high-variance but uninformative spike direction is multiplicatively amplified, degrading alignment with the true signal under strong anisotropy. In contrast, spectral gradient descent (SpecGD) removes this spike amplification effect, leading to stable alignment and accelerated noise contraction. Numerical experiments confirm the theory and show that these phenomena persist under broader anisotropic covariances.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22652v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Guillaume Braun, Han Bao, Wei Huang, Masaaki Imaizumi</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Policy learning under constraint: Maximizing a primary outcome while controlling an adverse event</title> |
|
|
<link>https://arxiv.org/abs/2601.22717</link> |
|
|
<description>arXiv:2601.22717v1 Announce Type: new |
|
|
Abstract: A medical policy aims to support decision-making by mapping patient characteristics to individualized treatment recommendations. Standard approaches typically optimize a single outcome criterion. For example, recommending treatment according to the sign of the Conditional Average Treatment Effect (CATE) maximizes the policy "value" by exploiting treatment effect heterogeneity. This point of view shifts policy learning towards the challenge of learning a reliable CATE estimator. However, in multi-outcome settings, such strategies ignore the risk of adverse events, despite their relevance. PLUC (Policy Learning Under Constraint) addresses this challenge by learning an estimator of the CATE that yields smoothed policies controlling the probability of an adverse event in observational settings. Inspired by insights from EP-learning, PLUC involves the optimization of strongly convex Lagrangian criteria over a convex hull of functions. Its alternating procedure iteratively applies the Frank-Wolfe algorithm to minimize the current criterion, then performs a targeting step that updates the criterion so that its evaluations at previously visited landmarks become targeted estimators of the corresponding theoretical quantities. An R package PLUC-R provides a practical implementation. We illustrate PLUC's performance through a series of numerical experiments.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22717v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Laura Fuentes-Vicente, Mathieu Even, Gaelle Dormion, Julie Josse, Antoine Chambaz</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>GRANITE: A Generalized Regional Framework for Identifying Agreement in Feature-Based Explanations</title> |
|
|
<link>https://arxiv.org/abs/2601.22771</link> |
|
|
<description>arXiv:2601.22771v1 Announce Type: new |
|
|
Abstract: Feature-based explanation methods aim to quantify how features influence the model's behavior, either locally or globally, but different methods often disagree, producing conflicting explanations. This disagreement arises primarily from two sources: how feature interactions are handled and how feature dependencies are incorporated. We propose GRANITE, a generalized regional explanation framework that partitions the feature space into regions where interaction and distribution influences are minimized. This approach aligns different explanation methods, yielding more consistent and interpretable explanations. GRANITE unifies existing regional approaches, extends them to feature groups, and introduces a recursive partitioning algorithm to estimate such regions. We demonstrate its effectiveness on real-world datasets, providing a practical tool for consistent and interpretable feature explanations.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22771v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Julia Herbinger, Gabriel Laberge, Maximilian Muschalik, Yann Pequignot, Marvin N. Wright, Fabian Fumagalli</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Optimal Sample Splitting for Observational Studies</title> |
|
|
<link>https://arxiv.org/abs/2601.22782</link> |
|
|
<description>arXiv:2601.22782v1 Announce Type: new |
|
|
Abstract: In observational studies of treatment effects, estimates may be biased by unmeasured confounders, which can potentially affect the validity of the results. Understanding sensitivity to such biases helps assess how unmeasured confounding impacts credibility. The design of an observational study strongly influences its sensitivity to bias. Previous work has shown that the sensitivity to bias can be reduced by dividing a dataset into a planning sample and a larger analysis sample, where the planning sample guides design decisions. But the choice of what fraction of the data to put in the planning sample vs. the analysis sample was ad hoc. Here, we develop an approach to find the optimal fraction using plasmode datasets. We show that our method works well in high-dimensional outcome spaces. We apply our method to study the effects of exposure to second-hand smoke in children. The OptimalSampling R package implementing our method is available at GitHub.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22782v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-sa/4.0/</dc:rights> |
|
|
<dc:creator>Qishuo Yin, Dylan S. Small</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Approximating $f$-Divergences with Rank Statistics</title> |
|
|
<link>https://arxiv.org/abs/2601.22784</link> |
|
|
<description>arXiv:2601.22784v1 Announce Type: new |
|
|
Abstract: We introduce a rank-statistic approximation of $f$-divergences that avoids explicit density-ratio estimation by working directly with the distribution of ranks. For a resolution parameter $K$, we map the mismatch between two univariate distributions $\mu$ and $\nu$ to a rank histogram on $\{ 0, \ldots, K\}$ and measure its deviation from uniformity via a discrete $f$-divergence, yielding a rank-statistic divergence estimator. We prove that the resulting estimator of the divergence is monotone in $K$, is always a lower bound of the true $f$-divergence, and we establish quantitative convergence rates for $K\to\infty$ under mild regularity of the quantile-domain density ratio. To handle high-dimensional data, we define the sliced rank-statistic $f$-divergence by averaging the univariate construction over random projections, and we provide convergence results for the sliced limit as well. We also derive finite-sample deviation bounds along with asymptotic normality results for the estimator. Finally, we empirically validate the approach by benchmarking against neural baselines and illustrating its use as a learning objective in generative modelling experiments.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22784v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Viktor Stein, Jos\'e Manuel de Frutos</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Convergence of Multi-Level Markov Chain Monte Carlo Adaptive Stochastic Gradient Algorithms</title> |
|
|
<link>https://arxiv.org/abs/2601.22799</link> |
|
|
<description>arXiv:2601.22799v1 Announce Type: new |
|
|
Abstract: Stochastic optimization in learning and inference often relies on Markov chain Monte Carlo (MCMC) to approximate gradients when exact computation is intractable. However, finite-time MCMC estimators are biased, and reducing this bias typically comes at a higher computational cost. We propose a multilevel Monte Carlo gradient estimator whose bias decays as $O(T_n^{-1})$ while its expected computational cost grows only as $O(\log T_n)$, where $T_n$ is the maximal truncation level at iteration $n$. Building on this approach, we introduce a multilevel MCMC framework for adaptive stochastic gradient methods, leading to new multilevel variants of Adagrad and AMSGrad algorithms. Under conditions controlling the estimator bias and its second and third moments, we establish a convergence rate of order $O(n^{-1/2})$ up to logarithmic factors. Finally, we illustrate these results on Importance-Weighted Autoencoders trained with the proposed multilevel adaptive methods.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22799v1</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Antoine Godichon-Baggioni (LPSM), Gabriel Lang (MIA Paris-Saclay), Sylvain Le Corff (CEREMADE), Julien Stoehr (CEREMADE), Sobihan Surendran</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Wasserstein Geometry of Information Loss in Nonlinear Dynamical Systems</title> |
|
|
<link>https://arxiv.org/abs/2601.22814</link> |
|
|
<description>arXiv:2601.22814v1 Announce Type: new |
|
|
Abstract: Time-delay embedding is a powerful technique for reconstructing the state space of nonlinear time series. However, the fidelity of reconstruction relies on the assumption that the time-delay map is an embedding, which is implicitly justified by Takens' embedding theorem but rarely scrutinised in practice. In this work, we argue that time-delay reconstruction is not always an embedding, and that the non-injectivity of the time-delay map induced by a given measurement function causes irreducible information loss, degrading downstream model performance. Our analysis reveals that this local self-overlap stems from inherent dynamical properties, governed by the competition between the dynamical and the curvature penalty, and the irreducible information loss scales with the product of the geometric separation and the probability mass. We establish a measure-theoretic framework that lifts the dynamics to the space of probability measures, where the multi-valued evolution induced by the non-injectivity is quantified by how far the $n$-step conditional kernel $K^{n}(x, \cdot)$ deviates from a Dirac mass and introduce intrinsic stochasticity $\mathcal{E}^{*}_{n}$, an almost-everywhere, data-driven certificate of deterministic closure, to quantify irreducible information loss without any prior information. We demonstrate that $\mathcal{E}^{*}_{n}$ improves reconstruction quality and downstream model performance on both synthetic and real-world nonlinear data sets.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22814v1</guid> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Yiting Duan, Zhikun Zhang, Yi Guo</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Asymmetric conformal prediction with penalized kernel sum-of-squares</title> |
|
|
<link>https://arxiv.org/abs/2601.22834</link> |
|
|
<description>arXiv:2601.22834v1 Announce Type: new |
|
|
Abstract: Conformal prediction (CP) is a distribution-free method to construct reliable prediction intervals that has gained significant attention in recent years. Despite its success and various proposed extensions, a significant practical feature which has been overlooked in previous research is the potential skewed nature of the noise, or of the residuals when the predictive model exhibits bias. In this work, we leverage recent developments in CP to propose a new asymmetric procedure that bridges the gap between skewed and non-skewed noise distributions, while still maintaining adaptivity of the prediction intervals. We introduce a new statistical learning problem to construct adaptive and asymmetric prediction bands, with a unique feature based on a penalty which promotes symmetry: when its intensity varies, the intervals smoothly change from symmetric to asymmetric ones. This learning problem is based on reproducing kernel Hilbert spaces and the recently introduced kernel sum-of-squares framework. First, we establish representer theorems to make our problem tractable in practice, and derive dual formulations which are essential for scalability to larger datasets. Second, the intensity of the penalty is chosen using a novel data-driven method which automatically identifies the symmetric nature of the noise. We show that allowing some asymmetry can let the learned prediction bands better adapt to small sample regimes or biased predictive models.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22834v1</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Louis Allain (ENSAI, CREST), S\'ebastien Da Veiga (ENSAI, CREST, RT-UQ), Brian Staber</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Depth-based estimation for multivariate functional data with phase variability</title> |
|
|
<link>https://arxiv.org/abs/2601.22884</link> |
|
|
<description>arXiv:2601.22884v1 Announce Type: new |
|
|
Abstract: In the context of multivariate functional data with individual phase variation, we develop a robust depth-based approach to estimate the main pattern function when cross-component time warping is also present. In particular, we consider the latent deformation model (Carroll and M\"uller, 2023) in which the different components of a multivariate functional variable are also time-distorted versions of a common template function. Rather than focusing on a particular functional depth measure, we discuss the necessary conditions on a depth function to be able to provide a consistent estimation of the central pattern, considering different model assumptions. We evaluate the method performance and its robustness against atypical observations and violations of the model assumptions through simulations, and illustrate its use on two real data sets.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22884v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Ana Arribas-Gil, Sara L\'opez-Pintado</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A Framework for the Bayesian Calibration of Complex and Data-Scarce Models in Applied Sciences</title> |
|
|
<link>https://arxiv.org/abs/2601.22890</link> |
|
|
<description>arXiv:2601.22890v1 Announce Type: new |
|
|
Abstract: In this work, we review the theory involved in the Bayesian calibration of complex computer models, with particular emphasis on their use for applications involving computationally expensive simulations and scarce experimental data. In the article, we present a unified framework that incorporates various Bayesian calibration methods, including well-established approaches. Furthermore, we describe their implementation and use with a new, open-source Python library, ACBICI (A Configurable BayesIan Calibration and Inference Package). All algorithms are implemented with an object-oriented structure designed to be both easy to use and readily extensible. In particular, single-output and multiple-output calibration are addressed in a consistent manner. The article complements the theory and its implementation with practical recommendations for calibrating the problems of interest. These guidelines -- currently unavailable in a unified form elsewhere -- together with the open-source Python library, are intended to support the reliable calibration of computational codes and models commonly used in engineering and related fields. Overall, this work aims to serve both as a comprehensive review of the statistical foundations and (computational) tools required to perform such calculations, and as a practical guide to Bayesian calibration with modern software tools.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22890v1</guid> |
|
|
<category>stat.CO</category> |
|
|
<category>cond-mat.mtrl-sci</category> |
|
|
<category>math.OC</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Christina Schenk, Ignacio Romero</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A categorical account of the Metropolis-Hastings algorithm</title> |
|
|
<link>https://arxiv.org/abs/2601.22911</link> |
|
|
<description>arXiv:2601.22911v1 Announce Type: new |
|
|
Abstract: Metropolis-Hastings (MH) is a foundational Markov chain Monte Carlo (MCMC) algorithm. In this paper, we ask whether it is possible to formulate and analyse MH in terms of categorical probability, using a recent involutive framework for MH-type procedures as a concrete case study. We show how basic MCMC concepts such as invariance and reversibility can be formulated in Markov categories, and how one part of the MH kernel can be analysed using standard CD categories. To go further, we then study enrichments of CD categories over commutative monoids. This gives an expressive setting for reasoning abstractly about a range of important probabilistic concepts, including substochastic kernels, finite and $\sigma$-finite measures, absolute continuity, singular measures, and Lebesgue decompositions. Using these tools, we give synthetic necessary and sufficient conditions for a general MH-type sampler to be reversible with respect to a given target distribution.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22911v1</guid> |
|
|
<category>stat.CO</category> |
|
|
<category>math.CT</category> |
|
|
<category>math.PR</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Rob Cornish, Andi Q. Wang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Persuasive Privacy</title> |
|
|
<link>https://arxiv.org/abs/2601.22945</link> |
|
|
<description>arXiv:2601.22945v1 Announce Type: new |
|
|
Abstract: We propose a novel framework for measuring privacy from a Bayesian game-theoretic perspective. This framework enables the creation of new, purpose-driven privacy definitions that are rigorously justified, while also allowing for the assessment of existing privacy guarantees through game theory. We show that pure and probabilistic differential privacy are special cases of our framework, and provide new interpretations of the post-processing inequality in this setting. Further, we demonstrate that privacy guarantees can be established for deterministic algorithms, which are overlooked by current privacy standards.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22945v1</guid> |
|
|
<category>math.ST</category> |
|
|
<category>cs.CR</category> |
|
|
<category>econ.TH</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Joshua J Bon, James Bailie, Judith Rousseau, Christian P Robert</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>OneFlowSBI: One Model, Many Queries for Simulation-Based Inference</title> |
|
|
<link>https://arxiv.org/abs/2601.22951</link> |
|
|
<description>arXiv:2601.22951v1 Announce Type: new |
|
|
Abstract: We introduce \textit{OneFlowSBI}, a unified framework for simulation-based inference that learns a single flow-matching generative model over the joint distribution of parameters and observations. Leveraging a query-aware masking distribution during training, the same model supports multiple inference tasks, including posterior sampling, likelihood estimation, and arbitrary conditional distributions, without task-specific retraining. We evaluate \textit{OneFlowSBI} on ten benchmark inference problems and two high-dimensional real-world inverse problems across multiple simulation budgets. \textit{OneFlowSBI} is shown to deliver competitive performance against state-of-the-art generalized inference solvers and specialized posterior estimators, while enabling efficient sampling with few ODE integration steps and remaining robust under noisy and partially observed data.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22951v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Mayank Nautiyal, Li Ju, Melker Ernfors, Klara Hagland, Ville Holma, Maximilian Werk\"o S\"oderholm, Andreas Hellander, Prashant Singh</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Dynamic modelling and evaluation of preclinical trials in acute leukaemia</title> |
|
|
<link>https://arxiv.org/abs/2601.22971</link> |
|
|
<description>arXiv:2601.22971v1 Announce Type: new |
|
|
Abstract: Dynamic models are widely used to mathematically describe biological phenomena that evolve over time. One important area of application is leukaemia research, where leukaemia cells are genetically modified in preclinical studies to explore new therapeutic targets for reducing leukaemic burden. In advanced experiments, these studies are often conducted in mice and generate time-resolved data, the analysis of which may reveal growth-inhibiting effects of the investigated gene modifications. However, the experimental data is oftentimes evaluated using statistical tests which compare measurements from only two different time points. This approach not only reduces the time series to two instances but also neglects biological knowledge about cell mechanisms. Such knowledge, translated into mathematical models, expands the power to investigate and understand effects of modifications on underlying mechanisms based on experimental data. We utilise two population growth models -- an exponential and a logistic growth model -- to capture cell dynamics over the whole experimental time horizon and to consider all measurement times jointly. This approach enables us to derive modification effects from estimated model parameters. We demonstrate that the exponential growth model recognises simulated scenarios more reliably than the other candidate model and than a statistical test. Moreover, we apply the population growth models to evaluate the efficacy of candidate gene knockouts in patient-derived xenograft (PDX) models of acute leukaemia.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22971v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>q-bio.QM</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Julian W\"asche, Romina Ludwig, Irmela Jeremias, Christiane Fuchs</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Computationally efficient segmentation for non-stationary time series with oscillatory patterns</title> |
|
|
<link>https://arxiv.org/abs/2601.22999</link> |
|
|
<description>arXiv:2601.22999v1 Announce Type: new |
|
|
Abstract: We propose a novel approach for change-point detection and parameter learning in multivariate non-stationary time series exhibiting oscillatory behaviour. We approximate the process through a piecewise function defined by a sum of sinusoidal functions with unknown frequencies and amplitudes plus noise. The inference for this model is non-trivial. However, discretising the parameter space allows us to recast this complex estimation problem into a more tractable linear model, where the covariates are Fourier basis functions. Then, any change-point detection algorithms for segmentation can be used. The advantage of our proposal is that it bypasses the need for trans-dimensional Markov chain Monte Carlo algorithms used by state-of-the-art methods. Through simulations, we demonstrate that our method is significantly faster than existing approaches while maintaining comparable numerical accuracy. We also provide high probability bounds on the change-point localization error. We apply our methodology to climate and EEG sleep data.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22999v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Nicolas Bianco, Lorenzo Cappello</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Differences in Performance of Bayesian Dynamic Borrowing and Synthetic Control Methods: A Case Study of Pediatric Atopic Dermatitis</title> |
|
|
<link>https://arxiv.org/abs/2601.23021</link> |
|
|
<description>arXiv:2601.23021v1 Announce Type: new |
|
|
Abstract: Bayesian dynamic borrowing (BDB) and synthetic control methods (SCM) are both used in clinical trial design when recruitment, retention, or allocation is a challenge. The performance of these approaches has not previously been directly compared due to differences in application, product, and measurement metrics. This study aims to conduct a comparison of power and type 1 error rates of BDB (using a meta-analytic predictive prior (MAP)) and SCM using a case study of Pediatric Atopic Dermatitis. Six historical randomised control trials were selected for use in both the creation of the MAP prior and synthetic control arm. The R library RBesT was used to create a MAP prior and the R library Synthpop was used to create a synthetic control arm for the SCM. Power and type 1 error rate were used as comparison metrics. BDB produced a power of 0.580 and a type 1 error rate of 0.026. SCM produced a power of 0.641 and a type 1 error rate of 0.027. In this case study, the SCM model produced a higher power than the BDB method with a similar type 1 error rate. However, the decision to use SCM or BDB should come from the specific needs of the potential trial, since their power and type 1 error rate may differ on a case-by-case basis.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23021v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Nicole Cizauskas, Foteini Strimenopoulou, Svetlana S. Cherlin, James M. S. Wason</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Neural Backward Filtering Forward Guiding</title> |
|
|
<link>https://arxiv.org/abs/2601.23030</link> |
|
|
<description>arXiv:2601.23030v1 Announce Type: new |
|
|
Abstract: Inference in non-linear continuous stochastic processes on trees is challenging, particularly when observations are sparse (leaf-only) and the topology is complex. Exact smoothing via Doob's $h$-transform is intractable for general non-linear dynamics, while particle-based methods degrade in high dimensions. We propose Neural Backward Filtering Forward Guiding (NBFFG), a unified framework for both discrete transitions and continuous diffusions. Our method constructs a variational posterior by leveraging an auxiliary linear-Gaussian process. This auxiliary process yields a closed-form backward filter that serves as a ``guide'', steering the generative path toward high-likelihood regions. We then learn a neural residual--parameterized as a normalizing flow or a controlled SDE--to capture the non-linear discrepancies. This formulation allows for an unbiased path-wise subsampling scheme, reducing the training complexity from tree-size dependent to path-length dependent. Empirical results show that NBFFG outperforms baselines on synthetic benchmarks, and we demonstrate the method on a high-dimensional inference task in phylogenetic analysis with reconstruction of ancestral butterfly wing shapes.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23030v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Gefan Yang, Frank van der Meulen, Stefan Sommer</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Asymptotic Theory of Iterated Empirical Risk Minimization, with Applications to Active Learning</title> |
|
|
<link>https://arxiv.org/abs/2601.23031</link> |
|
|
<description>arXiv:2601.23031v1 Announce Type: new |
|
|
Abstract: We study a class of iterated empirical risk minimization (ERM) procedures in which two successive ERMs are performed on the same dataset, and the predictions of the first estimator enter as an argument in the loss function of the second. This setting, which arises naturally in active learning and reweighting schemes, introduces intricate statistical dependencies across samples and fundamentally distinguishes the problem from classical single-stage ERM analyses. For linear models trained with a broad class of convex losses on Gaussian mixture data, we derive a sharp asymptotic characterization of the test error in the high-dimensional regime where the sample size and ambient dimension scale proportionally. Our results provide explicit, fully asymptotic predictions for the performance of the second-stage estimator despite the reuse of data and the presence of prediction-dependent losses. We apply this theory to revisit a well-studied pool-based active learning problem, removing oracle and sample-splitting assumptions made in prior work. We uncover a fundamental tradeoff in how the labeling budget should be allocated across stages, and demonstrate a double-descent behavior of the test error driven purely by data selection, rather than model size or sample count.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23031v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Hugo Cui, Yue M. Lu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Semi-knockoffs: a model-agnostic conditional independence testing method with finite-sample guarantees</title> |
|
|
<link>https://arxiv.org/abs/2601.23124</link> |
|
|
<description>arXiv:2601.23124v1 Announce Type: new |
|
|
Abstract: Conditional independence testing (CIT) is essential for reliable scientific discovery. It prevents spurious findings and enables controlled feature selection. Recent CIT methods have used machine learning (ML) models as surrogates of the underlying distribution. However, model-agnostic approaches require a train-test split, which reduces statistical power. We introduce Semi-knockoffs, a CIT method that can accommodate any pre-trained model, avoids this split, and provides valid p-values and false discovery rate (FDR) control for high-dimensional settings. Unlike methods that rely on the model-$X$ assumption (known input distribution), Semi-knockoffs only require conditional expectations for continuous variables. This makes the procedure less restrictive and more practical for machine learning integration. To ensure validity when estimating these expectations, we present two new theoretical results of independent interest: (i) stability for regularized models trained with a null feature and (ii) the double-robustness property.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23124v1</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Angel Reyero-Lobo, Bertrand Thirion, Pierre Neuvial</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Revisiting the Lost Submarine Problem: A Decision Theoretic Approach</title> |
|
|
<link>https://arxiv.org/abs/2601.23171</link> |
|
|
<description>arXiv:2601.23171v1 Announce Type: new |
|
|
Abstract: This article includes a discussion of the ``lost submarine problem'', following Morey \emph{et al} (2016). As the title of that paper suggests (\emph{The fallacy of placing confidence in confidence intervals}), the example is intended to illustrate the futility of relying on the confidence interval as a formal inference statement. In the view of this author, the misgivings expressed in Morey \emph{et al} (2016) can be resolved using a decision theoretic approach. While it is true that a variety of statistical methods lead to a variety of confidence intervals, once we precisely define their purpose, a single optimal choice emerges. Furthermore, distinct purposes lead to distinct optimal choices. Therefore, that a variety of procedures exist is an advantage rather than a liability.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23171v1</guid> |
|
|
<category>stat.OT</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Anthony Almudevar</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Robust, partially alive particle Metropolis-Hastings via the Frankenfilter</title> |
|
|
<link>https://arxiv.org/abs/2601.23173</link> |
|
|
<description>arXiv:2601.23173v1 Announce Type: new |
|
|
Abstract: When a hidden Markov model permits the conditional likelihood of an observation given the hidden process to be zero, all particle simulations from one observation time to the next could produce zeros. If so, the filtering distribution cannot be estimated and the estimated parameter likelihood is zero. The alive particle filter addresses this by simulating a random number of particles for each inter-observation interval, stopping after a target number of non-zero conditional likelihoods. For outlying observations or poor parameter values, a non-zero result can be extremely unlikely, and computational costs prohibitive. We introduce the Frankenfilter, a principled, partially alive particle filter that targets a user-defined amount of success whilst fixing lower and upper bounds on the number of simulations. The Frankenfilter produces unbiased estimators of the likelihood, suitable for pseudo-marginal Metropolis--Hastings (PMMH). We demonstrate that PMMH with the Frankenfilter is more robust to outliers and mis-specified initial parameter values than PMMH using standard particle filters, and is typically at least 2-3 times more efficient. We also provide advice for choosing the amount of success. In the case of n exact observations, this is particularly simple: target n successes.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23173v1</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Chris Sherlock, Andrew Golightly, Anthony Lee</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Beyond the Null Effect: Unmasking the True Impact of Teacher-Child Interaction Quality on Child Outcomes in Early Head Start</title> |
|
|
<link>https://arxiv.org/abs/2601.23203</link> |
|
|
<description>arXiv:2601.23203v1 Announce Type: new |
|
|
Abstract: In Early Head Start (EHS), teacher-child interactions are widely believed to shape infant-toddler outcomes, yet large-scale studies often find only modest or null associations. This study addresses four methodological sources of attenuation -- item-level measurement error, center-level confounding, teacher- and classroom-level covariate imbalance, and overlooked nonlinearities -- to clarify classroom process quality's true influence on child development. Using data from the 2018 wave of the Early Head Start Family and Child Experiences Survey (Baby FACES), we applied a three-level generalized additive latent and mixed model (GALAMM) to distinguish genuine classroom-level variability in process quality, as measured by the Classroom Assessment Scoring System (CLASS) and Quality of Caregiver-Child Interactions for Infants and Toddlers (QCIT), from item-level noise and center-level effects. We then estimated dose-response relationships with children's language and socioemotional outcomes, employing covariate balancing weights and generalized additive models. Results show that nearly half of each item's variance reflects classroom-level processes, with the remainder tied to measurement error or center-wide influences, masking true classroom effects. After correcting for these biases, domain-focused dose-response analyses reveal robust linear associations between cognitive/language supports and children's English communicative skills, while emotional-behavioral supports better predict social-emotional competence. Some domains display plateaus when pushed to extremes, underscoring potential nonlinearities. These findings challenge the "null effect" narrative, demonstrating that rigorous methodology can uncover the critical, domain-specific impacts of teacher-child interaction quality, offering clearer guidance for targeted professional development and policy in EHS.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23203v1</guid> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>JoonHo Lee, Alison Hooper</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A Random Matrix Theory of Masked Self-Supervised Regression</title> |
|
|
<link>https://arxiv.org/abs/2601.23208</link> |
|
|
<description>arXiv:2601.23208v1 Announce Type: new |
|
|
Abstract: In the era of transformer models, masked self-supervised learning (SSL) has become a foundational training paradigm. A defining feature of masked SSL is that training aggregates predictions across many masking patterns, giving rise to a joint, matrix-valued predictor rather than a single vector-valued estimator. This object encodes how coordinates condition on one another and poses new analytical challenges. We develop a precise high-dimensional analysis of masked modeling objectives in the proportional regime where the number of samples scales with the ambient dimension. Our results provide explicit expressions for the generalization error and characterize the spectral structure of the learned predictor, revealing how masked modeling extracts structure from data. For spiked covariance models, we show that the joint predictor undergoes a Baik--Ben Arous--P\'ech\'e (BBP)-type phase transition, identifying when masked SSL begins to recover latent signals. Finally, we identify structured regimes in which masked self-supervised learning provably outperforms PCA, highlighting potential advantages of SSL objectives over classical unsupervised methods.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23208v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Arie Wortsman Zurich, Federica Gerace, Bruno Loureiro, Yue M. Lu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Graph Attention Network for Node Regression on Random Geometric Graphs with Erd\H{o}s--R\'enyi contamination</title> |
|
|
<link>https://arxiv.org/abs/2601.23239</link> |
|
|
<description>arXiv:2601.23239v1 Announce Type: new |
|
|
Abstract: Graph attention networks (GATs) are widely used and often appear robust to noise in node covariates and edges, yet rigorous statistical guarantees demonstrating a provable advantage of GATs over non-attention graph neural networks~(GNNs) are scarce. We partially address this gap for node regression with graph-based errors-in-variables models under simultaneous covariate and edge corruption: responses are generated from latent node-level covariates, but only noise-perturbed versions of the latent covariates are observed; and the sample graph is a random geometric graph created from the node covariates but contaminated by independent Erd\H{o}s--R\'enyi edges. We propose and analyze a carefully designed, task-specific GAT that constructs denoised proxy features for regression. We prove that regressing the response variables on the proxies achieves lower error asymptotically in (a) estimating the regression coefficient compared to the ordinary least squares (OLS) estimator on the noisy node covariates, and (b) predicting the response for an unlabelled node compared to a vanilla graph convolutional network~(GCN) -- under mild growth conditions. Our analysis leverages high-dimensional geometric tail bounds and concentration for neighbourhood counts and sample covariances. We verify our theoretical findings through experiments on synthetically generated data. We also perform experiments on real-world graphs and demonstrate the effectiveness of the attention mechanism in several node regression tasks.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23239v1</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.IT</category> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.SI</category> |
|
|
<category>math.IT</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Somak Laha, Suqi Liu, Morgane Austern</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Nested Slice Sampling: Vectorized Nested Sampling for GPU-Accelerated Inference</title> |
|
|
<link>https://arxiv.org/abs/2601.23252</link> |
|
|
<description>arXiv:2601.23252v1 Announce Type: new |
|
|
Abstract: Model comparison and calibrated uncertainty quantification often require integrating over parameters, but scalable inference can be challenging for complex, multimodal targets. Nested Sampling is a robust alternative to standard MCMC, yet its typically sequential structure and hard constraints make efficient accelerator implementations difficult. This paper introduces Nested Slice Sampling (NSS), a GPU-friendly, vectorized formulation of Nested Sampling that uses Hit-and-Run Slice Sampling for constrained updates. A tuning analysis yields a simple near-optimal rule for setting the slice width, improving high-dimensional behavior and making per-step compute more predictable for parallel execution. Experiments on challenging synthetic targets, high dimensional Bayesian inference, and Gaussian process hyperparameter marginalization show that NSS maintains accurate evidence estimates and high-quality posterior samples, and is particularly robust on difficult multimodal problems where current state-of-the-art methods such as tempered SMC baselines can struggle. An open-source implementation is released to facilitate adoption and reproducibility.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23252v1</guid> |
|
|
<category>stat.CO</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>new</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>David Yallup, Namu Kroupa, Will Handley</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Variational Tail Bounds for Norms of Random Vectors and Matrices</title> |
|
|
<link>https://arxiv.org/abs/2503.17300</link> |
|
|
<description>arXiv:2503.17300v4 Announce Type: cross |
|
|
Abstract: We propose a variational tail bound for norms of random vectors under moment assumptions on their one-dimensional marginals. A simplified version of the bound that parametrizes the ``aggregating distribution'' using a certain pushforward of the Gaussian distribution is also provided. We apply the proposed method to reproduce some of the well-known bounds on norms of Gaussian random vectors, and also obtain dimension-free tail bounds for the Euclidean norm of random vectors with arbitrary moment profiles. Furthermore, we reproduce a dimension-free concentration inequality for the sum of independent and identically distributed positive semidefinite matrices with sub-exponential marginals, and obtain a concentration inequality for the sample covariance matrix of sub-exponential random vectors. We also obtain a tail bound for the operator norm of a random matrix series whose random coefficients may have arbitrary moment profiles. Furthermore, we use coupling to formulate an abstraction of the proposed approach that applies more broadly.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2503.17300v4</guid> |
|
|
<category>math.PR</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ML</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Sohail Bahmani</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Large Language Models: A Mathematical Formulation</title> |
|
|
<link>https://arxiv.org/abs/2601.22170</link> |
|
|
<description>arXiv:2601.22170v1 Announce Type: cross |
|
|
Abstract: Large language models (LLMs) process and predict sequences containing text to answer questions, and address tasks including document summarization, providing recommendations, writing software and solving quantitative problems. We provide a mathematical framework for LLMs by describing the encoding of text sequences into sequences of tokens, defining the architecture for next-token prediction models, explaining how these models are learned from data, and demonstrating how they are deployed to address a variety of tasks. The mathematical sophistication required to understand this material is not high, and relies on straightforward ideas from information theory, probability and optimization. Nonetheless, the combination of ideas resting on these different components from the mathematical sciences yields a complex algorithmic structure; and this algorithmic structure has demonstrated remarkable empirical successes. The mathematical framework established here provides a platform from which it is possible to formulate and address questions concerning the accuracy, efficiency and robustness of the algorithms that constitute LLMs. The framework also suggests directions for development of modified and new methodologies.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22170v1</guid> |
|
|
<category>math.NA</category> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.NA</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Ricardo Baptista, Andrew Stuart, Son Tran</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Adaptive Benign Overfitting (ABO): Overparameterized RLS for Online Learning in Non-stationary Time-series</title> |
|
|
<link>https://arxiv.org/abs/2601.22200</link> |
|
|
<description>arXiv:2601.22200v1 Announce Type: cross |
|
|
Abstract: Overparameterized models have recently challenged conventional learning theory by exhibiting improved generalization beyond the interpolation limit, a phenomenon known as benign overfitting. This work introduces Adaptive Benign Overfitting (ABO), extending the recursive least-squares (RLS) framework to this regime through a numerically stable formulation based on orthogonal-triangular updates. A QR-based exponentially weighted RLS (QR-EWRLS) algorithm is introduced, combining random Fourier feature mappings with forgetting-factor regularization to enable online adaptation under non-stationary conditions. The orthogonal decomposition prevents the numerical divergence associated with covariance-form RLS while retaining adaptability to evolving data distributions. Experiments on nonlinear synthetic time series confirm that the proposed approach maintains bounded residuals and stable condition numbers while reproducing the double-descent behavior characteristic of overparameterized models. Applications to forecasting foreign exchange and electricity demand show that ABO is highly accurate (comparable to baseline kernel methods) while achieving speed improvements of between 20 and 40 percent. The results provide a unified view linking adaptive filtering, kernel approximation, and benign overfitting within a stable online learning framework.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22200v1</guid> |
|
|
<category>q-fin.ST</category> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.MS</category> |
|
|
<category>cs.NA</category> |
|
|
<category>math.NA</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Luis Ontaneda Mijares, Nick Firoozye</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Causal Imitation Learning Under Measurement Error and Distribution Shift</title> |
|
|
<link>https://arxiv.org/abs/2601.22206</link> |
|
|
<description>arXiv:2601.22206v1 Announce Type: cross |
|
|
Abstract: We study offline imitation learning (IL) when part of the decision-relevant state is observed only through noisy measurements and the distribution may change between training and deployment. Such settings induce spurious state-action correlations, so standard behavioral cloning (BC) -- whether conditioning on raw measurements or ignoring them -- can converge to systematically biased policies under distribution shift. We propose a general framework for IL under measurement error, inspired by explicitly modeling the causal relationships among the variables, yielding a target that retains a causal interpretation and is robust to distribution shift. Building on ideas from proximal causal inference, we introduce \texttt{CausIL}, which treats noisy state observations as proxy variables, and we provide identification conditions under which the target policy is recoverable from demonstrations without rewards or interactive expert queries. We develop estimators for both discrete and continuous state spaces; for continuous settings, we use an adversarial procedure over RKHS function classes to learn the required parameters. We evaluate \texttt{CausIL} on semi-simulated longitudinal data from the PhysioNet/Computing in Cardiology Challenge 2019 cohort and demonstrate improved robustness to distribution shift compared to BC baselines.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22206v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Shi Bo, AmirEmad Ghassami</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Matrix Factorization for Practical Continual Mean Estimation Under User-Level Differential Privacy</title> |
|
|
<link>https://arxiv.org/abs/2601.22320</link> |
|
|
<description>arXiv:2601.22320v1 Announce Type: cross |
|
|
Abstract: We study continual mean estimation, where data vectors arrive sequentially and the goal is to maintain accurate estimates of the running mean. We address this problem under user-level differential privacy, which protects each user's entire dataset even when they contribute multiple data points. Previous work on this problem has focused on pure differential privacy. While important, this approach limits applicability, as it leads to overly noisy estimates. In contrast, we analyze the problem under approximate differential privacy, adopting recent advances in the Matrix Factorization mechanism. We introduce a novel mean-estimation-specific factorization, which is both efficient and accurate, achieving asymptotically lower mean-squared error bounds in continual mean estimation under user-level differential privacy.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22320v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Nikita P. Kalinin, Ali Najar, Valentin Roth, Christoph H. Lampert</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Label-Efficient Monitoring of Classification Models via Stratified Importance Sampling</title> |
|
|
<link>https://arxiv.org/abs/2601.22326</link> |
|
|
<description>arXiv:2601.22326v1 Announce Type: cross |
|
|
Abstract: Monitoring the performance of classification models in production is critical yet challenging due to strict labeling budgets, one-shot batch acquisition of labels and extremely low error rates. We propose a general framework based on Stratified Importance Sampling (SIS) that directly addresses these constraints in model monitoring. While SIS has previously been applied in specialized domains, our theoretical analysis establishes its broad applicability to the monitoring of classification models. Under mild conditions, SIS yields unbiased estimators with strict finite-sample mean squared error (MSE) improvements over both importance sampling (IS) and stratified random sampling (SRS). The framework does not rely on optimally defined proposal distributions or strata: even with noisy proxies and sub-optimal stratification, SIS can improve estimator efficiency compared to IS or SRS individually, though extreme proposal mismatch may limit these gains. Experiments across binary and multiclass tasks demonstrate consistent efficiency improvements under fixed label budgets, underscoring SIS as a principled, label-efficient, and operationally lightweight methodology for post-deployment model monitoring.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22326v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Lupo Marsigli, Angel Lopez de Haro</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Scalable Batch Correction for Cell Painting via Batch-Dependent Kernels and Adaptive Sampling</title> |
|
|
<link>https://arxiv.org/abs/2601.22331</link> |
|
|
<description>arXiv:2601.22331v1 Announce Type: cross |
|
|
Abstract: Cell Painting is a microscopy-based, high-content imaging assay that produces rich morphological profiles of cells and can support drug discovery by quantifying cellular responses to chemical perturbations. At scale, however, Cell Painting data is strongly affected by batch effects arising from differences in laboratories, instruments, and protocols, which can obscure biological signal. We present BALANS (Batch Alignment via Local Affinities and Subsampling), a scalable batch-correction method that aligns samples across batches by constructing a smoothed affinity matrix from pairwise distances. Given $n$ data points, BALANS builds a sparse affinity matrix $A \in \mathbb{R}^{n \times n}$ using two ideas. (i) For points $i$ and $j$, it sets a local scale using the distance from $i$ to its $k$-th nearest neighbor within the batch of $j$, then computes $A_{ij}$ via a Gaussian kernel calibrated by these batch-aware local scales. (ii) Rather than forming all $n^2$ entries, BALANS uses an adaptive sampling procedure that prioritizes rows with low cumulative neighbor coverage and retains only the strongest affinities per row, yielding a sparse but informative approximation of $A$. We prove that this sampling strategy is order-optimal in sample complexity and provides an approximation guarantee, and we show that BALANS runs in nearly linear time in $n$. Experiments on diverse real-world Cell Painting datasets and controlled large-scale synthetic benchmarks demonstrate that BALANS scales to large collections while improving runtime over native implementations of widely used batch-correction methods, without sacrificing correction quality.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22331v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Aditya Narayan Ravi, Snehal Vadvalkar, Abhishek Pandey, Ilan Shomorony</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Knowledge Gradient for Preference Learning</title> |
|
|
<link>https://arxiv.org/abs/2601.22335</link> |
|
|
<description>arXiv:2601.22335v1 Announce Type: cross |
|
|
Abstract: The knowledge gradient is a popular acquisition function in Bayesian optimization (BO) for optimizing black-box objectives with noisy function evaluations. Many practical settings, however, allow only pairwise comparison queries, yielding a preferential BO problem where direct function evaluations are unavailable. Extending the knowledge gradient to preferential BO is hindered by its computational challenge. At its core, the look-ahead step in the preferential setting requires computing a non-Gaussian posterior, which was previously considered intractable. In this paper, we address this challenge by deriving an exact and analytical knowledge gradient for preferential BO. We show that the exact knowledge gradient performs strongly on a suite of benchmark problems, often outperforming existing acquisition functions. In addition, we also present a case study illustrating the limitation of the knowledge gradient in certain scenarios.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22335v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Kaiwen Wu, Jacob R. Gardner</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Optimization, Generalization and Differential Privacy Bounds for Gradient Descent on Kolmogorov-Arnold Networks</title> |
|
|
<link>https://arxiv.org/abs/2601.22409</link> |
|
|
<description>arXiv:2601.22409v1 Announce Type: cross |
|
|
Abstract: Kolmogorov--Arnold Networks (KANs) have recently emerged as a structured alternative to standard MLPs, yet a principled theory for their training dynamics, generalization, and privacy properties remains limited. In this paper, we analyze gradient descent (GD) for training two-layer KANs and derive general bounds that characterize their training dynamics, generalization, and utility under differential privacy (DP). As a concrete instantiation, we specialize our analysis to logistic loss under an NTK-separable assumption, where we show that polylogarithmic network width suffices for GD to achieve an optimization rate of order $1/T$ and a generalization rate of order $1/n$, with $T$ denoting the number of GD iterations and $n$ the sample size. In the private setting, we characterize the noise required for $(\epsilon,\delta)$-DP and obtain a utility bound of order $\sqrt{d}/(n\epsilon)$ (with $d$ the input dimension), matching the classical lower bound for general convex Lipschitz problems. Our results imply that polylogarithmic width is not only sufficient but also necessary under differential privacy, revealing a qualitative gap between non-private (sufficiency only) and private (necessity also emerges) training regimes. Experiments further illustrate how these theoretical insights can guide practical choices, including network width selection and early stopping.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22409v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Puyu Wang, Junyu Zhou, Philipp Liznerski, Marius Kloft</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Weak Diffusion Priors Can Still Achieve Strong Inverse-Problem Performance</title> |
|
|
<link>https://arxiv.org/abs/2601.22443</link> |
|
|
<description>arXiv:2601.22443v1 Announce Type: cross |
|
|
Abstract: Can a diffusion model trained on bedrooms recover human faces? Diffusion models are widely used as priors for inverse problems, but standard approaches usually assume a high-fidelity model trained on data that closely match the unknown signal. In practice, one often must use a mismatched or low-fidelity diffusion prior. Surprisingly, these weak priors often perform nearly as well as full-strength, in-domain baselines. We study when and why inverse solvers are robust to weak diffusion priors. Through extensive experiments, we find that weak priors succeed when measurements are highly informative (e.g., many observed pixels), and we identify regimes where they fail. Our theory, based on Bayesian consistency, gives conditions under which high-dimensional measurements make the posterior concentrate near the true signal. These results provide a principled justification on when weak diffusion priors can be used reliably.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22443v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.CV</category> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Jing Jia, Wei Yuan, Sifan Liu, Liyue Shen, Guanyang Wang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Learning to Defer in Non-Stationary Time Series via Switching State-Space Models</title> |
|
|
<link>https://arxiv.org/abs/2601.22538</link> |
|
|
<description>arXiv:2601.22538v1 Announce Type: cross |
|
|
Abstract: We study Learning to Defer for non-stationary time series with partial feedback and time-varying expert availability. At each time step, the router selects an available expert, observes the target, and sees only the queried expert's prediction. We model signed expert residuals using L2D-SLDS, a factorized switching linear-Gaussian state-space model with context-dependent regime transitions, a shared global factor enabling cross-expert information transfer, and per-expert idiosyncratic states. The model supports expert entry and pruning via a dynamic registry. Using one-step-ahead predictive beliefs, we propose an IDS-inspired routing rule that trades off predicted cost against information gained about the latent regime and shared factor. Experiments show improvements over contextual-bandit baselines and a no-shared-factor ablation.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22538v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Yannis Montreuil, Letian Yu, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Neural-Inspired Posterior Approximation (NIPA)</title> |
|
|
<link>https://arxiv.org/abs/2601.22539</link> |
|
|
<description>arXiv:2601.22539v1 Announce Type: cross |
|
|
Abstract: Humans learn efficiently from their environment by engaging multiple interacting neural systems that support distinct yet complementary forms of control, including model-based (goal-directed) planning, model-free (habitual) responding, and episodic memory-based learning. Model-based mechanisms compute prospective action values using an internal model of the environment, supporting flexible but computationally costly planning; model-free mechanisms cache value estimates and build heuristics that enable fast, efficient habitual responding; and memory-based mechanisms allow rapid adaptation from individual experience. In this work, we aim to elucidate the computational principles underlying this biological efficiency and translate them into a sampling algorithm for scalable Bayesian inference through effective exploration of the posterior distribution. More specifically, our proposed algorithm comprises three components: a model-based module that uses the target distribution for guided but computationally slow sampling; a model-free module that uses previous samples to learn patterns in the parameter space, enabling fast, reflexive sampling without directly evaluating the expensive target distribution; and an episodic-control module that supports rapid sampling by recalling specific past events (i.e., samples). We show that this approach advances Bayesian methods and facilitates their application to large-scale statistical machine learning problems. In particular, we apply our proposed framework to Bayesian deep learning, with an emphasis on proper and principled uncertainty quantification.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22539v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Babak Shahbaba, Zahra Moslemi</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Conditional Performance Guarantee for Large Reasoning Models</title> |
|
|
<link>https://arxiv.org/abs/2601.22790</link> |
|
|
<description>arXiv:2601.22790v1 Announce Type: cross |
|
|
Abstract: Large reasoning models have shown strong performance through extended chain-of-thought reasoning, yet their computational cost remains significant. Probably approximately correct (PAC) reasoning provides statistical guarantees for efficient reasoning by adaptively switching between thinking and non-thinking models, but the guarantee holds only in the marginal case and does not provide exact conditional coverage. We propose G-PAC reasoning, a practical framework that provides PAC-style guarantees at the group level by partitioning the input space. We develop two instantiations: Group PAC (G-PAC) reasoning for known group structures and Clustered PAC (C-PAC) reasoning for unknown groupings. We prove that both G-PAC and C-PAC achieve group-conditional risk control, and that grouping can strictly improve efficiency over marginal PAC reasoning in heterogeneous settings. Our experiments on diverse reasoning benchmarks demonstrate that G-PAC and C-PAC successfully achieve group-conditional risk control while maintaining substantial computational savings.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22790v1</guid> |
|
|
<category>cs.AI</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Jianguo Huang, Hao Zeng, Bingyi Jing, Hongxin Wei, Bo An</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Cascaded Flow Matching for Heterogeneous Tabular Data with Mixed-Type Features</title> |
|
|
<link>https://arxiv.org/abs/2601.22816</link> |
|
|
<description>arXiv:2601.22816v1 Announce Type: cross |
|
|
Abstract: Advances in generative modeling have recently been adapted to tabular data containing discrete and continuous features. However, generating mixed-type features that combine discrete states with an otherwise continuous distribution in a single feature remains challenging. We advance the state-of-the-art in diffusion models for tabular data with a cascaded approach. We first generate a low-resolution version of a tabular data row, that is, the collection of the purely categorical features and a coarse categorical representation of the numerical features. Next, this information is leveraged in the high-resolution flow matching model via a novel guided conditional probability path and data-dependent coupling. The low-resolution representation of numerical features explicitly accounts for discrete outcomes, such as missing or inflated values, and thereby enables a more faithful generation of mixed-type features. We formally prove that this cascade tightens the transport cost bound. The results indicate that our model generates significantly more realistic samples and captures distributional details more accurately; for example, the detection score increases by 40%.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22816v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Markus Mueller, Kathrin Gruber, Dennis Fok</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Perplexity Cannot Always Tell Right from Wrong</title> |
|
|
<link>https://arxiv.org/abs/2601.22950</link> |
|
|
<description>arXiv:2601.22950v1 Announce Type: cross |
|
|
Abstract: Perplexity -- a function measuring a model's overall level of "surprise" when encountering a particular output -- has gained significant traction in recent years, both as a loss function and as a simple-to-compute metric of model quality. Prior studies have pointed out several limitations of perplexity, often on empirical grounds. Here we leverage recent results on Transformer continuity to show in a rigorous manner how perplexity may be an unsuitable metric for model selection. Specifically, we prove that, if there is any sequence that a compact decoder-only Transformer model predicts accurately and confidently -- a necessary prerequisite for strong generalisation -- then there must exist another sequence with very low perplexity that is not predicted correctly by that same model. Further, by analytically studying iso-perplexity plots, we find that perplexity will not always select for the more accurate model -- rather, any increase in model confidence must be accompanied by a commensurate rise in accuracy for the new model to be selected.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22950v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.CL</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Petar Veli\v{c}kovi\'c, Federico Barbero, Christos Perivolaropoulos, Simon Osindero, Razvan Pascanu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Value-at-Risk Constrained Policy Optimization</title> |
|
|
<link>https://arxiv.org/abs/2601.22993</link> |
|
|
<description>arXiv:2601.22993v1 Announce Type: cross |
|
|
Abstract: We introduce the Value-at-Risk Constrained Policy Optimization algorithm (VaR-CPO), a sample efficient and conservative method designed to optimize Value-at-Risk (VaR) constraints directly. Empirically, we demonstrate that VaR-CPO is capable of safe exploration, achieving zero constraint violations during training in feasible environments, a critical property that baseline methods fail to uphold. To overcome the inherent non-differentiability of the VaR constraint, we employ the one-sided Chebyshev inequality to obtain a tractable surrogate based on the first two moments of the cost return. Additionally, by extending the trust-region framework of the Constrained Policy Optimization (CPO) method, we provide rigorous worst-case bounds for both policy improvement and constraint violation during the training process.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.22993v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Rohan Tangri, Jan-Peter Calliess</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A unified theory of order flow, market impact, and volatility</title> |
|
|
<link>https://arxiv.org/abs/2601.23172</link> |
|
|
<description>arXiv:2601.23172v1 Announce Type: cross |
|
|
Abstract: We propose a microstructural model for the order flow in financial markets that distinguishes between {\it core orders} and {\it reaction flow}, both modeled as Hawkes processes. This model has a natural scaling limit that reconciles a number of salient empirical properties: persistent signed order flow, rough trading volume and volatility, and power-law market impact. In our framework, all these quantities are pinned down by a single statistic $H_0$, which measures the persistence of the core flow. Specifically, the signed flow converges to the sum of a fractional process with Hurst index $H_0$ and a martingale, while the limiting traded volume is a rough process with Hurst index $H_0-1/2$. No-arbitrage constraints imply that volatility is rough, with Hurst parameter $2H_0-3/2$, and that the price impact of trades follows a power law with exponent $2-2H_0$. The analysis of signed order flow data yields an estimate $H_0 \approx 3/4$. This is not only consistent with the square-root law of market impact, but also turns out to match estimates for the roughness of traded volumes and volatilities remarkably well.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23172v1</guid> |
|
|
<category>q-fin.ST</category> |
|
|
<category>math.PR</category> |
|
|
<category>q-fin.MF</category> |
|
|
<category>q-fin.TR</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Johannes Muhle-Karbe, Youssef Ouazzani Chahd, Mathieu Rosenbaum, Gr\'egoire Szymanski</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>YuriiFormer: A Suite of Nesterov-Accelerated Transformers</title> |
|
|
<link>https://arxiv.org/abs/2601.23236</link> |
|
|
<description>arXiv:2601.23236v1 Announce Type: cross |
|
|
Abstract: We propose a variational framework that interprets transformer layers as iterations of an optimization algorithm acting on token embeddings. In this view, self-attention implements a gradient step of an interaction energy, while MLP layers correspond to gradient updates of a potential energy. Standard GPT-style transformers emerge as vanilla gradient descent on the resulting composite objective, implemented via Lie--Trotter splitting between these two energy functionals. This perspective enables principled architectural design using classical optimization ideas. As a proof of concept, we introduce a Nesterov-style accelerated transformer that preserves the same attention and MLP oracles. The resulting architecture consistently outperforms a nanoGPT baseline on TinyStories and OpenWebText, demonstrating that optimization-theoretic insights can translate into practical gains.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.23236v1</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>math.OC</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Aleksandr Zimin, Yury Polyanskiy, Philippe Rigollet</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Leaf clustering using circular densities</title> |
|
|
<link>https://arxiv.org/abs/2211.10547</link> |
|
|
<description>arXiv:2211.10547v2 Announce Type: replace |
|
|
Abstract: In botany, leaf shape recognition is an important task. One way of characterising the leaf shape is through the centroid contour distances (CCD). Each CCD path might have a different resolution, so normalisation is done by associating each contour with a circular density. Densities are rotated by subtracting the mean or mode preferred direction. Distance measures between densities are then used in a hierarchical clustering method to cluster the leaves. We illustrate our approach with a motivating small dataset as well as a larger dataset.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2211.10547v2</guid> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Luis E. Nieto-Barajas</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A VAE Approach to Sample Multivariate Extremes</title> |
|
|
<link>https://arxiv.org/abs/2306.10987</link> |
|
|
<description>arXiv:2306.10987v2 Announce Type: replace |
|
|
Abstract: Generating accurate extremes from an observational data set is crucial when seeking to estimate risks associated with the occurrence of future extremes which could be larger than those already observed. Applications range from the occurrence of natural disasters to financial crashes. Generative approaches from the machine learning community do not apply to extreme samples without careful adaptation. Besides, asymptotic results from extreme value theory (EVT) give a theoretical framework to model multivariate extreme events, especially through the notion of multivariate regular variation. Bridging these two fields, this paper details a variational autoencoder (VAE) approach for sampling multivariate heavy-tailed distributions, i.e., distributions likely to have extremes of particularly large intensities. We illustrate the relevance of our approach on a synthetic data set and on a real data set of discharge measurements along the Danube river network. The latter shows the potential of our approach for flood risk assessment. In addition to outperforming the standard VAE for the tested data sets, we also provide a comparison with a competing EVT-based generative approach. On the tested cases, our approach improves the learning of the dependency structure between extremes.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2306.10987v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Nicolas Lafon, Philippe Naveau, Ronan Fablet</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Logarithmic Asymptotic Relations Between $p$-Values and Mutual Information</title> |
|
|
<link>https://arxiv.org/abs/2308.14735</link> |
|
|
<description>arXiv:2308.14735v2 Announce Type: replace |
|
|
Abstract: We establish a precise connection between statistical significance in dependence testing and information-theoretic dependence as quantified by Shannon mutual information (MI). In the absence of prior distributional information, we consider a maximum-entropy model and show that the probability associated with the realization of a given magnitude of MI takes an exponential form, yielding a corresponding tail-probability interpretation of a $p$-value. In contingency tables with fixed marginal frequencies, we analyze Fisher's exact test and prove that its $p$-value $P_F$ satisfies a logarithmic asymptotic relation of the form $MI=-(1/N)\log P_F + O(\log(N+1)/N)$ as the sample size $N\to\infty$. These results clarify the role of MI as the exponential rate governing the asymptotic behavior of $p$-values in the settings studied here, and they enable principled comparisons of dependence across datasets with different sample sizes. We further discuss implications for combining evidence across studies via meta-analysis, allowing mutual information and its statistical significance to be integrated in a unified framework.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2308.14735v2</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Tsutomu Mori, Takashi Kawamura</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>High-Dimensional Bernstein Von-Mises Theorems for Covariance and Precision Matrices</title> |
|
|
<link>https://arxiv.org/abs/2309.08556</link> |
|
|
<description>arXiv:2309.08556v3 Announce Type: replace |
|
|
Abstract: This paper aims to examine the characteristics of the posterior distribution of covariance/precision matrices in a "large $p$, large $n$" scenario, where $p$ represents the number of variables and $n$ is the sample size. Our analysis focuses on establishing asymptotic normality of the posterior distribution of the entire covariance/precision matrices under specific growth restrictions on $p_n$ and other mild assumptions. In particular, the limiting distribution turns out to be a symmetric matrix variate normal distribution whose parameters depend on the maximum likelihood estimate. Our results hold for a wide class of prior distributions which includes standard choices used by practitioners. Next, we consider Gaussian graphical models which induce sparsity in the precision matrix. Asymptotic normality of the corresponding posterior distribution is established under mild assumptions on the prior and true data-generating mechanism.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2309.08556v3</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Partha Sarkar, Kshitij Khare, Malay Ghosh, Matt P. Wand</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Extending Mean-Field Variational Inference via Entropic Regularization: Theory and Computation</title> |
|
|
<link>https://arxiv.org/abs/2404.09113</link> |
|
|
<description>arXiv:2404.09113v4 Announce Type: replace |
|
|
Abstract: Variational inference (VI) has emerged as a popular method for approximate inference for high-dimensional Bayesian models. In this paper, we propose a novel VI method that extends the naive mean field via entropic regularization, referred to as $\Xi$-variational inference ($\Xi$-VI). $\Xi$-VI has a close connection to the entropic optimal transport problem and benefits from the computationally efficient Sinkhorn algorithm. We show that $\Xi$-variational posteriors effectively recover the true posterior dependency, where the dependence is downweighted by the regularization parameter. We analyze the role of dimensionality of the parameter space on the accuracy of $\Xi$-variational approximation and how it affects computational considerations, providing a rough characterization of the statistical-computational trade-off in $\Xi$-VI. We also investigate the frequentist properties of $\Xi$-VI and establish results on consistency, asymptotic normality, high-dimensional asymptotics, and algorithmic stability. We provide sufficient criteria for achieving polynomial-time approximate inference using the method. Finally, we demonstrate the practical advantage of $\Xi$-VI over mean-field variational inference on simulated and real data.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2404.09113v4</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Bohan Wu, David Blei</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bayesian Strategies for Repulsive Spatial Point Processes</title> |
|
|
<link>https://arxiv.org/abs/2404.15133</link> |
|
|
<description>arXiv:2404.15133v3 Announce Type: replace |
|
|
Abstract: There is increasing interest in developing Bayesian inferential algorithms for point process models with intractable likelihoods. One purpose of this paper is to illustrate the utility of simulation-based strategies, including Approximate Bayesian Computation (ABC) and Markov chain Monte Carlo (MCMC) methods, for this task. Shirota and Gelfand (2017) proposed an extended version of an ABC approach for Repulsive Spatial Point Processes (RSPP), but their algorithm was not correctly detailed. In this paper, we correct their method and, building on it, propose a new ABC-MCMC algorithm which, unlike a typical ABC method, incorporates the Markov property. Though some terms are generally impractical to evaluate directly, Monte Carlo approximations can be leveraged for these intractable terms. Another aspect of this paper is to explore the use of the exchange algorithm and the noisy Metropolis-Hastings algorithm (Alquier et al., 2016) on RSPP. Comparisons to ABC-MCMC methods are also provided. We find that the inferential approaches outlined above yield good performance for RSPP in both simulated and real data applications and should be considered viable approaches for the analysis of these models.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2404.15133v3</guid> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Chaoyi Lu, Nial Friel</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Multivariate Bayesian Last Layer for Regression with Uncertainty Quantification and Decomposition</title> |
|
|
<link>https://arxiv.org/abs/2405.01761</link> |
|
|
<description>arXiv:2405.01761v2 Announce Type: replace |
|
|
Abstract: We present new Bayesian Last Layer neural network models in the setting of multivariate regression under heteroscedastic noise, and propose EM algorithms for parameter learning. Bayesian modeling of a neural network's final layer has the attractive property of uncertainty quantification with a single forward pass. The proposed framework is capable of disentangling the aleatoric and epistemic uncertainty, and can be used to enhance a canonically trained deep neural network with uncertainty-aware capabilities.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2405.01761v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Han Wang, Eiji Kawasaki, Guillaume Damblin, Geoffrey Daniel</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>On the statistical analysis of grouped data: when Pearson $\chi^2$ and other divisible statistics are not goodness-of-fit tests</title> |
|
|
<link>https://arxiv.org/abs/2406.09195</link> |
|
|
<description>arXiv:2406.09195v5 Announce Type: replace |
|
|
Abstract: Thousands of experiments are analyzed and papers are published each year involving the statistical analysis of grouped data. While this area of statistics is often perceived -- somewhat naively -- as saturated, several misconceptions still affect everyday practice, and new frontiers have so far remained unexplored. Researchers must be aware of the limitations affecting their analyses and of the new possibilities at their disposal.
|
|
Motivated by this need, the article introduces a unifying approach to the analysis of grouped data, which allows us to study the class of divisible statistics -- a class that includes Pearson's $\chi^2$ and the likelihood ratio as special cases -- with a fresh perspective. The contributions collected in this manuscript span from modeling and estimation to distribution-free goodness-of-fit tests.
|
|
Perhaps the most surprising result presented here is that, in a sparse regime, all tests proposed in the literature are dominated by members of the class of weighted linear statistics.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2406.09195v5</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>math.ST</category> |
|
|
<category>physics.data-an</category> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Sara Algeri, Estate V. Khmaladze</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>CLE-SH: Comprehensive Literal Explanation package for SHapley values by statistical validity</title> |
|
|
<link>https://arxiv.org/abs/2409.12578</link> |
|
|
<description>arXiv:2409.12578v2 Announce Type: replace |
|
|
Abstract: Recently, SHapley Additive exPlanations (SHAP) has been widely utilized in various research domains. This is particularly evident in application fields, where SHAP analysis serves as a crucial tool for identifying biomarkers and assisting in result validation. However, despite its frequent usage, SHAP is often not applied in a manner that maximizes its potential contributions. A review of recent papers employing SHAP reveals that many studies subjectively select a limited number of features as 'important' and analyze SHAP values by visually inspecting plots without assessing statistical significance. Such superficial application may hinder meaningful contributions to the applied fields. To address this, we propose a library package designed to simplify the interpretation of SHAP values. By simply inputting the original data and SHAP values, our library provides: 1) the number of important features to analyze, 2) the pattern of each feature via univariate analysis, and 3) the interaction between features. All information is extracted based on its statistical significance and presented in simple, comprehensible sentences, enabling users of all levels to understand the interpretations. We hope this library fosters a comprehensive understanding of statistically valid SHAP results.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2409.12578v2</guid> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<arxiv:DOI>10.1109/ACCESS.2026.3654890</arxiv:DOI> |
|
|
<arxiv:journal_reference>IEEE Access, vol. 14, pp. 12514-12525, 2026</arxiv:journal_reference> |
|
|
<dc:creator>Kyungjin Kim, Youngro Lee, Jongmo Seo</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>On the tails of log-concave density estimators</title> |
|
|
<link>https://arxiv.org/abs/2409.17910</link> |
|
|
<description>arXiv:2409.17910v3 Announce Type: replace |
|
|
Abstract: It is shown that the nonparametric maximum likelihood estimator of a univariate log-concave probability density satisfies desirable consistency properties in the tail regions. Specifically, let $P$ and $f$ denote the true underlying distribution and density, respectively. If $\hat{f}_n$ is the estimated log-concave density, and $\hat{\varphi}_n = \log \hat{f}_n$, then we specify sequences $(b_n)_{n\in \mathbb{N}}$ such that $P([b_n,\infty)) \to 0$ at a specific speed, ensuring that the absolute errors or absolute relative errors of $\hat{f}_n, \ \hat{\varphi}_n$ and $\hat{\varphi}_n'$ converge to zero uniformly on sets $[a, b_n]$. The main tools, besides characterizations of $\hat{f}_n$, are exponential and maximal inequalities for truncated moments of log-concave distributions, which are of independent interest.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2409.17910v3</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Didier B. Ryter, Lutz Duembgen</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bayesian Transfer Learning for Artificially Intelligent Geospatial Systems: A Predictive Stacking Approach</title> |
|
|
<link>https://arxiv.org/abs/2410.09504</link> |
|
|
<description>arXiv:2410.09504v4 Announce Type: replace |
|
|
Abstract: Building artificially intelligent geospatial systems requires rapid delivery of spatial data analysis on massive scales with minimal human intervention. Depending upon their intended use, data analysis can also involve model assessment and uncertainty quantification. This article devises transfer learning frameworks for deployment in artificially intelligent systems, where a massive data set is split into smaller data sets that stream into the analytical framework to propagate learning and assimilate inference for the entire data set. Specifically, we introduce Bayesian predictive stacking for multivariate spatial data and demonstrate rapid and automated analysis of massive data sets. Furthermore, inference is delivered without human intervention and without excessively demanding hardware. We illustrate the effectiveness of our approach through extensive simulation experiments and by producing, from a massive dataset on vegetation indices, inference that is indistinguishable from that of traditional (and more expensive) statistical approaches.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2410.09504v4</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Luca Presicce, Sudipto Banerjee</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Stein's method for marginals on large graphical models</title> |
|
|
<link>https://arxiv.org/abs/2410.11771</link> |
|
|
<description>arXiv:2410.11771v3 Announce Type: replace |
|
|
Abstract: Many spatial models exhibit locality structures that effectively reduce their intrinsic dimensionality, enabling efficient approximation and sampling of high-dimensional distributions. However, existing approximation techniques primarily focus on joint distributions and do not provide precise accuracy control for low-dimensional marginals, which are of primary interest in many practical scenarios. By leveraging the locality structures, we establish a dimension-independent uniform error bound for the marginals of approximate distributions. Inspired by Stein's method, we introduce a novel $\delta$-locality condition that quantifies locality in distributions, and link it to structural assumptions such as sparse graphical models. The theoretical guarantee motivates the localization of existing sampling methods, as we illustrate through the localized likelihood-informed subspace method and localized score matching. We show that by leveraging the locality structure, these methods greatly reduce the sample complexity and computational cost via localized and parallel implementations.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2410.11771v3</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.NA</category> |
|
|
<category>math.NA</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Tiangang Cui, Shuigen Liu, Xin T. Tong</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Model-assisted inference for dynamic causal effects in staggered rollout cluster randomized experiments</title> |
|
|
<link>https://arxiv.org/abs/2502.10939</link> |
|
|
<description>arXiv:2502.10939v3 Announce Type: replace |
|
|
Abstract: Staggered rollout cluster randomized experiments (SR-CREs) involve sequential treatment adoption across clusters, requiring analysis methods that address a general class of dynamic causal effects, anticipation, and non-ignorable cluster-period sizes. Without imposing any outcome modeling assumptions, we study regression estimators using individual data, cluster-period averages, and scaled cluster-period totals, with and without covariate adjustment from a design-based perspective. We establish consistency and asymptotic normality of each estimator under a randomization-based framework and prove that the associated variance estimators are asymptotically conservative in the L\"{o}wner ordering. Furthermore, we conduct a unified efficiency comparison of the estimators and provide recommendations. We highlight the efficiency advantage of using estimators based on scaled cluster-period totals with covariate adjustment over their counterparts using individual-level data and cluster-period averages. Our results rigorously justify linear regression estimators as model-assisted methods to address an entire class of dynamic causal effects in SR-CREs.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2502.10939v3</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Xinyuan Chen, Fan Li</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bayesian Kernel Machine Regression via Random Fourier Features for Estimating Joint Health Effects of Multiple Exposures</title> |
|
|
<link>https://arxiv.org/abs/2502.13157</link> |
|
|
<description>arXiv:2502.13157v2 Announce Type: replace |
|
|
Abstract: Environmental epidemiology has traditionally examined exposures one at a time. Advances in exposure assessment and statistical methods now enable studies of multiple exposures and their combined health impacts. Bayesian Kernel Machine Regression (BKMR) is a widely used approach that flexibly estimates joint, nonlinear effects of multiple exposures. However, BKMR is computationally intensive for large datasets, as repeated kernel inversion in Markov chain Monte Carlo (MCMC) can be time-consuming and often infeasible in practice. To address this issue, we propose using supervised random Fourier basis functions to replace the Gaussian process random effects. This re-frames the kernel machine regression as a linear mixed-effects model that facilitates computationally efficient estimation and prediction. Bayesian inference is conducted using MCMC with Hamiltonian Monte Carlo algorithms. Simulation studies demonstrate that our method yields results comparable to BKMR while significantly reducing computation time. Our approach outperforms BKMR when the exposure-response surface has stronger dependency and when using the predictive process as an alternative approximation method. Finally, we applied this approach to analyze over 270,000 birth records, examining associations between multiple ambient air pollutants and birthweight in Georgia.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2502.13157v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Danlu Zhang, Stephanie M. Eick, Howard H. Chang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A Zero-Inflated Poisson Latent Position Cluster Model</title> |
|
|
<link>https://arxiv.org/abs/2502.13790</link> |
|
|
<description>arXiv:2502.13790v2 Announce Type: replace |
|
|
Abstract: The latent position network model (LPM) is a popular approach for the statistical analysis of network data. A central aspect of this model is that it assigns nodes to random positions in a latent space, such that the probability of an interaction between each pair of individuals or nodes is determined by their distance in this latent space. A key feature of this model is that it allows one to visualize nuanced structures via the latent space representation. The LPM can be further extended to the Latent Position Cluster Model (LPCM), to accommodate the clustering of nodes by assuming that the latent positions are distributed following a finite mixture distribution. In this paper, we extend the LPCM to accommodate missing network data and apply this to non-negative discrete weighted social networks. By treating missing data as ``unusual'' zero interactions, we propose a combination of the LPCM with the zero-inflated Poisson distribution. Statistical inference is based on a novel partially collapsed Markov chain Monte Carlo algorithm, where a Mixture-of-Finite-Mixtures (MFM) model is adopted to automatically determine the number of clusters and optimal group partitioning. Our algorithm features a truncated absorb-eject move, which is a novel adaptation of an idea commonly used in collapsed samplers, within the context of MFMs. Another aspect of our work is that we illustrate our results on 3-dimensional latent spaces, maintaining clear visualizations while achieving more flexibility than 2-dimensional models. The performance of this approach is illustrated via three carefully designed simulation studies, as well as four different publicly available real networks, where some interesting new perspectives are uncovered.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2502.13790v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<category>stat.CO</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<arxiv:DOI>10.1017/nws.2025.10021</arxiv:DOI> |
|
|
<arxiv:journal_reference>Net Sci 14 (2026) e2</arxiv:journal_reference> |
|
|
<dc:creator>Chaoyi Lu, Riccardo Rastelli, Nial Friel</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Estimation of relative risk, odds ratio and their logarithms with guaranteed accuracy and controlled sample size ratio</title> |
|
|
<link>https://arxiv.org/abs/2503.04876</link> |
|
|
<description>arXiv:2503.04876v3 Announce Type: replace |
|
|
Abstract: Given two populations from which independent binary observations are taken with parameters $p_1$ and $p_2$ respectively, estimators are proposed for the relative risk $p_1/p_2$, the odds ratio $p_1(1-p_2)/(p_2(1-p_1))$ and their logarithms. The sampling strategy used by the estimators is based on two-stage sequential sampling applied to each population, where the sample sizes of the second stage depend on the results observed in the first stage. The estimators guarantee that the relative mean-square error, or the mean-square error for the logarithmic versions, is less than a target value for any $p_1, p_2 \in (0,1)$, and the ratio of average sample sizes from the two populations is close to a prescribed value. The estimators can also be used with group sampling, whereby samples are taken simultaneously from the two populations in batches of fixed size, each batch containing samples from both populations. The efficiency of the estimators with respect to the Cram\'er-Rao bound is good, and in particular it is close to $1$ for small values of the target error.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2503.04876v3</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Luis Mendo</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Quantifying sleep apnea heterogeneity using hierarchical Bayesian modeling</title> |
|
|
<link>https://arxiv.org/abs/2503.11599</link> |
|
|
<description>arXiv:2503.11599v5 Announce Type: replace |
|
|
Abstract: Obstructive Sleep Apnea (OSA) is a breathing disorder during sleep that affects millions of people worldwide. The diagnosis of OSA often occurs through an overnight polysomnogram (PSG) sleep study that generates a massive amount of physiological data. However, despite the evidence of substantial heterogeneity in the expression and symptoms of OSA, diagnosis and scientific analysis of severity typically focus on a single summary statistic, the Apnea-Hypopnea Index (AHI). We address the limitations of this approach through hierarchical Bayesian modeling of PSG data. Our approach produces interpretable random effects for each patient, which govern sleep-stage dynamics, rates of OSA events, and impacts of OSA events on subsequent sleep-stage dynamics. We propose a novel approach for using these random effects to produce a Bayes optimal clustering of patients. We use the proposed approach to analyze data from the APPLES study. Our analysis produces clinically interesting groups of patients with sleep apnea and a novel finding of an association between OSA expression and cognitive performance that is missed by an AHI-based analysis.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2503.11599v5</guid> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Glenn Palmer, Narat Srivali, David B. Dunson</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Representation Learning for Extrapolation in Perturbation Modeling</title> |
|
|
<link>https://arxiv.org/abs/2504.18522</link> |
|
|
<description>arXiv:2504.18522v2 Announce Type: replace |
|
|
Abstract: We consider the problem of modeling the effects of perturbations, such as gene knockdowns or drugs, on measurements, such as single-cell RNA or protein counts. Given data for some perturbations, we aim to predict the distribution of measurements for new combinations of perturbations. To address this challenging extrapolation task, we posit that perturbations act additively in a suitable, unknown embedding space. We formulate the data-generating process as a latent variable model, in which perturbations amount to mean shifts in latent space and can be combined additively. We then prove that, given sufficiently diverse training perturbations, the representation and perturbation effects are identifiable up to orthogonal transformation and use this to characterize the class of unseen perturbations for which we obtain extrapolation guarantees. We establish a link between our model class and shift interventions in linear latent causal models. To estimate the model from data, we propose a new method, the perturbation distribution autoencoder (PDAE), which is trained by maximizing the distributional similarity between true and simulated perturbation distributions. The trained model can then be used to predict previously unseen perturbation distributions. Through simulations, we demonstrate that PDAE can accurately predict the effects of unseen but identifiable perturbations, supporting our theoretical results.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2504.18522v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Julius von K\"ugelgen, Jakob Ketterer, Xinwei Shen, Nicolai Meinshausen, Jonas Peters</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Discrimination performance in illness-death models with interval-censored disease data</title> |
|
|
<link>https://arxiv.org/abs/2504.19726</link> |
|
|
<description>arXiv:2504.19726v2 Announce Type: replace |
|
|
Abstract: In clinical studies, the illness-death model is often used to describe disease progression. A subject starts disease-free, may develop the disease and then die, or die directly. In clinical practice, disease can only be diagnosed at pre-specified follow-up visits, so the exact time of disease onset is often unknown, resulting in interval-censored data. This study examines the impact of ignoring this interval-censored nature of disease data on the discrimination performance of illness-death models, focusing on the time-specific Area Under the receiver operating characteristic Curve (AUC) in both incident/dynamic and cumulative/dynamic definitions. A simulation study with data simulated from Weibull transition hazards and disease state censored at regular intervals is conducted. Estimates are derived using different methods: the Cox model with a time-dependent binary disease marker, which ignores interval-censoring, and the illness-death model for interval-censored data estimated with three implementations: the piecewise-constant model from the msm package, and the Weibull and M-spline models from the SmoothHazard package. These methods are also applied to a dataset of 2232 patients with high-grade soft tissue sarcoma, where the interval-censored disease state is the post-operative development of distant metastases. The results suggest that, in the presence of interval-censored disease times, it is important to account for interval-censoring not only when estimating the parameters of the model but also when evaluating the model's discrimination performance.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2504.19726v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<arxiv:DOI>10.1177/09622802251412855</arxiv:DOI> |
|
|
<arxiv:journal_reference>Statistical Methods in Medical Research 2026</arxiv:journal_reference> |
|
|
<dc:creator>Marta Spreafico, Anja J. Rueten-Budde, Hein Putter, Marta Fiocco</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Uncertainty Quantification for Prior-Data Fitted Networks using Martingale Posteriors</title> |
|
|
<link>https://arxiv.org/abs/2505.11325</link> |
|
|
<description>arXiv:2505.11325v3 Announce Type: replace |
|
|
Abstract: Prior-data fitted networks (PFNs) have emerged as promising foundation models for prediction from tabular data sets, achieving state-of-the-art performance on small to moderate data sizes without tuning. While PFNs are motivated by Bayesian ideas, they do not provide any uncertainty quantification for predictive means, quantiles, or similar quantities. We propose a principled and efficient sampling procedure to construct Bayesian posteriors for such estimates based on Martingale posteriors, and prove its convergence. Several simulated and real-world data examples showcase the uncertainty quantification of our method in inference applications.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2505.11325v3</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Thomas Nagler, David R\"ugamer</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Generalization Dynamics of Linear Diffusion Models</title> |
|
|
<link>https://arxiv.org/abs/2505.24769</link> |
|
|
<description>arXiv:2505.24769v2 Announce Type: replace |
|
|
Abstract: Diffusion models are powerful generative models that produce high-quality samples from complex data. While their infinite-data behavior is well understood, their generalization with finite data remains less clear. Classical learning theory predicts that generalization occurs at a sample complexity that is exponential in the dimension, far exceeding practical needs. We address this gap by analyzing diffusion models through the lens of data covariance spectra, which often follow power-law decays, reflecting the hierarchical structure of real data. To understand whether such a hierarchical structure can benefit learning in diffusion models, we develop a theoretical framework based on linear neural networks, congruent with a Gaussian hypothesis on the data. We quantify how the hierarchical organization of variance in the data and regularization impacts generalization. We find two regimes: When $N < d$, not all directions of variation are present in the training data, which results in a large gap between training and test loss. In this regime, we demonstrate how a strongly hierarchical data structure, as well as regularization and early stopping help to prevent overfitting. For $N > d$, we find that the sampling distributions of linear diffusion models approach their optimum (measured by the Kullback-Leibler divergence) linearly with $d/N$, independent of the specifics of the data distribution. Our work clarifies how sample complexity governs generalization in a simple model of diffusion-based generative models.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2505.24769v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cond-mat.dis-nn</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Claudia Merger, Sebastian Goldt</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Two-Phase Treatment with Noncompliance: Identifying the Cumulative Average Treatment Effect via Multisite Instrumental Variables</title> |
|
|
<link>https://arxiv.org/abs/2506.03104</link> |
|
|
<description>arXiv:2506.03104v3 Announce Type: replace |
|
|
Abstract: When evaluating a two-phase intervention, the cumulative average treatment effect (ATE) is often the primary causal estimand of interest. However, some individuals who do not respond well to the Phase I treatment may subsequently display noncompliant behaviors. At the same time, exposure to the Phase I treatment is expected to directly influence an individual's potential outcomes, thereby violating the exclusion restriction. Building on an instrumental variable (IV) strategy for multisite trials, we clarify the conditions under which the cumulative ATE of a two-phase treatment can be identified by employing the random assignment of the Phase I treatment as the instrument. Our strategy relaxes both the conventional exclusion restriction and sequential ignorability assumptions. We assess the performance of the new strategy through simulation studies. Additionally, we reanalyze data from the Tennessee class size study, in which students and teachers were randomly assigned to either small or regular class types in kindergarten (Phase I) with noncompliance emerging in Grade 1 (Phase II). Applying our new strategy, we estimate the cumulative ATE of receiving two consecutive years of instruction in a small versus regular class.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2506.03104v3</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Guanglei Hong, Xu Qin, Zhengyan Xu, Fan Yang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Post-selection inference with a single realization of a network</title> |
|
|
<link>https://arxiv.org/abs/2508.11843</link> |
|
|
<description>arXiv:2508.11843v2 Announce Type: replace |
|
|
Abstract: Given a dataset consisting of a single realization of a network, we consider conducting inference on a parameter selected from the data. In particular, we focus on the setting where the parameter of interest is a linear combination of the mean connectivities within and between estimated communities. Inference in this setting poses a challenge, since the communities are themselves estimated from the data. Furthermore, since only a single realization of the network is available, sample splitting is not possible. In this paper, we show that it is possible to split a single realization of a network consisting of $n$ nodes into two (or more) networks involving the same $n$ nodes; the first network can be used to select a data-driven parameter, and the second to conduct inference on that parameter. In the case of weighted networks with Poisson or Gaussian edges, we obtain two independent realizations of the network; by contrast, in the case of Bernoulli edges, the two realizations are dependent, and so extra care is required. We establish the theoretical properties of our estimators, in the sense of confidence intervals that attain the nominal (selective) coverage, and demonstrate their utility in numerical simulations and in application to a dataset representing the relationships among dolphins in Doubtful Sound, New Zealand.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2508.11843v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Ethan Ancell, Daniela Witten, Daniel Kessler</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Hausdorff consistency of MLE in folded normal and Gaussian mixtures</title> |
|
|
<link>https://arxiv.org/abs/2509.12206</link> |
|
|
<description>arXiv:2509.12206v2 Announce Type: replace |
|
|
Abstract: We develop a constant-tracking likelihood theory for two nonregular models: the folded normal and finite Gaussian mixtures. For the folded normal, we prove boundary coercivity for the profiled likelihood, show that the profile path of the location parameter exists and is strictly decreasing by an implicit-function argument, and establish a unique profile maximizer in the scale parameter. Deterministic envelopes for the log-likelihood, the score, and the Hessian yield elementary uniform laws of large numbers with finite-sample bounds, avoiding covering numbers. Identification and Kullback-Leibler separation deliver consistency. A sixth-order expansion of the log hyperbolic cosine creates a quadratic-minus-quartic contrast around zero, leading to a nonstandard one-fourth-power rate for the location estimator at the kink and a standard square-root rate for the scale estimator, with a uniform remainder bound. For finite Gaussian mixtures with distinct components and positive weights, we give a short identifiability proof up to label permutations via Fourier and Vandermonde ideas, derive two-sided Gaussian envelopes and responsibility-based gradient bounds on compact sieves, and obtain almost-sure and high-probability uniform laws with explicit constants. Using a minimum-matching distance on permutation orbits, we prove Hausdorff consistency on fixed and growing sieves. We quantify variance-collapse spikes via an explicit spike-bonus bound and show that a quadratic penalty in location and log-scale dominates this bonus, making penalized likelihood coercive; when penalties shrink but sample size times penalty diverges, penalized estimators remain consistent. All proofs are constructive, track constants, verify measurability of maximizers, and provide practical guidance for tuning sieves, penalties, and EM-style optimization.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2509.12206v2</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-sa/4.0/</dc:rights> |
|
|
<dc:creator>Koustav Mallik</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Error Analysis of Discrete Flow with Generator Matching</title> |
|
|
<link>https://arxiv.org/abs/2509.21906</link> |
|
|
<description>arXiv:2509.21906v2 Announce Type: replace |
|
|
Abstract: Discrete flow models offer a powerful framework for learning distributions over discrete state spaces and have demonstrated superior performance compared to discrete diffusion models. However, their convergence properties and error analysis remain largely unexplored. In this work, we develop a unified framework grounded in stochastic calculus theory to systematically investigate the theoretical properties of discrete flow models. Specifically, by leveraging a Girsanov-type theorem for the path measures of two continuous-time Markov chains (CTMCs), we present a comprehensive error analysis that accounts for both transition rate estimation error and early stopping error. In fact, the estimation error of transition rates has received little attention in existing works. Unlike discrete diffusion models, discrete flow incurs no initialization error caused by truncating the time horizon in the noising process. Building on generator matching and uniformization, we establish non-asymptotic error bounds for distribution estimation without the boundedness condition on oracle transition rates. Furthermore, we derive a faster rate of total variation convergence for the estimated distribution with the boundedness condition, yielding a nearly optimal rate in terms of sample size. Our results provide the first error analysis for discrete flow models. We also investigate model performance under different settings based on simulation results.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2509.21906v2</guid> |
|
|
<category>math.ST</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Zhengyan Wan, Yidong Ouyang, Qiang Yao, Liyan Xie, Fang Fang, Hongyuan Zha, Guang Cheng</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>A General Framework for Joint Multi-State Models</title> |
|
|
<link>https://arxiv.org/abs/2510.07128</link> |
|
|
<description>arXiv:2510.07128v3 Announce Type: replace |
|
|
Abstract: Conventional joint modeling approaches generally characterize the relationship between longitudinal biomarkers and discrete event occurrences within terminal, recurring or competing risk settings, thereby offering a limited representation of complex, multi-state trajectories. |
|
|
We propose a general multi-state joint modeling framework that unifies longitudinal biomarker dynamics with multi-state time-to-event processes defined on arbitrary directed graphs. The proposed framework also accommodates nonlinear longitudinal submodels and scalable inference via stochastic gradient descent. This formulation encompasses both Markovian and semi-Markovian transition structures, allowing recurrent cycles and terminal absorptions to be naturally represented. The longitudinal and event processes are linked through shared latent structures within nonlinear mixed-effects models, extending classical joint modeling formulations. |
|
|
We derive the complete likelihood, model selection criteria, and develop scalable inference procedures based on stochastic gradient descent to enable high-dimensional and large-scale applications. In addition, we formulate a dynamic prediction framework that provides individualized state-transition probabilities and personalized risk assessments along complex event trajectories. |
|
|
Through simulation and application to the PAQUID cohort, we demonstrate accurate parameter recovery and individualized prediction.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.07128v3</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-sa/4.0/</dc:rights> |
|
|
<dc:creator>F\'elix Laplante, Christophe Ambroise</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Calibrating Decision Robustness via Inverse Conformal Risk Control</title> |
|
|
<link>https://arxiv.org/abs/2510.07750</link> |
|
|
<description>arXiv:2510.07750v2 Announce Type: replace |
|
|
Abstract: Robust optimization safeguards decisions against uncertainty by optimizing against worst-case scenarios, yet its effectiveness hinges on a prespecified robustness level that is often chosen ad hoc, leading to either insufficient protection or overly conservative and costly solutions. Recent approaches using conformal prediction construct data-driven uncertainty sets with finite-sample coverage guarantees, but they still fix coverage targets a priori and offer little guidance for selecting robustness levels. We propose a new framework that provides distribution-free, finite-sample guarantees on both miscoverage and regret for any family of robust predict-then-optimize policies. Our method constructs valid estimators that trace out the miscoverage--regret Pareto frontier, enabling decision-makers to reliably evaluate and calibrate robustness levels according to their cost--risk preferences. The framework is simple to implement, broadly applicable across classical optimization formulations, and achieves sharper finite-sample performance. This paper offers a principled data-driven methodology for guiding robustness selection and empowers practitioners to balance robustness and conservativeness in high-stakes decision-making.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.07750v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Wenbin Zhou, Shixiang Zhu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>On estimation of weighted cumulative residual Tsallis entropy for complete and censored samples</title> |
|
|
<link>https://arxiv.org/abs/2510.12442</link> |
|
|
<description>arXiv:2510.12442v2 Announce Type: replace |
|
|
Abstract: Recently, weighted cumulative residual Tsallis entropy has been introduced in the literature as a generalization of weighted cumulative residual entropy. We study some new properties of the weighted cumulative residual Tsallis entropy measure. Next, we propose some non-parametric estimators of this measure. Asymptotic properties of these estimators are discussed. The performance of these estimators is compared by mean squared error. Non-parametric estimators for the weighted cumulative residual entropy measure are also discussed. An estimator of weighted cumulative residual Tsallis entropy for progressive type-II censored data is proposed, and its performance is investigated by Monte Carlo simulations for various censoring schemes. Two uniformity tests for complete samples are proposed based on estimators of these two measures, and the power of the tests is compared with some popular tests. The tests perform reasonably well. A uniformity test under progressively type-II censored data is also developed. Some real datasets are analysed for illustration.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.12442v2</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Siddhartha Chakraborty, Asok K. Nanda</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Deep Ensembles for Epistemic Uncertainty: A Frequentist Perspective</title> |
|
|
<link>https://arxiv.org/abs/2510.22063</link> |
|
|
<description>arXiv:2510.22063v2 Announce Type: replace |
|
|
Abstract: Decomposing prediction uncertainty into aleatoric (irreducible) and epistemic (reducible) components is critical for the reliable deployment of machine learning systems. While the mutual information between the response variable and model parameters is a principled measure for epistemic uncertainty, it requires access to the parameter posterior, which is computationally challenging to approximate. Consequently, practitioners often rely on probabilistic predictions from deep ensembles to quantify uncertainty, which have demonstrated strong empirical performance. However, a theoretical understanding of their success from a frequentist perspective remains limited. We address this gap by first considering a bootstrap-based estimator for epistemic uncertainty, which we prove is asymptotically correct. Next, we connect deep ensembles to the bootstrap estimator by decomposing it into data variability and training stochasticity; specifically, we show that deep ensembles capture the training stochasticity component. Through empirical studies, we show that this stochasticity component constitutes the majority of epistemic uncertainty, thereby explaining the effectiveness of deep ensembles.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.22063v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Anchit Jain, Stephen Bates</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Physics-Informed Neural Networks and Neural Operators for Parametric PDEs</title> |
|
|
<link>https://arxiv.org/abs/2511.04576</link> |
|
|
<description>arXiv:2511.04576v3 Announce Type: replace |
|
|
Abstract: PDEs arise ubiquitously in science and engineering, where solutions depend on parameters (physical properties, boundary conditions, geometry). Traditional numerical methods require re-solving the PDE for each parameter, making parameter space exploration prohibitively expensive. Recent machine learning advances, particularly physics-informed neural networks (PINNs) and neural operators, have revolutionized parametric PDE solving by learning solution operators that generalize across parameter spaces. We critically analyze two main paradigms: (1) PINNs, which embed physical laws as soft constraints and excel at inverse problems with sparse data, and (2) neural operators (e.g., DeepONet, Fourier Neural Operator), which learn mappings between infinite-dimensional function spaces and achieve unprecedented generalization. Through comparisons across fluid dynamics, solid mechanics, heat transfer, and electromagnetics, we show neural operators can achieve computational speedups of $10^3$ to $10^5$ over traditional solvers for multi-query scenarios, while maintaining comparable accuracy. We provide practical guidance for method selection, discuss theoretical foundations (universal approximation, convergence), and identify critical open challenges: high-dimensional parameters, complex geometries, and out-of-distribution generalization. This work establishes a unified framework for understanding parametric PDE solvers via operator learning, offering a comprehensive, incrementally updated resource for this rapidly evolving field.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2511.04576v3</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Zhuo Zhang, Xiong Xiong, Sen Zhang, Yuan Zhao, Xi Yang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Model-oriented Graph Distances via Partially Ordered Sets</title> |
|
|
<link>https://arxiv.org/abs/2511.10625</link> |
|
|
<description>arXiv:2511.10625v2 Announce Type: replace |
|
|
Abstract: A well-defined distance on the parameter space is key to evaluating estimators, ensuring consistency, and building confidence sets. While there are typically standard distances to adopt in a continuous space, this is not the case for combinatorial parameters such as graphs that represent statistical models. Defined on the graphs alone, existing proposals like the structural Hamming distance ignore the structure of the model space and can thus exhibit undesirable behaviors. We propose a model-oriented framework for defining the distance between graphs that is applicable across different graph classes. Our approach treats each graph as a statistical model and organizes the graphs in a partially ordered set based on model inclusion. This induces a neighborhood structure, from which we define the model-oriented distance as the length of a shortest path through neighbors, yielding a metric in the space of graphs. We apply this framework to probabilistic undirected graphs, causal directed acyclic graphs, probabilistic completed partially directed acyclic graphs, and causal maximally oriented partially directed acyclic graphs. We analyze theoretical and empirical behaviors of the model-oriented distance and draw comparison with existing distances. By exploiting the underlying poset structures, we develop algorithms for computing and bounding the proposed distance that scale to moderate-sized graphs.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2511.10625v2</guid> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Armeen Taeb, F. Richard Guo, Leonard Henckel</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Standardized Descriptive Index for Measuring Deviation and Uncertainty in Psychometric Indicators</title> |
|
|
<link>https://arxiv.org/abs/2512.21399</link> |
|
|
<description>arXiv:2512.21399v2 Announce Type: replace |
|
|
Abstract: The use of descriptive statistics in pilot testing procedures requires objective, standard diagnostic tools that are feasible for small sample sizes. While current psychometric practices report item-level statistics, they often report these raw descriptives separately rather than consolidating both mean and standard deviation into a single diagnostic tool that directly measures item quality. By leveraging the analytical properties of Cohen's d, this article repurposes it for scale development as a standardized item deviation index, which measures the extent of an item's raw deviation relative to its scale midpoint while accounting for its own uncertainty. Analytical properties such as boundedness, scale invariance, and bias are explored to further understand how the index values behave, which will aid future efforts to establish empirical thresholds that characterize redundancy among formative indicators and consistency among reflective indicators.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.21399v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Mark Dominique Dalipe Mu\~noz</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>CAOS: Conformal Aggregation of One-Shot Predictors</title> |
|
|
<link>https://arxiv.org/abs/2601.05219</link> |
|
|
<description>arXiv:2601.05219v2 Announce Type: replace |
|
|
Abstract: One-shot prediction enables rapid adaptation of pretrained foundation models to new tasks using only one labeled example, but lacks principled uncertainty quantification. While conformal prediction provides finite-sample coverage guarantees, standard split conformal methods are inefficient in the one-shot setting due to data splitting and reliance on a single predictor. We propose Conformal Aggregation of One-Shot Predictors (CAOS), a conformal framework that adaptively aggregates multiple one-shot predictors and uses a leave-one-out calibration scheme to fully exploit scarce labeled data. Despite violating classical exchangeability assumptions, we prove that CAOS achieves valid marginal coverage using a monotonicity-based argument. Experiments on one-shot facial landmarking and RAFT text classification tasks show that CAOS produces substantially smaller prediction sets than split conformal baselines while maintaining reliable coverage.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.05219v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Maja Waldron</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Optimal Transport under Group Fairness Constraints</title> |
|
|
<link>https://arxiv.org/abs/2601.07144</link> |
|
|
<description>arXiv:2601.07144v2 Announce Type: replace |
|
|
Abstract: Ensuring fairness in matching algorithms is a key challenge in allocating scarce resources and positions. Focusing on Optimal Transport (OT), we introduce a novel notion of group fairness requiring that the probability of matching two individuals from any two given groups in the OT plan satisfies a predefined target. We first propose a modified Sinkhorn algorithm to compute perfectly fair transport plans efficiently. Since exact fairness can significantly degrade matching quality in practice, we then develop two relaxation strategies. The first one involves solving a penalized OT problem, for which we derive novel finite-sample complexity guarantees. Our second strategy leverages bilevel optimization to learn a ground cost that induces a fair OT solution, and we establish a bound on the deviation of fairness when matching unseen data. Finally, we present empirical results illustrating the performance of our approaches and the trade-off between fairness and transport cost.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.07144v2</guid> |
|
|
<category>stat.ML</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Linus Bleistein, Mathieu Dagr\'eou, Francisco Andrade, Thomas Boudou, Aur\'elien Bellet</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Variational autoencoder for inference of nonlinear mixed effect models based on ordinary differential equations</title> |
|
|
<link>https://arxiv.org/abs/2601.17400</link> |
|
|
<description>arXiv:2601.17400v2 Announce Type: replace |
|
|
Abstract: We propose a variational autoencoder (VAE) approach for parameter estimation in nonlinear mixed-effects models based on ordinary differential equations (NLME-ODEs) using longitudinal data from multiple subjects. In moderate dimensions, likelihood-based inference via the stochastic approximation EM algorithm (SAEM) is widely used, but it relies on Markov Chain Monte-Carlo (MCMC) to approximate subject-specific posteriors. As model complexity increases or observations per subject become sparse and irregular, performance often deteriorates due to a complex, multimodal likelihood surface, which may lead to MCMC convergence difficulties. We instead estimate parameters by maximizing the evidence lower bound (ELBO), a regularized surrogate for the marginal likelihood. A VAE with a shared encoder amortizes inference of subject-specific random effects by avoiding per-subject optimization and the use of MCMC. Beyond pointwise estimation, we quantify parameter uncertainty using an observed-information-based variance estimator and verify that practical identifiability of the model parameters is not compromised by nuisance parameters introduced in the encoder. We evaluate the method in three simulation case studies (pharmacokinetics, humoral response to vaccination, and TGF-$\beta$ activation dynamics in asthmatic airways) and on a real-world antibody kinetics dataset, comparing against SAEM baselines.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.17400v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<arxiv:DOI>10.13140/RG.2.2.23271.71841</arxiv:DOI> |
|
|
<dc:creator>Zhe Li, M\'elanie Prague, Rodolphe Thi\'ebaut, Quentin Clairon</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>M-SGWR: Multiscale Similarity and Geographically Weighted Regression</title> |
|
|
<link>https://arxiv.org/abs/2601.19888</link> |
|
|
<description>arXiv:2601.19888v2 Announce Type: replace |
|
|
Abstract: The first law of geography is a cornerstone of spatial analysis, emphasizing that nearby and related locations tend to be more similar. However, defining what constitutes "near" and "related" remains challenging, as different phenomena exhibit distinct spatial patterns. Traditional local regression models, such as Geographically Weighted Regression (GWR) and Multiscale GWR (MGWR), quantify spatial relationships solely through geographic proximity. In an era of globalization and digital connectivity, however, geographic proximity alone may be insufficient to capture how locations are interconnected. To address this limitation, we propose a new multiscale local regression framework, termed M-SGWR, which characterizes spatial interaction across two dimensions: geographic proximity and attribute (variable) similarity. For each predictor, geographic and attribute-based weight matrices are constructed separately and then combined using an optimized parameter, alpha, which governs their relative contribution to local model fitting. Analogous to variable-specific bandwidths in MGWR, the optimal alpha varies by predictor, allowing the model to flexibly account for geographic, mixed, or non-spatial (remote similarity) effects. Results from two simulation experiments and one empirical application demonstrate that M-SGWR consistently outperforms GWR, SGWR, and MGWR across all goodness-of-fit metrics.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2601.19888v2</guid> |
|
|
<category>stat.ME</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.LG</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>M. Naser Lessani, Zhenlong Li, Manzhu Yu, Helen Greatrex, Chan Shen</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Iterative execution of discrete and inverse discrete Fourier transforms with applications for signal denoising via sparsification</title> |
|
|
<link>https://arxiv.org/abs/2211.09284</link> |
|
|
<description>arXiv:2211.09284v4 Announce Type: replace-cross |
|
|
Abstract: We describe a family of iterative algorithms that involve the repeated execution of discrete and inverse discrete Fourier transforms. One interesting member of this family is motivated by the discrete Fourier transform uncertainty principle and involves the application of a sparsification operation to both the real domain and frequency domain data with convergence obtained when real domain sparsity hits a stable pattern. This sparsification variant has practical utility for signal denoising, in particular the recovery of a periodic spike signal in the presence of Gaussian noise. General convergence properties and denoising performance relative to existing methods are demonstrated using simulation studies. An R package implementing this technique and related resources can be found at https://hrfrost.host.dartmouth.edu/IterativeFT.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2211.09284v4</guid> |
|
|
<category>eess.SP</category> |
|
|
<category>cs.NA</category> |
|
|
<category>math.NA</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>H. Robert Frost</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Optimal sampling for stochastic and natural gradient descent</title> |
|
|
<link>https://arxiv.org/abs/2402.03113</link> |
|
|
<description>arXiv:2402.03113v2 Announce Type: replace-cross |
|
|
Abstract: We consider the problem of optimising the expected value of a loss functional over a nonlinear model class of functions, assuming that we only have access to realisations of the gradient of the loss. This is a classical task in statistics, machine learning and physics-informed machine learning. A straightforward solution is to replace the exact objective with a Monte Carlo estimate before employing standard first-order methods like gradient descent, which yields the classical stochastic gradient descent method. But replacing the true objective with an estimate incurs a generalisation error. Rigorous bounds for this error typically require strong compactness and Lipschitz continuity assumptions while providing a very slow decay with sample size. To alleviate these issues, we propose a version of natural gradient descent that is based on optimal sampling methods. Under classical assumptions on the loss and the nonlinear model class, we prove that this scheme converges almost surely monotonically to a stationary point of the true objective. Under Polyak-Lojasiewicz-type conditions, this provides bounds for the generalisation error. As a remarkable result, we show that our stochastic optimisation scheme achieves the linear or exponential convergence rates of deterministic first order descent methods under suitable conditions.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2402.03113v2</guid> |
|
|
<category>math.OC</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Robert Gruhlke, Anthony Nouy, Philipp Trunschke</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>TorchCP: A Python Library for Conformal Prediction</title> |
|
|
<link>https://arxiv.org/abs/2402.12683</link> |
|
|
<description>arXiv:2402.12683v5 Announce Type: replace-cross |
|
|
Abstract: Conformal prediction (CP) is a powerful statistical framework that generates prediction intervals or sets with guaranteed coverage probability. While CP algorithms have evolved beyond traditional classifiers and regressors to sophisticated deep learning models like deep neural networks (DNNs), graph neural networks (GNNs), and large language models (LLMs), existing CP libraries often lack the model support and scalability for large-scale deep learning (DL) scenarios. This paper introduces TorchCP, a PyTorch-native library designed to integrate state-of-the-art CP algorithms into DL techniques, including DNN-based classifiers/regressors, GNNs, and LLMs. Released under the LGPL-3.0 license, TorchCP comprises about 16k lines of code, validated with 100\% unit test coverage and detailed documentation. Notably, TorchCP enables CP-specific training algorithms, online prediction, and GPU-accelerated batch processing, achieving up to 90\% reduction in inference time on large datasets. With its low-coupling design, comprehensive suite of advanced methods, and full GPU scalability, TorchCP empowers researchers and practitioners to enhance uncertainty quantification across cutting-edge applications.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2402.12683v5</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.CV</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Jianguo Huang, Jianqing Song, Xuanning Zhou, Bingyi Jing, Hongxin Wei</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Bias-Optimal Bounds for SGD: A Computer-Aided Lyapunov Analysis</title> |
|
|
<link>https://arxiv.org/abs/2505.17965</link> |
|
|
<description>arXiv:2505.17965v2 Announce Type: replace-cross |
|
|
Abstract: The non-asymptotic analysis of Stochastic Gradient Descent (SGD) typically yields bounds that decompose into a bias term and a variance term. In this work, we focus on the bias component and study the extent to which SGD can match the optimal convergence behavior of deterministic gradient descent. Assuming only (strong) convexity and smoothness of the objective, we derive new bounds that are bias-optimal, in the sense that the bias term coincides with the worst-case rate of gradient descent. Our results hold for the full range of constant step-sizes $\gamma L \in (0,2)$, including critical and large step-size regimes that were previously unexplored without additional variance assumptions. The bounds are obtained through the construction of a simple Lyapunov energy whose monotonicity yields sharp convergence guarantees. To design the parameters of this energy, we employ the Performance Estimation Problem framework, which we also use to provide numerical evidence for the optimality of the associated variance terms.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2505.17965v2</guid> |
|
|
<category>math.OC</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Daniel Cortild, Lucas Ketels, Juan Peypouquet, Guillaume Garrigos</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Model Agnostic Differentially Private Causal Inference</title> |
|
|
<link>https://arxiv.org/abs/2505.19589</link> |
|
|
<description>arXiv:2505.19589v3 Announce Type: replace-cross |
|
|
Abstract: Estimating causal effects from observational data is essential in fields such as medicine, economics and social sciences, where privacy concerns are paramount. We propose a general, model-agnostic framework for differentially private estimation of average treatment effects (ATE) that avoids strong structural assumptions on the data-generating process or the models used to estimate propensity scores and conditional outcomes. In contrast to prior work, which enforces differential privacy by directly privatizing these nuisance components, our approach decouples nuisance estimation from privacy protection. This separation allows the use of flexible, state-of-the-art black-box models, while differential privacy is achieved by perturbing only predictions and aggregation steps within a fold-splitting scheme with ensemble techniques. We instantiate the framework for three classical estimators -- the G-Formula, inverse propensity weighting (IPW), and augmented IPW (AIPW) -- and provide formal utility and privacy guarantees, together with privatized confidence intervals. Empirical results on synthetic and real data show that our methods maintain competitive performance under realistic privacy budgets.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2505.19589v3</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Christian Janos Lebeda, Mathieu Even, Aur\'elien Bellet, Julie Josse</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Antithetic Noise in Diffusion Models</title> |
|
|
<link>https://arxiv.org/abs/2506.06185</link> |
|
|
<description>arXiv:2506.06185v2 Announce Type: replace-cross |
|
|
Abstract: We systematically study antithetic initial noise in diffusion models, discovering that pairing each noise sample with its negation consistently produces strong negative correlation. This universal phenomenon holds across datasets, model architectures, conditional and unconditional sampling, and even other generative models such as VAEs and Normalizing Flows. To explain it, we combine experiments and theory and propose a \textit{symmetry conjecture} that the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift), supported by empirical evidence. This negative correlation leads to substantially more reliable uncertainty quantification with up to $90\%$ narrower confidence intervals. We demonstrate these gains on tasks including estimating pixel-wise statistics and evaluating diffusion inverse solvers. We also provide extensions with randomized quasi-Monte Carlo noise designs for uncertainty quantification, and explore additional applications of the antithetic noise design to improve image editing and generation diversity. Our framework is training-free, model-agnostic, and adds no runtime overhead. Code is available at https://github.com/jjia131/Antithetic-Noise-in-Diffusion-Models-page.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2506.06185v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.NA</category> |
|
|
<category>math.NA</category> |
|
|
<category>stat.CO</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Jing Jia, Sifan Liu, Bowen Song, Wei Yuan, Liyue Shen, Guanyang Wang</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Tokenization Multiplicity Leads to Arbitrary Price Variation in LLM-as-a-service</title> |
|
|
<link>https://arxiv.org/abs/2506.06446</link> |
|
|
<description>arXiv:2506.06446v2 Announce Type: replace-cross |
|
|
Abstract: Providers of LLM-as-a-service have predominantly adopted a simple pricing model: users pay a fixed price per token. Consequently, one may think that the price two different users would pay for the same output string under the same input prompt is the same. In our work, we show that, surprisingly, this is not (always) true. We find empirical evidence that, particularly for non-English outputs, both proprietary and open-weights LLMs often generate the same (output) string with multiple different tokenizations, even under the same input prompt, and this in turn leads to arbitrary price variation. To address the problem of tokenization multiplicity, we introduce canonical generation, a type of constrained generation that restricts LLMs to only generate canonical tokenizations -- the unique tokenization in which each string is tokenized during the training process of an LLM. Further, we introduce an efficient sampling algorithm for canonical generation based on the Gumbel-Max trick. Experiments on a variety of natural language tasks demonstrate that our sampling algorithm for canonical generation is comparable to standard sampling in terms of performance and runtime, and it solves the problem of tokenization multiplicity.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2506.06446v2</guid> |
|
|
<category>cs.CL</category> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Ivi Chatzi, Nina Corvelo Benz, Stratis Tsirtsis, Manuel Gomez-Rodriguez</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Diffusion Models under Alternative Noise: Simplified Analysis and Sensitivity</title> |
|
|
<link>https://arxiv.org/abs/2506.08337</link> |
|
|
<description>arXiv:2506.08337v2 Announce Type: replace-cross |
|
|
Abstract: Diffusion models, typically formulated as discretizations of stochastic differential equations (SDEs), have achieved state-of-the-art performance in generative tasks. However, their theoretical analysis often involves complex proofs. In this work, we present a simplified framework for analyzing the Euler--Maruyama discretization of variance-preserving SDEs (VP-SDEs). Using Gr\"onwall's inequality, we derive a convergence rate of $O(T^{-1/2})$ under standard Lipschitz assumptions, streamlining prior analyses. We then demonstrate that the standard Gaussian noise can be replaced by computationally cheaper discrete random variables (e.g., Rademacher) without sacrificing this convergence guarantee, provided the mean and variance are matched. Our experiments validate this theory, showing that (i) discrete noise achieves sample quality comparable to Gaussian noise provided the variance is matched correctly, and (ii) performance degrades if the noise variance is scaled incorrectly.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2506.08337v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Juhyeok Choi, Chenglin Fan</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Direct Bias-Correction Term Estimation for Average Treatment Effect Estimation</title> |
|
|
<link>https://arxiv.org/abs/2509.22122</link> |
|
|
<description>arXiv:2509.22122v2 Announce Type: replace-cross |
|
|
Abstract: This study considers the estimation of the direct bias-correction term for estimating the average treatment effect (ATE). Let $\{(X_i, D_i, Y_i)\}_{i=1}^{n}$ be the observations, where $X_i$ denotes $K$-dimensional covariates, $D_i \in \{0, 1\}$ denotes a binary treatment assignment indicator, and $Y_i$ denotes an outcome. In ATE estimation, $h_0(D_i, X_i) = \frac{1[D_i = 1]}{e_0(X_i)} - \frac{1[D_i = 0]}{1 - e_0(X_i)}$ is called the bias-correction term, where $e_0(X_i)$ is the propensity score. The bias-correction term is also referred to as the Riesz representer or clever covariates, depending on the literature, and plays an important role in construction of efficient ATE estimators. In this study, we propose estimating $h_0$ by directly minimizing the Bregman divergence between its model and $h_0$, which includes squared error and Kullback--Leibler divergence as special cases. Our proposed method is inspired by direct density ratio estimation methods and generalizes existing bias-correction term estimation methods, such as covariate balancing weights, Riesz regression, and nearest neighbor matching. Importantly, under specific choices of bias-correction term models and Bregman divergence, we can automatically ensure the covariate balancing property. Thus, our study provides a practical modeling and estimation approach through a generalization of existing methods.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2509.22122v2</guid> |
|
|
<category>econ.EM</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Masahiro Kato</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Fidel-TS: A High-Fidelity Multimodal Benchmark for Time Series Forecasting</title> |
|
|
<link>https://arxiv.org/abs/2509.24789</link> |
|
|
<description>arXiv:2509.24789v3 Announce Type: replace-cross |
|
|
Abstract: The evaluation of time series forecasting models is hindered by a critical lack of high-quality benchmarks, leading to a potential illusion of progress. Existing datasets suffer from issues ranging from pre-training data contamination in the age of LLMs to the temporal and description leakage prevalent in early multimodal designs. To address this, we formalize the core principles of high-fidelity benchmarking, focusing on data sourcing integrity, leak-free and causally sound design, and structural clarity. We introduce Fidel-TS, a new large-scale benchmark built from the ground up on these principles by sourcing data from live APIs. Our experiments reveal the flaws of the previous benchmarks and the biases in model evaluation, providing new insights into multiple existing forecasting models and LLMs across various evaluation tasks.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2509.24789v3</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Zhijian Xu, Wanxu Cai, Xilin Dai, Zhaorong Deng, Qiang Xu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Test-Time Anchoring for Discrete Diffusion Posterior Sampling</title> |
|
|
<link>https://arxiv.org/abs/2510.02291</link> |
|
|
<description>arXiv:2510.02291v2 Announce Type: replace-cross |
|
|
Abstract: While continuous diffusion models have achieved remarkable success, discrete diffusion offers a unified framework for jointly modeling text and images. Beyond unification, discrete diffusion provides faster inference, finer control, and principled training-free guidance, making it well-suited for posterior sampling. Existing approaches to posterior sampling using discrete diffusion face severe challenges: derivative-free guidance yields sparse signals, continuous relaxations limit applicability, and split Gibbs samplers suffer from the curse of dimensionality. To overcome these limitations, we introduce Anchored Posterior Sampling (APS), built on two key innovations: quantized expectation for gradient-like guidance in discrete embedding space, and anchored remasking for adaptive decoding. APS achieves state-of-the-art performance among discrete diffusion samplers on both linear and nonlinear inverse problems across the standard image benchmarks. We demonstrate the generality of APS through training-free stylization and text-guided editing. We further apply APS to a large-scale diffusion language model, showing consistent improvement in question answering.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.02291v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.CV</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Litu Rout, Andreas Lugmayr, Yasamin Jafarian, Srivatsan Varadharajan, Constantine Caramanis, Sanjay Shakkottai, Ira Kemelmacher-Shlizerman</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>On the Provable Performance Guarantee of Efficient Reasoning Models</title> |
|
|
<link>https://arxiv.org/abs/2510.09133</link> |
|
|
<description>arXiv:2510.09133v2 Announce Type: replace-cross |
|
|
Abstract: Large reasoning models (LRMs) have achieved remarkable progress in complex problem-solving tasks. Despite this success, LRMs typically suffer from high computational costs during deployment, highlighting a need for efficient inference. A practical direction of efficiency improvement is to switch the LRM between thinking and non-thinking modes dynamically. However, such approaches often introduce additional reasoning errors and lack statistical guarantees for the performance loss, which are critical for high-stakes applications. In this work, we propose Probably Approximately Correct (PAC) reasoning that controls the performance loss under the user-specified tolerance. Specifically, we construct an upper confidence bound on the performance loss and determine a threshold for switching to the non-thinking model. Theoretically, using the threshold to switch between the thinking and non-thinking modes ensures bounded performance loss in a distribution-free manner. Our comprehensive experiments on reasoning benchmarks show that the proposed method can reduce the computational budget while keeping the performance loss within the user-specified tolerance.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.09133v2</guid> |
|
|
<category>cs.AI</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Hao Zeng, Jianguo Huang, Bingyi Jing, Hongxin Wei, Bo An</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>MARS-M: When Variance Reduction Meets Matrices</title> |
|
|
<link>https://arxiv.org/abs/2510.21800</link> |
|
|
<description>arXiv:2510.21800v3 Announce Type: replace-cross |
|
|
Abstract: Matrix-based preconditioned optimizers, such as Muon, have recently been shown to be more efficient than scalar-based optimizers for training large-scale neural networks, including large language models (LLMs). Recent benchmark studies of LLM pretraining optimizers have demonstrated that variance-reduction techniques such as MARS can substantially speed up training compared with standard optimizers that do not employ variance reduction. In this paper, we introduce MARS-M, a new optimizer that integrates MARS-style variance reduction with Muon. Under standard regularity conditions, we prove that MARS-M converges to a first-order stationary point at a rate of $\tilde{\mathcal{O}}(T^{-1/3})$, improving upon the $\tilde{\mathcal{O}}(T^{-1/4})$ rate attained by Muon. Empirical results on language modeling and computer vision tasks demonstrate that MARS-M consistently yields lower losses and improved performance across various downstream benchmarks. The implementation of MARS-M is available at https://github.com/AGI-Arena/MARS/tree/main/MARS_M.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.21800v3</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>math.OC</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Yifeng Liu, Angela Yuan, Quanquan Gu</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>An Analysis of Causal Effect Estimation using Outcome Invariant Data Augmentation</title> |
|
|
<link>https://arxiv.org/abs/2510.25128</link> |
|
|
<description>arXiv:2510.25128v2 Announce Type: replace-cross |
|
|
Abstract: The technique of data augmentation (DA) is often used in machine learning for regularization purposes to better generalize under i.i.d. settings. In this work, we present a unifying framework with topics in causal inference to make a case for the use of DA beyond just the i.i.d. setting, but for generalization across interventions as well. Specifically, we argue that when the outcome generating mechanism is invariant to our choice of DA, then such augmentations can effectively be thought of as interventions on the treatment generating mechanism itself. This can potentially help to reduce bias in causal effect estimation arising from hidden confounders. In the presence of such unobserved confounding we typically make use of instrumental variables (IVs) -- sources of treatment randomization that are conditionally independent of the outcome. However, IVs may not be as readily available as DA for many applications, which is the main motivation behind this work. By appropriately regularizing IV based estimators, we introduce the concept of IV-like (IVL) regression for mitigating confounding bias and improving predictive performance across interventions even when certain IV properties are relaxed. Finally, we cast parameterized DA as an IVL regression problem and show that, when used in composition, it can simulate a worst-case application of such DA, further improving performance on causal estimation and generalization tasks beyond what simple DA may offer. This is shown both theoretically for the population case and via simulation experiments for the finite sample case using a simple linear example. We also present real data experiments to support our case.</description>
|
|
<guid isPermaLink="false">oai:arXiv.org:2510.25128v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Uzair Akbar, Niki Kilbertus, Hao Shen, Krikamol Muandet, Bo Dai</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Optimal Fairness under Local Differential Privacy</title> |
|
|
<link>https://arxiv.org/abs/2511.16377</link> |
|
|
<description>arXiv:2511.16377v2 Announce Type: replace-cross |
|
|
Abstract: We investigate how to optimally design local differential privacy (LDP) mechanisms that reduce data unfairness and thereby improve fairness in downstream classification. We first derive a closed-form optimal mechanism for binary sensitive attributes and then develop a tractable optimization framework that yields the corresponding optimal mechanism for multi-valued attributes. As a theoretical contribution, we establish that for discrimination-accuracy optimal classifiers, reducing data unfairness necessarily leads to lower classification unfairness, thus providing a direct link between privacy-aware pre-processing and classification fairness. Empirically, we demonstrate that our approach consistently outperforms existing LDP mechanisms in reducing data unfairness across diverse datasets and fairness metrics, while maintaining accuracy close to that of non-private models. Moreover, compared with leading pre-processing and post-processing fairness methods, our mechanism achieves a more favorable accuracy-fairness trade-off while simultaneously preserving the privacy of sensitive attributes. Taken together, these results highlight LDP as a principled and effective pre-processing fairness intervention technique.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2511.16377v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.CR</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Hrad Ghoukasian, Shahab Asoodeh</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Covariance Estimation for Matrix-variate Data via Fixed-rank Core Covariance Geometry</title> |
|
|
<link>https://arxiv.org/abs/2512.01070</link> |
|
|
<description>arXiv:2512.01070v3 Announce Type: replace-cross |
|
|
Abstract: We study the geometry of the fixed-rank core covariance manifold and propose a novel covariance estimator for matrix-variate data leveraging this geometry. To generalize the separable covariance model, Hoff, McCormack, and Zhang (2023) showed that every covariance matrix $\Sigma$ of $p_1\times p_2$ matrix-variate data uniquely decomposes into a separable component $K$ and a core component $C$. Such a decomposition also exists for rank-$r$ $\Sigma$ if $p_1/p_2+p_2/p_1<r$, with $C$ sharing the same rank. They posed an open question on whether a partial-isotropy structure can be imposed on $C$ for high-dimensional covariance estimation. We address this question by showing that a partial-isotropy rank-$r$ core is a non-trivial convex combination of a rank-$r$ core and $I_p$ for $p:=p_1p_2$. This motivates studying the geometry of the space of rank-$r$ cores, $\mathcal{C}_{p_1,p_2,r}^+$. We show that $\mathcal{C}_{p_1,p_2,r}^+$ is a smooth manifold, except for a measure-zero subset, whereas $\mathcal{C}_{p_1,p_2}^{++}:=\mathcal{C}_{p_1,p_2,p}^+$ is itself a smooth manifold. The geometric properties, including smoothness of the positive definite cone via separability and the Riemannian gradient and Hessian operator relevant to $\mathcal{C}_{p_1,p_2,r}^+$, are also derived. Using this geometry, we propose a partial-isotropy core shrinkage estimator for matrix-variate data, supported by numerical illustrations.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.01070v3</guid> |
|
|
<category>math.DG</category> |
|
|
<category>stat.ME</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://arxiv.org/licenses/nonexclusive-distrib/1.0/</dc:rights> |
|
|
<dc:creator>Bongjung Sung</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Tuning-Free Structured Sparse Recovery of Multiple Measurement Vectors using Implicit Regularization</title> |
|
|
<link>https://arxiv.org/abs/2512.03393</link> |
|
|
<description>arXiv:2512.03393v2 Announce Type: replace-cross |
|
|
Abstract: Recovering jointly sparse signals in the multiple measurement vectors (MMV) setting is a fundamental problem in machine learning, but traditional methods often require careful parameter tuning or prior knowledge of the sparsity of the signal and/or noise variance. We propose a tuning-free framework that leverages implicit regularization (IR) from overparameterization to overcome this limitation. Our approach reparameterizes the estimation matrix into factors that decouple the shared row-support from individual vector entries and applies gradient descent to a standard least-squares objective. We prove that with a sufficiently small and balanced initialization, the optimization dynamics exhibit a "momentum-like" effect where the true support grows significantly faster. Leveraging a Lyapunov-based analysis of the gradient flow, we further establish formal guarantees that the solution trajectory converges towards an idealized row-sparse solution. Empirical results demonstrate that our tuning-free approach achieves performance comparable to optimally tuned established methods. Furthermore, our framework significantly outperforms these baselines in scenarios where accurate priors are unavailable to the baselines.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.03393v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-sa/4.0/</dc:rights> |
|
|
<dc:creator>Lakshmi Jayalal, Sheetal Kalyani</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Defects and Inconsistencies in Solar Flare Data Sources: Implications for Machine Learning Forecasting</title> |
|
|
<link>https://arxiv.org/abs/2512.13417</link> |
|
|
<description>arXiv:2512.13417v2 Announce Type: replace-cross |
|
|
Abstract: Machine learning models for forecasting solar flares have been trained and evaluated using a variety of data sources, including Space Weather Prediction Center (SWPC) operational and science-quality data. Typically, data from these sources is minimally processed before being used to train and validate a forecasting model. However, predictive performance can be affected if defects and inconsistencies between these data sources are ignored. For a set of commonly used data sources, along with the software that queries and outputs processed data, we identify their defects and inconsistencies, quantify their extent, and show how they can affect predictions from data-driven machine-learning forecasting models. We also outline procedures for fixing these issues or at least mitigating their impacts. Finally, based on thorough comparisons of the effects of data sources on the trained forecasting model's predictive skill scores, we offer recommendations for using different data products in operational forecasting.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.13417v2</guid> |
|
|
<category>astro-ph.SR</category> |
|
|
<category>stat.AP</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Ke Hu, Kevin Jin, Victor Verma, Weihao Liu, Ward Manchester IV, Lulu Zhao, Tamas Gombosi, Yang Chen</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>Smoothing DiLoCo with Primal Averaging for Faster Training of LLMs</title> |
|
|
<link>https://arxiv.org/abs/2512.17131</link> |
|
|
<description>arXiv:2512.17131v2 Announce Type: replace-cross |
|
|
Abstract: We propose Generalized Primal Averaging (GPA), an extension of Nesterov's method that unifies and generalizes recent averaging-based optimizers like single-worker DiLoCo and Schedule-Free, within a non-distributed setting. While DiLoCo relies on a memory-intensive two-loop structure to periodically aggregate pseudo-gradients using Nesterov momentum, GPA eliminates this complexity by decoupling Nesterov's interpolation constants to enable smooth iterate averaging at every step. Structurally, GPA resembles Schedule-Free but replaces uniform averaging with exponential moving averaging. Empirically, GPA consistently outperforms single-worker DiLoCo and AdamW with reduced memory overhead. GPA achieves speedups of 8.71%, 10.13%, and 9.58% over the AdamW baseline in terms of steps to reach target validation loss for Llama-160M, 1B, and 8B models, respectively. Similarly, on the ImageNet ViT workload, GPA achieves speedups of 7% and 25.5% in the small and large batch settings respectively. Furthermore, we prove that for any base optimizer with $O(\sqrt{T})$ regret, where $T$ is the number of iterations, GPA matches or exceeds the original convergence guarantees depending on the interpolation constants.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.17131v2</guid> |
|
|
<category>cs.LG</category> |
|
|
<category>cs.AI</category> |
|
|
<category>stat.ML</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights> |
|
|
<dc:creator>Aaron Defazio, Konstantin Mishchenko, Parameswaran Raman, Hao-Jun Michael Shi, Lin Xiao</dc:creator> |
|
|
</item> |
|
|
<item> |
|
|
<title>ScoreMatchingRiesz: Score Matching for Debiased Machine Learning and Policy Path Estimation</title> |
|
|
<link>https://arxiv.org/abs/2512.20523</link> |
|
|
<description>arXiv:2512.20523v2 Announce Type: replace-cross |
|
|
Abstract: We propose ScoreMatchingRiesz, a family of Riesz representer estimators based on score matching. The Riesz representer is a key nuisance component in debiased machine learning, enabling $\sqrt{n}$-consistent and asymptotically efficient estimation of causal and structural targets via Neyman-orthogonal scores. We formulate Riesz representer estimation as a score estimation problem. This perspective stabilizes representer estimation by allowing us to leverage denoising score matching and telescoping density ratio estimation. We also introduce the policy path, a parameter that captures how policy effects evolve under continuous treatments. We show that the policy path can be estimated via score matching by smoothly connecting average marginal effect (AME) and average policy effect (APE) estimation, which improves the interpretability of policy effects.</description> |
|
|
<guid isPermaLink="false">oai:arXiv.org:2512.20523v2</guid> |
|
|
<category>econ.EM</category> |
|
|
<category>cs.LG</category> |
|
|
<category>math.ST</category> |
|
|
<category>stat.ME</category> |
|
|
<category>stat.ML</category> |
|
|
<category>stat.TH</category> |
|
|
<pubDate>Mon, 02 Feb 2026 00:00:00 -0500</pubDate> |
|
|
<arxiv:announce_type>replace-cross</arxiv:announce_type> |
|
|
<dc:rights>http://creativecommons.org/licenses/by-nc-nd/4.0/</dc:rights> |
|
|
<dc:creator>Masahiro Kato</dc:creator> |
|
|
</item> |
|
|
</channel> |
|
|
</rss> |
|
|
|