diff --git "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml" "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
--- "a/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
+++ "b/raw_rss_feeds/https___arxiv_org_rss_stat.xml"
@@ -7,12 +7,814 @@
http://www.rssboard.org/rss-specification
en-us
- Sun, 04 Jan 2026 05:00:11 +0000
+ Wed, 07 Jan 2026 05:00:13 +0000
+ rss-help@arxiv.org
- Sun, 04 Jan 2026 00:00:00 -0500
+ Wed, 07 Jan 2026 00:00:00 -0500
- SundaySaturday
+ Sunday
+
+ Mitigating Long-Tailed Anomaly Score Distributions with Importance-Weighted Loss
+ https://arxiv.org/abs/2601.02440
+ arXiv:2601.02440v1 Announce Type: new
+Abstract: Anomaly detection is crucial in industrial applications for identifying rare and unseen patterns to ensure system reliability. Traditional models, trained on a single class of normal data, struggle with real-world distributions where normal data exhibit diverse patterns, leading to class imbalance and long-tailed anomaly score distributions (LTD). This imbalance skews model training and degrades detection performance, especially for minority instances. To address this issue, we propose a novel importance-weighted loss designed specifically for anomaly detection. Compared to previous methods for LTD in classification, our method does not require prior knowledge of normal data classes. Instead, we introduce a weighted loss function that incorporates importance sampling to align the distribution of anomaly scores with a target Gaussian, ensuring a balanced representation of normal data. Extensive experiments on three benchmark image datasets and three real-world hyperspectral imaging datasets demonstrate the robustness of our approach in mitigating LTD-induced bias. Our method improves anomaly detection performance by 0.043, highlighting its effectiveness in real-world applications.
+ oai:arXiv.org:2601.02440v1
+ stat.ML
+ cs.AI
+ cs.LG
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1109/IJCNN64981.2025.11229283
+ Proc. IJCNN 2025
+ Jungi Lee, Jungkwon Kim, Chi Zhang, Sangmin Kim, Kwangsun Yoo, Seok-Joo Byun
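A minimal sketch of the importance-weighting idea from the entry above, assuming scalar anomaly scores from an arbitrary detector; the kernel density estimate, standard-Gaussian target, and weight clipping are illustrative choices, not the authors' exact construction.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def importance_weights(scores, target=norm(0.0, 1.0), clip=10.0):
    """Weight each sample by target density / empirical density of its score,
    pulling the weighted score distribution toward the target Gaussian."""
    kde = gaussian_kde(scores)                          # empirical score density
    w = target.pdf(scores) / np.maximum(kde(scores), 1e-12)
    return np.clip(w, 0.0, clip)

scores = np.random.lognormal(size=1000)                 # long-tailed toy scores
per_sample_loss = scores ** 2                           # placeholder training loss
weighted_loss = np.mean(importance_weights(scores) * per_sample_loss)
```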
+
+
+ A novel finite-sample testing procedure for composite null hypotheses via pointwise rejection
+ https://arxiv.org/abs/2601.02529
+ arXiv:2601.02529v1 Announce Type: new
+Abstract: We propose a novel finite-sample procedure for testing composite null hypotheses. Traditional likelihood ratio tests based on asymptotic $\chi^2$ approximations often exhibit substantial bias in small samples. Our procedure rejects the composite null hypothesis $H_0: \theta \in \Theta_0$ if the simple null hypothesis $H_0: \theta = \theta_t$ is rejected for every $\theta_t$ in the null region $\Theta_0$, using an inflated significance level. We derive formulas that determine this inflated level so that the overall test approximately maintains the desired significance level even with small samples. Whereas the traditional likelihood ratio test applies when the null region is defined solely by equality constraints--that is, when it forms a manifold without boundary--the proposed approach extends to null hypotheses defined by both equality and inequality constraints. In addition, it accommodates null hypotheses expressed as unions of several component regions and can be applied to models involving nuisance parameters. Through several examples featuring nonstandard composite null hypotheses, we demonstrate numerically that the proposed test achieves accurate inference, exhibiting only a small gap between the actual and nominal significance levels for both small and large samples.
+ oai:arXiv.org:2601.02529v1
+ stat.ME
+ math.ST
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Joonha Park, Ming Wang
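A toy version of the pointwise-rejection rule described above, for a unit-variance normal mean with null region $\Theta_0 = [-1, 0]$ scanned on a grid; the inflated level 0.08 is a placeholder, whereas the paper derives formulas for it.

```python
import numpy as np
from scipy.stats import norm

def reject_composite(x, theta_grid, alpha_inflated):
    """Reject H0: theta in Theta_0 iff every simple null H0: theta = theta_t
    is rejected at the inflated level (here via a two-sided z-test)."""
    n, xbar = len(x), np.mean(x)
    for theta_t in theta_grid:
        p = 2 * norm.sf(abs(np.sqrt(n) * (xbar - theta_t)))
        if p >= alpha_inflated:
            return False        # some theta_t survives, so H0 is not rejected
    return True

x = np.random.normal(loc=1.5, size=30)
print(reject_composite(x, np.linspace(-1.0, 0.0, 201), alpha_inflated=0.08))
```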
+
+
+ Improve Power of Knockoffs with Annotation Information of Covariates
+ https://arxiv.org/abs/2601.02583
+ arXiv:2601.02583v1 Announce Type: new
+Abstract: Genome-wide association studies (GWAS) often find association signals between many genetic variants and traits of interest in a genomic region. Functional annotations of these variants provide valuable prior information that helps prioritize biologically relevant variants and enhances the power to detect causal variants. However, due to substantial correlations among these variants, a critical question is how to rigorously control the false discovery rate while effectively leveraging prior knowledge. We introduce annotation-informed knockoffs (AnnoKn), a knockoff-based method that performs annotation-informed variable selection with strict control of the false discovery rate. AnnoKn integrates the knockoff procedure with adaptive Lasso regression to evaluate the importance of multiple covariates while incorporating functional annotation information within a unified Bayesian framework. To facilitate real-world applications where individual-level data are not accessible, we further extend AnnoKn to operate on summary statistics. Through simulations and real-world applications to GTEx and GWAS datasets, we show that AnnoKn achieves superior power in detecting causal genetic variants compared with existing annotation-informed variable selection methods, while maintaining valid control over false discoveries.
+ oai:arXiv.org:2601.02583v1
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Xiangyu Zhang, Lijun Wang, Changjun Li, Chen Lin, Hongyu Zhao
+
+
+ Conformal novelty detection with false discovery rate control at the boundary
+ https://arxiv.org/abs/2601.02610
+ arXiv:2601.02610v1 Announce Type: new
+Abstract: Conformal novelty detection is a classical machine learning task for which uncertainty quantification is essential for providing reliable results. Recent work has shown that the BH procedure applied to conformal p-values controls the false discovery rate (FDR). Unfortunately, the BH procedure can lead to over-optimistic assessments near the rejection threshold, with an increase of false discoveries at the margin as pointed out by Soloff et al. (2024). This issue is solved therein by the support line (SL) correction, which is proven to control the boundary false discovery rate (bFDR) in the independent, non-conformal setting. The present work extends the SL method to the conformal setting: first, we show that the SL procedure can violate the bFDR control in this specific setting. Second, we propose several alternatives that provably control the bFDR in the conformal setting. Finally, numerical experiments with both synthetic and real data support our theoretical findings and show the relevance of the new proposed procedures.
+ oai:arXiv.org:2601.02610v1
+ stat.ME
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Zijun Gao, Etienne Roquain, Daniel Xiang
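For reference, the baseline pipeline the entry builds on (conformal p-values fed to Benjamini-Hochberg); a sketch only, and the SL/bFDR corrections proposed in the paper are not reproduced here.

```python
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """Conformal p-value of each test score against a null calibration sample."""
    m = len(cal_scores)
    return np.array([(1 + np.sum(cal_scores >= s)) / (m + 1) for s in test_scores])

def bh_reject(p, q=0.1):
    """Benjamini-Hochberg at level q; returns a boolean rejection mask."""
    n, order = len(p), np.argsort(p)
    below = np.nonzero(p[order] <= q * np.arange(1, n + 1) / n)[0]
    k = below.max() + 1 if below.size else 0
    mask = np.zeros(n, dtype=bool)
    mask[order[:k]] = True
    return mask
```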
+
+
+ Bayesian Multiple Multivariate Density-Density Regression
+ https://arxiv.org/abs/2601.02640
+ arXiv:2601.02640v1 Announce Type: new
+Abstract: We propose the first approach for multiple multivariate density-density regression (MDDR), making it possible to consider the regression of a multivariate density-valued response on multiple multivariate density-valued predictors. The core idea is to define a fitted distribution using a sliced Wasserstein barycenter (SWB) of push-forwards of the predictors and to quantify deviations from the observed response using the sliced Wasserstein (SW) distance. Regression functions, which map predictors' supports to the response support, and barycenter weights are inferred within a generalized Bayes framework, enabling principled uncertainty quantification without requiring a fully specified likelihood. The inference process can be seen as an instance of an inverse SWB problem. We establish theoretical guarantees, including the stability of the SWB under perturbations of marginals and barycenter weights, sample complexity of the generalized likelihood, and posterior consistency. For practical inference, we introduce a differentiable approximation of the SWB and a smooth reparameterization to handle the simplex constraint on barycenter weights, allowing efficient gradient-based MCMC sampling. We demonstrate MDDR in an application to inference for population-scale single-cell data. Posterior analysis under the MDDR model in this example includes inference on communication between multiple source/sender cell types and a target/receiver cell type. The proposed approach provides accurate fits, reliable predictions, and interpretable posterior estimates of barycenter weights, which can be used to construct sparse cell-cell communication networks.
+ oai:arXiv.org:2601.02640v1
+ stat.ME
+ stat.CO
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Khai Nguyen, Yang Ni, Peter Mueller
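The sliced Wasserstein (SW) distance at the core of the entry above can be approximated by Monte Carlo over random projections; a minimal sketch for equal-size point clouds, leaving aside the barycenter and generalized-Bayes machinery.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, seed=0):
    """Monte Carlo SW_2 distance between equal-size point clouds X, Y:
    average 1-D squared Wasserstein distances over random directions."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        # in 1-D, W_2^2 between empirical measures pairs sorted projections
        total += np.mean((np.sort(X @ theta) - np.sort(Y @ theta)) ** 2)
    return np.sqrt(total / n_proj)
```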
+
+
+ Statistical Inference for Fuzzy Clustering
+ https://arxiv.org/abs/2601.02656
+ arXiv:2601.02656v1 Announce Type: new
+Abstract: Clustering is a central tool in biomedical research for discovering heterogeneous patient subpopulations, where group boundaries are often diffuse rather than sharply separated. Traditional methods produce hard partitions, whereas soft clustering methods such as fuzzy $c$-means (FCM) allow mixed memberships and better capture uncertainty and gradual transitions. Despite the widespread use of FCM, principled statistical inference for fuzzy clustering remains limited.
+ We develop a new framework for weighted fuzzy $c$-means (WFCM) for settings with potential cluster size imbalance. Cluster-specific weights rebalance the classical FCM criterion so that smaller clusters are not overwhelmed by dominant groups, and the weighted objective induces a normalized density model with scale parameter $\sigma$ and fuzziness parameter $m$. Estimation is performed via a blockwise majorize--minimize (MM) procedure that alternates closed-form membership and centroid updates with likelihood-based updates of $(\sigma,\mathbf{w})$. The intractable normalizing constant is approximated by importance sampling using a data-adaptive Gaussian mixture proposal. We further provide likelihood ratio tests for comparing cluster centers and bootstrap-based confidence intervals.
+ We establish consistency and asymptotic normality of the maximum likelihood estimator, validate the method through simulations, and illustrate it using single-cell RNA-seq and Alzheimer's Disease Neuroimaging Initiative (ADNI) data. These applications demonstrate stable uncertainty quantification and biologically meaningful soft memberships, ranging from well-separated cell populations under imbalance to a graded AD versus non-AD continuum consistent with disease progression.
+ oai:arXiv.org:2601.02656v1
+ stat.ME
+ cs.LG
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Qiuyi Wu, Zihan Zhu, Anru R. Zhang
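For orientation, the classical (unweighted) fuzzy c-means updates that the WFCM criterion rebalances; a sketch only, as the cluster weights, the normalized density model, and the MM/likelihood steps from the entry are omitted.

```python
import numpy as np

def fcm_step(X, C, m=2.0, eps=1e-12):
    """One alternating FCM update: memberships from distances, then centroids.
    The paper's WFCM variant additionally weights the criterion per cluster."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1) + eps   # (n, K) sq. dists
    inv = d2 ** (-1.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)                    # soft memberships
    Um = U ** m
    return U, (Um.T @ X) / Um.sum(axis=0)[:, None]              # new centroids
```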
+
+
+ Beyond Point Estimates: Toward Proper Statistical Inferencing and Reporting of Intraclass Correlation Coefficients
+ https://arxiv.org/abs/2601.02765
+ arXiv:2601.02765v1 Announce Type: new
+Abstract: Reporting test-retest reliability using the intraclass correlation coefficient (ICC) has received increasing attention due to the criticisms of poor transparency and replicability in neuroimaging research, as well as many other biomedical studies. Numerous studies have thus evaluated the reliability of their findings by comparing ICCs; however, they often failed to test statistical differences between ICCs or to report confidence intervals. Relying solely on point estimates may preclude valid inference about population-level differences and compromise the reliability of conclusions. To address this issue, this study systematically reviewed the use of ICC in articles published in NeuroImage from 2022 to 2024, highlighting the prevalence of misreporting and misuse of ICCs. We further provide practical guidelines for conducting appropriate statistical inference on ICCs. For practitioners in this area, we introduce an online application for statistical testing and sample size estimation when utilizing ICCs. We recalculated confidence intervals and formally tested ICC values reported in the reviewed articles, thereby reassessing the original inferences. Our results demonstrate that exclusive reliance on point estimates could lead to unreliable or even misleading conclusions. Specifically, only two of the eleven reviewed articles provided unequivocally valid statistical inferences based on ICCs, whereas two articles failed to yield any valid inference at all, raising serious concerns about the replicability of findings in this field. These results underscore the urgent need for rigorous inferential frameworks when reporting and interpreting ICCs.
+ oai:arXiv.org:2601.02765v1
+ stat.ME
+ stat.AP
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yufeng Liu, Xiangfei Hong, Shanbao Tong
+
+
+ Fast Conformal Prediction using Conditional Interquantile Intervals
+ https://arxiv.org/abs/2601.02769
+ arXiv:2601.02769v1 Announce Type: new
+Abstract: We introduce Conformal Interquantile Regression (CIR), a conformal regression method that efficiently constructs near-minimal prediction intervals with guaranteed coverage. CIR leverages black-box machine learning models to estimate outcome distributions through interquantile ranges, transforming these estimates into compact prediction intervals while achieving approximate conditional coverage. We further propose CIR+ (Conditional Interquantile Regression with More Comparison), which enhances CIR by incorporating a width-based selection rule for interquantile intervals. This refinement yields narrower prediction intervals while maintaining comparable coverage, though at the cost of slightly increased computational time. Both methods address key limitations of existing distributional conformal prediction approaches: they handle skewed distributions more effectively than Conformalized Quantile Regression, and they achieve substantially higher computational efficiency than Conformal Histogram Regression by eliminating the need for histogram construction. Extensive experiments on synthetic and real-world datasets demonstrate that our methods optimally balance predictive accuracy and computational efficiency compared to existing approaches.
+ oai:arXiv.org:2601.02769v1
+ stat.ML
+ cs.LG
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Naixin Guo, Rui Luo, Zhixin Zhou
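One way to read the interquantile construction above: among candidate intervals $[\hat Q(\tau), \hat Q(\tau + 1 - \alpha)]$, keep the narrowest. A hypothetical sketch in which quantile_fn stands for any black-box conditional quantile estimator; the conformal calibration step is omitted.

```python
import numpy as np

def narrowest_interquantile(quantile_fn, x, alpha=0.1, n_grid=20):
    """Scan intervals [Q(tau), Q(tau + 1 - alpha)] and return the narrowest."""
    best = None
    for tau in np.linspace(0.0, alpha, n_grid):
        lo, hi = quantile_fn(x, tau), quantile_fn(x, tau + 1.0 - alpha)
        if best is None or hi - lo < best[1] - best[0]:
            best = (lo, hi)
    return best
```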
+
+
+ Decision-Theoretic Robustness for Network Models
+ https://arxiv.org/abs/2601.02811
+ arXiv:2601.02811v1 Announce Type: new
+Abstract: Bayesian network models (Erdos-Renyi, stochastic block models, random dot product graphs, graphons) are widely used in neuroscience, epidemiology, and the social sciences, yet real networks are sparse, heterogeneous, and exhibit higher-order dependence. How stable are network-based decisions, model selection, and policy recommendations to small model misspecification? We study local decision-theoretic robustness by allowing the posterior to vary within a small Kullback-Leibler neighborhood and choosing actions that minimize worst-case posterior expected loss. Exploiting low-dimensional functionals available under exchangeability, we (i) adapt decision-theoretic robustness to exchangeable graphs via graphon limits and derive sharp small-radius expansions of robust posterior risk; under squared loss the leading inflation is controlled by the posterior variance of the loss, and for robustness indices that diverge at percolation/fragmentation thresholds we obtain a universal critical exponent describing the explosion of decision uncertainty near criticality; (ii) develop a nonparametric minimax theory for robust model selection between sparse Erdos-Renyi and block models, showing, via robustness error exponents, that no Bayesian or frequentist method can uniformly improve upon the decision-theoretic limits over configuration models and sparse graphon classes for percolation-type functionals; and (iii) propose a practical algorithm based on entropic tilting of posterior or variational samples, and demonstrate it on functional brain connectivity and Karnataka village social networks.
+ oai:arXiv.org:2601.02811v1
+ math.ST
+ stat.ME
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Marios Papamichalis, Regina Ruane, Simon Lunagomez, Swati Chandna
+
+
+ Scalable Ultra-High-Dimensional Quantile Regression with Genomic Applications
+ https://arxiv.org/abs/2601.02826
+ arXiv:2601.02826v1 Announce Type: new
+Abstract: Modern datasets arising from social media, genomics, and biomedical informatics are often heterogeneous and (ultra) high-dimensional, creating substantial challenges for conventional modeling techniques. Quantile regression (QR) not only offers a flexible way to capture heterogeneous effects across the conditional distribution of an outcome, but also naturally produces prediction intervals that help quantify uncertainty in future predictions. However, classical QR methods can face serious memory and computational constraints in large-scale settings. These limitations motivate the use of parallel computing to maintain tractability. While extensive work has examined sample-splitting strategies in settings where the number of observations $n$ greatly exceeds the number of features $p$, the equally important (ultra) high-dimensional regime ($p \gg n$) has been comparatively underexplored. To address this gap, we introduce a feature-splitting proximal point algorithm, FS-QRPPA, for penalized QR in the high-dimensional regime. Leveraging recent developments in variational analysis, we establish a Q-linear convergence rate for FS-QRPPA and demonstrate its superior scalability in large-scale genomic applications from the UK Biobank relative to existing methods. Moreover, FS-QRPPA yields more accurate coefficient estimates and better coverage for prediction intervals than current approaches. We provide a parallel implementation in the R package fsQRPPA, making penalized QR tractable on large-scale datasets.
+ oai:arXiv.org:2601.02826v1
+ stat.ME
+ stat.AP
+ stat.CO
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hanqing Wu, Jonas Wallin, Iuliana Ionita-Laza
+
+
+ Collapsed Structured Block Models for Community Detection in Complex Networks
+ https://arxiv.org/abs/2601.02828
+ arXiv:2601.02828v1 Announce Type: new
+Abstract: Community detection seeks to recover mesoscopic structure from network data that may be binary, count-valued, signed, directed, weighted, or multilayer. The stochastic block model (SBM) explains such structure by positing a latent partition of nodes and block-specific edge distributions. In Bayesian SBMs, standard MCMC alternates between updating the partition and sampling block parameters, which can hinder mixing and complicate principled comparison across different partitions and numbers of communities. We develop a collapsed Bayesian SBM framework in which block-specific nuisance parameters are analytically integrated out under conjugate priors, so the marginal likelihood p(Y|z) depends only on the partition z and blockwise sufficient statistics. This yields fast local Gibbs/Metropolis updates based on ratios of closed-form integrated likelihoods and provides evidence-based complexity control that discourages gratuitous over-partitioning. We derive exact collapsed marginals for the most common SBM edge types, namely Beta-Bernoulli (binary), Gamma-Poisson (counts), and Normal-Inverse-Gamma (Gaussian weights), and we extend collapsing to gap-constrained SBMs via truncated conjugate priors that enforce explicit upper bounds on between-community connectivity. We further show that the same collapsed strategy supports directed SBMs that model reciprocity through dyad states, signed SBMs via categorical block models, and multiplex SBMs where multiple layers contribute additive evidence for a shared partition. Across synthetic benchmarks and real networks (including email communication, hospital contact counts, and citation graphs), collapsed inference produces accurate partitions and interpretable posterior block summaries of within- and between-community interaction strengths while remaining computationally simple and modular.
+ oai:arXiv.org:2601.02828v1
+ math.ST
+ stat.ME
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Marios Papamichalis, Regina Ruane
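For the Beta-Bernoulli (binary) case above, the collapsing step reduces to a closed-form block evidence; a minimal sketch, where a Gibbs move compares sums of these terms over the blocks a node's reassignment would touch.

```python
from scipy.special import betaln

def log_block_evidence(n_edges, n_pairs, a=1.0, b=1.0):
    """Collapsed Beta-Bernoulli marginal likelihood of one block: the block's
    edge probability is integrated out analytically under a Beta(a, b) prior."""
    return betaln(a + n_edges, b + n_pairs - n_edges) - betaln(a, b)
```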
+
+
+ Bayes Factor Group Sequential Designs
+ https://arxiv.org/abs/2601.02851
+ arXiv:2601.02851v1 Announce Type: new
+Abstract: The Bayes factor, the data-based updating factor from prior to posterior odds, is a principled measure of relative evidence for two competing hypotheses. It is naturally suited to sequential data analysis in settings such as clinical trials and animal experiments, where early stopping for efficacy or futility is desirable. However, designing such studies is challenging because computing design characteristics, such as the probability of obtaining conclusive evidence or the expected sample size, typically requires computationally intensive Monte Carlo simulations, as no closed-form or efficient numerical methods exist. To address this issue, we extend results from classical group sequential design theory to sequential Bayes factor designs. The key idea is to derive Bayes factor stopping regions in terms of the z-statistic and use the known distribution of the cumulative z-statistics to compute stopping probabilities through multivariate normal integration. The resulting method is fast, accurate, and simulation-free. We illustrate it with examples from clinical trials, animal experiments, and psychological studies. We also provide an open-source implementation in the bfpwr R package. Our method makes exploring sequential Bayes factor designs as straightforward as classical group sequential designs, enabling researchers to rapidly design informative and efficient experiments.
+ oai:arXiv.org:2601.02851v1
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Samuel Pawel, Leonhard Held
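The computational core described above is multivariate normal integration under the canonical joint distribution of cumulative z-statistics, with Cov(Z_j, Z_k) = sqrt(t_j / t_k) for information fractions t_j <= t_k; a sketch in Python rather than the authors' bfpwr R package.

```python
import numpy as np
from scipy.stats import multivariate_normal

def prob_no_crossing(upper_bounds, info_fracs, drift=0.0):
    """P(Z_j < b_j at every analysis j) for cumulative z-statistics under the
    canonical covariance sqrt(t_j / t_k), with mean drift * sqrt(t_j)."""
    t = np.asarray(info_fracs, dtype=float)
    cov = np.sqrt(np.minimum.outer(t, t) / np.maximum.outer(t, t))
    mean = drift * np.sqrt(t)
    return multivariate_normal(mean=mean, cov=cov).cdf(np.asarray(upper_bounds))

# e.g., two analyses at half and full information with illustrative bounds
print(prob_no_crossing([2.5, 2.0], info_fracs=[0.5, 1.0], drift=2.8))
```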
+
+
+ On the bias of the Hoover index estimator: Results for the gamma distribution
+ https://arxiv.org/abs/2601.03059
+ arXiv:2601.03059v1 Announce Type: new
+Abstract: The Hoover index is a widely used measure of inequality with an intuitive interpretation, yet little is known about the finite-sample properties of its empirical estimator. In this paper, we derive a simple expression for the expected value of the Hoover index estimator for general non-negative populations, based on Laplace transform techniques and exponential tilting. This unified framework applies to both continuous and discrete distributions. Explicit bias expressions are obtained for gamma populations, showing that the estimator is generally biased in finite samples. Numerical and simulation results illustrate the magnitude of the bias and its dependence on the underlying distribution and sample size.
+ oai:arXiv.org:2601.03059v1
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by/4.0/
+ Roberto Vila, Helton Saulo
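The empirical Hoover index is half the sum of absolute deviations divided by the total, so its finite-sample bias under a gamma population can be probed directly by simulation; a small sketch (the paper's Laplace-transform bias expressions are not reproduced).

```python
import numpy as np

def hoover(x):
    """Empirical Hoover index: sum |x_i - xbar| / (2 * sum x_i)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x - x.mean()).sum() / (2.0 * x.sum())

rng = np.random.default_rng(0)
shape, n = 2.0, 20
estimates = [hoover(rng.gamma(shape, size=n)) for _ in range(20_000)]
print(np.mean(estimates))  # compare with the population index to see the bias
```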
+
+
+ A non-parametric approach for estimating the correlation between log-rank test statistics with applications to a conjunctive power calculation
+ https://arxiv.org/abs/2601.03069
+ arXiv:2601.03069v1 Announce Type: new
+Abstract: We present a method for estimating the correlation between log-rank test statistics evaluating separate null hypotheses for two time-to-event endpoints. The correlation is estimated using subject-level data by a non-parametric approach based on the independent and identically distributed (iid) decomposition of the log-rank test statistic under any alternative. Using the iid decomposition, we are able to make an assumption-lean estimation of the correlation. A motivating example using the developed approach is provided. Here, we illustrate how the suggested approach can be used to give a realistic quantification of expected conjunctive power that can guide the design of a new randomized clinical trial using historical data. Finally, we investigate the method's finite sample properties via a simulation study that confirms unbiased and consistent behavior of the proposed approach. In addition, the simulation study gives insight into the effects of censoring on the correlation between the log-rank test statistics.
+ oai:arXiv.org:2601.03069v1
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Anne Lyngholm Soerensen, Paul Blanche, Henrik Ravn, Christian Pipper
+
+
+ Computationally Efficient Estimation of Localized Treatment Effects in High-Dimensional Design Spaces using Gaussian Process Regression
+ https://arxiv.org/abs/2601.03105
+ arXiv:2601.03105v1 Announce Type: new
+Abstract: Population-scale agent-based simulations of the opioid epidemic help evaluate intervention strategies and overdose outcomes in heterogeneous communities and provide estimates of localized treatment effects, which support the design of locally-tailored policies for precision public health. However, it is prohibitively costly to run simulations of all treatment conditions in all communities because the number of possible treatments grows exponentially with the number of interventions and levels at which they are applied. To address this need efficiently, we develop a metamodel framework, whereby treatment outcomes are modeled using a response function whose coefficients are learned through Gaussian process regression (GPR) on locally-contextualized covariates. We apply this framework to efficiently estimate treatment effects on overdose deaths in Pennsylvania counties. In contrast to classical designs such as fractional factorial design or Latin hypercube sampling, our approach leverages spatial correlations and posterior uncertainty to sequentially sample the most informative counties and treatment conditions. Using a calibrated agent-based opioid epidemic model, informed by county-level overdose mortality and baseline dispensing rate data for different treatments, we obtained county-level estimates of treatment effects on overdose deaths per 100,000 population for all treatment conditions in Pennsylvania, achieving approximately 5% average relative error using one-tenth the number of simulation runs required for exhaustive evaluation. Our bi-level framework provides a computationally efficient approach to decision support for policy makers, enabling rapid evaluation of alternative resource-allocation strategies to mitigate the opioid epidemic in local communities. The same analytical framework can be applied to guide precision public health interventions in other epidemic settings.
+ oai:arXiv.org:2601.03105v1
+ stat.AP
+ cs.MA
+ cs.SI
+ physics.soc-ph
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Abdulrahman A. Ahmed, M. Amin Rahimian, Qiushi Chen, Praveen Kumar
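A stripped-down version of the uncertainty-driven sampling loop sketched above, using scikit-learn's Gaussian process regressor as a stand-in metamodel; county covariates, treatment encodings, and the bi-level structure are all abstracted away.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_simulation(X_run, y_run, X_pool):
    """Fit a GP metamodel to completed simulation runs and pick the candidate
    (county, treatment) setting with the largest posterior uncertainty."""
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp.fit(X_run, y_run)
    _, std = gp.predict(X_pool, return_std=True)
    return int(np.argmax(std))      # index of the most informative next run
```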
+
+
+ Self-Supervised Learning from Noisy and Incomplete Data
+ https://arxiv.org/abs/2601.03244
+ arXiv:2601.03244v1 Announce Type: new
+Abstract: Many important problems in science and engineering involve inferring a signal from noisy and/or incomplete observations, where the observation process is known. Historically, this problem has been tackled using hand-crafted regularization (e.g., sparsity, total-variation) to obtain meaningful estimates. Recent data-driven methods often offer better solutions by directly learning a solver from examples of ground-truth signals and associated observations. However, in many real-world applications, obtaining ground-truth references for training is expensive or impossible. Self-supervised learning methods offer a promising alternative by learning a solver from measurement data alone, bypassing the need for ground-truth references. This manuscript provides a comprehensive summary of different self-supervised methods for inverse problems, with a special emphasis on their theoretical underpinnings, and presents practical applications in imaging inverse problems.
+ oai:arXiv.org:2601.03244v1
+ stat.ML
+ cs.LG
+ eess.IV
+ Wed, 07 Jan 2026 00:00:00 -0500
+ new
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Julián Tachella, Mike Davies
+
+
+ Breaking Rank - A Novel Unscented Kalman Filter for Parameter Estimations of a Lumped-Parameter Cardiovascular Model
+ https://arxiv.org/abs/2601.02390
+ arXiv:2601.02390v1 Announce Type: cross
+Abstract: We make modifications to the unscented Kalman filter (UKF) which bestow almost complete practical identifiability upon a lumped-parameter cardiovascular model with 10 parameters and 4 output observables: a highly non-linear, stiff problem of clinical significance. The modifications overcome the challenging problems of rank deficiency when applying the UKF to parameter estimation. Rank deficiency usually means only a small subset of parameters can be estimated. Traditionally, pragmatic compromises are made, such as selecting an optimal subset of parameters for estimation and fixing non-influential parameters. Kalman filters are typically used for dynamical state tracking, to facilitate the control input u at every time step. However, for the purpose of parameter estimation, this constraint no longer applies. Our modifications transform the utility of the UKF for parameter estimation, including minimally influential parameters, with excellent robustness (i.e., under severe noise corruption, challenging patho-physiology, and no prior knowledge of parameter distributions). The modified UKF algorithm is robust in recovering almost all parameters to over 98% accuracy, over 90% of the time, on a challenging target data set of 50 ten-parameter samples. We compare this to the original implementation of the UKF algorithm for parameter estimation and demonstrate a significant improvement.
+ oai:arXiv.org:2601.02390v1
+ cs.IT
+ math.IT
+ stat.AP
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Alex Thornton, Ian Halliday, Harry Saxton, Xu Xu
+
+
+ Detecting and Mitigating Treatment Leakage in Text-Based Causal Inference: Distillation and Sensitivity Analysis
+ https://arxiv.org/abs/2601.02400
+ arXiv:2601.02400v1 Announce Type: cross
+Abstract: Text-based causal inference increasingly employs textual data as proxies for unobserved confounders, yet this approach introduces a previously undertheorized source of bias: treatment leakage. Treatment leakage occurs when text intended to capture confounding information also contains signals predictive of treatment status, thereby inducing post-treatment bias in causal estimates. Critically, this problem can arise even when documents precede treatment assignment, as authors may employ future-referencing language that anticipates subsequent interventions. Despite growing recognition of this issue, no systematic methods exist for identifying and mitigating treatment leakage in text-as-confounder applications. This paper addresses this gap through three contributions. First, we provide formal statistical and set-theoretic definitions of treatment leakage that clarify when and why bias occurs. Second, we propose four text distillation methods -- similarity-based passage removal, distant supervision classification, salient feature removal, and iterative nullspace projection -- designed to eliminate treatment-predictive content while preserving confounder information. Third, we validate these methods through simulations using synthetic text and an empirical application examining International Monetary Fund structural adjustment programs and child mortality. Our findings indicate that moderate distillation optimally balances bias reduction against confounder retention, whereas overly stringent approaches degrade estimate precision.
+ oai:arXiv.org:2601.02400v1
+ econ.EM
+ cs.CL
+ econ.GN
+ q-fin.EC
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Adel Daoud, Richard Johansson, Connor T. Jerzak
+
+
+ First Provably Optimal Asynchronous SGD for Homogeneous and Heterogeneous Data
+ https://arxiv.org/abs/2601.02523
+ arXiv:2601.02523v1 Announce Type: cross
+Abstract: Artificial intelligence has advanced rapidly through large neural networks trained on massive datasets using thousands of GPUs or TPUs. Such training can occupy entire data centers for weeks and requires enormous computational and energy resources. Yet the optimization algorithms behind these runs have not kept pace. Most large-scale training still relies on synchronous methods, where workers must wait for the slowest device, wasting compute and amplifying the effects of hardware and network variability. Removing synchronization seems like a simple fix, but asynchrony introduces staleness, meaning updates computed on outdated models. This makes analysis difficult, especially when delays arise from system-level randomness rather than algorithmic choices. As a result, the time complexity of asynchronous methods remains poorly understood. This dissertation develops a rigorous framework for asynchronous first-order stochastic optimization, focusing on the core challenge of heterogeneous worker speeds. Within this framework, we show that with proper design, asynchronous SGD can achieve optimal time complexity, matching guarantees previously known only for synchronous methods. Our first contribution, Ringmaster ASGD, attains optimal time complexity in the homogeneous data setting by selectively discarding stale updates. The second, Ringleader ASGD, extends optimality to heterogeneous data, common in federated learning, using a structured gradient table mechanism. Finally, ATA improves resource efficiency by learning worker compute-time distributions and allocating tasks adaptively, achieving near-optimal wall-clock time with less computation. Together, these results establish asynchronous optimization as a theoretically sound and practically efficient foundation for distributed learning, showing that coordination without synchronization can be both feasible and optimal.
+ oai:arXiv.org:2601.02523v1
+ math.OC
+ cs.DC
+ cs.LG
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.25781/KAUST-WH234
+ Artavazd Maranjyan
+
+
+ Chronicals: A High-Performance Framework for LLM Fine-Tuning with 3.51x Speedup over Unsloth
+ https://arxiv.org/abs/2601.02609
+ arXiv:2601.02609v1 Announce Type: cross
+Abstract: Large language model fine-tuning is bottlenecked by memory: a 7B-parameter model requires 84GB (14GB for weights, 14GB for gradients, and 56GB for FP32 optimizer states), exceeding even A100-40GB capacity. We present Chronicals, an open-source training framework achieving 3.51x speedup over Unsloth through four synergistic optimizations: (1) fused Triton kernels eliminating 75% of memory traffic via RMSNorm (7x), SwiGLU (5x), and QK-RoPE (2.3x) fusion; (2) Cut Cross-Entropy reducing logit memory from 5GB to 135MB through online softmax computation; (3) LoRA+ with theoretically-derived 16x differential learning rates between adapter matrices; and (4) Best-Fit Decreasing sequence packing recovering 60-75% of compute wasted on padding.
+ On Qwen2.5-0.5B with A100-40GB, Chronicals achieves 41,184 tokens/second for full fine-tuning versus Unsloth's 11,736 tokens/second (3.51x). For LoRA at rank 32, we reach 11,699 tokens/second versus Unsloth MAX's 2,857 tokens/second (4.10x). Critically, we discovered that Unsloth's reported 46,000 tokens/second benchmark exhibited zero gradient norms--the model was not training.
+ We provide complete mathematical foundations: online softmax correctness proofs, FlashAttention IO complexity bounds O(N^2 d^2 M^{-1}), LoRA+ learning rate derivations from gradient magnitude analysis, and bin-packing approximation guarantees. All implementations, benchmarks, and proofs are available at https://github.com/Ajwebdevs/Chronicals with pip installation via https://pypi.org/project/chronicals/.
+ oai:arXiv.org:2601.02609v1
+ cs.LG
+ cs.AI
+ cs.CL
+ cs.DC
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Arjun S. Nair
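The 84GB figure in the abstract is straightforward accounting, assuming FP16 weights and gradients plus two FP32 Adam moment tensors; a worked check:

```python
params = 7e9                            # 7B parameters
weights_gb = params * 2 / 1e9           # FP16 weights:     14 GB
grads_gb   = params * 2 / 1e9           # FP16 gradients:   14 GB
adam_gb    = params * 2 * 4 / 1e9       # two FP32 moments: 56 GB
print(weights_gb + grads_gb + adam_gb)  # 84.0 GB, beyond an A100-40GB
```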
+
+
+ MAFS: Multi-head Attention Feature Selection for High-Dimensional Data via Deep Fusion of Filter Methods
+ https://arxiv.org/abs/2601.02668
+ arXiv:2601.02668v1 Announce Type: cross
+Abstract: Feature selection is essential for high-dimensional biomedical data, enabling stronger predictive performance, reduced computational cost, and improved interpretability in precision medicine applications. Existing approaches face notable challenges. Filter methods are highly scalable but cannot capture complex relationships or eliminate redundancy. Deep learning-based approaches can model nonlinear patterns but often lack stability, interpretability, and efficiency at scale. Single-head attention improves interpretability but is limited in capturing multi-level dependencies and remains sensitive to initialization, reducing reproducibility. Most existing methods rarely combine statistical interpretability with the representational power of deep learning, particularly in ultra-high-dimensional settings. Here, we introduce MAFS (Multi-head Attention-based Feature Selection), a hybrid framework that integrates statistical priors with deep learning capabilities. MAFS begins with filter-based priors for stable initialization and guide learning. It then uses multi-head attention to examine features from multiple perspectives in parallel, capturing complex nonlinear relationships and interactions. Finally, a reordering module consolidates outputs across attention heads, resolving conflicts and minimizing information loss to generate robust and consistent feature rankings. This design combines statistical guidance with deep modeling capacity, yielding interpretable importance scores while maximizing retention of informative signals. Across simulated and real-world datasets, including cancer gene expression and Alzheimer's disease data, MAFS consistently achieves superior coverage and stability compared with existing filter-based and deep learning-based alternatives, offering a scalable, interpretable, and robust solution for feature selection in high-dimensional biomedical data.
+ oai:arXiv.org:2601.02668v1
+ cs.LG
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Xiaoyan Sun, Qingyu Meng, Yalu Wen
+
+
+ Sampling non-log-concave densities via Hessian-free high-resolution dynamics
+ https://arxiv.org/abs/2601.02725
+ arXiv:2601.02725v1 Announce Type: cross
+Abstract: We study the problem of sampling from a target distribution $\pi(q)\propto e^{-U(q)}$ on $\mathbb{R}^d$, where $U$ can be non-convex, via the Hessian-free high-resolution (HFHR) dynamics, which is a second-order Langevin-type process that has $e^{-U(q)-\frac12|p|^2}$ as its unique invariant distribution, and it reduces to kinetic Langevin dynamics (KLD) as the resolution parameter $\alpha\to0$. The existing theory for HFHR dynamics in the literature is restricted to strongly-convex $U$, although numerical experiments are promising for non-convex settings as well. We focus on studying the convergence of HFHR dynamics when $U$ can be non-convex, which bridges a gap between theory and practice. Under a standard assumption of dissipativity and smoothness on $U$, we adopt the reflection/synchronous coupling method. This yields a Lyapunov-weighted Wasserstein distance in which the HFHR semigroup is exponentially contractive for all sufficiently small $\alpha>0$ whenever KLD is. We further show that, under an additional assumption that asymptotically $\nabla U$ has linear growth at infinity, the contraction rate for HFHR dynamics is strictly better than that of KLD, with an explicit gain. As a case study, we verify the assumptions and the resulting acceleration for three examples: a multi-well potential, Bayesian linear regression with $L^p$ regularizer and Bayesian binary classification. We conduct numerical experiments based on these examples, as well as an additional example of Bayesian logistic regression with real data processed by the neural networks, which illustrates the efficiency of the algorithms based on HFHR dynamics and verifies the acceleration and superior performance compared to KLD.
+ oai:arXiv.org:2601.02725v1
+ math.PR
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Xiaoyu Wang, Yingli Wang, Lingjiong Zhu
+
+
+ Language Hierarchization Provides the Optimal Solution to Human Working Memory Limits
+ https://arxiv.org/abs/2601.02740
+ arXiv:2601.02740v1 Announce Type: cross
+Abstract: Language is a uniquely human trait, conveying information efficiently by organizing word sequences in sentences into hierarchical structures. A central question persists: Why is human language hierarchical? In this study, we show that hierarchization optimally solves the challenge of our limited working memory capacity. We established a likelihood function that quantifies how well the average number of units according to the language processing mechanisms aligns with human working memory capacity (WMC) in a direct fashion. The maximum likelihood estimate (MLE) of this function, $\theta_{\mathrm{MLE}}$, turns out to be the mean of units. Through computational simulations of symbol sequences and validation analyses of natural language sentences, we uncover that hierarchical processing far surpasses linear processing in constraining the $\theta_{\mathrm{MLE}}$ values under the human WMC limit as sequence/sentence length increases. It also shows a converging pattern related to children's WMC development. These results suggest that constructing hierarchical structures optimizes the processing efficiency of sequential language input while staying within memory constraints, genuinely explaining the universal hierarchical nature of human language.
+ oai:arXiv.org:2601.02740v1
+ cs.CL
+ stat.AP
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Luyao Chen, Weibo Gao, Junjie Wu, Jinshan Wu, Angela D. Friederici
+
+
+ Varadhan Functions, Variances, and Means on Compact Riemannian Manifolds
+ https://arxiv.org/abs/2601.02832
+ arXiv:2601.02832v1 Announce Type: cross
+Abstract: Motivated by Varadhan's theorem, we introduce Varadhan functions, variances, and means on compact Riemannian manifolds as smooth approximations to their Fréchet counterparts. Given independent and identically distributed samples, we prove uniform laws of large numbers for their empirical versions. Furthermore, we prove central limit theorems for Varadhan functions and variances for each fixed $t\ge0$, and for Varadhan means for each fixed $t>0$. By studying small time asymptotics of gradients and Hessians of Varadhan functions, we build a strong connection to the central limit theorem for Fréchet means, without assumptions on the geometry of the cut locus.
+ oai:arXiv.org:2601.02832v1
+ math.PR
+ math.ST
+ stat.ME
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yueqi Cao
+
+
+ Modeling ICD-10 Morbidity and Multidimensional Poverty as a Spatial Network: Evidence from Thailand
+ https://arxiv.org/abs/2601.02848
+ arXiv:2601.02848v1 Announce Type: cross
+Abstract: Health and poverty in Thailand exhibit pronounced geographic structuring, yet the extent to which they operate as interconnected regional systems remains insufficiently understood. This study analyzes ICD-10 chapter-level morbidity and multidimensional poverty as outcomes embedded in a spatial interaction network. Interpreting Thailand's 76 provinces as nodes within a fixed-degree regional graph, we apply tools from spatial econometrics and social network analysis, including Moran's I, Local Indicators of Spatial Association (LISA), and Spatial Durbin Models (SDM), to assess spatial dependence and cross-provincial spillovers.
+ Our findings reveal strong spatial clustering across multiple ICD-10 chapters, with persistent high-high morbidity zones, particularly for digestive, respiratory, musculoskeletal, and symptom-based diseases, emerging in well-defined regional belts. SDM estimates demonstrate that spillover effects from neighboring provinces frequently exceed the influence of local deprivation, especially for living-condition, health-access, accessibility, and poor-household indicators. These patterns are consistent with contagion and contextual influence processes well established in social network theory.
+ By framing morbidity and poverty as interdependent attributes on a spatial network, this study contributes to the growing literature on structural diffusion, health inequality, and regional vulnerability. The results highlight the importance of coordinated policy interventions across provincial boundaries and demonstrate how network-based modeling can uncover the spatial dynamics of health and deprivation.
+ oai:arXiv.org:2601.02848v1
+ cs.SI
+ cs.CY
+ stat.AP
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Pratana Kukieattikool, Kittiya Ku-kiattikun, Anukool Noymai, Navaporn Surasvadi, Jantakarn Makma, Pubodin Pornratchpum, Watcharakon Noothong, Chainarong Amornbunchornvej
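Global Moran's I, the spatial-dependence statistic applied above, is a one-liner given a provincial adjacency (spatial weight) matrix W; a minimal sketch.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: (n / sum(W)) * (z' W z) / (z' z) for centered values z."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)
```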
+
+
+ Multi-Distribution Robust Conformal Prediction
+ https://arxiv.org/abs/2601.02998
+ arXiv:2601.02998v1 Announce Type: cross
+Abstract: In many fairness and distribution robustness problems, one has access to labeled data from multiple source distributions yet the test data may come from an arbitrary member or a mixture of them. We study the problem of constructing a conformal prediction set that is uniformly valid across multiple, heterogeneous distributions, in the sense that no matter which distribution the test point is from, the coverage of the prediction set is guaranteed to exceed a pre-specified level. We first propose a max-p aggregation scheme that delivers finite-sample, multi-distribution coverage given any conformity scores associated with each distribution. Upon studying several efficiency optimization programs subject to uniform coverage, we prove the optimality and tightness of our aggregation scheme, and propose a general algorithm to learn conformity scores that lead to efficient prediction sets after the aggregation under standard conditions. We discuss how our framework relates to group-wise distributionally robust optimization, sub-population shift, fairness, and multi-source learning. In synthetic and real-data experiments, our method delivers valid worst-case coverage across multiple distributions while greatly reducing the set size compared with naively applying max-p aggregation to single-source conformity scores, and can be comparable in size to single-source prediction sets with popular, standard conformity scores.
+ oai:arXiv.org:2601.02998v1
+ cs.LG
+ stat.ME
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by/4.0/
+ Yuqi Yang, Ying Jin
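One reading of the max-p aggregation described above: include a candidate value whenever its largest conformal p-value across sources exceeds alpha, which guarantees coverage no matter which source the test point came from; a sketch with generic scores, omitting the paper's learned scores and efficiency programs.

```python
import numpy as np

def conformal_p(cal_scores, s):
    """Conformal p-value of score s against one source's calibration scores."""
    return (1 + np.sum(np.asarray(cal_scores) >= s)) / (len(cal_scores) + 1)

def maxp_includes(cal_scores_by_source, s, alpha=0.1):
    """Keep a candidate with score s iff max_k p_k(s) > alpha; if the test point
    is from source k, miscoverage is at most P(p_k <= alpha) <= alpha."""
    return max(conformal_p(c, s) for c in cal_scores_by_source) > alpha
```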
+
+
+ Time-Aware Synthetic Control
+ https://arxiv.org/abs/2601.03099
+ arXiv:2601.03099v1 Announce Type: cross
+Abstract: The synthetic control (SC) framework is widely used for observational causal inference with time-series panel data. SC has been successful in diverse applications, but existing methods typically treat the ordering of pre-intervention time indices as interchangeable. This invariance means they may not fully take advantage of temporal structure when strong trends are present. We propose Time-Aware Synthetic Control (TASC), which employs a state-space model with a constant trend while preserving a low-rank structure of the signal. TASC uses the Kalman filter and Rauch-Tung-Striebel smoother: it first fits a generative time-series model with expectation-maximization and then performs counterfactual inference. We evaluate TASC on both simulated and real-world datasets, including policy evaluation and sports prediction. Our results suggest that TASC offers advantages in settings with strong temporal trends and high levels of observation noise.
+ oai:arXiv.org:2601.03099v1
+ cs.LG
+ econ.EM
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Saeyoung Rho, Cyrus Illick, Samhitha Narasipura, Alberto Abadie, Daniel Hsu, Vishal Misra
+
+
+ From Entropy to Epiplexity: Rethinking Information for Computationally Bounded Intelligence
+ https://arxiv.org/abs/2601.03220
+ arXiv:2601.03220v1 Announce Type: cross
+Abstract: Can we learn more from data than existed in the generating process itself? Can new and useful information be constructed from merely applying deterministic transformations to existing data? Can the learnable content in data be evaluated without considering a downstream task? On these questions, Shannon information and Kolmogorov complexity come up nearly empty-handed, in part because they assume observers with unlimited computational capacity and fail to target the useful information content. In this work, we identify and exemplify three seeming paradoxes in information theory: (1) information cannot be increased by deterministic transformations; (2) information is independent of the order of data; (3) likelihood modeling is merely distribution matching. To shed light on the tension between these results and modern practice, and to quantify the value of data, we introduce epiplexity, a formalization of information capturing what computationally bounded observers can learn from data. Epiplexity captures the structural content in data while excluding time-bounded entropy, the random unpredictable content exemplified by pseudorandom number generators and chaotic dynamical systems. With these concepts, we demonstrate how information can be created with computation, how it depends on the ordering of the data, and how likelihood modeling can produce more complex programs than present in the data generating process itself. We also present practical procedures to estimate epiplexity which we show capture differences across data sources, track with downstream performance, and highlight dataset interventions that improve out-of-distribution generalization. In contrast to principles of model selection, epiplexity provides a theoretical foundation for data selection, guiding how to select, generate, or transform data for learning systems.
+ oai:arXiv.org:2601.03220v1
+ cs.LG
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Marc Finzi, Shikai Qiu, Yiding Jiang, Pavel Izmailov, J. Zico Kolter, Andrew Gordon Wilson
+
+
+ Shallow-circuit Supervised Learning on a Quantum Processor
+ https://arxiv.org/abs/2601.03235
+ arXiv:2601.03235v1 Announce Type: cross
+Abstract: Quantum computing has long promised transformative advances in data analysis, yet practical quantum machine learning has remained elusive due to fundamental obstacles such as a steep quantum cost for the loading of classical data and poor trainability of many quantum machine learning algorithms designed for near-term quantum hardware. In this work, we show that one can overcome these obstacles by using a linear Hamiltonian-based machine learning method which provides a compact quantum representation of classical data via ground state problems for k-local Hamiltonians. We use the recent sample-based Krylov quantum diagonalization method to compute low-energy states of the data Hamiltonians, whose parameters are trained to express classical datasets through local gradients. We demonstrate the efficacy and scalability of the methods by performing experiments on benchmark datasets using up to 50 qubits of an IBM Heron quantum processor.
+ oai:arXiv.org:2601.03235v1
+ quant-ph
+ cs.LG
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Luca Candelori, Swarnadeep Majumder, Antonio Mezzacapo, Javier Robledo Moreno, Kharen Musaelian, Santhanam Nagarajan, Sunil Pinnamaneni, Kunal Sharma, Dario Villani
+
+
+ PET-TURTLE: Deep Unsupervised Support Vector Machines for Imbalanced Data Clusters
+ https://arxiv.org/abs/2601.03237
+ arXiv:2601.03237v1 Announce Type: cross
+Abstract: Foundation vision, audio, and language models enable zero-shot performance on downstream tasks via their latent representations. Recently, unsupervised learning of data group structure with deep learning methods has gained popularity. TURTLE, a state-of-the-art deep clustering algorithm, uncovers data labeling without supervision by alternating label and hyperplane updates, maximizing the hyperplane margin, in a similar fashion to support vector machines (SVMs). However, TURTLE assumes clusters are balanced; when data is imbalanced, it yields non-ideal hyperplanes that cause higher clustering error. We propose PET-TURTLE, which generalizes the cost function to handle imbalanced data distributions by a power law prior. Additionally, by introducing sparse logits in the labeling process, PET-TURTLE optimizes a simpler search space that in turn improves accuracy for balanced datasets. Experiments on synthetic and real data show that PET-TURTLE improves accuracy for imbalanced sources, prevents over-prediction of minority clusters, and enhances overall clustering.
+ oai:arXiv.org:2601.03237v1
+ cs.LG
+ eess.IV
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ 10.1109/LSP.2025.3636453
+ IEEE Signal Processing Letters, vol. 33, pp. 91-95, 2026
+ Javier Salazar Cavazos
+
+
+ At the Intersection of Deep Sequential Model Framework and State-space Model Framework: Study on Option Pricing
+ https://arxiv.org/abs/2012.07784
+ arXiv:2012.07784v2 Announce Type: replace
+Abstract: Inference and forecasting problems for nonlinear dynamical systems arise in a variety of contexts. Reservoir computing and deep sequential models, on the one hand, have demonstrated efficient, robust, and superior performance in modeling simple and chaotic dynamical systems. However, their innately deterministic nature partially detracts from their robustness to noisy systems, and their inability to offer uncertainty measurement is a further shortcoming of the framework. On the other hand, the traditional state-space model framework is robust to noise and carries measured uncertainty, forming a natural complement to reservoir computing and deep sequential models. We propose the unscented reservoir smoother (URS), a model that unifies deep sequential and state-space models to combine the strengths of both frameworks. Evaluated on noisy option pricing datasets, URS achieves highly competitive forecasting accuracy, especially at longer horizons, together with uncertainty measurement. Further extensions and implications of URS are also discussed toward a full integration of both frameworks.
+ oai:arXiv.org:2012.07784v2
+ stat.ML
+ cs.LG
+ math.DS
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ziyang Ding, Sayan Mukherjee
+
+
+ Using prior information to boost power in correlation structure support recovery
+ https://arxiv.org/abs/2111.11278
+ arXiv:2111.11278v2 Announce Type: replace
+Abstract: Hypothesis testing of structure in correlation and covariance matrices is of broad interest in many application areas. In high dimensions and/or small to moderate sample sizes, high error rates in testing are a substantial concern. This article focuses on increasing power through a frequentist assisted by Bayes (FAB) procedure. This FAB approach boosts power by including prior information on the correlation parameters. In particular, we suppose there is one of two sources of prior information: (i) a prior dataset that is distinct from the current data but related enough that it may contain valuable information about the correlation structure in the current data; and (ii) knowledge about a tendency for the correlations in different parameters to be similar so that it is appropriate to consider a hierarchical model. When the prior information is relevant, the proposed FAB approach can have significant gains in power. A divide-and-conquer algorithm is developed to reduce computational complexity in massive testing dimensions. We show improvements in power for detecting correlated gene pairs in genomic studies while maintaining control of Type I error or false discovery rate (FDR).
+ oai:arXiv.org:2111.11278v2
+ stat.ME
+ math.ST
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Ziyang Ding, David Dunson
+
+
+ Bayesian score calibration for approximate models
+ https://arxiv.org/abs/2211.05357
+ arXiv:2211.05357v5 Announce Type: replace
+Abstract: Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be challenging since the corresponding likelihood function is often intractable and model simulation may be computationally burdensome. Fortunately, in many of these situations it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to conduct Bayesian inference directly with a surrogate, but this can result in a posterior with poor uncertainty quantification. In this paper, we propose a new method for adjusting approximate posterior samples to reduce bias and improve posterior coverage properties. We do this by optimizing a transformation of the approximate posterior to maximize a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We develop supporting theory for our method and demonstrate beneficial corrections to approximate posteriors across several examples of increasing complexity.
+ oai:arXiv.org:2211.05357v5
+ stat.CO
+ stat.ME
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Joshua J Bon, David J Warne, David J Nott, Christopher Drovandi
+
+
+ Development of a high-resolution indoor radon map using a new machine learning-based probabilistic model and German radon survey data
+ https://arxiv.org/abs/2310.11143
+ arXiv:2310.11143v5 Announce Type: replace
+Abstract: Accurate knowledge of indoor radon concentration is crucial for assessing radon-related health effects or identifying radon-prone areas. Indoor radon concentration at the national scale is usually estimated on the basis of extensive measurement campaigns. However, characteristics of the sampled households often differ from the characteristics of the target population owing to the large number of relevant factors that control the indoor radon concentration, such as the availability of geogenic radon or floor level. We propose a model-based approach that allows a more realistic estimation of indoor radon distribution with a higher spatial resolution than a purely data-based approach. A modeling approach was used by applying a quantile regression forest to estimate the probability distribution function of indoor radon for each floor level of each residential building in Germany. Based on the estimated probability distribution function, a probabilistic Monte Carlo sampling technique was applied, enabling the combination and population weighting of floor-level predictions. In this way, the uncertainty of the individual predictions is effectively propagated into the estimate of variability at the aggregated level. The results show an approximate lognormal distribution of indoor radon in dwellings in Germany with an arithmetic mean of 63 Bq/m3, a geometric mean of 41 Bq/m3, and a 95th percentile of 180 Bq/m3. The exceedance probabilities for 100 and 300 Bq/m3 are 12.5% (10.5 million people affected) and 2.2% (1.9 million people affected), respectively. The advantages of our approach are that it yields a) an accurate estimation of indoor radon concentration even if the survey is not fully representative with respect to floor level and radon concentration in soil, and b) an estimate of the indoor radon distribution with a much higher spatial resolution than basic descriptive statistics.
+ oai:arXiv.org:2310.11143v5
+ stat.ML
+ cs.LG
+ physics.data-an
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1289/EHP14171
+ Environmental Health Perspectives 132 (9), 097009 (2024)
+ Eric Petermann, Peter Bossew, Joachim Kemski, Valeria Gruber, Nils Suhr, Bernd Hoffmann
+
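+ The population-weighted Monte Carlo aggregation step can be sketched in a few lines: draw uniforms, invert each building's predicted quantile function by interpolation, and pool the draws with population weights. Everything below (the quantile grid, the fake per-building quantile predictions, the weights) is illustrative, not the paper's data or pipeline.
+
+   import numpy as np
+
+   rng = np.random.default_rng(1)
+   q_grid = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
+   # Fake per-building quantile predictions (Bq/m3), e.g. from a
+   # quantile regression forest; rows = buildings, cols = q_grid.
+   pred_q = np.exp(rng.normal(np.log(50), 0.5, size=(3, 1))
+                   + rng.normal(0, 0.3, size=(3, 5)).cumsum(axis=1))
+   pred_q.sort(axis=1)                    # quantile functions are monotone
+   pop_w = np.array([0.5, 0.3, 0.2])      # assumed population weights
+
+   draws = []
+   for qs, w in zip(pred_q, pop_w):
+       u = rng.uniform(q_grid[0], q_grid[-1], size=int(10000 * w))
+       draws.append(np.interp(u, q_grid, qs))   # inverse-CDF sampling
+   samples = np.concatenate(draws)
+   print(samples.mean(), np.exp(np.log(samples).mean()),
+         np.percentile(samples, 95))      # arithmetic mean, GM, 95th pct.
+
+ Pooling the per-building draws is what propagates individual predictive uncertainty into the aggregate summaries, which is the abstract's point.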
+
+ Learning mirror maps in policy mirror descent
+ https://arxiv.org/abs/2402.05187
+ arXiv:2402.05187v3 Announce Type: replace
+Abstract: Policy Mirror Descent (PMD) is a popular framework in reinforcement learning, serving as a unifying perspective that encompasses numerous algorithms. These algorithms are derived through the selection of a mirror map and enjoy finite-time convergence guarantees. Despite its popularity, the exploration of PMD's full potential is limited, with the majority of research focusing on a particular mirror map -- namely, the negative entropy -- which gives rise to the renowned Natural Policy Gradient (NPG) method. It remains uncertain from existing theoretical studies whether the choice of mirror map significantly influences PMD's efficacy. In our work, we conduct empirical investigations to show that the conventional mirror map choice (NPG) often yields less-than-optimal outcomes across several standard benchmark environments. Using evolutionary strategies, we identify more efficient mirror maps that enhance the performance of PMD. We first focus on a tabular environment, i.e. Grid-World, where we relate existing theoretical bounds with the performance of PMD for a few standard mirror maps and the learned one. We then show that it is possible to learn a mirror map that outperforms the negative entropy in more complex environments, such as the MinAtar suite. Additionally, we demonstrate that the learned mirror maps generalize effectively to different tasks by testing each map across various other environments.
+ oai:arXiv.org:2402.05187v3
+ stat.ML
+ cs.LG
+ math.OC
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Carlo Alfano, Sebastian Towers, Silvia Sapora, Chris Lu, Patrick Rebeschini
+
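+ For concreteness, the negative-entropy mirror map yields the familiar multiplicative (NPG-style) PMD update pi_{t+1}(a|s) proportional to pi_t(a|s) * exp(eta * Q(s,a)). A minimal tabular sketch with a toy fixed Q-table rather than learned values:
+
+   import numpy as np
+
+   def pmd_step_neg_entropy(pi, Q, eta):
+       logits = np.log(pi) + eta * Q      # mirror step in the dual space
+       logits -= logits.max(axis=1, keepdims=True)
+       new_pi = np.exp(logits)
+       return new_pi / new_pi.sum(axis=1, keepdims=True)
+
+   pi = np.full((4, 3), 1 / 3)            # 4 states, 3 actions, uniform
+   Q = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 0.]])
+   for _ in range(50):
+       pi = pmd_step_neg_entropy(pi, Q, eta=0.5)
+   print(pi.round(3))                     # mass concentrates on argmax Q
+
+ Learning a mirror map, as the paper does, amounts to replacing this fixed update geometry with a parameterized one tuned by evolutionary strategies.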
+
+ Scalable Bayesian Inference for Generalized Linear Mixed Models via Stochastic Gradient MCMC
+ https://arxiv.org/abs/2403.03007
+ arXiv:2403.03007v3 Announce Type: replace
+Abstract: The generalized linear mixed model (GLMM) is widely used for analyzing correlated data, particularly in large-scale biomedical and social science applications. Scalable Bayesian inference for GLMMs is challenging because the marginal likelihood is intractable and conventional Markov chain Monte Carlo (MCMC) methods become computationally prohibitive as the number of subjects grows. We develop a stochastic gradient MCMC (SGMCMC) algorithm tailored to GLMMs that enables accurate posterior inference in the large-sample regime. Our approach uses Fisher's identity to construct an unbiased Monte Carlo estimator of the gradient of the marginal log-likelihood, making SGMCMC feasible when direct gradient computation is impossible. We analyze the additional variability introduced by both minibatching and gradient approximation, and derive a post-hoc covariance correction that yields properly calibrated posterior uncertainty. Through simulations, we show that the proposed method provides accurate posterior means and variances, outperforming existing approaches, including control variate methods, in large-$n$ settings. We further demonstrate the method's practical utility in an analysis of electronic health records data, where accounting for variance inflation materially changes scientific conclusions.
+ oai:arXiv.org:2403.03007v3
+ stat.CO
+ stat.ME
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Samuel I. Berchuck, Youngsoo Baek, Felipe A. Medeiros, Andrea Agazzi
+
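+ The engine of the method is Fisher's identity, grad log p(y|theta) = E[grad log p(y,u|theta) | y], which turns an intractable marginal gradient into a conditional expectation over the random effects u. Below is a minimal sketch for a Gaussian random-intercept model, approximating that expectation by self-normalized importance sampling from the prior; this is an illustrative choice that is consistent rather than exactly unbiased, whereas the paper constructs an unbiased estimator.
+
+   import numpy as np
+
+   rng = np.random.default_rng(2)
+   mu, tau, sigma = 1.0, 1.0, 0.5         # assumed parameter values
+   y = mu + rng.normal(0, tau) + rng.normal(0, sigma, size=20)  # one subject
+
+   M = 5000
+   u = rng.normal(0, tau, size=M)         # proposal: the prior of u
+   loglik = -0.5 * ((y[None, :] - mu - u[:, None]) ** 2 / sigma**2).sum(axis=1)
+   w = np.exp(loglik - loglik.max())
+   w /= w.sum()                           # self-normalized IS weights
+
+   grad_joint = (y[None, :] - mu - u[:, None]).sum(axis=1) / sigma**2
+   print((w * grad_joint).sum())          # MC estimate of d/dmu log p(y|theta)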
+
+ Semiparametric fiducial inference for Cox models
+ https://arxiv.org/abs/2404.18779
+ arXiv:2404.18779v2 Announce Type: replace
+Abstract: R. A. Fisher introduced the concept of fiducial inference as a potential replacement for the Bayesian posterior distribution in the 1930s. During the past century, fiducial approaches have been explored in various parametric and nonparametric settings. However, to the best of our knowledge, no fiducial inference has been developed in the realm of semiparametric statistics. In this paper, we propose a novel fiducial approach for semiparametric models. To streamline our presentation, we use the Cox proportional hazards model, which is the most popular model for the analysis of survival data, as a running example. Other models and extensions are also discussed. In our experiments, we find that our method performs well, especially in situations where the maximum likelihood estimator fails.
+ oai:arXiv.org:2404.18779v2
+ stat.ME
+ math.ST
+ stat.CO
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yifan Cui, Jan Hannig, Paul Edlefsen
+
+
+ Scalable magnetic resonance fingerprinting: Incremental inference of high dimensional elliptical mixtures from large data volumes
+ https://arxiv.org/abs/2412.10173
+ arXiv:2412.10173v2 Announce Type: replace
+Abstract: Magnetic Resonance Fingerprinting (MRF) is an emerging technology with the potential to revolutionize radiology and medical diagnostics. In comparison to traditional magnetic resonance imaging (MRI), MRF enables the rapid, simultaneous, non-invasive acquisition and reconstruction of multiple tissue parameters, paving the way for novel diagnostic techniques. In the original matching approach, reconstruction is based on the search for the best matches between in vivo acquired signals and a dictionary of high-dimensional simulated signals (fingerprints) with known tissue properties. A critical and limiting challenge is that the size of the simulated dictionary increases exponentially with the number of parameters, leading to an extremely costly subsequent matching. In this work, we propose to address this scalability issue by considering probabilistic mixtures of high-dimensional elliptical distributions, to learn more efficient dictionary representations. Mixture components are modelled as flexible elliptical shapes in low-dimensional subspaces. They are exploited to cluster similar signals and reduce their dimension locally cluster-wise to limit information loss. To estimate such a mixture model, we provide a new incremental algorithm capable of handling large numbers of signals, allowing us to go far beyond the hardware limitations encountered by standard implementations. We demonstrate, on simulated and real data, that our method effectively manages large volumes of MRF data while maintaining accuracy. It offers a more efficient solution for accurate tissue characterization and significantly reduces the computational burden, making the clinical application of MRF more practical and accessible.
+ oai:arXiv.org:2412.10173v2
+ stat.AP
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Geoffroy Oudoumanessah, Thomas Coudert, Carole Lartizien, Michel Dojat, Thomas Christen, Florence Forbes
+
+
+ SPARKLE: A Nonparametric Approach for Online Decision-Making with High-Dimensional Covariates
+ https://arxiv.org/abs/2503.16941
+ arXiv:2503.16941v3 Announce Type: replace
+Abstract: Personalized services are central to today's digital economy, and their sequential decisions are often modeled as contextual bandits. Modern applications pose two main challenges: high-dimensional covariates and the need for nonparametric models to capture complex reward-covariate relationships. We propose SPARKLE, a novel contextual bandit algorithm based on a sparse additive reward model that addresses both challenges through (i) a doubly penalized estimator for nonparametric reward estimation and (ii) an epoch-based design with adaptive screening to balance exploration and exploitation. We prove a sublinear regret bound that grows only logarithmically in the covariate dimensionality; to our knowledge, this is the first such result for nonparametric contextual bandits with high-dimensional covariates. We also derive an information-theoretic lower bound, and the gap to the upper bound vanishes as the reward smoothness increases. Extensive experiments on synthetic data and real data from video recommendation and personalized medicine show strong performance in high-dimensional settings.
+ oai:arXiv.org:2503.16941v3
+ stat.ML
+ cs.LG
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Wenjia Wang, Qingwen Zhang, Xiaowei Zhang
+
+
+ Successive classification learning for estimating quantile optimal treatment regimes
+ https://arxiv.org/abs/2507.11255
+ arXiv:2507.11255v2 Announce Type: replace
+Abstract: Quantile optimal treatment regimes (OTRs) aim to assign treatments that maximize a specified quantile of patients' outcomes. Compared to treatment regimes that target the mean outcome, quantile OTRs offer fairer regimes when a lower quantile is selected, as this improves outcomes for vulnerable patients. In this paper, we propose a novel method for estimating quantile OTRs by reformulating the problem as a successive classification task, solvable by training a sequence of classifiers, each built on the output of its predecessors. This reformulation enables us to leverage powerful machine learning techniques to enhance computational efficiency and handle complex decision boundaries. We also investigate the estimation of quantile OTRs when outcomes are discrete, a setting that has received limited attention in the literature. A key challenge is that direct extensions of existing methods to discrete outcomes often lead to inconsistency and ineffectiveness issues. To overcome this, we introduce a smoothing technique that maps discrete outcomes to continuous surrogates, enabling consistent and effective estimation. We provide theoretical guarantees to support our methodology, and demonstrate its superior performance through comprehensive simulation studies and real-data analysis.
+ oai:arXiv.org:2507.11255v2
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Junwen Xia, Jingxiao Zhang, Dehan Kong
+
+
+ Error analysis of a compositional score-based algorithm for simulation-based inference
+ https://arxiv.org/abs/2510.15817
+ arXiv:2510.15817v2 Announce Type: replace
+Abstract: Simulation-based inference (SBI) has become a widely used framework in applied sciences for estimating the parameters of stochastic models that best explain experimental observations. A central question in this setting is how to effectively combine multiple observations in order to improve parameter inference and obtain sharper posterior distributions. Recent advances in score-based diffusion methods address this problem by constructing a compositional score, obtained by aggregating individual posterior scores within the diffusion process. While it is natural to suspect that the accumulation of individual errors may significantly degrade sampling quality as the number of observations grows, this important theoretical issue has so far remained unexplored. In this paper, we study the compositional score produced by the GAUSS algorithm of Linhart et al. (2024) and establish an upper bound on its mean squared error in terms of both the individual score errors and the number of observations. We illustrate our theoretical findings on a Gaussian example, where all analytical expressions can be derived in a closed form.
+ oai:arXiv.org:2510.15817v2
+ stat.ML
+ cs.LG
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Camille Touron, Gabriel V. Cardoso, Julyan Arbel, Pedro L. C. Rodrigues
+
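+ In the Gaussian case the construction can be written out explicitly. With a N(0,1) prior and per-observation Gaussian posteriors, individual scores are linear in theta, and the factorization p(theta|x_{1:n}) proportional to p(theta)^{1-n} * prod_j p(theta|x_j) gives the aggregate score s_comp = sum_j s_j - (n-1) * s_prior. The sketch below shows only this baseline aggregation; the GAUSS algorithm analyzed in the paper adds covariance corrections inside the diffusion that are omitted here.
+
+   import numpy as np
+
+   def gaussian_score(theta, mean, var):
+       return -(theta - mean) / var       # score of a N(mean, var) density
+
+   theta = 0.3
+   obs_means, obs_var = np.array([0.8, 1.1, 0.9]), 0.5   # toy posteriors
+   s_prior = gaussian_score(theta, 0.0, 1.0)
+   s_indiv = gaussian_score(theta, obs_means, obs_var)
+   print(s_indiv.sum() - (len(obs_means) - 1) * s_prior)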
+
+ Distributionally Robust Synthetic Control: Ensuring Robustness Against Highly Correlated Controls and Weight Shifts
+ https://arxiv.org/abs/2511.02632
+ arXiv:2511.02632v2 Announce Type: replace
+Abstract: The synthetic control method estimates the causal effect by comparing the treated unit's outcomes to a weighted average of control units that closely match its pre-treatment outcomes, assuming the relationship between treated and control potential outcomes remains stable before and after treatment. However, the estimator may become unreliable when these relationships shift or when control units are highly correlated. To address these challenges, we introduce the Distributionally Robust Synthetic Control (DRoSC) method, which accommodates potential shifts in relationships and addresses high correlations among control units. The DRoSC method targets a novel causal estimand defined as the optimizer of a worst-case optimization problem considering all possible weights compatible with the pre-treatment period. When the identification conditions for the classical synthetic control method hold, the DRoSC method targets the same causal effect as the synthetic control; when these conditions are violated, we demonstrate that this new causal estimand is a conservative proxy for the non-identifiable causal effect. We further show that the DRoSC estimator's limiting distribution is non-normal and propose a novel inferential approach. We demonstrate its performance through numerical studies and an analysis of the economic impact of terrorism in the Basque Country.
+ oai:arXiv.org:2511.02632v2
+ stat.ME
+ econ.EM
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Taehyeon Koo, Zijian Guo
+
+
+ Source-Optimal Training is Transfer-Suboptimal
+ https://arxiv.org/abs/2511.08401
+ arXiv:2511.08401v4 Announce Type: replace
+Abstract: We prove that training a source model optimally for its own task is generically suboptimal when the objective is downstream transfer. We study the source-side optimization problem in L2-SP ridge regression and show a fundamental mismatch between the source-optimal and transfer-optimal source regularization: outside of a measure-zero set, $\tau_0^* \neq \tau_S^*$. We characterize the transfer-optimal source penalty $\tau_0^*$ as a function of task alignment and identify an alignment-dependent reversal: with imperfect alignment ($0<\rho<1$), transfer benefits from stronger source regularization, while in super-aligned regimes ($\rho>1$), transfer benefits from weaker regularization. Additionally, in isotropic settings, the decision of whether transfer helps is independent of the target sample size and noise, depending only on task alignment and source characteristics. We verify the linear predictions in a synthetic ridge regression experiment, and we present experiments on MNIST, CIFAR-10, and 20 Newsgroups as evidence that the source-optimal versus transfer-optimal mismatch persists in standard nonlinear transfer learning pipelines.
+ oai:arXiv.org:2511.08401v4
+ stat.ML
+ cs.LG
+ math.ST
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/publicdomain/zero/1.0/
+ C. Evans Hedges
+
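+ The L2-SP ridge setting has a closed form: beta_hat(tau) = (X'X + tau*I)^{-1}(X'y + tau*b_ref), shrinking toward a reference vector b_ref. The toy sweep below trains a source model for several source penalties tau_S, fine-tunes toward it on a small target task with a fixed L2-SP penalty, and prints source versus transfer risk; the constants and the 0.7 alignment factor are illustrative, not the paper's experiments.
+
+   import numpy as np
+
+   def l2_sp_ridge(X, y, b_ref, tau):
+       p = X.shape[1]
+       return np.linalg.solve(X.T @ X + tau * np.eye(p), X.T @ y + tau * b_ref)
+
+   rng = np.random.default_rng(3)
+   p, n_s, n_t = 10, 200, 20
+   b_source = rng.normal(size=p)
+   b_target = 0.7 * b_source + 0.3 * rng.normal(size=p)  # imperfect alignment
+   Xs, Xt = rng.normal(size=(n_s, p)), rng.normal(size=(n_t, p))
+   Xtest = rng.normal(size=(1000, p))
+   ys = Xs @ b_source + 0.5 * rng.normal(size=n_s)
+   yt = Xt @ b_target + 0.5 * rng.normal(size=n_t)
+
+   for tau_s in [0.1, 1.0, 10.0]:         # source-side regularization
+       b_src = l2_sp_ridge(Xs, ys, np.zeros(p), tau_s)   # source training
+       b_fin = l2_sp_ridge(Xt, yt, b_src, tau=5.0)       # L2-SP fine-tune
+       print(tau_s, np.mean((ys - Xs @ b_src) ** 2),
+             np.mean((Xtest @ (b_fin - b_target)) ** 2))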
+
+ Distributional Random Forests for Complex Survey Designs on Reproducing Kernel Hilbert Spaces
+ https://arxiv.org/abs/2512.08179
+ arXiv:2512.08179v2 Announce Type: replace
+Abstract: We study estimation of the conditional law $P(Y|X=x)$ and continuous functionals $\Psi(P(Y|X=x))$ when $Y$ takes values in a locally compact Polish space, $X \in \mathbb{R}^p$, and the observations arise from a complex survey design. We propose a survey-calibrated distributional random forest (SDRF) that incorporates complex-design features via a pseudo-population bootstrap, PSU-level honesty, and a Maximum Mean Discrepancy (MMD) split criterion computed from kernel mean embeddings of H\'{a}jek-type (design-weighted) node distributions. We provide a framework for analyzing forest-style estimators under survey designs; establish design consistency for the finite-population target and model consistency for the super-population target under explicit conditions on the design, kernel, resampling multipliers, and tree partitions. As far as we are aware, these are the first results on model-free estimation of conditional distributions under survey designs. Simulations under a stratified two-stage cluster design assess finite-sample performance and demonstrate the statistical price of ignoring the survey design. The broad applicability of SDRF is demonstrated using NHANES: We estimate the tolerance regions of the conditional joint distribution of two diabetes biomarkers, illustrating how distributional heterogeneity can support subgroup-specific risk profiling for diabetes mellitus in the U.S. population.
+ oai:arXiv.org:2512.08179v2
+ stat.ME
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Yating Zou, Marcos Matabuena, Michael R. Kosorok
+
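+ The MMD split criterion has a compact empirical form: embed each candidate child node as a weight-normalized kernel mean and take the squared RKHS distance between the two embeddings. A sketch with an RBF kernel and made-up survey weights (the paper's PSU-level honesty and pseudo-population bootstrap are not shown):
+
+   import numpy as np
+
+   def rbf(a, b, h=1.0):
+       d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
+       return np.exp(-d2 / (2 * h * h))
+
+   def weighted_mmd2(yl, wl, yr, wr):
+       wl, wr = wl / wl.sum(), wr / wr.sum()   # Hajek normalization
+       return (wl @ rbf(yl, yl) @ wl - 2 * wl @ rbf(yl, yr) @ wr
+               + wr @ rbf(yr, yr) @ wr)
+
+   rng = np.random.default_rng(4)
+   yl = rng.normal(0, 1, size=(40, 2))    # outcomes in the left child
+   yr = rng.normal(1, 1, size=(40, 2))    # outcomes in the right child
+   wl, wr = rng.uniform(1, 3, 40), rng.uniform(1, 3, 40)  # survey weights
+   print(weighted_mmd2(yl, wl, yr, wr))   # larger value => better split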
+
+ A Conversation with Mike West
+ https://arxiv.org/abs/2512.09790
+ arXiv:2512.09790v2 Announce Type: replace
+Abstract: Mike West is currently the Arts & Sciences Distinguished Professor Emeritus of Statistics and Decision Sciences at Duke University. Mike's research in Bayesian analysis spans multiple interlinked areas: theory and methods of dynamic models in time series analysis, foundations of inference and decision analysis, multivariate and latent structure analysis, stochastic computation and optimisation, among others. Inter-disciplinary R&D has ranged across applications in commercial forecasting, dynamic networks, finance, econometrics, signal processing, climatology, systems biology, genomics and neuroscience, among other areas. Among Mike's currently active research areas are forecasting, causal prediction and decision analysis in business, economic policy and finance, as well as in personal decision making. Mike led the development of academic statistics at Duke University from 1990-2002, and has been broadly engaged in professional leadership elsewhere. He is past president of the International Society for Bayesian Analysis (ISBA), and has served in founding roles and as board member for several professional societies, national and international centres and institutes. Recipient of numerous awards, Mike has been active in research with various companies, banks, government agencies and academic centres, co-founder of a successful biotechnology company, and board member for several financial and IT companies. He has published 4 books, several edited volumes and over 200 papers. Mike has worked with many undergraduate and Master's research students, and as of 2025 has mentored around 65 primary PhD students and postdoctoral associates who moved to academic, industrial or governmental positions involving advanced statistical and data science research.
+ oai:arXiv.org:2512.09790v2
+ stat.OT
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Hedibert F. Lopes, Filippo Ascolani
+
+
+ Causal Judge Evaluation: Calibrated Surrogate Metrics for LLM Systems
+ https://arxiv.org/abs/2512.11150
+ arXiv:2512.11150v2 Announce Type: replace
+Abstract: Measuring long-run LLM outcomes (user satisfaction, expert judgment, downstream KPIs) is expensive. Teams default to cheap LLM judges, but uncalibrated proxies can invert rankings entirely. Causal Judge Evaluation (CJE) makes it affordable to aim at the right target: calibrate cheap scores against 5% oracle labels, then evaluate at scale with valid uncertainty. On 4,961 Arena prompts, CJE achieves 99% ranking accuracy at 14x lower cost. Key findings: naive confidence intervals on uncalibrated scores achieve 0% coverage (CJE: ~95%); importance-weighted estimators fail despite 90%+ effective sample size. We introduce the Coverage-Limited Efficiency (CLE) diagnostic explaining why. CJE combines mean-preserving calibration (AutoCal-R), weight stabilization (SIMCal-W), and bootstrap inference that propagates calibration uncertainty (OUA), grounded in semiparametric efficiency theory.
+ oai:arXiv.org:2512.11150v2
+ stat.ME
+ stat.AP
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by/4.0/
+ Eddie Landesberg
+
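+ The calibration step can be sketched generically: learn a monotone map from judge scores to oracle outcomes on the ~5% labeled slice, then recenter so the calibrated mean matches the oracle mean on that slice. This conveys only the flavor of mean-preserving calibration; AutoCal-R, SIMCal-W, and the OUA bootstrap are specified in the paper, and the simulated scores below are made up.
+
+   import numpy as np
+   from sklearn.isotonic import IsotonicRegression
+
+   rng = np.random.default_rng(5)
+   n = 5000
+   truth = rng.uniform(0, 1, n)                    # latent oracle outcome
+   judge = np.clip(0.3 + 0.4 * truth + rng.normal(0, 0.1, n), 0, 1)
+
+   labeled = rng.random(n) < 0.05                  # ~5% oracle labels
+   iso = IsotonicRegression(out_of_bounds="clip")
+   iso.fit(judge[labeled], truth[labeled])
+   cal = iso.predict(judge)
+   cal += truth[labeled].mean() - iso.predict(judge[labeled]).mean()
+   print(judge.mean(), cal.mean(), truth.mean())   # raw vs calibrated mean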
+
+ Exact inference via quasi-conjugacy in two-parameter Poisson-Dirichlet hidden Markov models
+ https://arxiv.org/abs/2512.22098
+ arXiv:2512.22098v2 Announce Type: replace
+Abstract: We introduce a nonparametric model for time-evolving, unobserved probability distributions from discrete-time data consisting of unlabelled partitions. The latent process is a two-parameter Poisson-Dirichlet diffusion, and observations arise via exchangeable sampling. Applications include social and genetic data where only aggregate clustering summaries are observed. To address the intractable likelihood, we develop a tractable inferential framework that avoids label enumeration and direct simulation of the latent state. We exploit a duality between the diffusion and a pure-death process on partitions, together with coagulation operators that encode the effect of new data. These yield closed-form, recursive updates for forward and backward inference. We compute exact posterior distributions of the latent state at arbitrary times and predictive distributions of future or interpolated partitions. This enables online and offline inference and forecasting with full uncertainty quantification, bypassing MCMC and sequential Monte Carlo. Compared to particle filtering, our method achieves higher accuracy, lower variance, and substantial computational gains. We illustrate the methodology with synthetic experiments and a social network application, recovering interpretable patterns in time-varying heterozygosity.
+ oai:arXiv.org:2512.22098v2
+ stat.ME
+ math.PR
+ math.ST
+ q-bio.PE
+ stat.CO
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Marco Dalla Pria, Matteo Ruggiero, Dario Span\`o
+
+
+ A Novel Multiple Imputation Approach For Parameter Estimation in Observation-Driven Time Series Models With Missing Data
+ https://arxiv.org/abs/2601.01259
+ arXiv:2601.01259v2 Announce Type: replace
+Abstract: Handling missing data in time series is a complex problem due to the presence of temporal dependence. General-purpose imputation methods, while widely used, often distort key statistical properties of the data, such as variance and dependence structure, leading to biased estimation and misleading inference. These issues become more pronounced in models that explicitly rely on capturing serial dependence, as standard imputation techniques fail to preserve the underlying dynamics. This paper proposes a novel multiple imputation method specifically designed for parameter estimation in observation-driven models (ODM). The approach takes advantage of the iterative nature of the systematic component in ODM to propagate the dependence structure through missing data, minimizing its impact on estimation. Unlike traditional imputation techniques, the proposed method accommodates continuous, discrete, and mixed-type data while preserving key distributional and dependence properties. We evaluate its performance through Monte Carlo simulations in the context of GARMA models, considering time series with up to 70\% missing data. An application to the proportion of stocked energy stored in South Brazil further demonstrates its practical utility.
+ oai:arXiv.org:2601.01259v2
+ stat.ME
+ math.ST
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Guilherme Pumi, Taiane Schaedler Prass, Douglas Krauthein Verdum
+
+
+ Modeling Information Blackouts in Missing Not-At-Random Time Series Data
+ https://arxiv.org/abs/2601.01480
+ arXiv:2601.01480v2 Announce Type: replace
+Abstract: Large-scale traffic forecasting relies on fixed sensor networks that often exhibit blackouts: contiguous intervals of missing measurements caused by detector or communication failures. These outages are typically handled under a Missing At Random (MAR) assumption, even though blackout events may correlate with unobserved traffic conditions (e.g., congestion or anomalous flow), motivating a Missing Not At Random (MNAR) treatment. We propose a latent state-space framework that jointly models (i) traffic dynamics via a linear dynamical system and (ii) sensor dropout via a Bernoulli observation channel whose probability depends on the latent traffic state. Inference uses an Extended Kalman Filter with Rauch-Tung-Striebel smoothing, and parameters are learned via an approximate EM procedure with a dedicated update for detector-specific missingness parameters. On the Seattle inductive loop detector data, introducing latent dynamics yields large gains over naive baselines, reducing blackout imputation RMSE from 7.02 (LOCF) and 5.02 (linear interpolation + seasonal naive) to 4.23 (MAR LDS), corresponding to about a 64% reduction in MSE relative to LOCF. Explicit MNAR modeling provides a consistent but smaller additional improvement on real data (imputation RMSE 4.20; 0.8% RMSE reduction relative to MAR), with similar modest gains for short-horizon post-blackout forecasts (evaluated at 1, 3, and 6 steps). In controlled synthetic experiments, the MNAR advantage increases as the true missingness dependence on latent state strengthens. Overall, temporal dynamics dominate performance, while MNAR modeling offers a principled refinement that becomes most valuable when missingness is genuinely informative.
+ oai:arXiv.org:2601.01480v2
+ stat.ML
+ cs.LG
+ stat.AP
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Aman Sunesh (New York University), Allan Ma (New York University), Siddarth Nilol (New York University)
+
+
+ A Method For Bounding Tail Probabilities
+ https://arxiv.org/abs/2402.13662
+ arXiv:2402.13662v3 Announce Type: replace-cross
+Abstract: We present a method for upper and lower bounding the right and the left tail probabilities of continuous random variables (RVs). For the right tail probability of RV $X$ with probability density function $f (x)$, this method requires first setting a continuous, positive, and strictly decreasing function $g (x)$ such that $-f (x)/g' (x)$ is a decreasing and increasing function, $\forall x>x_0$, which results in upper and lower bounds, respectively, given in the form $-f (x) g (x)/g' (x)$, $\forall x>x_0$, where $x_0$ is some point. Similarly, for the upper and lower bounds on the left tail probability of $X$, this method requires first setting a continuous, positive, and strictly increasing function $g (x)$ such that $f (x)/g' (x)$ is an increasing and decreasing function, $\forall x<x_0$, which results in upper and lower bounds, respectively, given in the form $f (x) g (x)/g' (x)$, $\forall x<x_0$. We provide some examples of good candidates for the function $g (x)$. We also establish connections between the new bounds and Markov's inequality and Chernoff's bound. In addition, we provide an iterative method for obtaining ever tighter lower and upper bounds, under certain conditions. As an application, we use the proposed method to derive a novel closed-form asymptotic expression of the converse bound on the capacity of the additive white Gaussian noise (AWGN) channel in the finite-blocklength regime, which is tighter than the closed-form asymptotic expression by Polyanskiy-Poor-Verd\'u. Finally, we provide numerical examples where we show the tightness of the bounds obtained by the proposed method.
+ oai:arXiv.org:2402.13662v3
+ math.PR
+ cs.IT
+ math.IT
+ math.ST
+ stat.ML
+ stat.TH
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ 10.1109/ACCESS.2026.3650974
+ IEEE Access, 2026
+ Nikola Zlatanov
+
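+ The recipe is easy to check numerically for the standard normal right tail: taking g(x) = exp(-x^2/2) makes -f(x)/g'(x) = 1/(sqrt(2 pi) x), which is decreasing on x > 0, so the resulting upper bound -f(x) g(x)/g'(x) = phi(x)/x recovers the classical Gaussian tail bound.
+
+   import numpy as np
+   from scipy.stats import norm
+
+   x = np.array([1.0, 2.0, 3.0, 4.0])
+   upper = norm.pdf(x) / x                # -f(x) g(x)/g'(x) for this g
+   print(norm.sf(x))                      # true tail probability Q(x)
+   print(upper)                           # upper bound, >= Q(x) throughout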
+
+ Conformal Prediction for Dose-Response Models with Continuous Treatments
+ https://arxiv.org/abs/2409.20412
+ arXiv:2409.20412v2 Announce Type: replace-cross
+Abstract: Understanding the dose-response relation between a continuous treatment and the outcome for an individual can greatly drive decision-making, particularly in areas like personalized drug dosing and personalized healthcare interventions. Point estimates are often insufficient in these high-risk environments, highlighting the need for uncertainty quantification to support informed decisions. Conformal prediction, a distribution-free and model-agnostic method for uncertainty quantification, has seen limited application in continuous treatments or dose-response models. To address this gap, we propose a novel methodology that frames the causal dose-response problem as a covariate shift, leveraging weighted conformal prediction. By incorporating propensity estimation, conformal predictive systems, and likelihood ratios, we present a practical solution for generating prediction intervals for dose-response models. Additionally, our method approximates local coverage for every treatment value by applying kernel functions as weights in weighted conformal prediction. Finally, we use a new synthetic benchmark dataset to demonstrate the significance of covariate shift assumptions in achieving robust prediction intervals for dose-response models.
+ oai:arXiv.org:2409.20412v2
+ cs.LG
+ cs.AI
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ Jarne Verhaeghe, Jef Jonkers, Sofie Van Hoecke
+
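+ At its core the method relies on the weighted split-conformal quantile: nonconformity scores on a calibration set are reweighted by estimated likelihood ratios, and the interval half-width is a weighted quantile with the test point's weight attached to an infinite score. A bare-bones sketch with given (made-up) weights, omitting the propensity estimation and kernel localization described in the abstract:
+
+   import numpy as np
+
+   def weighted_quantile(scores, weights, alpha):
+       order = np.argsort(scores)
+       s, w = scores[order], weights[order]
+       cw = np.cumsum(w) / w.sum()
+       return s[np.searchsorted(cw, 1 - alpha)]
+
+   rng = np.random.default_rng(6)
+   resid = np.abs(rng.normal(0, 1, 500))  # calibration nonconformity scores
+   w = rng.uniform(0.5, 2.0, 500)         # assumed likelihood-ratio weights
+   q = weighted_quantile(np.append(resid, np.inf),
+                         np.append(w, 1.0), alpha=0.1)  # test weight ~ 1
+   print(q)                               # half-width of the 90% interval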
+
+ Limits to scalable evaluation at the frontier: LLM as Judge won't beat twice the data
+ https://arxiv.org/abs/2410.13341
+ arXiv:2410.13341v3 Announce Type: replace-cross
+Abstract: High quality annotations are increasingly a bottleneck in the explosively growing machine learning ecosystem. Scalable evaluation methods that avoid costly annotation have therefore become an important research ambition. Many hope to use strong existing models in lieu of costly labels to provide cheap model evaluations. Unfortunately, this method of using models as judges introduces biases, such as self-preferencing, that can distort model comparisons. An emerging family of debiasing tools promises to fix these issues by using a few high quality labels to debias a large number of model judgments. In this paper, we study how far such debiasing methods, in principle, can go. Our main result shows that when the judge is no more accurate than the evaluated model, no debiasing method can decrease the required amount of ground truth labels by more than half. Our result speaks to the severe limitations of the LLM-as-a-judge paradigm at the evaluation frontier where the goal is to assess newly released models that are possibly better than the judge. Through an empirical evaluation, we demonstrate that the sample size savings achievable in practice are even more modest than what our theoretical limit suggests. Along the way, our work provides new observations about debiasing methods for model evaluation, and points out promising avenues for future work.
+ oai:arXiv.org:2410.13341v3
+ cs.LG
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-nd/4.0/
+ Florian E. Dorner, Vivian Y. Nastl, Moritz Hardt
+
+
+ Spatio-temporal analysis of extreme winter temperatures in Ireland
+ https://arxiv.org/abs/2412.10796
+ arXiv:2412.10796v2 Announce Type: replace-cross
+Abstract: We analyse extreme daily minimum temperatures in winter months over the island of Ireland from 1950-2022. We model the marginal distributions of extreme winter minima using a generalised Pareto distribution (GPD), capturing temporal and spatial non-stationarities in the parameters of the GPD. We investigate two independent temporal non-stationarities in extreme winter minima. We model the long-term trend in magnitude of extreme winter minima as well as short-term, large fluctuations in magnitude caused by anomalous behaviour of the jet stream. We measure magnitudes of spatial events with a carefully chosen risk function and fit an r-Pareto process to extreme events exceeding a high-risk threshold. Our analysis is based on synoptic data observations courtesy of Met \'Eireann and the Met Office. We show that the frequency of extreme cold winter events is decreasing over the study period. The magnitude of extreme winter events is also decreasing, indicating that winters are warming, and apparently warming at a faster rate than extreme summer temperatures. We also show that extremely cold winter temperatures are warming at a faster rate than non-extreme winter temperatures. We find that a climate model output previously shown to be informative as a covariate for modelling extremely warm summer temperatures is less effective as a covariate for extremely cold winter temperatures. However, we show that the climate model is useful for informing a non-extreme temperature model.
+ oai:arXiv.org:2412.10796v2
+ physics.ao-ph
+ stat.ME
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ D\'aire Healy, Jonathan A. Tawn, Peter Thorne, Andrew Parnell
+
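+ The marginal building block here is the peaks-over-threshold model: excesses of a high threshold are fit with a generalized Pareto distribution. A stationary toy version with scipy (negating minima so that extreme cold becomes the upper tail), without the paper's spatio-temporal covariates or r-Pareto process:
+
+   import numpy as np
+   from scipy.stats import genpareto
+
+   rng = np.random.default_rng(7)
+   tmin = rng.normal(2.0, 4.0, size=20000)   # toy daily winter minima, deg C
+   x = -tmin                                 # cold extremes -> upper tail
+   u = np.quantile(x, 0.95)                  # high threshold
+   exc = x[x > u] - u
+   shape, _, scale = genpareto.fit(exc, floc=0.0)
+   print(shape, scale)                       # GPD parameters of the excesses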
+
+ Network topology of the Euro Area interbank market
+ https://arxiv.org/abs/2502.15611
+ arXiv:2502.15611v2 Announce Type: replace-cross
+Abstract: The rapidly increasing availability of large amounts of granular financial data, paired with advances in big-data technologies, creates the need for suitable analytics that can represent and extract meaningful information from such data. In this paper we propose a multi-layer network approach to distill the Euro Area (EA) banking system into distinct layers. Each layer of the network represents a specific type of financial relationship between banks, based on various sources of EA granular data collections. The resulting multi-layer network allows one to describe, analyze and compare the topology and structure of EA banks from different perspectives, eventually yielding a more complete picture of the financial market. This granular representation of information has the potential to enable researchers and practitioners to better apprehend financial system dynamics as well as to support financial policies to manage and monitor financial risk from a more holistic point of view.
+ oai:arXiv.org:2502.15611v2
+ q-fin.ST
+ cs.CE
+ stat.CO
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by/4.0/
+ 10.1007/978-3-031-63630-1_1
+ In: Mingione, M., Vichi, M., Zaccaria, G. (eds), High-quality and Timely Statistics. CESS 2022. Studies in Theoretical and Applied Statistics. Springer, Cham (2024)
+ Ilias Aarab, Thomas Gottron
+
+
+ Global law of conjugate kernel random matrices with heavy-tailed weights
+ https://arxiv.org/abs/2502.18428
+ arXiv:2502.18428v2 Announce Type: replace-cross
+Abstract: We study the asymptotic spectral distribution of the conjugate kernel random matrix $YY^\top$, where $Y= f(WX)$ arises from a two-layer neural network model. We consider the setting where $W$ and $X$ are random rectangular matrices with i.i.d.\ entries, where the entries of $W$ follow a heavy-tailed distribution, while those of $X$ have light tails. Our assumptions on $W$ include a broad class of heavy-tailed distributions, such as symmetric $\alpha$-stable laws with $\alpha \in ]0,2[$ and sparse matrices with $\mathcal{O}(1)$ nonzero entries per row. The activation function $f$, applied entrywise, is bounded, smooth, odd, and nonlinear. We compute the limiting eigenvalue distribution of $YY^\top$ through its moments and show that heavy-tailed weights induce strong correlations between the entries of $Y$, resulting in richer and fundamentally different spectral behavior compared to the light-tailed case.
+ oai:arXiv.org:2502.18428v2
+ math.PR
+ cs.LG
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Alice Guionnet, Vanessa Piccolo
+
+
+ A practical guide to estimation and uncertainty quantification of aerodynamic flows
+ https://arxiv.org/abs/2502.20280
+ arXiv:2502.20280v3 Announce Type: replace-cross
+Abstract: Many applications in aerodynamics, particularly in closed-loop control, depend on sensors to estimate the evolving state of the flow. This estimation task is inherently accompanied by uncertainty due to the noisy measurements of sensors or the non-uniqueness of the underlying mapping. Knowledge of this uncertainty can be as important for decision-making as that of the state itself. Uncertainty tracking is challenged by the often-nonlinear relationship between the measurements and the flow state. For example, a collection of passing vortices leaves a footprint in wall pressure that depends nonlinearly on the vortices' strengths and positions. In this paper, we outline recent approaches to flow estimation and illuminate them with worked examples and selected case studies. We review relevant probability tools, including sampling and estimation, in the powerful setting of Bayesian inference and demonstrate these in static flow estimation examples. We then review unsteady examples and illustrate the application of sequential estimation, and particularly, the ensemble Kalman filter. Finally, we discuss uncertainty quantification in neural network approximations of the mappings between sensor measurements and flow states. Recent aerodynamic applications have shown that the flow state can be encoded into a very low-dimensional latent space. We discuss the uncertainty implications of this encoding.
+ oai:arXiv.org:2502.20280v3
+ physics.flu-dyn
+ stat.AP
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://creativecommons.org/licenses/by-nc-sa/4.0/
+ Jeff D. Eldredge, Hanieh Mousavi
+
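+ The sequential workhorse mentioned here, the ensemble Kalman filter, admits a compact analysis step: each forecast member is nudged toward a perturbed measurement with a gain built from ensemble covariances. A generic stochastic-EnKF sketch with toy dimensions and a linear observation operator, not a flow-specific model:
+
+   import numpy as np
+
+   def enkf_analysis(Xf, y, H, R, rng):
+       # Xf: (n_state, n_ens) forecast ensemble; y: (n_obs,) measurement
+       n_ens = Xf.shape[1]
+       A = Xf - Xf.mean(axis=1, keepdims=True)
+       HA = H @ A
+       Pyy = HA @ HA.T / (n_ens - 1) + R       # innovation covariance
+       Pxy = A @ HA.T / (n_ens - 1)            # state-obs cross covariance
+       K = Pxy @ np.linalg.inv(Pyy)            # Kalman gain
+       Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
+       return Xf + K @ (Y - H @ Xf)            # nudge members toward data
+
+   rng = np.random.default_rng(8)
+   Xf = rng.normal(0, 1, size=(4, 50))    # 4-dim toy state, 50 members
+   H = np.eye(2, 4)                       # observe first two components
+   R = 0.1 * np.eye(2)
+   Xa = enkf_analysis(Xf, np.array([1.0, -1.0]), H, R, rng)
+   print(Xa.mean(axis=1))                 # posterior ensemble mean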
+
+ What Makes Looped Transformers Perform Better Than Non-Recursive Ones
+ https://arxiv.org/abs/2510.10089
+ arXiv:2510.10089v3 Announce Type: replace-cross
+Abstract: While looped transformers (termed Looped-Attn) often outperform standard transformers (termed Single-Attn) on complex reasoning tasks, the mechanism behind this advantage remains underexplored. In this paper, we explain this phenomenon through the lens of loss landscape geometry, inspired by empirical observations of their distinct dynamics at both the sample and Hessian levels. To formalize this, we extend the River-Valley landscape model by distinguishing between U-shaped valleys (flat) and V-shaped valleys (steep). Based on empirical observations, we conjecture that the recursive architecture of Looped-Attn induces a landscape-level inductive bias towards River-V-Valley. This inductive bias suggests better loss convergence along the river due to valley hopping, and further encourages the learning of complex patterns, compared to the River-U-Valley induced by Single-Attn. Building on this insight, we propose SHIFT (Staged HIerarchical Framework for Progressive Training), a principled training strategy that accelerates the training of Looped-Attn while achieving comparable performance.
+ oai:arXiv.org:2510.10089v3
+ cs.LG
+ cs.AI
+ stat.ML
+ Wed, 07 Jan 2026 00:00:00 -0500
+ replace-cross
+ http://arxiv.org/licenses/nonexclusive-distrib/1.0/
+ Zixuan Gong, Yong Liu, Jiaye Teng
+