| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 557fb052f39dfd29455d9979b010168b1ae03dde8a459ea44012ba14923b1608 | 2026-01-13T00:00:00-05:00 | A Kernelization-Based Approach to Nonparametric Binary Choice Models | arXiv:2410.15734v2 Announce Type: replace Abstract: We propose a new estimator for nonparametric binary choice models that does not impose a parametric structure on either the systematic function of covariates or the distribution of the error term. A key advantage of our approach is its computational scalability in the number of covariates. For instance, even when assuming a normal error distribution as in probit models, commonly used sieves for approximating an unknown function of covariates can lead to a large-dimensional optimization problem when the number of covariates is moderate. Our approach, motivated by kernel methods in machine learning, views certain reproducing kernel Hilbert spaces as special sieve spaces, coupled with spectral cut-off regularization for dimension reduction. We establish the consistency of the proposed estimator and asymptotic normality of the plug-in estimator for weighted average partial derivatives. Simulation studies show that, compared to parametric estimation methods, the proposed method effectively improves finite sample performance in cases of misspecification, and has a rather mild efficiency loss if the model is correctly specified. Using administrative data on the grant decisions of US asylum applications to immigration courts, along with nine case-day variables on weather and pollution, we re-examine the effect of outdoor temperature on court judges' "mood", and thus, their grant decisions. | https://arxiv.org/abs/2410.15734 | Academic Papers | svg |
| a749ca5747d20421d181c38d46cbfc5d5d2e5d11cb342b544e64a7ef6d5e1110 | 2026-01-13T00:00:00-05:00 | Floods do not sink prices, historical memory does: How flood risk impacts the Italian housing market | arXiv:2502.12116v3 Announce Type: replace Abstract: Do home prices incorporate flood risk in the immediate aftermath of specific flood events, or is it the repeated exposure over the years that plays a more significant role? We address this question through the first systematic study of the Italian housing market, which is an ideal case study because it is highly exposed to floods, though unevenly distributed across the national territory. Using a novel dataset containing about 550,000 mortgage-financed transactions between 2016 and 2024, as well as hedonic regressions and a difference-in-difference design, we find that: (i) specific floods do not decrease home prices in areas at risk; (ii) the repeated exposure to floods in flood-prone areas leads to a price decline, up to 4% in the most frequently flooded regions; (iii) responses are heterogeneous by buyers' income and age. Young buyers (with limited exposure to prior floods) do not obtain any price reduction for settling in risky areas, while experienced buyers do. At the same time, buyers who settle in risky areas have lower incomes than buyers in safe areas in the most affected regions. Our results emphasize the importance of cultural and institutional factors in understanding how flood risk affects the housing market and socioeconomic outcomes. | https://arxiv.org/abs/2502.12116 | Academic Papers | svg |
| 01f161fae9e8b404de97f78cd0945bc96b4801e73e65714376b1c943d946f610 | 2026-01-13T00:00:00-05:00 | Does Ideological Polarization Lead to Policy Polarization? | arXiv:2502.14712v5 Announce Type: replace Abstract: I study an election between two ideologically polarized parties that are both office- and policy-motivated. The parties compete by proposing policies on a single issue. The analysis uncovers a non-monotonic relationship between ideological and policy polarization. When ideological polarization is low, an increase leads to policy moderation; when it is high, the opposite occurs, and policies become more extreme. Moreover, incorporating ideological polarization refines our understanding of the role of valence: both high- and low-valence candidates may adopt more extreme positions, depending on the electorate's degree of ideological polarization. | https://arxiv.org/abs/2502.14712 | Academic Papers | svg |
| f490d215783e9a0e9c8b92dde28cf7b217720794fc7d63789b614f4c1a4ffdcb | 2026-01-13T00:00:00-05:00 | The heterogeneous causal effects of the EU's Cohesion Fund | arXiv:2504.13223v2 Announce Type: replace Abstract: This paper estimates the causal effect of EU cohesion policy on regional output and investment, focusing on the Cohesion Fund (CF), a comparatively understudied instrument. Departing from standard approaches such as regression discontinuity (RDD) and instrumental variables (IV), we use a recently developed causal inference method based on matrix completion within a factor model framework. This yields a new framework to evaluate the CF and to characterize the time-varying distribution of its causal effects across EU regions, along with distributional metrics relevant for policy assessment. Our results show that average treatment effects conceal substantial heterogeneity and may lead to misleading conclusions about policy effectiveness. The CF's impact is front-loaded, peaking within the first seven years after a region's initial inclusion. During this first seven-year funding cycle, the distribution of effects is right-skewed with relatively thick tails, indicating generally positive but uneven gains across regions. Effects are larger for regions that are relatively poorer at baseline, and we find a non-linear, diminishing-returns relationship: beyond a threshold, the impact declines as the ratio of CF receipts to regional gross value added (GVA) increases. | https://arxiv.org/abs/2504.13223 | Academic Papers | svg |
| 7b87f0196021cdbe142890f97c937eccd276b302b62bb784f2e125e76b05fc7c | 2026-01-13T00:00:00-05:00 | Causal Inference for Experiments with Latent Outcomes: Key Results and Their Implications for Design and Analysis | arXiv:2505.21909v3 Announce Type: replace Abstract: How should researchers analyze randomized experiments in which the main outcome is latent and measured in multiple ways but each measure contains some degree of error? We first identify a critical study-specific noncomparability problem in existing methods for handling multiple measurements, which often rely on strong modeling assumptions or arbitrary standardization. Such approaches render the resulting estimands noncomparable across studies. To address the problem, we describe design-based approaches that enable researchers to identify causal parameters of interest, suggest ways that experimental designs can be augmented so as to make assumptions more credible, and discuss empirical tests of key assumptions. We show that when experimental researchers invest appropriately in multiple outcome measures, an optimally weighted scaled index of these measures enables researchers to obtain efficient and interpretable estimates of causal parameters by applying standard regression. An empirical application illustrates the gains in precision and robustness that multiple outcome measures can provide. | https://arxiv.org/abs/2505.21909 | Academic Papers | svg |
| 3a9571bc47e3c34d196ce162eafad4f205b811d2fd694d232d86a7db5bf6d9c4 | 2026-01-13T00:00:00-05:00 | Making Interpretable Discoveries from Unstructured Data: A High-Dimensional Multiple Hypothesis Testing Approach | arXiv:2511.01680v2 Announce Type: replace Abstract: Social scientists are increasingly turning to unstructured datasets to unlock new empirical insights, e.g., estimating descriptive statistics of or causal effects on quantitative measures derived from text, audio, or video data. In many such settings, unsupervised analysis is of primary interest, in that the researcher does not want to (or cannot) pre-specify all important aspects of the unstructured data to measure; they are interested in "discovery." This paper proposes a general and flexible framework for pursuing discovery from unstructured data in a statistically principled way. The framework leverages recent methods from the literature on machine learning interpretability to map unstructured data points to high-dimensional, sparse, and interpretable "dictionaries" of concepts; computes statistics of dictionary entries for testing relevant concept-level hypotheses; performs selective inference on these hypotheses using algorithms validated by new results in high-dimensional central limit theory, producing a selected set ("discoveries"); and both generates and evaluates human-interpretable natural language descriptions of these discoveries. The proposed framework has few researcher degrees of freedom, is fully replicable, and is cheap to implement -- both in terms of financial cost and researcher time. Applications to recent descriptive and causal analyses of unstructured data in empirical economics are explored. An open source Jupyter notebook is provided for researchers to implement the framework in their own projects. | https://arxiv.org/abs/2511.01680 | Academic Papers | svg |
| 2ba79e2f9117a4ae04a369b5e9b7504b223f4540413f0bb544dcce3c0098094b | 2026-01-13T00:00:00-05:00 | Quantile Selection in the Gender Pay Gap | arXiv:2511.16187v2 Announce Type: replace Abstract: We propose a new approach to estimate selection-corrected quantiles of the gender wage gap. Our method employs instrumental variables that explain variation in the latent variable but, conditional on the latent process, do not directly affect selection. We provide semiparametric identification of the quantile parameters without imposing parametric restrictions on the selection probability, derive the asymptotic distribution of the proposed estimator based on constrained selection probability weighting, and demonstrate how the approach applies to the Roy model of labor supply. Using German administrative data, we analyze the distribution of the gender gap in full-time earnings. We find pronounced positive selection among women at the lower end, especially those with less education, which widens the gender gap in this segment, and strong positive selection among highly educated men at the top, which narrows the gender wage gap at upper quantiles. | https://arxiv.org/abs/2511.16187 | Academic Papers | svg |
| e8d2e8555163f6559438c771bfcd048c4a82d87c5f8eaaea3f5986b364b73da0 | 2026-01-13T00:00:00-05:00 | Graph structure learning for stable processes | arXiv:2601.06264v1 Announce Type: new Abstract: We introduce Ising-Hüsler-Reiss processes, a new class of multivariate Lévy processes that allows for sparse modeling of the path-wise conditional independence structure between marginal stable processes with different stability indices. The underlying conditional independence graph is encoded as zeroes in a suitable precision matrix. An Ising-type parametrization of the weights for each orthant of the Lévy measure allows for data-driven modeling of asymmetry of the jumps while retaining an arbitrary sparse graph. We develop consistent estimators for the graphical structure and asymmetry parameters, relying on a new uniform small-time approximation for Lévy processes. The methodology is illustrated in simulations and a real data application to modeling dependence of stock returns. | https://arxiv.org/abs/2601.06264 | Academic Papers | svg |
| 6f698fdd61f3708973380a82e7606f039a7286d66e53a929374fb64340e53792 | 2026-01-13T00:00:00-05:00 | A Framework for Estimating Restricted Mean Survival Time Difference using Pseudo-observations | arXiv:2601.06296v1 Announce Type: new Abstract: A targeted learning (TL) framework is developed to estimate the difference in the restricted mean survival time (RMST) for a clinical trial with time-to-event outcomes. The approach starts by defining the target estimand as the RMST difference between investigational and control treatments. Next, an efficient estimation method is introduced: a targeted minimum loss estimator (TMLE) utilizing pseudo-observations. Moreover, a version of the copy reference (CR) approach is developed to perform a sensitivity analysis for right-censoring. The proposed TL framework is demonstrated using a real data application. | https://arxiv.org/abs/2601.06296 | Academic Papers | svg |
| 2648c2125bb2e947465bacc9041bdc983b5be3aa7ca08423d33600e1e86bfb61 | 2026-01-13T00:00:00-05:00 | Efficient Data Reduction Via PCA-Guided Quantile Based Sampling | arXiv:2601.06375v1 Announce Type: new Abstract: In large-scale statistical modeling, reducing data size through subsampling is essential for balancing computational efficiency and statistical accuracy. We propose a new method, Principal Component Analysis guided Quantile Sampling (PCA-QS), which projects data onto principal components and applies quantile-based sampling to retain representative and diverse subsets. Compared with uniform random sampling, leverage score sampling, and coreset methods, PCA-QS consistently achieves lower mean squared error and better preservation of key data characteristics, while also being computationally efficient. This approach is adaptable to a variety of data scenarios and shows strong potential for broad applications in statistical computing. | https://arxiv.org/abs/2601.06375 | Academic Papers | svg |
| caeff30a93f7dc5e26bd5e53aa18957ce6f3822382bae1c7a6991124a0b4a1fa | 2026-01-13T00:00:00-05:00 | Empirical Likelihood Test for Common Invariant Subspace of Multilayer Networks based on Monte Carlo Approximation | arXiv:2601.06390v1 Announce Type: new Abstract: Multilayer (or multiple) networks are widely used to represent diverse patterns of relationships among objects in increasingly complex real-world systems. Identifying a common invariant subspace across network layers has become an active area of research, as such a subspace can filter out layer-specific noise, facilitate cross-network comparisons, reduce dimensionality, and extract shared structural features of scientific interest. One statistical approach to detecting a common subspace is hypothesis testing, which evaluates whether the observed networks share a common latent structure. In this paper, we propose an empirical likelihood (EL) based test for this purpose. The null hypothesis states that all network layers share the same invariant subspace, whereas under the alternative hypothesis at least two layers differ in their subspaces. We study the asymptotic behavior of the proposed test via Monte Carlo approximation and assess its finite-sample performance through extensive simulations. The simulation results demonstrate that the proposed method achieves satisfactory size and power, and its practical utility is further illustrated with a real-data application. | https://arxiv.org/abs/2601.06390 | Academic Papers | svg |
| 70daaa3461d1cae08959879c9438641e84bfcde6807f037072f32eb4a0c21c2f | 2026-01-13T00:00:00-05:00 | Triple-dyad ratio estimation for the $p_1$ model | arXiv:2601.06481v1 Announce Type: new Abstract: Although the $p_1$ model was proposed 40 years ago, little progress has been made to address asymptotic theories in this model, that is, neither consistency of the maximum likelihood estimator (MLE) nor other parameter estimation with statistical guarantees is understood. This problem has been acknowledged as a long-standing open problem. To address it, we propose a novel parametric estimation method based on the ratios of the sum of a sequence of triple-dyad indicators to another one, where a triple-dyad indicator means the product of three dyad indicators. Our proposed estimators, called triple-dyad ratio estimators, have explicit expressions and can be scaled to very large networks with millions of nodes. We establish the consistency and asymptotic normality of the triple-dyad ratio estimator when the number of nodes reaches infinity. Based on the asymptotic results, we develop a test statistic for evaluating whether there is a reciprocity effect in directed networks. The estimators for the density and reciprocity parameters contain bias terms, where analytical bias correction formulas are proposed to make valid inference. Numerical studies demonstrate the findings of our theories and show that the estimator is comparable to the MLE in large networks. | https://arxiv.org/abs/2601.06481 | Academic Papers | svg |
| f9b978997f72e5b2bae0fa2b8167b4e6614ffda0b1d30cc2cd9a65339463a6a5 | 2026-01-13T00:00:00-05:00 | Bayesian Optimization of Noisy Log-Likelihoods Evaluated by Particle Filters -- One Parameter Case -- | arXiv:2601.06545v1 Announce Type: new Abstract: Likelihood functions evaluated using particle filters are typically noisy, computationally expensive, and non-differentiable due to Monte Carlo variability. These characteristics make conventional optimization methods difficult to apply directly or potentially unreliable. This paper investigates the use of Bayesian optimization for maximizing log-likelihood functions estimated by particle filters. By modeling the noisy log-likelihood surface with a Gaussian process surrogate and employing an acquisition function that balances exploration and exploitation, the proposed approach identifies the maximizer using a limited number of likelihood evaluations. Through numerical experiments, we demonstrate that Bayesian optimization provides robust and stable estimation in the presence of observation noise. The results suggest that Bayesian optimization is a promising alternative for likelihood maximization problems where exhaustive search or gradient-based methods are impractical. The estimation accuracy is quantitatively assessed using mean squared error metrics by comparison with the exact maximum likelihood solution obtained via the Kalman filter. | https://arxiv.org/abs/2601.06545 | Academic Papers | svg |
| 8e4ee34a41e334d16c009ee5d18e72664e0a583c364e3702fef7c9f07e9ec460 | 2026-01-13T00:00:00-05:00 | Mittag Leffler Distributions Estimation and Autoregressive Framework | arXiv:2601.06610v1 Announce Type: new Abstract: This work deals with the estimation of parameters of Mittag-Leffler (ML($\alpha, \sigma$)) distribution. We estimate the parameters of ML($\alpha, \sigma$) using empirical Laplace transform method. The simulation study indicates that the proposed method provides satisfactory results. The real life application of ML($\alpha, \sigma$) distribution on high frequency trading data is also demonstrated. We also provide the estimation of three-parameter Mittag-Leffler distribution using empirical Laplace transform. Additionally, we establish an autoregressive model of order 1, incorporating the Mittag-Leffler distribution as marginals in one scenario and as innovation terms in another. We apply empirical Laplace transform method to estimate the model parameters and provide the simulation study for the same. | https://arxiv.org/abs/2601.06610 | Academic Papers | svg |
| fd5ae007c4a6b26f0a23d638badfe641bd3345a021d997a93a7059d837a49f5a | 2026-01-13T00:00:00-05:00 | R-Estimation with Right-Censored Data | arXiv:2601.06685v1 Announce Type: new Abstract: This paper considers the problem of directly generalizing the R-estimator under a linear model formulation with right-censored outcomes. We propose a natural generalization of the rank and corresponding estimating equation for the R-estimator in the case of the Wilcoxon (i.e., linear-in-ranks) score function, and show how it can respectively be exactly represented as members of the classes of estimating equations proposed in Ritov (1990) and Tsiatis (1990). We then establish analogous results for a large class of bounded nonlinear-in-ranks score functions. Asymptotics and variance estimation are obtained as straightforward consequences of these representation results. The self-consistent estimator of the residual distribution function, and the mid-cumulative distribution function (and, where needed, a generalization of it), play critical roles in these developments. | https://arxiv.org/abs/2601.06685 | Academic Papers | svg |
| a6f7b24c01ad89128a1b8bad423442a197bc403eeeb25068ca499e5a56bf7776 | 2026-01-13T00:00:00-05:00 | Nonparametric contaminated Gaussian mixture of regressions | arXiv:2601.06695v1 Announce Type: new Abstract: Semi- and non-parametric mixture of regressions are a very useful flexible class of mixture of regressions in which some or all of the parameters are non-parametric functions of the covariates. These models are, however, based on the Gaussian assumption of the component error distributions. Thus, their estimation is sensitive to outliers and heavy-tailed error distributions. In this paper, we propose semi- and non-parametric contaminated Gaussian mixture of regressions to robustly estimate the parametric and/or non-parametric terms of the models in the presence of mild outliers. The virtue of using a contaminated Gaussian error distribution is that we can simultaneously perform model-based clustering of observations and model-based outlier detection. We propose two algorithms, an expectation-maximization (EM)-type algorithm and an expectation-conditional-maximization (ECM)-type algorithm, to perform maximum likelihood and local-likelihood kernel estimation of the parametric and non-parametric terms of the proposed models, respectively. The robustness of the proposed models is examined using an extensive simulation study. The practical utility of the proposed models is demonstrated using real data. | https://arxiv.org/abs/2601.06695 | Academic Papers | svg |
| b7c0377f81f72ff023ac08cffe9c405b91a963ca83bc478fa2a71e3eec354bff | 2026-01-13T00:00:00-05:00 | Adversarially Perturbed Precision Matrix Estimation | arXiv:2601.06807v1 Announce Type: new Abstract: Precision matrix estimation is a fundamental topic in multivariate statistics and modern machine learning. This paper proposes an adversarially perturbed precision matrix estimation framework, motivated by recent developments in adversarial training. The proposed framework is versatile for the precision matrix problem since, by adapting to different perturbation geometries, the proposed framework can not only recover the existing distributionally robust method but also inspire a novel moment-adaptive approach to precision matrix estimation, proven capable of sparsity recovery and adversarial robustness. Notably, the proposed perturbed precision matrix framework is proven to be asymptotically equivalent to regularized precision matrix estimation, and the asymptotic normality can be established accordingly. The resulting asymptotic distribution highlights the asymptotic bias introduced by perturbation and identifies conditions under which the perturbed estimation can be unbiased in the asymptotic sense. Numerical experiments on both synthetic and real data demonstrate the desirable performance of the proposed adversarially perturbed approach in practice. | https://arxiv.org/abs/2601.06807 | Academic Papers | svg |
| 10f070a56ff053d13261a9f93f2d235253a7962cf7f38ae1e306dbc1eebc212e | 2026-01-13T00:00:00-05:00 | Likelihood-Based Regression for Weibull Accelerated Life Testing Model Under Censored Data | arXiv:2601.06890v1 Announce Type: new Abstract: In this paper, we investigate accelerated life testing (ALT) models based on the Weibull distribution with stress-dependent shape and scale parameters. Temperature and voltage are treated as stress variables influencing the lifetime distribution. Data are assumed to be collected under Progressive Hybrid Censoring (PHC) and Adaptive Progressive Hybrid Censoring (APHC). A two-step estimation framework is developed. First, the Weibull parameters are estimated via maximum likelihood, and the consistency and asymptotic normality of the estimators are established under both censoring schemes. Second, the resulting parameter estimates are linked to the stress variables through a regression model to quantify the stress-lifetime relationship. Extensive simulations are conducted to examine finite-sample performance under a range of parameter settings, and a data illustration is also presented to showcase practical relevance. The proposed framework provides a flexible approach for modeling stress-dependent reliability behavior in ALT studies under complex censoring schemes. | https://arxiv.org/abs/2601.06890 | Academic Papers | svg |
| 114438407329a887811e63308e3d4859df88169383aa8df13e4d9db0b2dfb0a9 | 2026-01-13T00:00:00-05:00 | Minimum information Markov model | arXiv:2601.06900v1 Announce Type: new Abstract: The analysis of high-dimensional time series data has become increasingly important across a wide range of fields. Recently, a method for constructing the minimum information Markov kernel on finite state spaces was established. In this study, we propose a statistical model based on a parametrization of its dependence function, which we call the Minimum Information Markov Model. We show that its parametrization induces an orthogonal structure between the stationary distribution and the dependence function, and that the model arises as the optimal solution to a divergence rate minimization problem. In particular, for the Gaussian autoregressive case, we establish the existence of the optimal solution to this minimization problem, a nontrivial result requiring a rigorous proof. For parameter estimation, our approach exploits the conditional independence structure inherent in the model, which is supported by the orthogonality. Specifically, we develop several estimators, including conditional likelihood and pseudo likelihood estimators, for the minimum information Markov model in both univariate and multivariate settings. We demonstrate their practical performance through simulation studies and applications to real-world time series data. | https://arxiv.org/abs/2601.06900 | Academic Papers | svg |
| 22b05f47945dc288361a6dffd8fb2b8684f2d32b8c9ed8173809c1d370fe39e6 | 2026-01-13T00:00:00-05:00 | Localization Estimator for High Dimensional Tensor Covariance Matrices | arXiv:2601.06989v1 Announce Type: new Abstract: This paper considers covariance matrix estimation of tensor data under high dimensionality. A multi-bandable covariance class is established to accommodate the need for complex covariance structures of multi-layer lattices and general covariance decay patterns. We propose a high dimensional covariance localization estimator for tensor data, which regulates the sample covariance matrix through a localization function. The statistical properties of the proposed estimator are studied by deriving the minimax rates of convergence under the spectral and the Frobenius norms. Numerical experiments and real data analysis on ocean eddy data are carried out to illustrate the utility of the proposed method in practice. | https://arxiv.org/abs/2601.06989 | Academic Papers | svg |
| 27bcc009c8a12a48ea40962f78bc6d6b7686a755448f6f62aa2973e14243ec32 | 2026-01-13T00:00:00-05:00 | Semiparametric Analysis of Interval-Censored Data Subject to Inaccurate Diagnoses with A Terminal Event | arXiv:2601.07044v1 Announce Type: new Abstract: Interval-censoring frequently occurs in studies of chronic diseases where disease status is inferred from intermittently collected biomarkers. Although many methods have been developed to analyze such data, they typically assume perfect disease diagnosis, which often does not hold in practice due to the inherent imperfect clinical diagnosis of cognitive functions or measurement errors of biomarkers such as cerebrospinal fluid. In this work, we introduce a semiparametric modeling framework using the Cox proportional hazards model to address interval-censored data in the presence of inaccurate disease diagnosis. Our model incorporates sensitivity and specificity of the diagnosis to account for uncertainty in whether the interval truly contains the disease onset. Furthermore, the framework accommodates scenarios involving a terminal event and when diagnosis is accurate, such as through postmortem analysis. We propose a nonparametric maximum likelihood estimation method for inference and develop an efficient EM algorithm to ensure computational feasibility. The regression coefficient estimators are shown to be asymptotically normal, achieving semiparametric efficiency bounds. We further validate our approach through extensive simulation studies and an application assessing Alzheimer's disease (AD) risk. We find that amyloid-beta is significantly associated with AD, but Tau is predictive of both AD and mortality. | https://arxiv.org/abs/2601.07044 | Academic Papers | svg |
2cbd32261d8494fe8256e4983b79456a6d1798119318db95c098a31e58a6298b
|
2026-01-13T00:00:00-05:00
|
FormulaCompiler.jl and Margins.jl: Efficient Marginal Effects in Julia
|
arXiv:2601.07065v1 Announce Type: new Abstract: Marginal effects analysis is fundamental to interpreting statistical models, yet existing implementations face computational constraints that limit analysis at scale. We introduce two Julia packages that address this gap. Margins.jl provides a clean two-function API organizing analysis around a 2-by-2 framework: evaluation context (population vs profile) by analytical target (effects vs predictions). The package supports interaction analysis through second differences, elasticity measures, categorical mixtures for representative profiles, and robust standard errors. FormulaCompiler.jl provides the computational foundation, transforming statistical formulas into zero-allocation, type-specialized evaluators that enable O(p) per-row computation independent of dataset size. Together, these packages achieve 622x average speedup and 460x memory reduction compared to R's marginaleffects package, with successful computation of average marginal effects and delta-method standard errors on 500,000 observations where R fails due to memory exhaustion, providing the first comprehensive and efficient marginal effects implementation for Julia's statistical ecosystem.
|
https://arxiv.org/abs/2601.07065
|
Academic Papers
|
svg
|
| 3bc414259bfb559b59e81b2ec8139e240687f71818b5e41774947e0065abdda2 | 2026-01-13T00:00:00-05:00 | The Bayesian Intransitive Bradley-Terry Model via Combinatorial Hodge Theory | arXiv:2601.07158v1 Announce Type: new Abstract: Pairwise comparison data are widely used to infer latent rankings in areas such as sports, social choice, and machine learning. The Bradley-Terry model provides a foundational probabilistic framework but inherently assumes transitive preferences, explaining all comparisons solely through subject-specific parameters. In many competitive networks, however, cycle-induced effects are intrinsic, and ignoring them can distort both estimation and uncertainty quantification. To address this limitation, we propose a Bayesian extension of the Bradley-Terry model that explicitly separates the transitive and intransitive components. The proposed Bayesian Intransitive Bradley-Terry model embeds combinatorial Hodge theory into a logistic framework, decomposing paired relationships into a gradient flow representing transitive strength and a curl flow capturing cycle-induced structure. We impose global-local shrinkage priors on the curl component, enabling data-adaptive regularization and ensuring a natural reduction to the classical Bradley-Terry model when intransitivity is absent. Posterior inference is performed using an efficient Gibbs sampler, providing scalable computation and full Bayesian uncertainty quantification. Simulation studies demonstrate improved estimation accuracy, well-calibrated uncertainty, and substantial computational advantages over existing Bayesian models for intransitivity. The proposed framework enables uncertainty-aware quantification of intransitivity at both the global and triad levels, while also characterizing cycle-induced competitive advantages among teams. | https://arxiv.org/abs/2601.07158 | Academic Papers | svg |
| d9e4005ddb97950c00c7b38fdc754469ac87fd45e288a35b13ac0460c7c8c170 | 2026-01-13T00:00:00-05:00 | Principal component-guided sparse reduced-rank regression | arXiv:2601.07202v1 Announce Type: new Abstract: Reduced-rank regression estimates regression coefficients by imposing a low-rank constraint on the matrix of regression coefficients, thereby accounting for correlations among response variables. To further improve predictive accuracy and model interpretability, several regularized reduced-rank regression methods have been proposed. However, these existing methods cannot bias the regression coefficients toward the leading principal component directions while accounting for the correlation structure among explanatory variables. In addition, when the explanatory variables exhibit a group structure, the correlation structure within each group cannot be adequately incorporated. To address these limitations, we propose a new method that introduces pcLasso into the reduced-rank regression framework. The proposed method improves predictive accuracy by accounting for the correlation among response variables while strongly biasing the matrix of regression coefficients toward principal component directions with large variance. Furthermore, even in settings where the explanatory variables possess a group structure, the proposed method is capable of explicitly incorporating this structure into the estimation process. Finally, we illustrate the effectiveness of the proposed method through numerical simulations and real data application. | https://arxiv.org/abs/2601.07202 | Academic Papers | svg |
| 41c7b39a8d0aa0ee79e8817bab373e7ad76f4cb68a60808186673d6488d0793e | 2026-01-13T00:00:00-05:00 | Compounded Linear Failure Rate Distribution: Properties, Simulation and Analysis | arXiv:2601.07249v1 Announce Type: new Abstract: This paper proposes a new extension of the linear failure rate (LFR) model to better capture real-world lifetime data. The model incorporates an additional shape parameter to increase flexibility. It helps model the minimum survival time from a set of LFR distributed variables. We define the model, derive certain statistical properties such as the mean residual life, the mean inactivity time, moments, quantile, order statistics and also discuss the results on stochastic orders of the proposed distribution. The proposed model has increasing, bathtub shaped and inverse bathtub shaped hazard rate function. We use the method of maximum likelihood estimation to estimate the unknown parameters. We conduct simulation studies to examine the behavior of the estimators. We also use three real datasets to evaluate the model, which turns out superior compared to classical alternatives. | https://arxiv.org/abs/2601.07249 | Academic Papers | svg |
| 8e7f00f4c86dc52835bc906c33de6fcd07174ad158a7e5124bae9de57a95c74d | 2026-01-13T00:00:00-05:00 | Connections as treatment: causal inference with edge interventions in networks | arXiv:2601.07267v1 Announce Type: new Abstract: Causal inference has traditionally focused on interventions at the unit level. In many applications, however, the central question concerns the causal effects of connections between units, such as transportation links, social relationships, or collaborative ties. We develop a causal framework for edge interventions in networks, where treatments correspond to the presence or absence of edges. Our framework defines causal estimands under stochastic interventions on the network structure and introduces an inverse probability weighting estimator under an unconfoundedness assumption on edge assignment. We estimate edge probabilities using exponential random graph models, a widely used class of network models. We establish consistency and asymptotic normality of the proposed estimator. Finally, we apply our methodology to China's transportation network to estimate the causal impact of railroad connections on regional economic development. | https://arxiv.org/abs/2601.07267 | Academic Papers | svg |
| b8ee56d4eb328c10a2dc97045b8485ac90981ec7533c6f9c5352c08eeb3fdb52 | 2026-01-13T00:00:00-05:00 | Minimum Wasserstein distance estimator under covariate shift: closed-form, super-efficiency and irregularity | arXiv:2601.07282v1 Announce Type: new Abstract: Covariate shift arises when covariate distributions differ between source and target populations while the conditional distribution of the response remains invariant, and it underlies problems in missing data and causal inference. We propose a minimum Wasserstein distance estimation framework for inference under covariate shift that avoids explicit modeling of outcome regressions or importance weights. The resulting W-estimator admits a closed-form expression and is numerically equivalent to the classical 1-nearest neighbor estimator, yielding a new optimal transport interpretation of nearest neighbor methods. We establish root-$n$ asymptotic normality and show that the estimator is not asymptotically linear, leading to super-efficiency relative to the semiparametric efficient estimator under covariate shift in certain regimes, and uniformly in missing data problems. Numerical simulations, along with an analysis of a rainfall dataset, underscore the exceptional performance of our W-estimator. | https://arxiv.org/abs/2601.07282 | Academic Papers | svg |
| cadd67651edafb9b7cd1f00ad9f0066112a3c97875f48305e7bd66f3d7b72baf | 2026-01-13T00:00:00-05:00 | Cauchy-Gaussian Overbound for Heavy-tailed GNSS Measurement Errors | arXiv:2601.07299v1 Announce Type: new Abstract: Overbounds of heavy-tailed measurement errors are essential to meet stringent navigation requirements in integrity monitoring applications. This paper proposes to leverage the bounding sharpness of the Cauchy distribution in the core and the Gaussian distribution in the tails to tightly bound heavy-tailed GNSS measurement errors. We develop a procedure to determine the overbounding parameters for both symmetric unimodal (s.u.) and not symmetric unimodal (n.s.u.) heavy-tailed errors and prove that the overbounding property is preserved through convolution. The experiment results on both simulated and real-world datasets reveal that our method can sharply bound heavy-tailed errors at both core and tail regions. In the position domain, the proposed method reduces the average vertical protection level by 15% for s.u. heavy-tailed errors compared to the single-CDF Gaussian overbound, and by 21% to 47% for n.s.u. heavy-tailed errors compared to the Navigation Discrete ENvelope and two-step Gaussian overbounds. | https://arxiv.org/abs/2601.07299 | Academic Papers | svg |
| f91f85a5b78f19bfcba65608d02bb90cae6a74cee64a90ba9fe4bf3c07186924 | 2026-01-13T00:00:00-05:00 | Inference for Multiple Change-points in Piecewise Locally Stationary Time Series | arXiv:2601.07400v1 Announce Type: new Abstract: Change-point detection and locally stationary time series modeling are two major approaches for the analysis of non-stationary data. The former aims to identify stationary phases by detecting abrupt changes in the dynamics of a time series model, while the latter employs (locally) time-varying models to describe smooth changes in dependence structure of a time series. However, in some applications, abrupt and smooth changes can co-exist, and neither of the two approaches alone can model the data adequately. In this paper, we propose a novel likelihood-based procedure for the inference of multiple change-points in locally stationary time series. In contrast to traditional change-point analysis where an abrupt change occurs in a real-valued parameter, a change in locally stationary time series occurs in a parameter curve, and can be classified as a jump or a kink depending on whether the curve is discontinuous or not. We show that the proposed method can consistently estimate the number, locations, and the types of change-points. Two different asymptotic distributions corresponding respectively to jump and kink estimators are also established. Extensive simulation studies and a real data application to financial time series are provided. | https://arxiv.org/abs/2601.07400 | Academic Papers | svg |
| c95743a0f27a797823c720b3aad87c34d752abfe8e3d3e65c0a362a0f1518c6f | 2026-01-13T00:00:00-05:00 | Penalized Likelihood Optimization for Adaptive Neighborhood Clustering in Time-to-Event Data with Group-Level Heterogeneity | arXiv:2601.07446v1 Announce Type: new Abstract: The identification of patient subgroups with comparable event-risk dynamics plays a key role in supporting informed decision-making in clinical research. In such settings, it is important to account for the inherent dependence that arises when individuals are nested within higher-level units, such as hospitals. Existing survival models account for group-level heterogeneity through frailty terms but do not uncover latent patient subgroups, while most clustering methods ignore hierarchical structure and are not estimated jointly with survival outcomes. In this work, we introduce a new framework that simultaneously performs patient clustering and shared-frailty survival modeling through a penalized likelihood approach. The proposed methodology adaptively learns a patient-to-patient similarity matrix via a modified version of spectral clustering, enabling cluster formation directly from estimated risk profiles while accounting for group membership. A simulation study highlights the proposed model's ability to recover latent clusters and to correctly estimate hazard parameters. We apply our method to a large cohort of heart-failure patients hospitalized with COVID-19 between 2020 and 2021 in the Lombardy region (Italy), identifying clinically meaningful subgroups characterized by distinct risk profiles and highlighting the role of respiratory comorbidities and hospital-level variability in shaping mortality outcomes. This framework provides a flexible and interpretable tool for risk-based patient stratification in hierarchical data settings. | https://arxiv.org/abs/2601.07446 | Academic Papers | svg |
| 1ec248051438164093ad1d83253890f7c7476b1df8fb38a51a1a50fb826d31c9 | 2026-01-13T00:00:00-05:00 | Ridge-penalised spectral least-squares estimation for point processes | arXiv:2601.07490v1 Announce Type: new Abstract: Penalised estimation methods for point processes usually rely on a large amount of independent repetitions for cross-validation purposes. However, in the case of a single realisation of the process, existing cross-validation methods may be impractical depending on the chosen model. To overcome this issue, this paper presents a Ridge-penalised spectral least-squares estimation method for second-order stationary point processes. This is achieved through two novel approaches: a p-thinning-based cross-validation method to tune the penalisation parameter, relying on the spectral representation of the process; and the introduction of a spectral least-squares contrast based around the asymptotic properties of the periodogram of the sample. The proposed method is then illustrated by a simulation study on linear Hawkes processes in the context of parametric estimation, highlighting its performances against more traditional approaches, specifically when working with short observation windows. | https://arxiv.org/abs/2601.07490 | Academic Papers | svg |
| 10db849439c83d611fea7e505e1cc6508ce84b28116fbdab475a40f3b45b4131 | 2026-01-13T00:00:00-05:00 | Population-Adjusted Indirect Treatment Comparison with the outstandR Package in R | arXiv:2601.07532v1 Announce Type: new Abstract: Indirect treatment comparisons (ITCs) are essential in Health Technology Assessment (HTA) when head-to-head clinical trials are absent. A common challenge arises when attempting to compare a treatment with available individual patient data (IPD) against a competitor with only reported aggregate-level data (ALD), particularly when trial populations differ in effect modifiers. While methods such as Matching-Adjusted Indirect Comparison (MAIC) and Simulated Treatment Comparison (STC) exist to adjust for these cross-trial differences, software implementations have often been fragmented or limited in scope. This article introduces outstandR, an R package designed to provide a comprehensive and unified framework for population-adjusted indirect comparison (PAIC). Beyond standard weighting and regression approaches, outstandR implements advanced G-computation methods within both maximum likelihood and Bayesian frameworks, and Multiple Imputation Marginalization (MIM) to address non-collapsibility and missing data. By streamlining the workflow of covariate simulation, model standardization, and contrast estimation, outstandR enables robust and compatible evidence synthesis in complex decision-making scenarios. | https://arxiv.org/abs/2601.07532 | Academic Papers | svg |
| 0366b513250ff8903b1ab12c2acea376ecde9cd107111edc9dcdf82868224827 | 2026-01-13T00:00:00-05:00 | Bayesian Handwriting Evidence Evaluation using MANOVA via Fourier-Based Extracted Features | arXiv:2601.07534v1 Announce Type: new Abstract: This paper proposes a novel statistical approach that aims at the identification of valid and useful patterns in handwriting examination via Bayesian modeling. Starting from a sample of characters selected among 13 French native writers, an accurate loop reconstruction can be achieved through Fourier analysis. The contour shape of handwritten characters can be described by the first four pairs of Fourier coefficients and by the surface size. Six Bayesian models are considered for such handwritten features. These models arise from two likelihood structures: (a) a multivariate Normal model, and (b) a MANOVA model that accounts for character-level variability. For each likelihood, three different prior formulations are examined, resulting in distinct Bayesian models: (i) a conjugate Normal-Inverse-Wishart prior, (ii) a hierarchical Normal-Inverse-Wishart prior, and (iii) a Normal-LogNormal-LKJ prior specification. The hierarchical prior formulations are of primary interest because they can incorporate the between-writers variability, a distinguishing element that sets writers apart. These approaches do not allow calculation of the marginal likelihood in a closed-form expression. Therefore, bridge sampling is used to estimate it. The Bayes factor is estimated to compare the performance of the proposed models and to evaluate their efficiency for discriminating purposes. Bayesian MANOVA with Normal-LogNormal-LKJ prior showed an overall better performance, in terms of discriminatory capacity and model fitting. Finally, a sensitivity analysis for the elicitation of the prior distribution parameters is performed. | https://arxiv.org/abs/2601.07534 | Academic Papers | svg |
| 138e8ad011966ad83c2d4b1423d48ad011b0aea7fd8466c345c3aad9164d3199 | 2026-01-13T00:00:00-05:00 | Functional Synthetic Control Methods for Metric Space-Valued Outcomes | arXiv:2601.07539v1 Announce Type: new Abstract: The synthetic control method (SCM) is a widely used tool for evaluating causal effects of policy changes in panel data settings. Recent studies have extended its framework to accommodate complex outcomes that take values in metric spaces, such as distributions, functions, networks, covariance matrices, and compositional data. However, due to the lack of linear structure in general metric spaces, theoretical guarantees for estimation and inference within these extended frameworks remain underdeveloped. In this study, we propose the functional synthetic control (FSC) method as an extension of the SCM for metric space-valued outcomes. To address challenges arising from the nonlinearity of metric spaces, we leverage isometric embeddings into Hilbert spaces. Building on this approach, we develop the FSC and augmented FSC estimators for counterfactual outcomes, with the latter being a bias-corrected version of the former. We then derive their finite-sample error bounds to establish theoretical guarantees for estimation, and construct prediction sets based on these estimators to conduct inference on causal effects. We demonstrate the usefulness of the proposed framework through simulation studies and three empirical applications. | https://arxiv.org/abs/2601.07539 | Academic Papers | svg |
| 137e3f3b34b3a49fb1f390493d6a935ea223b41fa869e68b35f6fae9c4271211 | 2026-01-13T00:00:00-05:00 | An evaluation of empirical equations for assessing local scour around bridge piers using global sensitivity analysis | arXiv:2601.07594v1 Announce Type: new Abstract: Bridge scour is a complex phenomenon combining hydrological, geotechnical and structural processes. Bridge scour is the leading cause of bridge collapse, which can bring catastrophic consequences including the loss of life. Estimating scour on bridges is an important task for engineers assessing bridge system performance. Overestimation of scour depths during design may lead to excess spending on construction whereas underestimation can lead to the collapse of a bridge. Many empirical equations have been developed over the years to assess scour depth at bridge piers. These equations have only been calibrated with laboratory data or very few field data. This paper compares eight equations including the UK CIRIA C742 approach to establish their accuracy using the open access USGS pier-scour database for both field and laboratory conditions. A one-at-a-time sensitivity assessment and a global sensitivity analysis were then applied to identify the most significant parameters in the eight scour equations. The paper shows that using a global approach, i.e. one where all parameters are varied simultaneously, provides more insights than a traditional one-at-a-time approach. The main findings are that the CIRIA and Froehlich equations are the most accurate equations for field conditions, and that angle of attack, pier shape and the approach flow depth are the most influential parameters. Efforts to reduce uncertainty of these three parameters would maximise the increase in scour estimate precision. | https://arxiv.org/abs/2601.07594 | Academic Papers | svg |
| 4feb8f04f594e78b64f87c90a38179f6c2003a1e4dcf7016d2735e00b658a851 | 2026-01-13T00:00:00-05:00 | Omitted covariates bias and finite mixtures of regression models for longitudinal responses | arXiv:2601.07609v1 Announce Type: new Abstract: Individual-specific, time-constant, random effects are often used to model dependence and/or to account for omitted covariates in regression models for longitudinal responses. Longitudinal studies have seen widespread use in recent years, as they allow one to distinguish between so-called age and cohort effects; these relate to differences that can be observed at the beginning of the study and persist through time, and to changes in the response that are due to the temporal dynamics in the observed covariates. While there is a clear and general agreement on this purpose, the random effect approach has been frequently criticized for not being robust to the presence of correlation between the observed (i.e. covariates) and the unobserved (i.e. random effects) heterogeneity. Starting from the so-called correlated effect approach, we argue that the random effect approach may be parametrized to account for potential correlation between observables and unobservables. Specifically, when the random effect distribution is estimated non-parametrically using a discrete distribution on a finite number of locations, a further, more general, solution is developed. This is illustrated via a large scale simulation study and the analysis of a benchmark dataset. | https://arxiv.org/abs/2601.07609 | Academic Papers | svg |
| 798540811e2aacd1206848c88ec36991b27dacb0e0b87afd01e8622831b2883e | 2026-01-13T00:00:00-05:00 | Dual-Level Models for Physics-Informed Multi-Step Time Series Forecasting | arXiv:2601.07640v1 Announce Type: new Abstract: This paper develops an approach for multi-step forecasting of dynamical systems by integrating probabilistic input forecasting with physics-informed output prediction. Accurate multi-step forecasting of time series systems is important for the automatic control and optimization of physical processes, enabling more precise decision-making. While mechanistic-based and data-driven machine learning (ML) approaches have been employed for time series forecasting, they face significant limitations. Incomplete knowledge of process mathematical models limits mechanistic-based direct employment, while purely data-driven ML models struggle with dynamic environments, leading to poor generalization. To address these limitations, this paper proposes a dual-level strategy for physics-informed forecasting of dynamical systems. On the first level, input variables are forecast using a hybrid method that integrates a long short-term memory (LSTM) network into probabilistic state transition models (STMs). On the second level, these stochastically predicted inputs are sequentially fed into a physics-informed neural network (PINN) to generate multi-step output predictions. The experimental results of the paper demonstrate that the hybrid input forecasting models achieve a higher log-likelihood and lower mean squared errors (MSE) compared to conventional STMs. Furthermore, the PINNs driven by the input forecasting models outperform their purely data-driven counterparts in terms of MSE and log-likelihood, exhibiting stronger generalization and forecasting performance across multiple test cases. | https://arxiv.org/abs/2601.07640 | Academic Papers | svg |
| 55c79ababdd3d91e63d1c9ab3fed96e9a6a00a4d7cfa4e1c8309ddaf50153b20 | 2026-01-13T00:00:00-05:00 | The Role of Confounders and Linearity in Ecological Inference: A Reassessment | arXiv:2601.07668v1 Announce Type: new Abstract: Estimating conditional means using only the marginal means available from aggregate data is commonly known as the ecological inference problem (EI). We provide a reassessment of EI, including a new formalization of identification conditions and a demonstration of how these conditions fail to hold in common cases. The identification conditions reveal that, similar to causal inference, credible ecological inference requires controlling for confounders. The aggregation process itself creates additional structure to assist in estimation by restricting the conditional expectation function to be linear in the predictor variable. A linear model perspective also clarifies the differences between the EI methods commonly used in the literature, and when they lead to ecological fallacies. We provide an overview of new methodology which builds on both the identification and linearity results to flexibly control for confounders and yield improved ecological inferences. Finally, using datasets for common EI problems in which the ground truth is fortuitously observed, we show that, while covariates can help, all methods are prone to overestimating both racial polarization and nationalized partisan voting. | https://arxiv.org/abs/2601.07668 | Academic Papers | svg |
| ac25d9917d80099a9793d029bc1964e41784ce0dbd1cb473cc0df68da4b78630 | 2026-01-13T00:00:00-05:00 | Cluster-based name embeddings reduce ethnic disparities in record linkage quality under realistic name corruption: evidence from the North Carolina Voter Registry | arXiv:2601.07693v1 Announce Type: new Abstract: Differential ethnic-based record linkage errors can bias epidemiologic estimates. Prior evidence often conflates heterogeneity in error mechanisms with unequal exposure to error. Using snapshots of the North Carolina Voter Registry (Oct 2011-Oct 2022), we derived empirical name-discrepancy profiles to parameterise realistic corruptions. From an Oct 2022 extract (n=848,566), we generated five replicate corrupted datasets under three settings that separately varied mechanism heterogeneity and exposure inequality, and linked records back to originals using unadjusted Jaro-Winkler, Term Frequency (TF)-adjusted Jaro-Winkler, and a cluster-based forename-embedding comparator combined with TF-adjusted surname comparison. We evaluated false match rate (FMR), missed match rate (MMR) and white-centric disparities. At a fixed MMR near 0.20, overall error rates and ethnic disparities diverged substantially by model under disproportionate exposure to corruption. Term-frequency (TF)-adjusted Jaro-Winkler achieved very low overall FMR (0.55% (95% CI 0.54-0.57)) at overall MMR 20.34% (20.30-20.39), but large white-centric under-linkage disparities persisted: Hispanic voters had 36.3% (36.1-36.6) and Non-Hispanic Black voters 8.6% (8.6-8.7) higher FMRs compared to Non-Hispanic White groups. Relative to unadjusted string similarity, TF adjustment reduced these disparities (Hispanic: +60.4% (60.1-60.7) to +36.3%; Black: +13.1% (13.0-13.2) to +8.6%). The cluster-based forename-embedding model reduced missed-match disparities further (Hispanic: +10.2% (9.8-10.3); Black: +0.6% (0.4-0.7)), but at a cost of increasing overall FMR (4.28% (4.22-4.35)) at the same threshold. Unequal exposure to identifier error drove substantially larger disparities than mechanism heterogeneity alone; cluster-based embeddings markedly narrowed under-linkage disparities beyond TF adjustment. | https://arxiv.org/abs/2601.07693 | Academic Papers | svg |
287728d7d7b6598f3abd12f7ca2bee5630769979ad0f720bb9e8c007ba54c194
|
2026-01-13T00:00:00-05:00
|
Reinforcement Learning for Micro-Level Claims Reserving
|
arXiv:2601.07637v1 Announce Type: cross Abstract: Outstanding claim liabilities are revised repeatedly as claims develop, yet most modern reserving models are trained as one-shot predictors and typically learn only from settled claims. We formulate individual claims reserving as a claim-level Markov decision process in which an agent sequentially updates outstanding claim liability (OCL) estimates over development, using continuous actions and a reward design that balances accuracy with stable reserve revisions. A key advantage of this reinforcement learning (RL) approach is that it can learn from all observed claim trajectories, including claims that remain open at valuation, thereby avoiding the reduced sample size and selection effects inherent in supervised methods trained on ultimate outcomes only. We also introduce practical components needed for actuarial use -- initialisation of new claims, temporally consistent tuning via a rolling-settlement scheme, and an importance-weighting mechanism to mitigate portfolio-level underestimation driven by the rarity of large claims. On CAS and SPLICE synthetic general insurance datasets, the proposed Soft Actor-Critic implementation delivers competitive claim-level accuracy and strong aggregate OCL performance, particularly for the immature claim segments that drive most of the liability.
|
https://arxiv.org/abs/2601.07637
|
Academic Papers
|
svg
|
6160995bfc566234925f78ceceba9b5a9f966e3c2d241ef6a565a531b11df0cc
|
2026-01-13T00:00:00-05:00
|
Physics-Informed Singular-Value Learning for Cross-Covariances Forecasting in Financial Markets
|
arXiv:2601.07687v1 Announce Type: cross Abstract: A new wave of work on covariance cleaning and nonlinear shrinkage has delivered asymptotically optimal analytical solutions for large covariance matrices. Building on this progress, these ideas have been generalized to empirical cross-covariance matrices, whose singular-value shrinkage characterizes comovements between one set of assets and another. Existing analytical cross-covariance cleaners are derived under strong stationarity and large-sample assumptions, and they typically rely on mesoscopic regularity conditions such as bounded spectra; macroscopic common modes (e.g., a global market factor) violate these conditions. When applied to real equity returns, where dependence structures drift over time and global modes are prominent, we find that these theoretically optimal formulas do not translate into robust out-of-sample performance. We address this gap by designing a random-matrix-inspired neural architecture that operates in the empirical singular-vector basis and learns a nonlinear mapping from empirical singular values to their corresponding cleaned values. By construction, the network can recover the analytical solution as a special case, yet it remains flexible enough to adapt to non-stationary dynamics and mode-driven distortions. Trained on a long history of equity returns, the proposed method achieves a more favorable bias-variance trade-off than purely analytical cleaners and delivers systematically lower out-of-sample cross-covariance prediction errors. Our results demonstrate that combining random-matrix theory with machine learning makes asymptotic theories practically effective in realistic time-varying markets.
|
https://arxiv.org/abs/2601.07687
|
Academic Papers
|
svg
|
de76722fe0b76fc14c4280a67c02f004d73ce9c41797f44b88c54a47900958fc
|
2026-01-13T00:00:00-05:00
|
Non-Convex Portfolio Optimization via Energy-Based Models: A Comparative Analysis Using the Thermodynamic HypergRaphical Model Library (THRML) for Index Tracking
|
arXiv:2601.07792v1 Announce Type: cross Abstract: Portfolio optimization under cardinality constraints transforms the classical Markowitz mean-variance problem from a convex quadratic problem into an NP-hard combinatorial optimization problem. This paper introduces a novel approach using THRML (Thermodynamic HypergRaphical Model Library), a JAX-based library for building and sampling probabilistic graphical models that reformulates index tracking as probabilistic inference on an Ising Hamiltonian. Unlike traditional methods that seek a single optimal solution, THRML samples from the Boltzmann distribution of high-quality portfolios using GPU-accelerated block Gibbs sampling, providing natural regularization against overfitting. We implement three key innovations: (1) dynamic coupling strength that scales inversely with market volatility (VIX), adapting diversification pressure to market regimes; (2) rebalanced bias weights prioritizing tracking quality over momentum for index replication; and (3) sector-aware post-processing ensuring institutional-grade diversification. Backtesting on a 100-stock S and P 500 universe from 2023 to 2025 demonstrates that THRML achieves 4.31 percent annualized tracking error versus 5.66 to 6.30 percent for baselines, while simultaneously generating 128.63 percent total return against the index total return of 79.61 percent. The Diebold-Mariano test confirms statistical significance with p less than 0.0001 across all comparisons. These results position energy-based models as a promising paradigm for portfolio construction, bridging statistical mechanics and quantitative finance.
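As a rough illustration of the sampling idea (not the THRML library or the paper's exact Hamiltonian), the sketch below runs a plain single-site Gibbs sweep on an Ising-style energy that combines a covariance coupling with a soft cardinality penalty; the couplings, penalty weight, temperature, and data are made up.

```python
# Toy Gibbs sampler for cardinality-constrained asset selection on an
# Ising-style energy. Plain NumPy, not the THRML library; the couplings,
# penalty weight, and temperature below are hypothetical illustration values.
import numpy as np

rng = np.random.default_rng(1)
n_assets, k_target = 20, 5
returns = rng.normal(0.0005, 0.01, size=(250, n_assets))   # fake daily returns
cov = np.cov(returns, rowvar=False)

lam = 5.0          # soft cardinality penalty weight
beta = 50.0        # inverse temperature of the Boltzmann distribution

def energy(s):
    # Risk via the covariance coupling plus a penalty toward k_target assets.
    return s @ cov @ s + lam * (s.sum() - k_target) ** 2

s = (rng.random(n_assets) < k_target / n_assets).astype(float)
for _ in range(200):                       # Gibbs sweeps
    for i in rng.permutation(n_assets):
        s0, s1 = s.copy(), s.copy()
        s0[i], s1[i] = 0.0, 1.0
        # Conditional probability of including asset i given all the others.
        delta = np.clip(beta * (energy(s0) - energy(s1)), -500, 500)
        s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-delta)))

print("selected assets:", np.flatnonzero(s), "count:", int(s.sum()))
```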
|
https://arxiv.org/abs/2601.07792
|
Academic Papers
|
svg
|
705bb0a28be78e55f3abafd31d7a118db14371b66d352ce2d406c74bfe723da2
|
2026-01-13T00:00:00-05:00
|
Accumulation of Sub-Sampling Matrices with Applications to Statistical Computation
|
arXiv:2103.04031v2 Announce Type: replace Abstract: With appropriately chosen sampling probabilities, sampling-based random projection can be used to implement large-scale statistical methods, substantially reducing computational cost while maintaining low statistical error. However, computing optimal sampling probabilities is often itself expensive, and in practice one typically resorts to suboptimal schemes. This generally leads to increased time and space costs, as more subsamples are required and the resulting projection matrices become larger, thereby making the inference procedure more computationally demanding. In this paper, we extend the framework of sampling-based random projection and propose a new projection method, \emph{accumulative sub-sampling}. By carefully accumulating multiple such projections, accumulative sub-sampling improves statistical efficiency while controlling the effective matrix size throughout the statistical computation. On the theoretical side, we quantify how the quality of the subsampling scheme affects the error in approximating matrix products and positive semidefinite matrices, and show how the proposed accumulation strategy mitigates this effect. Moreover, we apply our method to statistical models involving intensive matrix operations, such as eigendecomposition in spectral clustering and matrix inversion in kernel ridge regression, and demonstrate that reducing the effective matrix size leads to substantial computational savings. Numerical experiments across a range of problems further show that our approach consistently improves computational efficiency compared to existing random projection baselines under suboptimal sampling schemes.
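The following sketch illustrates the general idea of sampling-based approximation of a matrix product, together with a naive accumulation that averages several independent subsampled estimates; it is not the paper's accumulative sub-sampling algorithm, and the sampling probabilities and sizes are illustrative assumptions.

```python
# Sketch of sampling-based approximation of A.T @ A by row subsampling, plus a
# naive "accumulation" that averages several independent subsampled estimates.
# Illustrative only; this is not the paper's exact accumulation scheme.
import numpy as np

rng = np.random.default_rng(2)
n, d = 20_000, 50
A = rng.normal(size=(n, d))

def subsampled_gram(A, m, rng):
    # Importance sampling with probabilities proportional to squared row norms
    # (a standard heuristic; optimal probabilities are often too costly).
    p = (A ** 2).sum(axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    S = A[idx] / np.sqrt(m * p[idx])[:, None]   # rescaling keeps the estimator unbiased
    return S.T @ S

exact = A.T @ A
single = subsampled_gram(A, m=500, rng=rng)
accumulated = np.mean([subsampled_gram(A, m=500, rng=rng) for _ in range(8)], axis=0)

rel_err = lambda M: np.linalg.norm(M - exact) / np.linalg.norm(exact)
print(f"relative error, single subsample:  {rel_err(single):.3f}")
print(f"relative error, accumulated (8x):  {rel_err(accumulated):.3f}")
```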
|
https://arxiv.org/abs/2103.04031
|
Academic Papers
|
svg
|
20f8d2cb6e09378a6bdf5a47474f5d8a41764f73c1faa690a30f73bb4dd0de77
|
2026-01-13T00:00:00-05:00
|
Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization
|
arXiv:2206.03034v4 Announce Type: replace Abstract: This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling, with a relaxation of the interpolation constraints outside ranges of interest: the mean of the predictive distributions no longer necessarily interpolates the observed values when they are outside ranges of interest, but is simply constrained to remain outside. This method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in ranges of interest, especially in cases where a stationarity assumption for the GP model is not appropriate. It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization, for example, for the minimization of an objective function, where good predictive distributions for low function values are important. When the expected improvement criterion and reGP are used for sequentially choosing evaluation points, the convergence of the resulting optimization algorithm is theoretically guaranteed (provided that the function to be optimized lies in the reproducing kernel Hilbert space attached to the known covariance of the underlying Gaussian process). Experiments indicate that using reGP instead of stationary GP models in Bayesian optimization is beneficial.
|
https://arxiv.org/abs/2206.03034
|
Academic Papers
|
svg
|
87ca7cad113528cb5d1631b504f27acfa11d21ffa6d27c354cfa8b03840c1b5b
|
2026-01-13T00:00:00-05:00
|
Experiment-selector cross-validated targeted maximum likelihood estimator for hybrid RCT-external data studies
|
arXiv:2210.05802v4 Announce Type: replace Abstract: Augmenting a randomized controlled trial (RCT) with external data may increase power at the risk of introducing bias. To select and analyze the experiment (RCT alone or combined with external data) with the optimal bias-variance tradeoff, we develop a novel experiment-selector cross-validated targeted maximum likelihood estimator for randomized-external data studies (ES-CVTMLE). This estimator utilizes two estimates of bias to determine whether to integrate external data based on 1) a function of the difference in conditional mean outcome under control between the RCT and combined experiments and 2) an estimate of the average treatment effect on a negative control outcome (NCO). We define the asymptotic distribution of the ES-CVTMLE under varying magnitudes of bias and construct confidence intervals by Monte Carlo simulation. We evaluate ES-CVTMLE compared to three other data fusion estimators in simulations and demonstrate the ability of ES-CVTMLE to distinguish biased from unbiased external controls in a real data analysis of the effect of liraglutide on glycemic control from the LEADER trial. The ES-CVTMLE has the potential to improve power while providing relatively robust inference for future hybrid RCT-external data studies.
|
https://arxiv.org/abs/2210.05802
|
Academic Papers
|
svg
|
8d48b7aea56b13826744ec300ebdc9b2b17c5bfb3b8a692c3671631aa33f4cba
|
2026-01-13T00:00:00-05:00
|
The Interpolating Information Criterion for Overparameterized Models
|
arXiv:2307.07785v2 Announce Type: replace Abstract: The problem of model selection is considered for the setting of interpolating estimators, where the number of model parameters exceeds the size of the dataset. Classical information criteria typically consider the large-data limit, penalizing model size. However, these criteria are not appropriate in modern settings where overparameterized models tend to perform well. For any overparameterized model, we show that there exists a dual underparameterized model that possesses the same marginal likelihood, thus establishing a form of Bayesian duality. This enables more classical methods to be used in the overparameterized setting, revealing the Interpolating Information Criterion, a measure of model quality that naturally incorporates the choice of prior into the model selection. Our new information criterion accounts for prior misspecification, geometric and spectral properties of the model, and is numerically consistent with known empirical and theoretical behavior in this regime.
|
https://arxiv.org/abs/2307.07785
|
Academic Papers
|
svg
|
70ff429a3355298d7703387871dccf269ce6683b3be71e0e923dce590f1df5eb
|
2026-01-13T00:00:00-05:00
|
A Convex Framework for Confounding Robust Inference
|
arXiv:2309.12450v3 Announce Type: replace Abstract: We study policy evaluation of offline contextual bandits subject to unobserved confounders. Sensitivity analysis methods are commonly used to estimate the policy value under the worst-case confounding over a given uncertainty set. However, existing work often resorts to some coarse relaxation of the uncertainty set for the sake of tractability, leading to overly conservative estimation of the policy value. In this paper, we propose a general estimator that provides a sharp lower bound of the policy value using convex programming. The generality of our estimator enables various extensions such as sensitivity analysis with f-divergence, model selection with cross validation and information criterion, and robust policy learning with the sharp lower bound. Furthermore, our estimation method can be reformulated as an empirical risk minimization problem thanks to strong duality, which enables us to provide strong theoretical guarantees of the proposed estimator using techniques of M-estimation.
|
https://arxiv.org/abs/2309.12450
|
Academic Papers
|
svg
|
2202caf973b6b46f00403e3ec8ee957ccc238f0303e633b2ad160410733eb72d
|
2026-01-13T00:00:00-05:00
|
Expectile Periodograms
|
arXiv:2403.02060v4 Announce Type: replace Abstract: This paper introduces a novel periodogram-like function, called the expectile periodogram, for modeling spectral features of time series and detecting hidden periodicities. The expectile periodogram is constructed from trigonometric expectile regression, in which a specially designed check function is used to substitute the squared $l_2$ norm that leads to the ordinary periodogram. The expectile periodogram retains the key properties of the ordinary periodogram as a frequency-domain representation of serial dependence in time series, while offering a more comprehensive understanding by examining the data across the entire range of expectile levels. We establish the asymptotic theory and investigate the relationship between the expectile periodogram and the so called expectile spectrum. Simulations demonstrate the efficiency of the expectile periodogram in the presence of hidden periodicities. Finally, by leveraging the inherent two-dimensional nature of the expectile periodogram, we train a deep learning (DL) model to classify earthquake waveform data. Remarkably, our approach outperforms alternative periodogram-based methods in terms of classification accuracy.
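As a rough sketch of how one expectile-periodogram ordinate could be computed, the code below fits a trigonometric expectile regression at a single frequency by iteratively reweighted least squares with the asymmetric squared loss; the normalization and implementation details of the authors' estimator may differ.

```python
# Sketch of trigonometric expectile regression at one Fourier frequency, solved
# by iteratively reweighted least squares with the asymmetric squared loss.
# Illustrative; the authors' estimator and normalization may differ.
import numpy as np

def expectile_harmonic_fit(y, freq, tau, n_iter=50):
    t = np.arange(len(y))
    X = np.column_stack([np.ones(len(y)),
                         np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        resid = y - X @ beta
        w = np.where(resid >= 0, tau, 1.0 - tau)   # expectile check-function weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

rng = np.random.default_rng(3)
n = 512
t = np.arange(n)
y = 1.5 * np.cos(2 * np.pi * 0.1 * t) + rng.standard_t(df=3, size=n)

beta = expectile_harmonic_fit(y, freq=0.1, tau=0.9)
# The squared amplitude of the fitted harmonic serves as the ordinate of an
# expectile periodogram at this frequency and expectile level.
print("ordinate at freq 0.1:", beta[1] ** 2 + beta[2] ** 2)
```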
|
https://arxiv.org/abs/2403.02060
|
Academic Papers
|
svg
|
64f7675ee779a82c25e75a4d80d24e61de6e5f89a7f940e37df8137efbdb2315
|
2026-01-13T00:00:00-05:00
|
Data-Driven Strategies for Detecting and Sampling Misrepresented Subgroups
|
arXiv:2405.01342v2 Announce Type: replace Abstract: Economic policy research frequently examines population well-being, with a particular focus on the relationships between unequal living conditions, low educational attainment, and social exclusion. Sample surveys, such as EU-SILC, are widely used for this purpose and inform public policy; yet, their sampling designs may fail to adequately represent rare, hard-to-sample, or under-covered subgroups. This limitation can hinder socio-demographic analyses and evidence-based policy design. We propose a generalisable approach based on univariate and multivariate unsupervised learning techniques to detect outliers in survey data that may signal under-represented subgroups. Identified groups can then be characterised to inform targeted resampling strategies that improve survey inclusiveness. An empirical application using the 2019 EU-SILC data for the Italian region of Liguria shows that citizenship, material deprivation, large household size, and economic vulnerability are key indicators of under-representation.
|
https://arxiv.org/abs/2405.01342
|
Academic Papers
|
svg
|
3abc34c92634777517ae6d1deb2ed7e5c4fd9b412920f06026140c6534260b24
|
2026-01-13T00:00:00-05:00
|
Iterative Methods for Full-Scale Gaussian Process Approximations for Large Spatial Data
|
arXiv:2405.14492v5 Announce Type: replace Abstract: Gaussian processes are flexible probabilistic regression models which are widely used in statistics and machine learning. However, a drawback is their limited scalability to large data sets. To alleviate this, full-scale approximations (FSAs) combine predictive process methods and covariance tapering, thus approximating both global and local structures. We show how iterative methods can be used to reduce computational costs in calculating likelihoods, gradients, and predictive distributions with FSAs. In particular, we introduce a novel preconditioner and show theoretically and empirically that it accelerates the conjugate gradient method's convergence speed and mitigates its sensitivity with respect to the FSA parameters and the eigenvalue structure of the original covariance matrix, and we demonstrate empirically that it outperforms a state-of-the-art pivoted Cholesky preconditioner. Furthermore, we introduce an accurate and fast way to calculate predictive variances using stochastic simulation and iterative methods. In addition, we show how our newly proposed fully independent training conditional (FITC) preconditioner can also be used in iterative methods for Vecchia approximations. In our experiments, it outperforms existing state-of-the-art preconditioners for Vecchia approximations. All methods are implemented in a free C++ software library with high-level Python and R packages.
|
https://arxiv.org/abs/2405.14492
|
Academic Papers
|
svg
|
709f2018ee74475c50fff0a24453296eb762f7910bfee8eef8f881c21d9059e2
|
2026-01-13T00:00:00-05:00
|
Berezinskii--Kosterlitz--Thouless transition in a context-sensitive random language model
|
arXiv:2412.01212v2 Announce Type: replace Abstract: Several power-law critical properties involving different statistics in natural languages -- reminiscent of scaling properties of physical systems at or near phase transitions -- have been documented for decades. The recent rise of large language models has added further evidence and excitement by providing intriguing similarities with notions in physics such as scaling laws and emergent abilities. However, specific instances of classes of generative language models that exhibit phase transitions, as understood by the statistical physics community, are lacking. In this work, inspired by the one-dimensional Potts model in statistical physics, we construct a simple probabilistic language model that falls under the class of context-sensitive grammars, which we call the context-sensitive random language model, and numerically demonstrate an unambiguous phase transition in the framework of a natural language model. We explicitly show that a precisely defined order parameter -- that captures symbol frequency biases in the sentences generated by the language model -- changes from strictly zero to a strictly nonzero value (in the infinite-length limit of sentences), implying a mathematical singularity arising when tuning the parameter of the stochastic language model we consider. Furthermore, we identify the phase transition as a variant of the Berezinskii--Kosterlitz--Thouless (BKT) transition, which is known to exhibit critical properties not only at the transition point but also in the entire phase. This finding leads to the possibility that critical properties in natural languages may not require careful fine-tuning nor self-organized criticality, but are generically explained by the underlying connection between language structures and the BKT phases.
|
https://arxiv.org/abs/2412.01212
|
Academic Papers
|
svg
|
f380525f586684ac188c3653a17bc4feb85f3bf8c8697449324c654c07da660b
|
2026-01-13T00:00:00-05:00
|
Asymptotics of Non-Convex Generalized Linear Models in High-Dimensions: A proof of the replica formula
|
arXiv:2502.20003v2 Announce Type: replace Abstract: The analytic characterization of the high-dimensional behavior of optimization for Generalized Linear Models (GLMs) with Gaussian data has been a central focus in statistics and probability in recent years. While convex cases, such as the LASSO, ridge regression, and logistic regression, have been extensively studied using a variety of techniques, the non-convex case remains far less understood despite its significance. A non-rigorous statistical physics framework has provided remarkable predictions for the behavior of high-dimensional optimization problems, but rigorously establishing their validity for non-convex problems has remained a fundamental challenge. In this work, we address this challenge by developing a systematic framework that rigorously proves replica-symmetric formulas for non-convex GLMs and precisely determines the conditions under which these formulas are valid. Remarkably, the rigorous replica-symmetric predictions align exactly with the conjectures made by physicists, and the so-called replicon condition. The originality of our approach lies in connecting two powerful theoretical tools: the Gaussian Min-Max Theorem, which we use to provide precise lower bounds, and Approximate Message Passing (AMP), which is shown to achieve these bounds algorithmically. We demonstrate the utility of this framework through significant applications: (i) by proving the optimality of the Tukey loss over the more commonly used Huber loss under a $\varepsilon$ contaminated data model, (ii) establishing the optimality of negative regularization in high-dimensional non-convex regression and (iii) characterizing the performance limits of linearized AMP algorithms. By rigorously validating statistical physics predictions in non-convex settings, we aim to open new pathways for analyzing increasingly complex optimization landscapes beyond the convex regime.
|
https://arxiv.org/abs/2502.20003
|
Academic Papers
|
svg
|
e339a8ed154bd0ee0011505adbe90a74810aca45dec74be8f97da3e8e52bb30e
|
2026-01-13T00:00:00-05:00
|
A Spatiotemporal, Quasi-experimental Causal Inference Approach to Characterize the Effects of Global Plastic Waste Export and Burning on Air Quality Using Remotely Sensed Data
|
arXiv:2503.04491v3 Announce Type: replace Abstract: Open burning of plastic waste may pose a significant threat to global health by degrading air quality, but quantitative research on this problem -- crucial for policy making -- has been stunted by lack of data. Many low- and middle-income countries, where open burning is most concerning, have little to no air quality monitoring. Here, we leverage remotely sensed data products combined with spatiotemporal causal analytic techniques to evaluate the impact of large-scale plastic waste policies on air quality. Throughout, we study Indonesia before and after 2018, when China halted its import of plastic waste, resulting in diversion of this massive waste stream to other countries. We tailor cutting-edge statistical methods to this setting, estimating effects of increased plastic waste imports on fine particulate matter (PM$_{2.5}$) near waste dump sites in Indonesia as a function of proximity to ports, an induced continuous exposure. We observe strong evidence that monthly PM$_{2.5}$ increased after China's ban (2018-2019) relative to expected business-as-usual (2012-2017), with increases up to 1.68 $\mu$g/m$^3$ (95\% CI = [0.72, 2.48]) at dump sites with medium-high port proximity. Effects were more modest at sites with very high port proximity, possibly reflecting smaller increases in dumping/burning where government oversight is greater.
|
https://arxiv.org/abs/2503.04491
|
Academic Papers
|
svg
|
7d3a01bb42183e51f26225be1178d449a74600b0be85ce67f4e2f5f7d6b1945f
|
2026-01-13T00:00:00-05:00
|
Simulation of Multivariate Extremes: a Wasserstein-Aitchison GAN approach
|
arXiv:2504.21438v3 Announce Type: replace Abstract: Economically responsible mitigation of multivariate extreme risks-such as extreme rainfall over large areas, large simultaneous variations in many stock prices, or widespread breakdowns in transportation systems-requires assessing the resilience of the systems under plausible stress scenarios. This paper uses Extreme Value Theory (EVT) to develop a new approach to simulating such multivariate extreme events. Specifically, we assume that after transformation to a standard scale the distribution of the random phenomenon of interest is multivariate regular varying and use this to provide a sampling procedure for extremes on the original scale. Our procedure combines a Wasserstein-Aitchison Generative Adversarial Network (WA-GAN) to simulate the tail dependence structure on the standard scale with joint modeling of the univariate marginal tails on the original scale. The WA-GAN procedure relies on the angular measure-encoding the distribution on the unit simplex of the angles of extreme observations-after transformation to Aitchison coordinates, which allows the Wasserstein-GAN algorithm to be run in a linear space. Our method is applied both to simulated data under various tail dependence scenarios and to a financial data set from the Kenneth French Data Library. The proposed algorithm demonstrates strong performance compared to existing alternatives in the literature, both in capturing tail dependence structures and in generating accurate new extreme observations.
|
https://arxiv.org/abs/2504.21438
|
Academic Papers
|
svg
|
5a6190dcd3a6ba15947aaeaab095ebb32179ae46518f72820a6308a68e5c0be2
|
2026-01-13T00:00:00-05:00
|
Convergence Rates of Constrained Expected Improvement
|
arXiv:2505.11323v2 Announce Type: replace Abstract: Constrained Bayesian optimization (CBO) methods have seen significant success in black-box optimization with constraints. One of the most commonly used CBO methods is the constrained expected improvement (CEI) algorithm. CEI is a natural extension of expected improvement (EI) when constraints are incorporated. However, the theoretical convergence rate of CEI has not been established. In this work, we study the convergence rate of CEI by analyzing its simple regret upper bound. First, we show that when the objective function $f$ and constraint function $c$ are assumed to each lie in a reproducing kernel Hilbert space (RKHS), CEI achieves the convergence rates of $\mathcal{O} \left(t^{-\frac{1}{2}}\log^{\frac{d+1}{2}}(t) \right) \ \text{and }\ \mathcal{O}\left(t^{\frac{-\nu}{2\nu+d}} \log^{\frac{\nu}{2\nu+d}}(t)\right)$ for the commonly used squared exponential and Mat\'{e}rn kernels ($\nu>\frac{1}{2}$), respectively. Second, we show that when $f$ is assumed to be sampled from Gaussian processes (GPs), CEI achieves similar convergence rates with a high probability. Numerical experiments are performed to validate the theoretical analysis.
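For readers unfamiliar with CEI, the sketch below evaluates the standard constrained expected improvement acquisition (expected improvement times the posterior probability of feasibility) from given GP posterior means and standard deviations; the numbers are hypothetical and no claim is made about the paper's experimental setup.

```python
# Sketch of the constrained expected improvement (CEI) acquisition: expected
# improvement for the objective multiplied by the posterior probability of
# feasibility. GP posterior means/sds at candidate points are assumed given.
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sd_f, mu_c, sd_c, best_feasible):
    # Expected improvement over the best feasible value observed so far.
    z = (best_feasible - mu_f) / sd_f
    ei = sd_f * (z * norm.cdf(z) + norm.pdf(z))
    # Probability that the constraint c(x) <= 0 is satisfied.
    prob_feasible = norm.cdf(-mu_c / sd_c)
    return ei * prob_feasible

mu_f = np.array([0.2, -0.1, 0.05])     # posterior mean of the objective
sd_f = np.array([0.3, 0.4, 0.1])
mu_c = np.array([-0.5, 0.2, -0.1])     # posterior mean of the constraint
sd_c = np.array([0.2, 0.3, 0.2])

acq = constrained_ei(mu_f, sd_f, mu_c, sd_c, best_feasible=0.0)
print("next evaluation point:", int(np.argmax(acq)), "CEI values:", np.round(acq, 4))
```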
|
https://arxiv.org/abs/2505.11323
|
Academic Papers
|
svg
|
782540336aed474bef10e9c0e52d639c3efa8ab77dbc688d602aeb45866bdc12
|
2026-01-13T00:00:00-05:00
|
Model-X Change-Point Detection of Conditional Distribution
|
arXiv:2505.12023v3 Announce Type: replace Abstract: The dynamic nature of many real-world systems can lead to temporal outcome model shifts, causing a deterioration in model accuracy and reliability over time. This requires change-point detection on the outcome models to guide model retraining and adjustments. However, inferring the change point of conditional models is more prone to loss of validity or power than classic detection problems for marginal distributions. This is due to both the temporal covariate shift and the complexity of the outcome model. Moreover, existing methods for conditional change-point detection have notable limitations, including linearity assumptions and low-dimensionality requirements, which make them unsuitable for many real-world applications. To address these challenges, we propose a novel Model-X changE-point detectioN of conditional Distribution (MEND) method, computationally enhanced with a distillation function, for simultaneous change-point detection and localization of the conditional outcome model. We further combine our model with neural networks to accommodate complex nonlinear and high-dimensional settings, and we demonstrate the validity of this extension in both simulations and real data. The theoretical validity of the proposed method is established. Extensive simulation studies and two real-world examples demonstrate the statistical effectiveness and computational scalability of our method as well as its significant improvements over existing methods.
|
https://arxiv.org/abs/2505.12023
|
Academic Papers
|
svg
|
e4e120fd7710f43b2b9c8a0503169ce7c2bd0187a3ab43a774a625f3e9ec7b0c
|
2026-01-13T00:00:00-05:00
|
Testing for sufficient follow-up in cure models with categorical covariates
|
arXiv:2505.13128v2 Announce Type: replace Abstract: In survival analysis, estimating the fraction of 'immune' or 'cured' subjects who will never experience the event of interest, requires a sufficiently long follow-up period. A few statistical tests have been proposed to test the assumption of sufficient follow-up, i.e. whether the right extreme of the censoring distribution exceeds that of the survival time of the uncured subjects. However, in practice the problem remains challenging. To address this, a relaxed notion of 'practically' sufficient follow-up has been introduced recently, suggesting that the follow-up would be considered sufficiently long if the probability for the event occurring after the end of the study is very small. All these existing tests do not incorporate covariate information, which might affect the cure rate and the survival times. We extend the test for 'practically' sufficient follow-up to settings with categorical covariates. While a straightforward intersection-union type test could reject the null hypothesis of insufficient follow-up only if such hypothesis is rejected for all covariate values, in practice this approach is overly conservative and lacks power. To improve upon this, we propose a novel test procedure that relies on the test decision for one properly chosen covariate value. Our approach relies on the assumption that the conditional density of the uncured survival time is a non-increasing function of time in the tail region. We show that both methods yield tests of asymptotically level $\alpha$ and investigate their finite sample performance through simulations. The practical application of the methods is illustrated using a skin melanoma dataset.
|
https://arxiv.org/abs/2505.13128
|
Academic Papers
|
svg
|
cd99aab4d2ab71f4e0197537a0314b5072d2e8519651612d759eceae35994800
|
2026-01-13T00:00:00-05:00
|
Data-Adaptive Automatic Threshold Calibration for Stability Selection
|
arXiv:2505.22012v2 Announce Type: replace Abstract: Stability selection has gained popularity as a method for enhancing the performance of variable selection algorithms while controlling false discovery rates. However, achieving these desirable properties depends on correctly specifying the stable threshold parameter, which can be challenging. An arbitrary choice of this parameter can substantially alter the set of selected variables, as the variables' selection probabilities are inherently data-dependent. To address this issue, we propose Exclusion Automatic Threshold Selection (EATS), a data-adaptive algorithm that streamlines stability selection by automating the threshold specification process. EATS initially filters out potential noise variables using an exclusion probability threshold, derived from applying stability selection to a randomly shuffled version of the dataset. Following this, EATS selects the stable threshold parameter using the elbow method, balancing the marginal utility of including additional variables against the risk of selecting superfluous variables. We evaluate our approach through an extensive simulation study, benchmarking across commonly used variable selection algorithms and static stable threshold values.
|
https://arxiv.org/abs/2505.22012
|
Academic Papers
|
svg
|
a620a2b632d25361480ff3c0234ff967f14596b7b090a65503ccbad48ec92c82
|
2026-01-13T00:00:00-05:00
|
PCA-Guided Quantile Sampling: Preserving Data Structure in Large-Scale Subsampling
|
arXiv:2506.18249v2 Announce Type: replace Abstract: We introduce Principal Component Analysis-guided Quantile Sampling (PCA-QS), a novel sampling framework designed to preserve both the statistical and geometric structure of large-scale datasets. Unlike conventional PCA, which reduces dimensionality at the cost of interpretability, PCA-QS retains the original feature space while using leading principal components solely to guide a quantile-based stratification scheme. This principled design ensures that sampling remains representative without distorting the underlying data semantics. We establish rigorous theoretical guarantees, deriving convergence rates for empirical quantiles, Kullback-Leibler divergence, and Wasserstein distance, thus quantifying the distributional fidelity of PCA-QS samples. Practical guidelines for selecting the number of principal components, quantile bins, and sampling rates are provided based on these results. Extensive empirical studies on both synthetic and real-world datasets show that PCA-QS consistently outperforms simple random sampling, yielding better structure preservation and improved downstream model performance. Together, these contributions position PCA-QS as a scalable, interpretable, and theoretically grounded solution for efficient data summarization in modern machine learning workflows.
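A minimal sketch of the guiding idea, assuming a single leading principal component and an equal within-stratum sampling rate (the paper's guidelines for choosing components, bins, and rates are more refined):

```python
# Sketch of PCA-guided quantile sampling: leading principal-component scores
# are used only to form quantile strata; sampled rows keep the original
# feature space. Bin counts and rates below are illustrative choices.
import numpy as np

def pca_quantile_sample(X, n_bins=10, rate=0.01, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # leading PC direction
    scores = Xc @ Vt[0]
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, n_bins - 1)
    keep = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        m = max(1, int(round(rate * len(idx))))
        keep.append(rng.choice(idx, size=m, replace=False))
    return X[np.concatenate(keep)]

rng = np.random.default_rng(5)
X = rng.normal(size=(100_000, 20)) @ rng.normal(size=(20, 20))   # correlated data
sample = pca_quantile_sample(X, n_bins=10, rate=0.01, rng=rng)
print("subsample shape:", sample.shape)
```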
|
https://arxiv.org/abs/2506.18249
|
Academic Papers
|
svg
|
761555f337378d67769f62749b3f799b89b7547a46b91e50284291cd51c65aca
|
2026-01-13T00:00:00-05:00
|
Stable Minima of ReLU Neural Networks Suffer from the Curse of Dimensionality: The Neural Shattering Phenomenon
|
arXiv:2506.20779v4 Announce Type: replace Abstract: We study the implicit bias of flatness / low (loss) curvature and its effects on generalization in two-layer overparameterized ReLU networks with multivariate inputs -- a problem well motivated by the minima stability and edge-of-stability phenomena in gradient-descent training. Existing work either requires interpolation or focuses only on univariate inputs. This paper presents new and somewhat surprising theoretical results for multivariate inputs. On two natural settings (1) generalization gap for flat solutions, and (2) mean-squared error (MSE) in nonparametric function estimation by stable minima, we prove upper and lower bounds, which establish that while flatness does imply generalization, the resulting rates of convergence necessarily deteriorate exponentially as the input dimension grows. This gives an exponential separation between the flat solutions compared to low-norm solutions (i.e., weight decay), which are known not to suffer from the curse of dimensionality. In particular, our minimax lower bound construction, based on a novel packing argument with boundary-localized ReLU neurons, reveals how flat solutions can exploit a kind of "neural shattering" where neurons rarely activate, but with high weight magnitudes. This leads to poor performance in high dimensions. We corroborate these theoretical findings with extensive numerical simulations. To the best of our knowledge, our analysis provides the first systematic explanation for why flat minima may fail to generalize in high dimensions.
|
https://arxiv.org/abs/2506.20779
|
Academic Papers
|
svg
|
378f84a7799b0d60ad0e4ac22e1a6020b0dba2c95fb7b167e1de0a9557ba22f1
|
2026-01-13T00:00:00-05:00
|
When Less Is More: Binary Feedback Can Outperform Ordinal Comparisons in Ranking Recovery
|
arXiv:2507.01613v4 Announce Type: replace Abstract: Paired comparison data, where users evaluate items in pairs, play a central role in ranking and preference learning tasks. While ordinal comparison data intuitively offer richer information than binary comparisons, this paper challenges that conventional wisdom. We propose a general parametric framework for modeling ordinal paired comparisons without ties. The model adopts a generalized additive structure, featuring a link function that quantifies the preference difference between two items and a pattern function that governs the distribution over ordinal response levels. This framework encompasses classical binary comparison models as special cases, by treating binary responses as binarized versions of ordinal data. Within this framework, we show that binarizing ordinal data can significantly improve the accuracy of ranking recovery. Specifically, we prove that under the counting algorithm, the ranking error associated with binary comparisons exhibits a faster exponential convergence rate than that of ordinal data. Furthermore, we characterize a substantial performance gap between binary and ordinal data in terms of a signal-to-noise ratio (SNR) determined by the pattern function. We identify the pattern function that minimizes the SNR and maximizes the benefit of binarization. Extensive simulations and a real application on the MovieLens dataset further corroborate our theoretical findings.
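The counting algorithm referenced above is easy to state in code: collapse each ordinal response to a binary preference and rank items by empirical win rates. The sketch below does this on simulated comparisons; the four-level ordinal response model is an illustrative assumption, not the paper's general pattern-function framework.

```python
# Sketch of the counting (win-rate) algorithm on binarized paired comparisons.
# The 4-level ordinal response model below is an illustrative assumption, not
# the paper's general pattern-function framework.
import numpy as np

rng = np.random.default_rng(6)
n_items, n_comparisons = 8, 5000
theta = np.linspace(1.0, -1.0, n_items)            # latent item merits

pairs = rng.integers(0, n_items, size=(n_comparisons, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]

# Ordinal response from the merit difference plus logistic noise, then
# binarized: the upper half of the ordinal scale means the first item wins.
diff = theta[pairs[:, 0]] - theta[pairs[:, 1]]
ordinal = np.digitize(diff + rng.logistic(size=len(pairs)), bins=[-1.0, 0.0, 1.0])
first_wins = ordinal >= 2

wins = np.zeros(n_items)
appearances = np.zeros(n_items)
np.add.at(wins, pairs[:, 0], first_wins)
np.add.at(wins, pairs[:, 1], ~first_wins)
np.add.at(appearances, pairs[:, 0], 1)
np.add.at(appearances, pairs[:, 1], 1)

ranking = np.argsort(-wins / appearances)           # rank items by win rate
print("recovered ranking (best to worst):", ranking)
```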
|
https://arxiv.org/abs/2507.01613
|
Academic Papers
|
svg
|
32d79a3a9f80c9aadace154024ce0709e47afe3c21456e73ab2ac216cc8bb8f1
|
2026-01-13T00:00:00-05:00
|
Bootstrapped Control Limits for Score-Based Concept Drift Control Charts
|
arXiv:2507.16749v2 Announce Type: replace Abstract: Monitoring for changes in a predictive relationship represented by a fitted supervised learning model (i.e., concept drift detection) is a widespread problem in modern data-driven applications. A general and powerful Fisher score-based concept drift approach was recently proposed, in which detecting concept drift reduces to detecting changes in the mean of the model's score vector using a multivariate exponentially weighted moving average (MEWMA). To implement the approach, the initial data must be split into two subsets. The first subset serves as the training sample to which the model is fit, and the second subset serves as an out-of-sample test set from which the MEWMA control limit (CL) is determined. In this paper, we retain the same score-based MEWMA monitoring statistic as the existing method and focus instead on improving the computation of the control limit. We develop a novel nested bootstrap procedure for calibrating the CL that allows the entire initial sample to be used for model fitting, thereby yielding a more accurate baseline model while eliminating the need for a large holdout set. We show that a standard nested bootstrap substantially underestimates the variability of the monitoring statistic and develop a 0.632-like correction that appropriately accounts for this. We demonstrate the advantages with numerical examples.
|
https://arxiv.org/abs/2507.16749
|
Academic Papers
|
svg
|
59a0bed9581195feaec82badd92bcbe19c97fa49a22fc08d470c6851d6793191
|
2026-01-13T00:00:00-05:00
|
Bag of Coins: A Statistical Probe into Neural Confidence Structures
|
arXiv:2507.19774v2 Announce Type: replace Abstract: Modern neural networks often produce miscalibrated confidence scores and struggle to detect out-of-distribution (OOD) inputs, while most existing methods post-process outputs without testing internal consistency. We introduce the Bag-of-Coins (BoC) probe, a non-parametric diagnostic of logit coherence that compares softmax confidence $\hat p$ to an aggregate of pairwise Luce-style dominance probabilities $\bar q$, yielding a deterministic coherence score and a p-value-based structural score. Across ViT, ResNet, and RoBERTa with ID/OOD test sets, the coherence gap $\Delta=\bar q-\hat p$ reveals clear ID/OOD separation for ViT (ID ${\sim}0.1$-$0.2$, OOD ${\sim}0.5$-$0.6$) but substantial overlap for ResNet and RoBERTa (both ${\sim}0$), indicating architecture-dependent uncertainty geometry. As a practical method, BoC improves calibration only when the base model is poorly calibrated (ViT: ECE $0.024$ vs.\ $0.180$) and underperforms standard calibrators (ECE ${\sim}0.005$), while for OOD detection it fails across architectures (AUROC $0.020$-$0.253$) compared to standard scores ($0.75$-$0.99$). We position BoC as a research diagnostic for interrogating how architectures encode uncertainty in logit geometry rather than a production calibration or OOD detection method.
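A minimal sketch of the coherence gap described above, computed from a single logit vector; the logit values are made up, and the full BoC probe (including the p-value-based structural score) is not reproduced here.

```python
# Sketch of the coherence gap from the abstract: softmax confidence of the
# predicted class versus the average of its pairwise (Luce-style) dominance
# probabilities over the remaining classes. Logits below are made up.
import numpy as np

def coherence_gap(logits):
    z = logits - logits.max()                    # stable softmax
    p = np.exp(z) / np.exp(z).sum()
    k = int(np.argmax(p))
    others = np.delete(np.arange(len(z)), k)
    # Pairwise two-way softmax of the top class against each other class.
    q = np.exp(z[k]) / (np.exp(z[k]) + np.exp(z[others]))
    return q.mean() - p[k]

peaked = np.array([5.0, 0.5, 0.2, 0.1])          # one clearly dominant logit
flat = np.array([1.2, 1.1, 1.0, 0.9])            # nearly uniform logits
print("gap (peaked logits):", round(coherence_gap(peaked), 3))
print("gap (flat logits):  ", round(coherence_gap(flat), 3))
```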
|
https://arxiv.org/abs/2507.19774
|
Academic Papers
|
svg
|
da7a8fa61fafc53953473bf7e14c5887e4c5782b7496e0c5b598ab9ec4d1b9df
|
2026-01-13T00:00:00-05:00
|
From Sublinear to Linear: Fast Convergence in Deep Networks via Locally Polyak-Lojasiewicz Regions
|
arXiv:2507.21429v2 Announce Type: replace Abstract: Gradient descent (GD) on deep neural network loss landscapes is non-convex, yet often converges far faster in practice than classical guarantees suggest. Prior work shows that within locally quasi-convex regions (LQCRs), GD converges to stationary points at sublinear rates, leaving the commonly observed near-exponential training dynamics unexplained. We show that, under a mild local Neural Tangent Kernel (NTK) stability assumption, the loss satisfies a PL-type error bound within these regions, yielding a Locally Polyak-Lojasiewicz Region (LPLR) in which the squared gradient norm controls the suboptimality gap. For properly initialized finite-width networks, we show that under local NTK stability this PL-type mechanism holds around initialization and establish linear convergence of GD as long as the iterates remain within the resulting LPLR. Empirically, we observe PL-like scaling and linear-rate loss decay in controlled full-batch training and in a ResNet-style CNN trained with mini-batch SGD on a CIFAR-10 subset, indicating that LPLR signatures can persist under modern architectures and stochastic optimization. Overall, the results connect local geometric structure, local NTK stability, and fast optimization rates in a finite-width setting.
|
https://arxiv.org/abs/2507.21429
|
Academic Papers
|
svg
|
47d20798b6b52f167c0c4eb47e20320f00c4bb23be439e9d79e129411edebb5d
|
2026-01-13T00:00:00-05:00
|
Bayesian inference of antibody evolutionary dynamics using multitype branching processes
|
arXiv:2508.09519v2 Announce Type: replace Abstract: When our immune system encounters foreign antigens (i.e., from pathogens), the B cells that produce our antibodies undergo a cyclic process of proliferation, mutation, and selection, improving their ability to bind to the specific antigen. Immunologists have recently developed powerful experimental techniques to investigate this process in mouse models. In one such experiment, mice are engineered with a monoclonal B-cell precursor and immunized with a model antigen. B cells are sampled from sacrificed mice after the immune response has progressed, and the mutated genetic loci encoding antibodies are sequenced. This experiment allows parallel replay of antibody evolution, but produces data at only one time point; we are unable to observe the evolutionary trajectories that lead to optimized antibody affinity in each mouse. To address this, we model antibody evolution as a multitype branching process and integrate over unobserved histories conditioned on phylogenetic signal in sequence data, leveraging parallel experimental replays for parameter inference. We infer the functional relationship between B-cell fitness and antigen binding affinity in a Bayesian framework, equipped with an efficient likelihood calculation algorithm and Markov chain Monte Carlo posterior approximation. In a simulation study, we demonstrate that a sigmoidal relationship between fitness and binding affinity can be recovered from realizations of the branching process. We then perform inference for experimental data from 52 replayed B-cell lineages sampled 15 days after immunization, yielding a total of 3,758 sampled B cells. The recovered sigmoidal curve indicates that the fitness of high-affinity B cells is over six times larger than that of low-affinity B cells, with a sharp transition from low to high fitness values as affinity increases.
|
https://arxiv.org/abs/2508.09519
|
Academic Papers
|
svg
|
68718d0e11429812d43675abcfebb914b5a98f6ebbe4f5b86a9e724d203213e1
|
2026-01-13T00:00:00-05:00
|
Precision Dose-Finding Design for Phase I Oncology Trials by Integrating Pharmacology Data
|
arXiv:2509.05120v2 Announce Type: replace Abstract: Phase I oncology trials aim to identify a safe dose - often the maximum tolerated dose (MTD) - for subsequent studies. Conventional designs focus on population-level toxicity modeling, with recent attention on leveraging pharmacokinetic (PK) data to improve dose selection. We propose the Precision Dose-Finding (PDF) design, a novel Bayesian phase I framework that integrates individual patient PK profiles into the dose-finding process. By incorporating patient-specific PK parameters (such as volume of distribution and elimination rate), PDF models toxicity risk at the individual level, in contrast to traditional methods that ignore inter-patient variability. The trial is structured in two stages: an initial training stage to update model parameters using cohort-based dose escalation, and a subsequent test stage in which doses for new patients are chosen based on each patient's own PK-predicted toxicity probability. This two-stage approach enables truly personalized dose assignment while maintaining rigorous safety oversight. Extensive simulation studies demonstrate the feasibility of PDF and suggest that it provides improved safety and dosing precision relative to the continual reassessment method (CRM). The PDF design thus offers a refined dose-finding strategy that tailors the MTD to individual patients, aligning phase I trials with the ideals of precision medicine.
|
https://arxiv.org/abs/2509.05120
|
Academic Papers
|
svg
|
40b4feaa614d707800382efa6f68ad31dd96b9530a8595bb7efb968eb484e27e
|
2026-01-13T00:00:00-05:00
|
Copula-Stein Discrepancy: A Generator-Based Stein Operator for Archimedean Dependence
|
arXiv:2510.24056v2 Announce Type: replace Abstract: Kernel Stein discrepancies (KSDs) are widely used for goodness-of-fit testing, but standard KSDs can be insensitive to higher-order dependence features such as tail dependence. We introduce the Copula-Stein Discrepancy (CSD), which defines a Stein operator directly on the copula density to target dependence geometry rather than the joint score. For Archimedean copulas, CSD admits a closed-form Stein kernel derived from the scalar generator. We prove that CSD metrizes weak convergence of copula distributions, admits an empirical estimator with minimax-optimal rate $O_P(n^{-1/2})$, and is sensitive to differences in tail dependence coefficients. We further extend the framework beyond Archimedean families to general copulas, including elliptical and vine constructions. Computationally, exact CSD kernel evaluation is linear in dimension, and a random-feature approximation reduces the quadratic $O(n^2)$ sample scaling to near-linear $\tilde{O}(n)$; experiments show near-nominal Type~I error, increasing power, and rapid concentration of the approximation toward the exact $\widehat{\mathrm{CSD}}_n^2$ as the number of features grows.
|
https://arxiv.org/abs/2510.24056
|
Academic Papers
|
svg
|
c30902bd9f0af1cb3e941905496e90dd4343e931b905a2ea58fd8e57804b8456
|
2026-01-13T00:00:00-05:00
|
Discretization approximation: An alternative to Monte Carlo in Bayesian computation
|
arXiv:2512.11475v2 Announce Type: replace Abstract: In this paper we propose a new deterministic approximation method, called discretization approximation, for Bayesian computation. Discretization approximation is very simple to understand and to implement: it only requires calculating posterior density values as probability masses at pre-specified support points. The resulting discrete distribution can be a good approximation to the target posterior distribution. All posterior quantities, including means, standard deviations, and quantiles, can be approximated by those of this completely known discrete distribution. We establish the convergence rate of discretization approximation as the number of support points goes to infinity. If the support points are generated from quasi-Monte Carlo sequences, then the rate is actually the same as that in integration approximation, generally faster than the optimal statistical rate. In this sense, discretization approximation is superior to the popular Markov chain Monte Carlo method. We also provide random sampling and representation point construction methods based on discretization approximation. Numerical examples including some benchmarks demonstrate that the proposed method performs quite well for both low-dimensional and high-dimensional cases.
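A minimal sketch of the procedure on a toy one-dimensional posterior, assuming an equally spaced grid of support points rather than a quasi-Monte Carlo sequence:

```python
# Minimal sketch of discretization approximation: evaluate an unnormalized
# posterior density at pre-specified support points, normalize the values into
# probability masses, and read posterior summaries off the discrete distribution.
import numpy as np

# Toy conjugate setting so the answer can be checked: N(theta, 1) likelihood,
# N(0, 10^2) prior, observed data y.
y = np.array([1.2, 0.7, 1.9, 1.1])

def log_post(theta):
    loglik = -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)
    logprior = -0.5 * (theta / 10.0) ** 2
    return loglik + logprior

grid = np.linspace(-5, 5, 2001)                  # pre-specified support points
lp = log_post(grid)
w = np.exp(lp - lp.max())
w /= w.sum()                                     # probability masses

post_mean = (w * grid).sum()
post_sd = np.sqrt((w * (grid - post_mean) ** 2).sum())
post_q = grid[np.searchsorted(np.cumsum(w), [0.025, 0.5, 0.975])]
print(f"mean={post_mean:.3f}  sd={post_sd:.3f}  quantiles={np.round(post_q, 3)}")
```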
|
https://arxiv.org/abs/2512.11475
|
Academic Papers
|
svg
|
4c4c62f7a785e57e67286468d04d148cd39435060f06e4586a9b729fd0bb0782
|
2026-01-13T00:00:00-05:00
|
A Concentration Bound for TD(0) with Function Approximation
|
arXiv:2312.10424v4 Announce Type: replace-cross Abstract: We derive a uniform all-time concentration bound of the type 'for all $n \geq n_0$ for some $n_0$' for TD(0) with linear function approximation. We work with online TD learning with samples from a single sample path of the underlying Markov chain. This makes our analysis significantly different from offline TD learning or TD learning with access to independent samples from the stationary distribution of the Markov chain. We treat TD(0) as a contractive stochastic approximation algorithm, with both martingale and Markov noises. Markov noise is handled using the Poisson equation and the lack of almost sure guarantees on boundedness of iterates is handled using the concept of relaxed concentration inequalities.
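For context, the sketch below runs online TD(0) with linear function approximation along a single trajectory of a small Markov reward process; the chain, features, and step sizes are illustrative assumptions and are unrelated to the paper's concentration analysis.

```python
# Sketch of online TD(0) with linear function approximation on a single
# trajectory of a small Markov reward process. Chain, features, and step
# sizes are illustrative assumptions, unrelated to the paper's analysis.
import numpy as np

rng = np.random.default_rng(7)
n_states, d = 5, 3
P = rng.dirichlet(np.ones(n_states), size=n_states)   # transition matrix
r = rng.normal(size=n_states)                          # expected rewards
Phi = rng.normal(size=(n_states, d))                   # state features
gamma = 0.9

w = np.zeros(d)
s = 0
for t in range(1, 100_000):
    s_next = rng.choice(n_states, p=P[s])
    reward = r[s] + 0.1 * rng.normal()
    # Temporal-difference error and stochastic-approximation update.
    td_error = reward + gamma * Phi[s_next] @ w - Phi[s] @ w
    w += t ** -0.75 * td_error * Phi[s]
    s = s_next

# With d < n_states, TD(0) converges to the best linear approximation of the
# value function under the stationary distribution, not the exact values.
v_true = np.linalg.solve(np.eye(n_states) - gamma * P, r)
print("TD estimate :", np.round(Phi @ w, 3))
print("exact values:", np.round(v_true, 3))
```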
|
https://arxiv.org/abs/2312.10424
|
Academic Papers
|
svg
|
e28faff5eb4189b2b19f63ff56e8ac2c4da202c7886b684a40396f583feecb43
|
2026-01-13T00:00:00-05:00
|
Reimagining Anomalies: What If Anomalies Were Normal?
|
arXiv:2402.14469v2 Announce Type: replace-cross Abstract: Deep learning-based methods have achieved a breakthrough in image anomaly detection, but their complexity introduces a considerable challenge to understanding why an instance is predicted to be anomalous. We introduce a novel explanation method that generates multiple alternative modifications for each anomaly, capturing diverse concepts of anomalousness. Each modification is trained to be perceived as normal by the anomaly detector. The method provides a semantic explanation of the mechanism that triggered the detector, allowing users to explore ``what-if scenarios.'' Qualitative and quantitative analyses across various image datasets demonstrate that applying this method to state-of-the-art detectors provides high-quality semantic explanations.
|
https://arxiv.org/abs/2402.14469
|
Academic Papers
|
svg
|
2fac5c4edcb0442a4c39ac58322daf8667a482fe11f90f5a14bd3d81e96070ad
|
2026-01-13T00:00:00-05:00
|
Data-Driven Knowledge Transfer in Batch $Q^*$ Learning
|
arXiv:2404.15209v3 Announce Type: replace-cross Abstract: In data-driven decision-making in marketing, healthcare, and education, it is desirable to utilize a large amount of data from existing ventures to navigate high-dimensional feature spaces and address data scarcity in new ventures. We explore knowledge transfer in dynamic decision-making by concentrating on batch stationary environments and formally defining task discrepancies through the lens of Markov decision processes (MDPs). We propose a framework of Transferred Fitted $Q$-Iteration algorithm with general function approximation, enabling the direct estimation of the optimal action-state function $Q^*$ using both target and source data. We establish the relationship between statistical performance and MDP task discrepancy under sieve approximation, shedding light on the impact of source and target sample sizes and task discrepancy on the effectiveness of knowledge transfer. We show that the final learning error of the $Q^*$ function is significantly improved from the single task rate both theoretically and empirically.
|
https://arxiv.org/abs/2404.15209
|
Academic Papers
|
svg
|
198cd83090655e6118838e24e1ef7b5b9d6b9d0491655da1e9458b55db8f5052
|
2026-01-13T00:00:00-05:00
|
Low-Rank Online Dynamic Assortment with Dual Contextual Information
|
arXiv:2404.17592v2 Announce Type: replace-cross Abstract: As e-commerce expands, delivering real-time personalized recommendations from vast catalogs poses a critical challenge for retail platforms. Maximizing revenue requires careful consideration of both individual customer characteristics and available item features to continuously optimize assortments over time. In this paper, we consider the dynamic assortment problem with dual contexts -- user and item features. In high-dimensional scenarios, the quadratic growth of dimensions complicates computation and estimation. To tackle this challenge, we introduce a new low-rank dynamic assortment model to transform this problem into a manageable scale. Then we propose an efficient algorithm that estimates the intrinsic subspaces and utilizes the upper confidence bound approach to address the exploration-exploitation trade-off in online decision making. Theoretically, we establish a regret bound of $\tilde{O}((d_1+d_2)r\sqrt{T})$, where $d_1, d_2$ represent the dimensions of the user and item features respectively, $r$ is the rank of the parameter matrix, and $T$ denotes the time horizon. This bound represents a substantial improvement over prior literature, achieved by leveraging the low-rank structure. Extensive simulations and an application to the Expedia hotel recommendation dataset further demonstrate the advantages of our proposed method.
|
https://arxiv.org/abs/2404.17592
|
Academic Papers
|
svg
|
8f85745557598a07814e3403f4d8a6dbfb7cd216f1b046ddd7364af1bbc90f4c
|
2026-01-13T00:00:00-05:00
|
Model Privacy: A Unified Framework to Understand Model Stealing Attacks and Defenses
|
arXiv:2502.15567v2 Announce Type: replace-cross Abstract: The use of machine learning (ML) has become increasingly prevalent in various domains, highlighting the importance of understanding and ensuring its safety. One pressing concern is the vulnerability of ML applications to model stealing attacks. These attacks involve adversaries attempting to recover a learned model through limited query-response interactions, such as those found in cloud-based services or on-chip artificial intelligence interfaces. While existing literature proposes various attack and defense strategies, these often lack a theoretical foundation and standardized evaluation criteria. In response, this work presents a framework called ``Model Privacy'', providing a foundation for comprehensively analyzing model stealing attacks and defenses. We establish a rigorous formulation for the threat model and objectives, propose methods to quantify the goodness of attack and defense strategies, and analyze the fundamental tradeoffs between utility and privacy in ML models. Our developed theory offers valuable insights into enhancing the security of ML models, especially highlighting the importance of the attack-specific structure of perturbations for effective defenses. We demonstrate the application of model privacy from the defender's perspective through various learning scenarios. Extensive experiments corroborate the insights and the effectiveness of defense mechanisms developed under the proposed framework.
|
https://arxiv.org/abs/2502.15567
|
Academic Papers
|
svg
|
f34c229beaa378af20132d181959e35731df1bb16ac61184ceeaa32f7153b85a
|
2026-01-13T00:00:00-05:00
|
The Power of Iterative Filtering for Supervised Learning with (Heavy) Contamination
|
arXiv:2505.20177v2 Announce Type: replace-cross Abstract: Inspired by recent work on learning with distribution shift, we give a general outlier removal algorithm called iterative polynomial filtering and show a number of striking applications for supervised learning with contamination: (1) We show that any function class that can be approximated by low-degree polynomials with respect to a hypercontractive distribution can be efficiently learned under bounded contamination (also known as nasty noise). This is a surprising resolution to a longstanding gap between the complexity of agnostic learning and learning with contamination, as it was widely believed that low-degree approximators only implied tolerance to label noise. In particular, it implies the first efficient algorithm for learning halfspaces with $\eta$-bounded contamination up to error $2\eta+\epsilon$ with respect to the Gaussian distribution. (2) For any function class that admits the (stronger) notion of sandwiching approximators, we obtain near-optimal learning guarantees even with respect to heavy additive contamination, where far more than $1/2$ of the training set may be added adversarially. Prior related work held only for regression and in a list-decodable setting. (3) We obtain the first efficient algorithms for tolerant testable learning of functions of halfspaces with respect to any fixed log-concave distribution. Even the non-tolerant case for a single halfspace in this setting had remained open. These results significantly advance our understanding of efficient supervised learning under contamination, a setting that has been much less studied than its unsupervised counterpart.
|
https://arxiv.org/abs/2505.20177
|
Academic Papers
|
svg
|
492db51240ec75c238ec129fc5d08881f2f983b6c45dd0e6799b06c191131544
|
2026-01-13T00:00:00-05:00
|
ORACLE: Explaining Feature Interactions in Neural Networks with ANOVA
|
arXiv:2509.10825v4 Announce Type: replace-cross Abstract: We introduce ORACLE, a framework for explaining neural networks on tabular data and scientific factorial designs. ORACLE summarizes a trained network's prediction surface with main effects and pairwise interactions by treating the network as a black-box response, discretizing the inputs onto a grid, and fitting an orthogonal factorial (ANOVA-style) surrogate -- the $L^2$ orthogonal projection of the model response onto a finite-dimensional factorial subspace. A simple centering and $\mu$-rebalancing step then expresses this surrogate as main- and interaction-effect tables that remain faithful to the original model in the $L^2$ sense. The resulting grid-based interaction maps are easy to visualize, comparable across backbones, and directly aligned with classical design-of-experiments practice. On synthetic factorial benchmarks and low- to medium-dimensional tabular regression tasks, ORACLE more accurately recovers ground-truth interaction structure and hotspots than Monte Carlo SHAP-family interaction methods, as measured by ranking, localization, and cross-backbone stability. We also discuss its scope in latent image and text settings: grid-based factorial surrogates are most effective when features admit an interpretable factorial structure, making ORACLE particularly well-suited to scientific and engineering workflows that require stable DoE-style interaction summaries.
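A minimal sketch of the grid-based ANOVA-style idea described above: discretize two inputs onto a grid, average a black-box response per cell, and double-center to separate main effects from the pairwise interaction. The response function below is a toy stand-in rather than a trained network, and this is not the ORACLE implementation.

```python
# Toy sketch of an ANOVA-style surrogate for a black-box response: bin two
# features onto a grid, average the model output per cell, then double-center
# to separate main effects from the pairwise interaction. Not the ORACLE
# code; f() stands in for a trained network's prediction surface.
import numpy as np

def f(x1, x2):                        # black-box response (toy stand-in)
    return 2.0 * x1 - x2 + 1.5 * x1 * x2

levels = np.linspace(-1.0, 1.0, 5)    # 5-point grid per feature
cell = np.array([[f(a, b) for b in levels] for a in levels])

grand = cell.mean()
main_1 = cell.mean(axis=1) - grand             # main effect of feature 1
main_2 = cell.mean(axis=0) - grand             # main effect of feature 2
interaction = cell - main_1[:, None] - main_2[None, :] - grand

print("main effect, feature 1:", np.round(main_1, 3))
print("main effect, feature 2:", np.round(main_2, 3))
print("pairwise interaction map:\n", np.round(interaction, 3))
# For this toy surface the interaction map is proportional to x1*x2 on the grid.
```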
|
https://arxiv.org/abs/2509.10825
|
Academic Papers
|
svg
|
4c62708432c8216ef314e9650c465010f4976d881efcd61e4c850dbbe81812b3
|
2026-01-13T00:00:00-05:00
|
Wide Neural Networks as a Baseline for the Computational No-Coincidence Conjecture
|
arXiv:2510.06527v2 Announce Type: replace-cross Abstract: We establish that randomly initialized neural networks, with large width and a natural choice of hyperparameters, have nearly independent outputs exactly when their activation function is nonlinear with zero mean under the Gaussian measure: $\mathbb{E}_{z \sim \mathcal{N}(0,1)}[\sigma(z)]=0$. For example, this includes ReLU and GeLU with an additive shift, as well as tanh, but not ReLU or GeLU by themselves. Because of their nearly independent outputs, we propose neural networks with zero-mean activation functions as a promising candidate for the Alignment Research Center's computational no-coincidence conjecture -- a conjecture that aims to measure the limits of AI interpretability.
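A quick Monte Carlo check of the stated zero-mean condition $\mathbb{E}_{z \sim \mathcal{N}(0,1)}[\sigma(z)]=0$: tanh and an additively shifted GeLU satisfy it, while plain ReLU has Gaussian mean $1/\sqrt{2\pi} \approx 0.399$. This only illustrates the condition numerically; it is not the paper's argument about output independence.

```python
# Numerical check of the zero-mean condition E_{z~N(0,1)}[sigma(z)] = 0 from
# the abstract: tanh and an additively shifted GeLU satisfy it, plain ReLU
# does not (its Gaussian mean is 1/sqrt(2*pi) ~= 0.399).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
z = rng.standard_normal(2_000_000)

relu = np.maximum(z, 0.0)
gelu = z * norm.cdf(z)                        # exact GeLU(z) = z * Phi(z)
gelu_shifted = gelu - gelu.mean()             # additive shift to zero mean
activations = {"tanh": np.tanh(z), "ReLU": relu,
               "GeLU": gelu, "shifted GeLU": gelu_shifted}

for name, vals in activations.items():
    print(f"E[{name}(z)] ~= {vals.mean():+.4f}")
```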
|
https://arxiv.org/abs/2510.06527
|
Academic Papers
|
svg
|
e4059efb0e57de3c553439c6863e631ec3479f26c5e080773ba8f98ea2213f3e
|
2026-01-13T00:00:00-05:00
|
SAVeD: Semantic Aware Version Discovery
|
arXiv:2511.17298v2 Announce Type: replace-cross Abstract: Our work introduces SAVeD (Semantically Aware Version Detection), a contrastive learning-based framework for identifying versions of structured datasets without relying on metadata, labels, or integration-based assumptions. SAVeD addresses a common challenge in data science: repeated labor caused by the difficulty of identifying similar work or transformations on datasets. SAVeD employs a modified SimCLR pipeline, generating augmented table views through random transformations (e.g., row deletion, encoding perturbations). These views are embedded via a custom transformer encoder and contrasted in latent space to optimize semantic similarity. Our model learns to minimize distances between augmented views of the same dataset and maximize those between unrelated tables. We evaluate performance using validation accuracy and separation, defined respectively as the proportion of correctly classified version/non-version pairs on a hold-out set, and the difference between average similarities of versioned and non-versioned tables (defined by a benchmark, and not provided to the model). Our experiments span five canonical datasets from the Semantic Versioning in Databases Benchmark, and demonstrate substantial gains post-training. SAVeD achieves significantly higher accuracy on completely unseen tables and a significant boost in separation scores, confirming its capability to distinguish semantically altered versions. Compared to untrained baselines and prior state-of-the-art dataset-discovery methods like Starmie, our custom encoder achieves competitive or superior results.
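A toy sketch of the contrastive setup described above, assuming stand-ins for every component: random row deletion as the augmentation, per-column statistics in place of the transformer encoder, and an NT-Xent-style loss in plain numpy. It is not the SAVeD pipeline.

```python
# Toy numpy sketch of the contrastive setup: two augmented views per table
# (random row deletion), a hand-rolled embedding (column means/stds instead
# of the paper's transformer encoder), and an NT-Xent-style loss that pulls
# views of the same table together.
import numpy as np

rng = np.random.default_rng(0)

def augment(table, drop_frac=0.3):
    """Random row deletion as one of the augmentations mentioned."""
    keep = rng.random(table.shape[0]) > drop_frac
    return table[keep] if keep.any() else table

def embed(table):
    """Stand-in embedding: per-column mean and std, L2-normalized."""
    v = np.concatenate([table.mean(axis=0), table.std(axis=0)])
    return v / (np.linalg.norm(v) + 1e-12)

def nt_xent(emb, temperature=0.5):
    """emb: (2N, d); rows 2i and 2i+1 are views of the same table."""
    sim = emb @ emb.T / temperature
    np.fill_diagonal(sim, -np.inf)             # exclude self-similarity
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = np.arange(len(emb)) ^ 1              # index of the positive pair
    return -logprob[np.arange(len(emb)), pos].mean()

tables = [rng.normal(loc=i, size=(50, 4)) for i in range(8)]   # 8 toy tables
views = np.stack([embed(augment(t)) for t in tables for _ in range(2)])
print("NT-Xent loss on toy batch:", round(float(nt_xent(views)), 4))
```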
|
https://arxiv.org/abs/2511.17298
|
Academic Papers
|
svg
|
d04da4cd46056f8eb778cc16e5c7b31069c45e0b73c8592387a540cfa6ee6c3d
|
2026-01-13T00:00:00-05:00
|
A Regime-Aware Fusion Framework for Time Series Classification
|
arXiv:2512.15378v2 Announce Type: replace-cross Abstract: Kernel-based methods such as Rocket are among the most effective default approaches for univariate time series classification (TSC), yet they do not perform equally well across all datasets. We revisit the long-standing intuition that different representations capture complementary structure and show that selectively fusing them can yield consistent improvements over Rocket on specific, systematically identifiable kinds of datasets. We introduce Fusion-3 (F3), a lightweight framework that adaptively fuses Rocket, SAX, and SFA representations. To understand when fusion helps, we cluster UCR datasets into six groups using meta-features capturing series length, spectral structure, roughness, and class imbalance, and treat these clusters as interpretable data-structure regimes. Our analysis shows that fusion typically outperforms strong baselines in regimes with structured variability or rich frequency content, while offering diminishing returns in highly irregular or outlier-heavy settings. To support these findings, we combine three complementary analyses: non-parametric paired statistics across datasets, ablation studies isolating the roles of individual representations, and attribution via SHAP to identify which dataset properties predict fusion gains. Sample-level case studies further reveal the underlying mechanism: fusion primarily improves performance by rescuing specific errors, with adaptive increases in frequency-domain weighting precisely where corrections occur. Using 5-fold cross-validation on the 113 UCR datasets, F3 yields small but consistent average improvements over Rocket, supported by frequentist and Bayesian evidence and accompanied by clearly identifiable failure cases. Our results show that selectively applied fusion provides a dependable and interpretable extension to strong kernel-based methods, correcting their weaknesses precisely where the data support it.
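A minimal sketch of probability-level fusion across time-series representations, with crude stand-in views (raw values, coarse segment means in place of SAX, FFT magnitudes in place of SFA/Rocket) and fixed fusion weights; the actual F3 framework adaptively fuses Rocket, SAX, and SFA and is not reproduced here.

```python
# Minimal sketch of probability-level fusion over multiple time-series
# representations. The three "views" are crude stand-ins (raw values, coarse
# segment means instead of SAX, FFT magnitudes instead of SFA/Rocket); the
# real F3 framework uses Rocket, SAX and SFA with adaptive weighting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy two-class dataset: class 1 carries an extra high-frequency component.
n, length = 400, 128
t = np.arange(length)
y = rng.integers(0, 2, size=n)
X = np.sin(2 * np.pi * t / 32)[None, :] + 0.5 * rng.normal(size=(n, length))
X += y[:, None] * 0.8 * np.sin(2 * np.pi * t / 8)[None, :]

def view_raw(X):      return X
def view_segments(X): return X.reshape(X.shape[0], 16, -1).mean(axis=2)  # SAX-like
def view_spectrum(X): return np.abs(np.fft.rfft(X, axis=1))              # SFA-like

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
views = [view_raw, view_segments, view_spectrum]
weights = np.array([0.4, 0.2, 0.4])           # fixed fusion weights for the sketch

probas = []
for make_view in views:
    clf = LogisticRegression(max_iter=2000).fit(make_view(Xtr), ytr)
    probas.append(clf.predict_proba(make_view(Xte))[:, 1])
fused = np.average(np.stack(probas), axis=0, weights=weights)
print(f"fused test accuracy: {np.mean((fused > 0.5).astype(int) == yte):.3f}")
```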
|
https://arxiv.org/abs/2512.15378
|
Academic Papers
|
svg
|
e6cda8b017cc05c2a5d19acf52807d6d66ac2a3ac9684b90c59cc84cfe0839e7
|
2026-01-13T00:00:00-05:00
|
The Qutrit Bloch Sphere
|
arXiv:2601.06240v1 Announce Type: new Abstract: It is very important to understand if a qutrit can be visualized in a 3-dimensional Bloch sphere. In this work, a mathematical model for performing this operation is presented.
|
https://arxiv.org/abs/2601.06240
|
Academic Papers
|
svg
|
043bf92814358e332f751ee08044f7c9d3f41de6406d5dcfaf431b6e4d11d1c5
|
2026-01-13T00:00:00-05:00
|
Universal Predictors for Mixing Time more than Liouvillian Gap
|
arXiv:2601.06256v1 Announce Type: new Abstract: We analyze the mixing time of open quantum systems governed by the Lindblad master equation, showing that it is determined not only by the Liouvillian gap but also by the trace norm of the lowest excited state of the Liouvillian superoperator. By utilizing these universal predictors of mixing time, we establish general conditions for fast and rapid mixing, respectively. Specifically, we derive rapid mixing conditions for both the strong and weak dissipation regimes, formulated as sparsity constraints on the Hamiltonian and the local Lindblad operators. Our findings provide a general framework for calculating mixing time and offer a guide for designing dissipation to achieve desired mixing speeds, which has significant implications for efficient experimental state preparation.
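A small numerical illustration of the Liouvillian gap for a driven, dephasing qubit, built from the standard column-stacking vectorization of the Lindblad generator. It shows only the gap; the abstract's point is that the trace norm of the lowest excited eigenmode also matters, which this toy example does not analyze.

```python
# Liouvillian gap of a driven, dephasing qubit via the standard
# column-stacking vectorization of the Lindblad generator.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = 0.5 * sx                         # Rabi drive
c_deph = np.sqrt(0.2) * sz           # dephasing jump operator

def lindbladian(H, jump_ops):
    """Column-stacking vectorization: vec(d rho/dt) = L @ vec(rho)."""
    d = H.shape[0]
    Id = np.eye(d, dtype=complex)
    L = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
    for c in jump_ops:
        cdc = c.conj().T @ c
        L += np.kron(c.conj(), c) - 0.5 * (np.kron(Id, cdc) + np.kron(cdc.T, Id))
    return L

evals = np.linalg.eigvals(lindbladian(H, [c_deph]))
evals = evals[np.argsort(-evals.real)]    # steady state first (eigenvalue ~ 0)
gap = -evals[1].real
print("eigenvalue real parts:", np.round(evals.real, 4))
print("Liouvillian gap:", round(float(gap), 4))
```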
|
https://arxiv.org/abs/2601.06256
|
Academic Papers
|
svg
|
0ac04c8fc8dc54a0a481ca90f74f96c610a3f867e204927ca6b4eb450819cd29
|
2026-01-13T00:00:00-05:00
|
Latent splitting as a causal probe
|
arXiv:2601.06265v1 Announce Type: new Abstract: Generalizations of Bell's framework to causal networks have yielded new foundational insights and applications, including the use of interventions to enhance the detection of nonclassicality in scenarios with communication. Such interventions, however, become uninformative when all observable variables are space-like separated. To address this limitation, we introduce the latent splitting procedure, a generalization of interventions to quantum networks in which controlled manipulations are applied to latent quantum systems. We show that latent splitting enables the detection of nonclassicality by combining observational and interventional data even when conventional interventions fail. Focusing on the triangle network, we derive new analytical witnesses that robustly certify nonclassicality, including nonlinear inequalities for minimal binary-variable scenarios and extensions of the nonclassical region of previously proposed experiments.
|
https://arxiv.org/abs/2601.06265
|
Academic Papers
|
svg
|
0f1e34f7f02a88ab3836a32dbc0475199508102d2fc4f3133aa0fca757eccb87
|
2026-01-13T00:00:00-05:00
|
Quantum algorithm for dephasing of coupled systems: decoupling and IQP duality
|
arXiv:2601.06298v1 Announce Type: new Abstract: Noise and decoherence are ubiquitous in the dynamics of quantum systems coupled to an external environment. In the regime where environmental correlations decay rapidly, the evolution of a subsystem is well described by a Lindblad quantum master equation. In this work, we introduce a quantum algorithm for simulating unital Lindbladian dynamics by sampling unitary quantum channels without extra ancillas. Using ancillary qubits, we show that this algorithm allows approximating general Lindbladians as well. For interacting dephasing Lindbladians coupling two subsystems, we develop a decoupling scheme that reduces the circuit complexity of the simulation. This is achieved by sampling from a time-correlated probability distribution - determined by the evolution of one subsystem - which specifies the stochastic circuit implemented on the complementary subsystem. We demonstrate our approach by studying a model of bosons coupled to fermions via dephasing, which naturally arises from anharmonic effects in an electron-phonon system coupled to a bath. Our method enables tracing out the bosonic degrees of freedom, reducing part of the dynamics to sampling an instantaneous quantum polynomial (IQP) circuit. The sampled bitstrings then define a corresponding fermionic problem, which in the non-interacting case can be solved efficiently classically. We comment on the computational complexity of this class of dissipative problems, using the known fact that sampling from IQP circuits is believed to be difficult classically.
|
https://arxiv.org/abs/2601.06298
|
Academic Papers
|
svg
|
18fafd03bf78750d3bd0c136dd38dc26325c197f374e7c314d511a8682cc732b
|
2026-01-13T00:00:00-05:00
|
The pros and cons of using deep reinforcement learning or genetic algorithms to design control schemes for quantum state transfer on qubit chains
|
arXiv:2601.06303v1 Announce Type: new Abstract: In recent years, control methods based on different optimization techniques have shed light on the possibilities of processing information in many quantum systems. When exploring the transmission of quantum states, faster transmission times are mandatory to avoid the deleterious effects of multiple sources of decoherence that spoil the transmission process. In particular, using Reinforcement Learning to devise sequences of step-wise external controls provides good transfer policies at short transmission times. We present two approaches to control the transmission of quantum states in qubit chains using external controls to force the dynamical evolution of the chain state. The first approach relies on the well-known Genetic Algorithm to generate a sequence of external controls, while the second approach uses a variant of Reinforcement Learning. The Genetic Algorithm achieves excellent transmission fidelity at transmission times as short as those of Reinforcement Learning, surpassing the fidelities achieved by the latter method. Nevertheless, the Reinforcement Learning method offers robust control policies when the control pulses are noisy, owing to an imperfect timing of the pulses, deficient control devices, or other sources of phase decoherence. We present the regime where each method is best suited to control the transmission of arbitrary qubit states.
|
https://arxiv.org/abs/2601.06303
|
Academic Papers
|
svg
|
b16a11ad3a404ac8d652cb5d179843c2667b630716d6c01d09efd813fb289399
|
2026-01-13T00:00:00-05:00
|
Informationally Complete Distributed Metrology Without a Shared Reference Frame
|
arXiv:2601.06393v1 Announce Type: new Abstract: In quantum information processing, implementing arbitrary preparations and measurements on qubits necessitates precise information to identify a specific reference frame (RF). In space quantum communication and sensing, where a shared RF is absent, the interplay between locality and symmetry imposes fundamental restrictions on physical systems. A restriction on realizable unitary operations results in a no-go theorem prohibiting the extraction of locally encoded information in RF-independent distributed metrology. Here, we propose a reversed-encoding method applied to two copies of local-unitary-invariant network states. This approach circumvents the no-go theorem while simultaneously mitigating decoherence-like noise caused by RF misalignment, thereby enabling the complete recovery of the quantum Fisher information (QFI). Furthermore, we confirm local Bell-state measurements as an optimal strategy to saturate the QFI. Our findings pave the way for the field application of distributed quantum sensing, which is inherently subject to unknown RF misalignment and was previously precluded by the no-go theorem.
|
https://arxiv.org/abs/2601.06393
|
Academic Papers
|
svg
|
51450995a89d12a1b55a79f90ced57553c6b8d8fdeab10c258b3b7cb3519165e
|
2026-01-13T00:00:00-05:00
|
Restoring Locality: The Heisenberg Picture as a Separable Description of Quantum Theory
|
arXiv:2601.06522v1 Announce Type: new Abstract: Local realism has been the subject of much discussion in modern physics, partly because our deepest theories of physics appear to contradict one another in regard to whether reality is local. According to general relativity, it is, as physical quantities (perceptible or not) in two spacelike separated regions cannot affect one another. Yet, in quantum theory, it has traditionally been thought that local realism cannot hold and that such effects do occur. This apparent discrepancy between the two theories is resolved by Everettian quantum theory, as first proven by Deutsch & Hayden (2000). In this paper, I will explain how local realism is respected in quantum theory and review the advances in our understanding of locality since Deutsch & Hayden's work, including the concept of local branching and the more general analysis by Raymond-Robichaud (2021).
|
https://arxiv.org/abs/2601.06522
|
Academic Papers
|
svg
|
e2bd41a43ead5bb64f98d583307d3763539e21c4eebacab03ea240e0155a528f
|
2026-01-13T00:00:00-05:00
|
Digital Predistortion of Power Amplifiers for Quantum Computing
|
arXiv:2601.06524v1 Announce Type: new Abstract: Power amplifiers (PA) are essential for microwave-controlled trapped-ion and semiconductor spin-based quantum computers (QC). They adjust the power level of the control signal and therefore the processing time of the QC. Their nonlinearities and memory effects degrade the signal quality and, thus, the fidelity of qubit gate operations. Driving the PA with a significant input power back-off reduces nonlinear effects but is neither power-efficient nor cost-effective. To overcome this limitation, this letter augments the conventional signal generation system applied in QCs by digital predistortion (DPD) to linearize the radio frequency (RF) channel. Numerical analysis of the qubit behavior based on measured representative control signals indicates that DPD improves its fidelity.
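A generic memory-polynomial predistorter fitted by least squares (indirect learning), with a toy compressive amplifier model standing in for measured hardware; this is a textbook DPD sketch, not the letter's signal chain or calibration procedure.

```python
# Sketch of a generic memory-polynomial predistorter fitted by least squares
# (indirect learning). The PA model is a toy odd-order nonlinearity with a
# short memory tap, standing in for a measured amplifier.
import numpy as np

rng = np.random.default_rng(0)

def basis(x, K=5, M=3):
    """Memory-polynomial regressors: x[n-m] * |x[n-m]|^(k-1), odd k."""
    cols = []
    for m in range(M):
        xm = np.roll(x, m)
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.stack(cols, axis=1)

def pa(x):
    """Toy power amplifier: mild compression plus a short memory tap."""
    return x - 0.15 * x * np.abs(x) ** 2 + 0.05 * np.roll(x, 1)

# Baseband-like complex training signal.
x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) / np.sqrt(2)
y = pa(x)

# Indirect learning: fit a post-inverse mapping PA output back to PA input,
# then reuse the same coefficients as the predistorter.
coef, *_ = np.linalg.lstsq(basis(y), x, rcond=None)
x_pd = basis(x) @ coef            # predistorted drive signal
y_lin = pa(x_pd)

nmse = lambda ref, sig: 10 * np.log10(np.mean(np.abs(sig - ref) ** 2)
                                      / np.mean(np.abs(ref) ** 2))
print(f"NMSE without DPD: {nmse(x, y):6.1f} dB")
print(f"NMSE with DPD:    {nmse(x, y_lin):6.1f} dB")
```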
|
https://arxiv.org/abs/2601.06524
|
Academic Papers
|
svg
|
d7e49d18f3ee4bbb975edd8140c25ab207236a65087ef44402a80f3c24e1c72a
|
2026-01-13T00:00:00-05:00
|
Magnetic levitation and spatial superposition of a nanodiamond with a current-carrying chip
|
arXiv:2601.06608v1 Announce Type: new Abstract: We propose a current-carrying-chip scheme for generating spatial quantum superpositions using a levitating nanodiamond with a built-in nitrogen-vacancy (NV) centre defect. Our setup is quite versatile and we aim to create the superposition for a mass range of $10^{-19}~{\rm kg}< m< 10^{-15}~{\rm kg}$ and a superposition size ${\cal O}(10)~{\rm \mu m} > \Delta x > {\cal O}(1)~{\rm nm}$, respectively, in $t\leq 0.1$s, depending on the position we launch from the center of the diamagnetic trap. We provide an in-depth analysis of two parallel chips that can create levitation and spatial superposition along the $x$-axis, while producing a very tight trap in the $y$ direction, and the direction of gravity, i.e., the $z$ direction. Numerical simulations demonstrate that our setup can create a one-dimensional spatial superposition state along the x-axis. Throughout this process, the particle is stably levitated in the z-direction, and its motion is effectively confined in the y-direction for a Gaussian initial condition. This setup presents a viable platform for a diamagnetically levitated nanoparticle for a table-top experiment exploring the possibility of creating a macroscopic Schr\"odinger Cat state to test the quantum gravity induced entanglement of masses (QGEM) protocol.
|
https://arxiv.org/abs/2601.06608
|
Academic Papers
|
svg
|
d0c4e27549ef3c510e02ffc8625d8514ca655ef61e0323ff755faca80c1e9a9b
|
2026-01-13T00:00:00-05:00
|
Rydberg atom parity gate based on dark state resonances
|
arXiv:2601.06665v1 Announce Type: new Abstract: Quantum computation (QC) and digital quantum simulation (DQS) essentially require two- or multi-qubit controlled-NOT or -phase gates. We propose an alternative pathway for QC and DQS using a three-qubit parity gate in a Rydberg atom array. The basic principle of the Rydberg atom parity gate (RPG) is that the operation on the target qubit is controlled by the parity of the control qubits. First, we discuss how to construct an RPG based on a dark state resonance. We optimize the gate parameters by numerically analyzing the time evolution of the computational basis states to maximize the gate fidelity. We also show that our proposed RPG is extremely robust against the Rydberg blockade error. To demonstrate the efficiency of the proposed RPG over the conventional CNOT or CZ gate in QC and DQS, we implement the Deutsch-Jozsa algorithm and simulate the Ising Hamiltonian. The results show that the RPG can substitute for the CNOT gate and yield better results, as it decreases the circuit noise by reducing circuit depth.
|
https://arxiv.org/abs/2601.06665
|
Academic Papers
|
svg
|
1de61122ac4e592a24ea007972f421fcafad11b38f3b2ebd41e8c61b48a6a5cc
|
2026-01-13T00:00:00-05:00
|
A paradigm for universal quantum information processing with integrated acousto-optic frequency beamsplitters
|
arXiv:2601.06752v1 Announce Type: new Abstract: Frequency-bin encoding offers tremendous potential in quantum photonic information processing, in which a single waveguide can support hundreds of lightpaths in a naturally phase-stable fashion. This stability, however, comes at a cost: arbitrary unitary operations can be realized by cascaded electro-optic phase modulators and pulse shapers, but require nontrivial numerical optimization for design and have thus far been limited to discrete tabletop components. In this article, we propose, formalize, and computationally evaluate a new paradigm for universal frequency-bin quantum information processing using acousto-optic scattering processes between distinct transverse modes. We show that controllable phase matching in intermodal processes enables 2$\times$2 frequency beamsplitters and transverse-mode-dependent phase shifters, which together comprise cascadable FRequency-transverse-mODe Operations (FRODOs) that can synthesize any unitary via analytical decomposition procedures. Modeling the performance of both random gates and discrete Fourier transforms, we demonstrate the feasibility of high-fidelity quantum operations with existing integrated photonics technology, highlighting prospects of parallelizable operations achieving 100\% bandwidth utilization. Our approach is realizable with CMOS technology, opening the door to scalable on-chip quantum information processing in the frequency domain.
|
https://arxiv.org/abs/2601.06752
|
Academic Papers
|
svg
|
b99df70bf023a471d4e4aaebb08fc933ddf13c8362abe5de6189b631b8a9ab97
|
2026-01-13T00:00:00-05:00
|
Noise-Resistant Feature-Aware Attack Detection Using Quantum Machine Learning
|
arXiv:2601.06762v1 Announce Type: new Abstract: Continuous-variable quantum key distribution (CV-QKD) is a quantum communication technology that offers an unconditional security guarantee. However, the practical deployment of CV-QKD systems remains vulnerable to various quantum attacks. In this paper, we propose a quantum machine learning (QML)-based attack detection framework (QML-ADF) that safeguards the security of high-rate CV-QKD systems. In particular, two alternative QML models -- quantum support vector machines (QSVM) and quantum neural networks (QNN) -- are developed to perform noise-resistant and feature-aware attack detection before conventional data postprocessing. Leveraging feature-rich quantum data from Gaussian modulation and homodyne detection, the QML-ADF effectively detects quantum attacks, including both known and unknown types defined by these distinctive features. The results indicate that all twelve distinct QML variants for both QSVM and QNN exhibit remarkable performance in detecting both known and previously undiscovered quantum attacks, with the best-performing QSVM variant outperforming the top QNN counterpart. Furthermore, we systematically evaluate the performance of the QML-ADF under various physically interpretable noise backends, demonstrating its strong robustness and superior detection performance. We anticipate that the QML-ADF will not only enable robust detection of quantum attacks under realistic deployment conditions but also strengthen the practical security of quantum communication systems.
|
https://arxiv.org/abs/2601.06762
|
Academic Papers
|
svg
|
70301e4793577467bc1fb2313f4ec6b6f56ba0104b5eac7fc284776a6846d380
|
2026-01-13T00:00:00-05:00
|
Experimental Coherent One-Way Quantum Key Distribution with Simplicity and Practical Security
|
arXiv:2601.06772v1 Announce Type: new Abstract: Coherent one-way quantum key distribution (COW-QKD) has been widely investigated, and has even been deployed in real-world quantum networks. However, the proposal of the zero-error attack has critically undermined its security guarantees, and existing experimental implementations have not yet established security against coherent attacks. In this work, we propose and experimentally demonstrate an information-theoretically secure COW-QKD protocol that can resist source side-channel attacks, with secure transmission distances up to 100 km. Our system achieves a secure key rate on the order of kilobits per second over 50 km in the finite-size regime, sufficient for real-time secure voice communication across metropolitan networks. Furthermore, we demonstrate the encrypted transmission of a logo with information-theoretic security over 100 km of optical fiber. These results confirm that COW-QKD can simultaneously provide simplicity and security, establishing it as a strong candidate for deployment in small-scale quantum networks.
|
https://arxiv.org/abs/2601.06772
|
Academic Papers
|
svg
|
4e2d1c7f9b53ecda88cb907170cecdeff0f14ba63dfebeab352e06e8b50b244e
|
2026-01-13T00:00:00-05:00
|
Geometric and Operational Characterization of Two-Qutrit Entanglement
|
arXiv:2601.06783v1 Announce Type: new Abstract: We investigate the entanglement structure of bipartite two-qutrit pure states from both geometric and operational perspectives. Using the eigenvalues of the reduced density matrix, we analyze how symmetric polynomials characterize pairwise and genuinely three-level correlations. We show that the determinant of the coefficient matrix defines a natural, rank-sensitive geometric invariant that vanishes for all rank-2 states and is nonzero only for rank-3 entangled states. An explicit analytic constraint relating this determinant-based invariant to the I-concurrence is derived, thereby defining the physically accessible region of two-qutrit states in invariant space. Furthermore, we establish an operational correspondence with three-path optical interferometry and analyze conditional visibility and predictability in a qutrit quantum erasure protocol, including the effects of unequal path transmittances. Numerical demonstrations confirm the analytic results and the associated complementarity relations. These findings provide a unified geometric and operational framework for understanding two-qutrit entanglement.
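A quick numpy check of the quantities named above for a random two-qutrit pure state: the reduced-state (Schmidt) spectrum, the I-concurrence $\sqrt{2(1-\mathrm{Tr}\,\rho_A^2)}$, and the determinant of the coefficient matrix, which vanishes exactly when the Schmidt rank is at most two. These are standard definitions only; the paper's analytic constraint relating them is not reproduced.

```python
# Invariants of a random two-qutrit pure state |psi> = sum_ij C_ij |i>|j>:
# Schmidt spectrum, I-concurrence sqrt(2*(1 - Tr rho_A^2)), and det(C),
# which vanishes exactly when the Schmidt rank is at most 2.
import numpy as np

rng = np.random.default_rng(0)

def random_two_qutrit_state():
    C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    return C / np.linalg.norm(C)             # normalized coefficient matrix

def invariants(C):
    rho_A = C @ C.conj().T                    # reduced state of subsystem A
    evals = np.sort(np.linalg.eigvalsh(rho_A))[::-1]
    i_conc = np.sqrt(2.0 * (1.0 - np.trace(rho_A @ rho_A).real))
    return evals, i_conc, np.linalg.det(C)

C_full = random_two_qutrit_state()
evals, i_conc, detC = invariants(C_full)
print("Schmidt spectrum (rank 3):", np.round(evals, 4))
print("I-concurrence:", round(float(i_conc), 4), "| |det C|:", round(abs(detC), 4))

# A rank-2 example: zero out one row of coefficients -> det C = 0.
C_rank2 = C_full.copy()
C_rank2[2, :] = 0
C_rank2 /= np.linalg.norm(C_rank2)
_, i_conc2, detC2 = invariants(C_rank2)
print("rank-2 example -> I-concurrence:", round(float(i_conc2), 4),
      "| |det C|:", round(abs(detC2), 6))
```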
|
https://arxiv.org/abs/2601.06783
|
Academic Papers
|
svg
|
77029c64afec7fdd17a569d55e45b2a6462070b9f4da046becc9a75328ce3114
|
2026-01-13T00:00:00-05:00
|
Cancelling second order frequency shifts in Ge hole spin qubits via bichromatic control
|
arXiv:2601.06805v1 Announce Type: new Abstract: Germanium quantum dot hole spin qubits are compatible with fully electrical control and are progressing toward multi-qubit operations. However, their coherence is limited by charge noise and driving field induced frequency shifts, and the resulting ensemble $1/f$ dephasing. Here we theoretically demonstrate that a bichromatic driving scheme cancels the second order frequency shift from the control field without sacrificing the electric dipole spin resonance (EDSR) rate, and without additional gate design or microwave engineering. Based on this property, we further demonstrate that bichromatic control creates a wide operating window that reduces sensitivity to quasi-static charge noise and thus enhances single qubit gate fidelity. This method provides a low-power route to a stabler frequency operation in germanium hole spin qubits and is readily transferable to other semiconductor spin qubit platforms.
|
https://arxiv.org/abs/2601.06805
|
Academic Papers
|
svg
|
16ffa89b6f66688ae625314b5a811e81cb60886251cadcaa21e36347118d613f
|
2026-01-13T00:00:00-05:00
|
Axion Signal Search Using Hybrid Nuclear-Electronic Spin Systems
|
arXiv:2601.06816v1 Announce Type: new Abstract: Conventional nuclear magnetic resonance searches for the galactic axion wind lose sensitivity at low frequencies due to the unfavourable scaling of inductive readout. Here, we propose a hybrid architecture where the hyperfine interaction transduces axion-driven nuclear precession into a high-bandwidth electron-spin readout channel. We demonstrate analytically that this dispersive upconversion preserves the specific sidereal and annual modulation signatures required to distinguish dark matter signals from instrumental backgrounds. When instantiated in a silicon ${ }^{209} \text{Bi}$ donor platform, the hybrid sensor is projected to outperform direct nuclear detection by more than an order of magnitude over the $10^{-16}-10^{-6} \text{eV}$ wide mass range. With collective enhancement, the design reaches a $5 \sigma$ sensitivity to DFSZ axion-nucleon couplings within one year, establishing hyperfine-mediated sensing as a competitive path for compact, solid-state dark matter searches.
|
https://arxiv.org/abs/2601.06816
|
Academic Papers
|
svg
|
7845c13bfce3b81ffb60e9e057d98f9561eaacbd8a51d5cdfb9c84855c16c9e7
|
2026-01-13T00:00:00-05:00
|
Quantum Circuit-Based Adaptation for Credit Risk Analysis
|
arXiv:2601.06865v1 Announce Type: new Abstract: Noisy and Intermediate-Scale Quantum, or NISQ, processors are sensitive to noise, prone to quantum decoherence, and are not yet capable of continuous quantum error correction for fault-tolerant quantum computation. Hence, quantum algorithms designed in the pre-fault-tolerant era cannot neglect the noisy nature of the hardware, and investigating the relationship between quantum hardware performance and the output of quantum algorithms is essential. In this work, we experimentally study how hardware-aware variational quantum circuits on a superconducting quantum processing unit can model distributions relevant to specific use-case applications for Credit Risk Analysis, e.g., standard Gaussian distributions for latent factor loading in the Gaussian Conditional Independence model. We use a transpilation technique tailored to the specific quantum hardware topology, which minimizes gate depth and connectivity violations, and we calibrate the gate rotations of the circuit to achieve an optimized output from quantum algorithms. Our results demonstrate the viability of quantum adaptation on a small-scale, proof-of-concept model inspired by financial applications and offer a good starting point for understanding the practical use of NISQ devices.
|
https://arxiv.org/abs/2601.06865
|
Academic Papers
|
svg
|
c4715353e08a23fc2e2f098f496c1b6b899e1bee18742c9d1f9f44605098853e
|
2026-01-13T00:00:00-05:00
|
High capacity dual degrees of freedom quantum secret sharing protocol beyond the linear rate-distance bound
|
arXiv:2601.06919v1 Announce Type: new Abstract: Quantum secret sharing (QSS) is a multipartite cryptographic primitive. Most existing QSS protocols are limited by the linear rate-distance bound and cannot realize long-distance, high-capacity multipartite key distribution. This paper proposes a polarization (Pol) and phase (Ph) dual degrees of freedom (dual-DOF) QSS protocol based on weak coherent pulse (WCP) sources. Our protocol combines the single-photon interference, two-photon interference and non-interference principles, and can resist internal attacks from a dishonest player. We develop a simulation method to estimate its performance under the beam splitting attack. The simulation results show that our protocol can surpass the linear bound. Compared with the differential-phase-shift twin-field QSS and WCP-Ph-QSS protocols, our protocol has stronger resistance against the beam splitting attack, and thus has a longer maximal communication distance and a higher key rate. By using WCPs with a high average photon number ($\mu$ = 1.5), our protocol achieves a key rate about 5.4 times that of the WCP-Ph-QSS protocol. Its maximal communication distance (441.7 km) is about 7.9% longer than that of the WCP-Ph-QSS. Our protocol is highly feasible with current experimental technology and offers a promising approach for long-distance and high-capacity quantum networks.
|
https://arxiv.org/abs/2601.06919
|
Academic Papers
|
svg
|
74b951de757344bfb89b9e53813c389aae84dc1e8a002a1fbd71f6eb098a4a69
|
2026-01-13T00:00:00-05:00
|
Extending the Handover-Iterative VQE to Challenging Strongly Correlated Systems: $N_2$ and Fe-S Cluster
|
arXiv:2601.06935v1 Announce Type: new Abstract: Accurately describing strongly correlated electronic systems remains a central challenge in quantum chemistry, as electron-electron interactions give rise to complex many-body wavefunctions that are difficult to capture with conventional approximations. Classical wavefunction-based approaches, such as the Semistochastic Heat-bath Configuration Interaction (SHCI) and the Density Matrix Renormalization Group (DMRG), currently define the state of the art, systematically converging toward the Full Configuration Interaction (FCI) limit, but at a rapidly increasing computational cost. Quantum computing algorithms promise to alleviate this scaling bottleneck by leveraging entanglement and superposition to represent correlated states more compactly. We introduced the Handover-Iterative Variational Quantum Eigensolver (HI-VQE) as a practical quantum computing algorithm with an iterative "handover" mechanism that dynamically exchanges information between quantum and classical computers, even using Noisy Intermediate-Scale Quantum (NISQ) computers. In this work, we extend the HI-VQE to benchmark two prototypical strongly correlated systems, the nitrogen molecule $N_2$ and iron-sulfur (Fe-S) cluster, which serve as stringent tests for both classical and quantum electronic-structure methods. By comparing HI-VQE results against Heat-bath Configuration Interaction (HCI) benchmarks, we assess its accuracy, scalability, and ability to capture multireference correlation effects. Achieving quantitative agreement on these canonical systems demonstrates a viable pathway toward quantum-enhanced simulations of complex bioinorganic molecules, catalytic mechanisms, and correlated materials.
|
https://arxiv.org/abs/2601.06935
|
Academic Papers
|
svg
|
1ddb57a69872eb03a727b325d8567333f77ecd8ed7e197bb3113efac9be8f252
|
2026-01-13T00:00:00-05:00
|
Dynamical Correlation of the Post-quench Non-thermal Equilibrium State
|
arXiv:2601.06987v1 Announce Type: new Abstract: After a quantum quench, an integrable system is expected to relax to a non-thermal equilibrium state (NTES) whose local properties are believed to be governed by a generalized Gibbs ensemble (GGE). Combining the quench action and the form factor approach, we compute the field-field correlation in the NTES produced by an interaction quench of the Lieb-Liniger model. The spectral distribution is shown to be qualitatively different from that of a thermal equilibrium state (TES): a new dispersion branch appears whose microscopic mechanism can be traced to the algebraically decaying tail of the root density distribution function, and which indicates the existence of a broader family of NTES featuring similar spectral properties.
|
https://arxiv.org/abs/2601.06987
|
Academic Papers
|
svg
|
dc360f31df7352def817e72a216e622e96636f3837716207c935ff7c26b58784
|
2026-01-13T00:00:00-05:00
|
Counter-diabatic driving for fast spin control in a two-electron double quantum dot
|
arXiv:2601.06988v1 Announce Type: new Abstract: The techniques of shortcuts to adiabaticity have been proposed to accelerate "slow" adiabatic processes in various quantum systems, with applications in quantum information processing. In this paper, we study counter-diabatic driving for fast adiabatic spin manipulation in a two-electron double quantum dot by designing time-dependent electric fields in the presence of spin-orbit coupling. To simplify implementation and find an alternative shortcut, we further transform the Hamiltonian in terms of a Lie algebra, which allows one to use a single Cartesian component of the electric fields. In addition, the relation between energy and time is quantified to show the lower bound on the operation time when the maximum amplitude of the electric fields is given. Finally, the fidelity is discussed with respect to noise and systematic errors, which demonstrates that the decoherence effect induced by a stochastic environment can be avoided in speeded-up adiabatic control.
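For reference, the standard counter-diabatic (transitionless-driving) correction that such protocols add to a reference Hamiltonian $H_0(t)$ with instantaneous eigenstates $|n(t)\rangle$; this is the textbook form only, not the paper's specific electric-field realization for the double quantum dot.

```latex
% Textbook counter-diabatic (transitionless-driving) correction for a
% reference Hamiltonian H_0(t) with instantaneous eigenstates |n(t)>;
% the paper's electric-field construction is not reproduced here.
H(t) = H_0(t) + H_{\mathrm{CD}}(t), \qquad
H_{\mathrm{CD}}(t) = i\hbar \sum_n \Big( |\partial_t n(t)\rangle\langle n(t)|
    - \langle n(t)|\partial_t n(t)\rangle\, |n(t)\rangle\langle n(t)| \Big).
```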
|
https://arxiv.org/abs/2601.06988
|
Academic Papers
|
svg
|
66497b0d9bf4148c941382f4ad3ae6f73ac16583e949b17485908abe99ba7963
|
2026-01-13T00:00:00-05:00
|
Quantum state engineering of spin-orbit coupled ultracold atoms in a Morse potential
|
arXiv:2601.06996v1 Announce Type: new Abstract: Achieving full control of a Bose-Einstein condensate can have valuable applications in metrology, quantum information processing, and quantum condensed matter physics. We propose protocols to simultaneously control the internal (related to its pseudospin-1/2) and motional (position-related) states of a spin-orbit-coupled Bose-Einstein condensate confined in a Morse potential. In the presence of synthetic spin-orbit coupling, the state transition of a noninteracting condensate can be implemented by Raman coupling and detuning terms designed by invariant-based inverse engineering. The state transfer may also be driven by tuning the direction of the spin-orbit-coupling field and modulating the magnitude of the effective synthetic magnetic field. The results can be generalized for interacting condensates by changing the time-dependent detuning to compensate for the interaction. We find that a two-level algorithm for the inverse engineering remains numerically accurate even if the entire set of possible states is considered. The proposed approach is robust against the laser-field noise and systematic device-dependent errors.
|
https://arxiv.org/abs/2601.06996
|
Academic Papers
|
svg
|