Column summary (field: type, observed length range):
id: string, length 64
published: string, length 19 to 25
title: string, length 7 to 262
description: string, length 6 to 54.4k
link: string, length 31 to 227
category: string, 6 classes
image: string, length 3 to 247
f243be73b4b7b6881f7fa2ded26cc839009db2d726ed91c686c5c4c87d53a39c
2026-01-21T00:00:00-05:00
A longitudinal Bayesian framework for estimating causal dose-response relationships
arXiv:2505.20893v3 Announce Type: replace Abstract: Existing causal methods for time-varying exposure and time-varying confounding focus on estimating the average causal effect of a time-varying binary treatment on an end-of-study outcome, offering limited tools for characterizing marginal causal dose-response relationships under continuous exposures. We propose a scalable, nonparametric Bayesian framework for estimating marginal longitudinal causal dose-response functions with repeated outcome measurements. Our approach targets the average potential outcome at any fixed dose level and accommodates time-varying confounding through the generalized propensity score. The proposed approach embeds a Dirichlet process specification within a generalized estimating equations structure, capturing temporal correlation while making minimal assumptions about the functional form of the continuous exposure. We apply the proposed methods to monthly metro ridership and COVID-19 case data from major international cities, identifying causal relationships and dose-response patterns between higher ridership and increased case counts.
https://arxiv.org/abs/2505.20893
Academic Papers
svg
da742a7f19a66eae20b21c6a632cb1c9f741a31369955061463d602c870d4dee
2026-01-21T00:00:00-05:00
Boosting In-Context Learning in LLMs Through the Lens of Classical Supervised Learning
arXiv:2505.23783v2 Announce Type: replace Abstract: In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks with just a few examples, but their predictions often suffer from systematic biases, leading to unstable performance in classification. While calibration techniques have been proposed to mitigate these biases, we show that, in the logit space, many of these methods are equivalent to merely shifting the LLM's decision boundary without the ability to alter its orientation. This proves inadequate when biases cause the LLM to be severely misdirected. To address these limitations and provide a unifying framework, we propose Supervised Calibration (SC), a loss-minimization-based framework which learns an optimal, per-class affine transformation of the LLM's predictive probabilities in the logit space without requiring external data beyond the context. By using a more expressive functional class, SC not only subsumes many existing calibration methods in ICL as special cases, but can also alter and even completely reverse the orientation of the LLM's decision boundary. Furthermore, SC's loss-based nature facilitates the seamless integration of two purpose-built regularization techniques: context-invariance and directional trust-region. The former is designed to tackle the instability issue in ICL, while the latter controls the degree of calibration. Finally, SC delivers state-of-the-art performance over calibration baselines in the 4-shot, 8-shot, and 16-shot settings across all nine datasets for Mistral-7B-Instruct-v0.3, LLaMA-2-7B-chat, and Qwen2-7B-Instruct.
https://arxiv.org/abs/2505.23783
Academic Papers
svg
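The abstract above describes learning a per-class affine map of the logits by loss minimization. A minimal sketch of that idea, assuming a plain softmax cross-entropy fit by gradient descent (the function name and training details are illustrative, not the paper's SC implementation):

```python
import numpy as np

def fit_affine_calibration(logits, labels, lr=0.1, epochs=500):
    """Learn a per-class scale w and bias b so that calibrated logits
    w * z + b minimize cross-entropy on a few labeled examples.
    Unlike a bias-only shift, the per-class scale can reorient
    (even flip) the decision boundary."""
    n, k = logits.shape
    w = np.ones(k)
    b = np.zeros(k)
    y = np.eye(k)[labels]                       # one-hot targets
    for _ in range(epochs):
        z = logits * w + b
        z = z - z.max(axis=1, keepdims=True)    # numerical stability
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        g = (p - y) / n                         # dCE/dz per sample
        w -= lr * (g * logits).sum(axis=0)      # chain rule: dz/dw = logits
        b -= lr * g.sum(axis=0)
    return w, b
```

Because each class gets its own scale, the fitted map can make a scale negative and thereby reverse the decision boundary, which a pure shift of the boundary cannot do.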
b1257c86fd62a4116a01afa608ebc1d434a1b6bcb863c62ce8522f3ab5a03353
2026-01-21T00:00:00-05:00
A sequential ensemble approach to epidemic modeling: Combining Hawkes and SEIR models using SMC$^2$
arXiv:2506.15511v2 Announce Type: replace Abstract: This paper proposes a sequential ensemble methodology for epidemic modeling that integrates discrete-time Hawkes processes (DTHP) and Susceptible-Exposed-Infectious-Removed (SEIR) models. Motivated by the need for accurate and reliable epidemic forecasts to inform timely public health interventions, we develop a flexible model averaging (MA) framework using Sequential Monte Carlo Squared. While generating estimates from each model individually, our approach dynamically assigns them weights based on their incrementally estimated marginal likelihoods, accounting for both model and parameter uncertainty, to produce a single ensemble estimate. We assess the methodology through simulation studies mimicking abrupt changes in epidemic dynamics, followed by an application to the Irish influenza and COVID-19 epidemics. Our results show that combining the two models can improve both estimates of the infection trajectory and reproduction number compared to using either model alone. Moreover, the MA consistently produces more stable and informative estimates of the time-varying reproduction number, with credible intervals that provide a realistic assessment of uncertainty. These features are particularly useful when epidemic dynamics change rapidly, enabling more reliable short-term forecasts and timely public health decisions. This research contributes to pandemic preparedness by enhancing forecast reliability and supporting more informed public health responses.
https://arxiv.org/abs/2506.15511
Academic Papers
svg
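The weighting step above, one ensemble weight per model proportional to its incrementally estimated marginal likelihood, reduces to a small computation once per-step incremental log-likelihoods are available. A sketch under that assumption (not the SMC^2 machinery itself; all names are illustrative):

```python
import numpy as np

def ensemble_weights(incr_loglik):
    """incr_loglik: (T, M) array of per-step incremental log marginal
    likelihoods for M models. Returns (T, M) model-averaging weights
    proportional to each model's running evidence."""
    cum = np.cumsum(incr_loglik, axis=0)        # running log-evidence
    cum = cum - cum.max(axis=1, keepdims=True)  # stabilize before exp
    w = np.exp(cum)
    return w / w.sum(axis=1, keepdims=True)
```

Because the weights track cumulative evidence, they adapt over time: a model that explains recent data better steadily gains weight, which is what lets the ensemble follow abrupt changes in epidemic dynamics.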
4ae4a0e4251be29b5d3f9ba21e0cf9f15e18999cee34acf8b9376063b4231329
2026-01-21T00:00:00-05:00
Thompson Sampling in Function Spaces via Neural Operators
arXiv:2506.21894v3 Announce Type: replace Abstract: We propose an extension of Thompson sampling to optimization problems over function spaces where the objective is a known functional of an unknown operator's output. We assume that queries to the operator (such as running a high-fidelity simulator or physical experiment) are costly, while functional evaluations on the operator's output are inexpensive. Our algorithm employs a sample-then-optimize approach using neural operator surrogates. This strategy avoids explicit uncertainty quantification by treating trained neural operators as approximate samples from a Gaussian process (GP) posterior. We derive regret bounds and theoretical results connecting neural operators with GPs in infinite-dimensional settings. Experiments benchmark our method against other Bayesian optimization baselines on functional optimization tasks involving partial differential equations of physical systems, demonstrating better sample efficiency and significant performance gains.
https://arxiv.org/abs/2506.21894
Academic Papers
svg
d1c04afd6ba7485382aa825c2ce798c66742e7ee542f9ffb5b25ee0ca6f01256
2026-01-21T00:00:00-05:00
General measures of effect size to calculate power and sample size for Wald tests with generalized linear models
arXiv:2506.22324v2 Announce Type: replace Abstract: Power and sample size calculations for Wald tests in generalized linear models (GLMs) are often limited to specific cases like logistic regression. More general methods typically require detailed study parameters that are difficult to obtain during planning. We introduce two new effect size measures for estimating power and sample size in studies using Wald tests across any GLM. These measures accommodate any number of predictors or adjusters and require only basic study information. We provide practical guidance for interpreting and applying these measures to approximate a key parameter in power calculations. We also derive asymptotic bounds on the relative error of these approximations, showing that accuracy depends on features of the GLM such as the nonlinearity of the link function. To complement this analysis, we conduct simulation studies across common model specifications, identifying best use cases and opportunities for improvement. Finally, we test the methods in finite samples to confirm their practical utility, using a case study on the relationship between education and receipt of mental health treatment.
https://arxiv.org/abs/2506.22324
Academic Papers
svg
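As background to the power and sample-size setting above, the standard normal approximation for a two-sided Wald test of H0: beta = 0 can be written down directly. This is the textbook approximation, not the paper's proposed effect-size measures; the function names are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def wald_power(beta, se, alpha=0.05):
    """Approximate power of a two-sided Wald test for H0: beta = 0,
    using the normal limit of the Wald statistic beta_hat / se."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = abs(beta) / se                        # noncentrality
    return NormalDist().cdf(ncp - z) + NormalDist().cdf(-ncp - z)

def wald_sample_size(beta, unit_se, power=0.8, alpha=0.05):
    """Smallest n such that se = unit_se / sqrt(n) reaches target power."""
    n = 1
    while wald_power(beta, unit_se / sqrt(n), alpha) < power:
        n += 1
    return n
```

For example, a standardized effect of beta/se = 2.8 gives power close to 0.80, the usual planning target. The hard part in a GLM, which the paper's effect-size measures address, is obtaining the standard error (or its per-observation scale) before any data are collected.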
b54cc8d2ad28c40ba09dd3cf5f9b687406854d5ac994b50de07e189acba87a30
2026-01-21T00:00:00-05:00
Algorithms for Approximating Conditionally Optimal Bounds
arXiv:2507.15529v3 Announce Type: replace Abstract: This work develops algorithms for non-parametric confidence regions for samples from a univariate distribution whose support is a discrete mesh bounded on the left. We generalize the theory of Learned-Miller to preorders over the sample space. In this context, we show that the lexicographic low and lexicographic high orders are in some way extremal in the class of monotone preorders. From this theory we derive several approximation algorithms: 1) Closed form approximations for the lexicographic low and high orders with error tending to zero in the mesh size; 2) A polynomial-time approximation scheme for quantile orders with error tending to zero in the mesh size; 3) Monte Carlo methods for calculating quantile and lexicographic low orders applicable to any mesh size.
https://arxiv.org/abs/2507.15529
Academic Papers
svg
c3c9d7bb31b49ea31ad116230de7400e740385e7ffd583775d4f9d8a815d99b4
2026-01-21T00:00:00-05:00
Predicting Parkinson's Disease Progression Using Statistical and Neural Mixed Effects Models: Comparative Study on Longitudinal Biomarkers
arXiv:2507.20058v2 Announce Type: replace Abstract: Predicting Parkinson's Disease (PD) progression is crucial, and voice biomarkers offer a non-invasive method for tracking symptom severity (UPDRS scores) through telemonitoring. Analyzing this longitudinal data is challenging due to within-subject correlations and complex, nonlinear patient-specific progression patterns. This study benchmarks linear mixed models (LMMs) against two advanced hybrid approaches: the Generalized Neural Network Mixed Model (GNMM) (Mandel 2021), which embeds a neural network within a GLMM structure, and the Neural Mixed Effects (NME) model (Wortwein 2023), which allows nonlinear subject-specific parameters throughout the network. Using the Oxford Parkinson's telemonitoring voice dataset, we evaluate these models' performance in predicting Total UPDRS to offer practical guidance for PD research and clinical applications.
https://arxiv.org/abs/2507.20058
Academic Papers
svg
c08267c63caaa90dbb838b6c9eee7d65172912427d7d6f5167672bfa563a0590
2026-01-21T00:00:00-05:00
Surrogate-based Bayesian calibration methods for chaotic systems: a comparison of traditional and non-traditional approaches
arXiv:2508.13071v2 Announce Type: replace Abstract: Parameter calibration is essential for reducing uncertainty and improving predictive fidelity in physics-based models, yet it is often limited by the high computational cost of model evaluations. Bayesian calibration methods provide a principled framework for combining prior information with data while rigorously quantifying uncertainty. In this work, we compare four emulator-based Bayesian calibration strategies: Calibrate-Emulate-Sample (CES), History Matching (HM), Bayesian Optimal Experimental Design (BOED), and a goal-oriented extension of BOED (GBOED). The proposed GBOED formulation explicitly targets information gain with respect to the calibration posterior, aligning design decisions with downstream inference. We assess the methods using accuracy and uncertainty quantification metrics, convergence behavior under increasing computational budgets, and practical considerations such as implementation complexity and robustness. For the Lorenz '96 system, CES, HM, and GBOED all yield strong calibration performance, even with limited numbers of model evaluations, while standard BOED generally underperforms in this setting. Differences among the strongest methods are modest, particularly as computational budgets increase. For the two-layer quasi-geostrophic system, all methods produce reasonable posterior estimates, and convergence behavior is more consistent. Overall, our results indicate that multiple emulator-based calibration strategies can perform comparably well when applied appropriately, with method selection often guided more by computational and practical considerations than by accuracy alone. These findings highlight both the limitations of standard BOED for calibration and the promise of goal-oriented and iterative approaches for efficient Bayesian inference in complex dynamical systems.
https://arxiv.org/abs/2508.13071
Academic Papers
svg
c2e4a114459ce91fa9482a42fa3aaabeee7e082b2cdfc908b844323330cd6ace
2026-01-21T00:00:00-05:00
On the relationship between the Wasserstein distance and differences in life expectancy at birth
arXiv:2508.17235v3 Announce Type: replace Abstract: The Wasserstein distance is a metric for assessing distributional differences. The measure originates in optimal transport theory and can be interpreted as the minimal cost of transforming one distribution into another. In this paper, the Wasserstein distance is applied to life table age-at-death distributions. The main finding is that, under certain conditions, the Wasserstein distance between two age-at-death distributions equals the corresponding gap in life expectancy at birth ($e_0$). More specifically, the paper shows mathematically and empirically that this equivalence holds whenever the survivorship functions do not cross. For example, this applies when comparing mortality between women and men from 1990 to 2020 using data from the Human Mortality Database. In such cases, the gap in $e_0$ reflects not only a difference in mean ages at death but can also be interpreted directly as a measure of distributional difference.
https://arxiv.org/abs/2508.17235
Academic Papers
svg
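The no-crossing claim above is easy to check numerically: for two equal-size samples in one dimension, the W1 distance is the mean absolute gap between sorted values, so when one sample dominates the other pointwise it collapses to the difference in means, i.e. the gap in $e_0$ for age-at-death data. A toy sketch with synthetic ages at death (not HMD data):

```python
import numpy as np

def wasserstein_1(a, b):
    """W1 distance between two equal-size empirical samples: the mean
    absolute difference of sorted values (the optimal 1-D coupling
    matches order statistics)."""
    return np.abs(np.sort(a) - np.sort(b)).mean()
```

If every death in population B occurs later than the matched death in population A (survivorship curves never cross), sorted differences are all positive and W1 equals the mean-age gap exactly; with crossing curves, W1 strictly exceeds the absolute mean gap.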
e7157f8f6adf6ebdea9c7655e8975a846d83319c8084337c743ff175a702c49b
2026-01-21T00:00:00-05:00
Tidy simulation: Designing robust, reproducible, and scalable Monte Carlo simulations
arXiv:2509.11741v2 Announce Type: replace Abstract: Monte Carlo simulation studies are at the core of the modern applied, computational, and theoretical statistical literature. Simulation is a broadly applicable research tool, used to collect data on the relative performance of methods or data analysis approaches under a well-defined data-generating process. However, extant literature focuses largely on design aspects of simulation, rather than implementation strategies aligned with the current state of (statistical) programming languages, portable data formats, and multi-node cluster computing. In this work, I propose tidy simulation: a simple, language-agnostic, yet flexible functional framework for designing, writing, and running simulation studies. It has four components: a tidy simulation grid, a data generation function, an analysis function, and a results table. Using this structure, even the smallest simulations can be written in a consistent, modular way, yet they can be readily scaled to thousands of nodes in a computer cluster should the need arise. Tidy simulation also supports the iterative, sometimes exploratory nature of simulation-based experiments. By adopting the tidy simulation approach, researchers can implement their simulations in a robust, reproducible, and scalable way, which contributes to high-quality statistical science.
https://arxiv.org/abs/2509.11741
Academic Papers
svg
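The four components named above translate almost line-for-line into code. A minimal Python sketch, with an illustrative z-test analysis and a per-row seeding scheme chosen so each run is reproducible in isolation (the framework is language-agnostic; all names here are made up):

```python
import itertools
import random
import statistics

# 1) tidy simulation grid: one row per condition x repetition
grid = [
    {"n": n, "effect": e, "rep": r}
    for n, e, r in itertools.product([20, 100], [0.0, 0.5], range(50))
]

# 2) data-generating function: condition -> dataset
def generate(cond, rng):
    return [rng.gauss(cond["effect"], 1.0) for _ in range(cond["n"])]

# 3) analysis function: dataset -> result columns (here: a simple z-test)
def analyse(data):
    m = statistics.mean(data)
    se = statistics.stdev(data) / len(data) ** 0.5
    return {"estimate": m, "reject": abs(m / se) > 1.96}

# 4) results table: grid columns + analysis columns, one row per run
results = []
for cond in grid:
    # deterministic integer seed per row: runs can be replayed or
    # distributed across cluster nodes independently
    seed = cond["rep"] * 1000 + cond["n"] + int(cond["effect"] * 10)
    rng = random.Random(seed)
    results.append({**cond, **analyse(generate(cond, rng))})
```

Because each row carries its own seed and the loop body touches no shared state, the `for` loop can be replaced by a parallel map over the grid without changing any result.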
1c3c05e7cd2b939e3aa04bc7ab0409e8cd6c44e321de4c2fa0a1f814abe2cf54
2026-01-21T00:00:00-05:00
An Italian Gender Equality Index
arXiv:2509.17140v2 Announce Type: replace Abstract: Composite indices like the Gender Equality Index (GEI) are widely used to monitor gender disparities and guide evidence-based policy. However, their original design is often limited when applied to subnational contexts. Building on the GEI framework and the WeWorld Index Italia, this study proposes a composite indicator tailored to measure gender disparities across Italian regions. The methodology, based on a variation of the Mazziotta-Pareto Index, introduces a novel aggregation approach that penalizes uneven performances across domains. Indicators cover employment, economic resources, education, use of time, political participation, and health, reflecting multidimensional gender inequality. Using open regional data for 2024, the proposed Italian Gender Equality Index (IGEI) provides a comparable and robust measure across regions, highlighting both high-performing and lagging areas. The approach addresses compensatory limitations of traditional aggregation and offers a practical tool for regional monitoring and targeted interventions, benefiting from the fact that the IGEI is specifically tailored to the GEI framework.
https://arxiv.org/abs/2509.17140
Academic Papers
svg
7b1b2f19b69f38416b962c4a753938eecbb363f945c4c090f8e3df9ff735b155
2026-01-21T00:00:00-05:00
Chinese vs. World Bank Development Projects: Insights from Earth Observation and Computer Vision on Wealth Gains in Africa, 2002-2013
arXiv:2509.25648v2 Announce Type: replace Abstract: Debates about whether development projects improve living conditions persist, partly because observational estimates can be biased by incomplete adjustment and because reliable outcome data are scarce at the neighborhood level. We address both issues in a continent-scale, sector-specific evaluation of Chinese and World Bank projects across 9,899 neighborhoods in 36 African countries (2002-2013), representative of ~88% of the population. First, we use a recent dataset that measures living conditions with a machine-learned wealth index derived from contemporaneous satellite imagery, yielding a consistent panel of 6.7 km square mosaics. Second, to strengthen identification, we proxy officials' map-based placement criteria using pre-treatment daytime satellite images and fuse these with tabular covariates to estimate funder- and sector-specific ATEs via inverse-probability weighting. Incorporating imagery often shrinks effects relative to tabular-only models. On average, both donors raise wealth, with larger and more consistent gains for China; sector extremes in our sample include Trade and Tourism (330) for the World Bank (+12.29 IWI points), and Emergency Response (700) for China (+15.15). Assignment-mechanism analyses also show World Bank placement is often more predictable from imagery alone (as well as from tabular covariates). This suggests that Chinese project placements are more driven by non-visible, political, or event-driven factors than World Bank placements. To probe residual concerns about selection on observables, we also estimate within-neighborhood (unit) fixed-effects models at a spatial resolution about 67 times finer than prior fixed-effects analyses, leveraging the computer-vision-imputed IWI panels; these deliver smaller but, for Chinese projects, directionally consistent effects.
https://arxiv.org/abs/2509.25648
Academic Papers
svg
d6f43dca4ed5b40236c8e271c698ed0e8c745551bdf70f4f65542452a0c23f58
2026-01-21T00:00:00-05:00
Multidata Causal Discovery for Statistical Hurricane Intensity Forecasting
arXiv:2510.02050v2 Announce Type: replace Abstract: Improving statistical forecasts of Tropical Cyclone (TC) intensity is limited by complex nonlinear interactions and difficulty in identifying relevant predictors. Conventional methods prioritize correlation or fit, often overlooking confounding variables and limiting generalizability to unseen TCs. To address this, we leverage a multidata causal discovery framework with a replicated dataset based on Statistical Hurricane Intensity Prediction Scheme (SHIPS) using ERA5 meteorological reanalysis. We conduct multiple experiments to identify and select predictors causally linked to TC intensity changes. We then train multiple linear regression models to compare causal feature selection with no selection, correlation, and random forest feature importance across five forecast lead times from 1 to 5 days (24 to 120 hours). Causal feature selection consistently outperforms on unseen test cases, especially for lead times shorter than 3 days. The causal features primarily include vertical shear, mid-tropospheric potential vorticity and surface moisture conditions, which are physically significant yet often underutilized in TC intensity predictions. We build an extended predictor set (SHIPS plus) by adding selected features to the standard SHIPS predictors. SHIPS plus yields increased short-term predictive skill at lead times of 24, 48, and 72 hours. Adding nonlinearity using multilayer perceptron further extends skill to longer lead times, despite our framework being purely regional and not requiring global forecast data. Operational SHIPS tests confirm that three of the six added causally discovered predictors improve forecast skill, with the largest gains at longer lead times. Our results demonstrate that causal discovery improves TC intensity prediction and pave the way toward more empirical forecasts.
https://arxiv.org/abs/2510.02050
Academic Papers
svg
12b465dd3adec5dfdc7fa2348efbe5f426631ea27e4226f36a7de35b1236e197
2026-01-21T00:00:00-05:00
PAC Learnability in the Presence of Performativity
arXiv:2510.08335v2 Announce Type: replace Abstract: Following the widespread adoption of machine learning models in real-world applications, the phenomenon of performativity, i.e. model-dependent shifts in the test distribution, becomes increasingly prevalent. Unfortunately, since models are usually trained solely on samples from the original (unshifted) distribution, this performative shift may lead to decreased test-time performance. In this paper, we study the question of whether and when performative binary classification problems are learnable, through the lens of the classic PAC (Probably Approximately Correct) learning framework. We motivate several performative scenarios, accounting in particular for linear shifts in the label distribution, as well as for more general changes in both the labels and the features. We construct a performative empirical risk function, which depends only on data from the original distribution and on the type of performative effect, yet is an unbiased estimate of the true risk of a classifier on the shifted distribution. Minimizing this notion of performative risk allows us to show that any PAC-learnable hypothesis space in the standard binary classification setting remains PAC-learnable under the considered performative scenarios. We also conduct an extensive experimental evaluation of our performative risk minimization method and showcase benefits on synthetic and real data.
https://arxiv.org/abs/2510.08335
Academic Papers
svg
f094e411a5c15fe4a1458572307e6bbb5c3552467616855b2f8b11584a33ae0f
2026-01-21T00:00:00-05:00
Improving the Accuracy of Amortized Model Comparison with Self-Consistency
arXiv:2512.14308v2 Announce Type: replace Abstract: Amortized Bayesian inference (ABI) offers fast, scalable approximations to posterior densities by training neural surrogates on data simulated from the statistical model. However, ABI methods are highly sensitive to model misspecification: when observed data fall outside the training distribution (the generative scope of the statistical models), neural surrogates can behave unpredictably. This is particularly challenging in model comparison settings, where multiple statistical models are considered, at least some of which are misspecified. Recent work on self-consistency (SC) provides a promising remedy, applicable even to empirical data (without ground-truth labels). In this work, we investigate how SC can improve amortized model comparison, conceptualized in four different ways. Across two synthetic and two real-world case studies, we find that approaches for model comparison that estimate marginal likelihoods through approximate parameter posteriors consistently outperform methods that directly approximate model evidence or posterior model probabilities. SC training improves robustness when the likelihood is available, even under severe model misspecification. The benefits of SC for methods without access to analytic likelihoods are more limited and inconsistent. Our results suggest practical guidance for reliable amortized Bayesian model comparison: prefer parameter-posterior-based methods and augment them with SC training on empirical datasets to mitigate extrapolation bias under model misspecification.
https://arxiv.org/abs/2512.14308
Academic Papers
svg
0bb49607202abe910593632fb0a4c951e07fa9be4f2c73d9d4037273402e32be
2026-01-21T00:00:00-05:00
Memorize Early, Then Query: Inlier-Memorization-Guided Active Outlier Detection
arXiv:2601.10993v2 Announce Type: replace Abstract: Outlier detection (OD) aims to identify abnormal instances, known as outliers or anomalies, by learning typical patterns of normal data, or inliers. Performing OD under an unsupervised regime, without any information about anomalous instances in the training data, is challenging. A recently observed phenomenon, known as the inlier-memorization (IM) effect, where deep generative models (DGMs) tend to memorize inlier patterns during early training, provides a promising signal for distinguishing outliers. However, existing unsupervised approaches that rely solely on the IM effect still struggle when inliers and outliers are not well separated or when outliers form dense clusters. To address these limitations, we incorporate active learning to selectively acquire informative labels, and propose IMBoost, a novel framework that explicitly reinforces the IM effect to improve outlier detection. Our method consists of two stages: 1) a warm-up phase that induces and promotes the IM effect, and 2) a polarization phase in which actively queried samples are used to maximize the discrepancy between inlier and outlier scores. In particular, we propose a novel query strategy and a tailored loss function for the polarization phase to effectively identify informative samples and fully leverage the limited labeling budget. We provide a theoretical analysis showing that IMBoost consistently decreases inlier risk while increasing outlier risk throughout training, thereby amplifying their separation. Extensive experiments on diverse benchmark datasets demonstrate that IMBoost not only significantly outperforms state-of-the-art active OD methods but also requires substantially less computational cost.
https://arxiv.org/abs/2601.10993
Academic Papers
svg
2cc64ef93609862fce242fe841d306c449ccc45dcd1123c309c17fe5906b4102
2026-01-21T00:00:00-05:00
TSQCA: Threshold-Sweep Qualitative Comparative Analysis in R
arXiv:2601.11229v2 Announce Type: replace Abstract: Qualitative Comparative Analysis (QCA) requires researchers to choose calibration and dichotomization thresholds, and these choices can substantially affect truth tables, minimization, and resulting solution formulas. Despite this dependency, threshold sensitivity is often examined only in an ad hoc manner because repeated analyses are time-intensive and error-prone. We present TSQCA, an R package that automates threshold-sweep analyses by treating thresholds as explicit analytical variables. It provides four sweep functions (otSweep, ctSweepS, ctSweepM, dtSweep) to explore outcome thresholds, single-condition thresholds, multi-condition threshold grids, and joint outcome-condition threshold spaces, respectively. TSQCA integrates with the established CRAN package QCA for truth table construction and Boolean minimization, while returning structured S3 objects with consistent print/summary methods and optional detailed results. The package also supports automated Markdown report generation and configuration-chart output to facilitate reproducible documentation of cross-threshold results.
https://arxiv.org/abs/2601.11229
Academic Papers
svg
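The mechanic TSQCA automates, rerunning an analysis over a grid of thresholds, can be illustrated outside R. The toy below only recomputes a single sufficiency-consistency score (the share of condition-positive cases that are outcome-positive) for each outcome dichotomization threshold, rather than a full truth-table minimization, and every name is invented:

```python
import numpy as np

def threshold_sweep(condition_raw, outcome_raw, thresholds):
    """For each outcome dichotomization threshold, recompute the
    sufficiency consistency of a crisp condition: P(outcome=1 | cond=1).
    Treats the threshold as an explicit analytical variable."""
    cond = condition_raw >= np.median(condition_raw)   # fixed condition cut
    rows = []
    for t in thresholds:
        out = outcome_raw >= t
        consistency = out[cond].mean() if cond.any() else float("nan")
        rows.append({"threshold": t, "consistency": consistency})
    return rows
```

Even this stripped-down version shows the point of a sweep: consistency is not a single number but a curve over the threshold, and solution formulas that survive across a wide threshold range are the robust ones.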
623eed685a6254230a760151a3e2cd23ada6a9c245ebb6a794b3971882c82e0e
2026-01-21T00:00:00-05:00
Learning to Simulate: Generative Metamodeling via Quantile Regression
arXiv:2311.17797v4 Announce Type: replace-cross Abstract: Stochastic simulation models effectively capture complex system dynamics but are often too slow for real-time decision-making. Traditional metamodeling techniques learn relationships between simulator inputs and a single output summary statistic, such as the mean or median. These techniques enable real-time predictions without additional simulations. However, they require prior selection of one appropriate output summary statistic, limiting their flexibility in practical applications. We propose a new concept: generative metamodeling. It aims to construct a "fast simulator of the simulator," generating random outputs significantly faster than the original simulator while preserving approximately equal conditional distributions. Generative metamodels enable rapid generation of numerous random outputs upon input specification, facilitating immediate computation of any summary statistic for real-time decision-making. We introduce a new algorithm, quantile-regression-based generative metamodeling (QRGMM), and establish its distributional convergence and convergence rate. Extensive numerical experiments demonstrate QRGMM's efficacy compared to other state-of-the-art generative algorithms in practical real-time decision-making scenarios.
https://arxiv.org/abs/2311.17797
Academic Papers
svg
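The two moving parts of the approach above, fitting conditional quantile curves on a grid of levels and then generating outputs by inverse-transform sampling, can be sketched as follows. This is an illustrative reconstruction with a crude pinball-loss subgradient fit, not the paper's QRGMM algorithm; all names are made up:

```python
import numpy as np

def fit_qr_metamodel(x, y, taus):
    """Fit one linear quantile regression y ~ a_tau + b_tau * x per
    level tau, by subgradient descent on the pinball loss."""
    coefs = []
    for tau in taus:
        a, b, lr = 0.0, 0.0, 0.05
        for _ in range(2000):
            r = y - (a + b * x)
            g = np.where(r > 0, -tau, 1 - tau)   # d(pinball)/d(pred)
            a -= lr * g.mean()
            b -= lr * (g * x).mean()
        coefs.append((a, b))
    return np.array(coefs)

def generate(coefs, taus, x0, size, rng):
    """Generate outputs at input x0 by inverse-transform sampling:
    draw u ~ Uniform over the fitted levels, then interpolate the
    conditional quantile curve at x0."""
    q = coefs[:, 0] + coefs[:, 1] * x0           # quantiles at x0
    u = rng.uniform(taus[0], taus[-1], size)
    return np.interp(u, taus, np.sort(q))        # sort guards crossings
```

Once the quantile coefficients are fitted offline, `generate` needs no further simulator calls, which is the "fast simulator of the simulator" idea: any summary statistic can then be computed from the generated sample in real time.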
f3b6d1fb1f585728c75fe4f7a6889dff096cd18e578cd1efdf2c58eca6e54494
2026-01-21T00:00:00-05:00
Trading off Consistency and Dimensionality of Convex Surrogates for the Mode
arXiv:2402.10818v3 Announce Type: replace-cross Abstract: In multiclass classification over $n$ outcomes, the outcomes must be embedded into the reals with dimension at least $n-1$ in order to design a consistent surrogate loss that leads to the "correct" classification, regardless of the data distribution. For large $n$, such as in information retrieval and structured prediction tasks, optimizing a surrogate in $n-1$ dimensions is often intractable. We investigate ways to trade off surrogate loss dimension, the number of problem instances, and restricting the region of consistency in the simplex for multiclass classification. Following past work, we examine an intuitive embedding procedure that maps outcomes into the vertices of convex polytopes in a low-dimensional surrogate space. We show that full-dimensional subsets of the simplex exist around each point mass distribution for which consistency holds, but also, with less than $n-1$ dimensions, there exist distributions for which a phenomenon called hallucination occurs, which is when the optimal report under the surrogate loss is an outcome with zero probability. Looking towards application, we derive a result to check if consistency holds under a given polytope embedding and low-noise assumption, providing insight into when to use a particular embedding. We provide examples of embedding $n = 2^{d}$ outcomes into the $d$-dimensional unit cube and $n = d!$ outcomes into the $d$-dimensional permutahedron under low-noise assumptions. Finally, we demonstrate that with multiple problem instances, we can learn the mode with $\frac{n}{2}$ dimensions over the whole simplex.
https://arxiv.org/abs/2402.10818
Academic Papers
svg
7083a81a43a3a8c228d71d8883d90555ff024180ab8c1e196b34317d53674bee
2026-01-21T00:00:00-05:00
On Expressive Power of Quantized Neural Networks under Fixed-Point Arithmetic
arXiv:2409.00297v2 Announce Type: replace-cross Abstract: Existing works on the expressive power of neural networks typically assume real parameters and exact operations. In this work, we study the expressive power of quantized networks under discrete fixed-point parameters and inexact fixed-point operations with round-off errors. We first provide a necessary condition and a sufficient condition on fixed-point arithmetic and activation functions for quantized networks to represent all fixed-point functions from fixed-point vectors to fixed-point numbers. Then, we show that various popular activation functions satisfy our sufficient condition, e.g., Sigmoid, ReLU, ELU, SoftPlus, SiLU, Mish, and GELU. In other words, networks using those activation functions are capable of representing all fixed-point functions. We further show that our necessary condition and sufficient condition coincide under a mild condition on activation functions: e.g., for an activation function $\sigma$, there exists a fixed-point number $x$ such that $\sigma(x)=0$. Namely, we find a necessary and sufficient condition for a large class of activation functions. We lastly show that even quantized networks using binary weights in $\{-1,1\}$ can also represent all fixed-point functions for practical activation functions.
https://arxiv.org/abs/2409.00297
Academic Papers
svg
d7bc24755d0435b5b00ccad032ca4c3e5307e32d98378fc6ba66885f1118abb6
2026-01-21T00:00:00-05:00
Neural timescales from a computational perspective
arXiv:2409.02684v3 Announce Type: replace-cross Abstract: Neural activity fluctuates over a wide range of timescales within and across brain areas. Experimental observations suggest that diverse neural timescales reflect information in dynamic environments. However, how timescales are defined and measured from brain recordings vary across the literature. Moreover, these observations do not specify the mechanisms underlying timescale variations, nor whether specific timescales are necessary for neural computation and brain function. Here, we synthesize three directions where computational approaches can distill the broad set of empirical observations into quantitative and testable theories: We review (i) how different data analysis methods quantify timescales across distinct behavioral states and recording modalities, (ii) how biophysical models provide mechanistic explanations for the emergence of diverse timescales, and (iii) how task-performing networks and machine learning models uncover the functional relevance of neural timescales. This integrative computational perspective thus complements experimental investigations, providing a holistic view on how neural timescales reflect the relationship between brain structure, dynamics, and behavior.
https://arxiv.org/abs/2409.02684
Academic Papers
svg
ad5a416285228daa8843a7b1e95e1332cebbd604b9c9de4e020de0c77d1109fe
2026-01-21T00:00:00-05:00
TabDPT: Scaling Tabular Foundation Models on Real Data
arXiv:2410.18164v3 Announce Type: replace-cross Abstract: Tabular data is one of the most ubiquitous sources of information worldwide, spanning a wide variety of domains. This inherent heterogeneity has slowed the development of Tabular Foundation Models (TFMs) capable of fast generalization to unseen datasets. In-Context Learning (ICL) has recently emerged as a promising solution for TFMs, enabling dynamic adaptation to new tasks without additional tuning. While many studies have attempted to re-purpose large language models for tabular ICL, they have had limited success, so recent works have focused on developing tabular-specific foundation models. In this work, we propose an approach to combine ICL-based retrieval with self-supervised learning to train tabular foundation models. We also investigate the utility of real vs. synthetic data for model pre-training, and show that real data can contain useful signal not easily captured in synthetic training. Specifically, we show that incorporating real data during the pre-training phase can lead to significantly faster training and better downstream generalization to unseen data. Our resulting model, TabDPT, achieves strong performance on both regression (CTR23) and classification (CC18) benchmarks. Importantly, we also demonstrate that with our pre-training procedure, scaling both model and data size leads to consistent performance improvements that follow power laws. This echoes scaling laws in LLMs and other foundation models, and suggests that large-scale TFMs can be achievable. We open-source our full pipeline: inference code including trained model weights can be found at github.com/layer6ai-labs/TabDPT-inference, and the training code to reproduce experiments can be found at github.com/layer6ai-labs/TabDPT-training.
https://arxiv.org/abs/2410.18164
Academic Papers
svg
a7cb58f6d86962469fdbbc0789a545c839a098eef6f1fc379ddc341d98c4ab8e
2026-01-21T00:00:00-05:00
CausAdv: A Causal-based Framework for Detecting Adversarial Examples
arXiv:2411.00839v3 Announce Type: replace-cross Abstract: Deep learning has led to tremendous success in computer vision, largely due to Convolutional Neural Networks (CNNs). However, CNNs have been shown to be vulnerable to crafted adversarial perturbations. This vulnerability to adversarial examples has motivated research into improving model robustness through adversarial detection and defense methods. In this paper, we address the adversarial robustness of CNNs through causal reasoning. We propose CausAdv: a causal framework for detecting adversarial examples based on counterfactual reasoning. CausAdv learns both causal and non-causal features of every input, and quantifies the counterfactual information (CI) of every filter of the last convolutional layer. We then perform a statistical analysis of the filters' CI across clean and adversarial samples, to demonstrate that adversarial examples exhibit different CI distributions compared to clean samples. Our results show that causal reasoning enhances the process of adversarial detection without the need to train a separate detector. Moreover, we illustrate the efficiency of causal explanations as a helpful detection tool by visualizing the extracted causal features.
https://arxiv.org/abs/2411.00839
Academic Papers
svg
d226d725add8adb52600edf498fad92cd0caebc99a64e56fce5eaccdcb6c979f
2026-01-21T00:00:00-05:00
From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training
arXiv:2501.06148v2 Announce Type: replace-cross Abstract: We study the problem of training neural stochastic differential equations, or diffusion models, to sample from a Boltzmann distribution without access to target samples. Existing methods for training such models enforce time-reversal of the generative and noising processes, using either differentiable simulation or off-policy reinforcement learning (RL). We prove equivalences between families of objectives in the limit of infinitesimal discretization steps, linking entropic RL methods (GFlowNets) with continuous-time objects (partial differential equations and path space measures). We further show that an appropriate choice of coarse time discretization during training allows greatly improved sample efficiency and the use of time-local objectives, achieving competitive performance on standard sampling benchmarks with reduced computational cost.
https://arxiv.org/abs/2501.06148
Academic Papers
svg
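As a loose, self-contained illustration of the discrete-time/continuous-time correspondence discussed in the abstract above, here is an Euler-Maruyama discretization of overdamped Langevin dynamics targeting a Boltzmann distribution. The quadratic potential and step size are my choices, and this is not the paper's GFlowNet training objective:

```python
import numpy as np

# Euler-Maruyama discretization of the overdamped Langevin SDE
#   dX = -grad U(X) dt + sqrt(2) dW,
# whose stationary law is proportional to exp(-U). Here U(x) = x^2 / 2,
# so the chains should converge to a standard normal distribution.
rng = np.random.default_rng(1)
dt, n_steps, n_chains = 0.01, 5_000, 2_000
x = rng.normal(size=n_chains)  # arbitrary initialization
for _ in range(n_steps):
    x += -x * dt + np.sqrt(2 * dt) * rng.normal(size=n_chains)

print(f"sample mean {x.mean():+.2f}, sample variance {x.var():.2f}")
```

Coarsening `dt` trades per-step cost against discretization bias, which is exactly the trade-off the paper analyzes in the limit of infinitesimal steps.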
d85c7e7b522253002cd73886e5656f9547368c5f45cbc47e83119183521946c5
2026-01-21T00:00:00-05:00
Single-Step Reconstruction-Free Anomaly Detection and Segmentation via Diffusion Models
arXiv:2508.04818v2 Announce Type: replace-cross Abstract: Generative models have demonstrated significant success in anomaly detection and segmentation over the past decade. Recently, diffusion models have emerged as a powerful alternative, outperforming previous approaches such as GANs and VAEs. In typical diffusion-based anomaly detection, a model is trained on normal data, and during inference, anomalous images are perturbed to a predefined intermediate step in the forward diffusion process. The corresponding normal image is then reconstructed through iterative reverse sampling. However, reconstruction-based approaches present three major challenges: (1) the reconstruction process is computationally expensive due to multiple sampling steps, making real-time applications impractical; (2) for complex or subtle patterns, the reconstructed image may correspond to a different normal pattern rather than the original input; and (3) choosing an appropriate intermediate noise level is challenging because it is application-dependent and often assumes prior knowledge of anomalies, an assumption that does not hold in unsupervised settings. We introduce Reconstruction-free Anomaly Detection with Attention-based diffusion models in Real-time (RADAR), which overcomes the limitations of reconstruction-based anomaly detection. Unlike current SOTA methods that reconstruct the input image, RADAR directly produces anomaly maps from the diffusion model, improving both detection accuracy and computational efficiency. We evaluate RADAR on real-world 3D-printed material and the MVTec-AD dataset. Our approach surpasses state-of-the-art diffusion-based and statistical machine learning models across all key metrics, including accuracy, precision, recall, and F1 score. Specifically, RADAR improves F1 score by 7% on MVTec-AD and 13% on the 3D-printed material dataset compared to the next best model. Code available at: https://github.com/mehrdadmoradi124/RADAR
https://arxiv.org/abs/2508.04818
Academic Papers
svg
a0fcdb1be43a0618968a9ed424e30039c7161a497fd52e4fa968f97b5c058360
2026-01-21T00:00:00-05:00
Discovering equations from data: symbolic regression in dynamical systems
arXiv:2508.20257v2 Announce Type: replace-cross Abstract: The process of discovering equations from data lies at the heart of physics and in many other areas of research, including mathematical ecology and epidemiology. Recently, machine learning methods known as symbolic regression emerged as a way to automate this task. This study presents an overview of the current literature on symbolic regression, while also comparing the efficiency of five state-of-the-art methods in recovering the governing equations from nine processes, including chaotic dynamics and epidemic models. Benchmark results demonstrate the PySR method as the most suitable for inferring equations, with some estimates being indistinguishable from the original analytical forms. These results highlight the potential of symbolic regression as a robust tool for inferring and modeling real-world phenomena.
https://arxiv.org/abs/2508.20257
Academic Papers
svg
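For a flavor of equation discovery from data, here is a minimal SINDy-style sparse-regression sketch: a linear-in-parameters relative of the genetic-programming methods such as PySR surveyed above, not one of the five benchmarked tools. The logistic system and the threshold 0.1 are my choices:

```python
import numpy as np

# Synthetic data from the logistic equation dx/dt = 2.5 * x * (1 - x)
t = np.linspace(0, 4, 400)
x = 1 / (1 + 9 * np.exp(-2.5 * t))   # closed-form solution with x(0) = 0.1
dxdt = 2.5 * x * (1 - x)             # exact derivative, for clarity

# Least-squares fit over a library of candidate terms, then hard-threshold
# small coefficients to recover a sparse governing equation
library = np.column_stack([x, x**2, x**3])
names = ["x", "x^2", "x^3"]
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0
print(dict(zip(names, np.round(coef, 3))))  # recovers 2.5*x - 2.5*x^2
```

Genuine symbolic regression searches over expression structure as well as coefficients, which is what makes methods like PySR able to recover analytical forms beyond a fixed term library.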
274bb75e312cb08709dd6ebd54cc234784ae1b7b5282e2a837426f14c3f21d31
2026-01-21T00:00:00-05:00
Can the Waymo Open Motion Dataset Support Realistic Behavioral Modeling? A Validation Study with Naturalistic Trajectories
arXiv:2509.03515v2 Announce Type: replace-cross Abstract: The Waymo Open Motion Dataset (WOMD) has become a popular resource for data-driven modeling of autonomous vehicles (AVs) behavior. However, its validity for behavioral analysis remains uncertain due to proprietary post-processing, the absence of error quantification, and the segmentation of trajectories into 20-second clips. This study examines whether WOMD accurately captures the dynamics and interactions observed in real-world AV operations. Leveraging an independently collected naturalistic dataset from Level 4 AV operations in Phoenix, Arizona (PHX), we perform comparative analyses across three representative urban driving scenarios: discharging at signalized intersections, car-following, and lane-changing behaviors. For the discharging analysis, headways are manually extracted from aerial video to ensure negligible measurement error. For the car-following and lane-changing cases, we apply the Simulation-Extrapolation (SIMEX) method to account for empirically estimated error in the PHX data and use Dynamic Time Warping (DTW) distances to quantify behavioral differences. Results across all scenarios consistently show that behavior in PHX falls outside the behavioral envelope of WOMD. Notably, WOMD underrepresents short headways and abrupt decelerations. These findings suggest that behavioral models calibrated solely on WOMD may systematically underestimate the variability, risk, and complexity of naturalistic driving. Caution is therefore warranted when using WOMD for behavior modeling without proper validation against independently collected data.
https://arxiv.org/abs/2509.03515
Academic Papers
svg
c8e0e8052400249a53e072df009b38262bbc9f17cc9ab49831d79498685b5516
2026-01-21T00:00:00-05:00
Message passing-based inference in an autoregressive active inference agent
arXiv:2509.25482v2 Announce Type: replace-cross Abstract: We present the design of an autoregressive active inference agent in the form of message passing on a factor graph. Expected free energy is derived and distributed across a planning graph. The proposed agent is validated on a robot navigation task, demonstrating exploration and exploitation in a continuous-valued observation space with bounded continuous-valued actions. Compared to a classical optimal controller, the agent modulates action based on predictive uncertainty, arriving later but with a better model of the robot's dynamics.
https://arxiv.org/abs/2509.25482
Academic Papers
svg
f567437e3328f646d0a3b81b891243d7696ca7f2051cb433d2e085993c5af726
2026-01-21T00:00:00-05:00
Chain-of-Influence: Tracing Interdependencies Across Time and Features in Clinical Predictive Modeling
arXiv:2510.09895v3 Announce Type: replace-cross Abstract: Modeling clinical time-series data is hampered by the challenge of capturing latent, time-varying dependencies among features. State-of-the-art approaches often rely on black-box mechanisms or simple aggregation, failing to explicitly model how the influence of one clinical variable propagates through others over time. We propose $\textbf{Chain-of-Influence (CoI)}$, an interpretable deep learning framework that constructs an explicit, time-unfolded graph of feature interactions. CoI enables the tracing of influence pathways, providing a granular audit trail that shows how any feature at any time contributes to the final prediction, both directly and through its influence on other variables. We evaluate CoI on mortality and disease progression tasks using the MIMIC-IV dataset and a chronic kidney disease cohort. Our framework achieves state-of-the-art predictive performance (AUROC of 0.960 on CKD progression and 0.950 on ICU mortality), with deletion-based sensitivity analyses confirming that CoI's learned attributions faithfully reflect its decision process. Through case studies, we demonstrate that CoI uncovers clinically meaningful, patient-specific patterns of disease progression, offering enhanced transparency into the temporal and cross-feature dependencies that inform clinical decision-making.
https://arxiv.org/abs/2510.09895
Academic Papers
svg
68153b2963ce267f667c898cbbdca8953a337b228fa30401f29690e0926381f7
2026-01-21T00:00:00-05:00
Mobile Coverage Analysis using Crowdsourced Data
arXiv:2510.13459v2 Announce Type: replace-cross Abstract: Effective assessment of mobile network coverage and the precise identification of service weak spots are paramount for network operators striving to enhance user Quality of Experience (QoE). This paper presents a novel framework for mobile coverage and weak spot analysis utilising crowdsourced QoE data. The core of our methodology involves coverage analysis at the individual cell (antenna) level, subsequently aggregated to the site level, using empirical geolocation data. A key contribution of this research is the application of One-Class Support Vector Machine (OC-SVM) algorithm for calculating mobile network coverage. This approach models the decision hyperplane as the effective coverage contour, facilitating robust calculation of coverage areas for individual cells and entire sites. The same methodology is extended to analyse crowdsourced service loss reports, thereby identifying and quantifying geographically localised weak spots. Our findings demonstrate the efficacy of this novel framework in accurately mapping mobile coverage and, crucially, in highlighting granular areas of signal deficiency, particularly within complex urban environments.
https://arxiv.org/abs/2510.13459
Academic Papers
svg
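The coverage-contour idea above can be sketched with scikit-learn's OneClassSVM; the synthetic points below are hypothetical stand-ins for crowdsourced geolocated QoE samples around a single cell site:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical crowdsourced samples (lat, lon) around one cell site
points = rng.normal(loc=[51.5, -0.1], scale=0.01, size=(500, 2))

# nu upper-bounds the fraction of training points treated as outliers;
# the learned decision hyperplane serves as the effective coverage contour
oc = OneClassSVM(kernel="rbf", nu=0.05, gamma=2000.0).fit(points)

inside = oc.decision_function(points) >= 0        # >= 0: inside the contour
far_away = oc.decision_function([[0.0, 0.0]])[0]  # a point far from the site
print(f"{inside.mean():.0%} of samples inside; far point score {far_away:.2f}")
```

Aggregating per-cell contours to site level, and rerunning the fit on service-loss reports to localize weak spots, follows the same pattern.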
b3a0514e543dca63929308ed4f3c77b660b95e7eac23057112b8cf1197e8be08
2026-01-21T00:00:00-05:00
Joint Score-Threshold Optimization for Interpretable Risk Assessment
arXiv:2510.21934v2 Announce Type: replace-cross Abstract: Risk assessment tools in healthcare commonly employ point-based scoring systems that map patients to ordinal risk categories via thresholds. While electronic health record (EHR) data presents opportunities for data-driven optimization of these tools, two fundamental challenges impede standard supervised learning: (1) labels are often available only for extreme risk categories due to intervention-censored outcomes, and (2) misclassification cost is asymmetric and increases with ordinal distance. We propose a mixed-integer programming (MIP) framework that jointly optimizes scoring weights and category thresholds in the face of these challenges. Our approach prevents label-scarce category collapse via threshold constraints, and utilizes an asymmetric, distance-aware objective. The MIP framework supports governance constraints, including sign restrictions, sparsity, and minimal modifications to incumbent tools, ensuring practical deployability in clinical workflows. We further develop a continuous relaxation of the MIP problem to provide warm-start solutions for more efficient MIP optimization. We apply the proposed score optimization framework to a case study of inpatient falls risk assessment using the Johns Hopkins Fall Risk Assessment Tool.
https://arxiv.org/abs/2510.21934
Academic Papers
svg
ac486a5f1049ce2f9ba16e976c34375bcc00fbcb5e09e40fd1ca2d6ad3c4ec0a
2026-01-21T00:00:00-05:00
GraphBench: Next-generation graph learning benchmarking
arXiv:2512.04475v4 Announce Type: replace-cross Abstract: Machine learning on graphs has recently achieved impressive progress in various domains, including molecular property prediction and chip design. However, benchmarking practices remain fragmented, often relying on narrow, task-specific datasets and inconsistent evaluation protocols, which hampers reproducibility and broader progress. To address this, we introduce GraphBench, a comprehensive benchmarking suite that spans diverse domains and prediction tasks, including node-level, edge-level, graph-level, and generative settings. GraphBench provides standardized evaluation protocols -- with consistent dataset splits and performance metrics that account for out-of-distribution generalization -- as well as a unified hyperparameter tuning framework. Additionally, we benchmark GraphBench using message-passing neural networks and graph transformer models, providing principled baselines and establishing a reference performance. See www.graphbench.io for further details.
https://arxiv.org/abs/2512.04475
Academic Papers
svg
b350cb5a20fb4c37c9ac8a0980dccc0e665e6a7e3d2718698ff286032b528061
2026-01-21T00:00:00-05:00
Adaptive Multi-task Learning for Probabilistic Load Forecasting
arXiv:2512.20232v2 Announce Type: replace-cross Abstract: Simultaneous load forecasting across multiple entities (e.g., regions, buildings) is crucial for the efficient, reliable, and cost-effective operation of power systems. Accurate load forecasting is a challenging problem due to the inherent uncertainties in load demand, dynamic changes in consumption patterns, and correlations among entities. Multi-task learning has emerged as a powerful machine learning approach that enables the simultaneous learning across multiple related problems. However, its application to load forecasting remains underexplored and is limited to offline learning methods, which cannot capture changes in consumption patterns. This paper presents an adaptive multi-task learning method for probabilistic load forecasting. The proposed method can dynamically adapt to changes in consumption patterns and correlations among entities. In addition, the techniques presented provide reliable probabilistic predictions for loads of multiple entities and assess load uncertainties. Specifically, the method is based on vector-valued hidden Markov models and uses a recursive process to update the model parameters and provide predictions with the most recent parameters. The performance of the proposed method is evaluated using datasets that contain the load demand of multiple entities and exhibit diverse and dynamic consumption patterns. The experimental results show that the presented techniques outperform existing methods both in terms of forecasting performance and uncertainty assessment.
https://arxiv.org/abs/2512.20232
Academic Papers
svg
5694ea1636ddf84accdc5a2275c9fe2c7e0619d34ef614ccee91fbb980dc230c
2026-01-21T00:00:00-05:00
Geometric Stability: The Missing Axis of Representations
arXiv:2601.09173v2 Announce Type: replace-cross Abstract: Analysis of learned representations has a blind spot: it focuses on $similarity$, measuring how closely embeddings align with external references, but similarity reveals only what is represented, not whether that structure is robust. We introduce $geometric$ $stability$, a distinct dimension that quantifies how reliably representational geometry holds under perturbation, and present $Shesha$, a framework for measuring it. Across 2,463 configurations in seven domains, we show that stability and similarity are empirically uncorrelated ($\rho \approx 0.01$) and mechanistically distinct: similarity metrics collapse after removing the top principal components, while stability retains sensitivity to fine-grained manifold structure. This distinction yields actionable insights: for safety monitoring, stability acts as a functional geometric canary, detecting structural drift nearly 2$\times$ more sensitively than CKA while filtering out the non-functional noise that triggers false alarms in rigid distance metrics; for controllability, supervised stability predicts linear steerability ($\rho = 0.89$-$0.96$); for model selection, stability dissociates from transferability, revealing a geometric tax that transfer optimization incurs. Beyond machine learning, stability predicts CRISPR perturbation coherence and neural-behavioral coupling. By quantifying $how$ $reliably$ systems maintain structure, geometric stability provides a necessary complement to similarity for auditing representations across biological and computational systems.
https://arxiv.org/abs/2601.09173
Academic Papers
svg
aacab8926b07296961c7f20767dad345830fee53923bc7e8e9f4c4aa2909c526
2026-01-21T00:00:00-05:00
$\ell$-Multiranks of Multipartite Quantum States via Tensor Flattening: A Mathematica Codebase
arXiv:2601.11551v1 Announce Type: new Abstract: We present a Mathematica codebase for computing $\ell$-multilinear ranks ($\ell$-multiranks) of multiqudit quantum states using tensor-flattening techniques. By calculating the ranks of all bipartition-induced matricizations, the method provides an efficient criterion for detecting Genuine Multipartite Entangled (GME) states in systems with local dimension $d$. The code automatically generates all required tensor reshapes and outputs the full $\ell$-multirank profile, offering a practical tool for characterizing entanglement in high-dimensional multiqudit systems.
https://arxiv.org/abs/2601.11551
Academic Papers
svg
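The paper's codebase is in Mathematica; the core flattening-rank computation is easy to mirror in NumPy. The 3-qubit GHZ check below is my example, not taken from the abstract:

```python
from itertools import combinations
import numpy as np

def multiranks(T):
    """Rank of every bipartition-induced matricization of tensor T.

    Complementary bipartitions give equal ranks, so only subsets S
    with |S| <= ndim / 2 are enumerated.
    """
    n = T.ndim
    ranks = {}
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            rest = tuple(m for m in range(n) if m not in S)
            rows = int(np.prod([T.shape[m] for m in S]))
            M = np.transpose(T, S + rest).reshape(rows, -1)
            ranks[S] = int(np.linalg.matrix_rank(M))
    return ranks

# GHZ state on 3 qubits: every single-qubit flattening has rank 2,
# so the state is entangled across every bipartition
ghz = np.zeros((2, 2, 2))
ghz[0, 0, 0] = ghz[1, 1, 1] = 1 / np.sqrt(2)
print(multiranks(ghz))  # {(0,): 2, (1,): 2, (2,): 2}
```

A rank of 1 across some bipartition would certify separability across that cut, which is why the full multirank profile is useful for screening GME candidates.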
5617c21b2cf201c51335c156e8899b10e96591e19a4035c731c6122a798948bf
2026-01-21T00:00:00-05:00
QDsiM: A Noise-Aware Simulation Toolkit for Quantum Diamond Microscope
arXiv:2601.11649v1 Announce Type: new Abstract: The nitrogen-vacancy (NV) center in diamond is a leading solid-state platform for room-temperature quantum magnetometry owing to its long spin coherence times, optical spin initialization and readout, and high sensitivity to magnetic, electric, and thermal perturbations. As NV-based optically detected magnetic resonance (ODMR) systems transition from controlled laboratory environments toward portable and field-deployable sensors, a detailed understanding of realistic noise sources and experimental imperfections becomes essential for optimizing performance and sensitivity. In this work, we present a comprehensive simulation framework, i.e., a digital twin, for continuous-wave wide-field ODMR in NV-center ensembles. The model is built upon a physically consistent seven-level description of the NV center and incorporates a broad range of experimentally relevant noise and imperfection mechanisms as modular, parameterized components. These include laser and microwave amplitude fluctuations, microwave phase noise, uncertainty in the NV gyromagnetic ratio, spin dephasing, temperature-induced shifts of the ground-state zero-field splitting, surface-induced magnetic field perturbations, and photon shot noise. Power broadening and contrast degradation arising from optical and microwave driving are captured self-consistently through linewidth calculations. Also, the spatial inhomogeneity is modeled via a Gaussian laser intensity profile across the sensing region...
https://arxiv.org/abs/2601.11649
Academic Papers
svg
ae21947c1a262b1befa1ea16952a63800d7014a91331b2929b3c88dd8747e279
2026-01-21T00:00:00-05:00
Experimental observation of dynamical blockade between transmon qubits via ZZ interaction engineering
arXiv:2601.11714v1 Announce Type: new Abstract: We report the experimental realization of strong longitudinal (ZZ) coupling between two superconducting transmon qubits achieved solely through capacitive engineering. By systematically varying the qubit frequency detuning, we measure cross-Kerr inter-qubit interaction strengths ranging from 10 MHz up to 350 MHz, more than an order of magnitude larger than previously observed in similar capacitively coupled systems. In this configuration, the qubits enter a strong-interaction regime in which the excitation of one qubit inhibits that of its neighbor, demonstrating a dynamical blockade mediated entirely by the engineered ZZ coupling. Circuit quantization simulations accurately reproduce the experimental results, while perturbative models confirm the theoretical origin of the energy shift as a hybridization between the computational states and higher-excitation manifolds. We establish a robust and scalable method to access interaction-dominated physics in superconducting circuits, providing a pathway towards solid-state implementations of globally controlled quantum architectures and cooperative many-body dynamics.
https://arxiv.org/abs/2601.11714
Academic Papers
svg
1c62532ea54a204e9ea0cbd0081be891b86eb569855fa00446029ff8f48bd25f
2026-01-21T00:00:00-05:00
Entanglement Distribution over a Polarization-Stabilized Aerial Fiber
arXiv:2601.11753v1 Announce Type: new Abstract: We experimentally demonstrate the distribution of polarization-entangled photons across a 62-km, partially-aerial fiber. With polarization stabilization applied to the fiber link, we achieve a photon pair rate of approximately 1500 per second and observe a CHSH inequality violation with S=2.34.
https://arxiv.org/abs/2601.11753
Academic Papers
svg
4fe4aee52204df31eb2180dbffb29b37e79bdcb2cce11e2aa9e4d94e587a5b1a
2026-01-21T00:00:00-05:00
Scalable and telecom single-erbium system with record-long room-temperature quantum coherence
arXiv:2601.11879v1 Announce Type: new Abstract: Eliminating cryogenic operating requirements while preserving microsecond-scale quantum coherence and enabling CMOS scalability remains a central challenge for telecom quantum technologies. Addressing this, we introduce a CMOS-compatible quantum system comprising single-erbium-(Er)-ion qudits (five-level systems) operating across the visible and telecom C-band. Through innovative nanofabrication, we achieve self-aligned ion placement, enabling spatial isolation of single-Er ions and suppressing dephasing. We realize individually addressable single-Er devices with record-long optical coherence times in the telecom C-band exceeding 500 μs at ambient conditions, a performance previously limited to vacuum conditions at temperatures over 900 times lower. Furthermore, we present the first demonstration of background-free, upconversion-enabled single-photon Er emissions providing coherent, high-contrast optical readouts. This work showcases the first room-temperature single-Er-qudit system with unprecedented properties enabling next-generation cryogen-free telecom quantum technologies.
https://arxiv.org/abs/2601.11879
Academic Papers
svg
4f94dca4e5d99bc489e66c213aae16809a890d2f41b64263a340b3e5e8764d52
2026-01-21T00:00:00-05:00
Indoor Occupancy Classification using a Compact Hybrid Quantum-Classical Model Enabled by a Physics-Informed Radar Digital Twin
arXiv:2601.11929v1 Announce Type: new Abstract: Indoor occupancy classification enables privacy-preserving monitoring in settings such as remote elder care, where presence information helps triage alarms without cameras or wearables. Radar suits this role by sensing motion through occlusions and in darkness. Modern deep-learning pipelines are the standard for interpreting radar returns effectively; however, they are often parameter-heavy and sensitive at low signal-to-noise ratios (SNR), motivating compact alternatives like Hybrid Quantum Neural Networks (HQNNs). A two-qubit HQNN is benchmarked against convolutional neural networks (CNNs) using a physics-informed 60GHz digital twin and real radar measurements under matched training protocols. In clean conditions, the HQNN achieves high accuracy (99.7% synthetic; 97.0% real) with up to 170x fewer parameters (0.066M). Its parameter efficiency is shown to be structural, as an ablation of the parameterized quantum circuit (PQC) causes sharp performance drops on real data (to 68.5% and 31.5% for the control heads). A domain-dependent sensitivity emerges under additive-noise evaluation, where the HQNN begins recovery earlier in synthetic data while CNNs recover more steeply and peak higher on real measurements. In label-fraction ablations, CNNs prove more sample-efficient on real Range-Doppler Maps (RDMs), with the performance gap being most pronounced (at 50% labels, BA 0.89-0.99 vs. HQNN 0.75). On synthetic data, this gap narrows significantly, largely vanishing by the 50% label mark. Overall, the HQNN's value lies in parameter efficiency and a compact inductive bias that shapes its distinct sensitivity profile; this work establishes a rigorous baseline for hybrid quantum models in privacy-preserving radar occupancy sensing.
https://arxiv.org/abs/2601.11929
Academic Papers
svg
602407b053c6c729471630b9b66361f84978f6db25ccc65a53df7f82a5d3d10f
2026-01-21T00:00:00-05:00
Impact of Circuit Depth versus Qubit Count on Variational Quantum Classifiers for Higgs Boson Signal Detection
arXiv:2601.11937v1 Announce Type: new Abstract: High-Energy Physics (HEP) experiments, such as those at the Large Hadron Collider (LHC), generate massive datasets that challenge classical computational limits. Quantum Machine Learning (QML) offers a potential advantage in processing high-dimensional data; however, finding the optimal architecture for current Noisy Intermediate-Scale Quantum (NISQ) devices remains an open challenge. This study investigates the performance of Variational Quantum Classifiers (VQC) in detecting Higgs Boson signals using the ATLAS Higgs Boson Machine Learning Challenge 2014 experiment dataset. We implemented a dimensionality reduction pipeline using Principal Component Analysis (PCA) to map 30 physical features into 4-qubit and 8-qubit latent spaces. We benchmarked three configurations: (A) a shallow 4-qubit circuit, (B) a deep 4-qubit circuit with increased entanglement layers, and (C) an expanded 8-qubit circuit. Experimental results demonstrate that increasing circuit depth significantly improves performance, yielding the highest accuracy of 56.2% (Configuration B), compared to a baseline of 51.9%. Conversely, simply scaling to 8 qubits resulted in a performance degradation to 50.6% due to optimization challenges associated with Barren Plateaus in the larger Hilbert space. These findings suggest that for near-term quantum hardware, prioritizing circuit depth and entanglement capability is more critical than increasing qubit count for effective anomaly detection in HEP data.
https://arxiv.org/abs/2601.11937
Academic Papers
svg
fdcbebdef606034086601a596fc5e328a2f936817cdbad299642260107ae8b3a
2026-01-21T00:00:00-05:00
Contour-integral based quantum eigenvalue transformation: analysis and applications
arXiv:2601.11959v1 Announce Type: new Abstract: Eigenvalue transformations appear ubiquitously in scientific computation, ranging from matrix polynomials to differential equations, and are beyond the reach of the quantum singular value transformation framework. In this work, we study the efficiency of quantum algorithms based on contour integral representation for eigenvalue transformations from both theoretical and practical aspects. Theoretically, we establish a complete complexity analysis of the contour integral approach proposed in [Takahira, Ohashi, Sogabe, and Usuda. Quant. Inf. Comput., 22, 11&12, 965-979 (2021)]. Moreover, we combine the contour integral approach and the sampling-based linear combination of unitaries to propose a quantum algorithm for estimating observables of eigenvalue transformations using only $3$ additional qubits. Practically, we design contour integral based quantum algorithms for Hamiltonian simulation, matrix polynomials, and solving linear ordinary differential equations, and show that the contour integral algorithm can outperform all the existing quantum algorithms in the case of solving asymptotically stable differential equations.
https://arxiv.org/abs/2601.11959
Academic Papers
svg
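The contour-integral representation this paper builds on can be illustrated classically. A minimal numpy sketch, not the paper's quantum algorithm: approximate f(A) via the Cauchy integral f(A) = (1/2*pi*i) * contour integral of f(z) (zI - A)^{-1} dz, discretized with the trapezoidal rule on a circle enclosing the spectrum (`contour_fun` is an illustrative name).

```python
import numpy as np

def contour_fun(A, f, center=0.0, radius=2.0, n_nodes=64):
    """Approximate f(A) by discretizing the Cauchy integral
    f(A) = (1/2*pi*i) \oint f(z) (zI - A)^{-1} dz
    with the trapezoidal rule on a circle of given center and radius
    that must enclose the spectrum of A."""
    n = A.shape[0]
    F = np.zeros((n, n), dtype=complex)
    for k in range(n_nodes):
        z = center + radius * np.exp(2j * np.pi * k / n_nodes)
        R = np.linalg.inv(z * np.eye(n) - A)   # resolvent at node z
        # dz = i (z - center) d(theta) cancels the 1/(2*pi*i) prefactor
        F += f(z) * (z - center) * R
    return F / n_nodes

# Sanity check against the exact matrix exponential of a Hermitian A
A = np.array([[0.0, 1.0], [1.0, 0.0]])
approx = contour_fun(A, np.exp, radius=2.0)
w, V = np.linalg.eigh(A)
exact = V @ np.diag(np.exp(w)) @ V.conj().T
print(np.max(np.abs(approx - exact)))  # small: trapezoidal rule converges exponentially here
```

For periodic analytic integrands such as this one, the trapezoidal rule converges geometrically in the number of nodes, which is part of what makes contour-based constructions attractive.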
cb5855ee4eb830d9384f76620c24bd207cd169ac1044d3bc4397c7ee2aef96ae
2026-01-21T00:00:00-05:00
Temperature effect on a kicked Tonks-Girardeau gas
arXiv:2601.12071v1 Announce Type: new Abstract: It is widely recognized that finite temperatures degrade quantum coherence and can induce thermalization. Here, we study the effect of finite temperature on a kicked Tonks-Girardeau gas, which is known to exhibit many-body dynamical localization and delocalization under periodic and quasiperiodic kicks, respectively. We find that many-body dynamical localization persists at finite, and even high, temperatures, although the coherence of the localized state is further degraded. In particular, we demonstrate a modified effective thermalization of the localized state by considering the initial temperature. Moreover, we show a many-body dynamical localization transition at intermediate temperature. Our work extends the study of many-body dynamical localization and delocalization to the finite-temperature regime, providing comprehensive guidance for cold-atom experiments.
https://arxiv.org/abs/2601.12071
Academic Papers
svg
9554632c2d424e51dae402f326a3a260b010c4e068b3c1fa4dd5c68dcb92d843
2026-01-21T00:00:00-05:00
An unexpected theoretical structure that could explain quantum-mechanics postulates like the Born rule and the wave-function reduction
arXiv:2601.12092v1 Announce Type: new Abstract: A single postulate is shown to underlie the whole of quantum mechanics: the invariance of the Heisenberg uncertainty inequality under a group of special nonlinear gauge transformations (NLGT). With this postulate, the quantum mechanics of a free particle is derived from classical mechanics, including the statements of the postulates of quantum mechanics, except for the wave-function-collapse postulate. An explanatory mechanism for the latter postulate is derived by performing an analytical continuation of the NLGTs. This extension results in a Schr\"odinger-bridge process, intertwined under the NLGT with the standard unitary quantum evolution, and revealing non-quantum (or beyond-quantum) phenomena. Mechanisms of the latter kind, like those associated with the quantum measurement process, occur in a new space-like dimension and are hence non-causal in nature, as opposed to a time evolution. The present exercise focuses on the free particle in order to highlight the features of the derivation in the simplest possible way. Work is in progress to extend the derivation beyond this simple case.
https://arxiv.org/abs/2601.12092
Academic Papers
svg
3d48090b75874491417461a818e13f95d052e3fcdaa4c4cd5fe403b4b901c748
2026-01-21T00:00:00-05:00
Probing multiparameter quantum estimation in the process $e^+e^-\to J/\psi \to \text{B}\bar{\text{B}}$ at BESIII
arXiv:2601.12097v1 Announce Type: new Abstract: The quantum Fisher information matrix (QFIM) is the cornerstone of multiparameter quantum metrology. In this work, we investigate multiparameter quantum estimation in baryon-antibaryon (B bar-B) pairs produced via the e+ e- -> J/psi -> B bar-B process at the BESIII experiment, utilizing the symmetric logarithmic derivative (SLD) formalism. Moreover, the QFIM defines the quantum Cramer-Rao bound and dictates the choice of optimal probe states. We compare individual and simultaneous estimation strategies for two key physical parameters: the scattering angle phi and the decay parameter alpha_psi. The estimation variances are found to depend strongly on the explored region of the (phi, alpha_psi) parameter space and to display markedly different temporal dynamics. In general, higher true values of a parameter increase the system's sensitivity, thereby significantly reducing the associated variance. While both variances increase with evolution time, they do so at distinct rates, revealing parameter-dependent information loss driven by environmental decoherence. These findings demonstrate the utility of the QFIM framework for multiparameter quantum estimation in realistic open systems and provide new insights into the ultimate precision limits achievable for hyperon decay parameters.
https://arxiv.org/abs/2601.12097
Academic Papers
svg
6f4df82d96c4c6f76e11427ae33aeb37f25dfcb803761a2d07cdc6f1fb881e37
2026-01-21T00:00:00-05:00
Quantum interference between spectral bandwidth mismatched photons
arXiv:2601.12129v1 Announce Type: new Abstract: Two-photon interference is a cornerstone of photonic quantum technologies. However, its practical implementation in promising hybrid architectures is severely constrained by the requirement of photon wavepacket indistinguishability, in particular, in terms of the photon linewidth and associated time scale. While narrowband filtering can improve interference visibility, it introduces significant photon loss - a critical limitation for applications. Here, we experimentally demonstrate an efficient approach to enable non-classical two-photon interference between spectral-bandwidth mismatched photons using an electro-optic time lens. We increase the visibility of Hong-Ou-Mandel interference between photons of 10-fold spectral bandwidth mismatch by more than 12 times, achieving non-classical two-photon interference visibility without spectral filtering. This result opens the possibility to efficiently integrate quantum systems operating at different time scales for hybrid quantum communication, teleportation, entanglement swapping, distributed sensing, and hybrid quantum computing.
https://arxiv.org/abs/2601.12129
Academic Papers
svg
6ded6ee4d8a054fb3fb8a5acd321e40b3fc9e26f8cebc8a7632b712c3c1e0a77
2026-01-21T00:00:00-05:00
Single-shot Quantum State Classification via Nonlinear Quantum Amplification
arXiv:2601.12168v1 Announce Type: new Abstract: Quantum amplifiers are intrinsically nonlinear systems whose performance limits are set by quantum mechanics. In quantum measurement, amplifier operation is conventionally optimized in the linear regime by maximizing signal-to-noise ratio, an objective that is well-suited to parameter estimation but is typically insufficient for more general tasks such as arbitrary quantum state discrimination. Here we show that single-shot quantum state classification can benefit from operating a quantum amplifier outside the linear regime, when the measurement chain is optimized end-to-end for a task-specific cost function. We analyze a realistic superconducting readout architecture that includes state preparation, cryogenic nonlinear amplification, and room-temperature detection with finite noise. By introducing performance metrics tailored to state discrimination, we identify operating regimes in which nonlinear amplification provides a measurable advantage and clarify the trade-offs that ultimately limit classification fidelity. Our results demonstrate the utility of practical nonlinear quantum amplifiers for quantum state discrimination and are a first step in a broader research program aimed at developing a general framework for end-to-end, resource-limited optimization of nonlinear quantum amplifiers for quantum information processing applications.
https://arxiv.org/abs/2601.12168
Academic Papers
svg
4283b8d38a8b76f60e34c80e2acc73680f41b73241f2a469c54fab813eb57bdc
2026-01-21T00:00:00-05:00
Non-Trivial Topological Majorana Architectures: Mobius and Trefoil Band Topologies Evaluated by Signal-to-Noise Ratio and Coherence-Time Measurements
arXiv:2601.12182v1 Announce Type: new Abstract: Topological quantum computing is expected to be less sensitive to noise because information is stored in global states rather than local features. To examine whether different device topologies show measurable differences, we study three geometries with distinct topological invariants: a Mobius strip, a loop, and a trefoil knot, which have been proposed in electronic-structure settings. From quantum capacitance measurements, we extract power versus frequency spectra and fit Lorentzian line shapes to obtain the linewidth, amplitude, signal-to-noise ratio, and coherence time. The signal-to-noise ratio quantifies the ratio of the parity measurement signal to background noise and serves as an indicator of readout quality, while the coherence time characterizes the timescale for decoherence of the quantum state. Across all three topologies, coherence times are similar, with no clear dependence on geometry. In contrast, the signal-to-noise ratio differs in the regime E0 = 10 micro-eV and Z = -1, following the ordering Trefoil, Mobius, and Loop. These results provide a reference point for future experiments aimed at separating genuine topological effects from device-level parameters.
https://arxiv.org/abs/2601.12182
Academic Papers
svg
ec944efb46d560bdacfc7d0006fd3c2a65c1f06d725fcb3751fb0834a1ff3223
2026-01-21T00:00:00-05:00
Maximum precision charging of multi-qubit quantum batteries
arXiv:2601.12183v1 Announce Type: new Abstract: Precision, robustness, and efficiency are crucial aspects in the design of quantum technologies. Here, we show how genuine quantum features, together with non-Gaussianity, can be the key elements to achieve the best of these three aspects during a quantum battery-charging process. Taking inspiration from a light-matter interaction paradigm, i.e., the Jaynes-Cummings model, we employ the Full Counting Statistics to study the stochastic exchanges of energy between an entire stack of qubits and a single-mode electromagnetic field (or mechanical oscillator). Our study allows us to conclude that charging the battery through a sequential protocol involving a quantum non-Gaussian field state guarantees extremely high performance in the charging process, whose precision is maximized even under sub-optimal operating conditions. These results highlight the potential of non-Gaussian quantum state charging to achieve a robust quantum precision advantage over Gaussian states of the field by suppressing detrimental quantum fluctuations, thus making it suitable for demanding tasks in which a significant degree of accuracy is required.
https://arxiv.org/abs/2601.12183
Academic Papers
svg
10cfef3b2a7b31b597fb5410b2536a7152d704bce1bff39d4e65e46dcbc4aa3b
2026-01-21T00:00:00-05:00
Inverse Quantum Simulation for Quantum Material Design
arXiv:2601.12239v1 Announce Type: new Abstract: Quantum simulation provides a powerful route for exploring many-body phenomena beyond the capabilities of classical computation. Existing approaches typically proceed in the forward direction: a model Hamiltonian is specified, implemented on a programmable quantum platform, and its phase diagram and properties are explored. Here we present a quantum algorithmic framework for inverse quantum simulation, enabling quantum material design with desired properties. Target material characteristics are encoded as a cost function, which is minimized on quantum hardware to prepare a many-body state with the desired properties in quantum memory. Hamiltonian learning is then used to reconstruct a low-energy Hamiltonian for which this state is an approximate ground state, yielding a physically interpretable model that can guide experimental synthesis. As illustrative applications, we outline how the method can be used to search for high-temperature superconductors within the fermionic Hubbard model, enhancing $d$-wave correlations over a broad range of dopings and temperatures, design quantum phases by stabilizing a topological order through continuous Hamiltonian modifications, and optimize dynamical properties relevant for photochemistry and frequency- and momentum-resolved condensed-matter data. These results extend the scope of quantum simulators from exploring quantum many-body systems to designing and discovering new quantum materials.
https://arxiv.org/abs/2601.12239
Academic Papers
svg
8bd74c0ddf0bc60fdd6f2109a95b763011eeac1f53ad92d3660325494348f92f
2026-01-21T00:00:00-05:00
Measuring unconventional causal structures in monitored dynamics
arXiv:2601.12271v1 Announce Type: new Abstract: Causality underpins all logical reasoning. However, the causal structure in quantum processes can be far from intuitive, often differing from its classical counterpart in relativity, which is defined by the light cone. In particular, in systems with measurement and post-selection, causal influence can occur between spacelike separated regions. In this work, we study the causal structure and emergent "arrow of time" in monitored quantum dynamics, particularly their dependence on initial and final states. We propose a new measure, the cross-entropy quantum causal influence, to quantify the extent of causal influence, whose simulation demonstrates exotic causal structures, such as inverted light cones. This quantity can be measured in current quantum computing platforms. Additionally, we provide an analytical understanding of the relation between time arrow and entropy by studying two types of models that are analytically tractable: a quantum Brownian evolution model and a dual-unitary circuit model.
https://arxiv.org/abs/2601.12271
Academic Papers
svg
d63c2fde70e3e896a1af268014c06c65dde6fc5d2368764fcc7b0a8f46d2cb7f
2026-01-21T00:00:00-05:00
Disentanglement by deranking and by suppression of correlation
arXiv:2601.12344v1 Announce Type: new Abstract: The spontaneous disentanglement hypothesis is motivated by some outstanding issues in standard quantum mechanics, including the problem of quantum measurement. The current study compares some possible methods that can be used to implement the hypothesis. Disentanglement is formulated using a nonlinear operator, which can be used to modify both the Schr\"{o}dinger equation for the quantum state vector and the master equation for the density operator. Two types of nonlinear disentanglement operators are explored. The first one gives rise to matrix deranking, and the second one to correlation suppression. Both types are demonstrated using a two-spin system that is driven close to the Hartmann-Hahn double resonance. It is shown that limit-cycle steady-state solutions, which are excluded by standard quantum mechanics, become possible in the presence of disentanglement.
https://arxiv.org/abs/2601.12344
Academic Papers
svg
ea1547fbf23a363920fe19e51920f8da4584ad6ba00b86763e49d92aa9abe707
2026-01-21T00:00:00-05:00
Efficient classical simulation of time dynamics in Fermi-Hubbard models with imaginary interactions
arXiv:2601.12368v1 Announce Type: new Abstract: Using a map between the Lindbladian evolution of dephasing in free fermions and the time evolution of imaginary-interaction Fermi-Hubbard models in bipartite lattices, we present an efficient classical algorithm to solve the Schr\"{o}dinger equation in these interacting systems. This algorithm leverages the recently discovered algorithm for simulating Lindbladian evolution by sampling mixed unitary channels (Wang et al arXiv:2601.06298). We comment on the expected classical complexity of the problem for general complex values of the parameters and discuss some applications.
https://arxiv.org/abs/2601.12368
Academic Papers
svg
e747724472edba401ffb65c5bf31f7d66ccd78ef856bb64b26154917a9765d16
2026-01-21T00:00:00-05:00
Operator delocalization in disordered spin chains via exact MPO marginals
arXiv:2601.12446v1 Announce Type: new Abstract: We investigate operator delocalization in disordered one-dimensional spin chains by introducing -- besides the already known operator mass -- a complementary measure of operator complexity: the operator length. Like the operator nonstabilizerness, both these quantities are defined from the expansion of time-evolved operators in the Pauli basis. They characterize, respectively, the number of sites on which an operator acts nontrivially and the spatial extent of its support. We show that both the operator mass and length can be computed efficiently and exactly within a matrix-product-operator (MPO) framework, providing direct access to their full probability distributions, without resorting to stochastic sampling. Applying this approach to the disordered XXZ spin-1/2 chain, we find sharply distinct behaviors in non-interacting and interacting regimes. In the Anderson-localized case, operator mass, length, and operator entanglement entropy rapidly saturate, signaling the absence of scrambling. By contrast, in the many-body localized (MBL) regime, for arbitrarily weak interactions, all quantities exhibit a robust logarithmic growth in time, consistent with the known logarithmic light cone of quantum-correlation propagation in MBL. We demonstrate that this behavior is quantitatively captured by an effective $\ell$-bit model and persists across system sizes accessible via tensor-network simulations.
https://arxiv.org/abs/2601.12446
Academic Papers
svg
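The Pauli-basis quantities in the abstract above can be illustrated by brute force for small systems. A minimal numpy sketch of one natural reading of the operator mass, the average number of non-identity sites weighted by squared Pauli coefficients; the exact definitions are in the paper, and the MPO approach it describes scales far better than this exponential-cost check. Function names are illustrative.

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_weights(O, n):
    """Normalized weights |c_P|^2 of an n-qubit operator O over Pauli
    strings P, with coefficients c_P = Tr(P O) / 2^n."""
    weights = {}
    for labels in product("IXYZ", repeat=n):
        P = PAULIS[labels[0]]
        for l in labels[1:]:
            P = np.kron(P, PAULIS[l])
        c = np.trace(P @ O) / 2**n
        weights["".join(labels)] = abs(c) ** 2
    norm = sum(weights.values())
    return {k: v / norm for k, v in weights.items()}

def operator_mass(O, n):
    """Average number of sites on which O acts nontrivially."""
    w = pauli_weights(O, n)
    return sum(wp * sum(1 for s in P if s != "I") for P, wp in w.items())

# Example: X on site 0 of a 2-qubit chain acts on exactly one site
X0 = np.kron(PAULIS["X"], PAULIS["I"])
print(operator_mass(X0, 2))  # 1.0
```

The operator length would follow the same pattern, replacing the count of non-identity sites with the distance between the first and last non-identity site of each string.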
e14e758451340d39b390664efa1ca83087bc32a4a4380b98d242303e2145aa08
2026-01-21T00:00:00-05:00
Stochastic Quantum Information Geometry and Speed Limits at the Trajectory Level
arXiv:2601.12475v1 Announce Type: new Abstract: Standard quantum metrology relies on ensemble-averaged quantities, such as the Quantum Fisher Information (QFI), which often mask the fluctuations inherent to single-shot realizations. In this work, we bridge the gap between quantum information geometry and stochastic thermodynamics by introducing the Conditional Quantum Fisher Information (CQFI). Defined via the Symmetric Logarithmic Derivative, the CQFI generalizes the classical stochastic Fisher information to the quantum domain. We demonstrate that the CQFI admits a decomposition into incoherent (population) and coherent (basis rotation) contributions, augmented by a transient interference cross-term absent at the ensemble level. Crucially, we show that this cross-term can be negative, signaling destructive interference between classical and quantum information channels along individual trajectories. Leveraging this framework, we construct a stochastic information geometry that defines thermodynamic length and action for single quantum trajectories. Finally, we derive fundamental quantum speed limits valid at the single-trajectory level and validate our results using the quantum jump unraveling of a driven thermal qubit.
https://arxiv.org/abs/2601.12475
Academic Papers
svg
7ab12634ec9270949fc25da4f2e839d992f2707363e3aded70e85223f1359097
2026-01-21T00:00:00-05:00
Coherence Scaling in Quantum Communication Protocols
arXiv:2601.12516v1 Announce Type: new Abstract: We investigate how quantum coherence scales and is redistributed in quantum communication protocols, using superdense coding and quantum teleportation as paradigmatic case studies. Employing the relative entropy of coherence as a circuit-level resource measure, we show that multipartite resource states relevant to generalized superdense coding can enable scalable communication while exhibiting only logarithmic or even constant coherence growth, depending on their entanglement structure. In sharp contrast, quantum teleportation displays an unavoidable, protocol-induced coherence cost that grows linearly with the number of teleported qubits and is independent of the input state. Through a stage-resolved analysis of the teleportation circuit, we separate protocol-generated coherence from message-dependent contributions and identify a universal two-bit coherence offset per teleported qubit at the maximal-coherence stage. We further demonstrate explicitly that this extensive intermediate coherence generation is fully consistent with information-theoretic bounds, including the Holevo limit, and does not correspond to an increase in accessible classical information.
https://arxiv.org/abs/2601.12516
Academic Papers
svg
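The circuit-level resource measure used in the abstract above, the relative entropy of coherence, has a simple closed form: C(rho) = S(diag(rho)) - S(rho), where diag dephases rho in the reference basis. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                 # drop numerical zeros
    return float(-np.sum(w * np.log2(w)))

def rel_entropy_coherence(rho):
    """Relative entropy of coherence in the computational basis:
    C(rho) = S(diag(rho)) - S(rho)."""
    dephased = np.diag(np.diag(rho))
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

# |+> = (|0> + |1>)/sqrt(2) carries one bit of coherence
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print(rel_entropy_coherence(rho))  # approximately 1.0
```

A maximally coherent n-qubit state gives n bits under this measure, which is the kind of bookkeeping behind the logarithmic-versus-linear scaling contrast drawn in the abstract.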
632ce8e8906a692f7657932e2d53aa374cb822442982981e3c14db25a74a3b1b
2026-01-21T00:00:00-05:00
Interpolation of unitaries with time-dependent Hamiltonians via Deep Learning
arXiv:2601.12619v1 Announce Type: new Abstract: Quantum systems governed by time-dependent Hamiltonians pose significant challenges for the accurate computation of unitary time-evolution operators, which are essential for predicting quantum state dynamics. In this work, we introduce a physics-informed deep learning approach based on Physics-Informed Neural Networks to estimate these operators over the full time domain. By incorporating physical constraints such as unitarity and leveraging the second-order Magnus expansion on the evolution operator, the proposed framework enables the estimation of unitary matrices at different time intervals. The model is trained using simulated unitary operators and evaluated on quantum systems ranging from 2 to 6 qubits. For larger many-body systems, specifically those with 7 and 8 qubits, the same methodology is employed to reconstruct an effective time-dependent Hamiltonian, from which the corresponding time-evolution operator is computed over the entire temporal domain. The proposed framework achieves fidelities exceeding 0.92 using a limited number of unitary samples, indicating a potential reduction in measurement and data acquisition costs. These results highlight the effectiveness of the approach for data-driven simulation and identification of quantum dynamical systems, with direct relevance to quantum computing and quantum simulation applications.
https://arxiv.org/abs/2601.12619
Academic Papers
svg
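The second-order Magnus expansion leveraged in the abstract above can be sketched directly. This is a classical numerical illustration, not the paper's neural-network method; the example Hamiltonian is invented for the demo and the function names are mine. It uses U(T) ~ exp(Omega1 + Omega2) with Omega1 = -i * int_0^T H(t) dt and Omega2 = -(1/2) * int_0^T dt1 int_0^{t1} dt2 [H(t1), H(t2)].

```python
import numpy as np

def expm_antihermitian(Omega):
    """exp(Omega) for anti-Hermitian Omega, via the eigensystem of the
    Hermitian matrix K = i*Omega (so exp(Omega) = exp(-iK))."""
    w, V = np.linalg.eigh(1j * Omega)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def magnus2_unitary(H, T, steps=400):
    """Second-order Magnus approximation to the evolution operator of a
    time-dependent Hamiltonian H(t), with integrals done by midpoint sums."""
    dt = T / steps
    ts = (np.arange(steps) + 0.5) * dt
    Hs = [H(t) for t in ts]
    O1 = -1j * sum(Hs) * dt
    O2 = np.zeros_like(O1)
    for i in range(steps):
        for j in range(i):
            C = Hs[i] @ Hs[j] - Hs[j] @ Hs[i]   # [H(t1), H(t2)]
            O2 += -0.5 * C * dt * dt
    return expm_antihermitian(O1 + O2)

# Driven qubit (illustrative, not from the paper): H(t) = cos(t) sx + 0.3 sz
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U = magnus2_unitary(lambda t: np.cos(t) * sx + 0.3 * sz, T=1.0)
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: the Magnus ansatz preserves unitarity
```

Because the truncated Magnus generator is anti-Hermitian, the approximate propagator is exactly unitary, which is the structural constraint the abstract builds into its physics-informed training.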
ca44e9fe6750e1a43de50a66b14ca333eb6960c97384fe19f548fde2c25f922b
2026-01-21T00:00:00-05:00
Equation-Free Discovery of Open Quantum Systems via Paraconsistent Neural Networks
arXiv:2601.12635v1 Announce Type: new Abstract: Modeling the dynamics of open quantum systems on noisy intermediate-scale quantum (NISQ) devices constitutes a major challenge, as high noise levels and environmental degradation lead to the decay of pure quantum states (decoherence) and energy losses. This situation represents one of the most important problems in the field of quantum information technologies. While existing data-driven methods struggle to generalize beyond the training data (extrapolation), physics-informed neural networks (PINNs) require predefined governing equations, which limit their discovery capability when the underlying physics is incomplete or unknown. In this work, we present the ParaQNN (ParaQuantum neural network) architecture, an equation-free framework for physical discovery. ParaQNN disentangles multi-scale dynamics without relying on a priori laws by employing a dialetheist logic layer that models coherent signal and decoherent noise as independent yet interacting channels. Through extensive benchmark tests performed on Rabi oscillations, Lindblad dynamics, and particularly complex ``mixed regimes'' where relaxation and dephasing processes compete, we show that ParaQNN exhibits a consistent performance advantage compared to Random Forest, XGBoost, and PINN models with incomplete physical information. Unlike its competitors, ParaQNN succeeds in maintaining oscillatory and damping dynamics with high accuracy even in extrapolation regions where training data are unavailable, by ``discovering'' the underlying structural invariants from noisy measurements. These results demonstrate that paraconsistent logic provides a structurally more stable epistemic foundation than classical methods for learning quantum behavior in situations where mathematical equations prove insufficient.
https://arxiv.org/abs/2601.12635
Academic Papers
svg
a3f81b1d1b23371ad879ae1eb45d7a33f30862c03ad78ed273a05939a9704d77
2026-01-21T00:00:00-05:00
Learning at the Edge of Causality: Optimal Learning-Sample Complexity from No-Signaling Constraints
arXiv:2601.12651v1 Announce Type: new Abstract: What ultimately fixes the sample cost of quantum learning -- algorithmic ingenuity or physical law? We study this question in an arena where computation, learning, and causality collide. A twist on Grover's search that reflects about an a priori unknown state can collapse the query complexity from $O(\sqrt{N})$ to $O(\log N)$ over a search space $N$, i.e., an exponential speedup. Yet, standard quantum theory forbids such an unknown-state reflection (no-reflection theorem). We therefore build a state-learning-assisted architecture, called ``amplify-learn,'' which alternates the coherent amplitude amplification with state learning. Embedding this amplify-learn into the Bao-Bouland-Jordan no-signaling framework, we show that the logarithmic-round dream would open a super-luminal communication channel unless each round expends the learning-sample and reflection-circuit budgets scaling at least as $\Omega(\sqrt{N}/\log N)$. In parallel, we derive tight computational learning-theoretic sample bounds for learning circuit-generated pure states, revealing a state-universal ansatz ``lock'' at order $N$ in the worst case. The striking conclusion is that no-signaling does not merely veto the unphysical primitive, but it fixes the only consistent reflection-circuit complexity, and feeding this causality-enforced complexity into the computational learning bound makes it collapse onto the very same $\sqrt{N}/\log N$ scaling demanded by no-signaling alone. No-signaling thus acts as a regulator of learnability: a constraint that mediates between physics and computation, welding query, gate, and sample complexities into a single causality-compatible triangle.
https://arxiv.org/abs/2601.12651
Academic Papers
svg
8edd6daae05412d65d27bbf76d0328630d475d8b30f8dfe175a011a79a61dcd6
2026-01-21T00:00:00-05:00
Constructing the Hamiltonian for a free 1D KFGM particle in an interval
arXiv:2601.12739v1 Announce Type: new Abstract: We analyze the problem of a free 1D Klein-Fock-Gordon-Majorana (KFGM) particle in an interval. By free, we mean that there is no potential within the interval and that its walls are penetrable; hence, the pertinent energy current density does not vanish at the walls. Certainly, quantization in an interval is not trivial because certain restrictions imposed by the domains of the operators involved arise. Here, our objective is to obtain the Hamiltonian for these particles. In practice, the Feshbach-Villars (FV)--free Hamiltonian is the proper operator for characterizing them and is a function of the momentum operator. Additionally, a Majorana condition must also be imposed on the wavefunctions on which these two operators can act. Thus, we start by calculating the pseudo self-adjoint momentum operator. A three-parameter set of boundary conditions (BCs) constitutes its domain. Up to this point, the domain of the Hamiltonian is induced by the domain of the momentum operator; however, we ensure that only the BCs for which the energy current density has the same value at each end of the interval are in its domain. All these BCs essentially belong to a one-parameter set of BCs. Moreover, because the FV equation is invariant under the operation of parity, the parity-transformed wavefunction is also a solution of this equation, which further restricts the domain of the free FV Hamiltonian. Finally, knowing the most general three-parameter set of BCs for the pseudo self-adjoint FV Hamiltonian for a 1D KFGM particle in an interval, we find that only two BCs can remain within the domain of the FV--free Hamiltonian: the periodic BC and the antiperiodic BC. These BCs are satisfied by both the two-component FV wavefunction, with these components being related, and the one-component KFG wavefunction, which can be real or imaginary.
https://arxiv.org/abs/2601.12739
Academic Papers
svg
1e671d906904abbf3b160f5efcf65114e37f51027abc25f9ac0248d551b233ba
2026-01-21T00:00:00-05:00
Connecting Magic Dynamics in Thermofield Double States to Spectral Form Factors
arXiv:2601.12787v1 Announce Type: new Abstract: Under unitary evolution, chaotic quantum systems initialized in simple states rapidly develop high complexity, precluding any efficient classical description. Quantum chaos is traditionally characterized by spectral properties of the Hamiltonian, most notably through the spectral form factor, while the hardness of classical simulation within the stabilizer formalism, commonly referred to as quantum magic, can be quantified by the stabilizer R\'enyi entropy. In this Letter, we propose a relation between the dynamics of the stabilizer R\'enyi entropy for thermofield double states and the spectral form factor, based on general arguments for chaotic systems with all-to-all interactions. This relation implies that the saturation of the stabilizer R\'enyi entropy is governed by a first-order dynamical transition. We then demonstrate this relation explicitly in the Sachdev-Ye-Kitaev model, using an auxiliary-spin representation of the stabilizer R\'enyi entropy that exhibits an emergent $Z_2$ symmetry. We further find that, in the high-temperature regime of the SYK model, the transition occurs at a finite time, with the long-time phase marked by spontaneous $Z_2$ symmetry breaking. In contrast, at low temperatures, the transition is pushed to times exponentially long in the system size. Our results reveal an intriguing interplay between quantum chaos and quantum magic.
https://arxiv.org/abs/2601.12787
Academic Papers
svg
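The spectral form factor at the center of the proposed relation above is easy to compute from a spectrum. A minimal numpy sketch, where a random symmetric matrix stands in for a chaotic Hamiltonian; this is not the SYK computation from the paper, and the function name is illustrative.

```python
import numpy as np

def spectral_form_factor(eigvals, times):
    """SFF(t) = |sum_n exp(-i E_n t)|^2 / D^2 for a spectrum {E_n}
    of a Hamiltonian with Hilbert-space dimension D."""
    D = len(eigvals)
    phases = np.exp(-1j * np.outer(times, eigvals))  # shape (T, D)
    return np.abs(phases.sum(axis=1)) ** 2 / D**2

# Random GOE-like spectrum as a stand-in for a chaotic Hamiltonian
rng = np.random.default_rng(1)
H = rng.normal(size=(64, 64))
H = (H + H.T) / 2
E = np.linalg.eigvalsh(H)
t = np.linspace(0.0, 50.0, 200)
sff = spectral_form_factor(E, t)
print(sff[0])  # 1.0 by construction at t = 0
```

The characteristic slope-dip-ramp-plateau shape of this curve for chaotic spectra is the spectral side of the correspondence with stabilizer Renyi entropy dynamics proposed in the abstract.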
6b973007e64eac93d07f5e9e9a80be82b4f28e968f2aa8c6df0fbc78c1811e34
2026-01-21T00:00:00-05:00
Revealing the non-classicality of a molecular nanomagnet
arXiv:2601.12832v1 Announce Type: new Abstract: Molecular nanomagnets are compounds characterized by a high-spin magnetic core that is protected by organic ligands. They have recently gained attention as potential quantum information carriers in solid-state quantum computing platforms, simultaneously exhibiting classical macroscopic properties and quantum features in light of their complex nature and configuration. Addressing the condition when they manifest unquestionable quantum behavior is key to guarantee their effectiveness as resources for quantum information processing. We address the quantumness of molecular nanomagnets using a recently formulated criterion [cf. Krisnanda et al., Phys. Rev. Lett. 119, 120402 (2017)] demonstrating that these systems exhibit an intrinsic quantum nature, as evidenced by their ability to generate and enhance quantum correlations between two non-interacting probes. Our analysis, which is performed addressing various dynamical regimes, paves the way to the design of experimentally viable tests of non-classicality in multipartite registers consisting of ensembles of molecular nanomagnets.
https://arxiv.org/abs/2601.12832
Academic Papers
svg
5dbc190106e0f51266cf350983f24dc2d2457fa77e977b4d77ba6aa77d22de56
2026-01-21T00:00:00-05:00
Nonreciprocity of intense light field and weak quantum signal in optomechanical systems with three-mode parametric interactions
arXiv:2601.12855v1 Announce Type: new Abstract: We demonstrate nonreciprocal optical transmission for both intense classical fields and weak quantum signals within a reconfigurable optomechanical platform driven by three-mode parametric interactions. The platform is modular, where each three-mode optomechanical system serves as a fundamental building block. Operating independently, a single block achieves nonreciprocity for classical fields. Specifically, asymmetric radiation pressure from intrinsic optomechanical nonlinearity induces nonreciprocal mechanical displacement, modulating the cavity intensity through optomechanical feedback. This enables full isolation of backward transmission without requiring parameter initialization. Alternatively, for quantum signals, the platform is reconfigured by activating photonic and phononic exchange channels between the two blocks. In this configuration, nonreciprocity arises from quantum interference between direct photon hopping and indirect conversion pathways. Constructive interference enables unidirectional low-loss transmission, while destructive interference completely suppresses the reverse direction. After adiabatically eliminating the auxiliary modes, the optimal nonreciprocal frequency and the trade-off between insertion loss and nonreciprocal bandwidth can be controlled by engineering optomechanically induced mechanical dissipation. Additionally, the three-mode-based device requires less control-field power than two-mode systems under resolved-sideband conditions, demonstrating versatile potential for optical nonreciprocity applications across classical and quantum domains.
https://arxiv.org/abs/2601.12855
Academic Papers
svg
bc0e705ae92f10706f88de80253c4c99968e0d4a25c83ee7e6b4e292fb6e6bea
2026-01-21T00:00:00-05:00
Exact dynamics and bound states of a cavity coupled to a two-dimensional reservoir
arXiv:2601.12880v1 Announce Type: new Abstract: We demonstrate a robust scheme for quantum information storage based on bound states in a two-dimensional coupled-cavity array. When a target cavity is tuned to resonance with the array, a bound state in the continuum (BIC) emerges, coexisting with two conventional bound states outside the band. The resulting dynamics reflects a delicate interplay between these bound states, which can be fully captured through exact analytical solutions. In the weak-coupling regime, the BIC dominates, enabling perfect and persistent information storage. At stronger coupling, all bound states contribute, leading to oscillatory behavior and reduced storage fidelity. These results, valid at both zero and finite reservoir temperatures and further supported by a single-particle framework, reveal distinctive non-Markovian features in continuous-variable systems and highlight the potential of photonic lattices for scalable all-optical decoherence-free quantum memory platforms.
https://arxiv.org/abs/2601.12880
Academic Papers
svg
4dce7d591ab1e519a03f2f3ce1ec5592ba349b689d79e6bec3d03b50705250e9
2026-01-21T00:00:00-05:00
No-Signalling Fixes the Hilbert-Space Inner Product
arXiv:2601.13012v1 Announce Type: new Abstract: We investigate whether the inner product structure of quantum mechanics can be modified without violating fundamental physical principles. We consider a generalized inner product defined by a positive operator and assume local unitary dynamics, the existence of entangled states, and the no-signalling principle. We show that any nontrivial choice of inner product different from the standard one inevitably leads to superluminal signalling, in contradiction with relativistic causality. Therefore, the standard Hilbert-space inner product is uniquely enforced by no-signalling.
https://arxiv.org/abs/2601.13012
Academic Papers
svg
2343ed0b1e71e9191c5a2004c17c0e006d522c0d22f2b91c45bb072e2f94b253
2026-01-21T00:00:00-05:00
Quantitative wave-particle duality in uniform multipath interferometers with symmetric which-path detector states
arXiv:2601.13083v1 Announce Type: new Abstract: A quantum system (quanton) traverses an interferometer with $N$ equally probable paths and interacts with another quantum system (detector) that stores path information in a set of symmetric states. In this interferometric framework, we present entropic wave-particle duality relations between quantum coherence, characterized by the relative entropy of coherence of the quanton state, and which-path knowledge, quantified by the mutual information obtained through detector-state discrimination. By applying a general optimal discrimination measurement, which has a closed-form solution and encompasses other fundamental strategies as special cases, we provide an exact quantification of which-path knowledge in a variety of scenarios. This measurement is carried out in two steps. First, an optimal separation map with a prescribed separation level $\xi\in [0,1]$ probabilistically reduces the overlaps between the input detector states with maximum success rate, or increases them in case of failure. Then, a minimum-error (ME) measurement discriminates either only the successful outputs (standard approach) or both the successful and failure outputs (concatenated approach). We show that the duality relation is tighter at $\xi=0$, where both approaches reduce to the ME measurement. For $\xi>0$, each approach yields a distinct relation that becomes less tight as $\xi$ increases, with the concatenated one providing the tighter bound. Finally, by using the discrete uncertainty principle, we determine the sets of detector states that lead to saturation of the duality relation, showing that they span $n$-dimensional subspaces of the detector space, where $n$ divides $N$. As a result, nontrivial saturation occurs only for interferometers with a nonprime number of paths. From the identified saturating sets, we highlight how the quanton-detector correlations underlie this phenomenon.
https://arxiv.org/abs/2601.13083
Academic Papers
svg
12c366560b3ce99396ea22d215d2890de0dbd86f712773b0060fcbaecb8750d1
2026-01-21T00:00:00-05:00
Product-State Approximation Algorithms for the Transverse Field Ising Model
arXiv:2601.13106v1 Announce Type: new Abstract: We study classical polynomial-time approximation algorithms for the transverse-field Ising model (TFIM) Hamiltonian, allowing a mixture of ferromagnetic and anti-ferromagnetic interactions between pairs of qubits, alongside transverse field terms with arbitrary non-negative weights. Our main results are a series of approximation algorithms (all approximation ratios with respect to the true quantum optimum): (i) a simple maximum-of-two-product-states rounding algorithm achieving an approximation ratio $\gamma\approx 0.71$, (ii) a strengthened rounding, inspired by the anticommutation property of the two observables $X_i, Z_iZ_j$, achieving ratio $\gamma\approx 0.7860$, and (iii) a further improvement by interpolation achieving ratio $\gamma \approx 0.8156$. We also give an explicit (purely ferromagnetic) TFIM instance on three qubits for which every product state achieves at most $169/180\approx 0.9389$ of the true optimum, yielding an upper bound for all algorithms producing product-state approximations, even in the purely ferromagnetic case.
https://arxiv.org/abs/2601.13106
Academic Papers
svg
90d0dafa16ea200c33c0018239d1b1693e12778f0f70232b717c6550d46d9aa6
2026-01-21T00:00:00-05:00
Quantum Data Structure for Range Minimum Query
arXiv:2601.13195v1 Announce Type: new Abstract: Given an array $a[1..n]$, the Range Minimum Query (RMQ) problem is to maintain a data structure that supports RMQ queries: given a range $[l, r]$, find the index of the minimum element among $a[l..r]$, i.e., $\operatorname{argmin}_{i \in [l, r]} a[i]$. In this paper, we propose a quantum data structure that supports RMQ queries and range updates, with an optimal time complexity $\widetilde \Theta(\sqrt{nq})$ for performing $q = O(n)$ operations without preprocessing, compared to the classical $\widetilde\Theta(n+q)$. As an application, we obtain a time-efficient quantum algorithm for $k$-minimum finding without the use of quantum random access memory.
https://arxiv.org/abs/2601.13195
Academic Papers
svg
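For context on the classical problem that the RMQ abstract above targets, a standard classical solution for the static case — a sparse table with $O(n \log n)$ preprocessing and $O(1)$ queries — can be sketched as follows. This illustrates only the classical baseline, not the paper's quantum data structure, whose advantage appears when $q = O(n)$ queries and updates are interleaved without preprocessing.

```python
def build_sparse_table(a):
    """Sparse table for static RMQ: table[j][i] holds the argmin of
    a[i : i + 2**j]. O(n log n) time and space to build."""
    n = len(a)
    table = [list(range(n))]  # windows of length 2**0 = 1
    j = 1
    while (1 << j) <= n:
        prev, half = table[j - 1], 1 << (j - 1)
        row = []
        for i in range(n - (1 << j) + 1):
            left, right = prev[i], prev[i + half]
            row.append(left if a[left] <= a[right] else right)
        table.append(row)
        j += 1
    return table

def rmq(a, table, l, r):
    """Index of the minimum of a[l..r] (inclusive, 0-indexed) in O(1):
    cover the range with two overlapping power-of-two windows."""
    j = (r - l + 1).bit_length() - 1
    left, right = table[j][l], table[j][r - (1 << j) + 1]
    return left if a[left] <= a[right] else right
```

For example, with `a = [5, 2, 4, 1, 3]`, `rmq(a, table, 0, 4)` returns the index of the global minimum `a[3] = 1`.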
40c3d6d1226c030c197c12d4640c5ab14b032c1d17bb3221d3628ffae62022b3
2026-01-21T00:00:00-05:00
All-Dielectric Resonant Cavity Electro-Optic Transduction Between Microwave and Telecom
arXiv:2601.13199v1 Announce Type: new Abstract: We present a resonant electro-optic transducer for efficient conversion between microwave and telecom wavelength photons. Our platform employs a bulk lithium niobate crystal whose large dielectric constant creates wavelength-scale confinement of microwave photons. By incorporating this crystal within a high-finesse Fabry-Perot optical cavity, microwave photons couple to optical photons through the electro-optic effect. We demonstrate the ability to tune our system into triply resonant operation, where microwave photons, optical pump photons, and upconverted optical photons are simultaneously resonant with high quality factor electromagnetic modes of the system. The device achieves photon number conversion efficiency at the percent level, comparable to state-of-the-art devices at room temperature -- sufficient to resolve the thermal occupation of the microwave mode -- while avoiding the noise and loss associated with metal electrodes. These results establish our all-dielectric devices as a promising platform for high-precision sensing of optically detected microwave fields and as a viable route toward single-photon-level microwave-optical quantum transduction.
https://arxiv.org/abs/2601.13199
Academic Papers
svg
9a205d796f1ec221c28d65d099101986936aa655778a7d9c078624415420aaf7
2026-01-21T00:00:00-05:00
Towards Simple and Useful One-Time Programs in the Quantum Random Oracle Model
arXiv:2601.13258v1 Announce Type: new Abstract: We construct simulation-secure one-time memories (OTM) in the random oracle model, and present a plausible argument for their security against quantum adversaries with bounded and adaptive depth. Our contributions include: (1) A simple scheme where we use only single-qubit Wiesner states and conjunction obfuscation (constructible from LPN): no complex entanglement or quantum cryptography is required. (2) A new POVM bound where we prove that any measurement achieving $(1 - \epsilon)$ success on one basis has conjugate-basis guessing probability at most $\frac{1}{2m} + O(\epsilon^\frac{1}{4})$. (3) Simulation-secure OTMs in the quantum random oracle model where an adversary can only query the random oracle classically. (4) Adaptive depth security where, via an informal application of a lifting theorem from Arora et al., we conjecture security against adversaries with polynomial quantum circuit depth between random oracle queries. Security against adaptive, depth-bounded, quantum adversaries captures many realistic attacks on OTMs built from single-qubit states; our work thus paves the way for practical and truly secure one-time programs. Moreover, depth-bounded adaptive adversarial models may allow for encoding one-time memories into error-corrected memory states, opening the door to implementations of one-time programs which persist for long periods of time.
https://arxiv.org/abs/2601.13258
Academic Papers
svg
d53c6c1f574c3406d2d9312aa6fcab2caaa45f36a82276047b6d1b46f0d3ba92
2026-01-21T00:00:00-05:00
Microscopic Quantum Friction
arXiv:2601.13265v1 Announce Type: new Abstract: We report on a microscopic theory of quantum friction. Our approach investigates the interplay between the dispersive response and the relative center-of-mass motion of two ground-state atoms. This coupling yields a quantum force, which can be expressed as a power series in the velocity. The significance of each contribution depends on its order parity: while even-order terms are reversible, odd-order terms are irreversible and only survive in the presence of an internal dissipation mechanism. In addition, we obtain general, model-independent properties for the work performed by these contributions for arbitrary scattering trajectories. These results enable an unambiguous identification of odd-parity terms with microscopic quantum friction. At room temperature, the dominant microscopic quantum friction is of first order in the velocity and presents a strong quantum character. Our microscopic theory reveals that several properties of quantum friction obtained in specific settings -- such as the cubic dependence on velocity at zero temperature -- are indeed universal features already present at the atomic scale.
https://arxiv.org/abs/2601.13265
Academic Papers
svg
6da25d18e776a0da478f0743194d75697c06492d436e361540d1622988639892
2026-01-21T00:00:00-05:00
Implementation of Leaking Quantum Walks on a Photonic Processor
arXiv:2601.13269v1 Announce Type: new Abstract: Quantum walks (QWs) represent pillars of quantum dynamics and information processing. They provide a powerful framework for simulating quantum transport, designing search algorithms, and achieving universal quantum computation. Several physical platforms have been employed to implement QWs, such as trapped atoms, trapped ions, nuclear magnetic resonance systems, and photonic quantum systems, either in bulk optics or in waveguide structures and fiber-loop networks. Here we focus on the most promising approach, that is, photonic integrated circuits. We review how this versatile experimental platform has allowed the exploration of several phenomena related to QW-based protocols, e.g., the evolution in the presence of different kinds of noise. In this landscape, to the best of our knowledge, few examples report on the introduction of absorbing centers and their effects on the coherence of the dynamics. Here we present and discuss the results related to absorbing boundaries in QWs obtained through theoretical simulations and experiments conducted with the universal photonic quantum processors realized by Quix Quantum.
https://arxiv.org/abs/2601.13269
Academic Papers
svg
f6c09540ff1bc5b666efc49df72721a87e0aa1608a5418573bbc9b36acdd230c
2026-01-21T00:00:00-05:00
Rethinking Quantum Noise in Quantum Machine Learning: When Noise Improves Learning
arXiv:2601.13275v1 Announce Type: new Abstract: Quantum noise is conventionally viewed as a fundamental obstacle in near-term quantum computing, motivating extensive error correction and mitigation strategies. We present numerical evidence that challenges this consensus. Through experiments on quantum graph neural networks for molecular property prediction, we discover that quantum noise induces heterogeneous, initialization-dependent responses. Among randomly initialized models with identical architecture, approximately one-third show performance improvement under moderate noise, while a smaller fraction deteriorate and the remainder are marginally affected. We identify a strong negative correlation ($r = -0.62$) between baseline model performance and noise benefit, suggesting that noise acts as an implicit regularizer for under-optimized models while disrupting well-converged ones. The observed optimal noise level falls below theoretical predictions, indicating error cancellation in structured quantum circuits. These findings demonstrate that quantum noise effects depend critically on initialization quality and need not be uniformly detrimental, suggesting a shift from universal noise mitigation toward structure- and noise-aware optimization strategies.
https://arxiv.org/abs/2601.13275
Academic Papers
svg
b27ba9fe83c84e29847927c2d14bae5d14a6836384203a22bc446f9aae553c8d
2026-01-21T00:00:00-05:00
Quantum eigenvalues and eigenfunctions of an electron confined between conducting planes
arXiv:2601.13278v1 Announce Type: new Abstract: Two of the most iconic systems of quantum physics are the particle in a box and the Coulomb potential (the third is, of course, the harmonic oscillator). In this expository paper, we consider the quantum solution to the problem of an electron confined between the grounded planes of an infinite capacitor. The potential arises from the image charges that form in the grounded planes, along with the added condition that at x = 0, L, where L is the distance between the planes, the wavefunction must be zero. This effectively couples a hydrogen-like system to a particle-in-a-box (PIB) of width L. The problem of finding the electrostatic potential of this infinite series of image charges is an old one, going back to at least 1929. Here, we give a short derivation for one of the limiting cases that yields a compact expression and show how the Kellogg infinite summation formula converges to that value. We note here that this potential is a symmetric double-well potential, so there will be many familiar properties of its solutions. Then, using that potential, we solve Schr\"odinger's equation using a spectral technique. The limiting forms of a particle in a box for small L (and high E), and that of a (degenerate) bound image charge at large L and small energy, are recovered. We also discuss the tunneling level splitting that occurs in the transition from the large-L to the small-L regime.
https://arxiv.org/abs/2601.13278
Academic Papers
svg
2817bf6d679c61aba400af18ceeb05ed93125eca095033b57c9bc33ed5d16a04
2026-01-21T00:00:00-05:00
Synthesis of Fault-tolerant State Preparation Circuits using Steane-type Error Detection
arXiv:2601.13313v1 Announce Type: new Abstract: Fault-tolerant state preparation is essential for reliable quantum error correction, particularly in Steane-type error correction, which relies on robust ancilla states for syndrome readout. One method of fault-tolerant state preparation is to initialize multiple ancilla states and check them against each other to detect problematic errors. In the worst case, the number of states required for successful initialization grows polynomially with the code distance, but it has been shown that this can be reduced to a constant ancilla overhead; in the best case, only four states are required. However, existing techniques for finding low-overhead initialization schemes are limited to codes with large symmetry groups, such as the Golay code. In this work, we propose a general, automated synthesis methodology for Steane-type fault-tolerant state preparation circuits that applies to arbitrary Calderbank-Shor-Steane (CSS) codes and does not rely on code symmetries. We apply the proposed methods to various CSS codes up to a distance of seven and simulate the successful fault-tolerant initialization of logical basis states under circuit-level depolarizing noise. The circuits synthesized using the proposed methodology provide an important step towards experimental realizations of high-fidelity ancilla states for near-term demonstration of fault-tolerant quantum computation.
https://arxiv.org/abs/2601.13313
Academic Papers
svg
097f7d647cdd2c3ddd3a7fcd2a87038ea3442c6484648eca679e70fd9d8cf406
2026-01-21T00:00:00-05:00
Quantum Circuit Pruning: Improving Fidelity via Compilation-Aware Circuit Approximation
arXiv:2601.13322v1 Announce Type: new Abstract: This work presents a routing-aware pruning strategy for quantum circuits executed on Noisy Intermediate-Scale Quantum (NISQ) devices. We propose a method to remove parametric controlled rotations whose small rotation angles do not justify the routing overhead required for their implementation. By selectively pruning such gates, the method mitigates fidelity loss arising from additional SWAP operations introduced during compilation. Our approach evaluates whether executing a gate leads to greater fidelity loss than omitting it. Simulations on benchmark circuits with realistic noise models show that the method reduces two-qubit gate counts (up to 48.6%) while improving final state fidelity (up to 47.7%), especially for larger circuits where routing costs dominate.
https://arxiv.org/abs/2601.13322
Academic Papers
svg
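The routing-aware decision described in the pruning abstract above — keep a small-angle controlled rotation only if executing it costs less fidelity than dropping it — can be sketched with a toy cost model. The omission-error proxy, the 3-CNOTs-per-SWAP decomposition, and the gate fidelity `f_2q` below are illustrative assumptions, not the paper's exact criterion.

```python
import math

def keep_gate(theta, swap_count, f_2q=0.99):
    """Toy routing-aware pruning rule (illustrative sketch only):
    keep a controlled rotation CRz(theta) only if the approximation
    error from dropping it exceeds the fidelity cost of the two-qubit
    gates needed to execute it after routing.

    theta:      rotation angle of the controlled rotation
    swap_count: SWAPs inserted by routing to make the qubits adjacent
    f_2q:       assumed average two-qubit gate fidelity
    """
    # Simple proxy for the error of omitting CRz(theta): for small
    # angles this behaves like theta / 2.
    omission_error = abs(math.sin(theta / 2))
    # Fidelity loss from executing: one CX-equivalent for the CRz,
    # plus three CNOTs per routing SWAP.
    n_two_qubit = 1 + 3 * swap_count
    execution_error = 1 - f_2q ** n_two_qubit
    return omission_error > execution_error
```

With these assumptions, a large rotation on adjacent qubits is kept, while a tiny rotation that requires several SWAPs is pruned — mirroring the regime where routing costs dominate.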
866f41ec66647b01c1671b9adfd4624116252fa7a30f5f3046914b011aaa4460
2026-01-21T00:00:00-05:00
Polynomial-time certification of fidelity for many-body mixed states and mixed-state universality classes
arXiv:2601.13333v1 Announce Type: new Abstract: Computation of Uhlmann fidelity between many-body mixed states generally involves full diagonalization of exponentially large matrices. In this work, we introduce a polynomial-time algorithm to compute certified lower and upper bounds for the fidelity between matrix product density operators (MPDOs). Our method maps the fidelity estimation problem to a variational optimization of sequential quantum circuits, allowing for systematic improvement of the lower bounds by increasing the circuit depth. Complementarily, we obtain certified upper bounds on fidelity by variational lower bounds on the trace distance through the same framework. We demonstrate the power of this approach with two examples: fidelity correlators in critical mixed states, and codeword distinguishability in an approximate quantum error-correcting code. Remarkably, the variational lower bound accurately tracks the universal scaling behavior of the fidelity with a size-consistent relative error, allowing for the extraction of previously unknown critical exponents. Our results offer an exponential improvement in precision over known moment-based bounds and establish a scalable framework for the verification of many-body quantum systems.
https://arxiv.org/abs/2601.13333
Academic Papers
svg
3a8a7b8c2cf4c67e2b0433bad9e80ae48d8e8612462649087924b4c6c0536dfa
2026-01-21T00:00:00-05:00
Stochastic resetting induces quantum non-Markovianity
arXiv:2601.13367v1 Announce Type: new Abstract: Stochastic resetting describes dynamics which are reinitialized to a reference state at random times. These protocols are attracting significant interest: they can stabilize nonequilibrium stationary states, generate correlations in noninteracting systems, and enable optimal search strategies. While a constant reset probability results in a Markovian dynamics, much less is known about non-Markovian effects in quantum stochastic resetting. Here, we analyze memory effects in these processes -- examining the evolution of quantum states and of observables -- through witnesses of non-Markovianity for open quantum systems. We focus on discrete-time reset processes, which are of particular interest as they can be implemented on existing gate-operated quantum devices. We show that these processes are generically described by non-divisible maps and, in non-classical scenarios where the effective reset probability can become negative, can feature revivals in the state distinguishability. Our results reveal non-Markovian effects in quantum stochastic resetting and show that a time-dependent reset may be exploited to engineer enhanced stationary quantum correlations.
https://arxiv.org/abs/2601.13367
Academic Papers
svg
dec0f1ba6cd4ce8d99b22a3c96b8f6e844568730e41c750473cf8e338486ded4
2026-01-21T00:00:00-05:00
Type-I and Type-II Fusion Protocols for Weighted Graph States
arXiv:2601.13381v1 Announce Type: new Abstract: Weighted graph states extend standard graph states by associating phases with entangling edges, and may serve as resources for measurement-based quantum computation (MBQC). We analyze how the two main fusion operations, Type-I and Type-II, act on weighted graph states. Type-I fusion operates identically to the unweighted case, merging two one-dimensional weighted graphs while preserving edge weights and success probabilities. In addition, the pool of 2-qubit weighted graph states can be generated easily from GHZ states or Bell pairs. In contrast, Type-II fusion requires a logical qubit, which can be formed only for specific weight configurations, and succeeds with probability below one-half, an obstacle that can be circumvented. When successful, it fuses the states correctly, but its failure outcomes destroy the structure of the graphs, removing the good-failure feature known from ordinary graph states. We compute the entanglement reduction of the resulting link due to the fused states being weighted graph states (for generalized fusion), and classify the resulting states of a general non-Bell projection. These results define the practical limits of the fusion-based construction of weighted graph states for MBQC.
https://arxiv.org/abs/2601.13381
Academic Papers
svg
c3dc95f8da370975dad9a203f88854bdb619e74302555e1726d38de39055d450
2026-01-21T00:00:00-05:00
Precise estimation of the coupling strength between two nanomechanical modes from four Ramsey fringes
arXiv:2601.13415v1 Announce Type: new Abstract: We experimentally determine the coupling strength between two strongly coupled nanomechanical modes using a Ramsey-inspired technique optimized for signals as short as four fringes. The method is applied to precisely probe the change of the coupling rate induced by a modification of the microwave-cavity readout field. It opens a pathway towards sensing electrostatic field fluctuations approaching single-charge resolution.
https://arxiv.org/abs/2601.13415
Academic Papers
svg
a26bc23d81e6de396ecdd92021407876ba379fcd807f2275c4b54f4db70f0c62
2026-01-21T00:00:00-05:00
Symmetric Informationally Complete Positive Operator Valued Measure and Zauner conjecture
arXiv:2601.13475v1 Announce Type: new Abstract: In this paper, we show that in a Hilbert space of any finite dimension N, there are N^2 pure states which constitute a Symmetric Informationally Complete Positive Operator Valued Measure (SIC-POVM).
https://arxiv.org/abs/2601.13475
Academic Papers
svg
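The SIC-POVM claim above is easy to verify numerically in the smallest case: for d = 2, the N^2 = 4 states have Bloch vectors at the vertices of a regular tetrahedron. The sketch below uses this standard construction (independent of the paper's proof) and checks the defining SIC conditions.

```python
import numpy as np

def qubit_sic_projectors():
    """Rank-1 projectors onto the four states of the qubit (d = 2)
    SIC-POVM: Bloch vectors at the vertices of a regular tetrahedron.
    Each projector is (I + n . sigma) / 2 for a unit Bloch vector n."""
    s = np.sqrt(2.0)
    bloch = [
        (0.0, 0.0, 1.0),
        (2 * s / 3, 0.0, -1.0 / 3),
        (-s / 3, np.sqrt(2.0 / 3.0), -1.0 / 3),
        (-s / 3, -np.sqrt(2.0 / 3.0), -1.0 / 3),
    ]
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    eye = np.eye(2, dtype=complex)
    return [(eye + x * sx + y * sy + z * sz) / 2 for x, y, z in bloch]
```

The POVM elements are $E_i = \tfrac{1}{d}|\psi_i\rangle\langle\psi_i|$: since the four Bloch vectors sum to zero, the $E_i$ sum to the identity, and all pairwise overlaps $|\langle\psi_i|\psi_j\rangle|^2$ equal $1/(d+1) = 1/3$.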
bb2aa9aa93ca6bb80f3bc0ca70a3883e34b99384933cca755abe4fe328589027
2026-01-21T00:00:00-05:00
Confined non-Hermitian skin effect in a semi-infinite Fock-state lattice
arXiv:2601.13540v1 Announce Type: new Abstract: In this paper, we investigate the non-Hermitian skin effect in a semi-infinite Fock-state lattice, where the inherent coupling scales as $\sqrt{n}$. By analytically solving a non-uniform, non-reciprocal SSH model, we demonstrate that the intrinsic inhomogeneous coupling, in combination with nonreciprocity, fundamentally modifies the conventional skin effect. Instead of accumulating at the physical boundary, all eigenmodes become compressed and skewed within a finite spatial range determined by the inhomogeneous profile, a phenomenon we term the confined non-Hermitian skin effect. Consequently, the evolution of the probability distribution on the lattice starting from a single site is doubly confined: it is spatially bounded to a finite range by the inhomogeneous coupling, and further restricted to a one-sided trajectory at the edge of this range by the non-reciprocity. Moreover, a feasible experimental scheme based on a single trapped ion is proposed. This work reveals how engineered coupling profiles in synthetic dimensions can reshape non-Hermitian properties and enable new protocols for quantum state manipulation.
https://arxiv.org/abs/2601.13540
Academic Papers
svg
74ff44781c5b4ef8ebffbedc5282e74c8f5d5581e03f2d57259bbacead02cb4d
2026-01-21T00:00:00-05:00
Fundamental Limits of Continuous Gaussian Quantum Metrology
arXiv:2601.13554v1 Announce Type: new Abstract: Continuous quantum metrology holds promise for realizing high-precision sensing by harnessing information progressively carried away by the radiation quanta emitted into the environment. Despite recent progress, a comprehensive understanding of the fundamental precision limits of continuous metrology with bosonic systems is currently lacking. We develop a general theoretical framework for quantum metrology with multimode free bosons under continuous Gaussian measurements. We derive analytical expressions for the asymptotic growth rates of the global quantum Fisher information (QFI) and the environmental QFI, which quantify the total information encoded in the joint system-environment state and the information accessible from the emitted radiation, respectively. We derive fundamental bounds on these quantities, showing that while Heisenberg-type scaling with the number of modes is attainable, the precision scales at most linearly with time and a meaningful energy resource. To illustrate our findings, we analyze several concrete setups, including coupled cavity arrays and trapped particle arrays. While a local setup yields a standard linear scaling with resources, a globally coupled setup can achieve the optimal quadratic scaling in terms of the mode number. Furthermore, we demonstrate that a nonreciprocal setup can leverage the non-Hermitian skin effect to realize an exponentially enhanced global QFI. Notably, however, this enhancement cannot be reflected in the environmental QFI, highlighting a fundamental distinction between the information stored within the joint state and the information radiated into the environment. These findings establish an understanding of the resource trade-offs and scaling behaviors in continuous bosonic sensing.
https://arxiv.org/abs/2601.13554
Academic Papers
svg
eb36c6f2f957f075929927614eb0a870048060aa74a948796ac54e4f5694231f
2026-01-21T00:00:00-05:00
A scalable near-visible integrated photon-pair source for satellite quantum science
arXiv:2601.13617v1 Announce Type: new Abstract: Quantum state distribution over vast distances is essential for global-scale quantum networks and fundamental tests of quantum physics at space scales. While satellite platforms have demonstrated thousand-kilometer entanglement distribution, quantum key distribution and quantum teleportation with ground stations, future constellations and deep-space missions demand photon sources that are robust, compact, and power-efficient. Integrated photonics offers a scalable solution, yet a critical spectral gap persists. Although telecom-band integrated photon-pair sources are well established, near-visible photons offer distinct advantages for satellite-to-ground links by mitigating diffraction loss and maximizing the collection efficiency of optical telescopes. Scalable integrated sources in this regime have remained elusive due to the fundamental challenge of achieving anomalous dispersion in materials transparent at visible wavelengths. Here we bridge this gap by demonstrating an integrated near-visible photon-pair source based on a wide-bandgap, ultralow-loss, silicon nitride (Si$_3$N$_4$) microresonator. By engineering the dispersion of higher-order waveguide modes, we overcome the intrinsic normal dispersion limit to achieve efficient phase matching. The device exhibits a spectral brightness of 4.87$\times$10$^7$ pairs/s/mW$^2$/GHz and a narrow photon linewidth of 357 MHz. We report high-purity heralded single-photon generation with a heralding rate up to 2.3 MHz and a second-order correlation function as low as 0.0041. Furthermore, we observe energy-time entanglement with 98.4% interference visibility, violating the CHSH limit even at flux exceeding 40.6 million pairs/s. Combined with the proven radiation hardness of Si$_3$N$_4$, this source constitutes a flight-ready hardware foundation for daylight quantum communications and protocols requiring on-orbit multiphoton interference.
https://arxiv.org/abs/2601.13617
Academic Papers
svg
fdaf3f8dcbf654ea8c82672bb6c426c7527c75482988c122d6120f6f3948f03c
2026-01-21T00:00:00-05:00
3D Stacked Surface-Code Architecture for Measurement-Free Fault-Tolerant Quantum Error Correction
arXiv:2601.13648v1 Announce Type: new Abstract: Mid-circuit measurements are a major bottleneck for superconducting quantum processors because they are slower and noisier than gates. Measurement-free quantum error correction (MFEC) replaces repeated measurements and classical feed-forward by coherent quantum feedback, but existing MFEC protocols suffer from severe connectivity overhead when mapped to planar surface-code architectures: transversal interactions between logical patches require SWAP chains of length $O(d)$ in the code distance, which increase depth and generate hook errors. This work introduces a 3D stacked surface-code architecture for measurement-free fault-tolerant quantum error correction that removes this connectivity bottleneck. Vertical transversal couplers between aligned surface-code patches enable coherent parity mapping and feedback with zero SWAP overhead, realizing inter-layer operations of constant depth $O(1)$ in the code distance $d$ while preserving local 2D stabilizer checks. A fault-tolerant MFEC protocol for the surface code is constructed that suppresses hook errors under realistic noise. An analytical performance model shows that the 3D architecture overcomes the readout error floor and achieves logical error rates orders of magnitude below both standard measurement-based surface codes and 2D MFEC variants in regimes with slow, noisy measurements, identifying 3D integration as a key enabler for scalable measurement-free fault tolerance.
https://arxiv.org/abs/2601.13648
Academic Papers
svg
0a43950f4061f8afb10dcee866b421010752818309b046a798c4d66fa0021941
2026-01-21T00:00:00-05:00
Spectral stability of cavity-enhanced single-photon emitters in silicon
arXiv:2601.13666v1 Announce Type: new Abstract: The unrivaled maturity of its nanofabrication makes silicon a promising hardware platform for quantum information processing. To this end, efficient single-photon sources and spin-photon interfaces have been implemented by integrating color centers or erbium dopants into nanophotonic resonators. However, the optical emission frequencies in this approach are subject to temporal fluctuations on both long and short timescales, which hinders the development of quantum applications. Here, we investigate this limitation and demonstrate that it can be alleviated by integrating the emitters into Fabry-Perot instead of nanophotonic resonators. Their larger optical mode volume enables both increasing the distance to crystal surfaces and operating at a lower dopant concentration, which reduces implantation-induced crystal damage and interactions between emitters. As a result, we observe a fivefold reduction of the spectral diffusion linewidth down to 4.0(2) MHz. Calculations and experimental investigations of isotopically purified 28-Si crystals suggest that the remaining spectral instability is caused by laser-induced electric-field fluctuations. In direct comparison with a nanophotonic device, the instability is significantly reduced at the same intracavity power, enabling a tenfold increase of the optical coherence time up to 20(1) microseconds. These findings represent a key step towards spectrally stable spin-photon interfaces in silicon and their potential applications in quantum networking and distributed quantum information processing.
https://arxiv.org/abs/2601.13666
Academic Papers
svg
5f94a246cebba0fc8179964acb0e1c6aa6a8c414c58fb69a6cfb81d66e69244f
2026-01-21T00:00:00-05:00
Generative Adversarial Networks for Resource State Generation
arXiv:2601.13708v1 Announce Type: new Abstract: We introduce a physics-informed Generative Adversarial Network framework that recasts quantum resource-state generation as an inverse-design task. By embedding task-specific utility functions into training, the model learns to generate valid two-qubit states optimized for teleportation and entanglement broadcasting. Comparing decomposition-based and direct-generation architectures reveals that structural enforcement of Hermiticity, trace-one, and positivity yields higher fidelity and training stability than loss-only approaches. The framework reproduces theoretical resource boundaries for Werner-like and Bell-diagonal states with fidelities exceeding ~98%, establishing adversarial learning as a lightweight yet effective method for constraint-driven quantum-state discovery. This approach provides a scalable foundation for automated design of tailored quantum resources for information-processing applications, exemplified with teleportation and broadcasting of entanglement, and it opens up the possibility of using such states in efficient quantum network design.
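The "structural enforcement" of Hermiticity, trace-one, and positivity mentioned in the abstract can be illustrated with a Cholesky-style parameterization: any unconstrained parameter vector maps to a valid density matrix by construction. This is a generic NumPy sketch of that idea, not the paper's generator architecture; the dimension and parameter layout are illustrative choices.

```python
import numpy as np

def params_to_density_matrix(theta, dim=4):
    """Map an unconstrained real vector to a valid density matrix.

    A lower-triangular complex matrix L is filled from the parameters;
    rho = L L^dagger / tr(L L^dagger) is then Hermitian, positive
    semidefinite, and trace-one by construction.
    """
    n_real = dim * (dim + 1) // 2          # real parts of the lower triangle
    n_imag = dim * (dim - 1) // 2          # imaginary parts (off-diagonal)
    assert theta.size == n_real + n_imag
    L = np.zeros((dim, dim), dtype=complex)
    idx = 0
    for i in range(dim):
        for j in range(i + 1):
            L[i, j] += theta[idx]; idx += 1
    for i in range(dim):
        for j in range(i):
            L[i, j] += 1j * theta[idx]; idx += 1
    rho = L @ L.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
theta = rng.normal(size=16)    # 10 real + 6 imaginary parameters for dim=4
rho = params_to_density_matrix(theta)
herm_err = np.abs(rho - rho.conj().T).max()
min_eig = np.linalg.eigvalsh(rho).min()
```

Because every parameter value yields a valid state, a generator that outputs `theta` can be trained without constraint-violation penalties in the loss, which is the contrast with loss-only approaches the abstract draws.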
https://arxiv.org/abs/2601.13708
Academic Papers
svg
7f9957270523e7931ff8b29528409f32ff01f1ddabaad8797aacfe40b624e5f6
2026-01-21T00:00:00-05:00
Quantum Box-Muller Transform
arXiv:2601.13718v1 Announce Type: new Abstract: The Box-Muller transform is a widely used method to generate Gaussian samples from uniform samples. Quantum amplitude encoding methods encode the multi-variate normal distribution in the amplitudes of a quantum state. This work presents the Quantum Box-Muller transform which creates a superposition of binary-encoded grid points representing the multi-variate normal distribution. The gate complexity of our method depends on quantum arithmetic operations and, using a specific set of known implementations, the complexity is quadratic in the number of qubits. We apply our method to Monte-Carlo integration, in particular to the estimation of the expectation value of a function of Gaussian random variables. Our method implies that the state preparation circuit used multiple times in amplitude estimation requires only quantum arithmetic circuits for the grid points and the function, in addition to a single controlled rotation. We show how to provide the expectation value estimate with an error that is exponentially small in the number of qubits, similar to the amplitude-encoding setting with error-free encoding.
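For reference, the classical Box-Muller transform that the abstract quantizes can be stated in a few lines: two independent uniform samples are mapped to two independent standard-normal samples. A toy NumPy sketch (sample size and seed are arbitrary; this is not the paper's quantum circuit):

```python
import numpy as np

def box_muller(u1, u2):
    """Classical Box-Muller: map two uniform(0,1] samples to two
    independent standard-normal samples."""
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2.0 * np.pi * u2), r * np.sin(2.0 * np.pi * u2)

rng = np.random.default_rng(42)
n = 200_000
u1 = rng.uniform(1e-12, 1.0, n)    # avoid log(0)
u2 = rng.uniform(0.0, 1.0, n)
z1, z2 = box_muller(u1, u2)
samples = np.concatenate([z1, z2])
```

The quantum version described above replaces the uniform samples by a superposition of binary-encoded grid points and evaluates the same arithmetic with quantum circuits.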
https://arxiv.org/abs/2601.13718
Academic Papers
svg
f16396f1a56a7b9c3b97b8ef64aa8a0024121679017988f0b298fa4c91490bf0
2026-01-21T00:00:00-05:00
On-Chip Generation of Co-Polarized and Spectrally Separable Photon Pairs
arXiv:2601.13740v1 Announce Type: new Abstract: On-chip generation of high-purity single photons is essential for scalable photonic quantum technologies. Spontaneous parametric down-conversion (SPDC) is widely used to generate photon pairs for heralded single-photon sources, but intrinsic spectral correlations of the pairs often limit the purity and interference visibility of the heralded photons. Existing approaches to suppress these correlations rely on narrowband spectral filtering, which introduces loss, or exploiting different polarizations, which complicates on-chip integration. Here, we demonstrate a new strategy for generating spectrally separable photon pairs in thin-film lithium niobate nanophotonic circuits by harnessing higher-order spatial modes, with all interacting fields residing in the same polarization. Spectral separability is achieved by engineering group-velocity matching using higher-order transverse-electric modes, combined with a Gaussian-apodized poling profile to further suppress residual correlations inherent to standard periodic poling. Subsequent on-chip mode conversion with efficiency exceeding 95\% maps the higher-order mode to the fundamental mode and routes the photons into distinct output channels. The resulting heralded photons exhibit spectral purities exceeding 94\% inferred from joint-spectral intensity and 89\% from unheralded $g^{(2)}$ measurement. This approach enables flexible spectral and temporal engineering of on-chip quantum light sources for quantum computing and quantum networking.
https://arxiv.org/abs/2601.13740
Academic Papers
svg
cb6b04b8e243075af4a206233829b068db62f57041f59b99419eb3581eefe4e8
2026-01-21T00:00:00-05:00
Limits of multimode bunching for boson sampling validation: anomalous bunching induced by time delays
arXiv:2601.13792v1 Announce Type: new Abstract: The multimode bunching probability is expected to provide a useful criterion for validating boson sampling experiments. Its applicability, however, is challenged by the existence of anomalous bunching, namely paradoxical situations in which partially distinguishable particles exhibit a higher bunching probability in two or more modes than perfectly indistinguishable ones. Using multimode bunching as a reliable criterion of genuine indistinguishability, therefore, requires a clear identification of the interferometric configurations in which anomalous bunching can or cannot occur. In particular, since uncontrolled small time delays between single-photon pulses constitute a common source of mode mismatch in current photonic platforms, it is essential to determine whether the resulting photon distinguishability might lead to anomalous bunching. Here, we first identify a broad class of interferometric configurations in which anomalous bunching is rigorously excluded, thereby establishing regimes where multimode bunching-based validation remains valid. Then, we find that, quite unexpectedly, temporal mode mismatch does not belong to this class. We exhibit a specific interferometric setup in which temporal distinguishability enhances multimode bunching, demonstrating that time delays can induce an anomalous behavior. These results help clarify the conditions under which multimode bunching remains a reliable validation tool.
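The simplest setting where a time delay degrades bunching is two photons on a balanced beamsplitter; in this two-photon, two-mode case bunching decreases monotonically with delay, so no anomaly can arise. A toy NumPy sketch under Gaussian-wavepacket assumptions (the anomalous behavior reported in the abstract requires larger multimode configurations, which this sketch does not reproduce):

```python
import numpy as np

def bunching_probability(tau, sigma=1.0):
    """Two photons on a balanced beamsplitter with Gaussian wavepackets.
    The temporal-mode overlap is s = exp(-tau^2 / (4 sigma^2)); the
    probability that both photons exit the same port is (1 + s^2) / 2.
    """
    s = np.exp(-tau**2 / (4.0 * sigma**2))
    return 0.5 * (1.0 + s**2)

delays = np.linspace(0.0, 5.0, 50)
p = bunching_probability(delays)
# p falls monotonically from 1 (indistinguishable) toward 1/2 (distinguishable)
```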
https://arxiv.org/abs/2601.13792
Academic Papers
svg
62f4c77a8a5396726f88c33c5fa674fb628129f8357cf347ec7df03fdddbb48c
2026-01-21T00:00:00-05:00
Squeezed-Light-Enhanced Multiparameter Quantum Estimation in Cavity Magnonics
arXiv:2601.13814v1 Announce Type: new Abstract: Improving multiparameter quantum estimation in magnonic systems via quantum noise suppression is a well-established and critical research objective. In this work, we propose an experimentally realistic scheme to improve the precision of simultaneously estimating different parameters in a cavity-magnon system by utilizing a degenerate optical parametric amplifier (OPA). The OPA enhances the estimation precision by decreasing the most informative quantum Cram\'er-Rao bound, calculated using the symmetric logarithmic derivative (SLD) and the right logarithmic derivative (RLD). We show that when nonlinearity is introduced into the system, quantum noise is significantly suppressed. Our results show how different physical parameters influence multiparameter estimation precision and provide a detailed discussion of the associated physical mechanisms in the steady state. We focus on practical Gaussian measurement schemes that can be realized experimentally. In addition, we analyze the system's dynamics, comparing both the SLD quantum Fisher information (QFI) and the classical Fisher information (CFI) for both homodyne and heterodyne detection. This approach provides a robust foundation for multiparameter quantum estimation, offering significant potential for application in hybrid magnomechanical and optomechanical systems.
https://arxiv.org/abs/2601.13814
Academic Papers
svg
45814e311a90524c91f6c03612313e47e6c4dd022eaa1ab4fbdcccefea3ae7ac
2026-01-21T00:00:00-05:00
Nonclassical photocounting statistics with a single on-off detector
arXiv:2601.13869v1 Announce Type: new Abstract: Any single on-off photocounter, which can only detect the presence or absence of photons without discriminating their number, is not capable of identifying nonclassical nature of light. This limitation arises because any photocounting statistics obtained with such a detector can be easily reproduced with coherent states of a light mode. We show that a simple modification of an on-off detector -- introducing controlled attenuation as a tunable setting -- enables such detectors to reveal nonclassical properties of radiation fields.
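The role of controlled attenuation can be illustrated with the standard binomial-loss model: an on-off detector with efficiency $\eta$ clicks with probability $1 - \sum_n p_n (1-\eta)^n$. Scanning $\eta$ then separates a single-photon Fock state (click probability linear in $\eta$) from a coherent state of equal mean photon number (concave in $\eta$) — a hypothetical NumPy sketch illustrating the principle, not the paper's nonclassicality criterion:

```python
import numpy as np
from math import factorial

def click_prob(pn, eta):
    """On-off click probability for photon-number distribution pn under
    attenuation eta: P = 1 - sum_n pn * (1 - eta)^n."""
    n = np.arange(len(pn))
    return 1.0 - np.sum(pn * (1.0 - eta) ** n)

etas = np.linspace(0.05, 1.0, 20)

# Single-photon Fock state: p_1 = 1, so P(click) = eta exactly.
fock = np.array([0.0, 1.0])
p_fock = np.array([click_prob(fock, e) for e in etas])

# Coherent state with the same mean photon number mu = 1:
# Poissonian pn, so P(click) = 1 - exp(-eta * mu).
mu, nmax = 1.0, 40
ns = np.arange(nmax)
coh = np.exp(-mu) * mu**ns / np.array([factorial(k) for k in ns], dtype=float)
p_coh = np.array([click_prob(coh, e) for e in etas])
```

The two curves coincide only in the limit $\eta \to 0$; their different shapes as functions of the tunable attenuation are the kind of signature a single on-off detector alone cannot access.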
https://arxiv.org/abs/2601.13869
Academic Papers
svg
298aac95f3c0adbb6c63a6785b49b9e40c1afd0f7c03f3608c5957f881d21859
2026-01-21T00:00:00-05:00
A phase space approach to the wavefunction and operator spreading in the Krylov basis
arXiv:2601.13872v1 Announce Type: new Abstract: In the Wigner-Weyl phase space formulation of quantum mechanics, we analyse the problem of the spreading of an initial state or an initial operator under time evolution when described in terms of the Krylov basis. After constructing the phase space representations of the Krylov basis states generated by a Hamiltonian from a given initial state by using the Weyl transformation, we subsequently use them to cast the Krylov state complexity as an integral over the phase space in terms of the Wigner function of the time-evolved initial state, so that the contribution of the classical Liouville equation and higher-order quantum corrections to the Wigner function time evolution equation towards the Krylov state complexity can be identified. Next, we construct the double phase space functions associated with the Krylov basis for the operators by using a suitable generalisation of the Weyl transformation applicable for superoperators, and use them to rewrite the Krylov operator complexity as an integral over the double phase space in terms of a generalisation of the usual Wigner function. These results, in particular, show that the complexity measures based on the expansion of a time-evolved state (or an operator) in the Krylov basis can be thought to belong to a general class of complexity measures constructed from the expansion coefficients of the time-dependent Wigner function in an orthonormal basis in the phase space, and help us to connect these complexity measures with measures of complexity of time-evolved state based on harmonic expansion of the time-dependent Wigner function.
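The Krylov basis underlying the abstract is generated from a Hamiltonian and an initial state by the Lanczos recursion; before any phase-space representation is built, the basis itself can be constructed numerically. A minimal NumPy sketch for a random Hermitian "Hamiltonian" (dimension, depth, and seed are arbitrary illustrative choices):

```python
import numpy as np

def lanczos_basis(H, v0, m):
    """Orthonormal Krylov basis {K_0, ..., K_{m-1}} from a Hermitian H
    and initial vector v0, via the Lanczos recursion with full
    reorthogonalization for numerical stability."""
    n = H.shape[0]
    K = np.zeros((m, n), dtype=complex)
    K[0] = v0 / np.linalg.norm(v0)
    for j in range(m - 1):
        w = H @ K[j]
        coeffs = K[: j + 1].conj() @ w      # overlaps with previous vectors
        w = w - coeffs @ K[: j + 1]         # project them all out
        K[j + 1] = w / np.linalg.norm(w)
    return K

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (A + A.conj().T) / 2                    # Hermitian "Hamiltonian"
v0 = rng.normal(size=8).astype(complex)
K = lanczos_basis(H, v0, 5)
gram = K.conj() @ K.T                       # should be the identity
T = K.conj() @ H @ K.T                      # should be tridiagonal
```

In this basis H is tridiagonal, which is what makes expansion coefficients of a time-evolved state (the complexity amplitudes discussed above) convenient to compute.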
https://arxiv.org/abs/2601.13872
Academic Papers
svg
d1b09ea2210d4d94e0ae929d6361b043bf32a69e6033c8e93181a35630bf90c3
2026-01-21T00:00:00-05:00
On spooky action at a distance and conditional probabilities
arXiv:2601.13875v1 Announce Type: new Abstract: The aim of this expos\'e is to make explicit the analogy between the classical notion of a non-independent probability distribution and the quantum notion of an entangled state. To bring that analogy forth, we consider a classical system with two dependent random variables and a quantum system with two components. In the classical case, after observing one of the random variables, the underlying sample space and the probability distribution change. In the quantum case, when an event pertaining to one of the components is observed, the post-measurement state captures both the change in the state of the system and, implicitly, the new probability distribution. The predictions after a measurement, in the classical case and in the quantum case, have to be computed with the conditional distribution given the value of the observed variable.
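The classical half of the analogy — observation changing the effective sample space — is just conditioning on a joint distribution. A minimal NumPy illustration with two dependent binary variables (the joint table is an arbitrary example, not taken from the paper):

```python
import numpy as np

# Joint distribution of two dependent binary random variables (X, Y);
# rows index X, columns index Y.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])

p_x = joint.sum(axis=1)        # marginal of X
p_y = joint.sum(axis=0)        # marginal of Y

# Observing X = 0 shrinks the sample space: predictions for Y must now
# use the conditional distribution P(Y | X = 0).
p_y_given_x0 = joint[0] / p_x[0]

# Dependence: the joint table is not the product of its marginals.
dependent = not np.allclose(joint, np.outer(p_x, p_y))
```

Here the unconditional prediction for Y is 50/50, but after observing X = 0 it becomes 80/20 — the classical counterpart of the post-measurement update discussed above.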
https://arxiv.org/abs/2601.13875
Academic Papers
svg
0d981fe4aae366fa4a2d20d896a272eaf55404bc559d265b5cbeec1ebd871144
2026-01-21T00:00:00-05:00
Low-Resource Quantum Energy Gap Estimation via Randomization
arXiv:2601.13881v1 Announce Type: new Abstract: Estimating the energy spectra of quantum many-body systems is a fundamental task in quantum physics, with applications ranging from chemistry to condensed matter. Algorithmic shadow spectroscopy is a recent method that leverages randomized measurements on time-evolved quantum states to extract spectral information. However, implementing accurate time evolution with low-depth circuits remains a key challenge for near-term quantum hardware. In this work, we propose a hybrid quantum-classical protocol that integrates Time Evolution via Probabilistic Angle Interpolation (TE-PAI) into the shadow spectroscopy framework. TE-PAI enables the simulation of time evolution using shallow stochastic circuits while preserving unbiased estimates through quasiprobability sampling. We construct the combined estimator and derive its theoretical properties. Through numerical simulations, we demonstrate that our method accurately resolves energy gaps and exhibits enhanced robustness to gate noise compared to standard Trotter-based shadow spectroscopy. We further validate the protocol experimentally on up to 20 qubits using IBM quantum hardware. This makes TE-PAI shadow spectroscopy a promising tool for spectral analysis on noisy intermediate-scale quantum (NISQ) devices.
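The classical post-processing idea behind shadow spectroscopy — energy gaps appear as oscillation frequencies in time traces of measured observables — can be illustrated with a noisy two-level toy signal and a Fourier transform. This sketch stands in for neither TE-PAI nor the full shadow-spectroscopy estimator; all parameters are invented for illustration:

```python
import numpy as np

gap = 2.0                      # energy gap E1 - E0 (hbar = 1)
dt = 0.1
n_steps = 512
t = dt * np.arange(n_steps)

# An observable on an equal superposition of the two eigenstates
# oscillates as cos(gap * t); small noise mimics finite sampling.
rng = np.random.default_rng(7)
signal = np.cos(gap * t) + 0.05 * rng.normal(size=n_steps)

freqs = 2.0 * np.pi * np.fft.rfftfreq(n_steps, d=dt)    # angular frequencies
power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
estimated_gap = freqs[np.argmax(power)]
```

The estimate is limited by the frequency resolution $2\pi/(n\,\Delta t)$, which is why the trade-off between circuit depth (evolution time) and noise matters for the protocol described above.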
https://arxiv.org/abs/2601.13881
Academic Papers
svg
56254bab16825df2902b0aae11d37181132453186e787cf1d8a4af15dbab6f25
2026-01-21T00:00:00-05:00
Tensor Network Assisted Distributed Variational Quantum Algorithm for Large Scale Combinatorial Optimization Problem
arXiv:2601.13956v1 Announce Type: new Abstract: Although quantum computing holds promise for solving Combinatorial Optimization Problems (COPs), the limited qubit capacity of NISQ hardware makes large-scale instances intractable. Conventional methods attempt to bridge this gap through decomposition or compression, yet they frequently fail to capture global correlations of subsystems, leading to solutions of limited quality. We propose the Distributed Variational Quantum Algorithm (DVQA) to overcome these limitations, enabling the solution of 1,000-variable instances on constrained hardware. A key innovation of DVQA is its use of the truncated higher-order singular value decomposition to preserve inter-variable dependencies without relying on complex long-range entanglement, leading to a natural form of noise localization where errors scale with subsystem size rather than total qubit count, thus reconciling scalability with accuracy. Theoretical bounds confirm the algorithm's robustness for p-local Hamiltonians. Empirically, DVQA achieves state-of-the-art performance in simulations and has been experimentally validated on the Wu Kong quantum computer for portfolio optimization. This work provides a scalable, noise-resilient framework that advances the timeline for practical quantum optimization algorithms.
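Truncated HOSVD, the decomposition the abstract relies on, keeps the leading left singular vectors of each mode unfolding and projects the tensor onto them. A self-contained NumPy sketch on a synthetic low-multilinear-rank tensor (sizes, ranks, and seed are illustrative; the DVQA pipeline itself is not reproduced here):

```python
import numpy as np

def truncated_hosvd(T, ranks):
    """Truncated higher-order SVD: per-mode orthonormal factors from the
    leading left singular vectors of each mode-k unfolding, plus the
    core tensor obtained by projecting T onto those factors."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.conj().T, np.moveaxis(core, mode, 0), axes=1),
            0, mode)
    return core, factors

rng = np.random.default_rng(3)
# Exact multilinear-rank-(2,2,2) tensor, so truncation loses nothing.
G = rng.normal(size=(2, 2, 2))
A = [rng.normal(size=(6, 2)) for _ in range(3)]
T = np.einsum('abc,ia,jb,kc->ijk', G, A[0], A[1], A[2])

core, factors = truncated_hosvd(T, (2, 2, 2))
T_rec = np.einsum('abc,ia,jb,kc->ijk', core, factors[0], factors[1], factors[2])
rel_err = np.linalg.norm(T_rec - T) / np.linalg.norm(T)
```

The small core tensor is what retains the inter-variable dependencies across subsystems that plain decomposition methods lose.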
https://arxiv.org/abs/2601.13956
Academic Papers
svg
af9c97f33a67476372ea4cd78c64f2171e6eac6472a5c855a565b82ec218e116
2026-01-21T00:00:00-05:00
A Converse Bound via the Nussbaum-Szko{\l}a Mapping for Quantum Hypothesis Testing
arXiv:2601.13970v1 Announce Type: new Abstract: Quantum hypothesis testing concerns the discrimination between quantum states. This paper introduces a novel lower bound for asymmetric quantum hypothesis testing that is based on the Nussbaum-Szko{\l}a mapping. The lower bound provides a unified recovery of converse results across all major asymptotic regimes, including large-, moderate-, and small-deviations. Unlike existing bounds, which either rely on technically involved information-spectrum arguments or suffer from fixed prefactors and limited applicability in the non-asymptotic regime, the proposed bound arises from a single expression and enables, in some cases, the direct use of classical results. It is further demonstrated that the proposed bound provides accurate approximations to the optimal quantum error trade-off function at small blocklengths. Numerical comparisons with existing bounds, including those based on fidelity and information spectrum methods, highlight its improved tightness and practical relevance.
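The Nussbaum-Szko{\l}a mapping itself is simple to state: diagonalize $\rho = \sum_i p_i |x_i\rangle\langle x_i|$ and $\sigma = \sum_j q_j |y_j\rangle\langle y_j|$ and set $P(i,j) = p_i |\langle x_i|y_j\rangle|^2$, $Q(i,j) = q_j |\langle x_i|y_j\rangle|^2$. A NumPy sketch that builds the pair and checks its marginals (random test states; the paper's converse bound is not derived here):

```python
import numpy as np

def nussbaum_szkola(rho, sigma):
    """Nussbaum-Szkola distributions for density matrices rho, sigma:
    P(i,j) = p_i |<x_i|y_j>|^2 and Q(i,j) = q_j |<x_i|y_j>|^2."""
    p, X = np.linalg.eigh(rho)       # columns of X are eigenvectors x_i
    q, Y = np.linalg.eigh(sigma)
    overlap = np.abs(X.conj().T @ Y) ** 2
    return p[:, None] * overlap, overlap * q[None, :]

rng = np.random.default_rng(5)

def random_state(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

rho, sigma = random_state(3), random_state(3)
P, Q = nussbaum_szkola(rho, sigma)
```

Both outputs are genuine classical probability distributions whose marginals recover the spectra of $\rho$ and $\sigma$, which is what allows classical hypothesis-testing results to be transferred to the quantum problem.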
https://arxiv.org/abs/2601.13970
Academic Papers
svg
fa4da720e99a3c2a53c12f2fda93fed64d0dd8c6eedc2dbedce97327d3b4a01d
2026-01-21T00:00:00-05:00
Experimental Evidence-Based Sub-Rayleigh Source Discrimination
arXiv:2601.13972v1 Announce Type: new Abstract: We propose a Bayesian evidence-based inference framework based on relative belief ratios and apply it to discriminating between one and two incoherent optical point sources using spatial-mode demultiplexing (SPADE). Unlike the Helstrom measurement, SPADE requires no collective detection and is optimal for asymptotically large samples. Our method avoids ad hoc statistical constructs and relies solely on the information contained in the data, with all assumptions entering only through the likelihood model and prior beliefs. Using experimental evidence, we demonstrate the superior resolving performance of SPADE over direct imaging from a new and extensible perspective; one that naturally generalizes to multiple sources and offers a practical, robust approach to analyzing quantum-enhanced superresolution.
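A relative belief ratio compares posterior to prior belief in a hypothesis, $RB(H) = \Pi(H \mid \text{data}) / \Pi(H)$, with $RB > 1$ read as evidence for $H$. A toy discrete illustration with binomial count data (the hypotheses, prior, and counts are invented for illustration and unrelated to the SPADE experiment):

```python
import numpy as np
from math import comb

# Two hypotheses for a detection probability, a uniform prior, and
# binomial data: 13 detections in 20 trials.
thetas = np.array([0.3, 0.7])
prior = np.array([0.5, 0.5])
n, k = 20, 13

lik = np.array([comb(n, k) * th**k * (1 - th)**(n - k) for th in thetas])
posterior = prior * lik / np.sum(prior * lik)

# Relative belief ratio: how much the data moved belief in each hypothesis.
rb = posterior / prior
```

Here the data favor $\theta = 0.7$, so its relative belief ratio exceeds 1 while the other falls below 1; the evidence statement comes purely from the prior-to-posterior update.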
https://arxiv.org/abs/2601.13972
Academic Papers
svg
5db80d29adb092521a6ca205c16ee9a10fc44d8a84759f591d5c814da2aba37f
2026-01-21T00:00:00-05:00
Optimal Construction of Two-Qubit Gates using the Symmetries of B Gate Equivalence Class
arXiv:2601.13983v1 Announce Type: new Abstract: Two applications of gates from the B gate equivalence class can generate all two-qubit gates. This local equivalence class is invariant under the mirror (multiplication with the SWAP gate) operation, inverse (Hermitian conjugate) operation, and the combined inverse and mirror operations. The last two symmetries are associated with the ability of a two-qubit gate to generate the two-qubit local gates and the SWAP gate in two applications. No single local equivalence class of two-qubit gates, except the B gate equivalence class, has these two symmetries. Only the planar regions of the Weyl chamber, describing the mirror operation, contain the local equivalence classes with either one of the two symmetries. We show that there exist one-parameter families of local equivalence classes on these planes, with and without the B gate equivalence class, such that each of them can be used to construct a parameterized universal two-qubit quantum circuit that involves only two nonlocal two-qubit gates. We also discuss the implementation of the gates from a few families of local equivalence classes on superconducting quantum computers for optimal generation of all two-qubit gates.
https://arxiv.org/abs/2601.13983
Academic Papers
svg
c996ab5e0f218cb23c1400381b50433a92ce2e4cac86f3191f97f9cedef115ba
2026-01-21T00:00:00-05:00
Quantum Pontus-Mpemba Effect Enabled by the Liouvillian Skin Effect
arXiv:2601.14083v1 Announce Type: new Abstract: We unveil a quantum Pontus-Mpemba effect enabled by the Liouvillian skin effect in a dissipative tight-binding chain with asymmetric incoherent hopping and coherent boundary coupling. The skin effect, induced by non-reciprocal dissipation, localizes relaxation modes near the system boundaries and gives rise to non-orthogonal spectral geometry. While such non-normality is often linked to slow relaxation, we show that it can instead accelerate relaxation through a two-step protocol - realizing a quantum Pontus-Mpemba effect. Specifically, we consider a one-dimensional open chain with coherent hopping $J$, asymmetric incoherent hoppings $J_{\rm R} \neq J_{\rm L}$, and a controllable end-to-end coupling $\epsilon$. For $\epsilon=0$, the system exhibits the Liouvillian skin effect, with left and right eigenmodes localized at opposite edges. We compare two relaxation protocols toward the same stationary state: (i) a direct relaxation with $\epsilon=0$, and (ii) a two-step (Pontus) protocol where a brief coherent evolution transfers the excitation across the lattice before relaxation. Although both share the same asymptotic decay rate, the two-step protocol relaxes significantly faster due to its reduced overlap with the slow boundary-localized Liouvillian mode. The effect disappears when $J_{\rm R}=J_{\rm L}$, i.e., when the skin effect vanishes. Our results reveal a clear connection between boundary-induced non-normality and protocol-dependent relaxation acceleration, suggesting new routes for controlling dissipation and transient dynamics in open quantum systems.
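The boundary localization that drives the effect can be seen already in a classical analogue of non-reciprocal dissipation: an $N$-site hopping chain with asymmetric rates $J_{\rm R} \neq J_{\rm L}$. A NumPy sketch of the rate matrix and its slow modes (a classical stand-in chosen for simplicity, not the paper's Lindbladian; sizes and rates are arbitrary):

```python
import numpy as np

# Rate matrix W for dP/dt = W P on an open chain with biased hopping.
N, J_R, J_L = 10, 1.0, 0.3
W = np.zeros((N, N))
for i in range(N - 1):
    W[i + 1, i] += J_R          # hop right from site i
    W[i, i + 1] += J_L          # hop left from site i+1
W -= np.diag(W.sum(axis=0))     # probability conservation: columns sum to 0

evals, evecs = np.linalg.eig(W)
order = np.argsort(-evals.real)             # slowest modes first
stationary = np.abs(evecs[:, order[0]])     # eigenvalue-0 mode
stationary /= stationary.sum()
gap = -evals.real[order[1]]                 # asymptotic relaxation rate
```

For $J_{\rm R} > J_{\rm L}$ the stationary mode piles up at the right boundary, the classical counterpart of the skin-effect localization; how strongly an initial condition overlaps the slow boundary mode then controls the transient relaxation, which is the lever the two-step protocol above exploits.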
https://arxiv.org/abs/2601.14083
Academic Papers
svg