Dataset columns (as extracted from the viewer):
- id: string, 64 characters
- published: string, 19 to 25 characters
- title: string, 7 to 262 characters
- description: string, 6 to 54.4k characters
- link: string, 31 to 227 characters
- category: 6 classes
- image: string, 3 to 247 characters
fb017a8c79ffb0f6d5bd61245c16419e8ebce895333b3fe287642e19ef0454a8
2026-01-16T00:00:00-05:00
Risk and Monotone Comparative Statics without Independence
arXiv:2601.10664v1 Announce Type: new Abstract: We extend well-known comparative results under expected utility to models of non-expected utility by providing novel conditions on local utility functions. We illustrate how our results parallel, and are distinct from, existing results for monotone comparative statics under expected utility, as well as risk preferences for non-expected utility. Our conditions generalize existing results for specific preferences (including expected utility) and allow us to verify monotone comparative statics for novel environments and preferences. We apply our results to portfolio choice problems where preferences or wealth might change, as well as precautionary savings.
https://arxiv.org/abs/2601.10664
Academic Papers
svg
b4e26355e380528768c3b446fa7db60cd67c1413af2229731781193dcfc884fb
2026-01-16T00:00:00-05:00
Motivating Effort with Information about Future Rewards
arXiv:2110.05643v4 Announce Type: replace Abstract: This paper studies the optimal mechanism to motivate effort in a dynamic principal-agent model without transfers. An agent is engaged in a task with uncertain future rewards and can quit at any time. The principal knows the reward and provides information over time to motivate effort. We provide a unified framework and derive the optimal information policy in closed form across stationary and nonstationary environments. Within this framework, we identify two precise conditions, each of which guarantees that dynamic disclosure is strictly valuable. First, if the principal is impatient compared to the agent, she prefers the front-loaded effort schedule induced by dynamic disclosure; in a stationary environment, dynamic disclosure is beneficial if and only if the principal is less patient. Second, in an environment where the agent would become pessimistic over time absent any disclosure, dynamic information provision can counteract this downward trend and encourage longer effort. Notably, patience remains a crucial determinant of the structure of the optimal policy.
https://arxiv.org/abs/2110.05643
Academic Papers
svg
f32f62a782850dc22302eab7027a74893000df287f560e8bceed459600fe6571
2026-01-16T00:00:00-05:00
Local-Polynomial Estimation for Multivariate Regression Discontinuity Designs
arXiv:2402.08941v3 Announce Type: replace Abstract: We study a multivariate regression discontinuity design in which treatment is assigned by crossing a boundary in the space of multiple running variables. We document that the existing bandwidth selector is suboptimal for a multivariate regression discontinuity design when the distance to a boundary point is used as the running variable, and introduce a multivariate local-linear estimator for multivariate regression discontinuity designs. Our estimator is asymptotically valid and can capture heterogeneous treatment effects over the boundary. We demonstrate in numerical simulations that our estimator exhibits smaller root mean squared errors and often shorter confidence intervals. We illustrate our estimator in empirical applications to multivariate designs from a Colombian scholarship study and a U.S. House of Representatives voting study, and demonstrate that it reveals richer heterogeneous treatment effects, often with shorter confidence intervals, than the existing estimator.
https://arxiv.org/abs/2402.08941
Academic Papers
svg
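The boundary-point local-linear idea above can be sketched numerically. Below is a minimal, hypothetical version (not the paper's estimator): two running variables, a triangular kernel on the Euclidean distance to a chosen boundary point, and a separate weighted local-linear fit on each side; the treatment effect is the difference of the fitted intercepts at the boundary point.

```python
import numpy as np

def local_linear_rdd_2d(x, y, treated, point, h):
    """Local-linear RDD effect at a boundary `point` in 2-D running-variable
    space, using a triangular kernel on Euclidean distance (illustrative)."""
    d = np.linalg.norm(x - point, axis=1)
    w = np.clip(1.0 - d / h, 0.0, None)           # triangular kernel weights
    est = {}
    for side, mask in (("treated", treated), ("control", ~treated)):
        m = mask & (w > 0)
        X = np.column_stack([np.ones(m.sum()), x[m] - point])  # intercept + slopes
        sw = np.sqrt(w[m])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y[m] * sw, rcond=None)
        est[side] = beta[0]                        # fitted value at the boundary point
    return est["treated"] - est["control"]

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(4000, 2))
treated = x[:, 0] >= 0.0                           # boundary: x1 = 0
tau = 2.0                                          # true effect at the boundary
y = 1.0 + x[:, 0] + 0.5 * x[:, 1] + tau * treated + rng.normal(0, 0.1, 4000)
effect = local_linear_rdd_2d(x, y, treated, point=np.array([0.0, 0.0]), h=0.4)
```

Because the underlying conditional mean is linear here, the local-linear fit recovers the jump at the boundary point almost exactly; heterogeneity over the boundary could be traced by moving `point` along it.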
09cbde97859d9926103199c6ba98f78630f3602e951b79016689790ec4767248
2026-01-16T00:00:00-05:00
Difference-in-Differences with Time-varying Continuous Treatments using Double/Debiased Machine Learning
arXiv:2410.21105v2 Announce Type: replace Abstract: We propose a difference-in-differences (DiD) framework designed for time-varying continuous treatments across multiple periods. Specifically, we estimate the average treatment effect on the treated (ATET) by comparing distinct non-zero treatment intensities. Identification rests on a conditional parallel trends assumption that accounts for observed covariates and past treatment histories. Our approach allows for lagged treatment effects and, in repeated cross-sectional settings, accommodates compositional changes in covariates. We develop kernel-based ATET estimators for both repeated cross-sections and panel data, leveraging the double/debiased machine learning framework to handle potentially high-dimensional covariates and histories. We establish the asymptotic properties of our estimators under mild regularity conditions and demonstrate via simulations that their undersmoothed versions perform well in finite samples. As an empirical illustration, we apply our estimator to assess the effect of the second-dose COVID-19 vaccination rate in Brazil and find that higher vaccination rates reduce COVID-19-related mortality after a lag of several weeks.
https://arxiv.org/abs/2410.21105
Academic Papers
svg
88e403537e42e11bef49e8f4fbd4f0b8532a3f9c38f6130f5cef2514c54d7daf
2026-01-16T00:00:00-05:00
The Identification Power of Combining Experimental and Observational Data for Distributional Treatment Effect Parameters
arXiv:2508.12206v4 Announce Type: replace Abstract: This study investigates the identification power gained by combining experimental data, in which treatment is randomized, with observational data, in which treatment is self-selected, for distributional treatment effect (DTE) parameters. While experimental data identify average treatment effects, many DTE parameters, such as the distribution of individual treatment effects, are only partially identified. We examine whether and how combining these two data sources tightens the identified set for such parameters. For broad classes of DTE parameters, we derive nonparametric sharp bounds under the combined data and clarify the mechanism through which data combination improves identification relative to using experimental data alone. Our analysis highlights that self-selection in observational data is a key source of identification power. We establish necessary and sufficient conditions under which the combined data shrink the identified set, showing that such shrinkage generally occurs unless selection-on-observables holds in the observational data. We also propose a linear programming approach to compute sharp bounds that can incorporate additional structural restrictions, such as positive dependence between potential outcomes and the generalized Roy selection model. An empirical application using data on negative campaign advertisements in the 2008 U.S. presidential election illustrates the practical relevance of the proposed approach.
https://arxiv.org/abs/2508.12206
Academic Papers
svg
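The linear-programming route to sharp bounds can be illustrated on a toy problem. The sketch below bounds P(Y1 > Y0) given only the two marginal pmfs (as identified from experimental data), optimizing over all joint distributions consistent with them; it ignores the observational-data and selection constraints that are the paper's contribution, and the supports and probabilities are made up.

```python
import numpy as np
from scipy.optimize import linprog

def dte_bounds(p1, p0):
    """Sharp bounds on P(Y1 > Y0) when only the marginal pmfs of Y1 and Y0
    (on support 0..K-1) are known -- a Frechet-type toy LP."""
    k = len(p1)
    c = np.array([1.0 if i > j else 0.0 for i in range(k) for j in range(k)])
    A_eq, b_eq = [], []
    for i in range(k):                       # row sums match the Y1 marginal
        row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1.0
        A_eq.append(row); b_eq.append(p1[i])
    for j in range(k):                       # column sums match the Y0 marginal
        col = np.zeros(k * k); col[j::k] = 1.0
        A_eq.append(col); b_eq.append(p0[j])
    lo = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1)).fun
    hi = -linprog(-c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1)).fun
    return lo, hi

lo, hi = dte_bounds(p1=[0.2, 0.3, 0.5], p0=[0.5, 0.3, 0.2])
```

Additional structural restrictions (positive dependence, a selection model) would enter as extra linear constraints on the joint pmf, shrinking the interval [lo, hi].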
6b732ac069882f39b5e3b15ed0c9ab83bd108348ef25bfc0b5dd900d07bf0572
2026-01-16T00:00:00-05:00
Warp speed price moves: Jumps after earnings announcements
arXiv:2601.08962v2 Announce Type: replace Abstract: Corporate earnings announcements unpack large bundles of public information that should, in efficient markets, trigger jumps in stock prices. Testing this implication is difficult in practice, as it requires noisy high-frequency data from after-hours markets, where most earnings announcements are released. Using a unique dataset and a new microstructure noise-robust jump test, we show that earnings announcements almost always induce jumps in the stock price of announcing firms. They also significantly raise the probability of price co-jumps in non-announcing firms and the market. We find that returns from a post-announcement trading strategy are consistent with efficient price formation after 2016.
https://arxiv.org/abs/2601.08962
Academic Papers
svg
63d7c4ea2174a1ecf9093d9cad0ce5dfd974f000d79f8cc8c6e3982c5a820e95
2026-01-16T00:00:00-05:00
The drift burst hypothesis
arXiv:2601.08974v2 Announce Type: replace Abstract: The drift burst hypothesis postulates the existence of short-lived locally explosive trends in the price paths of financial assets. The recent U.S. equity and treasury flash crashes can be viewed as two high-profile manifestations of such dynamics, but we argue that drift bursts of varying magnitude are an expected and regular occurrence in financial markets that can arise through established mechanisms of liquidity provision. We show how to build drift bursts into the continuous-time Itô semimartingale model, elaborate on the conditions required for the process to remain arbitrage-free, and propose a nonparametric test statistic that identifies drift bursts from noisy high-frequency data. We apply the test and demonstrate that drift bursts are a stylized fact of the price dynamics across equities, fixed income, currencies and commodities. Drift bursts occur once a week on average, and the majority of them are accompanied by subsequent price reversion and can thus be regarded as "flash crashes." The reversal is found to be stronger for negative drift bursts with large trading volume, which is consistent with endogenous demand for immediacy during market crashes.
https://arxiv.org/abs/2601.08974
Academic Papers
svg
3f73d4fd7bfe1157af043d899890835300a9bc8964312a0c86e31c791e50aed0
2026-01-16T00:00:00-05:00
Modified Delayed Acceptance MCMC for Quasi-Bayesian Inference with Linear Moment Conditions
arXiv:2511.17117v4 Announce Type: replace-cross Abstract: We develop a computationally efficient framework for quasi-Bayesian inference based on linear moment conditions. The approach employs a delayed acceptance Markov chain Monte Carlo (DA-MCMC) algorithm that uses a surrogate target kernel and a proposal distribution derived from an approximate conditional posterior, thereby exploiting the structure of the quasi-likelihood. Two implementations are introduced. DA-MCMC-Exact fully incorporates prior information into the proposal distribution and maximizes per-iteration efficiency, whereas DA-MCMC-Approx omits the prior in the proposal to reduce matrix inversions, improving numerical stability and computational speed in higher dimensions. Simulation studies on heteroskedastic linear regressions show substantial gains over standard MCMC and conventional DA-MCMC baselines, measured by multivariate effective sample size per iteration and per second. The Approx variant yields the best overall throughput, while the Exact variant attains the highest per-iteration efficiency. Applications to two empirical instrumental variable regressions corroborate these findings: the Approx implementation scales to larger designs where other methods become impractical, while still delivering precise inference. Although developed for moment-based quasi-posteriors, the proposed approach also extends to risk-based quasi-Bayesian formulations when first-order conditions are linear and can be transformed analogously. Overall, the proposed algorithms provide a practical and robust tool for quasi-Bayesian analysis in statistical applications.
https://arxiv.org/abs/2511.17117
Academic Papers
svg
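The delayed-acceptance mechanism described above is easy to sketch in its generic form: a cheap surrogate density screens proposals, and the expensive target is evaluated only for survivors, with a second-stage correction that preserves the exact stationary distribution. The toy below uses a Gaussian target and a deliberately crude surrogate; it illustrates the two-stage test only, not the paper's quasi-likelihood surrogate or conditional-posterior proposal.

```python
import numpy as np

def da_mcmc(log_target, log_surrogate, x0, n_iter, step, rng):
    """Delayed-acceptance Metropolis: screen proposals with a cheap surrogate
    density first, and evaluate the expensive target only for survivors."""
    x, chain = x0, []
    lt_x, ls_x = log_target(x0), log_surrogate(x0)
    for _ in range(n_iter):
        prop = x + step * rng.normal()
        ls_p = log_surrogate(prop)
        # Stage 1: ordinary Metropolis test against the surrogate.
        if np.log(rng.uniform()) < ls_p - ls_x:
            lt_p = log_target(prop)
            # Stage 2: correct for the surrogate/target mismatch.
            if np.log(rng.uniform()) < (lt_p - lt_x) - (ls_p - ls_x):
                x, lt_x, ls_x = prop, lt_p, ls_p
        chain.append(x)
    return np.array(chain)

rng = np.random.default_rng(1)
target = lambda x: -0.5 * (x - 3.0) ** 2            # N(3, 1), up to a constant
surrogate = lambda x: -0.5 * (x - 2.8) ** 2 / 1.5   # deliberately crude stand-in
chain = da_mcmc(target, surrogate, x0=0.0, n_iter=20000, step=1.0, rng=rng)
```

Even with a mis-centred, over-dispersed surrogate, the stage-2 correction leaves the chain targeting N(3, 1); the saving comes from skipping target evaluations on stage-1 rejections.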
ef3e3591ca34879c765e9cee9b927d08dbd15ddd87b2147b79f78242141f3f2f
2026-01-16T00:00:00-05:00
Estimation of Parameters of the Truncated Normal Distribution with Unknown Bounds
arXiv:2601.09857v1 Announce Type: new Abstract: Estimators of parameters of truncated distributions, namely the truncated normal distribution, have been widely studied for a known truncation region. There is also literature for estimating the unknown bounds for known parent distributions. In this work, we develop a novel algorithm under the expectation-solution (ES) framework, which is an iterative method of solving nonlinear estimating equations, to estimate both the bounds and the location and scale parameters of the parent normal distribution utilizing the theory of best linear unbiased estimates from location-scale families of distributions and unbiased minimum variance estimation of truncation regions. The conditions for the algorithm to converge to the solution of the estimating equations for a fixed sample size are discussed, and the asymptotic properties of the estimators are characterized using results on M- and Z-estimation from empirical process theory. The proposed method is then compared to methods utilizing the known truncation bounds via Monte Carlo simulation.
https://arxiv.org/abs/2601.09857
Academic Papers
svg
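For intuition about the estimation problem, here is a crude two-step stand-in (not the paper's ES algorithm): take the sample extremes as estimates of the truncation bounds, then maximise the truncated-normal likelihood over the parent location and scale. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, truncnorm

def fit_truncnorm_unknown_bounds(x):
    """Two-step stand-in for joint estimation: sample extremes estimate the
    bounds, then a Nelder-Mead maximisation of the truncated-normal
    likelihood in (mu, log sigma) estimates the parent parameters."""
    a_hat, b_hat = x.min(), x.max()
    def nll(theta):
        mu, sigma = theta[0], np.exp(theta[1])
        mass = norm.cdf(b_hat, mu, sigma) - norm.cdf(a_hat, mu, sigma)
        if mass <= 0:
            return np.inf                     # guard against degenerate scales
        return -(norm.logpdf(x, mu, sigma) - np.log(mass)).sum()
    res = minimize(nll, x0=[x.mean(), np.log(x.std())], method="Nelder-Mead")
    return a_hat, b_hat, res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(2)
# Parent N(0, 2^2), truncated to [-4, 3] (scipy parameterises bounds in z-units).
x = truncnorm.rvs(a=(-4 - 0) / 2, b=(3 - 0) / 2, loc=0, scale=2,
                  size=5000, random_state=rng)
a_hat, b_hat, mu_hat, sigma_hat = fit_truncnorm_unknown_bounds(x)
```

The sample extremes are biased inwards and location/scale are weakly identified under heavy truncation, which is precisely the sort of deficiency the ES framework in the abstract is designed to address.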
e8ffce080b23d0de966669b7a56c85c1164e305cf4ac4d5fe691500d99fce2ba
2026-01-16T00:00:00-05:00
High Dimensional Gaussian and Bootstrap Approximations in Generalized Linear Models
arXiv:2601.09925v1 Announce Type: new Abstract: Generalized Linear Models (GLMs) extend ordinary linear regression by linking the mean of the response variable to covariates through appropriate link functions. This paper investigates the asymptotic behavior of GLM estimators when the parameter dimension $d$ grows with the sample size $n$. In the first part, we establish Gaussian approximation results for the distribution of a properly centered and scaled GLM estimator uniformly over the class of convex sets and Euclidean balls. Using high-dimensional results from Fang and Koike (2024) for the leading Bahadur term, bounding remainder terms as in He and Shao (2000), and applying Nazarov's (2003) Gaussian isoperimetric inequality, we show that Gaussian approximation holds when $d = o(n^{2/5})$ for convex sets and $d = o(n^{1/2})$ for Euclidean balls, the best possible rates, matching those for high-dimensional sample means. We further extend these results to the bootstrap approximation when the covariance matrix is unknown. In the second part, when $d \gg n$, a natural question is whether all covariates are equally important. To answer this, we employ sparsity in GLM through the Lasso estimator. While Lasso is widely used for variable selection, it cannot achieve both Variable Selection Consistency (VSC) and $n^{1/2}$-consistency simultaneously (Lahiri, 2021). Under the regime ensuring VSC, we show that Gaussian approximation for the Lasso estimator fails. To overcome this, we propose a Perturbation Bootstrap (PB) approach and establish a Berry-Esseen type bound for its approximation uniformly over the class of convex sets. Simulation studies confirm the strong finite-sample performance of the proposed method.
https://arxiv.org/abs/2601.09925
Academic Papers
svg
fe325e1afaad2ed08777f0be68345fcef926bb6ca36a06e32b1a21cc95a0c059
2026-01-16T00:00:00-05:00
Tree Estimation and Saddlepoint-Based Diagnostics for the Nested Dirichlet Distribution: Application to Compositional Behavioral Data
arXiv:2601.09941v1 Announce Type: new Abstract: The Nested Dirichlet Distribution (NDD) provides a flexible alternative to the Dirichlet distribution for modeling compositional data, relaxing constraints on component variances and correlations through a hierarchical tree structure. While theoretically appealing, the NDD is underused in practice due to two main limitations: the need to predefine the tree structure and the lack of diagnostics for evaluating model fit. This paper addresses both issues. First, we introduce a data-driven, greedy tree-finding algorithm that identifies plausible NDD tree structures from observed data. Second, we propose novel diagnostic tools, including pseudo-residuals based on a saddlepoint approximation to the marginal distributions and a likelihood displacement measure to detect influential observations. These tools provide accurate and computationally tractable assessments of model fit, even when marginal distributions are analytically intractable. We demonstrate our approach through simulation studies and apply it to data from a Morris water maze experiment, where the goal is to detect differences in spatial learning strategies among cognitively impaired and unimpaired mice. Our methods yield interpretable structures and improved model evaluation in a realistic compositional setting. An accompanying R package is provided to support reproducibility and application to new datasets.
https://arxiv.org/abs/2601.09941
Academic Papers
svg
dd26f36fe1cdb69c3041a1fcbaabd899ba0edfb9b841250d075cfd066b936d44
2026-01-16T00:00:00-05:00
Derivations for the Cumulative Standardized Binomial EWMA (CSB-EWMA) Control Chart
arXiv:2601.09968v1 Announce Type: new Abstract: This paper presents the exact mathematical derivation of the mean and variance properties for the Exponentially Weighted Moving Average (EWMA) statistic applied to binomial proportion monitoring in Multiple Stream Processes (MSPs). We develop a Cumulative Standardized Binomial EWMA (CSB-EWMA) formulation that provides adaptive control limits based on exact time-varying variance calculations, overcoming the limitations of asymptotic approximations during early-phase monitoring. The derivations are rigorously validated through Monte Carlo simulations, demonstrating remarkable agreement between theoretical predictions and empirical results. This work establishes a theoretical foundation for distribution-free monitoring of binary outcomes across parallel data streams, with applications in statistical process control across diverse domains including manufacturing, healthcare, and cybersecurity.
https://arxiv.org/abs/2601.09968
Academic Papers
svg
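The value of exact time-varying variances over asymptotic ones can be seen with the standard EWMA recursion Z_t = λ p̂_t + (1 − λ) Z_{t−1}, whose exact variance is Var(Z_t) = σ² (λ/(2 − λ))(1 − (1 − λ)^{2t}) with σ² = p₀(1 − p₀)/n for a binomial proportion. This is the textbook formula, not the paper's CSB-specific derivation; the parameters below are illustrative.

```python
import numpy as np

def ewma_limits(p0, n, lam, L, t_max):
    """Exact time-varying EWMA upper control limits for a binomial proportion,
    alongside the asymptotic (steady-state) limits for comparison."""
    t = np.arange(1, t_max + 1)
    var_x = p0 * (1 - p0) / n                       # per-sample variance
    exact = var_x * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t))
    asym = np.full(t_max, var_x * lam / (2 - lam))  # steady-state variance
    return p0 + L * np.sqrt(exact), p0 + L * np.sqrt(asym)

ucl_exact, ucl_asym = ewma_limits(p0=0.1, n=50, lam=0.2, L=3, t_max=30)
```

The exact limits start tight (Var(Z₁) = λ²σ²) and widen monotonically towards the asymptotic band, which is why asymptotic limits are anti-conservative in early-phase monitoring.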
816c34be8cefd43cc65f0e603e84a7dff7e4ebe5b1ed6d584911a892504e6260
2026-01-16T00:00:00-05:00
Estimating the effect of lymphovascular invasion on 2-year survival probability under endogeneity: a recursive copula-based approach
arXiv:2601.09984v1 Announce Type: new Abstract: Lymphovascular invasion (LVI) is an important prognostic marker for head and neck squamous cell carcinoma (HNSC), but the true effect of LVI on survival may be distorted by endogeneity arising from unmeasured confounding. Conventional one-stage conditional models and instrument-based two-stage estimators are prone to bias under endogeneity, and sufficiently strong instruments are often unavailable in practice. To address these challenges, we propose a semiparametric recursive copula framework that jointly specifies marginal models for both LVI, treated as an endogenous exposure, and a binary 2-year survival outcome, and links them through a flexible copula to account for latent confounding and accommodate censoring without requiring strong instruments. In two simulation studies, we systematically varied sample sizes, censoring rates from 0% to 60%, and endogeneity strengths, and assessed robustness under moderate model misspecification. The proposed copula framework exhibited reduced bias and improved interval coverage compared with both one-stage and two-stage approaches while maintaining robustness to moderate misspecification. We applied the method to HNSC cases with associated clinical and microRNA data from The Cancer Genome Atlas (n = 215), and found that LVI significantly reduced 2-year survival probability by approximately 47%, with a 95% confidence interval of -0.61 to -0.29 on the probability scale. The estimated positive dependence parameter indicates that the attenuation is driven by residual dependence between unobserved components of LVI and survival. Overall, the proposed copula framework yields more credible effect estimates for survival outcomes in the absence of strong instruments, mitigating biases due to endogeneity and censoring and strengthening quantitative evidence for HNSC research.
https://arxiv.org/abs/2601.09984
Academic Papers
svg
792cb7484729624df13f7f34d55efb6ba4e553eeb5f6ad61c5d7a738cb9f9c7d
2026-01-16T00:00:00-05:00
The Knowable Future: Mapping the Decay of Past-Future Mutual Information Across Forecast Horizons
arXiv:2601.10006v1 Announce Type: new Abstract: The ability to assess ex-ante whether a time series is likely to be accurately forecast is important for forecasting practice because it informs the degree of modelling effort warranted. We define forecastability as a property of a time series (given a declared information set), and measure horizon-specific forecastability as the reduction in uncertainty provided by the past, using auto-mutual information (AMI) at lag h. AMI is estimated from training data using a k-nearest-neighbour estimator and evaluated against out-of-sample forecast error (sMAPE) on a filtered, balanced sample of 1,350 M4 series across six sampling frequencies. Seasonal Naive, ETS, and N-BEATS are used as probes of out-of-sample forecast performance. Training-only AMI provides a frequency-conditional diagnostic for forecast difficulty: for Hourly, Weekly, Quarterly, and Yearly series, AMI exhibits consistently negative rank correlation with sMAPE across probes. Under N-BEATS, the correlation is strongest for Hourly (ρ = -0.52) and Weekly (ρ = -0.51), with Quarterly (ρ = -0.42) and Yearly (ρ = -0.36) also substantial. Monthly is probe-dependent (Seasonal Naive ρ = -0.12; ETS ρ = -0.26; N-BEATS ρ = -0.24). Daily shows notably weaker AMI-sMAPE correlation under this protocol, suggesting limited ability to discriminate between series despite the presence of temporal dependence. The findings support within-frequency triage and effort allocation based on measurable signal content prior to forecasting, rather than between-frequency comparisons of difficulty.
https://arxiv.org/abs/2601.10006
Academic Papers
svg
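The AMI diagnostic is simple to compute. The sketch below estimates mutual information between a series and its lag-h past with scikit-learn's k-nearest-neighbour estimator, used here as a stand-in for the paper's estimator; the AR(1) coefficient and sample sizes are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def auto_mutual_information(x, h, n_neighbors=3, seed=0):
    """AMI at lag h: k-NN mutual information between x_{t-h} and x_t."""
    past, future = x[:-h].reshape(-1, 1), x[h:]
    return mutual_info_regression(past, future,
                                  n_neighbors=n_neighbors,
                                  random_state=seed)[0]

rng = np.random.default_rng(3)
n = 2000
ar = np.empty(n)                       # AR(1): the past carries real signal
ar[0] = rng.normal()
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()
noise = rng.normal(size=n)             # white noise: the past is uninformative
ami_ar = auto_mutual_information(ar, h=1)
ami_noise = auto_mutual_information(noise, h=1)
```

A persistent series yields substantial lag-1 AMI while white noise yields essentially none, which is the contrast the triage diagnostic exploits before any forecasting model is fit.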
ec72287dd45a8c9333c72badd4846d270fa94e0bb3a4d1863292ffe26ee19757
2026-01-16T00:00:00-05:00
Weighted least squares estimation by multivariate-dependent weights for linear regression models
arXiv:2601.10049v1 Announce Type: new Abstract: Multivariate linear regression models often face the problem of heteroscedasticity caused by multiple explanatory variables. Weighted least squares estimation with univariate-dependent weights has limitations in constructing weight functions. Therefore, this paper proposes a multivariate-dependent weighted least squares estimation method. By constructing a linear combination of explanatory variables and maximizing its Spearman rank correlation coefficient with the absolute residuals, combined with the maximum likelihood method to model heteroscedasticity, the approach comprehensively reflects the trend of variance changes in the random error and improves the accuracy of the model. This paper demonstrates that the optimal linear combination exponent estimator for heteroscedastic volatility obtained by our algorithm possesses consistency and asymptotic normality. In the simulation experiment, three scenarios of heteroscedasticity were designed, and the comparison showed that the proposed method was superior to the univariate-dependent weighting method in parameter estimation and model prediction. In the real data applications, the proposed method was applied to two real-world datasets about consumer spending in China and housing prices in Boston. From the perspectives of MAE, RSE, cross-validation, and fitting performance, its accuracy and stability were verified in terms of model prediction, interval estimation, and generalization ability. Additionally, the proposed method demonstrated relative advantages in fitting data with large fluctuations. This study provides an effective new approach for dealing with heteroscedasticity in multivariate linear regression.
https://arxiv.org/abs/2601.10049
Academic Papers
svg
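The core idea, a variance index built from a linear combination of covariates chosen by Spearman correlation with the absolute residuals, can be sketched as follows. This is a simplified stand-in (random direction search plus a log-linear scale model), not the paper's maximum-likelihood procedure; the data-generating process is made up.

```python
import numpy as np
from scipy.stats import spearmanr

def wls_multivariate_weights(X, y, n_dirs=200, seed=0):
    """WLS with a multivariate-dependent weight: pick the direction whose
    index X @ w has maximal Spearman correlation with |OLS residuals|, fit
    log|e| on that index, and reweight by the fitted inverse scale."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([np.ones(len(y)), X])
    beta_ols, *_ = np.linalg.lstsq(Z, y, rcond=None)
    abs_e = np.abs(y - Z @ beta_ols)
    best_w, best_rho = None, -np.inf
    for _ in range(n_dirs):                      # crude search over directions
        w = rng.normal(size=X.shape[1]); w /= np.linalg.norm(w)
        rho = spearmanr(X @ w, abs_e)[0]
        if rho > best_rho:
            best_rho, best_w = rho, w
    idx = X @ best_w
    G = np.column_stack([np.ones(len(y)), idx])  # scale model: |e| ~ exp(c0 + c1*idx)
    c, *_ = np.linalg.lstsq(G, np.log(abs_e + 1e-8), rcond=None)
    sw = np.exp(-(G @ c))                        # inverse fitted scale as weight
    beta_wls, *_ = np.linalg.lstsq(Z * sw[:, None], y * sw, rcond=None)
    return beta_ols, beta_wls

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 2))
idx_true = (X[:, 0] + X[:, 1]) / np.sqrt(2)      # true variance-driving index
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n) * np.exp(idx_true)
beta_ols, beta_wls = wls_multivariate_weights(X, y)
```

Because the weights downweight the high-variance region picked out by the index, the reweighted estimator is markedly more stable than OLS when the error scale depends on a combination of covariates.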
60ecdf8eb026438710df6701e74bcdb9dace6016cf2461f7cf8a664cdd44dffa
2026-01-16T00:00:00-05:00
Asymptotic Theory of Tail Dependence Measures for Checkerboard Copula and the Validity of Multiplier Bootstrap
arXiv:2601.10252v1 Announce Type: new Abstract: Nonparametric estimation and inference for lower and upper tail copulas under unknown marginal distributions are considered. To mitigate the inherent discreteness and boundary irregularities of the empirical tail copula, a checkerboard smoothed tail copula estimator based on local bilinear interpolation is introduced. Almost sure uniform consistency and weak convergence of the centered and scaled empirical checkerboard tail copula process are established in the space of bounded functions. The resulting Gaussian limit differs from its known-marginal counterpart and incorporates additional correction terms that account for first-order stochastic errors arising from marginal estimation. Since the limiting covariance structure depends on the unknown tail copula and its partial derivatives, direct asymptotic inference is generally infeasible. To address this challenge, a direct multiplier bootstrap procedure tailored to the checkerboard tail copula is developed. By combining multiplier reweighting with checkerboard smoothing, the bootstrap preserves the extremal dependence structure of the data and consistently captures both joint tail variability and the effects of marginal estimation. Conditional weak convergence of the bootstrap process to the same Gaussian limit as the original estimator is established, yielding asymptotically valid inference for smooth functionals of the tail copula, including the lower and upper tail dependence coefficients. The proposed approach provides a fully feasible framework for confidence regions and hypothesis testing in tail dependence analysis without requiring explicit estimation of the limiting covariance structure. A simulation study illustrates the finite-sample performance of the proposed estimator and demonstrates the accuracy and reliability of the bootstrap confidence intervals under various dependence structures and tuning parameter choices.
https://arxiv.org/abs/2601.10252
Academic Papers
svg
ec90d0d34825a3c40e9f439e4754628fd3f86048c2fe3f44e6877dfa75693aef
2026-01-16T00:00:00-05:00
Modeling mental health trajectories during the COVID-19 pandemic using UK-wide data in the presence of sociodemographic variables
arXiv:2601.10445v1 Announce Type: new Abstract: Background: The negative effects of the COVID-19 pandemic on the mental health and well-being of populations are an important public health issue. Our study aims to determine the underlying factors shaping mental health trajectories during the COVID-19 pandemic in the UK. Methods: Data from the Understanding Society COVID-19 Study were utilized and the core analysis focussed on GHQ36 scores as the outcome variable. We used GAMs to evaluate trends over time and the role of sociodemographic variables, i.e., age, sex, ethnicity, country of residence (in UK), job status (employment), household income, living with a partner, living with children under age 16, and living with a long-term illness, on the variation of mental health during the study period. Results: Statistically significant differences in mental health were observed for age, sex, ethnicity, country of residence (in UK), job status (employment), household income, living with a partner, living with children under age 16, and living with a long-term illness. Women experienced higher GHQ36 scores than men, with the GHQ36 score expected to increase by 1.260 (95% CI: 1.176, 1.345). Individuals living without a partner were expected to have higher GHQ36 scores, 1.050 (95% CI: 0.949, 1.148) more than those living with a partner, and age groups 16-34, 35-44, 45-54, and 55-64 experienced higher GHQ36 scores relative to those aged 65+. Individuals with relatively lower household income were likely to have poorer mental health than those who were better off. Conclusion: This study identifies key demographic determinants shaping mental health trajectories during the COVID-19 pandemic in the UK. Policies aiming to reduce mental health inequalities should target women, youth, individuals living without a partner, individuals living with children under 16, individuals with a long-term illness, and lower-income families.
https://arxiv.org/abs/2601.10445
Academic Papers
svg
2f3da7fd502f7d736618f491e4821ff5633eb4275975772b6c08b39e47ba36d7
2026-01-16T00:00:00-05:00
MitoFREQ: A Novel Approach for Mitogenome Frequency Estimation from Top-level Haplogroups and Single Nucleotide Variants
arXiv:2601.10464v1 Announce Type: new Abstract: Lineage marker population frequencies can serve as one way to express evidential value in forensic genetics. However, for high-quality whole mitochondrial DNA genome sequences (mitogenomes), population data remain limited. In this paper, we offer a new method, MitoFREQ, for estimating the population frequencies of mitogenomes. MitoFREQ uses the mitogenome resources HelixMTdb and gnomAD, harbouring information from 195,983 and 56,406 mitogenomes, respectively. Neither HelixMTdb nor gnomAD can be queried directly for individual mitogenome frequencies, but both offer single nucleotide variant (SNV) allele frequencies for each of 30 "top-level" haplogroups (TLHG). We propose using the HelixMTdb and gnomAD resources by classifying a given mitogenome within the TLHG scheme and subsequently using the frequency of its rarest SNV within that TLHG weighted by the TLHG frequency. We show that this method is guaranteed to provide a higher population frequency estimate than if a refined haplogroup and its SNV frequencies were used. Further, we show that top-level haplogrouping can be achieved by using only 227 specific positions for 99.9% of the tested mitogenomes, potentially making the method available for low-quality samples. The method was tested on two types of datasets: high-quality forensic reference datasets and a diverse collection of scrutinised mitogenomes from GenBank. This dual evaluation demonstrated that the approach is robust across both curated forensic data and broader population-level sequences. This method produced likelihood ratios in the range of 100-100,000, demonstrating its potential to strengthen the statistical evaluation of forensic mtDNA evidence. We have developed an open-source R package `mitofreq` that implements our method, including a Shiny app where custom TLHG frequencies can be supplied.
https://arxiv.org/abs/2601.10464
Academic Papers
svg
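The frequency estimate described above is a simple product, which can be sketched directly. All numbers below are hypothetical; the real method draws the TLHG and SNV frequencies from HelixMTdb/gnomAD counts.

```python
def mitogenome_frequency(tlhg_freq, snv_freqs_in_tlhg):
    """Upper-bound-style frequency estimate: the TLHG frequency times the
    frequency of the profile's rarest observed SNV within that TLHG
    (hypothetical inputs for illustration)."""
    return tlhg_freq * min(snv_freqs_in_tlhg)

# Hypothetical: a top-level haplogroup at 40% in the database, with the
# profile's rarest variant seen in 0.2% of mitogenomes in that TLHG.
f = mitogenome_frequency(0.40, [0.31, 0.002, 0.12])
lr = 1.0 / f    # likelihood ratio under a simple identity proposition
```

Since the estimate uses only the single rarest SNV within a broad haplogroup, it can only overstate the true mitogenome frequency, which is the conservativeness guarantee the abstract claims.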
6074982a6666df9fe8dd985c0d0d5831f61f8801bda965802a6d5a888988bae1
2026-01-16T00:00:00-05:00
Mesh Denoising
arXiv:2601.10487v1 Announce Type: new Abstract: In this paper, we study four mesh denoising methods: linear filtering, a heat diffusion method, Sobolev regularization, and, to a lesser extent, a barycentric approach based on the Sinkhorn algorithm. We illustrate that, for a simple image denoising task, a naive choice of a Gibbs kernel can lead to unsatisfactory results. We demonstrate that while Sobolev regularization is the fastest method in our implementation, it produces slightly less faithful denoised meshes than the best results obtained with iterative filtering or heat diffusion. We empirically show that, for the large mesh considered, the heat diffusion method is slower and not more effective than filtering, whereas on a small mesh an appropriate choice of diffusion parameters can improve the quality. Finally, we observe that all three mesh-based methods perform markedly better on the large mesh than on the small one.
https://arxiv.org/abs/2601.10487
Academic Papers
svg
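The Sobolev-regularization approach compared above amounts to one global linear solve against the mesh Laplacian. The sketch below is a simplified 2-D stand-in on a closed polyline (not the paper's implementation or meshes): solve (I + λL) V̂ = V with the combinatorial graph Laplacian L.

```python
import numpy as np

def laplacian_denoise(V, edges, lam):
    """Sobolev-style denoising: solve (I + lam * L) V_hat = V, where L is the
    combinatorial graph Laplacian of the vertex connectivity."""
    n = len(V)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return np.linalg.solve(np.eye(n) + lam * L, V)

rng = np.random.default_rng(6)
n = 100
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
clean = np.column_stack([np.cos(theta), np.sin(theta)])  # smooth closed curve
noisy = clean + rng.normal(0, 0.05, size=(n, 2))
edges = [(i, (i + 1) % n) for i in range(n)]             # cycle connectivity
denoised = laplacian_denoise(noisy, edges, lam=2.0)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

The solve damps high-frequency noise much more than the smooth geometry, so the denoised vertices land closer to the clean curve; the mild shrinkage of the low-frequency component is the fidelity loss the abstract attributes to Sobolev regularization.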
c01fe0b6057ddbb80679de4b43b9223b6205332408c87b3b44044bce872becdf
2026-01-16T00:00:00-05:00
A Propagation Framework for Network Regression
arXiv:2601.10533v1 Announce Type: new Abstract: We introduce a unified and computationally efficient framework for regression on network data, addressing limitations of existing models that require specialized estimation procedures or impose restrictive decay assumptions. Our Network Propagation Regression (NPR) models outcomes as functions of covariates propagated through network connections, capturing both direct and indirect effects. NPR is estimable via ordinary least squares for continuous outcomes and standard routines for binary, categorical, and time-to-event data, all within a single interpretable framework. We establish consistency and asymptotic normality under weak conditions and develop valid hypothesis tests for the order of network influence. Simulation studies demonstrate that NPR consistently outperforms established approaches, such as the linear-in-means model and regression with network cohesion, especially under model misspecification. An application to social media sentiment analysis highlights the practical utility and robustness of NPR in real-world settings.
https://arxiv.org/abs/2601.10533
Academic Papers
svg
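The core NPR idea described in the abstract above — outcomes as functions of covariates propagated through the network, estimable by ordinary least squares — can be sketched schematically: stack the direct and propagated covariate blocks [X, AX, ..., A^k X] and run OLS. This is a generic reading of the idea, not the authors' estimator; the row-normalized adjacency and the order `k=2` are illustrative assumptions.

```python
import numpy as np

def npr_design(X, A, order=2):
    """Stack direct and propagated covariates [X, AX, ..., A^k X].

    A is a (row-normalized) adjacency matrix; `order` is the assumed order
    of network influence."""
    blocks, P = [X], X
    for _ in range(order):
        P = A @ P
        blocks.append(P)
    return np.hstack(blocks)

rng = np.random.default_rng(0)
n = 200
A = (rng.random((n, n)) < 0.1).astype(float)
A /= np.maximum(A.sum(1, keepdims=True), 1)        # row-normalize, avoid 0-division
X = rng.normal(size=(n, 2))
beta = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])   # direct + first-order effects only
Z = npr_design(X, A, order=2)
y = Z @ beta + 0.1 * rng.normal(size=n)
beta_hat = np.linalg.lstsq(Z, y, rcond=None)[0]    # plain OLS fit
```

The appeal sketched here is that a single design matrix turns the network regression into a standard least-squares problem, which is what makes extensions to binary, categorical, and survival outcomes via standard routines plausible.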
a5a0378faec7e302b4c1618caa19986f5e79ab584511d3b86b43169fec69bbf1
2026-01-16T00:00:00-05:00
From aggressive to conservative early stopping in Bayesian group sequential designs
arXiv:2601.10590v1 Announce Type: new Abstract: Group sequential designs (GSDs) are widely used in confirmatory trials to allow interim monitoring while preserving control of the type I error rate. In the frequentist framework, O'Brien-Fleming-type stopping boundaries dominate practice because they impose highly conservative early stopping while allowing more liberal decisions as information accumulates. Bayesian GSDs, in contrast, are most often implemented using fixed posterior probability thresholds applied uniformly at all analyses. While such designs can be calibrated to control the overall type I error rate, they do not penalise early analyses and can therefore lead to substantially more aggressive early stopping. Such behaviour can risk premature conclusions and inflation of treatment effect estimates, raising concerns for confirmatory trials. We introduce two practically implementable refinements that restore conservative early stopping in Bayesian GSDs. The first introduces a two-phase structure for posterior probability thresholds, applying more stringent criteria in the early phase of the trial and relaxing them later to preserve power. The second replaces posterior probability monitoring at interim looks with predictive probability criteria, which naturally account for uncertainty in future data and therefore suppress premature stopping. Both strategies require only one additional tuning parameter and can be efficiently calibrated. In the HYPRESS setting, both approaches achieve higher power than the conventional Bayesian design while producing alpha-spending profiles closely aligned with O'Brien-Fleming-type behaviour at early looks. These refinements provide a principled and tractable way to align Bayesian GSDs with accepted frequentist practice and regulatory expectations, supporting their robust application in confirmatory trials.
https://arxiv.org/abs/2601.10590
Academic Papers
svg
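The first refinement in the abstract above — two-phase posterior-probability thresholds that are stringent early and relaxed later — can be sketched for a one-arm binary endpoint with a Beta-Binomial model. The null rate `p0`, the thresholds, and the phase-switch look are illustrative values, not the paper's calibrated design, and the posterior probability is estimated by naive Monte Carlo rather than any calibration routine.

```python
import numpy as np

def gsd_stop_decisions(successes, looks, p0=0.3, thresholds=(0.995, 0.95),
                       switch_look=2, ndraw=100_000, seed=1):
    """Two-phase posterior-threshold monitoring for a one-arm binary endpoint.

    At look k, stop for efficacy when P(p > p0 | data) exceeds the phase
    threshold: a stringent one before `switch_look`, a relaxed one after.
    All design constants here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    decisions = []
    for k, (s, n) in enumerate(zip(successes, looks)):
        a, b = 1 + s, 1 + n - s                   # Beta(1, 1) prior -> Beta posterior
        post = rng.beta(a, b, ndraw)              # Monte Carlo posterior draws
        prob = float((post > p0).mean())
        thr = thresholds[0] if k < switch_look else thresholds[1]
        decisions.append((prob, prob > thr))
    return decisions

# Proportionally identical interim data at each look: the stringent early
# threshold blocks stopping, while the relaxed late threshold allows it.
dec = gsd_stop_decisions(successes=[14, 28, 42], looks=[30, 60, 90])
```

The point of the sketch is the asymmetry: the same observed response rate that fails the early criterion can pass once the design relaxes, mimicking O'Brien-Fleming-style conservatism at early looks.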
524cd8aa6f7fe2a4c80917858adc51e42e2189f61c96abb6e665e3330e778229
2026-01-16T00:00:00-05:00
A Bayesian Discrete Framework for Enhancing Decision-Making Processes in Clinical Trial Designs and Evaluations
arXiv:2601.10615v1 Announce Type: new Abstract: This study examines the application of the Bayesian approach in the context of clinical trials, emphasizing its increasing importance in contemporary biomedical research. While the conventional frequentist approach provides a foundational basis for analysis, it often lacks the flexibility to integrate prior knowledge, which can constrain its effectiveness in adaptive settings. In contrast, Bayesian methods enable continual refinement of statistical inferences through the assimilation of accumulating evidence, thereby supporting more informed decision-making and improving the reliability of trial findings. This paper also considers persistent challenges in clinical investigations, including replication difficulties and the misinterpretation of statistical results, suggesting that Bayesian strategies may offer a path toward enhanced analytical robustness. Moreover, discrete probability models, specifically the Binomial, Poisson, and Negative Binomial distributions, are explored for their suitability in modeling clinical endpoints, particularly in trials involving binary responses or data with overdispersion. The discussion further incorporates Bayesian networks and Bayesian estimation techniques, with a comparative evaluation against maximum likelihood estimation to elucidate differences in inferential behavior and practical implementation.
https://arxiv.org/abs/2601.10615
Academic Papers
svg
2d6d768c6f7d028e705d0e29b22b1f85da84461b44025615eeb3390a1118ff84
2026-01-16T00:00:00-05:00
Fair Regression under Demographic Parity: A Unified Framework
arXiv:2601.10623v1 Announce Type: new Abstract: We propose a unified framework for fair regression tasks formulated as risk minimization problems subject to a demographic parity constraint. Unlike many existing approaches that are limited to specific loss functions or rely on challenging non-convex optimization, our framework is applicable to a broad spectrum of regression tasks. Examples include linear regression with squared loss, binary classification with cross-entropy loss, quantile regression with pinball loss, and robust regression with Huber loss. We derive a novel characterization of the fair risk minimizer, which yields a computationally efficient estimation procedure for general loss functions. Theoretically, we establish the asymptotic consistency of the proposed estimator and derive its convergence rates under mild assumptions. We illustrate the method's versatility through detailed discussions of several common loss functions. Numerical results demonstrate that our approach effectively minimizes risk while satisfying fairness constraints across various regression settings.
https://arxiv.org/abs/2601.10623
Academic Papers
svg
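The demographic-parity constraint in the abstract above requires the prediction distribution to be the same across sensitive groups. One known post-processing route in fair regression, shown here as a generic illustration (it is not the unified estimator the paper proposes), maps each group's predictions through the pooled empirical quantile function.

```python
import numpy as np

def dp_adjust(preds, groups):
    """Enforce empirical demographic parity by mapping each group's
    predictions through the pooled empirical quantile function.

    A barycenter-style post-processing sketch: within each group, a
    prediction's rank is converted to a quantile level, then evaluated
    on the pooled distribution."""
    preds = np.asarray(preds, dtype=float)
    pooled = np.sort(preds)
    out = np.empty_like(preds)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = preds[idx].argsort().argsort()     # within-group ranks 0..n_g-1
        q = (ranks + 0.5) / len(idx)               # rank -> quantile level
        out[idx] = np.quantile(pooled, q)          # pooled quantile map
    return out

rng = np.random.default_rng(2)
g = np.repeat([0, 1], 200)
raw = np.where(g == 0, rng.normal(0.0, 1.0, 400), rng.normal(1.0, 1.0, 400))
fair = dp_adjust(raw, g)   # group distributions now match, removing the mean gap
```

Because the mapping is monotone within each group, within-group orderings are preserved while the between-group distribution gap is removed, which is the trade-off a demographic-parity-constrained risk minimizer negotiates.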
54ab5d6a3e2959b6a9dfe21e89a2c3feb25ac97f097f61d26bd68aea0624ca4c
2026-01-16T00:00:00-05:00
Robust Bayesian Inference for Measurement Error Misspecification: The Berkson and Classical Cases
arXiv:2306.01468v3 Announce Type: replace Abstract: Measurement error occurs when a covariate influencing a response variable is corrupted by noise. This can lead to misleading inference outcomes, particularly in problems where accurately estimating the relationship between covariates and response variables is crucial, such as causal effect estimation. Existing methods for dealing with measurement error often rely on strong assumptions such as knowledge of the error distribution or its variance and availability of replicated measurements of the covariates. We propose a Bayesian Nonparametric Learning framework that is robust to misspecification of these assumptions and does not require replicate measurements. This approach gives rise to a general framework that is suitable for both Classical and Berkson error models via the appropriate specification of the prior centering measure of a Dirichlet Process (DP). Moreover, it offers flexibility in the choice of loss function depending on the type of regression model. We provide bounds on the generalisation error based on the Maximum Mean Discrepancy (MMD) loss which allows for generalisation to non-Gaussian distributed errors and nonlinear covariate-response relationships. We showcase the effectiveness of the proposed framework versus prior art in real-world problems containing either Berkson or Classical measurement errors.
https://arxiv.org/abs/2306.01468
Academic Papers
svg
5c53e98f3496591699b42a95f83f0846b6730be02650b4d8a2ed0c1462019b68
2026-01-16T00:00:00-05:00
Collective Outlier Detection and Enumeration with Conformalized Closed Testing
arXiv:2308.05534v3 Announce Type: replace Abstract: This paper develops a flexible distribution-free method for collective outlier detection and enumeration, designed for situations in which the presence of outliers can be detected powerfully even though their precise identification may be challenging due to the sparsity, weakness, or elusiveness of their signals. This method builds upon recent developments in conformal inference and integrates classical ideas from other areas, including multiple testing, locally most powerful and adaptive rank tests, and non-parametric large-sample asymptotics. The key innovation lies in developing a principled and effective approach for automatically choosing the most appropriate machine learning classifier and two-sample testing procedure for a given data set. The performance of our method is investigated through extensive empirical demonstrations, including an analysis of the LHCO high-energy particle collision data set.
https://arxiv.org/abs/2308.05534
Academic Papers
svg
05ee9a4e8ef6509bf00264d21280a6a55e7398b13d0d76561f10c45cab27473f
2026-01-16T00:00:00-05:00
Composite likelihood inference for the Poisson log-normal model
arXiv:2402.14390v3 Announce Type: replace Abstract: The Poisson log-normal model is a latent variable model that provides a generic framework for the analysis of multivariate count data. Inferring its parameters can be a daunting task since the conditional distribution of the latent variables given the observed ones is intractable. For this model, variational approaches are the gold-standard solution as they prove to be computationally efficient but lack theoretical guarantees on the estimates. Sampling-based solutions are quite the opposite. We first define a Monte Carlo EM algorithm that can achieve maximum likelihood estimators, but that is computationally efficient only for low-dimensional latent spaces. We then propose a novel inference procedure combining the EM framework with composite likelihood and importance sampling estimates. The algorithm preserves the desirable asymptotic properties of maximum likelihood estimators while circumventing the high-dimensional integration bottleneck, thus maintaining computational feasibility for moderately large datasets. This approach enables grounded parameter estimation, confidence intervals, and hypothesis testing. Application to the Barents Sea fish dataset demonstrates the algorithm's capacity to identify significant environmental effects and residual interspecies correlations.
https://arxiv.org/abs/2402.14390
Academic Papers
svg
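The intractable integral behind the abstract above is the Poisson log-normal marginal likelihood, an expectation of a Poisson pmf over a log-normal rate. A naive-sampling Monte Carlo estimate of this one-dimensional case illustrates the object the paper attacks with importance sampling inside its composite-likelihood EM; this sketch is not their estimator.

```python
from math import lgamma

import numpy as np

def pln_loglik_mc(y, mu, sigma, ndraw=20_000, seed=3):
    """Monte Carlo estimate of the Poisson log-normal log-likelihood
    log p(y) = log E_Z[Poisson(y; exp(mu + sigma * Z))], Z ~ N(0, 1).

    Uses naive sampling from the latent Gaussian and a log-mean-exp for
    numerical stability."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=ndraw)
    lam = np.exp(mu + sigma * z)                   # latent Poisson rates
    logp = y * np.log(lam) - lam - lgamma(y + 1)   # Poisson log-pmf per draw
    m = logp.max()
    return float(m + np.log(np.mean(np.exp(logp - m))))

ll = pln_loglik_mc(y=3, mu=1.0, sigma=0.5)
```

As `sigma` shrinks to zero the estimate collapses to the plain Poisson log-pmf at rate `exp(mu)`, a convenient sanity check; in higher latent dimensions this naive average degrades, which is exactly the bottleneck motivating the paper's importance-sampling construction.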
7123a53258c8b8bedad41f8bdc76ddf38a27c1881dd2a2336e4f4645c8a589b2
2026-01-16T00:00:00-05:00
Flexible modeling of nonnegative continuous data: Box-Cox symmetric regression and its zero-adjusted extension
arXiv:2601.08600v2 Announce Type: replace Abstract: The Box-Cox symmetric distributions constitute a broad class of probability models for positive continuous data, offering flexibility in modeling skewness and tail behavior. Their parameterization allows a straightforward quantile-based interpretation, which is particularly useful in regression modeling. Despite their potential, only a few specific distributions within this class have been explored in regression contexts, and zero-adjusted extensions have not yet been formally addressed in the literature. This paper formalizes the class of Box-Cox symmetric regression models and introduces a new zero-adjusted extension suitable for modeling data with a non-negligible proportion of observations equal to zero. We discuss maximum likelihood estimation, assess finite-sample performance through simulations, and develop diagnostic tools including residual analysis, local influence measures, and goodness-of-fit statistics. An empirical application on basic education expenditure illustrates the models' ability to capture complex patterns in zero-inflated and highly skewed nonnegative data. To support practical use, we developed the new BCSreg R package, which implements all proposed methods.
https://arxiv.org/abs/2601.08600
Academic Papers
svg
0b2ea41982e337f539d82fab2e80461e7d60277b1b3f9495ea1ed89e79dc38b6
2026-01-16T00:00:00-05:00
Fractional Revival Dynamics in Kerr-Type Systems: Angular Momentum Moments and Classical Analogs
arXiv:2601.09763v1 Announce Type: new Abstract: Wave packet revivals and fractional revivals are hallmark quantum interference phenomena that arise in systems with nonlinear energy spectra, and their signatures in expectation values of observables have been studied extensively in earlier work. In this article, we build on these studies and extend the analysis in two important directions. First, we investigate fractional revival dynamics in angular momentum observables, deriving explicit expressions for the time evolution of their moments and demonstrating that higher-order angular momentum moments provide clear and selective signatures of fractional revivals. Second, we examine classical analogs of quantum revival phenomena and elucidate structural similarities between quantum fractional revivals and recurrence behavior in representative classical systems. Using the Kerr-type nonlinear Hamiltonian as a paradigmatic model, we analyze the autocorrelation function, moment dynamics, and phase-space structures, supported by visualizations such as quantum carpets. Our results broaden the range of experimentally accessible diagnostics of fractional revivals and provide a unified perspective on revival phenomena across quantum and classical dynamical systems.
https://arxiv.org/abs/2601.09763
Academic Papers
svg
e08a91c406d11548eecc183a90c3c9dc1c0d07cddd87ddee4a03ac38ce2e7b8d
2026-01-16T00:00:00-05:00
Three questions on the future of quantum science and technology
arXiv:2601.09769v1 Announce Type: new Abstract: Answers to three questions on the current status and future development of Quantum Science and Technology are presented.
https://arxiv.org/abs/2601.09769
Academic Papers
svg
8287dc80b90c66f7889aca3a0a9992c731073f827118a1731ff051a2416148e0
2026-01-16T00:00:00-05:00
Hierarchical time crystals
arXiv:2601.09779v1 Announce Type: new Abstract: Spontaneous symmetry breaking is one of the central organizing principles in physics. Time crystals have emerged as an exotic phase of matter, spontaneously breaking the time translational symmetry, and are mainly categorized as discrete or continuous. While these distinct types of time crystals have been extensively explored as standalone systems, intriguing effects can arise from their mutual interaction. Here, we demonstrate that a time-independent coupled system of discrete and continuous time crystals induces a simultaneous two-fold temporal symmetry breaking, resulting in a hierarchical time crystal phase. Interestingly, one of the subsystems breaks an emergent discrete temporal symmetry that does not exist in the dynamical generator but rather emerges dynamically, leading to a convoluted non-equilibrium phase. We demonstrate that hierarchical time crystals are robust, emerging for fundamentally different coupling schemes and persisting across wide ranges of system parameters.
https://arxiv.org/abs/2601.09779
Academic Papers
svg
ec7b97ab65931754ad165ffc473d9ca1fbd9d9787df9814b89050eaf86f4c5d7
2026-01-16T00:00:00-05:00
Background cancellation for frequency-selective quantum sensing
arXiv:2601.09792v1 Announce Type: new Abstract: A key challenge in quantum sensing is the detection of weak, time-dependent signals, particularly those that arise as specific frequency perturbations over a background field. Conventional methods usually demand complex dynamical control of the quantum sensor and heavy classical post-processing. We propose a quantum sensor that leverages time-independent interactions and entanglement to function as a passive, tunable, thresholded frequency filter. By encoding the frequency selectivity and thresholding behavior directly into the dynamics, the sensor is responsive only to a target frequency of choice whose amplitude is above a threshold. This approach circumvents the need for complex control schemes and reduces the post-processing overhead.
https://arxiv.org/abs/2601.09792
Academic Papers
svg
0367da7dffdd54e015b81470062b240edacaa1c14b9e70e8a0e5dfd786ba1fc6
2026-01-16T00:00:00-05:00
Fragmented Topological Excitations in Generalized Hypergraph Product Codes
arXiv:2601.09850v1 Announce Type: new Abstract: Product code construction is a powerful tool for constructing quantum stabilizer codes, which serve as a promising paradigm for realizing fault-tolerant quantum computation. Furthermore, the natural mapping between stabilizer codes and the ground states of exactly solvable spin models also motivates the exploration of many-body orders in the stabilizer codes. In this work, we investigate the fracton topological orders in a family of codes obtained by a recently proposed general construction. More specifically, this code family can be regarded as a class of generalized hypergraph product (HGP) codes. We term the corresponding exactly solvable spin models \textit{orthoplex models}, based on the geometry of the stabilizers. In the 3D orthoplex model, we identify a series of intriguing properties within this model family, including non-monotonic ground state degeneracy (GSD) as a function of system size and non-Abelian lattice defects. Most remarkably, in 4D we discover \textit{fragmented topological excitations}: while such excitations manifest as discrete, isolated points in real space, their projections onto lower-dimensional subsystems form connected objects such as loops, revealing the intrinsic topological nature of these excitations. Therefore, fragmented excitations constitute an intriguing intermediate class between point-like and spatially extended topological excitations. In addition, these rich features establish the generalized HGP codes as a versatile and analytically tractable platform for studying the physics of fracton orders.
https://arxiv.org/abs/2601.09850
Academic Papers
svg
b25936924fb3f764f8b7b9b425f361cfa2e662946ae3bbc681a8fddf11f47e6e
2026-01-16T00:00:00-05:00
Time-Dynamic Circuits for Fault-Tolerant Shift Automorphisms in Quantum LDPC Codes
arXiv:2601.09911v1 Announce Type: new Abstract: Quantum low-density parity-check (qLDPC) codes have emerged as a promising approach for realizing low-overhead logical quantum memories. Recent theoretical developments have established shift automorphisms as a fundamental building block for completing the universal set of logical gates for qLDPC codes. However, practical challenges remain because the existing SWAP-based shift automorphism yields logical error rates that are orders of magnitude higher than those for fault-tolerant idle operations. In this work, we address this issue by dynamically varying the syndrome measurement circuits to implement the shift automorphisms without reducing the circuit distance. We benchmark our approach on both twisted and untwisted weight-6 generalized toric codes, including the gross code family. Our time-dynamic circuits for shift automorphisms achieve performance comparable to the idle operations under the circuit-level noise model (SI1000). Specifically, the dynamic circuits achieve more than an order of magnitude reduction in logical error rates relative to the SWAP-based scheme for the gross code at a physical error rate of $10^{-3}$, employing the BP-OSD decoder. Our findings improve both the error resilience and the time overhead of the shift automorphisms in qLDPC codes. Furthermore, our work can lead to alternative syndrome extraction circuit designs, such as leakage removal protocols, providing a practical pathway to utilizing dynamic circuits that extend beyond surface codes towards qLDPC codes.
https://arxiv.org/abs/2601.09911
Academic Papers
svg
f3f47ef776fa710b830b0751b9a3c3e35a72ce8d334db685b96d393721115aeb
2026-01-16T00:00:00-05:00
Beyond Optimization: Harnessing Quantum Annealer Dynamics for Machine Learning
arXiv:2601.09938v1 Announce Type: new Abstract: Quantum annealing is typically regarded as a tool for combinatorial optimization, but its coherent dynamics also offer potential for machine learning. We present a model that encodes classical data into an Ising Hamiltonian, evolves it on a quantum annealer, and uses the resulting probability distributions as feature maps for classification. Experiments on the quantum annealer machine with the Digits dataset, together with simulations on MNIST, demonstrate that short annealing times yield higher classification accuracy, while longer times reduce accuracy but lower sampling costs. We introduce the participation ratio as a measure of the effective model size and show its strong correlation with generalization.
https://arxiv.org/abs/2601.09938
Academic Papers
svg
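The participation ratio used as an effective-model-size measure in the abstract above has a standard one-line form, PR = 1 / Σ p_i², shown here for output probability distributions; the exact definition the authors use is an assumption of this sketch.

```python
import numpy as np

def participation_ratio(probs):
    """Inverse participation ratio: effective number of supported outcomes.

    Equals 1 for a deterministic distribution and K for the uniform
    distribution over K outcomes; intermediate values interpolate."""
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()          # normalize, so unnormalized counts also work
    return float(1.0 / np.sum(p ** 2))

pr_uniform = participation_ratio([0.25] * 4)          # broad annealer output
pr_peaked = participation_ratio([1.0, 0.0, 0.0, 0.0])  # collapsed output
```

Read this way, short annealing times that keep the sampled distribution broad correspond to a large participation ratio, which is the quantity the paper correlates with generalization.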
4abfeeff2722132d326159f27e98fa7c6ef4c3cf7d9ded53ed8fce14e738a056
2026-01-16T00:00:00-05:00
Three Months in the Life of Cloud Quantum Computing
arXiv:2601.09943v1 Announce Type: new Abstract: Quantum Computing (QC) has evolved from a few custom quantum computers, which were only accessible to their creators, to an array of commercial quantum computers that can be accessed on the cloud by anyone. Accessing these cloud quantum computers requires a complex chain of tools that facilitate connecting, programming, simulating algorithms, estimating resources, submitting quantum computing jobs, retrieving results, and more. Some steps in the chain are hardware dependent and subject to change as both hardware and software tools, such as available gate sets and optimizing compilers, evolve. Understanding the trade-offs inherent in this process is essential for evaluating the power and utility of quantum computers. ARLIS has been systematically investigating these environments to understand these complexities. The work presented here is a detailed summary of three months of using such quantum programming environments. We show metadata obtained from these environments, including the connection metrics to the different services, the execution of algorithms, the testing of the effects of varying the number of qubits, comparisons to simulations, execution times, and cost. Our objective is to provide concrete data and insights for those who are exploring the potential of quantum computing. It is not our objective to present any new algorithms or optimize performance on any particular machine or cloud platform; rather, this work is focused on providing a consistent view of a single algorithm executed using out-of-the-box settings and tools across machines, cloud platforms, and time. We present insights only available from these carefully curated data.
https://arxiv.org/abs/2601.09943
Academic Papers
svg
e436df5a3cdbb7c6e3d231493fa987b34fe94669cc5afa89ca39778a9037f8a7
2026-01-16T00:00:00-05:00
Parallelizing the Variational Quantum Eigensolver: From JIT Compilation to Multi-GPU Scaling
arXiv:2601.09951v1 Announce Type: new Abstract: The Variational Quantum Eigensolver (VQE) is a hybrid quantum-classical algorithm for computing ground state energies of molecular systems. We implement VQE to calculate the potential energy surface of the hydrogen molecule (H$_2$) across 100 bond lengths using the PennyLane quantum computing framework on an HPC cluster featuring 4$\times$ NVIDIA H100 GPUs (80GB each). We present a comprehensive parallelization study with four phases: (1) Optimizer + JIT compilation achieving 4.13$\times$ speedup, (2) GPU device acceleration achieving 3.60$\times$ speedup at 4 qubits scaling to 80.5$\times$ at 26 qubits, (3) MPI parallelization achieving 28.5$\times$ speedup, and (4) Multi-GPU scaling achieving 3.98$\times$ speedup with 99.4% parallel efficiency across 4 H100 GPUs. The combined effect yields 117$\times$ total speedup for the H$_2$ potential energy surface (593.95s $\rightarrow$ 5.04s). We conduct a CPU vs GPU scaling study from 4--26 qubits, finding GPU advantage at all scales with speedups ranging from 10.5$\times$ to 80.5$\times$. Multi-GPU benchmarks demonstrate near-perfect scaling with 99.4% efficiency and establish that a single H100 can simulate up to 29 qubits before hitting memory limits. The optimized implementation reduces runtime from nearly 10 minutes to 5 seconds, enabling interactive quantum chemistry exploration.
https://arxiv.org/abs/2601.09951
Academic Papers
svg
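The VQE loop benchmarked in the abstract above pairs a parametrized quantum state with a classical optimizer minimizing the energy expectation. A pure-numpy toy version of the variational principle (a made-up 2x2 Hamiltonian and a one-parameter real ansatz, standing in for the paper's PennyLane H$_2$ circuits and gradient-based optimizer) captures the structure:

```python
import numpy as np

# A toy 2x2 "molecular" Hamiltonian (illustrative numbers, not H2 integrals).
H = np.array([[-1.0, 0.4],
              [ 0.4, 0.5]])

def energy(theta):
    """Expectation <psi|H|psi> for the one-parameter ansatz
    |psi> = cos(theta)|0> + sin(theta)|1>."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return float(psi @ H @ psi)

# Classical outer loop: a simple parameter scan, standing in for the
# gradient-based optimizer (and its JIT-compiled cost function) in a real VQE.
thetas = np.linspace(0.0, np.pi, 2001)
vqe_energy = min(energy(t) for t in thetas)
exact = float(np.linalg.eigvalsh(H)[0])   # exact ground-state energy
```

By the variational principle the scanned minimum can never dip below the exact ground-state energy, and since this ansatz spans all real unit vectors it converges to it; the paper's parallelization targets exactly this outer loop, distributing the 100 bond lengths and the state-vector simulation across GPUs.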
f289d270ed6bbe2ab45330a86f3e7ea4b5258483998de60384e54671988585d5
2026-01-16T00:00:00-05:00
Double Markovity for quantum systems
arXiv:2601.09995v1 Announce Type: new Abstract: The subadditivity-doubling-rotation (SDR) technique is a powerful route to Gaussian optimality in classical information theory and relies on strict subadditivity and its equality-case analysis, where double Markovity is a standard tool. We establish quantum analogues of double Markovity. For tripartite states, we characterize the simultaneous Markov conditions A-B-C and A-C-B via compatible projective measurements on B and C that induce a common classical label J yielding A-J-(BC). For strictly positive four-party states, we show that A-(BD)-C and A-(CD)-B hold if and only if A-D-(BC) holds. These results remove a key bottleneck in extending SDR-type arguments to quantum systems.
https://arxiv.org/abs/2601.09995
Academic Papers
svg
e4d967a3f723cbae33cb679238bdd9a0bac2e934f753ac6a9ba76beef9bf11b5
2026-01-16T00:00:00-05:00
Reentrant topological phases and entanglement scalings in moir\'e-modulated extended Su-Schrieffer-Heeger Model
arXiv:2601.09997v1 Announce Type: new Abstract: Recent studies of moir\'e physics have unveiled a wealth of opportunities for significantly advancing the field of quantum phase transitions. However, properties of reentrant phase transitions driven by moir\'e strength are poorly understood. Here, we investigate the reentrant sequence of phase transitions and the invariant of universality class in moir\'e-modulated extended Su-Schrieffer-Heeger (SSH) model. For the simplified case with intercell hopping $w=0$, we analytically derive renormalization relations of Hamiltonian parameters to explain the reentrant phenomenon. For the general case, numerical phase boundaries are calculated in the thermodynamic limit. The bulk boundary correspondence between zero-energy edge modes and entanglement spectrum is revealed from the degeneracy of both quantities. We also address the correspondence between the central charge obtained from entanglement entropy and the change in winding number during the phase transition. Our results shed light on the understanding of universal characteristics and bulk-boundary correspondence for moir\'e induced reentrant phase transitions in 1D condensed-matter systems.
https://arxiv.org/abs/2601.09997
Academic Papers
svg
170f9a4c34f5bd919eb5b85d9d68cdbce975b0792653f8e7c80df480a2299bc7
2026-01-16T00:00:00-05:00
Contextuality Derived from Minimal Decision Dynamics: Quantum Tug-of-War Decision Making
arXiv:2601.10034v1 Announce Type: new Abstract: Decision making often exhibits context dependence that challenges classical probability theory. While quantum cognition has successfully modeled such phenomena, it remains unclear whether quantum probability is merely a convenient assumption or a necessary consequence of decision dynamics. Here we present a theoretical framework in which contextuality arises generatively from physically grounded constraints on decision making. By developing a quantum extension of the Tug-of-War (TOW) model, we show that conservation-based internal state updates and measurement-induced disturbance preclude any non-contextual classical description with a single, unified internal state. Contextuality therefore emerges as a structural consequence of adaptive learning dynamics. We further show that the resulting measurement structure admits Klyachko-Can-Binicioglu-Shumovsky (KCBS)-type contextuality witnesses in a minimal single-system setting. These results indicate that quantum probability is not merely a descriptive convenience, but an unavoidable effective theory for adaptive decision dynamics.
https://arxiv.org/abs/2601.10034
Academic Papers
svg
5bcf714b1fa41ad32b6957df26507345d9c57e8eaa2ed55b1d7bbef2876625d2
2026-01-16T00:00:00-05:00
Towards Minimal Fault-tolerant Error-Correction Sequence with Quantum Hamming Codes
arXiv:2601.10042v1 Announce Type: new Abstract: The high overhead of fault-tolerant measurement sequences (FTMSs) poses a major challenge for implementing quantum stabilizer codes. Here, we address this problem by constructing efficient FTMSs for the class of quantum Hamming codes $[\![2^r-1, 2^r-1-2r, 3]\!]$ with $r=3k+1$ ($k \in \mathbb{Z}^+$). Our key result demonstrates that the sequence length can be reduced to exactly $2r+1$: only one additional measurement beyond the original non-fault-tolerant sequence, establishing a tight lower bound. The proposed method leverages cyclic matrix transformations to systematically combine rows of the initial stabilizer matrix while preserving a self-dual CSS-like symmetry analogous to that of the original quantum Hamming codes. This induced symmetry enables hardware-efficient circuit reuse: the measurement circuits for the first $r$ stabilizers are transformed into circuits for the remaining $r$ stabilizers simply by toggling boundary Hadamard gates, eliminating redundant hardware. For distance-3 fault-tolerant error correction, our approach simultaneously reduces the time overhead, by shortening the FTMS length, and the hardware overhead, through symmetry-enabled circuit multiplexing. These results provide an advance on the open problem of designing minimal FTMSs for quantum Hamming codes and may shed light on similar challenges in other quantum stabilizer codes.
https://arxiv.org/abs/2601.10042
Academic Papers
svg
4200166a16d956f830021e5cc9774ee6a119b767bf5a7d03a975948a56ef3600
2026-01-16T00:00:00-05:00
Optimal qudit overlapping tomography and optimal measurement order
arXiv:2601.10059v1 Announce Type: new Abstract: Quantum state tomography is essential for characterizing quantum systems, but it becomes infeasible for large systems due to exponential resource scaling. Overlapping tomography addresses this challenge by reconstructing all $k$-body marginals using few measurement settings, enabling the efficient extraction of key information for many quantum tasks. While optimal schemes are known for qubits, the extension to higher-dimensional qudit systems remains largely unexplored. Here, we investigate optimal qudit overlapping tomography, constructing local measurement settings from generalized Gell-Mann matrices. By establishing a correspondence with combinatorial covering arrays, we present two explicit constructions of optimal measurement schemes. For $n$-qutrit systems, we prove that pairwise tomography requires at most $8 + 56\left\lceil \log_{8} n \right\rceil$ measurement settings, and provide an explicit scheme achieving this bound. Furthermore, we develop an efficient algorithm to determine the optimal order of these measurement settings, minimizing the experimental overhead associated with switching configurations. Compared to the worst-case ordering, our optimized schedule reduces switching costs by approximately 50\%. These results provide a practical pathway for efficient characterization of qudit systems, facilitating their application in quantum communication and computation.
https://arxiv.org/abs/2601.10059
Academic Papers
svg
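The correspondence with covering arrays in the abstract above can be made concrete with a brute-force check of the pairwise property: every pair of sites must see every pair of local measurement settings at least once. The construction below is a standard strength-2 orthogonal array over three symbols, used here as a generic stand-in for the paper's Gell-Mann-based qudit settings.

```python
from itertools import combinations, product

def covers_all_pairs(settings, alphabet):
    """Covering-array check behind pairwise overlapping tomography: for every
    pair of sites, the settings list must realize every ordered pair of local
    bases at least once."""
    n = len(settings[0])
    for i, j in combinations(range(n), 2):
        seen = {(s[i], s[j]) for s in settings}
        if seen != set(product(alphabet, repeat=2)):
            return False
    return True

# A classical strength-2 orthogonal array on 4 sites over 3 symbols:
# columns (a, b, a+b, a+2b) mod 3 cover all 9 pairs on every site pair
# using only 9 rows (i.e., 9 measurement settings).
settings = ["".join("XYZ"[v] for v in (a, b, (a + b) % 3, (a + 2 * b) % 3))
            for a in range(3) for b in range(3)]
ok = covers_all_pairs(settings, "XYZ")
```

Nine settings is far fewer than the $3^4$ exhaustive choices, which is the kind of saving the paper scales up: its qutrit bound of $8 + 56\lceil \log_{8} n \rceil$ settings grows only logarithmically in the number of sites.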
8279e79ce3e8af7561c498e478f47ab2217c64f5b2f7ed97ded5f9db1ca7ff27
2026-01-16T00:00:00-05:00
Pseudomode approach to Fano effect in dissipative cavity quantum electrodynamics
arXiv:2601.10087v1 Announce Type: new Abstract: We study the Fano effect in dissipative cavity quantum electrodynamics, which originates from the interference between the emitter's direct radiation and that mediated by a cavity mode. Starting from a two-level system coupled to a structured reservoir, we show that a quantum master equation previously derived within the Born-Markov approximation can be rederived by introducing a single auxiliary mode via pseudomode approach. We identify the corresponding spectral function of the system--environment interaction and demonstrate that it consists of a constant and a non-Lorentzian contribution forming the Fano profile. The constant term is shown to be essential for obtaining a Lindblad master equation and is directly related to the rate associated with this Fano interference. Furthermore, by applying Fano diagonalization to a common-environment setup including an explicit cavity mode, we independently derive the same spectral function in the strongest-interference regime. Our results establish a unified framework for describing the Fano effect in single-mode cavity QED systems and clarify its non-Markovian origin encoded in the spectral function.
https://arxiv.org/abs/2601.10087
Academic Papers
svg
c2dee7db77ec05b54b6f07c66b7f715b640a6fbf6eb86294a6a2b6eae38dce6e
2026-01-16T00:00:00-05:00
Classical simulation of a quantum circuit with noisy magic inputs
arXiv:2601.10111v1 Announce Type: new Abstract: Magic states are essential for universal quantum computation and are widely viewed as a key source of quantum advantage, yet in realistic devices they are inevitably noisy. In this work, we characterize how noise on injected magic resources changes the classical simulability of quantum circuits and when it induces a transition from classically intractable behavior to efficient classical simulation. We adopt a resource-centric noise model in which only the injected magic components are noisy, while the baseline states, operations, and measurements belong to an efficiently simulable family. Within this setting, we develop an approximate classical sampling algorithm with controlled error and prove explicit noise-dependent conditions under which the algorithm runs in polynomial time. Our framework applies to both qubit circuits with Clifford baselines and fermionic circuits with matchgate baselines, covering representative noise channels such as dephasing and particle loss. We complement the analysis with numerical estimates of the simulation cost, providing concrete thresholds and runtime scaling across practically relevant parameter regimes.
https://arxiv.org/abs/2601.10111
Academic Papers
svg
a6612c4f23baf5bb43f8527046efd807dbaa42707e028099a0cee96d55eea7ee
2026-01-16T00:00:00-05:00
Bridging Superconducting and Neutral-Atom Platforms for Efficient Fault-Tolerant Quantum Architectures
arXiv:2601.10144v1 Announce Type: new Abstract: The transition to the fault-tolerant era exposes the limitations of homogeneous quantum systems, where no single qubit modality simultaneously offers optimal operation speed, connectivity, and scalability. In this work, we propose a strategic approach to Heterogeneous Quantum Architectures (HQA) that synthesizes the distinct advantages of the superconducting (SC) and neutral atom (NA) platforms. We explore two architectural role assignment strategies based on hardware characteristics: (1) We offload the latency-critical Magic State Factory (MSF) to fast SC devices while performing computation on scalable NA arrays, a design we term MagicAcc, which effectively mitigates the resource-preparation bottleneck. (2) We explore a Memory-Compute Separation (MCSep) paradigm that utilizes NA arrays for high-density qLDPC memory storage and SC devices for fast surface-code processing. Our evaluation, based on a comprehensive end-to-end cost model, demonstrates that principled heterogeneity yields significant performance gains. Specifically, our designs achieve $752\times$ speedup over NA-only baselines on average and reduce the physical qubit footprint by over $10\times$ compared to SC-only systems. These results chart a clear pathway for leveraging cross-modality interconnects to optimize the space-time efficiency of future fault-tolerant quantum computers.
https://arxiv.org/abs/2601.10144
Academic Papers
svg
2ced8d7abba0c3180f3135a114c9796329eb89699f7fe54359e40a3dc2fa6e25
2026-01-16T00:00:00-05:00
Fluctuation-induced quenching of chaos in quantum optics
arXiv:2601.10147v1 Announce Type: new Abstract: Recent studies have extensively explored chaotic dynamics in quantum optical systems through the mean-field approximation, which corresponds to an ideal, fluctuation-free scenario. However, the inherent sensitivity of chaos to initial conditions implies that even minute fluctuations can be amplified, thereby questioning the applicability of this approximation. Here, we analyze these chaotic effects using stochastic Langevin equations or the Lindblad master equation. For systems operating at frequencies of $10^5$ to $10^7$ Hz, we demonstrate that room-temperature thermal fluctuations are sufficient to suppress chaos at the level of expectation values, even under weak nonlinearity. Furthermore, nonlinearity induces deviations from Gaussian phase-space distributions of the quantum state, revealing attractor-like features in the Wigner function. With increasing nonlinearity, the noise threshold for chaos suppression decreases, approaching the scale of vacuum fluctuations. These results provide a bidirectional validation of the quantum mechanical suppression of chaos.
https://arxiv.org/abs/2601.10147
Academic Papers
svg
b4d3ade1042f8ab8e7fd13a47078f32911afb5a92842ef285e805b4f0396edb1
2026-01-16T00:00:00-05:00
Exponential Analysis for Entanglement Distillation
arXiv:2601.10190v1 Announce Type: new Abstract: Historically, the focus in entanglement distillation has predominantly been on the distillable entanglement, and the framework assumes complete knowledge of the initial state. In this paper, we study the reliability function of entanglement distillation, which specifies the optimal exponent of the decay of the distillation error when the distillation rate is below the distillable entanglement. Furthermore, to capture greater operational significance, we extend the framework from the standard setting of known states to a black-box setting, where distillation is performed from a set of possible states. We establish an exact finite blocklength result connecting to composite correlated hypothesis testing without any redundant correction terms. Based on this, the reliability function of entanglement distillation is characterized by the regularized quantum Hoeffding divergence. In the special case of a pure initial state, our result reduces to the error exponent for entanglement concentration derived by Hayashi et al. in 2003. Given full prior knowledge of the state, we construct a concrete optimal distillation protocol. Additionally, we analyze the strong converse exponent of entanglement distillation. While all the above results assume the free operations to be non-entangling, we also investigate other free operation classes, including PPT-preserving, dually non-entangling, and dually PPT-preserving operations.
https://arxiv.org/abs/2601.10190
Academic Papers
svg
6b61b302e0c4978d8c942934ee4e0a97a31d87bd7a594dd2bba525e08a618752
2026-01-16T00:00:00-05:00
On the average-case complexity of learning states from the circular and Gaussian ensembles
arXiv:2601.10197v1 Announce Type: new Abstract: Studying the complexity of states sampled from various ensembles is a central component of quantum information theory. In this work we establish the average-case hardness of learning, in the statistical query model, the Born distributions of states sampled uniformly from the circular and (fermionic) Gaussian ensembles. These ensembles of states are induced variously by the uniform measures on the compact symmetric spaces of type AI, AII, and DIII. This finding complements analogous recent results for states sampled from the classical compact groups. On the technical side, we employ a somewhat unconventional approach to integrating over the compact groups which may be of some independent interest. For example, our approach allows us to exactly evaluate the total variation distances between the output distributions of Haar random unitary and orthogonal circuits and the constant distribution, which were previously known only approximately.
https://arxiv.org/abs/2601.10197
Academic Papers
svg
da59301c7aa38d5ce5cf66724cde269b245ef5c3cbf20100b0fcaaee5284c2d5
2026-01-16T00:00:00-05:00
Topology-Aware Block Coordinate Descent for Qubit Frequency Calibration of Superconducting Quantum Processors
arXiv:2601.10203v1 Announce Type: new Abstract: Pre-execution calibration is a major bottleneck for operating superconducting quantum processors, and qubit frequency allocation is especially challenging due to crosstalk-coupled objectives. We establish that the widely-used Snake optimizer is mathematically equivalent to Block Coordinate Descent (BCD), providing a rigorous theoretical foundation for this calibration strategy. Building on this formalization, we present a topology-aware block ordering obtained by casting order selection as a Sequence-Dependent Traveling Salesman Problem (SD-TSP) and solving it efficiently with a nearest-neighbor heuristic. The SD-TSP cost reflects how a given block choice expands the reduced-circuit footprint required to evaluate the block-local objective, enabling orders that minimize per-epoch evaluation time. Under local crosstalk/bounded-degree assumptions, the method achieves linear complexity in qubit count per epoch, while retaining calibration quality. We formalize the calibration objective, clarify when reduced experiments are equivalent or approximate to the full objective, and analyze convergence of the resulting inexact BCD with noisy measurements. Simulations on multi-qubit models show that the proposed BCD-NNA ordering attains the same optimization accuracy at markedly lower runtime than graph-based heuristics (BFS, DFS) and random orders, and is robust to measurement noise and tolerant to moderate non-local crosstalk. These results provide a scalable, implementation-ready workflow for frequency calibration on NISQ-era processors.
https://arxiv.org/abs/2601.10203
Academic Papers
svg
e4846f3ee15c645a3dbb5d3a6ba1ce2a38471408e58b97e9f4cf1f5dbf99d51f
2026-01-16T00:00:00-05:00
Noise-Resilient Quantum Evolution in Open Systems through Error-Correcting Frameworks
arXiv:2601.10206v1 Announce Type: new Abstract: We analyze quantum state preservation in open quantum systems using quantum error-correcting (QEC) codes that are explicitly embedded into microscopic system-bath models. Instead of abstract quantum channels, we consider multi-qubit registers coupled to bosonic thermal environments, derive a second-order master equation for the reduced dynamics, and use it to benchmark the five-qubit, Steane, and toric codes under local and collective noise. We compute state fidelities for logical qubits as functions of coupling strength, bath temperature, and the number of correction cycles. In the low-temperature regime, we find that repeated error-correction with the five-qubit code strongly suppresses decoherence and relaxation, while in the high-temperature regime, thermal excitations dominate the dynamics and reduce the benefit of all codes, though the five-qubit code still outperforms the Steane and toric codes. For two-qubit Werner states, we identify a critical evolution time before which QEC does not improve fidelity, and this time increases as entanglement grows. After this critical time, QEC does improve fidelity. Comparative analysis further reveals that the five-qubit code (the smallest perfect code) offers consistently higher fidelities than topological and concatenated architectures in these open-system settings. These findings establish a quantitative framework for evaluating QEC under realistic noise environments and provide guidance for developing noise-resilient quantum architectures in near-term quantum technologies.
https://arxiv.org/abs/2601.10206
Academic Papers
svg
c1763cee382f9dfa9303d74265d2ecdd6577f2cb7702d970fb5976f5816c0b9c
2026-01-16T00:00:00-05:00
Coherence Limits in Interference-Based cos(2$\varphi$) Qubits
arXiv:2601.10209v1 Announce Type: new Abstract: We investigate the coherence properties of parity-protected $\cos(2\varphi)$ qubits based on interferences between two Josephson elements in a superconducting loop. We show that qubit implementations of a $\cos(2\varphi)$ potential using a single loop, such as those employing semiconducting junctions, rhombus circuits, flowermon and KITE structures, can be described by the same Hamiltonian as two multi-harmonic Josephson junctions in a SQUID geometry. We find that, despite the parity protection arising from the suppression of single Cooper pair tunneling, there exists a fundamental trade-off between charge and flux noise dephasing channels. Using numerical simulations, we examine how relaxation and dephasing rates depend on external flux and circuit parameters, and we identify the best compromise for maximum coherence. With currently existing circuit parameters, the qubit lifetime $T_1$ can exceed milliseconds while the dephasing time $T_\varphi$ remains limited to only a few microseconds due to either flux or charge noise. Our findings establish practical limits on the coherence of this class of qubits and raise questions about the long-term potential of this approach.
https://arxiv.org/abs/2601.10209
Academic Papers
svg
569cab2a2643ede162d6b9b4d1756b14ca21a90a15ce1069cd0a19f37479e6a5
2026-01-16T00:00:00-05:00
Quantitative approach for the Dicke-Ising chain with an effective self-consistent matter Hamiltonian
arXiv:2601.10210v1 Announce Type: new Abstract: In the thermodynamic limit, the Dicke-Ising chain maps exactly onto an effective self-consistent matter Hamiltonian with the photon field acting solely as a self-consistent effective field. As a consequence, no quantum correlations between photons and spins are needed to understand the quantum phase diagram. This enables us to determine the quantum phase diagram in the thermodynamic limit using numerical linked-cluster expansions combined with density matrix renormalization group calculations (NLCE+DMRG) to solve the resulting self-consistent matter Hamiltonian. This includes magnetically ordered phases with significantly improved accuracy compared to previous estimates. For ferromagnetic Ising couplings, we refine the location of the multicritical point governing the change in the order of the superradiant phase transition, reaching a relative accuracy of $10^{-4}$. For antiferromagnetic Ising couplings, we confirm the existence of the narrow antiferromagnetic superradiant phase in the thermodynamic limit. The effective matter Hamiltonian framework identifies the antiferromagnetic superradiant phase as the many-body ground state of an antiferromagnetic transverse-field Ising model with longitudinal field. This phase emerges through continuous Dicke-type polariton condensation from the antiferromagnetic normal phase, followed by a first-order transition to the paramagnetic superradiant phase. Thus, NLCE+DMRG provides a precise determination of the Dicke-Ising phase diagram in one dimension by solving the self-consistent effective matter Hamiltonian.
https://arxiv.org/abs/2601.10210
Academic Papers
svg
09573470dc9d048f86b686d80c5a39f8296dfd1a2ccf10c0069a41e88b4e3685
2026-01-16T00:00:00-05:00
Optimal control of a dissipative micromaser quantum battery in the ultrastrong coupling regime
arXiv:2601.10281v1 Announce Type: new Abstract: We investigate the open system dynamics of a micromaser quantum battery operating in the ultrastrong coupling (USC) regime under environmental dissipation. The battery consists of a single-mode electromagnetic cavity sequentially interacting, via the Rabi Hamiltonian, with a stream of qubits acting as chargers. Dissipative effects arise from the weak coupling of the qubit-cavity system to a thermal bath. Non-negligible in the USC regime, the counter-rotating terms substantially improve the charging speed, but also lead, in the absence of dissipation, to unbounded energy growth and highly mixed cavity states. Dissipation during each qubit-cavity interaction mitigates these detrimental effects, yielding a steady state of finite energy and ergotropy. Optimal control of qubit preparation and interaction times enhances the battery's performance by: (i) maximizing the stored ergotropy through an optimized charging protocol; (ii) stabilizing the stored ergotropy against dissipative losses through an optimized measurement-based passive-feedback strategy. Overall, our numerical results demonstrate that the interplay of ultrastrong light-matter coupling, controlled dissipation, and optimized control strategies enables micromaser quantum batteries to achieve both enhanced charging performance and long-term stability under realistic conditions.
https://arxiv.org/abs/2601.10281
Academic Papers
svg
5e2c1575f466c663deb145d52889f3b64afcdc093a6debafa6cd06c6b312aaa0
2026-01-16T00:00:00-05:00
Exponential improvement in benchmarking multiphoton interference
arXiv:2601.10289v1 Announce Type: new Abstract: Several photonic quantum technologies rely on the ability to generate multiple indistinguishable photons. Benchmarking the level of indistinguishability of these photons is essential for scalability. The Hong-Ou-Mandel dip provides a benchmark for the indistinguishability between two photons, and extending this test to the multi-photon setting has so far resulted in a protocol that computes the genuine n-photon indistinguishability (GI). However, this protocol has a sample complexity that increases exponentially with the number of input photons for an estimation of GI up to a given additive error. To address this problem, we introduce new theorems that strengthen our understanding of the relationship between distinguishability and the suppression laws of the quantum Fourier transform interferometer (QFT). Building on this, we propose a protocol using the QFT for benchmarking GI that achieves constant sample complexity for the estimation of GI up to a given additive error for prime photon numbers, and sub-polynomial scaling otherwise, representing an exponential improvement over the state of the art. We prove the optimality of our protocol in many relevant scenarios and validate our approach experimentally on Quandela's reconfigurable photonic quantum processor, where we observe a clear advantage in runtime and precision over the state of the art. We therefore establish the first scalable method for computing multi-photon indistinguishability, which applies naturally to current and near-term photonic quantum hardware.
https://arxiv.org/abs/2601.10289
Academic Papers
svg
e94b30cc6727988c812073dabffa1411f743d1ca8a8bcc2a387890705858a541
2026-01-16T00:00:00-05:00
Complex scalar relativistic field as a probability amplitude
arXiv:2601.10302v1 Announce Type: new Abstract: A relativistic equation for a neutral complex field as a probability amplitude is proposed. The continuity equation for the probability density is obtained. It is shown that there are two types of excitations of this field, which describe particles with positive energy and different dispersion laws. Based on the Lagrangian formalism, conservation laws are obtained. The transition to second quantization is considered.
https://arxiv.org/abs/2601.10302
Academic Papers
svg
42e5a999b09c52e2766c44f463b5a084b0bfbf7a7d4dee9a0f505621359e3a14
2026-01-16T00:00:00-05:00
Addition to the dynamic Stark shift of the coherent population trapping resonance
arXiv:2601.10319v1 Announce Type: new Abstract: This paper presents a theoretical study of the light-induced shift of the coherent population trapping resonance. An analytical model is proposed that describes the interaction of two radiation components with an atomic system using a ${\Lambda}$ scheme and takes into account an additional level of excited state. Both weak and strong coupling regimes with off-resonant transitions are considered. It is shown that, in addition to the conventional dynamic Stark shift, an extra shift arises due to the distortion of the resonance line shape when bichromatic laser radiation interacts with off-resonant atomic transitions. An analytical expression for this additional shift is derived in the weak-coupling limit, and its significant impact on the resonance shape and sensitivity to the intensities of the laser field components is demonstrated. It is found that under strong coupling conditions, the additional shift can deviate substantially from a linear dependence on light intensity, suggesting new opportunities for controlling light shifts in precision atomic devices such as quantum frequency standards.
https://arxiv.org/abs/2601.10319
Academic Papers
svg
c57268c57de2e3c30e44ae62780a623ddde190ca07c795ea09a5ddd1927a4718
2026-01-16T00:00:00-05:00
Principles of Optics in the Fock Space: Scalable Manipulation of Giant Quantum States
arXiv:2601.10325v1 Announce Type: new Abstract: The manipulation of distinct degrees of freedom of photons plays a critical role in both classical and quantum information processing. While the principles of wave optics provide elegant and scalable control over classical light in spatial and temporal domains, engineering quantum states in Fock space has been largely restricted to few-photon regimes, hindered by the computational and experimental challenges of large Hilbert spaces. Here, we introduce ``Fock-space optics", establishing a conceptual framework of wave propagation in the quantum domain by treating photon number as a synthetic dimension. Using a superconducting microwave resonator, we experimentally demonstrate Fock-space analogues of optical propagation, refraction, lensing, dispersion, and interference with up to 180 photons. These results establish a fundamental correspondence between Schr\"{o}dinger evolution in a single bosonic mode and classical paraxial wave propagation. By mapping intuitive optical concepts onto high-dimensional quantum state engineering, our work opens a path toward scalable control of large-scale quantum systems with thousands of photons and advanced bosonic information processing.
https://arxiv.org/abs/2601.10325
Academic Papers
svg
0d4ce0b3f4b48d5f1d1a28cf9dfaf3fc078839870a8c7393674b18a91c6bf1c8
2026-01-16T00:00:00-05:00
Realistic prospects for testing a relativistic local quantum measurement inequality
arXiv:2601.10354v1 Announce Type: new Abstract: We investigate the experimental prospects for testing a relativistic local quantum measurement inequality that quantifies the trade-off between vacuum insensitivity and responsiveness to excitations for finite-size detectors. Building on the Reeh--Schlieder approximation for coherent states, we derive an explicit and practically applicable bound for arbitrary coherent states. To connect with realistic photodetection scenarios, we model the detection region as a square prism operating over a finite time window and consider a normally incident single-mode coherent state. Numerical results exhibit the expected qualitative behavior: suppressing dark counts necessarily tightens the achievable click probability.
https://arxiv.org/abs/2601.10354
Academic Papers
svg
70b29bb0e3553b500d6dd437270e1132cb71c58d004175a216d824cabe59bccb
2026-01-16T00:00:00-05:00
Learning Hamiltonians in the Heisenberg limit with static single-qubit fields
arXiv:2601.10380v1 Announce Type: new Abstract: Learning the Hamiltonian governing a quantum system is a central task in quantum metrology, sensing, and device characterization. Existing Heisenberg-limited Hamiltonian learning protocols either require multi-qubit operations that are prone to noise, or single-qubit operations whose frequency or strength increases with the desired precision. These two requirements limit the applicability of Hamiltonian learning on near-term quantum platforms. We present a protocol that learns a quantum Hamiltonian with the optimal Heisenberg-limited scaling using only single-qubit control in the form of static fields with strengths that are independent of the target precision. Our protocol is robust against the state preparation and measurement (SPAM) error. By overcoming these limitations, our protocol provides new tools for device characterization and quantum sensing. We demonstrate that our method achieves the Heisenberg-limited scaling through rigorous mathematical proof and numerical experiments. We also prove an information-theoretic lower bound showing that a non-vanishing static field strength is necessary for achieving the Heisenberg limit unless one employs an extensive number of discrete control operations.
https://arxiv.org/abs/2601.10380
Academic Papers
svg
2de6e524e0ea69a5ee4a3f88da28e6fac85490f967e77ef7c1a8b67ec73ff289
2026-01-16T00:00:00-05:00
Experimental Realization of Rabi-Driven Reset for Fast Cooling of a High-Q Cavity
arXiv:2601.10385v1 Announce Type: new Abstract: High-Q bosonic memories are central to hardware-efficient quantum error correction, but their isolation makes fast, high-fidelity reset a persistent bottleneck. Existing approaches either rely on weak intermode cross-Kerr conversion or on measurement-based sequences with substantial latency. Here we demonstrate a hardware-efficient Rabi-Driven Reset (RDR) that implements continuous, measurement-free cooling of a superconducting cavity mode. A strong resonant Rabi drive on a transmon, together with sideband drives on the memory and readout modes detuned by the Rabi frequency, converts the dispersive interaction into an effective Jaynes-Cummings coupling between the qubit dressed states and each mode. This realizes a tunable dissipation channel from the memory to the cold readout bath. Crucially, the engineered coupling scales with the qubit-mode dispersive interaction and the drive amplitude, rather than with the intermode cross-Kerr, enabling fast cooling even in very weakly coupled architectures that deliberately suppress direct mode-mode coupling. We demonstrate RDR of a single photon with a decay time of $1.2 \mu s$, more than two orders of magnitude faster than the intrinsic lifetime. Furthermore, we reset about 30 thermal photons in about $80 \mu s$ to a steady-state average photon number of $\bar{n} = 0.045 \pm 0.025$.
https://arxiv.org/abs/2601.10385
Academic Papers
svg
315622fb703628119b6c2ac98b267ad30428a0cffc533725ee6fa79aff18c183
2026-01-16T00:00:00-05:00
A Collection of Pinsker-type Inequalities for Quantum Divergences
arXiv:2601.10395v1 Announce Type: new Abstract: Pinsker's inequality sets a lower bound on the Umegaki divergence of two quantum states in terms of their trace distance. In this work, we formulate corresponding estimates for a variety of quantum and classical divergences, including $f$-divergences like Hellinger and $\chi^2$-divergences, as well as R\'enyi divergences and special cases thereof like the Umegaki divergence, collision divergence, and max divergence. We further provide a strategy for adapting these bounds to smoothed divergences.
https://arxiv.org/abs/2601.10395
Academic Papers
svg
256b9c5e4935bb3611dbfe2d6d50f4da7bb27aad5e79352469ba5fcb2057a397
2026-01-16T00:00:00-05:00
Bounding many-body properties under partial information and finite measurement statistics
arXiv:2601.10408v1 Announce Type: new Abstract: Calculating bounds of properties of many-body quantum systems is of paramount importance, since they guide our understanding of emergent quantum phenomena and complement the insights obtained from estimation methods. Recent semidefinite programming approaches enable probabilistic bounds from finite-shot measurements of easily accessible, yet informationally incomplete, observables. Here we render these methods scalable in the number of qubits by instead utilizing moment-matrix relaxations. After introducing the general formalism, we show how the approach can be adapted with specific knowledge of the system, such as it being the ground state of a given Hamiltonian, possessing specific symmetries or being the steady state of a given Lindbladian. Our approach defines a scalable real-world certification scheme leveraging semidefinite programming relaxations and experimental estimations which, unavoidably, contain shot noise.
https://arxiv.org/abs/2601.10408
Academic Papers
svg
97580a3f777255245c86d461f4cc4cb68e9f3b9d098c5ba6b0c9e5efd8a01e58
2026-01-16T00:00:00-05:00
Tight bounds on recurrence time in closed quantum systems
arXiv:2601.10409v1 Announce Type: new Abstract: The evolution of an isolated quantum system inevitably exhibits recurrence: the state returns to the vicinity of its initial condition after finite time. Despite its fundamental nature, a rigorous quantitative understanding of recurrence has been lacking. We establish upper bounds on the recurrence time, $t_{\mathrm{rec}} \lesssim t_{\mathrm{exit}}(\epsilon)(1/\epsilon)^d$, where $d$ is the Hilbert-space dimension, $\epsilon$ the neighborhood size, and $t_{\mathrm{exit}}(\epsilon)$ the escape time from this neighborhood. For pure states evolving under a Hamiltonian $H$, estimating $t_{\mathrm{exit}}$ is equivalent to an inverse quantum speed limit problem: finding upper bounds on the time a time-evolved state $\psi_t$ needs to depart from the $\epsilon$-vicinity of the initial state $\psi_0$. We provide a partial solution, showing that under mild assumptions $t_{\mathrm{exit}}(\epsilon) \approx \epsilon /\sqrt{ \Delta(H^2)}$, with $\Delta(H^2)$ the Hamiltonian variance in $\psi_0$. We show that our upper bound on $t_{\mathrm{rec}}$ is generically saturated for random Hamiltonians. Finally, we analyze the impact of coherence of the initial state in the eigenbasis of $H$ on recurrence behavior.
https://arxiv.org/abs/2601.10409
Academic Papers
svg
e100328af7b9eb3c27b9ff2a382aad04a20c245512c7569cdd2eb35d5b79ffc8
2026-01-16T00:00:00-05:00
Unifying Quantum and Classical Dynamics
arXiv:2601.10423v1 Announce Type: new Abstract: Classical and quantum physics represent two distinct theories; however, quantum physics is regarded as the more fundamental of the two. It is posited that classical mechanics should arise from quantum mechanics under certain limiting conditions. Nevertheless, this remains a challenging objective. In this work, we explore the potential for unifying the dynamics of classical and quantum physics. This discussion does not suggest that classical behavior emerges from quantum mechanics; rather, it demonstrates the exact equivalence between the dynamics of quantum observables and their classical counterparts. It is shown that the Heisenberg equations of motion can be cast in a form that is identical to Newton's equations of motion, with $\hbar$ being absent from the formulation. This implies that both quantum and classical dynamics are governed by the same equations, with the Heisenberg operators substituting the classical observables.
https://arxiv.org/abs/2601.10423
Academic Papers
svg
562a3336c34ec0d26898217b46a69b58373b5cf83c44dae257a74f0a177cb962
2026-01-16T00:00:00-05:00
Reduction of thermodynamic uncertainty by a virtual qubit
arXiv:2601.10429v1 Announce Type: new Abstract: The thermodynamic uncertainty relation (TUR) imposes a fundamental constraint between current fluctuations and entropy production, providing a refined formulation of the second law for micro- and nanoscale systems. Quantum violations of the classical TUR reveal genuinely quantum thermodynamic effects, which are essential for improving performance and enabling optimization in quantum technologies. In this work, we analyze the TUR in a class of paradigmatic quantum thermal-machine models whose operation is enabled by coherent coupling between two energy levels forming a virtual qubit. Steady-state coherences are confined to this virtual-qubit subspace, while in the absence of coherent coupling the system satisfies detailed balance with the thermal reservoirs and supports no steady-state heat currents. We show that the steady-state currents and entropy production can be fully reproduced by an effective classical Markov process, whereas current fluctuations acquire an additional purely quantum correction originating from coherence. As a result, the thermodynamic uncertainty naturally decomposes into a classical (diagonal) contribution and a coherent contribution. The latter becomes negative under resonant conditions and reaches its minimum at the coupling strength that maximizes steady-state coherence. We further identify the optimization conditions and the criteria for surpassing the classical TUR bound in the vicinity of the reversible limit.
https://arxiv.org/abs/2601.10429
Academic Papers
svg
e01316f43b65d9bb0f779977cef9809af15e448adf3cfac32dab6e701f994001
2026-01-16T00:00:00-05:00
The SpinPulse library for transpilation and noise-accurate simulation of spin qubit quantum computers
arXiv:2601.10435v1 Announce Type: new Abstract: We introduce SpinPulse, an open-source Python package for simulating spin qubit-based quantum computers at the pulse level. SpinPulse models the specific physics of spin qubits, particularly through the inclusion of classical non-Markovian noise. This enables realistic simulations of native gates and quantum circuits, in order to support hardware development. In SpinPulse, a quantum circuit is first transpiled into the native gate set of our model and then converted to a pulse sequence. This pulse sequence is subsequently integrated numerically in the presence of a simulated noisy experimental environment. We showcase workflows including transpilation, pulse-level compilation, hardware benchmarking, quantum error mitigation, and large-scale simulations via integration with the tensor-network library quimb. We expect SpinPulse to be a valuable open-source tool for the quantum computing community, fostering efforts to devise high-fidelity quantum circuits and improved strategies for quantum error mitigation and correction.
https://arxiv.org/abs/2601.10435
Academic Papers
svg
b3deee7f3c1209fc989404a8a92ab89f3876e68da71d3387cd2eb4587e9fbc8f
2026-01-16T00:00:00-05:00
Minimal-Energy Optimal Control of Tunable Two-Qubit Gates in Superconducting Platforms Using Continuous Dynamical Decoupling
arXiv:2601.10446v1 Announce Type: new Abstract: We present a unified scheme for generating high-fidelity entangling gates in superconducting platforms by continuous dynamical decoupling (CDD) combined with variational minimal-energy optimal control. During the CDD stage, we suppress residual couplings, calibration drifting, and quasistatic noise, resulting in a stable effective Hamiltonian that preserves the designed ZZ interaction intended for producing tunable couplers. In this stable $\mathrm{SU}(4)$ manifold, we calculate smooth low-energy single-qubit control functions using a variational geodesic optimization process that directly minimizes gate infidelity. We illustrate the methodology by applying it to CZ, CX, and generic entangling gates, achieving virtually unit fidelity and robustness under restricted single-qubit action, with experimentally realistic control fields. These results establish CDD-enhanced variational geometric optimal control as a practical and noise-resilient scheme for designing superconducting entangling gates.
https://arxiv.org/abs/2601.10446
Academic Papers
svg
2f9404d82a06a768faca383c6c1095218a21353584f84b1bf54e39e667c43f09
2026-01-16T00:00:00-05:00
Localization Landscape in Non-Hermitian and Floquet quantum systems
arXiv:2601.10451v1 Announce Type: new Abstract: We propose a generalization of the Filoche--Mayboroda localization landscape that extends the theory well beyond the static, elliptic and Hermitian settings while preserving its geometric interpretability. Using the positive operator $H^\dagger H$, we obtain a landscape that predicts localization across non-Hermitian, Floquet, and topological systems without computing eigenstates. Singular-value collapse reveals spectral instabilities and skin effects, the Sambe formulation captures coherent destruction of tunneling, and topological zero modes emerge directly from the landscape. Applications to Hatano--Nelson chains, driven two-level systems, and driven Aubry--Andr\'e--Harper models confirm quantitative accuracy, establishing a unified predictor for localization in equilibrium and driven quantum matter.
https://arxiv.org/abs/2601.10451
Academic Papers
svg
c0d5155279a8a7aacbe0b4c7b72779f6f0ff002902a2f2f00be8f2caba5baebb
2026-01-16T00:00:00-05:00
Erasure conversion for singlet-triplet spin qubits enables high-performance shuttling-based quantum error correction
arXiv:2601.10461v1 Announce Type: new Abstract: Fast and high fidelity shuttling of spin qubits has been demonstrated in semiconductor quantum dot devices. Several architectures based on shuttling have been proposed; it has been suggested that singlet-triplet (dual-spin) qubits could be optimal for the highest shuttling fidelities. Here we present a fault-tolerant framework for quantum error correction based on such dual-spin qubits, establishing them as a natural realisation of erasure qubits within semiconductor architectures. We introduce a hardware-efficient leakage-detection protocol that automatically projects leaked qubits back onto the computational subspace, without the need for measurement feedback or increased classical control overheads. When combined with the XZZX surface code and leakage-aware decoding, we demonstrate a twofold increase in the error correction threshold and achieve orders-of-magnitude reductions in logical error rates. This establishes the singlet-triplet encoding as a practical route toward high-fidelity shuttling and erasure-based, fault-tolerant quantum computation in semiconductor devices.
https://arxiv.org/abs/2601.10461
Academic Papers
svg
832b054cd1a0f49b85146b85fda155a26572392834301eb0efff0d27bb1659cb
2026-01-16T00:00:00-05:00
Nonlinear quantum Kibble-Zurek ramps in open systems at finite temperature
arXiv:2601.10465v1 Announce Type: new Abstract: We analyze quantum systems under a broad class of protocols in which the temperature and a Hamiltonian control parameter are ramped simultaneously and, in general, in a nonlinear fashion toward a quantum critical point. Using an open-system version of a Kitaev quantum wire as an example, we show that, unlike finite-temperature protocols at fixed temperature, these protocols allow us to probe, in an out-of-equilibrium situation and at finite temperature, the universality class (characterized by the critical exponents $\nu$ and $z$) of an equilibrium quantum phase transition at zero temperature. Key to this is the identification of ramps in which both coherent and incoherent parts of the open-system dynamics affect the excitation density in a non-negligible way. We also identify the specific ramps for which subleading corrections to the asymptotic scaling laws are suppressed, which serves as a guide to dynamically probing quantum critical exponents in experimentally realistic finite-temperature situations.
https://arxiv.org/abs/2601.10465
Academic Papers
svg
12018ae02c6422bdf3325195ccaebac7d570abf1b1ff7376a004db0f04ee0843
2026-01-16T00:00:00-05:00
Analysis and Experimental Demonstration of Amplitude Amplification for Combinatorial Optimization
arXiv:2601.10473v1 Announce Type: new Abstract: Quantum Amplitude Amplification (QAA), the generalization of Grover's algorithm, is capable of yielding optimal solutions to combinatorial optimization problems with high probabilities. In this work we extend the conventional 2-dimensional representation of Grover's (orthogonal collective states) to oracles which encode cost functions such as QUBO, and show that linear cost functions are a special case whereby an exact formula exists for determining optimal oracle parameter settings. Using simulations of problem sizes up to 40 qubits we demonstrate QAA's algorithmic performance across all possible solutions, with an emphasis on the closeness in Grover-like performance for solutions near the global optimum. We conclude with experimental demonstrations of generalized QAA on both IBMQ (superconducting) and IonQ (trapped ion) qubits, showing that the observed probabilities of each basis state match our equations as a function of varying the free parameters in the oracle and diffusion operators.
https://arxiv.org/abs/2601.10473
Academic Papers
svg
45d21956688f82815521b5f17d523d7996ee38f3eba9701834ca69fb0eb97e7b
2026-01-16T00:00:00-05:00
Optimized readout strategies for neutral atom quantum processors
arXiv:2601.10492v1 Announce Type: new Abstract: Neutral atom quantum processors have emerged as a promising platform for scalable quantum information processing, offering high-fidelity operations and exceptional qubit scalability. A key challenge in realizing practical applications is efficiently extracting readout outcomes while maintaining high system throughput, i.e., the rate of quantum task executions. In this work, we develop a theoretical framework to quantify the trade-off between readout fidelity and atomic retention. Moreover, we introduce a metric of quantum circuit iteration rate (qCIR) and employ normalized quantum Fisher information to characterize the system's overall performance. Further, by carefully balancing fidelity and retention, we demonstrate a readout strategy for optimizing information acquisition efficiency. Considering the experimentally feasible parameters for 87Rb atoms, we demonstrate that qCIRs of 197.2 Hz and 154.5 Hz are achievable using single photon detectors and cameras, respectively. These results provide practical guidance for constructing scalable and high-throughput neutral atom quantum processors for applications in sensing, simulation, and near-term algorithm implementation.
https://arxiv.org/abs/2601.10492
Academic Papers
svg
a60df1d23dea6ec8e7b0c466cddd6e12df3adfd40cb5f963dc62de4b1afc4e43
2026-01-16T00:00:00-05:00
Deterministic and scalable generation of large Fock states
arXiv:2601.10559v1 Announce Type: new Abstract: The scalable and deterministic preparation of large Fock-number states represents a long-standing frontier in quantum science, with direct implications for quantum metrology, communication, and simulation. Despite significant progress in small-scale implementations, extending such state generation to large excitation numbers while maintaining high fidelity remains a formidable challenge. Here, we present a scalable protocol for generating large Fock states with fidelities exceeding 0.9 up to photon numbers on the order of 100, achieved using only native control operations and, when desired, further enhanced by an optional post-selection step. Our method employs a hybrid Genetic-Adam optimization framework that combines the global search efficiency of genetic algorithms with the adaptive convergence of Adam to optimize multi-pulse control sequences comprising Jaynes-Cummings interactions and displacement operations, both of which are native to leading experimental platforms. The resulting control protocols achieve high fidelities with shallow circuit depths and strong robustness against parameter variations. These results establish an efficient and scalable pathway toward high-fidelity non-classical state generation for precision metrology and fault-tolerant quantum technologies.
https://arxiv.org/abs/2601.10559
Academic Papers
svg
7a215b0567eb0d084f7f309f8f28e6237cdbe95f3b615acbd119f3dbde8d3c3b
2026-01-16T00:00:00-05:00
Quantum solver for single-impurity Anderson models with particle-hole symmetry
arXiv:2601.10594v1 Announce Type: new Abstract: Quantum embedding methods, such as dynamical mean-field theory (DMFT), provide a powerful framework for investigating strongly correlated materials. A central computational bottleneck in DMFT is in solving the Anderson impurity model (AIM), whose exact solution is classically intractable for large bath sizes. In this work, we develop and benchmark a quantum-classical hybrid solver tailored for DMFT applications, using the variational quantum eigensolver (VQE) to prepare the ground state of the AIM with shallow quantum circuits. The solver uses a unified ansatz framework to prepare the particle and hole excitations of the ground state from parameter-shifted circuits, enabling the reconstruction of the impurity Green's function through a continued-fraction expansion. We evaluate the performance of this approach across a few bath sizes and interaction strengths under noisy, shot-limited conditions. We compare three optimization routines (COBYLA, Adam, and L-BFGS-B) in terms of convergence and fidelity, assess the benefits of estimating a quantum-computed moment (QCM) correction to the variational energies, and benchmark the approach by comparing the reconstructed density of states (DOS) against that obtained using a classical pipeline. Our results demonstrate the feasibility of Green's function reconstruction on near-term devices and establish practical benchmarks for quantum impurity solvers embedded within self-consistent DMFT loops.
https://arxiv.org/abs/2601.10594
Academic Papers
svg
a01c1a3bec23a6524e5126f6619f103caa47e1224e3d074b4056087d3369cacc
2026-01-16T00:00:00-05:00
Quantifying the properties of evolutionary quantum states of the XXZ spin model using quantum computing
arXiv:2601.10650v1 Announce Type: new Abstract: The entanglement distance of evolutionary quantum states of a two-spin system with the XXZ model has been studied, both analytically and using quantum computing. An analytical dependence of the entanglement distance on the values of the model coupling constants and the parameters of the initial states has been obtained. The speed of evolution of the two-spin system has also been investigated, again both analytically and with quantum computing, yielding an explicit dependence of the speed of evolution on the coupling constants and on the parameters of the initial state. The results of quantum computations are in good agreement with the theoretical predictions.
https://arxiv.org/abs/2601.10650
Academic Papers
svg
598c9fc26d0bf511789d21e73373c162b27e55a5430a22a7241a12f5bb3e7d6e
2026-01-16T00:00:00-05:00
Symmetry-based Perspectives on Hamiltonian Quantum Search Algorithms and Schrodinger's Dynamics between Orthogonal States
arXiv:2601.10655v1 Announce Type: new Abstract: It is known that the continuous-time variant of Grover's search algorithm is characterized by quantum search frameworks that are governed by stationary Hamiltonians, which result in search trajectories confined to the two-dimensional subspace of the complete Hilbert space formed by the source and target states. Specifically, the search approach is ineffective when the source and target states are orthogonal. In this paper, we employ normalization, orthogonality, and energy limitations to demonstrate that it is unfeasible to breach time-optimality between orthogonal states with constant Hamiltonians when the evolution is limited to the two-dimensional space spanned by the initial and final states. Deviations from time-optimality for unitary evolutions between orthogonal states can only occur with time-dependent Hamiltonian evolutions or, alternatively, with constant Hamiltonian evolutions in higher-dimensional subspaces of the entire Hilbert space. Ultimately, we employ our quantitative analysis to provide meaningful insights regarding the relationship between time-optimal evolutions and analog quantum search methods. We determine that the challenge of transitioning between orthogonal states with a constant Hamiltonian in a sub-optimal time is closely linked to the shortcomings of analog quantum search when the source and target states are orthogonal and not interconnected by the search Hamiltonian. In both scenarios, the fundamental cause of the failure lies in the existence of an inherent symmetry within the system.
https://arxiv.org/abs/2601.10655
Academic Papers
svg
898e20331b3d67e6b599578500c5ac9291aad0efcea79601f5ca42cfc10117cd
2026-01-16T00:00:00-05:00
Geometric Aspects of Entanglement Generating Hamiltonian Evolutions
arXiv:2601.10662v1 Announce Type: new Abstract: We examine the pertinent geometric characteristics of entanglement that arise from stationary Hamiltonian evolutions transitioning from separable to maximally entangled two-qubit quantum states. From a geometric perspective, each evolution is characterized by means of geodesic efficiency, speed efficiency, and curvature coefficient. Conversely, from the standpoint of entanglement, these evolutions are quantified using various metrics, such as concurrence, entanglement power, and entangling capability. Overall, our findings indicate that time-optimal evolution trajectories are marked by high geodesic efficiency, with no energy resource wastage, no curvature (i.e., zero bending), and an average path entanglement that is less than that observed in time-suboptimal evolutions. Additionally, when analyzing separable-to-maximally entangled evolutions between nonorthogonal states, time-optimal evolutions demonstrate a greater short-time degree of nonlocality compared to time-suboptimal evolutions between the same initial and final states. Interestingly, the reverse is generally true for separable-to-maximally entangled evolutions involving orthogonal states. Our investigation suggests that this phenomenon arises because suboptimal trajectories between orthogonal states are characterized by longer path lengths with smaller curvature, which are traversed with a higher energy resource wastage compared to suboptimal trajectories between nonorthogonal states. Consequently, a higher initial degree of nonlocality in the unitary time propagators appears to be essential for achieving the maximally entangled state from a separable state. Furthermore, when assessing optimal and suboptimal evolutions...
https://arxiv.org/abs/2601.10662
Academic Papers
svg
6c67018f0d474a23ac1e46359ecf8c03c01d358091c9bd60dd0e6f333aed11e8
2026-01-16T00:00:00-05:00
Efficiency, Curvature, and Complexity of Quantum Evolutions for Qubits in Nonstationary Magnetic Fields
arXiv:2601.10672v1 Announce Type: new Abstract: In optimal quantum-mechanical evolutions, motion can take place along paths of minimal length within an optimal time frame. Alternatively, optimal evolutions may occur along established paths without any waste of energy resources and achieving 100% speed efficiency. Unfortunately, realistic physical scenarios often lead to less-than-ideal evolutions that demonstrate suboptimal efficiency, nonzero curvature, and a high level of complexity. In this paper, we provide an exact analytical expression for the curvature of a quantum evolution pertaining to a two-level quantum system subjected to various time-dependent magnetic fields. Specifically, we examine the dynamics produced by a two-parameter nonstationary Hermitian Hamiltonian with unit speed efficiency. To enhance our understanding of the physical implications of the curvature coefficient, we analyze the curvature behavior in relation to geodesic efficiency, speed efficiency, and the complexity of the quantum evolution (as described by the ratio of the difference between accessible and accessed Bloch-sphere volumes for the evolution from initial to final state to the accessible volume for the given quantum evolution). Our findings indicate that, generally, efficient quantum evolutions exhibit lower complexity compared to inefficient ones. However, we also note that complexity transcends mere length. In fact, longer paths that are sufficiently curved can demonstrate a complexity that is less than that of shorter paths with a lower curvature coefficient.
https://arxiv.org/abs/2601.10672
Academic Papers
svg
90141a30d84631e91a1d116f7123d53acf584a6b7ac61011ae34eba5e7bfc4d3
2026-01-16T00:00:00-05:00
Scalable Spin Squeezing in Power-Law Interacting XXZ Models with Disorder
arXiv:2601.10703v1 Announce Type: new Abstract: While spin squeezing has been traditionally considered in all-to-all interacting models, recent works have shown that spin squeezing can occur in systems with power-law interactions, leading to direct testing in Rydberg atoms, trapped ions, ultracold atoms and nitrogen vacancy (NV) centers in diamond. For the latter, Wu et al., Nature 646 (2025) demonstrated that spin squeezing is heavily affected by positional disorder, reducing any capacity for a practical squeezing advantage, which requires scalability with the system size. In this Letter we explore the robustness of spin squeezing in two-dimensional lattices with a fraction of unoccupied lattice sites. Using semi-classical modeling, we demonstrate the existence of scalable squeezing in power-law interacting XXZ models up to a disorder threshold, above which squeezing is not scalable. We produce a phase diagram for scalable squeezing, and explain its absence in the aforementioned NV experiment. Our work illustrates the maximum disorder allowed for realizing scalable spin squeezing in a host of quantum simulators, highlights a regime with substantial tolerance to disorder, and identifies controlled defect creation as a promising route for scalable squeezing in solid-state systems.
https://arxiv.org/abs/2601.10703
Academic Papers
svg
ec3da9cfa6aa1768cb30af62a55afbdbb38369528e952cf93dcc2ea5dafe5f1d
2026-01-16T00:00:00-05:00
Emergent Nonperturbative Universal Floquet Localization
arXiv:2601.09793v1 Announce Type: cross Abstract: We show that a robust, nonperturbative localization plateau emerges in periodically driven quasiperiodic lattices, independent of the static localization properties and drive protocol. Using exact Floquet dynamics, Floquet perturbation theory, and optimal-order van Vleck analysis, we identify a fine-tuned amplitude-to-frequency ratio where all Floquet states become localized despite dense resonances. The van Vleck expansion achieves superasymptotic accuracy up to an optimal order; it ultimately breaks down due to resonant hybridization at a weak quasiperiodic potential, revealing that the observed localization is nonperturbative.
https://arxiv.org/abs/2601.09793
Academic Papers
svg
c561884ae4be75163ae8b8f9541bff07a6f3ccd2413f745dc001c30088f45d1b
2026-01-16T00:00:00-05:00
Probing the Chaos to Integrability Transition in Double-Scaled SYK
arXiv:2601.09801v1 Announce Type: cross Abstract: We investigate how a thermodynamical first-order phase transition affects the dynamical chaotic behaviour of a given model. To this effect, we analyze the model of Berkooz, Brukner, Jia and Mamroud that interpolates between the double-scaled SYK model and an integrable chord Hamiltonian. This model displays a first-order phase transition given by a kink in the free energy. We map out the dynamical behaviour, as characterized by chord number, Krylov complexity, and operator size, of the model across the phase diagram. We observe a jump in the chord numbers at the transition point, in agreement with the first-order transition. We further determine how scrambling measures, i.e.~the growth of the Lanczos coefficients and the time dependence of the operator size, change across the phase diagram. Deep inside the two phases, these measures indeed display integrable and chaotic behaviour, respectively. Across the transition however, we observe no qualitative change in these measures. This means that the thermodynamical transition does not imply a sharp transition in the growth exponent characterizing the dynamical chaotic behaviour. We also discuss a possible holographic interpretation of the model.
https://arxiv.org/abs/2601.09801
Academic Papers
svg
2d00d9d292830a0157dbeb8718c951c733ac8bb5781b9e770ac740561e124861
2026-01-16T00:00:00-05:00
Quantum Optical Inspired Models for Unitary Black Hole Evaporation
arXiv:2601.09820v1 Announce Type: cross Abstract: In this work, we describe optically inspired models for unitary black hole (BH) evaporation. The goals of these models are (i) to be operationally simple, (ii) to approximately preserve the thermal nature of the emitted Hawking Radiation (HR), and (iii) to attempt to reproduce the Page Curve that purports that information flows forth from the BH when it has evaporated to approximately half its initial mass. We concentrate on modeling the BH as a single mode squeezed state successively interacting, by means of beam splitters and squeezers, with vacuum modes near the horizon, giving rise to entangled pairs representing the external Hawking radiation and its partner particle inside the horizon. Since all states and operations are Gaussian throughout, we use a symplectic formalism to track the evolution of the composite system through the evolving means and variances of their quadrature operators. This allows us to easily compute correlations and entanglement between the BH and the HR, as well as calculate correlations between the BH at early and late times.
https://arxiv.org/abs/2601.09820
Academic Papers
svg
8aa0d47de78d60c0383b04c8f9ab52b7f4cc62a7c03f5024d2f202dc8d9477d4
2026-01-16T00:00:00-05:00
Combinatorial properties of holographic entropy inequalities
arXiv:2601.09987v1 Announce Type: cross Abstract: A holographic entropy inequality (HEI) is a linear inequality obeyed by Ryu-Takayanagi holographic entanglement entropies, or equivalently by the minimum cut function on weighted graphs. We establish a new combinatorial framework for studying HEIs, and use it to prove several properties they share, including two majorization-related properties as well as a necessary and sufficient condition for an inequality to be an HEI. We thereby resolve all the conjectures presented in [arXiv:2508.21823], proving two of them and disproving the other two. In particular, we show that the null reduction of any superbalanced HEI passes the majorization test defined in [arXiv:2508.21823], thereby providing strong new evidence that all HEIs are obeyed in time-dependent holographic states.
https://arxiv.org/abs/2601.09987
Academic Papers
svg
c69b391582be7fd3a50dbda14af6b3012ec3004cfc90da1ae04c08d46c3fa02a
2026-01-16T00:00:00-05:00
Holographic entropy inequalities pass the majorization test
arXiv:2601.09989v1 Announce Type: cross Abstract: Quantities computed by minimal cuts, such as entanglement entropies achievable by the Ryu-Takayanagi proposal in the AdS/CFT correspondence, are constrained by linear inequalities. We prove a previously conjectured property of all such constraints: Any $k$ systems on the "greater-than" side of the inequality are subsumed in some $k$ systems on its "less-than" side (accounting for multiplicity). This finding adds evidence that the same inequalities also constrain the entropies under time-dependent conditions because it preempts a large class of potential counterexamples. We prove several other properties of holographic entropy inequalities and comment on their relation to quantum erasure correction and the Renormalization Group.
https://arxiv.org/abs/2601.09989
Academic Papers
svg
a3a484bcbee6bac97c05df2431867442e9a5cf8cf12881608eef2118cce6572c
2026-01-16T00:00:00-05:00
Hybrid superinductance with Al/InAs
arXiv:2601.10023v1 Announce Type: cross Abstract: We report microwave spectroscopy of Josephson junction chains made from an epitaxial Al/InAs heterostructure. The chains exhibit superinductance, with characteristic wave impedance exceeding $R_{Q} = \hbar/(2e)^{2}$. The planar nature of the junctions results in a large plasma frequency, with no measurable deviations from ideal dispersion up to $12~\mathrm{GHz}$. Internal quality factors decrease sharply with frequency, which we describe with a simple loss model. The possibility of a loss mechanism intrinsic to the superconductor-semiconductor junction is considered.
https://arxiv.org/abs/2601.10023
Academic Papers
svg
7306af488e323f10d8c4316681c2d1bc99bb3dc6b16172bd58c0a607302448c0
2026-01-16T00:00:00-05:00
Anomalous transport in quasiperiodic lattices: emergent exceptional points at band edges and log-periodic oscillations
arXiv:2601.10056v1 Announce Type: cross Abstract: Quasiperiodic systems host exotic transport regimes that are distinct from those found in periodic or disordered lattices. In this work, we study quantum transport in the Aubry-Andr\'e-Harper lattice in a two-terminal setup coupled to zero-temperature reservoirs, where the conductance is evaluated via the nonequilibrium Green's function method. In the extended phase, we uncover a universal subdiffusive transport when the bath chemical potential aligns with the band edges. Specifically, the typical conductance displays a scaling of $\mathcal{G}_{\text{typ}}\sim L^{-2}$ with system size $L$. We attribute this behavior to the emergence of an exceptional point (Jordan normal form) in the transfer matrix in the thermodynamic limit. In the localized phase, the conductance shows exponential decay governed by the Lyapunov exponent. Intriguingly, in the critical phase, we identify pronounced log-periodic oscillations of the conductance as a function of system size, arising from the discrete scale invariance inherent to the singular-continuous spectrum. We further extend our analysis to the generalized Aubry-Andr\'e-Harper model and provide numerical evidence suggesting that the exact mobility edge resides within a finite spectral gap. This results in a counter-intuitive exponential suppression of conductance precisely at the mobility edge. Our work highlights the distinct transport behaviors in quasiperiodic systems and elucidates how they are rigorously dictated by the underlying local spectral structure.
https://arxiv.org/abs/2601.10056
Academic Papers
svg
cf25d2ed91cb7053591f08ae37aa1800120db589709c6bc6bc4a00528cdbdc09
2026-01-16T00:00:00-05:00
Minimally Truncated SU(3) Lattice Gauge Theory and String Tension
arXiv:2601.10065v1 Announce Type: cross Abstract: We study SU(3) gauge theory on small lattices in the minimal (qutrit) electric field truncation retaining only the ${\bf 1}, {\bf 3}, {\bf \overline{3}}$ representations for the link variables. Explicit expressions are given for the Kogut-Susskind Hamiltonian for the square plaquette chain and the two-dimensional honeycomb lattice. Our formalism can be easily extended to the minimally truncated general SU($N_c$) gauge theory. The addition of (static) quarks is discussed. We present results for the energy spectrum of the gauge field on these lattices by exact diagonalization of the Hamiltonian and analyze its statistical properties. We also compute the SU(3) string tension and discuss how it is modified by vacuum fluctuations. Finally, we calculate the potential energies of a static quark-antiquark pair and three static quarks and study their screening at finite temperature.
https://arxiv.org/abs/2601.10065
Academic Papers
svg
a57f7056a04cce594cc8fbc847fa922d2e1be1b17380c2b08aa8edff968d68c5
2026-01-16T00:00:00-05:00
Random matrix theory universality of current operators in spin-$S$ Heisenberg chains
arXiv:2601.10211v1 Announce Type: cross Abstract: Quantum chaotic systems exhibit certain universal statistical properties that closely resemble predictions from random matrix theory (RMT). With respect to observables, it has recently been conjectured that, when truncated to a sufficiently narrow energy window, their statistical properties can be described by a unitarily invariant ensemble, and testable criteria have been introduced, which are based on the scaling behavior of free cumulants. In this paper, we investigate the conjecture numerically in translationally invariant Heisenberg spin chains with spin quantum number $S =\frac{1}{2},1,\frac{3}{2}$. Combining a quantum-typicality-based numerical method with the exploitation of the system's symmetries, we study the spin current operator and find clear evidence of consistency with the proposed criteria in chaotic cases. Our findings further support the conjecture of the existence of RMT universality as manifest in the observable properties of quantum chaotic systems.
https://arxiv.org/abs/2601.10211
Academic Papers
svg
32634c623a8fb68313e2c0bb6a595d032f082f5ff895f24836d06bf8eb6f5a79
2026-01-16T00:00:00-05:00
Quantum Theory and Unusual Dielectric Functions of Graphene
arXiv:2601.10478v1 Announce Type: cross Abstract: We address the spatially nonlocal dielectric functions of graphene at any frequency derived starting from the first principles of thermal quantum field theory using the formalism of the polarization tensor. After a brief review of this formalism, the longitudinal and transverse dielectric functions are considered at any relationship between the frequency and the wave vector. The analytic properties of their real and imaginary parts are investigated at low and high frequencies. Emphasis is given to the double pole at zero frequency which arises in the transverse dielectric function. The role of this unusual property for solving the problem of disagreement between experiment and theory in the Casimir effect is discussed. We conjecture that a more complete dielectric response of ordinary metals should also be spatially nonlocal and that its transverse part may possess the double pole in the region of evanescent waves.
https://arxiv.org/abs/2601.10478
Academic Papers
svg
59d5e116f7ae321ad767bdc86d5f280f9427cb3d074376391ac45cd4b7bd34d5
2026-01-16T00:00:00-05:00
Energy Landscape Structure of Small Graph Isomorphism Under Variational Optimization
arXiv:2111.09821v3 Announce Type: replace Abstract: We investigate a quadratic unconstrained binary optimization (QUBO) formulation of the graph isomorphism problem using the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE). For small graph instances, we observe that isomorphic pairs exhibit consistent clustering in variational energies, indicating that the Hamiltonian successfully encodes structural features. However, we demonstrate that low variational energy alone is an unreliable certifier of isomorphism due to the high probability of converging to infeasible states that violate bijection constraints. To address this, we analyze optimization trajectories rather than final energies; these trajectory-based diagnostics consistently outperform naive energy thresholding, though absolute performance remains limited. Our results characterize the current limits of variational algorithms for graph isomorphism, positioning energy landscape analysis as a diagnostic tool rather than a scalable decision procedure in the NISQ regime.
https://arxiv.org/abs/2111.09821
Academic Papers
svg
174c563639913f6bef59f176b55802c93a881b5f8ea2dc2186d73f407cafcbf0
2026-01-16T00:00:00-05:00
Classifying Measurement Incompatibility under Classical Pre- and Post-Processing Operations
arXiv:2401.01236v3 Announce Type: replace Abstract: Measurement incompatibility has proved to be an important resource for quantum information processing. In this work, we present an operational approach that leverages classical operations on the inputs (pre-processing) and outputs (post-processing) of measurement devices to explore different layers of incompatibility among the measurements performed by the device. We study classifications of measurement incompatibility with respect to these two types of classical operations, viz., post-processing or coarse-graining of measurement outcomes and pre-processing or convex-mixing of different measurements. We derive analytical criteria for determining when a set of projective measurements is fully incompatible with respect to coarse-graining or convex-mixing. Robustness against white noise for different layers of incompatibility for mutually unbiased bases is investigated. Furthermore, we study operational witnesses for incompatibility subject to these classical operations, using the input-output statistics of Bell-type experiments as well as experiments in the prepare-and-measure scenario.
https://arxiv.org/abs/2401.01236
Academic Papers
svg
2968f19faa4093df17ca3a209b8de3686187cfc75fb5fa3f7b085a32c3b58548
2026-01-16T00:00:00-05:00
Useful entanglement can be extracted from noisy graph states
arXiv:2402.00937v3 Announce Type: replace Abstract: Cluster states and graph states in general offer a useful model of the stabilizer formalism and a path toward the development of measurement-based quantum computation. Their defining structure - the stabilizer group - encodes all possible correlations that can be observed during measurement. The measurement outcomes which are consistent with the stabilizer structure make error correction possible. Here, we leverage both properties to design feasible families of states that can be used as robust building blocks of quantum computation. This procedure reduces the effect of experimentally relevant noise models on the extraction of smaller entangled states from the larger noisy graph state. In particular, we study the extraction of Bell pairs from linearly extended graph states - this has immediate consequences for state teleportation across the graph. We show that robust entanglement can be extracted by proper design of the linear graph with only a minimal overhead of physical qubits. This scenario is relevant to systems in which entanglement can be created between neighboring sites. The results shown in this work provide a mathematical framework for noise reduction in measurement-based quantum computation. With proper connectivity structures, the effect of noise can be minimized for a large class of realistic noise processes.
https://arxiv.org/abs/2402.00937
Academic Papers
svg
939a28d1ff8378d40cc660b7b080378f35bdcf0f7dc6cb36ee1f3529787e7281
2026-01-16T00:00:00-05:00
Quantum Analog of Vicsek Model for Active Matter
arXiv:2407.09860v2 Announce Type: replace Abstract: We propose a quantum model consisting of an ensemble of overdamped spin$-1/2$ particles with ferromagnetic couplings, driven by a radially homogeneous magnetic field. The spontaneous magnetization of the spin components breaks the $SO(3)$ (or $SO(2)$) symmetry, inducing an ordered phase of flocking. Our model converges to the Vicsek model in the classical limit and corresponds to the Toner-Tu model in the continuous limit. Our investigation not only elucidates the intrinsic connection between these two models, but also introduces new opportunities for exploring the mechanisms underlying flocking order and correlations at the quantum level, which may pave the way for a new field of research -- quantum active matter.
https://arxiv.org/abs/2407.09860
Academic Papers
svg
62cc936b4c9e1347c77f0fd11cab846aeef946acabb9b3683b2e755ee6733504
2026-01-16T00:00:00-05:00
Undecidability of the spectral gap in rotationally symmetric Hamiltonians
arXiv:2410.13589v2 Announce Type: replace Abstract: The problem of determining the existence of a spectral gap in a lattice quantum spin system was previously shown to be undecidable for one [J. Bausch et al., "Undecidability of the spectral gap in one dimension", Physical Review X 10 (2020)] or more dimensions [T. S. Cubitt et al., "Undecidability of the spectral gap", Nature 528 (2015)]. In these works, families of nearest-neighbor interactions are constructed whose spectral gap depends on the outcome of a Turing machine Halting problem, therefore making it impossible for an algorithm to predict its existence. While these models are translationally invariant, they are not invariant under the other symmetries of the lattice, a property which is commonly found in physically relevant cases. This poses the question of whether the spectral gap problem could be decidable for Hamiltonians with stronger symmetry constraints. We give a negative answer to this question, in the case of models with 4-body (plaquette) interactions on the square lattice satisfying rotation, but not reflection, symmetry: rotational symmetry is not enough to make the problem decidable.
https://arxiv.org/abs/2410.13589
Academic Papers
svg
3fb1555e3155b30b6bfc9bfdc5cf28ee4c4f046a68a54120900964cc982dd444
2026-01-16T00:00:00-05:00
Entropy Density Benchmarking of Near-Term Quantum Circuits
arXiv:2412.18007v2 Announce Type: replace Abstract: Understanding the limitations imposed by noise on current and next-generation quantum devices is a crucial step towards demonstrating practical quantum advantage. In this work, we investigate the accumulation of entropy density as a benchmark to monitor the performance of quantum processing units. We provide a proof-of-principle demonstration of our novel methodology which entails developing simple heuristic models of how entropy accumulates, testing them against real QPU experiments, and finally using these models to determine a circuit volume threshold above which quantum advantage is unattainable. Monitoring entropy density not only offers a novel approach that complements existing circuit-level benchmarking techniques, but more importantly, it bridges the gap between circuit-level and application-level benchmarking protocols. In particular, our heuristic model of entropy accumulation allows us to outperform existing techniques that bound the circuit size threshold for quantum advantage.
https://arxiv.org/abs/2412.18007
Academic Papers
svg
802af1dbb1afcef3853835580f91b0e87303623aaf340be93554ebc1105085f2
2026-01-16T00:00:00-05:00
Randomized measurements for multi-parameter quantum metrology
arXiv:2502.03536v3 Announce Type: replace Abstract: The optimal quantum measurements for estimating different unknown parameters in a parameterized quantum state are usually incompatible with each other. Traditional approaches to addressing the measurement incompatibility issue, such as the Holevo Cram\'{e}r--Rao bound, suffer from multiple difficulties that hinder practical applicability, as the optimal measurement strategies are usually state-dependent, difficult to implement, and require complex analyses to determine. Here we study randomized measurements as a new approach for multi-parameter quantum metrology. We show quantum measurements on single copies of quantum states given by $3$-designs perform near-optimally when estimating an arbitrary number of parameters in pure states and, more generally, approximately low-rank well-conditioned states, whose metrological information is largely concentrated in a low-dimensional subspace. The near-optimality is also shown in estimating the maximal number of parameters for three types of mixed states that are well-conditioned on their supports. Examples of fidelity estimation and Hamiltonian estimation are explicitly provided to demonstrate the power and limitation of randomized measurements in multi-parameter quantum metrology.
https://arxiv.org/abs/2502.03536
Academic Papers
svg
49be27362ed48eb6348c7106fdf09e0eaa23804a01a4a0b7bb18d471df70e6b6
2026-01-16T00:00:00-05:00
Simulating Noncausality with Quantum Control of Causal Orders
arXiv:2502.15579v3 Announce Type: replace Abstract: Logical consistency with free local operations is compatible with non-trivial classical communications, where all parties can be both in each other's past and future, a phenomenon known as noncausality. Noncausal processes, such as the "Lugano (AF/BW) process", violate causal inequalities, yet their physical realizability remains an open question. In contrast, the quantum switch, a physically realizable process with indefinite causal order, can only generate causal correlations. Building on a recently established correspondence [Kunjwal & Baumeler, PRL 131, 120201 (2023)] between the SHIFT measurement, which exhibits nonlocality without entanglement, and the Lugano process, we demonstrate that the SHIFT measurement can be implemented using a quantum switch of classical communications in a scenario with quantum inputs. This shows that the structure of the Lugano process can be simulated by a quantum switch and that successful SHIFT discrimination witnesses causal nonseparability rather than noncausality. Finally, we identify a broad class of "superposition of classical communications" derived from classical processes without global past capable of realizing similar causally indefinite measurements. We examine these results in relation to the ongoing debate on implementations of indefinite causal orders.
https://arxiv.org/abs/2502.15579
Academic Papers
svg
48afdc760b490c4cb7be2d53358fecd22734c1e25f917e70d85a94261cbb7dd2
2026-01-16T00:00:00-05:00
Protected phase gate for the $0$-$\pi$ qubit using its internal modes
arXiv:2503.14634v3 Announce Type: replace Abstract: Protected superconducting qubits such as the $0$-$\pi$ qubit promise to substantially reduce physical error rates. However, a key challenge in the field is designing gates for these qubits that do not compromise their protection, or become infeasibly slow as the protection of the qubit is improved. In this work we propose a protected phase gate that is compatible with the protected regime of the $0$-$\pi$ qubit, and does not suffer from spurious coupling to additional circuit modes. Our gate utilises an internal mode of the circuit as an ancilla, and is achieved by varying the qubit-ancilla coupling via a tunable Josephson element. Through numerical simulations, we study how the gate error scales with the circuit parameters of the $0$-$\pi$ qubit and the tunable Josephson element that enacts the gate. Ultimately, we find that a protected gate with the $0$-$\pi$ qubit is possible with near-term circuit parameters. Our work opens up the possibility of performing protected gates on protected superconducting qubits, which may significantly reduce hardware overheads for quantum computation.
https://arxiv.org/abs/2503.14634
Academic Papers
svg
31ef70c70b7fc0fe4655ebde0c0debd71c9708326ccc2f0284e629a9b86cc1b4
2026-01-16T00:00:00-05:00
Optomechanical quantum bus for donor spins in silicon
arXiv:2503.18764v3 Announce Type: replace Abstract: Silicon is the foundation of current information technology, and a promising platform for future quantum information technology, as silicon-based qubits exhibit some of the longest coherence times in the solid state. At the same time, silicon is the underlying material for advanced photonics activity, and photonic structures in silicon can be used to define optomechanical cavities where the vibrations of nanoscale mechanical resonators can be probed down to the quantum level with laser light. Here, we propose to bring all these developments together by coupling silicon donor spins to optomechanical structures. We show theoretically and numerically that this allows telecom wavelength optical readout of the spin qubits and implementing high-fidelity entangling two-qubit gates between donor spins that are spatially separated by tens of micrometers. We present an optimized geometry of the proposed device and discuss with the help of numerical simulations the predicted performance of the proposed quantum bus. We analyze the optomechanical spin readout fidelity and find the optimal donor species for different coupling mechanisms.
https://arxiv.org/abs/2503.18764
Academic Papers
svg
e88619d6bebb0d392e23932ec739210828b79005d082eb7b276bb07e89b5427b
2026-01-16T00:00:00-05:00
Resonant fragility and nonresonant robustness of Floquet eigenstates in kicked spin systems
arXiv:2504.13257v3 Announce Type: replace Abstract: In classical systems, the Kolmogorov-Arnold-Moser (KAM) theorem establishes that resonant tori of integrable Hamiltonians are destroyed by any nonintegrable perturbation, whereas nonresonant tori are only deformed up to a finite value of the perturbation parameter. In this contribution, we identify a quantum analog of this differentiated sensitivity for one-degree-of-freedom spin Hamiltonians subject to periodic instantaneous kicks. After detecting quantum signatures of resonances in the participation ratio and in the quasiprobability phase-space distribution of Floquet eigenstates of the perturbed Hamiltonian, we show that eigenstates of the unperturbed Hamiltonian exhibit greater sensitivity to the perturbation when they satisfy a resonance condition. The sensitivity is quantified through the fidelity between perturbed and unperturbed eigenstates. This differentiated sensitivity becomes increasingly pronounced as the system size grows. Our findings are supported by numerical results and insights from analytical calculations based on unitary perturbation theory. Although our analysis focuses on kicked models, the mechanism could be extended to more general periodic drivings, providing a preliminary step toward a quantum counterpart of the classical breaking of resonant tori.
https://arxiv.org/abs/2504.13257
Academic Papers
svg
1cab72f49ac450a3e22daa6c54df92aea6ed4ffffe2f54c99c425768367bf1a4
2026-01-16T00:00:00-05:00
Comparing classical and quantum conditional disclosure of secrets
arXiv:2505.02939v3 Announce Type: replace Abstract: The conditional disclosure of secrets (CDS) setting is among the most basic primitives studied in information-theoretic cryptography. Motivated by a connection to non-local quantum computation and position-based cryptography, CDS with quantum resources has recently been considered. Here, we study the differences between quantum and classical CDS, with the aims of clarifying the power of quantum resources in information-theoretic cryptography. We establish the following results: 1) We prove a $\Omega(\log \mathsf{R}_{0,A\rightarrow B}(f)+\log \mathsf{R}_{0,B\rightarrow A}(f))$ lower bound on quantum CDS where $\mathsf{R}_{0,A\rightarrow B}(f)$ is the classical one-way communication complexity with perfect correctness. 2) We prove a lower bound on quantum CDS in terms of two round, public coin, two-prover interactive proofs. 3) For perfectly correct CDS, we give a separation for a promise version of the not-equals function, showing a quantum upper bound of $O(\log n)$ and classical lower bound of $\Omega(n)$. 4) We give a logarithmic upper bound for quantum CDS on forrelation, while the best known classical algorithm is linear. We interpret this as preliminary evidence that classical and quantum CDS are separated even with correctness and security error allowed. We also give a separation for classical and quantum private simultaneous message passing for a partial function, improving on an earlier relational separation. Our results use novel combinations of techniques from non-local quantum computation and communication complexity.
https://arxiv.org/abs/2505.02939
Academic Papers
svg