Published: 2026-01-01
Extrapolating LATE with Weak IVs
arXiv:2512.23854v1 Announce Type: new Abstract: To evaluate the effectiveness of a counterfactual policy, it is often necessary to extrapolate treatment effects on compliers to broader populations. This extrapolation relies on exogenous variation in instruments, which is often weak in practice. This limited variation leads to invalid confidence intervals that are typically too short and cannot be accurately detected by classical methods. For instance, the F-test may falsely conclude that the instruments are strong. Consequently, I develop inference results that are valid even with limited variation in the instruments. These results lead to asymptotically valid confidence sets for various linear functionals of marginal treatment effects, including LATE, ATE, ATT, and policy-relevant treatment effects, regardless of identification strength. This is the first paper to provide weak instrument robust inference results for this class of parameters. Finally, I illustrate my results using data from Agan, Doleac, and Harvey (2023) to analyze counterfactual policies of changing prosecutors' leniency and their effects on reducing recidivism.
https://arxiv.org/abs/2512.23854
Academic Papers
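The weak-instrument problem this abstract targets is easy to reproduce: when a binary instrument barely moves treatment, the Wald/IV ratio for the LATE becomes erratic while the first-stage F-statistic can look unremarkable. A minimal simulation sketch (illustrative only, not the paper's robust procedure; all coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.integers(0, 2, n).astype(float)    # binary instrument
u = rng.normal(size=n)                     # unobserved confounder
# weak first stage: z shifts the latent treatment index only slightly
d = (0.1 * z + u + rng.normal(size=n) > 0).astype(float)
y = 1.0 * d + u + rng.normal(size=n)       # structural treatment effect = 1

# Wald / IV estimate of the LATE: cov(y, z) / cov(d, z)
late_hat = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]

# first-stage F statistic from the regression of d on z
X = np.column_stack([np.ones(n), z])
beta = np.linalg.lstsq(X, d, rcond=None)[0]
resid = d - X @ beta
s2 = resid @ resid / (n - 2)
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
F = float((beta[1] / se) ** 2)
```

Re-running this across seeds shows the instability the abstract describes: `late_hat` swings widely even when `F` is not alarmingly small.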

Published: 2026-01-01
Evaluating Counterfactual Policies Using Instruments
arXiv:2512.24096v1 Announce Type: new Abstract: We study settings in which a researcher has an instrumental variable (IV) and seeks to evaluate the effects of a counterfactual policy that alters treatment assignment, such as a directive encouraging randomly assigned judges to release more defendants. We develop a general and computationally tractable framework for computing sharp bounds on the effects of such policies. Our approach does not require the often tenuous IV monotonicity assumption. Moreover, for an important class of policy exercises, we show that IV monotonicity -- while crucial for a causal interpretation of two-stage least squares -- does not tighten the bounds on the counterfactual policy impact. We analyze the identifying power of alternative restrictions, including the policy invariance assumption used in the marginal treatment effect literature, and develop a relaxation of this assumption. We illustrate our framework using applications to quasi-random assignment of bail judges in New York City and prosecutors in Massachusetts.
https://arxiv.org/abs/2512.24096
Academic Papers

Published: 2026-01-01
Optimal Carbon Prices in an Unequal World: The Role of Regional Welfare Weights
arXiv:2512.24520v1 Announce Type: new Abstract: How should nations price carbon? This paper examines how the treatment of global inequality, captured by regional welfare weights, affects optimal carbon prices. I develop theory to identify the conditions under which accounting for differences in marginal utilities of consumption across countries leads to more stringent global climate policy in the absence of international transfers. I further establish a connection between the optimal uniform carbon prices implied by different welfare weights and heterogeneous regional preferences over climate policy stringency. In calibrated simulations, I find that accounting for global inequality reduces optimal global emissions relative to an inequality-insensitive benchmark. This holds both when carbon prices are regionally differentiated, with emissions 21% lower, and when they are constrained to be globally uniform, with the uniform carbon price 15% higher.
https://arxiv.org/abs/2512.24520
Academic Papers

Published: 2026-01-01
Scaling Charitable Incentives: Policy Selection, Beliefs, and Evidence from a Field Experiment
arXiv:2512.24852v1 Announce Type: new Abstract: Why are interventions with weak evidence still adopted? We study charitable incentives for physical activity in Japan using three linked methods: a randomized field experiment (N=808), a stakeholder belief survey of local government officials and private-sector employees (N=2,400), and a conjoint experiment on policy choice. Financial incentives increase daily steps by about 1,000, whereas charitable incentives deliver a precisely estimated null. Nonetheless, stakeholders greatly overpredict charitable incentives' effects on walking, participation, and prosociality. Conjoint choices show that policymakers value step gains as well as other outcomes, shaping policy choice. Adoption thus reflects multidimensional beliefs and objectives, highlighting policy selection as a scaling challenge.
https://arxiv.org/abs/2512.24852
Academic Papers

Published: 2026-01-01
Antecedents of Consumer Regret Frequency: The Roles of Decision Agency, Status Signaling, and Online Shopping Preference
arXiv:2512.24862v1 Announce Type: new Abstract: Consumer regret is a widespread post-purchase emotion that significantly impacts satisfaction, product returns, complaint behavior, and customer loyalty. Despite its prevalence, there is a limited understanding of why certain consumers experience regret more frequently as a chronic aspect of their engagement in the marketplace. This study explores the antecedents of consumer regret frequency by integrating decision agency, status signaling motivations, and online shopping preferences into a cohesive framework. By analyzing survey data (n=338), we assess whether consumers' perceived agency and decision-making orientation correlate with the frequency of regret, and whether tendencies towards status-related consumption and preferences for online shopping environments exacerbate regret through mechanisms such as increased social comparison, expanded choice sets, and continuous exposure to alternative offers. The findings reveal that regret frequency is significantly linked to individual differences in decision-related orientations and status signaling, with a preference for online shopping further contributing to regret-prone consumption behaviors. These results extend the scope of regret and cognitive dissonance research beyond isolated decision episodes by emphasizing regret frequency as a persistent consumer outcome. From a managerial standpoint, the findings suggest that retailers can alleviate regret-driven dissatisfaction by enhancing decision support, minimizing choice overload, and developing post-purchase reassurance strategies tailored to segments prone to regret.
https://arxiv.org/abs/2512.24862
Academic Papers

Published: 2026-01-01
Recent Contributions to Theories of Discrimination
arXiv:2205.05994v4 Announce Type: replace Abstract: This paper surveys the literature on theories of discrimination, focusing mainly on new contributions. Recent theories expand on the traditional taste-based and statistical discrimination frameworks by considering specific features of learning and signaling environments, often using novel information- and mechanism-design language; analyzing learning and decision making by algorithms; and introducing agents with behavioral biases and misspecified beliefs. An online appendix attempts to narrow the gap between the economic perspective on "theories of discrimination" and the broader study of discrimination in the social science literature by identifying a class of models of discriminatory institutions, made up of theories of discriminatory social norms and discriminatory institutional design.
https://arxiv.org/abs/2205.05994
Academic Papers

Published: 2026-01-01
Degree-Weighted Social Learning
arXiv:2311.07010v3 Announce Type: replace Abstract: We study social learning in which agents weight neighbors' opinions differently based on their degrees, capturing situations in which agents place more trust in well-connected individuals or, conversely, discount their influence. We derive asymptotic properties of learning outcomes in large stochastic networks and analyze how the weighting rule affects societal wisdom and convergence speed. We find that assigning greater weight to higher-degree neighbors harms wisdom but has a non-monotonic effect on convergence speed, depending on the diversity of views within high- and low-degree groups, highlighting a potential trade-off between convergence speed and wisdom.
https://arxiv.org/abs/2311.07010
Academic Papers
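The updating rule the abstract studies can be sketched as a DeGroot-style process in which the weight placed on neighbor j scales with j's degree. A toy version (the network model, the exponent `alpha`, and all sizes are illustrative assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                 # undirected network with self-links
deg = A.sum(axis=1)

alpha = 1.0                              # trust in neighbor j scales with deg_j ** alpha
W = A * deg[None, :] ** alpha
W = W / W.sum(axis=1, keepdims=True)     # row-stochastic listening matrix

x0 = rng.normal(size=n)                  # initial opinions; true state = 0
x = x0.copy()
for _ in range(500):
    x = W @ x                            # degree-weighted DeGroot update
```

With `alpha > 0` the long-run consensus overweights high-degree agents' initial signals, which is the mechanism behind the wisdom loss the abstract reports; `alpha < 0` discounts them instead.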

Published: 2026-01-01
Optimal longevity of a dynasty
arXiv:2409.15978v4 Announce Type: replace Abstract: Standard optimal growth models implicitly impose a "perpetual existence" constraint, which can ethically justify infinite misery in stagnant economies. This paper investigates the optimal longevity of a dynasty within a Critical-Level Utilitarian (CLU) framework. By treating the planning horizon as an endogenous choice variable, we establish a structural isomorphism between static population ethics and dynamic growth theory. Our analysis derives closed-form solutions for optimal consumption and longevity in a roundabout production economy. We show that under low productivity, a finite horizon is structurally optimal to avoid the creation of lives not worth living. This result suggests that the termination of a dynasty can be interpreted not as a failure of sustainability, but as an "altruistic termination" to prevent intergenerational suffering. We also highlight an ethical asymmetry: while a finite horizon is optimal for declining economies, growing economies under intergenerational equity demand the ultimate sacrifice from the current generation.
https://arxiv.org/abs/2409.15978
Academic Papers

Published: 2026-01-01
Persistent gender attitudes and women entrepreneurship
arXiv:2503.04435v2 Announce Type: replace Abstract: We examine whether gender norms - proxied by the outcome of Switzerland's 1981 public referendum on constitutional gender equality - continue to shape local female startup activity today, despite substantial population changes over the past four decades. Using startup data for all Swiss municipalities from 2016 to 2023, we find that municipalities that historically expressed stronger support for gender equality have significantly higher present women-to-men startup ratios. The estimated elasticity of this ratio with respect to the share of "yes" votes in the 1981 referendum is 0.165. This finding is robust to controlling for a subsequent referendum on gender roles, a rich set of municipality-specific characteristics, and contemporary policy measures. The relationship between historical voting outcomes and current women's entrepreneurship is stronger in municipalities with greater population stability - measured by the share of residents born locally - and in municipalities where residents are less likely to report a religious affiliation. While childcare spending is not statistically related to startup rates on its own, it is positively associated with the women-to-men startup ratio when interacted with historical gender norms, consistent with both formal and informal support mechanisms jointly shaping women's entrepreneurial activity.
https://arxiv.org/abs/2503.04435
Academic Papers

Published: 2026-01-01
Multivariate quantile regression
arXiv:2508.15749v2 Announce Type: replace Abstract: This paper introduces a new framework for multivariate quantile regression based on the multivariate distribution function, termed multivariate quantile regression (MQR). In contrast to existing approaches--such as directional quantiles, vector quantile regression, or copula-based methods--MQR defines quantiles through the conditional probability structure of the joint conditional distribution function. The method constructs multivariate quantile curves using sequential univariate quantile regressions derived from conditioning mechanisms, allowing for an intuitive interpretation and flexible estimation of marginal effects. The paper develops theoretical foundations of MQR, including asymptotic properties of the estimators. Through simulation exercises, the estimator demonstrates robust finite sample performance across different dependence structures. As an empirical application, the MQR framework is applied to the analysis of exchange rate pass-through in Argentina from 2004 to 2024.
https://arxiv.org/abs/2508.15749
Academic Papers
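The sequential-conditioning idea behind MQR can be illustrated with a crude empirical analogue in the bivariate case: take the tau-quantile of the first coordinate, then the tau-quantile of the second coordinate among observations near that value. This is a toy sketch of the conditioning mechanism, not the paper's regression estimator; the bandwidth is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
y1 = rng.normal(size=n)
y2 = 0.5 * y1 + rng.normal(size=n)       # dependent second coordinate

def sequential_quantile(y1, y2, tau, band=0.05):
    """Toy sequential quantile: q1 is the tau-quantile of y1; q2 is the
    tau-quantile of y2 among points whose y1 lies within `band` of q1."""
    q1 = np.quantile(y1, tau)
    near = np.abs(y1 - q1) < band
    q2 = np.quantile(y2[near], tau)
    return q1, q2

q1, q2 = sequential_quantile(y1, y2, 0.5)   # bivariate median-type point
```

Sweeping `tau` traces out a curve of such points, which is the intuition behind the multivariate quantile curves the abstract describes.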

Published: 2026-01-01
A further look at Modified ML estimation of the panel AR(1) model with fixed effects and arbitrary initial conditions
arXiv:2508.20753v2 Announce Type: replace Abstract: In this paper we consider two generalizations of Lancaster's (Review of Economic Studies, 2002) Modified ML estimator (MMLE) for the panel AR(1) model with fixed effects and arbitrary initial conditions and possibly covariates when the time dimension, T, is fixed. When the autoregressive parameter rho=1, the limiting modified profile log-likelihood function for this model has a stationary point of inflection and rho is first-order underidentified but second-order identified. We show that, unlike the Random Effects and Transformed MLEs for this type of model, the generalized MMLEs are uniquely defined in finite samples with probability one for any value of |rho| <= 1. When rho=1, the rate of convergence of the MMLEs is N^{1/4}, where N is the cross-sectional dimension of the panel. We derive the limiting distributions of the MMLEs when rho=1. They are generally asymmetric. We also show that Quasi LM tests that are based on the modified profile log-likelihood function and use its expected rather than observed Hessian for hypotheses that include a restriction on rho, and confidence regions that are based on inverting these tests, have correct asymptotic size in a uniform sense when |rho| <= 1. Finally, we investigate the finite sample properties of the MMLEs and the QLM test in a Monte Carlo study.
https://arxiv.org/abs/2508.20753
Academic Papers

Published: 2026-01-01
Targeted Advertising in Elections
arXiv:2509.10422v2 Announce Type: replace Abstract: How does targeted advertising influence electoral outcomes? This paper presents a one-dimensional spatial model of voting in which a privately informed challenger persuades voters to support him over the status quo. I show that targeted advertising enables the challenger to persuade voters with opposing preferences and swing elections decided by such voters; under simple majority, the challenger can defeat the status quo even when it is located at the median voter's bliss point. Ex-ante commitment power is unnecessary -- the challenger succeeds by strategically revealing different pieces of verifiable information to different voters. Publicizing all political ads would mitigate the negative effects of targeted advertising and help voters collectively make the right choice.
https://arxiv.org/abs/2509.10422
Academic Papers

Published: 2026-01-01
A Characterization of Egalitarian and Proportional Sharing Principles: An Efficient Extension Operator Approach
arXiv:2510.24388v2 Announce Type: replace Abstract: Some well-known solutions for cooperative games with transferable utility (TU-games), such as the Banzhaf value, the Myerson value, and the Aumann-Dreze value, fail to satisfy efficiency. Despite their desirable normative properties, this inefficiency motivates the search for a systematic method to restore efficiency while preserving their underlying normative structure. This paper proposes an efficient extension operator as a general approach to restore efficiency by extending any underlying solution to an efficient one. We consider novel axioms for those operators and characterize the egalitarian surplus sharing method and the proportional sharing method in a unified manner. As applications, we demonstrate the generality of our method by developing an efficient-fair extension of solutions for TU games with communication networks, as well as a variant for TU games with coalition structures.
https://arxiv.org/abs/2510.24388
Academic Papers

Published: 2026-01-01
A Practical Guide to Estimating Conditional Marginal Effects: Modern Approaches
arXiv:2504.01355v2 Announce Type: replace-cross Abstract: This Element offers a practical guide to estimating conditional marginal effects (how treatment effects vary with a moderating variable) using modern statistical methods. Commonly used approaches, such as linear interaction models, often suffer from unclarified estimands, limited overlap, and restrictive functional forms. This guide begins by clearly defining the estimand and presenting the main identification results. It then reviews and improves upon existing solutions, such as the semiparametric kernel estimator, and introduces robust estimation strategies, including augmented inverse propensity score weighting with Lasso selection (AIPW-Lasso) and double machine learning (DML) with modern algorithms. Each method is evaluated through simulations and empirical examples, with practical recommendations tailored to sample size and research context. All tools are implemented in the accompanying interflex package for R.
https://arxiv.org/abs/2504.01355
Academic Papers
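The estimand itself is concrete: the marginal effect of treatment D on outcome Y as a function of a moderator M. A kernel-weighted interaction regression in the spirit of the semiparametric estimators the guide discusses can be sketched in a few lines (a simplified illustration, not the interflex implementation; bandwidth and data-generating process are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
m = rng.uniform(-1, 1, n)                 # moderator
d = rng.integers(0, 2, n).astype(float)   # binary treatment
y = (1 + 2 * m) * d + m + rng.normal(size=n)   # true CME(m) = 1 + 2m

def cme_kernel(m0, y, d, m, h=0.2):
    """Gaussian-kernel-weighted regression of y on (1, d, m, d*m) around m0;
    the conditional marginal effect at m0 is b_d + b_dm * m0."""
    w = np.exp(-0.5 * ((m - m0) / h) ** 2)
    X = np.column_stack([np.ones_like(m), d, m, d * m])
    sw = np.sqrt(w)
    b = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return b[1] + b[3] * m0

est = cme_kernel(0.5, y, d, m)            # truth here is 1 + 2 * 0.5 = 2
```

Evaluating `cme_kernel` on a grid of `m0` values traces the effect curve; the guide's AIPW-Lasso and DML strategies replace this simple weighted regression with doubly robust machinery.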

Published: 2026-01-01
A Fuzzy Approach for Randomized Confidence Intervals
arXiv:2512.23866v1 Announce Type: new Abstract: We propose randomized confidence intervals based on the Neyman-Pearson lemma, in order to make them more broadly applicable to distributions that do not satisfy regularity conditions. This is achieved by using the definition of fuzzy confidence intervals. These intervals are compared with methods described in the literature for well-known distributions such as normal, binomial, and Poisson. The results show that in high-variance situations, the new intervals provide better performance. Furthermore, through these intervals, it is possible to compute a lower bound for the expected length, demonstrating that they achieve the minimal maximum expected length for a Bernoulli trial observation.
https://arxiv.org/abs/2512.23866
Academic Papers

Published: 2026-01-01
Forecasting the Term Structure of Interest Rates with SPDE-Based Models
arXiv:2512.23910v1 Announce Type: new Abstract: The Dynamic Nelson--Siegel (DNS) model is a widely used framework for term structure forecasting. We propose a novel extension that models DNS residuals as a Gaussian random field, capturing dependence across both time and maturity. The residual field is represented via a stochastic partial differential equation (SPDE), enabling flexible covariance structures and scalable Bayesian inference through sparse precision matrices. We consider a range of SPDE specifications, including stationary, non-stationary, anisotropic, and nonseparable models. The SPDE--DNS model is estimated in a Bayesian framework using the integrated nested Laplace approximation (INLA), jointly inferring latent DNS factors and the residual field. Empirical results show that the SPDE-based extensions improve both point and probabilistic forecasts relative to standard benchmarks. When applied in a mean--variance bond portfolio framework, the forecasts generate economically meaningful utility gains, measured as performance fees relative to a Bayesian DNS benchmark under monthly rebalancing. Importantly, incorporating the structured SPDE residual substantially reduces cross-maturity and intertemporal dependence in the remaining measurement error, bringing it closer to white noise. These findings highlight the advantages of combining DNS with SPDE-driven residual modeling for flexible, interpretable, and computationally efficient yield curve forecasting.
https://arxiv.org/abs/2512.23910
Academic Papers
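The DNS part of the model is standard and easy to state concretely: yields load on level, slope, and curvature factors through fixed maturity loadings, and the paper's contribution lies in the SPDE model for the residuals of that fit. A sketch of the standard loadings (the factor values below are hypothetical; lam = 0.0609 is the common choice for maturities in months):

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Level, slope, and curvature loadings of the (Dynamic) Nelson-Siegel
    yield curve at maturities tau."""
    x = lam * np.asarray(tau, dtype=float)
    slope = (1.0 - np.exp(-x)) / x
    curv = slope - np.exp(-x)
    return np.column_stack([np.ones_like(x), slope, curv])

maturities = np.array([3.0, 12.0, 60.0, 120.0])   # months
B = ns_loadings(maturities)
beta = np.array([4.0, -2.0, 1.5])                 # hypothetical factor values
fitted = B @ beta                                  # model-implied yields
```

The residual field the paper models via an SPDE is `observed_yields - fitted`, stacked across time and maturity.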

Published: 2026-01-01
Least Squares Estimation: SDEs Perturbed by Lévy Noise with Sparse Sample Paths
arXiv:2512.24005v1 Announce Type: new Abstract: This article investigates the least squares estimators (LSE) for the unknown parameters in stochastic differential equations (SDEs) that are affected by Lévy noise, particularly when the sample paths are sparse. Specifically, given $n$ sparsely observed curves related to this model, we derive the least squares estimators for the unknown parameters: the drift coefficient, the diffusion coefficient, and the jump-diffusion coefficient. We also establish the asymptotic rate of convergence for the proposed estimators. Additionally, in the supplementary materials, the proposed methodology is applied to a benchmark dataset of functional data/curves, and a small simulation study is conducted to illustrate the findings.
https://arxiv.org/abs/2512.24005
Academic Papers
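The basic least squares idea for SDE drift parameters can be sketched on a simple Ornstein-Uhlenbeck example: regress discrete increments on the state. This sketch omits the jump term and the sparse-path machinery that are the paper's actual subject; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true, sigma, dt, n = 2.0, 0.5, 0.01, 20000

# simulate dX = -theta * X dt + sigma dW by Euler-Maruyama
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] - theta_true * x[t] * dt + sigma * np.sqrt(dt) * rng.normal()

# least squares estimate of the drift parameter from discrete increments:
# minimize sum (dx_t + theta * x_t * dt)^2 over theta
dx = np.diff(x)
xt = x[:-1]
theta_hat = -np.sum(xt * dx) / np.sum(xt**2 * dt)
```

With a Lévy driver the same normal equations apply, but the increments mix diffusion and jump components, which is where the paper's analysis of the convergence rate comes in.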

Published: 2026-01-01
A persistent-homology-based Bayesian prior to identify Robin coefficient in parabolic problems
arXiv:2512.24046v1 Announce Type: new Abstract: We adopt a Bayesian inference approach with a persistent-homology-based prior to estimate a temporally dependent Robin coefficient arising in the analysis of convective heat transfer. We also discuss the use of a hierarchical Bayesian method for automatic selection of the regularization parameter. Numerical results demonstrate that the PH prior shows consistent improvement compared to the Gaussian and total variation priors.
https://arxiv.org/abs/2512.24046
Academic Papers

Published: 2026-01-01
The Malaysian Election Corpus (MECo): Electoral Maps and Cartograms from 1954 to 2025
arXiv:2512.24211v1 Announce Type: new Abstract: Electoral boundaries in Malaysia are not publicly available in machine-readable form. This prevents rigorous analysis of geography-centric issues such as malapportionment and gerrymandering, and constrains spatial perspectives on electoral outcomes. We present the second component of the Malaysian Election Corpus (MECo), an open-access collection of digital electoral boundaries covering all 19 approved delimitation exercises in Malaysia's history, from the first set of Malayan boundaries in 1954 until the 2019 Sabah delimitation. We also auto-generate election-time maps for all federal and state elections up to 2025, and include equal-area and electorate-weighted cartograms to support deeper geospatial analysis. This is the first complete, publicly-available, and machine-readable record of Malaysia's electoral boundaries, and fills a critical gap in the country's electoral data infrastructure.
https://arxiv.org/abs/2512.24211
Academic Papers

Published: 2026-01-01
A Robust Persistent Homology: Trimming Approach
arXiv:2512.24222v1 Announce Type: new Abstract: This article studies a robust version of persistent homology based on a trimming methodology, designed to capture geometric features through the support of the data in the presence of outliers. Precisely speaking, the proposed methodology works both when the outliers lie outside the main data cloud and when they lie inside it. In the course of the theoretical study, it is established that the Bottleneck distance between the proposed robust version of persistent homology and its population analogue can be made arbitrarily small, at a certain rate, for a sufficiently large sample size. The practicability of the methodology is shown for various simulated data and benchmark real data associated with cellular biology.
https://arxiv.org/abs/2512.24222
Academic Papers

Published: 2026-01-01
Valid and Efficient Two-Stage Latent Subgroup Analysis with Observational Data
arXiv:2512.24223v1 Announce Type: new Abstract: Subgroup analysis evaluates treatment effects across multiple sub-populations. When subgroups are defined by latent memberships inferred from imperfect measurements, the analysis typically involves two inter-connected models, a latent class model and a subgroup outcome model. The classical one-stage framework, which models the joint distribution of the two models, may be infeasible with observational data containing many confounders. The two-stage framework, which first estimates the latent class model and then performs subgroup analysis using estimated latent memberships, can accommodate potential confounders but may suffer from bias issues due to misclassification of latent subgroup memberships. This paper focuses on latent subgroups inferred from binary item responses and addresses when and how a valid two-stage latent subgroup analysis can be made with observational data. We investigate the maximum misclassification rate that a valid two-stage framework can tolerate. Introducing a spectral method perspective, we propose a two-stage approach to achieve the desired misclassification rate with the blessing of many item responses. Our method accommodates high-dimensional confounders, is computationally efficient and robust to noninformative items. In observational studies, our methods lead to consistent estimation and valid inference on latent subgroup effects. We demonstrate its merit through simulation studies and an application to educational assessment data.
https://arxiv.org/abs/2512.24223
Academic Papers

Published: 2026-01-01
A Novel Approach for Data Integration with Multiple Heterogeneous Data Sources
arXiv:2512.24342v1 Announce Type: new Abstract: The integration of data from multiple sources is increasingly used to achieve larger sample sizes and enhance population diversity. Our previous work established that, under random sampling from the same underlying population, integrating large incomplete datasets with summary-level data produces unbiased parameter estimates. In this study, we develop a novel statistical framework that enables the integration of summary-level data with information from heterogeneous data sources by leveraging auxiliary information. The proposed approach estimates study-specific sampling weights using this auxiliary information and calibrates the estimating equations to obtain the full set of model parameters. We evaluate the performance of the proposed method through simulation studies under various sampling designs and illustrate its application by reanalyzing U.S. cancer registry data combined with summary-level odds ratio estimates for selected colorectal cancer (CRC) risk factors, while relaxing the random sampling assumption.
https://arxiv.org/abs/2512.24342
Academic Papers

Published: 2026-01-01
Bayesian inference for functional extreme events defined via partially unobserved processes
arXiv:2512.24356v1 Announce Type: new Abstract: In order to describe the extremal behaviour of some stochastic process $X$, approaches from univariate extreme value theory are typically generalized to the spatial domain. In particular, generalized peaks-over-threshold approaches allow for the consideration of single extreme events. These can be flexibly defined as exceedances of a risk functional $r$, such as a spatial average, applied to $X$. Inference for the resulting limit process, the so-called $r$-Pareto process, requires the evaluation of $r(X)$ and thus the knowledge of the whole process $X$. In many practical applications, however, observations of $X$ are only available at scattered sites. To overcome this issue, we propose a two-step MCMC-algorithm in a Bayesian framework. In a first step, we sample from $X$ conditionally on the observations in order to evaluate which observations lead to $r$-exceedances. In a second step, we use these exceedances to sample from the posterior distribution of the parameters of the limiting $r$-Pareto process. Alternating these steps results in a full Bayesian model for the extremes of $X$. We show that, under appropriate assumptions, the probability of classifying an observation as $r$-exceedance in the first step converges to the desired probability. Furthermore, given the first step, the distribution of the Markov chain constructed in the second step converges to the posterior distribution of interest. The procedure is compared to the Bayesian version of the standard procedure in a simulation study.
https://arxiv.org/abs/2512.24356
Academic Papers
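The first step of the paper's two-step scheme — deciding which events count as $r$-exceedances — is trivial when $X$ is fully observed; the paper's contribution is making it work when $X$ is only seen at scattered sites. A fully observed sketch with the spatial average as risk functional (all settings illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_events, n_sites = 1000, 25
X = rng.pareto(a=3.0, size=(n_events, n_sites)) + 1.0   # heavy-tailed fields

r = X.mean(axis=1)                 # risk functional r: spatial average
u = np.quantile(r, 0.95)           # high threshold on r(X)
exceed = X[r > u]                  # the r-exceedances used for inference
profiles = exceed / exceed.mean(axis=1, keepdims=True)  # profiles rescaled to r = 1
```

The rescaled `profiles` are the event shapes whose limiting distribution the $r$-Pareto model describes; the paper's MCMC replaces the deterministic selection `r > u` with sampling from `X` conditional on partial observations.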

Published: 2026-01-01
Geometric criteria for identifying extremal dependence and flexible modeling via additive mixtures
arXiv:2512.24392v1 Announce Type: new Abstract: The framework of geometric extremes is based on the convergence of scaled sample clouds onto a limit set, characterized by a gauge function, with the shape of the limit set determining extremal dependence structures. While it is known that a blunt limit set implies asymptotic independence, the absence of bluntness can be linked to both asymptotic dependence and independence. Focusing on the bivariate case, under a truncated gamma modeling assumption with bounded angular density, we show that a "pointy" limit set implies asymptotic dependence, thus offering practical geometric criteria for identifying extremal dependence classes. Suitable models for the gauge function offer the ability to capture asymptotically independent or dependent data structures, without requiring prior knowledge of the true extremal dependence structure. The geometric approach thus offers a simple alternative to various parametric copula models that have been developed for this purpose in recent years. We consider two types of additively mixed gauge functions that provide a smooth interpolation between asymptotic dependence and asymptotic independence. We derive their explicit forms, explore their properties, and establish connections to the developed geometric criteria. Through a simulation study, we evaluate the effectiveness of the geometric approach with additively mixed gauge functions, comparing its performance to existing methodologies that account for both asymptotic dependence and asymptotic independence. The methodology is computationally efficient and yields reliable performance across various extremal dependence scenarios.
https://arxiv.org/abs/2512.24392
Academic Papers

Published: 2026-01-01
Demystifying Proximal Causal Inference
arXiv:2512.24413v1 Announce Type: new Abstract: Proximal causal inference (PCI) has emerged as a promising framework for identifying and estimating causal effects in the presence of unobserved confounders. While many traditional causal inference methods rely on the assumption of no unobserved confounding, this assumption is likely often violated. PCI mitigates this challenge by relying on an alternative set of assumptions regarding the relationships between treatment, outcome, and auxiliary variables that serve as proxies for unmeasured confounders. We review existing identification results, discuss the assumptions necessary for valid causal effect estimation via PCI, and compare different PCI estimation methods. We offer practical guidance on operationalizing PCI, with a focus on selecting and evaluating proxy variables using domain knowledge, measurement error perspectives, and negative control analogies. Through conceptual examples, we demonstrate tensions in proxy selection and discuss the importance of clearly defining the unobserved confounding mechanism. By bridging formal results with applied considerations, this work aims to demystify PCI, encourage thoughtful use in practice, and identify open directions for methodological development and empirical research.
https://arxiv.org/abs/2512.24413
Academic Papers
svg
4ab1e97ce1cef814ce9e3856d79a0530213eb544c36b9e961eb7b570545eb60b
2026-01-01T00:00:00-05:00
Model-Assisted Bayesian Estimators of Transparent Population Level Summary Measures for Ordinal Outcomes in Randomized Controlled Trials
arXiv:2512.24442v1 Announce Type: new Abstract: In randomized controlled trials, ordinal outcomes typically improve statistical efficiency over binary outcomes. The treatment effect on an ordinal outcome is usually described by the odds ratio from a proportional odds model, but this summary measure lacks transparency with respect to its emphasis on the components of the ordinal outcome when proportional odds is violated. We propose various summary measures for ordinal outcomes that are fully transparent in this regard, including 'weighted geometric mean' odds ratios and relative risks, and 'weighted mean' risk differences. We also develop and evaluate efficient model-assisted Bayesian estimators for these population level summary measures based on non-proportional odds models that facilitate covariate adjustment with marginalization via the Bayesian bootstrap. We propose a weighting scheme that engenders appealing invariance properties, including to whether the ordinal outcome is ordered from best to worst versus worst to best. Using computer simulation, we show that comparative testing based on the proposed population level summary measures performs well relative to the conventional proportional odds approach. We also report an analysis of the COVID-OUT trial, which exhibits evidence of non-proportional odds.
https://arxiv.org/abs/2512.24442
Academic Papers
svg
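The cut-point-specific odds ratios and their weighted geometric mean described in the abstract above can be sketched as follows. This is an illustrative sketch only: the weighting scheme here is user-supplied, not the invariance-inducing scheme proposed in the paper, and no covariate adjustment or Bayesian bootstrap is shown.

```python
import numpy as np

def cumulative_odds_ratios(treat_counts, ctrl_counts):
    """Cut-point-specific cumulative odds ratios for an ordinal outcome.

    treat_counts, ctrl_counts: category counts (or proportions) ordered
    best -> worst. Returns one odds ratio per dichotomization
    (categories <= k versus > k).
    """
    t = np.asarray(treat_counts, float)
    c = np.asarray(ctrl_counts, float)
    ors = []
    for k in range(1, t.size):
        t_odds = t[:k].sum() / t[k:].sum()
        c_odds = c[:k].sum() / c[k:].sum()
        ors.append(t_odds / c_odds)
    return np.array(ors)

def weighted_geometric_mean(ors, weights):
    """Weighted geometric mean of cut-point odds ratios."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return float(np.exp(np.sum(w * np.log(ors))))
```

Under exact proportional odds every cut-point odds ratio coincides, and the weighted geometric mean reproduces that common value regardless of the weights; the summary only starts to depend on the weights when proportionality fails.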
c0ef3791bdec05dca0f12fd0eb3648bdb0be19e402e1276095ab04d42d7ad3f3
2026-01-01T00:00:00-05:00
Robust reduced rank regression under heavy-tailed noise and missing data via non-convex penalization
arXiv:2512.24450v1 Announce Type: new Abstract: Reduced rank regression (RRR) is a fundamental tool for modeling multiple responses through low-dimensional latent structures, offering both interpretability and strong predictive performance in high-dimensional settings. Classical RRR methods, however, typically rely on squared loss and Gaussian noise assumptions, rendering them sensitive to heavy-tailed errors, outliers, and data contamination. Moreover, the presence of missing data--common in modern applications--further complicates reliable low-rank estimation. In this paper, we propose a robust reduced rank regression framework that simultaneously addresses heavy-tailed noise, outliers, and missing data. Our approach combines a robust Huber loss with nonconvex spectral regularization, specifically the minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD). Unlike convex nuclear-norm regularization, the proposed nonconvex penalties alleviate excessive shrinkage and enable more accurate recovery of the underlying low-rank structure. The method also accommodates missing data in the response matrix without requiring imputation. We develop an efficient proximal gradient algorithm based on alternating updates and tailored spectral thresholding. Extensive simulation studies demonstrate that the proposed methods substantially outperform nuclear-norm-based and non-robust alternatives under heavy-tailed noise and contamination. An application to a cancer cell line data set further illustrates the practical advantages of the proposed robust RRR framework. Our method is implemented in the R package rrpackrobust available at https://github.com/tienmt/rrpackrobust.
https://arxiv.org/abs/2512.24450
Academic Papers
svg
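The ingredients named in the abstract above (Huber loss, MCP spectral thresholding, masking of missing responses, proximal gradient) can be combined in a minimal one-step sketch. The step-size handling, thresholding rule, and convergence machinery of the actual rrpackrobust implementation may well differ; this only illustrates the mechanics.

```python
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber loss, applied elementwise to residuals."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def mcp_threshold(s, lam, gamma):
    """MCP proximal operator on nonnegative singular values (gamma > 1)."""
    soft = np.maximum(s - lam, 0.0) / (1.0 - 1.0 / gamma)
    return np.where(s <= gamma * lam, soft, s)

def prox_gradient_step(B, X, Y, delta, lam, gamma, step):
    """One proximal-gradient update for Huber-loss reduced rank regression.

    NaN entries of Y are treated as missing and simply dropped from the
    loss (no imputation), as described in the abstract.
    """
    mask = ~np.isnan(Y)
    resid = np.where(mask, X @ B - np.nan_to_num(Y), 0.0)
    grad = X.T @ huber_grad(resid, delta) / mask.sum()
    # Gradient step followed by spectral MCP thresholding of singular values.
    U, s, Vt = np.linalg.svd(B - step * grad, full_matrices=False)
    return U @ np.diag(mcp_threshold(s, step * lam, gamma)) @ Vt
```

Iterating this update until the coefficient matrix stabilizes gives the flavor of the algorithm: the Huber gradient caps the influence of outlying residuals, while the MCP thresholding shrinks small singular values to zero but leaves large ones untouched, avoiding the uniform shrinkage of the nuclear norm.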
502fdaeb08abaeba47bd948762b9f49bd296d8333f522613032d2677ed369f09
2026-01-01T00:00:00-05:00
Multiple Testing of One-Sided Hypotheses with Conservative $p$-values
arXiv:2512.24588v1 Announce Type: new Abstract: We study a large-scale one-sided multiple testing problem in which test statistics follow normal distributions with unit variance, and the goal is to identify signals with positive mean effects. A common approach is to compute $p$-values under the assumption that all null means are exactly zero and then apply standard multiple testing procedures such as the Benjamini--Hochberg (BH) or Storey--BH method. However, because the null hypothesis is composite, some null means may be strictly negative. In this case, the resulting $p$-values are conservative, leading to a substantial loss of power. Existing methods address this issue by modifying the multiple testing procedure itself, for example through conditioning strategies or discarding rules. In contrast, we focus on correcting the $p$-values so that they are exact under the null. Specifically, we estimate the marginal null distribution of the test statistics within an empirical Bayes framework and construct refined $p$-values based on this estimated distribution. These refined $p$-values can then be directly used in standard multiple testing procedures without modification. Extensive simulation studies show that the proposed method substantially improves power when $p$-values are conservative, while achieving comparable performance to existing methods when $p$-values are exact. An application to phosphorylation data further demonstrates the practical effectiveness of our approach.
https://arxiv.org/abs/2512.24588
Academic Papers
svg
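The setting in the abstract above is easy to reproduce: one-sided p-values computed at the boundary mu = 0 are super-uniform whenever the true null mean is negative, which costs power in standard procedures. The sketch below shows only the standard ingredients (boundary p-values and Benjamini-Hochberg), not the paper's empirical Bayes refinement.

```python
import numpy as np
from scipy.stats import norm

def one_sided_pvalues(z):
    """p-values for H0: mu <= 0, computed at the boundary mu = 0."""
    return norm.sf(z)

def benjamini_hochberg(p, q=0.1):
    """Boolean rejection vector for the BH procedure at FDR level q."""
    p = np.asarray(p)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    # Largest k with p_(k) <= k*q/m; reject the k smallest p-values.
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

For a null statistic drawn around mu = -1, the boundary p-value concentrates above 0.5 rather than being uniform, which is exactly the conservativeness the paper's refined p-values are designed to remove.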
22580dec19ef3ee3c4450d625c5096e610bea8ed0aff02d9e99a208a19285f79
2026-01-01T00:00:00-05:00
Generalized Poisson Matrix Factorization for Overdispersed Count Data
arXiv:2512.24604v1 Announce Type: new Abstract: Non-negative matrix factorization (NMF) is widely used as a feature extraction technique for matrices with non-negative entries, such as image data, purchase histories, and other types of count data. In NMF, a non-negative matrix is decomposed into the product of two non-negative matrices, and the approximation accuracy is evaluated by a loss function. If the Kullback-Leibler divergence is chosen as the loss function, the estimation coincides with maximum likelihood under the assumption that the data entries are distributed according to a Poisson distribution. To address overdispersion, negative binomial matrix factorization has recently been proposed as an extension of the Poisson-based model. However, the negative binomial distribution often generates an excessive number of zeros, which limits its expressive capacity. In this study, we propose a non-negative matrix factorization based on the generalized Poisson distribution, which can flexibly accommodate overdispersion, and we introduce a maximum likelihood approach for parameter estimation. This methodology provides a more versatile framework than existing models, thereby extending the applicability of NMF to a broader class of count data.
https://arxiv.org/abs/2512.24604
Academic Papers
svg
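The Poisson baseline that the abstract above generalizes corresponds to NMF under Kullback-Leibler loss, for which the classical Lee-Seung multiplicative updates apply. The generalized Poisson likelihood itself is the paper's contribution and is not implemented here; this sketch only shows the KL/Poisson starting point.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=300, eps=1e-10, seed=0):
    """Lee-Seung multiplicative updates for KL-divergence NMF.

    Minimizing KL(V || WH) is maximum likelihood under the model
    V_ij ~ Poisson((WH)_ij), the case the generalized Poisson extends.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.5, 1.5, (n, rank))
    H = rng.uniform(0.5, 1.5, (rank, m))
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= (V / WH) @ H.T / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= W.T @ (V / WH) / (W.sum(axis=0)[:, None] + eps)
    return W, H
```

On an exactly low-rank nonnegative matrix the updates recover the factorization closely; for overdispersed counts the Poisson likelihood is misspecified, which is the gap the generalized Poisson model addresses.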
33ec5e67c7083e2ea874ecc9ccd4a1d9f2328b3a8a70f30cebbdcf164dd60d10
2026-01-01T00:00:00-05:00
Empirical Bayes Method for Large Scale Multiple Testing with Heteroscedastic Errors
arXiv:2512.24611v1 Announce Type: new Abstract: In this paper, we address the normal mean inference problem, which involves testing multiple means of normal random variables with heteroscedastic variances. Most existing empirical Bayes methods for this setting are developed under restrictive assumptions, such as the scaled inverse-chi-squared prior for variances and unimodality for the non-null mean distribution. However, when either of these assumptions is violated, these methods often fail to control the false discovery rate (FDR) at the target level or suffer from a substantial loss of power. To overcome these limitations, we propose a new empirical Bayes method, gg-Mix, which assumes only independence between the normal means and variances, without imposing any structural restrictions on their distributions. We thoroughly evaluate the FDR control and power of gg-Mix through extensive numerical studies and demonstrate its superior performance compared to existing methods. Finally, we apply gg-Mix to three real data examples to further illustrate the practical advantages of our approach.
https://arxiv.org/abs/2512.24611
Academic Papers
svg
0b025d4e69b7cbb510f89c65563dee356cb6866137c2a18c597e20e4cd7e6785
2026-01-01T00:00:00-05:00
$\ell_0$-Regularized Item Response Theory Model for Robust Ideal Point Estimation
arXiv:2512.24642v1 Announce Type: new Abstract: Ideal point estimation methods face a significant challenge when legislators engage in protest voting -- strategically voting against their party to express dissatisfaction. Such votes introduce attenuation bias, making ideologically extreme legislators appear artificially moderate. We propose a novel statistical framework that extends the fast EM-based estimation approach of \cite{Imai2016} using $\ell_0$ regularization method to handle protest votes. Through simulation studies, we demonstrate that our proposed method maintains estimation accuracy even with high proportions of protest votes, while being substantially faster than MCMC-based methods. Applying our method to the 116th and 117th U.S. House of Representatives, we successfully recover the extreme liberal positions of ``the Squad'', whose protest votes had caused conventional methods to misclassify them as moderates. While conventional methods rank Ocasio-Cortez as more conservative than 69\% of Democrats, our method places her firmly in the progressive wing, aligning with her documented policy positions. This approach provides both robust ideal point estimates and systematic identification of protest votes, facilitating deeper analysis of strategic voting behavior in legislatures.
https://arxiv.org/abs/2512.24642
Academic Papers
svg
e8935d8801ddfd51210dce499d5ab2e3e80b76a7f6e11287d328bd8698020554
2026-01-01T00:00:00-05:00
Quasi-Maximum Likelihood Estimation for a Genuinely Unbalanced Dynamic Network Panel Data Model
arXiv:2512.24748v1 Announce Type: new Abstract: This paper develops a quasi-maximum likelihood estimator for genuinely unbalanced dynamic network panel data models with individual fixed effects. We propose a model that accommodates contemporaneous and lagged network spillovers, temporal dependence, and a listing effect that activates upon a unit's first appearance in the panel. We establish the consistency of the QMLE as both $N$ and $T$ go to infinity, derive its asymptotic distribution, and identify an asymptotic bias arising from incidental parameters when $N$ is asymptotically large relative to $T$. Based on the asymptotic bias expression, we propose a bias-corrected estimator that is asymptotically unbiased and normally distributed under appropriate regularity conditions. Monte Carlo experiments examine the finite sample performance of the bias-corrected estimator across different criteria, including bias, RMSE, coverage probability, and the normality of the estimator. The empirical application to Airbnb listings from New Zealand and New York City reveals region-specific patterns in spatial and temporal price transmission, illustrating the importance of modeling genuine unbalancedness in dynamic network settings.
https://arxiv.org/abs/2512.24748
Academic Papers
svg
fdb8f5fc9329fdf958f79380a4e5512e0e20f0c38c3de95d14142fdc9ddcd12c
2026-01-01T00:00:00-05:00
Bayesian Elastic Net Regression with Structured Prior Dependence
arXiv:2512.25045v1 Announce Type: new Abstract: Many regularization priors for Bayesian regression assume the regression coefficients are a priori independent. In particular this is the case for standard Bayesian treatments of the lasso and the elastic net. While independence may be reasonable in some data-analytic settings, incorporating dependence in these prior distributions provides greater modeling flexibility. This paper introduces the orthant normal distribution in its general form and shows how it can be used to structure prior dependence in the Bayesian elastic net regression model. An L1-regularized version of Zellner's g prior is introduced as a special case, creating a new link between the literature on penalized optimization and an important class of regression priors. Computation is challenging due to an intractable normalizing constant in the prior. We avoid this issue by modifying slightly a standard prior of convenience for the hyperparameters in such a way to enable simple and fast Gibbs sampling of the posterior distribution. The benefit of including structured prior dependence in the Bayesian elastic net regression model is demonstrated through simulation and a near-infrared spectroscopy data example.
https://arxiv.org/abs/2512.25045
Academic Papers
svg
a258c588b1a5fd6d70152752749cb98226004e9c7790e20040b1bc7262d8683e
2026-01-01T00:00:00-05:00
Sequential Bayesian parameter-state estimation in dynamical systems with noisy and incomplete observations via a variational framework
arXiv:2512.25056v1 Announce Type: new Abstract: Online joint estimation of unknown parameters and states in a dynamical system with uncertainty quantification is crucial in many applications. For example, digital twins (DTs) dynamically update their knowledge of model parameters and states to support prediction and decision-making. Reliability and computational speed are vital for DTs. Online parameter-state estimation ensures computational efficiency, while uncertainty quantification is essential for making reliable predictions and decisions. In parameter-state estimation, the joint distribution of the state and model parameters conditioned on the data, termed the joint posterior, provides accurate uncertainty quantification. Because the joint posterior is generally intractable to compute, this paper presents an online variational inference framework to compute its approximation at each time step. The approximation is factorized into a marginal distribution over the model parameters and a state distribution conditioned on the parameters. This factorization enables recursive updates through a two-stage procedure: first, the parameter posterior is approximated via variational inference; second, the state distribution conditioned on the parameters is computed using Gaussian filtering based on the estimated parameter posterior. The algorithmic design is supported by a theorem establishing upper bounds on the joint posterior approximation error. Numerical experiments demonstrate that the proposed method (i) matches the performance of the joint particle filter in low-dimensional problems, accurately inferring both unobserved states and unknown parameters of dynamical and observation models; (ii) remains robust under noisy, partial observations and model discrepancies in a chaotic Lorenz 96 system; and (iii) scales effectively to a high-dimensional convection-diffusion system, where it outperforms the joint ensemble Kalman filter.
https://arxiv.org/abs/2512.25056
Academic Papers
svg
6b0833230ea8216ec3237d7050870a30fc0df639430919ee435b46d48dff747e
2026-01-01T00:00:00-05:00
Constraints on the perfect phylogeny mixture model and their effect on reducing degeneracy
arXiv:2512.24930v1 Announce Type: cross Abstract: The perfect phylogeny mixture (PPM) model is useful due to its simplicity and applicability in scenarios where mutations can be assumed to accumulate monotonically over time. It is the underlying model in many tools that have been used, for example, to infer phylogenetic trees for tumor evolution and reconstruction. Unfortunately, the PPM model gives rise to substantial ambiguity -- in that many different phylogenetic trees can explain the same observed data -- even in the idealized setting where data are observed perfectly, i.e. fully and without noise. This ambiguity has been studied in this perfect setting by Pradhan et al. 2018, which proposed a procedure to bound the number of solutions given a fixed instance of observation data. Beyond this, studies have been primarily empirical. Recent work (Myers et al. 2019) proposed adding extra constraints to the PPM model to tackle ambiguity. In this paper, we first show that the extra constraints of Myers et al. 2019, called longitudinal constraints (LC), often fail to reduce the number of distinct trees that explain the observations. We then propose novel alternative constraints to limit solution ambiguity and study their impact when the data are observed perfectly. Unlike the analysis in Pradhan et al. 2018, our theoretical results regarding both the inefficacy of the LC and the extent to which our new constraints reduce ambiguity are not tied to a single observation instance. Rather, our theorems hold over large ensembles of possible inference problems. To the best of our knowledge, we are the first to study degeneracy in the PPM model in this ensemble-based theoretical framework.
https://arxiv.org/abs/2512.24930
Academic Papers
svg
4c2be135095e9c117fb534a1ca3f4573996ccb9795f2026ac231773d2a252ff6
2026-01-01T00:00:00-05:00
Hypothesis testing for partial tail correlation in multivariate extremes
arXiv:2210.02048v3 Announce Type: replace Abstract: Statistical modeling of high dimensional extremes remains challenging and has generally been limited to moderate dimensions. Understanding structural relationships among variables at their extreme levels is crucial both for constructing simplified models and for identifying sparsity in extremal dependence. In this paper, we introduce the notion of partial tail correlation to characterize structural relationships between pairs of variables in their tails. To this end, we propose a tail regression approach for nonnegative regularly varying random vectors and define partial tail correlation based on the regression residuals. Using an extreme analogue of the covariance matrix, we show that the resulting regression coefficients and partial tail correlations take the same form as in classical non-extreme settings. For inference, we develop a hypothesis test to explore sparsity in extremal dependence structures, and demonstrate its effectiveness through simulations and an application to the Danube river network.
https://arxiv.org/abs/2210.02048
Academic Papers
svg
592d5e9f1802f9f20900c1ea8dc0b9503f37cc1331f1b5a495ef0d635f32fc77
2026-01-01T00:00:00-05:00
Maximum Likelihood Estimates of Parameters in Generalized Gamma Distribution with SeLF Algorithm
arXiv:2306.16419v2 Announce Type: replace Abstract: This undergraduate thesis calculates maximum likelihood estimates of the parameters of the generalized Gamma distribution using the SeLF algorithm. As an extension of the Gamma distribution, the generalized Gamma distribution can better fit real data and has been widely applied. The thesis first reviews the definition of the generalized Gamma distribution and its similarities to and differences from the traditional Gamma distribution, then discusses the SeLF and US algorithms in detail. The SeLF algorithm is a new algorithm based on the Minorization-Maximization algorithm that reaches a local optimum in few iterations, with the advantages of fast computation, high accuracy, and good convergence. The US algorithm is a method for finding the zeros of a function, which stands at a higher level than the SeLF algorithm and can improve convergence speed and stability. The thesis proposes a method for calculating maximum likelihood estimates of the parameters of the generalized Gamma distribution using the SeLF and US algorithms, presents a practical implementation, and evaluates the proposed methods through simulations and data analysis. The results demonstrate that, compared to traditional Newton's method, the SeLF algorithm achieves more stable and accurate parameter estimates more quickly, which can be useful in various applications and contributes to the development of statistical methods for parameter estimation in complex models.
https://arxiv.org/abs/2306.16419
Academic Papers
svg
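As a baseline for comparison (not the SeLF or US algorithms described in the thesis), a generic numeric MLE for the generalized Gamma distribution is available through SciPy's gengamma family, which parameterizes the distribution by two shape parameters a and c plus location and scale. This sketch simulates data and fits with the location fixed at zero.

```python
import numpy as np
from scipy import stats

# Simulate from a generalized Gamma distribution; scipy.stats.gengamma has
# pdf proportional to x^(a*c - 1) * exp(-x^c) after standardization.
rng = np.random.default_rng(42)
sample = stats.gengamma.rvs(a=2.0, c=1.5, scale=3.0, size=5000, random_state=rng)

# Generic numeric MLE; floc=0 fixes the location, leaving the usual
# three-parameter generalized Gamma family (a, c, scale).
a_hat, c_hat, loc_hat, scale_hat = stats.gengamma.fit(sample, floc=0)
```

Note that the generalized Gamma likelihood is known to have a flat ridge where the shape and scale parameters trade off against each other, which is precisely why specialized algorithms such as the one in the thesis can outperform generic optimizers in speed and stability.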
806fff1629e778d6afd68d48a95afc2febfc5314043cb11121c1be8aa32e0deb
2026-01-01T00:00:00-05:00
Extremile scalar-on-function regression
arXiv:2405.20817v2 Announce Type: replace Abstract: Extremiles provide a generalization of quantiles which are not only robust, but also have an intrinsic link with extreme value theory. This paper introduces an extremile regression model tailored for functional covariate spaces. The estimation procedure turns out to be a weighted version of local linear scalar-on-function regression, where now a double kernel approach plays a crucial role. Asymptotic expressions for the bias and variance are established, applicable to both decreasing bandwidth sequences and automatically selected bandwidths. The methodology is then investigated in detail through a simulation study. Furthermore, we illustrate the method's applicability with an analysis of the Berkeley Growth data, showcasing its performance in a real-world functional data setting.
https://arxiv.org/abs/2405.20817
Academic Papers
svg
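A sample extremile is a weighted mean of order statistics, with weights generated by the distortion measure K_tau of Daouia, Gijbels and Stupfler (2019), the notion the abstract above extends to functional covariates. The sketch below computes this unconditional sample extremile only; the paper's local linear scalar-on-function estimator is not implemented.

```python
import numpy as np

def extremile(x, tau):
    """Sample extremile of order tau in (0, 1).

    Weighted mean of order statistics with weights K_tau(i/n) - K_tau((i-1)/n),
    where K_tau(t) = t^r(tau) for tau >= 1/2 (r = log(1/2)/log(tau)) and
    K_tau(t) = 1 - (1-t)^s(tau) for tau < 1/2 (s = log(1/2)/log(1-tau)).
    """
    x = np.sort(np.asarray(x, float))
    n = x.size
    t = np.arange(n + 1) / n
    if tau >= 0.5:
        r = np.log(0.5) / np.log(tau)
        K = t ** r
    else:
        s = np.log(0.5) / np.log1p(-tau)
        K = 1.0 - (1.0 - t) ** s
    return float(np.sum(np.diff(K) * x))
```

At tau = 1/2 the weights are uniform and the extremile reduces to the sample mean; as tau moves toward 1 the weights concentrate smoothly on the upper order statistics, which is the source of the link to extreme value theory mentioned in the abstract.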
0d220aa3e3c4ce0a538b046b8fc10685fbcf74bf4f21458b10e701adec6fb521
2026-01-01T00:00:00-05:00
Inference at the data's edge: Gaussian processes for modeling and inference under model-dependency, poor overlap, and extrapolation
arXiv:2407.10442v2 Announce Type: replace Abstract: Many inferential tasks involve fitting models to observed data and predicting outcomes at new covariate values, requiring interpolation or extrapolation. Conventional methods select a single best-fitting model, discarding fits that were similarly plausible in-sample but would yield sharply different predictions out-of-sample. Gaussian Processes (GPs) offer a principled alternative. Rather than committing to one conditional expectation function, GPs deliver a posterior distribution over outcomes at any covariate value. This posterior effectively retains the range of models consistent with the data, widening uncertainty intervals where extrapolation magnifies divergence. In this way, the GP's uncertainty estimates reflect the implications of extrapolation for our predictions, helping to tame the "dangers of extreme counterfactuals" (King & Zeng, 2006). The approach requires (i) specifying a covariance function linking outcome similarity to covariate similarity, and (ii) assuming Gaussian noise around the conditional expectation. We provide an accessible introduction to GPs with emphasis on this property, along with a simple, automated procedure for hyperparameter selection implemented in the R package gpss. We illustrate the value of GPs for capturing counterfactual uncertainty in three settings: (i) treatment effect estimation with poor overlap, (ii) interrupted time series requiring extrapolation beyond pre-intervention data, and (iii) regression discontinuity designs where estimates hinge on boundary behavior.
https://arxiv.org/abs/2407.10442
Academic Papers
svg
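The widening of GP uncertainty under extrapolation described in the abstract above can be reproduced in a few lines of NumPy. This sketch assumes a squared-exponential (RBF) covariance with fixed hyperparameters; it does not use the automated hyperparameter selection of the gpss package.

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential covariance between 1-d input vectors a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and sd of a zero-mean GP with RBF covariance."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(x_train.size)
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

Querying a point inside the training range yields a small posterior sd, while a point far outside it yields an sd near the prior level: the posterior has reverted to the prior, which is exactly the honest widening of counterfactual uncertainty the paper emphasizes.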
a41087465c93e15302f4905ccbef16b3b13e438cecd7f22e3b7d05a089d96d39
2026-01-01T00:00:00-05:00
Subtype-Aware Registration of Longitudinal Electronic Health Records
arXiv:2501.07336v2 Announce Type: replace Abstract: Electronic Health Records (EHRs) contain extensive patient information that can inform downstream clinical decisions, such as mortality prediction, disease phenotyping, and disease onset prediction. A key challenge in EHR data analysis is the temporal gap between when a condition is first recorded and its actual onset time. Such timeline misalignment can lead to artificially distinct biomarker trends among patients with similar disease progression, undermining the reliability of downstream analyses and complicating tasks such as disease subtyping and outcome prediction. To address this challenge, we provide a subtype-aware timeline registration method that leverages data projection and discrete optimization to correct timeline misalignment. Through simulation and real-world data analyses, we demonstrate that the proposed method effectively aligns distorted observed records with the true disease progression patterns, enhancing subtyping clarity and improving performance in downstream clinical analyses.
https://arxiv.org/abs/2501.07336
Academic Papers
svg
74b473727a219c86b24add20a584fe9d20e57dbda20010872bdb61183035b314
2026-01-01T00:00:00-05:00
Measuring the Impact of Missingness in Traffic Stop Data
arXiv:2505.18281v2 Announce Type: replace Abstract: In this article we explore the data available through the Stanford Open Policing Project. The data consist of information on millions of traffic stops across close to 100 different cities and highway patrols. Using a variety of metrics, we identify that the data are not missing completely at random. Furthermore, we develop ways of quantifying and visualizing missingness trends for different variables across the datasets. We follow up by performing a sensitivity analysis to extend work done on the outcome test as well as to extend work done on sharp bounds on the average treatment effect. We demonstrate that bias calculations can fundamentally shift depending on the assumptions made about the observations for which the race variable has not been recorded. We suggest ways that our missingness sensitivity analysis can be extended to myriad different contexts.
https://arxiv.org/abs/2505.18281
Academic Papers
svg
fcb4724fac8276194b41d692bb00c6d13821a5cfdc7fd801b6e53b857cb3c132
2026-01-01T00:00:00-05:00
Reframing Three-Dimensional Morphometrics Through Functional Data Innovations
arXiv:2509.00650v2 Announce Type: replace Abstract: This study innovates geometric morphometrics by incorporating functional data analysis, the square-root velocity function (SRVF), and arc-length parameterisation for 3D morphometric data, leading to the development of seven new pipelines in addition to the standard geometric morphometrics (GM) approach. This enables three-dimensional images to be examined from perspectives that do not neglect curvature, through the combined use of arc-length parameterisation, soft-alignment, and elastic-alignment. A simulation study was conducted to demonstrate the general effectiveness of eight pipelines: geometric morphometrics (GM, baseline), arc-GM, functional data morphometrics (FDM), arc-FDM, soft-SRV-FDM, arc-soft-SRV-FDM, elastic-SRV-FDM, and arc-elastic-SRV-FDM. These pipelines were also applied to distinguish dietary categories of kangaroos (omnivores, mixed feeders, browsers, and grazers) using cranial landmarks obtained from 41 extant species. Principal component analysis was conducted, followed by classification analysis using linear discriminant analysis, multinomial regression and support vector machines with a linear kernel. The results highlight the effectiveness of functional data analysis, together with arc-length and SRVF-based approaches, in opening the door to more robust perspectives for analysing three-dimensional morphometrics, while establishing geometric morphometrics as the baseline for comparison.
https://arxiv.org/abs/2509.00650
Academic Papers
svg
44ab6a441d08df67e34205bb3502148c2e10d76d16be0eadb261639cbc91f1be
2026-01-01T00:00:00-05:00
Estimation and Inference for Causal Explainability
arXiv:2512.20219v4 Announce Type: replace Abstract: Understanding how much each variable contributes to an outcome is a central question across disciplines. A causal view of explainability is attractive for its ability to uncover underlying mechanisms and generalize to new contexts. Based on a family of causal explainability quantities, we develop methods for their estimation and inference. In particular, we construct a one-step correction estimator using semi-parametric efficiency theory, which explicitly leverages the independence structure of variables to reduce the asymptotic variance. For a null hypothesis on the boundary, i.e., zero explainability, we show its equivalence to Fisher's sharp null, which motivates a randomization-based inference procedure. Finally, we illustrate the empirical efficacy of our approach through simulations as well as an immigration experiment dataset, where we investigate how features and their interactions shape public opinion toward admitting immigrants.
https://arxiv.org/abs/2512.20219
Academic Papers
svg
76106fbb605f09930d9ab369400c84f21a06e295cf87aeb9aec673b7e8940ae4
2026-01-01T00:00:00-05:00
Landauer cost in a continuous vacuum/no-vacuum measurement
arXiv:2512.23751v1 Announce Type: new Abstract: We study the thermodynamic cost of maintaining a continuous binary record of a vacuum or no-vacuum measurement. Modeling the monitoring as a time-binned click or no-click process with finite bandwidth, we treat the outcomes as a classical register that is reset after each bin. Landauer's principle then yields an operational lower bound on the dissipated heat rate set by the Shannon entropy rate of the measurement record. We discuss the role of coarse-graining, extend the analysis to many monitored modes, including correlations and compressibility, and provide parameter estimates for circuit-QED photon monitoring, with a speculative horizon-based bookkeeping illustration.
https://arxiv.org/abs/2512.23751
Academic Papers
svg
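The bound described in the abstract above, a dissipated heat rate of at least k_B T ln 2 times the Shannon entropy rate of the binary record, is straightforward to evaluate. The 20 mK temperature and 1 MHz bin rate in the usage check are illustrative circuit-QED-scale numbers chosen here, not the paper's own estimates.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def binary_entropy_bits(p):
    """Shannon entropy (in bits) of a Bernoulli click/no-click outcome."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def landauer_heat_rate(p_click, bins_per_second, temperature):
    """Landauer lower bound (in watts) on the heat dissipated by resetting
    a one-bit record each time bin: Q_dot >= k_B T ln(2) * H[p] * bandwidth.
    """
    h = binary_entropy_bits(p_click)
    return K_B * temperature * np.log(2) * h * bins_per_second
```

The bound vanishes for a deterministic record (p = 0 or 1) and is maximal at p = 1/2, where each bin carries a full bit; correlations between bins, discussed in the abstract, would lower the entropy rate and hence the bound.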
00e995c62e9abb8466053e489ca1e4e36129b2363c223d17297fe80f1ef3c10c
2026-01-01T00:00:00-05:00
Aliphatic Chains as One-Dimensional XY Spin Chains
arXiv:2512.23759v1 Announce Type: new Abstract: Spin waves are propagating disturbances of spin order in lattices with nearest-neighbor interactions. They are traditionally observed in magnetically ordered solids using inelastic neutron, light, or electron scattering, and ferromagnetic resonance. Here, we show that analogous spin dynamics can arise in liquid-state nuclear magnetic resonance (NMR) of molecules containing aliphatic chains. In such molecules, each CH_2 group must have a distinct chemical shift and be magnetically inequivalent via out-of-pair couplings. Under these conditions, singlet state populations of geminal protons propagate along (CH_2)_n segments forming magnetically silent spin waves. For a chain with translational symmetry, the spin Hamiltonian factorizes into subspaces formally equivalent to the one-dimensional XY model. This correspondence yields analytic expressions for eigenstates and eigenenergies in a spectroscopy we term spin-chain zero-quantum NMR. We identify molecular systems in which these conditions are met. Their collective dynamics rapidly exceed classical computational tractability, making them targets for quantum-computer simulations of spin transport and many-body dynamics.
https://arxiv.org/abs/2512.23759
Academic Papers
svg
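The factorization into XY subspaces described in the abstract above can be checked numerically in the simplest sector: after a Jordan-Wigner mapping, the single-excitation block of a uniform open XX (isotropic XY) chain is a tridiagonal hopping matrix with the closed-form spectrum 2J cos(pi m / (n+1)). This sketch assumes an open chain with uniform nearest-neighbor coupling J.

```python
import numpy as np

def xy_single_excitation_energies(n, J=1.0):
    """Single-excitation energies of an open XX chain of n sites.

    The Hamiltonian restricted to one excitation is the n x n tridiagonal
    matrix with coupling J on the off-diagonals.
    """
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = J
    return np.sort(np.linalg.eigvalsh(H))

def analytic_energies(n, J=1.0):
    """Closed form for the same spectrum: 2J cos(pi m / (n+1)), m = 1..n."""
    m = np.arange(1, n + 1)
    return np.sort(2 * J * np.cos(np.pi * m / (n + 1)))
```

Agreement between the numerical and analytic spectra confirms the free-particle structure of the one-excitation sector; the many-excitation dynamics the abstract targets for quantum simulation grow exponentially and quickly leave this tractable regime.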
3659d6faeb14c8ed8dcfc95244132e5de32a7fd23176839b08fd2eaec1953885
2026-01-01T00:00:00-05:00
Euler-Korteweg vortices: A fluid-mechanical analogue to the Schr\"odinger and Klein-Gordon equations
arXiv:2512.23771v1 Announce Type: new Abstract: Quantum theory and relativity exhibit several formal analogies with fluid mechanics. This paper examines under which conditions a classical fluid model may reproduce the most basic mathematical formalism of both theories. By assuming that the angular momentum of an irrotational vortex in an inviscid, barotropic, isothermal fluid with sound speed c is equal in magnitude to the reduced Planck constant, and incorporating Korteweg capillary stress, a complex wave equation describing the momentum and continuity equations of a Euler-Korteweg vortex is obtained. When uniform convection is introduced, the weak field approximation of this wave equation is equivalent to Schr\"odinger's equation. The model is shown to yield classical analogues to de Broglie wavelength, the Einstein-Planck relation, the Born rule and the uncertainty principle. Accounting for the retarded propagation of the wavefield of a vortex in convection produces the Lorentz transformation and the Klein-Gordon equation, with Schr\"odinger's equation appearing as the low-Mach-number limit. These results demonstrate that, under explicit assumptions, a classical continuum can reproduce the mathematical formalism of quantum and relativistic theory in their simplest form, without assuming the principal postulates of those theories.
https://arxiv.org/abs/2512.23771
Academic Papers
svg
0e8d4de9b1760b0b87600bf1827d088fff13da6feee49078c50437daa1408fb5
2026-01-01T00:00:00-05:00
DifGa: Differentiable Error Mitigation for Multi-Mode Gaussian and Non-Gaussian Noise in Quantum Photonic Circuits
arXiv:2512.23776v1 Announce Type: new Abstract: We introduce DifGa, a fully differentiable error-mitigation framework for continuous-variable (CV) quantum photonic circuits operating under Gaussian loss and weak non-Gaussian noise. The approach is demonstrated using analytic simulations with the default.gaussian backend of PennyLane, where quantum states are represented by first and second moments and optimized end-to-end via automatic differentiation. Gaussian loss is modeled as a beam splitter interaction with an environmental vacuum mode of transmissivity $\eta \in [0.3,0.95]$, while non-Gaussian phase noise is incorporated through a differentiable Monte-Carlo mixture of random phase rotations with jitter amplitudes $\delta \in [0,0.7]$. The core architecture employs a multi-mode Gaussian circuit consisting of a signal, ancilla, and environment mode. Input states are prepared using squeezing and displacement operations with parameters $(r_s,\varphi_s,\alpha)=(0.60,0.30,0.80)$ and $(r_a,\varphi_a)=(0.40,0.10)$, followed by an entangling beam splitter with angles $(\theta,\phi)=(0.70,0.20)$. Error mitigation is achieved by appending a six-parameter trainable Gaussian recovery layer comprising local phase rotations and displacements, optimized by minimizing a quadratic loss on the signal-mode quadratures $\langle \hat{x}_0\rangle$ and $\langle \hat{p}_0\rangle$ using gradient descent with fixed learning rate $0.06$ and identical initialization across experiments. Under pure Gaussian loss, the optimized recovery suppresses reconstruction error to near machine precision ($<10^{-30}$) for moderate loss ($\eta \ge 0.5$). When non-Gaussian phase noise is present, noise-aware training using Monte Carlo averaging yields robust generalization, reducing error by more than an order of magnitude compared to Gaussian-trained recovery at large phase jitter. Runtime benchmarks confirm linear scaling with the number of Monte Carlo samples.
https://arxiv.org/abs/2512.23776
Academic Papers
svg
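The Gaussian-loss channel used above (a beamsplitter of transmissivity $\eta$ mixing the signal with an environmental vacuum mode) has a simple action in the moment picture: the mean vector scales by $\sqrt{\eta}$ and the covariance relaxes toward the vacuum. A minimal sketch under the convention that the vacuum covariance is the identity; the parameter values echo the abstract, while the displacement convention (mean $= 2\alpha$ in the $x$ quadrature) is an assumption:

```python
import numpy as np

def gaussian_loss(mean, cov, eta):
    """Pure loss of transmissivity eta on one mode: beamsplitter mixing
    with an environmental vacuum mode, then tracing the environment out.
    Convention (assumed): vacuum covariance = identity."""
    mean = np.sqrt(eta) * np.asarray(mean, float)
    cov = eta * np.asarray(cov, float) + (1.0 - eta) * np.eye(2)
    return mean, cov

# Squeezed, displaced input (parameter values echo the abstract).
r, alpha, eta = 0.60, 0.80, 0.5
mean_in = np.array([2 * alpha, 0.0])            # convention-dependent
cov_in = np.diag([np.exp(-2 * r), np.exp(2 * r)])
mean_out, cov_out = gaussian_loss(mean_in, cov_in, eta)
# Squeezing is degraded toward (but not past) the vacuum variance 1:
assert cov_in[0, 0] < cov_out[0, 0] < 1.0
```

The trainable recovery layer in the paper then acts on exactly these first and second moments; this sketch only reproduces the noise model, not the optimization.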
7b2f1293b64b02dc61ad8fe24af441b0835f3a864deba62df0d4e8f3a8170464
2026-01-01T00:00:00-05:00
Efficient simulation of logical magic state preparation protocols
arXiv:2512.23799v1 Announce Type: new Abstract: Developing space- and time-efficient logical magic state preparation protocols will likely be an essential step towards building a large-scale fault-tolerant quantum computer. Motivated by this need, we introduce a scalable method for simulating logical magic state preparation protocols under the standard circuit-level noise model. When applied to protocols based on code switching, magic state cultivation, and magic state distillation, our method yields a complexity polynomial in (i) the number of qubits and (ii) the non-stabilizerness, e.g., stabilizer rank or Pauli rank, of the target encoded magic state. The efficiency of our simulation method is rooted in a curious fact: every circuit-level Pauli error in these protocols propagates to a Clifford error at the end. This property is satisfied by a large family of protocols, including those that repeatedly measure a transversal Clifford that squares to a Pauli. We provide a proof-of-principle numerical simulation that prepares a magic state using such logical Clifford measurements. Our work enables practical simulation of logical magic state preparation protocols without resorting to approximations or resource-intensive state-vector simulations.
https://arxiv.org/abs/2512.23799
Academic Papers
svg
cd670100240a43404d13c4b8b18e67a7138fb8f1eef9dfea618aedbc445581c9
2026-01-01T00:00:00-05:00
Mermin Devices for Generalized Dicke States
arXiv:2512.23803v1 Announce Type: new Abstract: We present here several new exact results for a number of entangled states: the W-state of three qubits and its generalization -- Dicke states for more than three qubits. We derive these results by bounding the expected values of the Bell-Mermin operators. We review the three qubit GHZ Mermin device, make its generalization to four qubits, and then construct analogous Mermin devices for the generalized Dicke states of three and four qubits. By studying whether their operation can be fully explained by Mermin's instruction sets, we show that the GHZ and Dicke states of three qubits and the GHZ state of four qubits do not allow such a description. However, among the two generalized Dicke states of four qubits, one admits such a description and the other does not.
https://arxiv.org/abs/2512.23803
Academic Papers
svg
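For reference, the standard three-qubit Bell-Mermin operator that the paper generalizes can be evaluated directly with numpy: the GHZ state attains the algebraic maximum $\langle M\rangle = 4$, while any instruction-set (local-realistic) model is bounded by 2. This is only the textbook GHZ case; the Dicke-state Mermin devices themselves are not reproduced here:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]], complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Three-qubit Mermin operator M = XXX - XYY - YXY - YYX.
M = kron(X, X, X) - kron(X, Y, Y) - kron(Y, X, Y) - kron(Y, Y, X)

ghz = np.zeros(8, complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)      # (|000> + |111>)/sqrt(2)

expect = np.real(ghz.conj() @ M @ ghz)
# GHZ reaches the algebraic maximum 4; any instruction-set
# (local-realistic) model is bounded by 2.
assert np.isclose(expect, 4.0)
```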
b6f28a60ef16652bc6333136f905dfb1bce41a7f6f3f7d30302bb96974f05122
2026-01-01T00:00:00-05:00
Squeezed states for Frenkel-like two-fermion composite bosons
arXiv:2512.23867v1 Announce Type: new Abstract: We investigate squeezed states of composite bosons (cobosons) formed by pairs of spin-$1/2$ fermions, with emphasis on Frenkel-like cobosons. While squeezing for standard bosonic modes is well established, its extension to cobosons requires accounting for Pauli blocking and the resulting non-canonical commutation algebra. Building on earlier constructions of coboson coherent states, we define squeezed cobosons as eigenstates of a Bogoliubov transformed coboson operator and derive explicit expressions for the associated quadrature variances. We show that the underlying fermionic structure leads to state-dependent modifications of the Heisenberg--Robertson uncertainty bound, which may fall below the canonical bosonic limit without implying any violation of uncertainty principles. Numerical results based on finite-dimensional matrix representations illustrate how these effects constrain the attainable squeezing. Our framework is relevant to composite boson systems such as tightly bound electron-hole pairs and provides a physically transparent setting to probe compositeness through observable quadrature fluctuations.
https://arxiv.org/abs/2512.23867
Academic Papers
svg
9f9136e8ccae0fc1d362cae4121660ea6b8bc443a041546e17607a51c2032c52
2026-01-01T00:00:00-05:00
Entangled photon triplets using lithium niobate nanophotonics
arXiv:2512.24053v1 Announce Type: new Abstract: Multiphoton states are needed for quantum communication and computation. Multiphoton states are significantly more difficult to generate than one- and two-photon states because two individual down-conversion processes must be cascaded. Only efficiencies of $<100$ Hz/mW have been reported to date. We integrate two down-converters on the same thin-film lithium niobate waveguide, significantly enhancing the cascaded process efficiency to $237 \pm 36$ kHz/mW. The measured $4.4\times10^{-5}$ probability of the second down-converter, which sets the limit on detectable triplet rates, exceeds those of previous triplet sources by an order of magnitude and demonstrates a path towards MHz rates of triplets for quantum applications.
https://arxiv.org/abs/2512.24053
Academic Papers
svg
70c6fd598447d4656d18b91df87eafe996c73b065c3a2d88bda7533be7cbd3cc
2026-01-01T00:00:00-05:00
A new entanglement measure based on the total concurrence
arXiv:2512.24057v1 Announce Type: new Abstract: Quantum entanglement is a crucial resource in quantum information processing, advancing quantum technologies. The greater the uncertainty in subsystems' pure states, the stronger the quantum entanglement between them. From the dual form of $q$-concurrence ($q\geq 2$) we introduce the total concurrence. A bona fide measure of quantum entanglement is introduced, the $\mathcal{C}^{t}_q$-concurrence ($q \geq 2$), which is based on the total concurrence. Analytical lower bounds for the $\mathcal{C}^{t}_q$-concurrence are derived. In addition, an analytical expression is derived for the $\mathcal{C}^{t}_q$-concurrence in the cases of isotropic and Werner states. Furthermore, the monogamy relations that the $\mathcal{C}^{t}_q$-concurrence satisfies for qubit systems are examined. Additionally, based on the parameterized $\alpha$-concurrence and its complementary dual, the $\mathcal{C}^{t}_\alpha$-concurrence $(0\leq\alpha\leq\frac{1}{2})$ is also proposed.
https://arxiv.org/abs/2512.24057
Academic Papers
svg
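As background for the concurrence-based measures above: for a bipartite pure state, the ordinary concurrence reduces to the purity of a reduced state, $C = \sqrt{2(1 - \mathrm{Tr}\,\rho_A^2)}$. A quick numpy check on a Bell state and a product state (this is the base quantity only, not the paper's $\mathcal{C}^{t}_q$-concurrence):

```python
import numpy as np

def pure_state_concurrence(psi, dA=2, dB=2):
    """Concurrence of a bipartite pure state:
    C = sqrt(2 * (1 - Tr(rho_A^2))), rho_A the reduced state."""
    psi = np.asarray(psi, complex).reshape(dA, dB)
    rho_A = psi @ psi.conj().T
    purity = np.real(np.trace(rho_A @ rho_A))
    return np.sqrt(max(0.0, 2.0 * (1.0 - purity)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
product = np.array([1, 0, 0, 0], float)       # |00>
assert np.isclose(pure_state_concurrence(bell), 1.0)
assert np.isclose(pure_state_concurrence(product), 0.0)
```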
c9518a3556203ec4f9166c0b0dfc320a44165982c45242c9445da690f71f988b
2026-01-01T00:00:00-05:00
Quantum Speed Limits Based on the Sharma-Mittal Entropy
arXiv:2512.24070v1 Announce Type: new Abstract: Quantum speed limits (QSLs) establish intrinsic bounds on the minimum time required for the evolution of quantum systems. We present a class of QSLs formulated in terms of the two-parameter Sharma-Mittal entropy (SME), applicable to finite-dimensional systems evolving under general nonunitary dynamics. In the single-qubit case, the QSLs for both quantum channels and non-Hermitian dynamics are analyzed in detail. For many-body systems, we explore the role of SME-based bounds in characterizing the reduced dynamics and apply the results to the XXZ spin chain model. These entropy-based QSLs characterize fundamental limits on quantum evolution speeds and may be employed in contexts including entropic uncertainty relations, quantum metrology, coherent control and quantum sensing.
https://arxiv.org/abs/2512.24070
Academic Papers
svg
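The two-parameter Sharma-Mittal entropy interpolates between familiar entropies: $r \to q$ recovers Tsallis, $r \to 1$ Rényi, and $q, r \to 1$ Shannon. A classical (probability-vector) sketch of the definition and its limits; the quantum version in the paper would use the eigenvalues of the density matrix:

```python
import numpy as np

def sharma_mittal(p, q, r):
    """Two-parameter Sharma-Mittal entropy of a probability vector p:
    H_{q,r}(p) = [ (sum_i p_i^q)^((1-r)/(1-q)) - 1 ] / (1 - r),
    for q, r != 1 (the r->1 and q,r->1 limits give Renyi and Shannon)."""
    p = np.asarray(p, float)
    return ((p ** q).sum() ** ((1 - r) / (1 - q)) - 1) / (1 - r)

p = np.array([0.5, 0.3, 0.2])
q = 2.0
# r -> q recovers the Tsallis entropy (sum p^q - 1)/(1 - q):
tsallis = ((p ** q).sum() - 1) / (1 - q)
assert np.isclose(sharma_mittal(p, q, q), tsallis)
# q, r -> 1 approaches the Shannon entropy:
shannon = -(p * np.log(p)).sum()
assert np.isclose(sharma_mittal(p, 1 + 1e-6, 1 + 1e-6), shannon, atol=1e-4)
```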
d29a0590c99aa7d0b2e007538791698833d3cdb7045dd9ae52f0f6b93f4938f7
2026-01-01T00:00:00-05:00
Parametric amplification of continuous variable entangled state for loss-tolerant multi-phase estimation
arXiv:2512.24081v1 Announce Type: new Abstract: Quantum parameter estimation exploits quantum states to achieve estimation sensitivity beyond the classical limit. In the continuous variable (CV) regime, squeezed states have been exploited to implement deterministic phase estimation. This approach is, however, often restricted by the fragility of quantum states: the phase estimation sensitivity of a squeezed state is significantly degraded by loss or detection inefficiency, which restricts its applications. This issue can be solved by parametrically amplifying the squeezed state \cite{OPA}. In this work, we implement multi-phase estimation with optical parametric amplification of entanglement generated from squeezed states. We find that the multi-phase estimation sensitivity is robust against loss and detection inefficiency, using a two-mode Einstein-Podolsky-Rosen entangled state and a four-mode cluster state for the analysis. Our work provides a method for realizing large-scale quantum metrology in real-world applications that is robust against loss and detection inefficiency.
https://arxiv.org/abs/2512.24081
Academic Papers
svg
0419a74b91059a4aa51f194046dc99370f5c3c3ccd2e5a076010aecccebaf069
2026-01-01T00:00:00-05:00
Testing Noise Correlations by an AI-Assisted Two-Qubit Quantum Sensor
arXiv:2512.24135v1 Announce Type: new Abstract: We introduce and validate a machine learning-assisted protocol to classify time and space correlations of classical noise acting on a quantum system, using two interacting qubits as probe. We consider different classes of noise, according to their Markovianity and spatial correlations. Leveraging the sensitivity of a coherent population transfer protocol under three distinct driving conditions, the various noises are discriminated by only measuring the final transfer efficiencies. This approach reaches around 90% accuracy with a minimal experimental overhead.
https://arxiv.org/abs/2512.24135
Academic Papers
svg
23d9ded858ad7c4c5c22114c33b7ab69d42a8e1557a594c6ddfdfdec80bd0488
2026-01-01T00:00:00-05:00
Capacity-time Trade-off in Highly Reliable Quantum Memory
arXiv:2512.24245v1 Announce Type: new Abstract: Reliable optical quantum memory is limited by real-world imperfections such as disordered coupling and detuning. Existing studies mostly address these factors separately, while in practice their correlated effects set a fundamental limit on storage performance. We develop a comprehensive model that simultaneously incorporates disordered coupling and detuning. It is shown that these disorders induce a random Berry's phase in the stored states, while decoherence from disordered coupling stems from correlations with detuning rather than individual imperfections. This mechanism imposes a fundamental trade-off among storage capacity, storage time, and driving time, setting a universal limit for reliable storage. Extending the analysis to memory-based devices operating with multiple storage processes shows that enhancing parameter independence improves their reliability. We further provide a more precise relation for measuring and correcting global detuning, which is directly relevant to current experimental protocols.
https://arxiv.org/abs/2512.24245
Academic Papers
svg
9afa0a35629ecc28aaef25febbf793e6197f1f0c95974375a099053197af202f
2026-01-01T00:00:00-05:00
Deterministic distribution of W-class states in quantum networks
arXiv:2512.24274v1 Announce Type: new Abstract: Multipartite entangled states possess a number of non-intuitive properties, making them a useful resource for various quantum information-processing tasks. The three-qubit W-state is one such example where every state is robust to single-qubit loss. However, this state is not suitable for deterministic distribution or deterministic communication protocols. Here, we focus on the distribution of a non-symmetric version of such states, namely $W_{\mathrm{mod}}$ states. These states belong to the W-class, and have one ebit of entanglement across a specific bipartition, enabling deterministic teleportation and superdense coding. In particular, we describe a few protocols through which these multipartite entangled states can be distributed {\it deterministically} in a quantum network by first preparing them locally in a central node and then transmitting individual qubits to the end nodes. We analyse the performance of these protocols based on the fidelity of the final distributed state, considering all types of noise that can act during the distribution. Finally, we compare the performance of the protocols to the case where the distribution is performed without any central node.
https://arxiv.org/abs/2512.24274
Academic Papers
svg
d42531bf7f46c863b826c74f7fb08b66fa4e4b6908fce95a0b5afd1bcaaa9e74
2026-01-01T00:00:00-05:00
Ergodic dynamics in iterated quantum protocols
arXiv:2512.24282v1 Announce Type: new Abstract: We study measurement-induced nonlinear dynamics generated by an iterated quantum protocol combining an entangling gate, a single-qubit rotation, and post-selection. For pure single-qubit inputs, a particular choice of the single-qubit unitary yields globally chaotic, strongly mixing dynamics that explores the entire Bloch sphere, providing a physical realization of ergodic behavior in a complex map. We extend the analysis to realistic, noisy preparation by considering mixed initial states and the induced nonlinear evolution inside the Bloch sphere. Numerical results show that the maximally mixed state is an attractor for mixed inputs, although many trajectories exhibit transient increases in purity before ultimately converging. To quantify robustness against noise, we introduce a practical notion of quasi-ergodicity: ensembles prepared in a small angular patch at fixed purity rapidly spread to cover all directions, while the purity gradually decreases toward its minimal value. By varying the final single-qubit gate, we identify a broad family of protocols that remain ergodic-like for pure states, supported by consistent diagnostics including the absence of attracting cycles, agreement of time and ensemble statistics, rapid spreading from localized regions, and exponential sensitivity to initial conditions. Away from the special globally mixing case, the mixed-state dynamics can change qualitatively: for most ergodic-like parameters, a finite subset of noisy inputs is driven toward purification rather than complete mixing, demonstrating the coexistence of statistical mixing and purification within a single iterated protocol.
https://arxiv.org/abs/2512.24282
Academic Papers
svg
b8c25bdf3f00d24f123124c29ae129ee016895ddc6da70dc7d46210640205a37
2026-01-01T00:00:00-05:00
Quantum Thermodynamics and Quantum Perspectives
arXiv:2512.24296v1 Announce Type: new Abstract: After a brief historical perspective, we introduce the key notions of work and heat for quantum systems, to then apply them to quantum engines operating on quantum Otto and Carnot cycles. The irreversible and dissipative character of the quantum Otto cycle is briefly analyzed, contrasting with the energetic optimality of the quantum Carnot cycle. The central question of quantum effects is also addressed and illustrated with several examples. Finally, the last part strives to explain the role that quantum thermodynamics plays for quantum applications and quantum technologies, particularly in relation to energy optimization and the trade-off between performances and energy costs.
https://arxiv.org/abs/2512.24296
Academic Papers
svg
7409879d2b29977d34d96849d4bf426789809f26d700474742cf4376d76babfd
2026-01-01T00:00:00-05:00
In defense of temporal Tsirelson bound
arXiv:2512.24304v1 Announce Type: new Abstract: In a recent paper, Chatterjee et al. [Phys. Rev. Lett. 135, 220202 (2025)] analyze and experimentally implement a specific unitary evolution of a simple quantum system. The authors refer to this type of dynamics as a "superposition of unitary time evolutions." They claim that such an evolution enables a violation of the temporal Tsirelson bound in the Leggett-Garg scenario, a claim that is supported by their experimental results. In this work, we show that the proposed evolution can be understood within a more conventional framework, without invoking a superposition of evolutions. Furthermore, we demonstrate that the apparent violation of the bound arises because the measured quantities are not consistent with the assumptions of the Leggett-Garg scenario. This is a slightly extended version of the Comment submitted for publication in Phys. Rev. Lett.
https://arxiv.org/abs/2512.24304
Academic Papers
svg
e9249e7477743fe0eb422d3b8004761d2c11938830a804a7ed5b1645c2528ef2
2026-01-01T00:00:00-05:00
Quantum Computing, Ising Formulation, and the Traveling Salesman Problem
arXiv:2512.24308v1 Announce Type: new Abstract: The Ising formulation is important for many NP problems (Lucas, 2014). This formulation enables implementing novel quantum computing methods, including the Quantum Approximate Optimization Algorithm and the Variational Quantum Eigensolver (VQE). Here, we closely investigate the traveling salesman problem (TSP). First, we present some non-trivial issues related to the Ising-model view versus a realistic salesman. Then, focusing on VQE, we discuss and clarify the use of: (a) conventional VQE and its relevance as a novel SAT-solver; (b) qubit efficiency and its importance in the Noisy Intermediate-Scale Quantum era; and (c) the relevance and importance of a novel approach named Discrete Quantum Exhaustive Search (Alfassi, Meirom, and Mor, 2024) for enhancing VQE and other methods using mutually unbiased bases. The approach we present here in detail can potentially be extended to analyzing, approximating, and solving various other NP-complete problems. Our approach can also be extended beyond the Ising model and beyond the class NP, for example to the class Quantum Merlin Arthur (QMA) of problems, relevant for quantum chemistry and for general spin problems.
https://arxiv.org/abs/2512.24308
Academic Papers
svg
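The Ising/QUBO encoding of the TSP referenced above (Lucas, 2014) uses binary variables $x_{v,t}$ marking that city $v$ is visited at time step $t$, quadratic penalties enforcing a permutation, and a cost term summing tour edges. A brute-force sketch for four cities (the distance matrix and the penalty weight $A$ are illustrative choices, not the paper's):

```python
import itertools
import numpy as np

# Symmetric distance matrix for n = 4 cities (illustrative values).
D = np.array([[0, 1, 2, 9],
              [1, 0, 9, 2],
              [2, 9, 0, 1],
              [9, 2, 1, 0]], float)
n = D.shape[0]
A = 10 * D.max()   # constraint penalty; must dominate any tour length

# Binary variables x[v, t] = 1 iff city v is visited at time step t.
# Enumerate all 2^(n*n) assignments at once.
bits = (np.arange(2 ** (n * n))[:, None] >> np.arange(n * n)) & 1
X = bits.reshape(-1, n, n)

rows = ((1 - X.sum(axis=2)) ** 2).sum(axis=1)   # each city exactly once
cols = ((1 - X.sum(axis=1)) ** 2).sum(axis=1)   # each step exactly once
tour = np.einsum('kut,uv,kvt->k', X, D, np.roll(X, -1, axis=2))
E = A * (rows + cols) + tour                    # Lucas-style QUBO energy

X_best = X[E.argmin()]
# The ground state is a valid tour (a permutation matrix) whose length
# equals the exact optimum found by enumerating permutations.
assert (X_best.sum(axis=0) == 1).all() and (X_best.sum(axis=1) == 1).all()
opt = min(sum(D[p[t], p[(t + 1) % n]] for t in range(n))
          for p in itertools.permutations(range(n)))
assert np.isclose(E.min(), opt)
```

The same energy function, written as a diagonal Hamiltonian over $n^2$ qubits, is what QAOA or VQE would minimize variationally instead of by enumeration.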
9c12aa6bd33b650018ce083e3d73a1befcab8bbf040e8f06d50919117432ae99
2026-01-01T00:00:00-05:00
Gravitationally Induced Entanglement Between Particles in Harmonic Traps: Limits for Gaussian States
arXiv:2512.24312v1 Announce Type: new Abstract: Gravitationally induced entanglement has been proposed as a probe of the quantum nature of gravity. This work analyzes a system of two particles in harmonic traps interacting only through gravity, considering thermal and two-mode squeezed initial states. For thermal states, a maximum temperature is identified above which entanglement cannot be generated, and for fixed system parameters an optimal trap frequency that maximizes the logarithmic negativity is found. Squeezing the initial state does not further enhance the entanglement generation, but increases the temperature range over which it can be observed. Extending the analysis to general Gaussian states, an upper bound on the achievable entanglement is derived and shown to be saturated, for example, by ground and squeezed states. The results show that the amount of entanglement generated in this setup is extremely small, highlighting the experimental challenges of observing gravitationally induced quantum effects.
https://arxiv.org/abs/2512.24312
Academic Papers
svg
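The logarithmic negativity used above can be computed from the smallest symplectic eigenvalue of the partially transposed covariance matrix. A sketch for two-mode Gaussian states, checked on a two-mode squeezed vacuum where $\tilde{\nu}_- = e^{-2r}$ (assumed convention: vacuum covariance = identity, $E_N$ in nats; the paper's trapped-particle covariances are not reproduced here):

```python
import numpy as np

def log_negativity(sigma):
    """Logarithmic negativity of a two-mode Gaussian state with 4x4
    covariance matrix sigma = [[A, C], [C.T, B]] (x1, p1, x2, p2 order).
    Convention: vacuum covariance = identity. Returns E_N in nats."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    # Partial transposition flips the sign of det C.
    delta = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    nu_minus = np.sqrt((delta - np.sqrt(delta ** 2 - 4 * np.linalg.det(sigma))) / 2)
    return max(0.0, -np.log(nu_minus))

def tmsv_cov(r):
    """Covariance matrix of a two-mode squeezed vacuum with squeezing r."""
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    Z = np.diag([1.0, -1.0])
    return np.block([[c * np.eye(2), s * Z], [s * Z, c * np.eye(2)]])

r = 0.7
# For the TMSV, nu_minus = exp(-2r), so E_N = 2r.
assert np.isclose(log_negativity(tmsv_cov(r)), 2 * r)
```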
9c3caceb0db076cdb7c5067e0473de8d7896b0f14b733bada909230b00d6f657
2026-01-01T00:00:00-05:00
Increased-Efficiency Multiple-Decoding-Attempts Error Correction for Continuous-Variable Quantum Key Distribution
arXiv:2512.24387v1 Announce Type: new Abstract: In continuous-variable quantum key distribution (CV-QKD), the performance of the information reconciliation (IR) step is critical for the achievable secret key rate (SKR) and transmission distance. We show how to improve on the recently introduced implementation of an IR-protocol involving multiple decoding attempts (MDA) and validate the method on simulated data in different application scenarios. Throughout, we demonstrate meaningful SKR-gains compared to both the standard protocol of a single decoding attempt and to the original MDA-implementation, even at given decoding complexity.
https://arxiv.org/abs/2512.24387
Academic Papers
svg
ba5ba5906b127364d8d135f895998b68e1511849ae3abd2a58c6033a65d9454c
2026-01-01T00:00:00-05:00
Diagonal Unitary Covariant Superchannels
arXiv:2512.24389v1 Announce Type: new Abstract: We present a complete characterization of diagonal unitary covariant (DU-covariant) superchannels, i.e., higher-order transformations that map quantum channels to quantum channels. Necessary and sufficient conditions for complete positivity and trace preservation are derived and the canonical decomposition describing DU-covariant superchannels is provided. The presented framework unifies and extends known families of covariant quantum channels and enables explicit analysis of their action on physically relevant examples, including amplitude-damping, bit-flip, and Pauli channels. Our results provide a practical toolbox for symmetry-restricted higher-order quantum processes and offer a setting for exploring open problems such as the PPT$^2$ conjecture.
https://arxiv.org/abs/2512.24389
Academic Papers
svg
13f95415cdac707e7411613dbf6b0b020d0cfabb9c219cb02c767c2ff039132b
2026-01-01T00:00:00-05:00
Machine Learning-Aided Optimal Control of a Qubit Subjected to External Noise
arXiv:2512.24393v1 Announce Type: new Abstract: We apply a machine-learning-enhanced greybox framework to a quantum optimal control protocol for open quantum systems. Combining a whitebox physical model with a neural-network blackbox trained on synthetic data, the method captures non-Markovian noise effects and achieves gate fidelities above 90% under Random Telegraph and Ornstein-Uhlenbeck noise. Critical issues of the approach are discussed.
https://arxiv.org/abs/2512.24393
Academic Papers
svg
15f6a7686c7435db7f8e7e16ce1064a4230fd43a64679a60f9f042d17833a910
2026-01-01T00:00:00-05:00
Harnessing subspace controllability: Time-optimal Dicke-state generation in Heisenberg-coupled qubit arrays with a single local control
arXiv:2512.24406v1 Announce Type: new Abstract: We explore the feasibility of realizing Dicke states in qubit arrays with always-on isotropic Heisenberg coupling between adjacent qubits, assuming a single Zeeman-type control acting in the $z$ direction on an actuator qubit. The Lie-algebraic criteria of controllability imply that such an array is not completely controllable, but satisfies the conditions for subspace controllability on any subspace with a fixed number of excitations. Therefore, a qubit array described by the model under consideration is state-to-state controllable for an arbitrary choice of initial and final states that have the same Hamming weight. This limited controllability is exploited here for the time-optimal dynamical generation of an $a$-excitation Dicke state $|D^{N}_{a}\rangle$ ($a=1,2,\ldots, N-1$) in a linear array with $N$ qubits starting from a generic Hamming-weight-$a$ product state. To dynamically generate the desired Dicke states -- including $W$ states $|W_{N}\rangle$ as their special ($a=1$) case -- in the shortest possible time with a single local $Z$ control, we employ an optimal-control scheme based on the dressed Chopped RAndom Basis (dCRAB) algorithm. We optimize the target-state fidelity over the expansion coefficients of smoothly-varying control fields in a truncated random Fourier basis; this is done by combining Nelder-Mead-type local optimizations with the multistart-based clustering algorithm that facilitates searches for global extrema. In this manner, we obtain the optimal control fields for Dicke-state preparation in arrays with up to $N=9$ qubits. Based on our numerical results, we find that the shortest possible state-preparation times scale quadratically with $N$. Finally, we demonstrate the robustness of our control scheme against small control-field deviations from the optimal values.
https://arxiv.org/abs/2512.24406
Academic Papers
svg
023fd2926029642c0951459d2a2afc54c9faea437eec9e95f0e058a5d6a465e5
2026-01-01T00:00:00-05:00
Dissipation-Stabilized Quantum Revivals in a Non-Hermitian Lattice Gauge Theory
arXiv:2512.24418v1 Announce Type: new Abstract: With the advent of quantum simulation experiments of lattice gauge theories (LGTs), an open question is the effect of non-Hermiticity on their rich physics. The well-known PXP model, a U$(1)$ LGT with a two-level electric field in one spatial dimension, has become a paradigm of exotic physics in and out of equilibrium. Here, we introduce a non-Hermitian version in which the spin-flip rate differs between the two spin directions. While the naive expectation is that non-Hermiticity might suppress coherent phenomena such as quantum many-body scars, we find that when the facilitating direction of the spin is disfavored, the oscillations are instead \emph{enhanced}, decaying much slower than in the PXP limit. We demonstrate that this can be understood through a similarity transformation that maps our model to the standard PXP model, revealing that the oscillations are enhanced versions of the PXP scars. Our work provides an analytically tractable and conceptually simple example where non-Hermiticity enhances the stability of dynamically non-trivial coherent many-body modes.
https://arxiv.org/abs/2512.24418
Academic Papers
svg
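The PXP Hamiltonian underlying the model above is easy to build exactly for a short chain, and one standard property is directly checkable: $C = \prod_i Z_i$ anticommutes with every $PXP$ term (each projector commutes with $Z$, the central $X$ anticommutes), so the spectrum is symmetric about zero. A small-$N$ numpy sketch with open boundaries ($N = 8$ is an illustrative choice; the paper's non-Hermitian deformation is not included):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], float)
P = np.array([[1, 0], [0, 0]], float)   # projector onto the ground state |0>

def site_op(ops, N):
    """Tensor product with single-site operators {site: op}, identity elsewhere."""
    return reduce(np.kron, [ops.get(i, I2) for i in range(N)])

def pxp_hamiltonian(N):
    """PXP model with open boundaries: H = sum_i P_{i-1} X_i P_{i+1},
    with the projectors dropped at the chain ends."""
    H = np.zeros((2 ** N, 2 ** N))
    for i in range(N):
        ops = {i: X}
        if i > 0:
            ops[i - 1] = P
        if i < N - 1:
            ops[i + 1] = P
        H += site_op(ops, N)
    return H

N = 8
H = pxp_hamiltonian(N)
E = np.sort(np.linalg.eigvalsh(H))
# Particle-hole symmetry: C = prod_i Z_i anticommutes with H,
# so the eigenvalues come in +/-E pairs.
assert np.allclose(E, -E[::-1])
```

The non-Hermitian variant in the paper biases the two spin-flip rates; the similarity transformation it describes maps that model back onto this Hermitian PXP form.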
3f1233a5c5e263999acc7f69c41bc18ffd9a100d08632a36959442730ae05361
2026-01-01T00:00:00-05:00
A Quantum-Inspired Algorithm for Graph Isomorphism
arXiv:2512.24423v1 Announce Type: new Abstract: The Noisy Intermediate-Scale Quantum (NISQ) era of technology in which we currently find ourselves is defined by non-universality, susceptibility to errors and noise, and a search for useful applications. While demonstrations of practical quantum advantage remain elusive in this era, it provides space to develop and analyze the advantages and limitations of systems and their ability to solve problems. In this work, we critically assess a proposed quantum algorithm for the graph isomorphism problem, implemented on a photonic quantum device. Inspired by the nature of this quantum algorithm, we formulate a necessary condition for the isomorphism of graphs encoded in Gaussian boson samplers and a classical algorithm to test for it. Our classical algorithm makes use of efficiently computable statistical properties of a quantum sampling system to show a pair of graphs fail to meet our necessary condition and thus cannot be isomorphic. We analyze our algorithm in the context of the inspiring, sampler-based quantum algorithm of Br\`adler et al., the classical color refinement algorithm, and the state-of-the-art quasi-polynomial Babai algorithm.
https://arxiv.org/abs/2512.24423
Academic Papers
svg
79aeb84006b56372999803aa36728a83810bb685ef722dde3f5478956097bc85
2026-01-01T00:00:00-05:00
Detection of quantum entanglement across the event horizon
arXiv:2512.24424v1 Announce Type: new Abstract: We investigate the problem of distinguishing between separable and entangled states of two quantum wave packets, one of which falls into a black hole. Intuitively, one might expect the two scenarios to be indistinguishable, since the information carried by one wave packet is hidden beyond the event horizon. We show, however, that fundamental limitations on the localizability of quantum states render the two scenarios, in principle, distinguishable. Employing tools from quantum state discrimination theory, we analyze a concrete realization and discuss the configurations that maximize the probability of successfully distinguishing between the two cases.
https://arxiv.org/abs/2512.24424
Academic Papers
svg
f3888f96588c6ab86dc3088a996d05c1bb214368914b5f75c55e0f7f5aaabb1c
2026-01-01T00:00:00-05:00
Uncertainty inequalities in a non-Hermitian scenario: the problem of the metric
arXiv:2512.24437v1 Announce Type: new Abstract: We investigate uncertainty relations for quantum observables evolving under non-Hermitian Hamiltonians, with particular emphasis on the role of metric operators. By constructing appropriate metrics in each dynamical regime, namely the unbroken-symmetry phase, the broken-symmetry phase, and at exceptional points, we provide a consistent definition of expectation values, variances, and time evolution within a Krein-space framework. Within this approach, we derive a generalized Heisenberg-Robertson uncertainty inequality which is valid across all spectral regimes. As an application, we analyze a two level model with parity-time reversal symmetry and show that, while the uncertainty measure exhibits oscillatory behavior in the unbroken phase, it evolves toward a minimum-uncertainty steady state in the broken symmetry phase and at exceptional points. We further compare our metric-based description with a Lindblad master-equation approach and show their agreement in the steady state. Our results highlight the necessity of incorporating appropriate metric structures to extract physically meaningful predictions from non-Hermitian quantum dynamics.
https://arxiv.org/abs/2512.24437
Academic Papers
svg
3a820d1e86470522b819a42e7f11364560582f0998b226cc47c02b4913129930
2026-01-01T00:00:00-05:00
Incorporating multi-qubit exchange coupling effects between transmon qubits in Maxwell-Schr\"{o}dinger numerical methods
arXiv:2512.24448v1 Announce Type: new Abstract: Superconducting qubits have emerged as a leading platform for realizing quantum computers. Accurate modeling of these devices is essential for predicting performance, improving design, and optimizing control. Many modeling approaches currently rely on lumped circuit approximations or other simplified treatments that can be limited in resolving the interplay between the qubit dynamics and the electromagnetic circuitry, leading to significant experimental deviations from numerical predictions at times. To address many of these limitations, methods that self-consistently solve the Schr\"{o}dinger equation for qubit dynamics with the classical Maxwell's equations have been developed and shown to accurately predict a wide range of effects related to superconducting qubit control and readout. Despite these successes, these methods have not been able to consider multi-qubit effects that give rise to qubit-qubit entanglement. Here, we address this by rigorously deriving how multi-qubit coupling effects between transmon qubits can be embedded into Maxwell-Schr\"{o}dinger methods. To support this, we build on earlier first-principles derivations of Maxwell-Schr\"{o}dinger methods for the specific case of two transmon qubits coupled together through a common electromagnetic system in the dispersive regime. To aid in validating aspects of the Maxwell-Schr\"{o}dinger framework, we also provide a new interpretation of Maxwell-Schr\"{o}dinger methods as an efficient simulation strategy to capture the class of non-Markovian open quantum system dynamics. Our results demonstrate that these effects can give rise to strong classical crosstalk that can significantly alter multi-qubit dynamics, which we demonstrate for the cross-resonance gate. These classical crosstalk effects have been noted in cross-resonance experiments, but previous quantum theory and device analysis could not explain their origin.
https://arxiv.org/abs/2512.24448
Academic Papers
svg
7146e986a6a053d475b28a1157fde0163f1f27876dac93499b224e9db78c5040
2026-01-01T00:00:00-05:00
Quantum phase synchronisation enhanced via Coulomb interaction in an optomechanical system
arXiv:2512.24454v1 Announce Type: new Abstract: In this work, we investigate the dynamics of quantum synchronization in a four-mode optomechanical system, focusing on the influence of the Coulomb interaction between two mechanical resonators. We analyze the effect of the Coulomb coupling on three distinct synchronization regimes, i.e., complete quantum synchronization, $\phi$-synchronization, and quantum phase synchronization. Our results show that while the Coulomb interaction plays a pivotal role in significantly enhancing quantum phase synchronization by facilitating energy exchange and phase coherence, it has little impact on complete and $\phi$-synchronization. This indicates that amplitude and frequency locking are primarily determined by the optical driving, whereas phase alignment depends critically on inter-resonator coupling. We also demonstrate that the oscillations of the two optical cavities, which are indirectly coupled via the mechanical resonators, can become aligned over time, resulting in classical synchronization. These findings provide a robust mechanism for controlling collective quantum dynamics and offer a foundation for applications in quantum communication, precision sensing, and the development of synchronized quantum networks.
https://arxiv.org/abs/2512.24454
Academic Papers
svg
ac425a612c9b0a84c24a34551d865025bdcc2a75dca5f405cfb293c8dd50063f
2026-01-01T00:00:00-05:00
A Boundary Condition Perspective on Circuit QED Dispersive Readout
arXiv:2512.24466v1 Announce Type: new Abstract: Boundary conditions in confined geometries and measurement interactions in quantum mechanics share a common structural role: both select a preferred basis by determining which states are compatible with the imposed constraint. This paper develops this perspective for circuit QED dispersive readout through a first-principles derivation starting from the circuit Lagrangian. The transmon qubit terminating a transmission line resonator provides a frequency-dependent boundary condition whose pole structure encodes the qubit's transition frequencies; different qubit states yield different resonator frequencies. Two approximations, linear response and a pole-dominated expansion valid near resonance, reduce the boundary function to a rational form in the Sturm-Liouville eigenparameter. The extended Hilbert space of the Fulton-Walter spectral theory then provides a framework for the dressed-mode eigenvalue problem conditional on the qubit state. The dispersive shift and vacuum Rabi splitting emerge from the transcendental eigenvalue equation, with the residues determined by matching to the splitting: $\delta_{ge} = 2Lg^2\omega_q^2/v^4$, where $g$ is the vacuum Rabi coupling. A level repulsion theorem guarantees that no dressed mode frequency coincides with a transmon transition. For two qubits with matched dispersive shifts, odd-parity states become frequency-degenerate; true parity-only measurement requires engineered suppression of linear dispersive terms.
https://arxiv.org/abs/2512.24466
Academic Papers
svg
7c152be5430fe6e29c4e70fdc41d62dfcd0baa760ac45ac213ecfcf1ffb59790
2026-01-01T00:00:00-05:00
Three-Axis Spin Squeezed States Associated with Excited-State Quantum Phase Transitions
arXiv:2512.24472v1 Announce Type: new Abstract: Spin squeezing in collective atomic ensembles enables quantum-enhanced metrology by reducing noise below the standard quantum limit through nonlinear interactions. Extending the one-axis and two-axis twisting paradigms of Kitagawa and Ueda, we introduce a general class of three-axis spin squeezed states within the anisotropic Lipkin-Meshkov-Glick model. The model features direction-dependent quadratic couplings that interpolate between uniaxial and biaxial regimes and can be interpreted as an asymmetric quantum rotor. Using semiclassical dynamics, Majorana representations, and Husimi-Q distributions, we analyze the structure and metrological properties of the resulting states. The three-axis framework reproduces the known N^(-2/3) scaling of one-axis twisting and the Heisenberg-limited N^(-1) scaling of two-axis twisting, while allowing additional tunability and enhanced entanglement generation in low-spin systems. We further show that tuning the anisotropy parameters induces ground-state and excited-state quantum phase transitions, including a second-order transition associated with level clustering and critical dynamics. These results unify spin squeezing, quantum criticality, and rotor analogies, and suggest implementations in Rydberg arrays and cavity-QED platforms for precision sensing and quantum simulation.
https://arxiv.org/abs/2512.24472
Academic Papers
svg
aaaca8aafb9c816027c1297f97d422dd60e8e0572fe25833cd7034b511708f93
2026-01-01T00:00:00-05:00
Spectroscopy of Quantum Phase Slips: Visualizing Complex Real-Time Instantons
arXiv:2512.24495v1 Announce Type: new Abstract: Parametrically driven oscillators can emerge as a basis for the next generation of qubits. Classically, these systems exhibit two stable oscillatory states with opposite phases. Upon quantization, these states turn into a pair of closely spaced Floquet states, which can serve as the logical basis for a qubit. However, interaction with the environment induces phase-slip events which set a limit on qubit coherence. Such phase slips persist even at zero temperature due to a mechanism known as quantum activation \cite{QuantumActivation}. In contrast to conventional tunneling, the quantum activation is described by a {\em real-time} instanton trajectory in the complexified phase space of the system. In this work, we show that the phase-slip rate is exponentially sensitive to weak AC perturbations. The spectrum of the system's response -- captured by the so-called logarithmic susceptibility (LS) -- enables a direct observation of characteristic features of real-time instantons. Studying this spectrum suggests new means of efficient qubit control.
https://arxiv.org/abs/2512.24495
Academic Papers
svg
d926babbd57b43e236bb86415dd068072217eb789b7a9a5227ab5f6291aaf5a1
2026-01-01T00:00:00-05:00
Geometric phase of exceptional point as quantum resonance in complex scaling method
arXiv:2512.24528v1 Announce Type: new Abstract: Non-Hermitian operators and exceptional points (EPs) are now routinely realized in few-mode systems such as optical resonators and superconducting qubits. However, their foundations in genuine scattering problems with unbounded Hamiltonians remain much less clear. In this work, we address how the geometric phase associated with encircling an EP should be formulated when the underlying eigenstates are quantum resonances within a one-dimensional scattering model. To do this, we employ the complex scaling method, where resonance poles of the S-matrix are realized as discrete eigenvalues of the non-Hermitian dilated Hamiltonian, to construct situations in which resonant and scattering states coalesce into an EP in the complex energy plane, that is, the resonance pole is embedded into the continuum spectrum. We analyze the self-orthogonality in the vicinity of an EP and the Berry phase. Our results provide a bridge between non-Hermitian spectral theory and the traditional theory of quantum resonances.
https://arxiv.org/abs/2512.24528
Academic Papers
svg
1daff554785f427f55145ff3c628af1593aa1378d9b44d37b163095ae92a8e6d
2026-01-01T00:00:00-05:00
TLS-induced thermal nonlinearity in a micro-mechanical resonator
arXiv:2512.24539v1 Announce Type: new Abstract: We present experimental evidence of a thermally-driven amplitude-frequency nonlinearity in a thin-film quartz phononic crystal resonator at millikelvin temperatures. The nonlinear response arises from the coupling of the mechanical mode to an ensemble of microscopic two-level system defects driven out of equilibrium by a microwave drive. In contrast to the conventional Duffing oscillator, the observed nonlinearity exhibits a mixed reactive-dissipative character. Notably, the reactive effect can manifest as either a softening or hardening of the mechanical resonance, depending on the ratio of thermal to phonon energy. By combining the standard TLS theory with a thermal conductance model, the measured power-dependent response is quantitatively reproduced and readout-enhanced relaxation damping from off-resonant TLSs is identified as the primary mechanism limiting mechanical coherence. Within this framework, we delineate the conditions under which similar systems will realize this nonlinearity.
https://arxiv.org/abs/2512.24539
Academic Papers
svg
cbc0dff0b5f5c971811d3bd78e10fbdb2cddafae51f9cc68d4c696b61af88fa8
2026-01-01T00:00:00-05:00
QAOA-MaxCut has barren plateaus for almost all graphs
arXiv:2512.24577v1 Announce Type: new Abstract: The QAOA has been the subject of intense study over recent years, yet the corresponding Dynamical Lie Algebra (DLA)--a key indicator of the expressivity and trainability of VQAs--remains poorly understood beyond highly symmetric instances. An exponentially scaling DLA dimension is associated with the presence of so-called barren plateaus (BPs) in the optimization landscape, which renders training intractable. In this work, we investigate the DLA of QAOA applied to the canonical MaxCut, for both weighted and unweighted graphs. For weighted graphs, we show that when the weights are drawn from a continuous distribution, the DLA dimension grows as $\Theta(4^n)$ almost surely for all connected graphs except paths and cycles. In the more common unweighted setting, we show that asymptotically all but an exponentially vanishing fraction of graphs have $\Theta(4^n)$ large DLA dimension. The entire simple Lie algebra decomposition of the corresponding DLAs is also identified, from which we prove that the variance of the loss function is $O(1/2^n)$, implying that QAOA on these weighted and unweighted graphs suffers from BPs. Moreover, we give explicit constructions for families of graphs whose DLAs have exponential dimension, including cases whose MaxCut is in $\mathsf P$. Our proof of the unweighted case is based on a number of splitting lemmas and DLA-freeness conditions that allow one to convert prohibitively complicated Lie algebraic problems into amenable graph theoretic problems. These form the basis for a new algorithm that computes such DLAs orders of magnitude faster than previous methods, reducing runtimes from days to seconds on standard hardware. We apply this algorithm to MQLib, a classical MaxCut benchmark suite covering over 3,500 instances with up to 53,130 vertices, and find that, ignoring edge weights, at least 75% of the instances possess a DLA of dimension at least $2^{128}$.
https://arxiv.org/abs/2512.24577
Academic Papers
svg
0c4339aa963437bfb22910249d85566fc5b4ff80cc0199e31df757d6e5a44cae
2026-01-01T00:00:00-05:00
Hidden rotation symmetry of the Jordan-Wigner transformation and its application to measurement in quantum computation
arXiv:2512.24589v1 Announce Type: new Abstract: Using a global rotation by theta about the z-axis in the spin sector of the Jordan-Wigner transformation rotates Pauli matrices X and Y in the x-y-plane, while it adds a global complex phase to fermionic quantum states that have a fixed number of particles. With the right choice of angles, this relates expectation values of Pauli strings containing products of X and Y to different products, which can be employed to reduce the number of measurements needed when simulating fermionic systems on a quantum computer. Here, we derive this symmetry and show how it can be applied to systems in Physics and Chemistry that involve Hamiltonians with only single-particle (hopping) and two-particle (interaction) terms. We also discuss the consequences of this for finding efficient measurement circuits in variational ground state preparation.
https://arxiv.org/abs/2512.24589
Academic Papers
svg
294a3685fe2cc85d386a4485d6b63f793d7138eedf1ead533a47e5feacb4ce42
2026-01-01T00:00:00-05:00
Variance Decomposition in Bohmian Mechanics with Weak Actual Value Field and Quantum Potential
arXiv:2512.24664v1 Announce Type: new Abstract: We introduce a trajectory-based decomposition of quantum variances within Bohmian mechanics. By extending the weak actual value to a field on configuration space, we prove, under strong regularity conditions for stationary bound states, that the standard quantum variance splits into two non-negative terms: the ensemble variance of the weak actual value and a quantum term arising from phase-amplitude coupling. For momentum, this quantum term links variance-level fluctuations to the average quantum potential. The decomposition fails to provide a physical interpretation for spin, reinforcing the Bohmian tenet that only position is fundamental. The work provides a formal tool for analyzing quantum fluctuations and clarifies the interpretative limits of such a trajectory-based approach.
https://arxiv.org/abs/2512.24664
Academic Papers
svg
ed0483c6fe8f3cafcadc6f350eb92155735bc3f3e2d2825e756dc26eb6f15f9c
2026-01-01T00:00:00-05:00
A fast and exact algorithm for stabilizer R\'enyi entropy via XOR-FWHT
arXiv:2512.24685v1 Announce Type: new Abstract: Quantum advantage is widely understood to rely on key quantum resources beyond entanglement, among which nonstabilizerness (quantum ``magic'') plays a central role in enabling universal quantum computation. However, a direct brute-force enumeration of all Pauli strings and the corresponding expectation values from a length-$2^N$ state vector, where $N$ is the system size, yields an overall computational cost scaling as $O(8^N)$, which quickly becomes infeasible as the system size grows. Here we reformulate the second-order stabilizer R\'enyi entropy in a bitstring language, expose an underlying XOR-convolution structure on $\mathbb Z_2^N$, and reduce the computation to $2^N$ fast Walsh-Hadamard transforms of length $2^N$, together with pointwise operations, yielding a deterministic and exact XOR fast Walsh-Hadamard transform algorithm with runtime scaling $O(N 4^N)$ and natural parallelism. This algorithm enables high-precision, medium-scale exact calculations for generic state vectors. It provides a practical tool for probing the scaling, phase diagnostics, and dynamical fine structure of quantum magic in many-body systems.
https://arxiv.org/abs/2512.24685
Academic Papers
svg
6641d129923da0ad4707a9616b4e2a98f751d984d1a61b5f733e5a0e0bcf2414
2026-01-01T00:00:00-05:00
Interfacing Atomic Spins with Photons for Quantum Metrology, Simulation and Computation
arXiv:2512.24705v1 Announce Type: new Abstract: These lecture notes discuss applications of atom-light interactions in cavities to quantum metrology, simulation, and computation. A focus is on nonlocally interacting spin systems realized by coupling many atoms to a delocalized mode of light. We will build up from the fundamentals: understanding how a cavity enables light to coherently imprint information on atoms and atoms to imprint information on the light, enabling quantum non-demolition measurements that constitute a powerful means of engineering nonclassical states. By extension, letting the intracavity light act back on the atoms enables coherent photon-mediated interactions. I start by discussing collective spin models, emphasizing applications in entanglement-enhanced metrology, before proceeding to richer many-body physics enabled by incorporating spatiotemporal control or employing multiple cavity modes. I will highlight opportunities for leveraging these tools for quantum simulations inspired by problems in condensed matter and quantum gravity. Along the way, I provide a pedagogical introduction to criteria for strong atom-light coupling, illustrate how the corresponding figure of merit -- the cooperativity -- sets fundamental limits on the coherence of atom-light interactions, and discuss prospects for harnessing high-cooperativity cavity QED in quantum simulation and computation.
https://arxiv.org/abs/2512.24705
Academic Papers
svg
632a6d1b79fb674bf4c0506a42b927271cb1770cb68001a758c7e60fc66bb695
2026-01-01T00:00:00-05:00
Continuous-variable quantum key distribution network based on entangled states of optical frequency combs
arXiv:2512.24718v1 Announce Type: new Abstract: Continuous-variable quantum key distribution (CVQKD) features a high key rate and compatibility with classical optical communication. Developing expandable and efficient CVQKD networks will promote the deployment of large-scale quantum communication networks in the future. This paper proposes a CVQKD network based on the entangled states of an optical frequency comb. This scheme generates Einstein-Podolsky-Rosen entangled states with a frequency comb structure through the process of a type-II optical parametric oscillator. By combining with the scheme of entanglement in the middle, a fully connected CVQKD network capable of distributing secret keys simultaneously can be formed. We analyze the security of the system in the asymptotic case. Simulation results show that, provided system loss and noise are well controlled, the proposed scheme is feasible for deploying a short-distance fully connected CVQKD network. Loss will be the main factor limiting the system's performance. The proposed scheme provides new ideas for a multi-user fully connected CVQKD network.
https://arxiv.org/abs/2512.24718
Academic Papers
svg
7f81d113f4039e3c8345171a72c28e5647bd9eb461492f1e065e27b86d8adace
2026-01-01T00:00:00-05:00
Easier randomizing gates provide more accurate fidelity estimation
arXiv:2512.24744v1 Announce Type: new Abstract: Accurate benchmarking of quantum gates is crucial for understanding and enhancing the performance of quantum hardware. A standard method for this is interleaved benchmarking, a technique which estimates the error on an interleaved target gate by comparing cumulative error rates of randomized sequences implemented with the interleaved gate and without it. In this work, we show both numerically and experimentally that the standard approach of interleaved randomized benchmarking (IRB), which uses the multi-qubit Clifford group for randomization, can produce highly inaccurate and even physically impossible estimates for the error on the interleaved gate in the presence of coherent errors. Fortunately, we also show that interleaved benchmarking performed with cycle benchmarking, which randomizes with single-qubit Pauli gates, provides dramatically reduced systematic uncertainty relative to standard IRB, and further provides a host of additional benefits including data reusability. We support our conclusions with a theoretical framework for bounding systematic errors, extensive numerical results comparing a range of interleaved protocols under fixed resource costs, and experimental demonstrations on three quantum computing platforms.
https://arxiv.org/abs/2512.24744
Academic Papers
svg
01ce03f650b47b3d9371a512a164e60985b45e6519e37958cfd261290146e7e3
2026-01-01T00:00:00-05:00
Quadratic Continuous Quantum Optimization
arXiv:2512.24759v1 Announce Type: new Abstract: Quantum annealers can solve QUBO problems efficiently but struggle with continuous optimization tasks like regression due to their discrete nature. We introduce Quadratic Continuous Quantum Optimization (QCQO), an anytime algorithm that approximates solutions to unconstrained quadratic programs via a sequence of QUBO instances. Rather than encoding real variables as binary vectors, QCQO implicitly represents them using continuous QUBO weights and iteratively refines the solution by summing sampled vectors. This allows flexible control over the number of binary variables and adapts well to hardware constraints. We prove convergence properties, introduce a step size adaptation scheme, and validate the method on linear regression. Experiments with simulated and real quantum annealers show that QCQO achieves accurate results with fewer qubits, though convergence slows on noisy hardware. Our approach enables quantum annealing to address a wider class of continuous problems.
https://arxiv.org/abs/2512.24759
Academic Papers
svg
28eae011d20270703767c6e6a9a90acae0fca05210332e77f583a5101c670852
2026-01-01T00:00:00-05:00
Harmonic rigidity at fixed spectral gap in one dimension
arXiv:2512.24790v1 Announce Type: new Abstract: We solve the static isoperimetric problem underlying the Mandelstam-Tamm bound. Among one-dimensional confining potentials with a fixed spectral gap, we prove that the harmonic trap is the unique maximizer of the ground-state position variance. As a consequence, we obtain a sharp geometric quantum speed-limit bound on the position-position component of the quantum metric, and we give a necessary-and-sufficient condition for when the bound is saturated. Beyond the exact extremum, we establish quantitative rigidity. We control the Thomas-Reiche-Kuhn spectral tail and provide square-integrable structural stability for potentials that nearly saturate the bound. We further extend the analysis to magnetic settings, deriving a longitudinal necessary-and-sufficient characterization and transverse bounds expressed in terms of guiding-center structure. Finally, we outline applications to bounds on static polarizability, limits on the quantum metric, and benchmarking of trapping potentials.
https://arxiv.org/abs/2512.24790
Academic Papers
svg
17884bad543220901d5be9922fb39c49531f119711dbcbaa340be9f605672333
2026-01-01T00:00:00-05:00
Non-Abelian Geometric Phases in Triangular Structures And Universal SU(2) Control in Shape Space
arXiv:2512.24798v1 Announce Type: new Abstract: We construct holonomic quantum gates for qubits that are encoded in the near-degenerate vibrational $E$-doublet of a deformable three-body system. Using Kendall's shape theory, we derive the Wilczek--Zee connection governing adiabatic transport within the $E$-manifold. We show that its restricted holonomy group is $\mathrm{SU}(2)$, implying universal single-qubit control by closed loops in shape space. We provide explicit loops implementing a $\pi/2$ phase gate and a Hadamard-type gate. For two-qubit operations, we outline how linked holonomic cycles in arrays generate a controlled Chern--Simons phase, enabling an entangling controlled-$X$ (CNOT) gate. We present a Ramsey/echo interferometric protocol that measures the Wilson loop trace of the Wilczek--Zee connection for a control cycle, providing a gauge-invariant signature of the non-Abelian holonomy. As a physically realizable demonstrator, we propose bond-length modulations of a Cs($6s$)--Cs($6s$)--Cs($nd_{3/2}$) Rydberg trimer in optical tweezers and specify operating conditions that suppress leakage out of the $E$-manifold.
https://arxiv.org/abs/2512.24798
Academic Papers
svg
b03054b5091f3c0608f195c3d5e81d218a728468b10c705fe2b32ba01b5034b3
2026-01-01T00:00:00-05:00
Operator Entanglement from Non-Commutative Symmetries
arXiv:2512.24806v1 Announce Type: new Abstract: We argue that Hopf-algebra deformations of symmetries -- as encountered in non-commutative models of quantum spacetime -- carry an intrinsic content of $operator$ $entanglement$ that is enforced by the coproduct-defined notion of composite generators. As a minimal and exactly solvable example, we analyze the $U_q(\mathfrak{su}(2))$ quantum group and a two-qubit realization obtained from the coproduct of a $q$-deformed single-spin Hamiltonian. Although the deformation is invisible on a single qubit, it resurfaces in the two-qubit sector through the non-cocommutative coproduct, yielding a family of intrinsically nonlocal unitaries. We compute their operator entanglement in closed form and show that, for Haar-uniform product inputs, their entangling power is fully determined by the operator entanglement. This provides a concrete mechanism by which non-commutative symmetries enforce a baseline of entanglement at the algebraic level, with implications for information dynamics in quantum-spacetime settings and quantum information processing.
https://arxiv.org/abs/2512.24806
Academic Papers
svg
142078720fc876b204f72ffb70144049ad46d6ba189db6819352c72ff4074d57
2026-01-01T00:00:00-05:00
Unsupervised Topological Phase Discovery in Periodically Driven Systems via Floquet-Bloch State
arXiv:2512.24822v1 Announce Type: new Abstract: Floquet engineering offers an unparalleled platform for realizing novel non-equilibrium topological phases. However, the unique structure of Floquet systems, which includes multiple quasienergy gaps, poses a significant challenge to classification using conventional analytical methods. We propose a novel unsupervised machine learning framework that employs a kernel defined in momentum-time ($\boldsymbol{k},t$) space, constructed directly from Floquet-Bloch eigenstates. This approach is intrinsically data-driven and requires no prior knowledge of the underlying topological invariants, providing a fundamental advantage over prior methods that rely on abstract concepts like the micromotion operator or homotopic transformations. Crucially, this work successfully reveals the intrinsic topological characteristics encoded within the Floquet eigenstates themselves. We demonstrate that our method robustly and simultaneously identifies the topological invariants associated with both the $0$-gap and the $\pi$-gap across various symmetry classes (1D AIII, 1D D, and 2D A), establishing a robust methodology for the systematic classification and discovery of complex non-equilibrium topological matter.
https://arxiv.org/abs/2512.24822
Academic Papers
svg
e8ade46e842a21b4a9dbf02f151a2c8f8c000319062b79f94704444a4b724c79
2026-01-01T00:00:00-05:00
Role reversal in quantum Mpemba effect
arXiv:2512.24839v1 Announce Type: new Abstract: We investigate the quantum Mpemba effect in a dissipative Dicke model, consisting of a spin-1/2 ensemble coupled to a bosonic mode, which in turn is coupled to a bosonic bath. We derive a sufficient criterion for the occurrence of the quantum Mpemba effect, characterized by quantum coherence, in this model. We introduce the phenomenon of role reversal in the Mpemba effect, wherein changes in the system parameters invert the relaxation ordering of a given pair of initial states that exhibit the Mpemba effect, causing the faster-relaxing state to become slower and vice versa. We find the existence of role reversal in the Mpemba effect for this Dicke model using different relaxation measures, including differential quantum coherence and entanglement, and trace distance, between the time-evolved and steady states.
https://arxiv.org/abs/2512.24839
Academic Papers
svg
70ec55eb89eda8513719d8c96767ebce5ac22e8b9702772f6b79e25d01509975
2026-01-01T00:00:00-05:00
Probing quantum-coherent dynamics with free electrons
arXiv:2512.24883v1 Announce Type: new Abstract: Recent advances in time-resolved cathodoluminescence have enabled ultrafast studies of single emitters in quantum materials with femtosecond temporal resolution. Here, we develop a quantum theory modeling the dynamics of free electrons interacting with quantum emitters in arbitrary initial states. Our analysis reveals that a free electron can induce transient coherent oscillations in the populations when the system is initially prepared in a coherent superposition of its states. Moreover, the electron energy spectrum exhibits a clear signature of the quantum coherence and sensitivity to the transition frequency of the emitter. These coherence effects manifest themselves as oscillations in the zero-loss peak of the spectral energy-loss probability. Our findings pave the way for characterization of quantum-coherent dynamics of individual quantum emitters by electron-probes.
https://arxiv.org/abs/2512.24883
Academic Papers
svg
18c5af1c72babb471383a25efd5aeaf40e8b47bf6a2b1278f97170766403188a
2026-01-01T00:00:00-05:00
Quantumness of hybrid systems under quantum noise
arXiv:2512.24884v1 Announce Type: new Abstract: We investigate the quantum correlations in an axially symmetric hybrid qubit-qutrit system subjected to different noisy environments. We first introduce a physical model and analyze its Hamiltonian structure, emphasizing the role of hybrid dimensionality and axial symmetry. The effects of decoherence are then examined under two local noise mechanisms, namely dephasing and phase-flip channels, acting on the qubit and qutrit subsystems in both symmetric and asymmetric configurations. Quantum correlations are quantified using negativity to capture entanglement and quantum discord based on linear entropy to characterize more general nonclassical correlations. Our results show that both thermal fluctuations and phase noise lead to a monotonic degradation of quantum correlations, with increasing temperature accelerating coherence loss and inducing entanglement sudden death at finite temperatures. While negativity vanishes abruptly under sufficiently strong noise, quantum discord persists beyond the entanglement threshold, revealing residual quantum correlations in mixed states. We further demonstrate that asymmetric noise configurations significantly enhance the robustness of both entanglement and discord by partially shielding coherence in the less affected subsystem. A comparative analysis reveals that phase-flip noise is more destructive than pure dephasing, leading to faster suppression of quantum correlations.
https://arxiv.org/abs/2512.24884
Academic Papers
svg
b0f9cfd653871b7a9f658cd41f02fabcbcad7aca7b99ab93ace5133ba3c09f8d
2026-01-01T00:00:00-05:00
High-performance quantum interconnect between bosonic modules beyond transmission loss constraints
arXiv:2512.24926v1 Announce Type: new Abstract: Distributed quantum computing architectures require high-performance quantum interconnects between quantum information processing units, while previous implementations have been fundamentally limited by transmission line losses. Here, we demonstrate a low-loss interconnect between two superconducting modules using an aluminum coaxial cable, achieving a bus mode quality factor of $1.7\times10^6$. By employing SNAIL as couplers, we realize inter-modular state transfer in 0.8 {\mu}s via a three-wave mixing process. The state transfer fidelity reaches 98.2% for quantum states encoded in the first two energy levels, achieving a Bell state fidelity of 92.5%. Furthermore, we show the capability to transfer high-dimensional states by successfully transmitting binomially encoded logical states. Systematic characterization reveals that performance constraints have shifted from transmission line losses (contributing merely 0.2% infidelity) to module-channel interface effects and local Kerr nonlinearities. Our work advances the realization of quantum interconnects approaching fundamental capacity limits, paving the way for scalable distributed quantum computing and efficient quantum communications.
https://arxiv.org/abs/2512.24926
Academic Papers
svg
21a8ad05af529925e46c74b56871786aac65e699df1ae03ef85072ec95a6738a
2026-01-01T00:00:00-05:00
Multi-particle quantum systems within the Worldline Monte Carlo formalism
arXiv:2512.24942v1 Announce Type: new Abstract: We extend the Worldline Monte Carlo approach to computationally simulating the Feynman path integral of non-relativistic multi-particle quantum-mechanical systems. We show how to generate an arbitrary number of worldlines distributed according to the (free) kinetic part of the multi-particle quantum dynamics and how to simulate interactions between worldlines in the ensemble. We test this formalism with two- and three-particle quantum mechanical systems, with both long range Coulomb-like interactions between the particles and external fields acting separately on the particles, in various spatial dimensions. We extract accurate estimations of the ground state energy of these systems using the late-time behaviour of the propagator, validating our approach with numerically exact solutions obtained via straightforward diagonalisation of the Hamiltonian. Systematic benchmarking of the new approach, presented here for the first time, shows that the computational complexity of Worldline Monte Carlo scales more favourably with respect to standard numerical alternatives. The method, which is general, numerically exact, and computationally not intensive, can easily be generalised to relativistic systems.
https://arxiv.org/abs/2512.24942
Academic Papers
svg
8f98680777a7a4a6cef1b8b5663b231bce3316ff70f9e0fd22c466e3080e8d17
2026-01-01T00:00:00-05:00
The uncertainty constants: A unified framework of two, three and four observables
arXiv:2512.24950v1 Announce Type: new Abstract: Uncertainty is a fundamental and important concept in quantum mechanics. Recent works have revealed both the product and sum forms of uncertainty constants for three observables. Such a result is intimately related to the properties of Pauli operators. In this work, using techniques from matrix theory, we give an alternative proof for the case of three observables and generalize the result to the case of four measurements. Compared with the original proof, this derivation is simpler. Moreover, the discussion treats the summation form of the uncertainty relation for two, three, and four observables in a unified way.
https://arxiv.org/abs/2512.24950
Academic Papers
svg
f02e078533f6fa8d2e32030e34a476172cee57fc5e8951d8cf2ef8f1804a4a1f
2026-01-01T00:00:00-05:00
Laser intracavity absorption magnetometry for optical quantum sensing
arXiv:2512.24951v1 Announce Type: new Abstract: Intracavity absorption spectroscopy (ICAS) is a well-established technique for detecting weak absorption signals with ultrahigh sensitivity. Here, we extend this concept to magnetometry using nitrogen-vacancy (NV) centers in diamond. We introduce laser intracavity absorption magnetometry (LICAM), a concept that is in principle applicable to a broader class of optical quantum sensors, including optically pumped magnetometers. Using an electrically driven, edge-emitting diode laser that operates self-sustainably, we show that LICAM enables highly sensitive magnetometers operating under ambient conditions. Near the lasing threshold, we achieve a 475-fold enhancement in optical contrast and a 180-fold improvement in magnetic sensitivity compared with a conventional single-pass geometry. The experimental results are accurately described by a rate-equation model for single-mode diode lasers. From our measurements, we determine a projected shot-noise-limited sensitivity in the $\mathrm{pT}\,\mathrm{Hz}^{-1/2}$ range and show that, with realistic device improvements, sensitivities down to the $\mathrm{fT}\,\mathrm{Hz}^{-1/2}$ scale are attainable.
https://arxiv.org/abs/2512.24951
Academic Papers
svg
84f43db26aa96a893c1a96a87107ba8f841ececef33c26438867675dbce0e585
2026-01-01T00:00:00-05:00
Matrix Thermodynamic Uncertainty Relation for Non-Abelian Charge Transport
arXiv:2512.24956v1 Announce Type: new Abstract: Thermodynamic uncertainty relations (TURs) bound the precision of currents by entropy production, but quantum transport of noncommuting (non-Abelian) charges challenges standard formulations because different charge components cannot be monitored within a single classical frame. We derive a process-level matrix TUR starting from the operational entropy production $\Sigma = D(\rho'_{SE}\|\rho'_S\!\otimes\!\rho_E)$. Isolating the experimentally accessible bath divergence $D_{\mathrm{bath}}=D(\rho'_E\|\rho_E)$, we prove a fully nonlinear, saturable lower bound valid for arbitrary current vectors $\Delta q$: $D_{\mathrm{bath}} \ge B(\Delta q,V,V')$, where the bound depends only on the transported-charge signal $\Delta q$ and the pre/post collision covariance matrices $V$ and $V'$. In the small-fluctuation regime $D_{\mathrm{bath}}\geq\frac12\,\Delta q^{\mathsf T}V^{-1}\Delta q+O(\|\Delta q\|^4)$, while beyond linear response it remains accurate. Numerical strong-coupling qubit collisions illustrate the bound and demonstrate near-saturation across broad parameter ranges using only local measurements on the bath probe.
https://arxiv.org/abs/2512.24956
Academic Papers
svg
c1bb7f39b03051772fa6d99f25731fc861fa727bcdc05cc1e701918e771182a5
2026-01-01T00:00:00-05:00
GEQIE Framework for Rapid Quantum Image Encoding
arXiv:2512.24973v1 Announce Type: new Abstract: This work presents a Python framework named after the General Equation of Quantum Image Encoding (GEQIE). The framework creates the image-encoding state using a unitary gate, which can later be transpiled to target quantum backends. The benchmarking results, simulated with different noise levels, demonstrate the correctness of the already implemented encoding methods and the usability of the framework for more sophisticated research tasks based on quantum image encodings. Additionally, we present a showcase example of Cosmic Web dark-matter density snapshot encoding and high-accuracy retrieval (PCC = 0.995) to demonstrate the extendability of the GEQIE framework to multidimensional data and its applicability to other fields of research.
https://arxiv.org/abs/2512.24973
Academic Papers
svg
73218a10bda1e037e362e701c6856e62d6ca62e1e6d6f0f788c1a918d6600d24
2026-01-01T00:00:00-05:00
Lindbladian PT phase transitions
arXiv:2512.24981v1 Announce Type: new Abstract: A parity-time (PT) transition is a spectral transition characteristic of non-Hermitian generators; it typically occurs at an exceptional point, where multiple eigenvectors coalesce. The concept of a PT transition has been extended to Markovian open quantum systems, which are described by the GKSL equation. Interestingly, the PT transition in many-body Markovian open quantum systems, the so-called \textit{Lindbladian PT (L-PT) phase transition}, is closely related to two classes of exotic nonequilibrium many-body phenomena: \textit{continuous-time crystals} and \textit{non-reciprocal phase transitions}. In this review, we describe the recent advances in the study of L-PT phase transitions. First, we define PT symmetry in three distinct contexts: non-Hermitian systems, nonlinear dynamical systems, and Markovian open quantum systems, highlighting the interconnections between these frameworks. Second, we develop mean-field theories of L-PT phase transitions for collective-spin systems and for bipartite bosonic systems with particle-number conservation. Within these classes of models, we show that L-PT symmetry can induce a breaking of continuous time-translation symmetry down to a discrete one, leading to persistent periodic dynamics. We further demonstrate that the L-PT phase transition point is typically \textit{a critical exceptional point}, where multiple collective excitation modes with zero excitation spectrum coalesce. These findings establish an explicit connection to continuous-time crystals and non-reciprocal phase transitions. Third, going beyond the mean-field theory, we analyze statistical and quantum properties, such as purity and quantum entanglement indicators of time-independent steady states for several specific models with the L-PT symmetry. Finally, we discuss future research directions for L-PT phase transitions.
https://arxiv.org/abs/2512.24981
Academic Papers
svg
8cd3afabb100ae5777029fceeb27446274b8de83ea5d984c5a664aec364ee074
2026-01-01T00:00:00-05:00
Any Clifford+T circuit can be controlled with constant T-depth overhead
arXiv:2512.24982v1 Announce Type: new Abstract: Since an n-qubit circuit consisting of CNOT gates can have up to $\Omega(n^2/\log{n})$ CNOT gates, it is natural to expect that $\Omega(n^2/\log{n})$ Toffoli gates are needed to apply a controlled version of such a circuit. We show that the Toffoli count can be reduced to at most n. The Toffoli depth can also be reduced to O(1), at the cost of 2n Toffoli gates, even without using any ancilla or measurement. In fact, using a measurement-based uncomputation, the Toffoli depth can be further reduced to 1. From this, we give two corollaries: any controlled Clifford circuit can be implemented with O(1) T-depth, and any Clifford+T circuit with T-depth D can be controlled with T-depth O(D), even without ancillas. As an application, we show how to catalyze a rotation by any angle up to precision $\epsilon$ in T-depth exactly 1 using a universal $\lceil\log_2(8/\epsilon)\rceil$-qubit catalyst state.
https://arxiv.org/abs/2512.24982
Academic Papers
svg