| id (string, 64 chars) | published (string, 19-25 chars) | title (string, 7-262 chars) | description (string, 6-54.4k chars) | link (string, 31-227 chars) | category (string, 6 classes) | image (string, 3-247 chars) |
|---|---|---|---|---|---|---|
| 0c6aa5c9e4d29d6e2e1ecf887e0f6e63d359a55d35c56b9b70e67387b06ec11c | 2026-02-02T00:00:00-05:00 | The Impact of Star Formation and Feedback Recipes on the Stellar Mass and Interstellar Medium of High-Redshift Galaxies | arXiv:2411.07282v2 Announce Type: replace Abstract: We introduce MEGATRON, a new galaxy formation model for cosmological radiation hydrodynamics simulations of high-redshift galaxies. The model accounts for the non-equilibrium chemistry and heating/cooling processes of $\geq 80$ atoms, ions, and molecules, coupled to on-the-fly radiation transfer. We apply the model in a cosmological setting to the formation of a $10^9\ {\rm M_{\odot}}$ halo at $z=6$, and run 25 realizations at pc-scale resolution, varying numerous parameters associated with our state-of-the-art star formation, stellar feedback, and chemical enrichment models. We show that the overall budget of feedback energy is the key parameter that controls star formation regulation at high redshift, with other numerical parameters (e.g. supernova clustering, star formation conditions) having a more limited impact. As a similar feedback model has been shown to produce realistic $z=0$ galaxies, our work demonstrates that calibration at $z=0$ does not guarantee strong regulation of star formation at high redshift. Interestingly, we find that subgrid model variations that have little impact on the final $z=6$ stellar mass can lead to substantial changes in the observable properties of high-redshift galaxies. For example, different star formation models based on, e.g., density thresholds or turbulence-inspired criteria, lead to fundamentally distinct nebular emission line ratios across the interstellar medium (ISM). These results highlight the ISM as an important resource for constraining models of star formation, feedback, and galaxy formation in the JWST era, where emission line measurements for $>1,000$ high-redshift galaxies are now available. | https://arxiv.org/abs/2411.07282 | Academic Papers | svg |
| fce0871729ef1019ca94e5cb593e2455f965873c9a695ecd019a18d750456a89 | 2026-02-02T00:00:00-05:00 | Zooming-in on cluster radio relics -- I. How density fluctuations explain the Mach number discrepancy, microgauss magnetic fields, and spectral index variations | arXiv:2411.11947v2 Announce Type: replace Abstract: It is generally accepted that radio relics are the result of synchrotron emission from shock-accelerated electrons. Current models, however, are still unable to explain several aspects of their formation. In this paper, we focus on three outstanding problems: i) Mach number estimates derived from radio data do not agree with those derived from X-ray data, ii) cooling length arguments imply a magnetic field that is at least an order of magnitude larger than the surrounding intracluster medium (ICM), and iii) spectral index variations do not agree with standard cooling models. To solve these problems, we first identify typical shock conditions in cosmological simulations, using the results to inform significantly higher resolution shock-tube simulations. We apply the cosmic ray electron spectra code CREST and the emission code CRAYON+ to these, thereby generating mock observables ab-initio. We identify that upon running into an accretion shock, merger shocks generate a shock-compressed sheet, which, in turn, runs into upstream density fluctuations in pressure equilibrium. This mechanism directly gives rise to solutions to the three problems: it creates a distribution of Mach numbers at the shock-front, which flattens cosmic ray electron spectra, thereby biasing radio-derived Mach number estimates to higher values. We show that this effect is particularly strong in weaker shocks. Secondly, the density sheet becomes Rayleigh-Taylor unstable at the contact discontinuity, causing turbulence and additional compression downstream. This amplifies the magnetic field from ICM-like conditions up to microgauss levels. We show that synchrotron-based measurements are strongly biased by the tail of the distribution here too. Finally, the same instability also breaks the common assumption that matter is advected at the post-shock velocity downstream, thus invalidating laminar-flow based cooling models. | https://arxiv.org/abs/2411.11947 | Academic Papers | svg |
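A note on the radio-derived Mach numbers discussed in the row above: under linear diffusive shock acceleration, the injection spectral index $\alpha_{\rm inj}$ maps to a Mach number via $M^2 = (2\alpha_{\rm inj}+3)/(2\alpha_{\rm inj}-1)$, so a distribution of Mach numbers that flattens the spectrum biases this estimate high. A minimal sketch of the textbook relation (not the paper's pipeline, which uses CREST and CRAYON+):

```python
import numpy as np

def mach_from_injection_index(alpha_inj):
    """Radio Mach number from the synchrotron injection spectral index
    (S_nu ~ nu^-alpha), using the standard diffusive-shock-acceleration
    relation alpha_inj = (M^2 + 3) / (2 (M^2 - 1))."""
    return np.sqrt((2 * alpha_inj + 3) / (2 * alpha_inj - 1))

# Example: a flatter (smaller) injection index implies a stronger shock.
for a in (0.6, 0.8, 1.0):
    print(f"alpha_inj = {a:.1f}  ->  M = {mach_from_injection_index(a):.2f}")
```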
| a4951e55b6d4a7eae665b7cbc799fbbf81b0f6138d1a739bff032ca33f8a69b2 | 2026-02-02T00:00:00-05:00 | Possible evidence for extended X-ray emission surrounding PSR B0656+14 with eROSITA | arXiv:2501.17046v2 Announce Type: replace Abstract: Extended very-high-energy $\gamma$-ray emission from middle-aged pulsars, as revealed recently by several ground-based $\gamma$-ray experiments, has strong implications for the transport of high-energy particles in the interstellar medium surrounding those pulsars. The $\gamma$-ray emission is widely believed to be produced by high-energy electrons and positrons accelerated by the pulsar wind nebulae when scattering off the interstellar radiation field via the inverse Compton process. Consequently, multiwavelength counterparts of the $\gamma$-ray halos are expected to be present, which have not yet been detected. In this work we report the possible detection of extended X-ray emission from a $\sim 0.2\degr$ radius region around PSR B0656+14 with eROSITA. Despite uncertainties in the on-orbit point spread function of the pointing mode, the radial profile of PSR B0656+14 is found to be broader than that of a star observed under similar conditions, indicating that the emission possibly arises from the expected extended halo around the pulsar. The spectrum of the emission can be described by a power-law function with an index of $\sim3.7$. Its surface brightness declines with radius faster than the prediction of particle diffusion and synchrotron radiation in a uniform magnetic field, suggesting the existence of a radial gradient of the magnetic field strength as $\sim r^{-1}$. The magnetic field strength in the X-ray emitting region is constrained to be $4-10~\mu$G. | https://arxiv.org/abs/2501.17046 | Academic Papers | svg |
| e508f075e881201bfe970b3ebaa7ad9ad537238d50c3c64893a5fdaf9c212c60 | 2026-02-02T00:00:00-05:00 | J-PAS and PFS surveys in the era of dark energy and neutrino mass measurements | arXiv:2505.04275v3 Announce Type: replace Abstract: Fisher-matrix forecasts are presented for the cosmological surveys of the Javalambre Physics of the Accelerating Universe Astrophysical Survey (J-PAS) and the Subaru Prime Focus Spectrograph (PFS). The wide, low-redshift coverage of J-PAS and the high-density, high-redshift mapping of PFS are strongly complementary: combining the two reduces marginalized uncertainties on all primary parameters compared with either survey individually. Adding the joint J-PAS+PFS data to next-generation CMB measurements from the Simons Observatory (SO) and \textsc{LiteBird} yields an expected precision of $\sigma(\sum m_\nu)=0.017\,$eV in the $\Lambda$CDM$+\sum m_\nu+N_{\rm eff}$ framework, sufficient to disfavour the inverted neutrino hierarchy at $2.34\,\sigma$ if the true mass sum equals the normal-ordering minimum. Motivated by recent DESI results, we also forecast within a $w_0w_a$CDM$+\sum m_\nu+N_{\rm eff}$ cosmology, adopting the DESI\,DR2 best-fit values ($w_0=-0.758$, $w_a=-0.82$) as fiducial. The combination CMB+J-PAS+PFS then delivers $\sigma(w_0)=0.044$ and $\sigma(w_a)=0.18$, corresponding to a $5.1\,\sigma$ preference for a time-varying dark-energy equation of state. These findings show that J-PAS and PFS, especially when coupled with Stage-IV CMB observations, will provide competitive tests of neutrino physics and the dynamics of cosmic acceleration. | https://arxiv.org/abs/2505.04275 | Academic Papers | svg |
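The row above reports Fisher-matrix forecasts. For independent probes, Fisher matrices add, and the marginalized 1$\sigma$ uncertainty on parameter $i$ is $\sqrt{(F^{-1})_{ii}}$, which is why combining J-PAS and PFS shrinks every marginalized error. A toy sketch with made-up matrices (not the paper's actual forecasts):

```python
import numpy as np

# Toy Fisher matrices for two independent probes over the same two
# parameters (hypothetical numbers, not the paper's actual forecasts).
F_survey_a = np.array([[40.0, 10.0],
                       [10.0, 12.0]])
F_survey_b = np.array([[15.0, -4.0],
                       [-4.0, 30.0]])

# For independent data sets, Fisher information is additive.
F_combined = F_survey_a + F_survey_b

def marginalized_sigmas(F):
    """1-sigma marginalized uncertainties: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

for name, F in [("A", F_survey_a), ("B", F_survey_b), ("A+B", F_combined)]:
    print(name, marginalized_sigmas(F))
```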
| 8aff5bc0503ddf74824a9093107ebb3492499124a8552ea7ab70d95f0f982c4b | 2026-02-02T00:00:00-05:00 | Fast Low Energy Reconstruction using Convolutional Neural Networks | arXiv:2505.16777v3 Announce Type: replace Abstract: IceCube is a Cherenkov detector instrumenting over a cubic kilometer of glacial ice deep under the surface of the South Pole. The DeepCore sub-detector lowers the detection energy threshold to a few GeV, enabling precise measurements of neutrino oscillation parameters with atmospheric neutrinos. The reconstruction of neutrino interactions inside the detector is essential in studying neutrino oscillations. It is particularly challenging to reconstruct sub-100 GeV events with the IceCube detectors due to the relatively sparse detection units and detection medium. Convolutional neural networks (CNNs) are broadly used in physics experiments for both classification and regression purposes. This paper discusses the CNNs developed and employed for the latest IceCube-DeepCore oscillation measurements. These CNNs estimate various properties of the detected neutrinos, such as their energy, arrival direction, interaction vertex position, and flavor-related signature, and are also used for background classification. | https://arxiv.org/abs/2505.16777 | Academic Papers | svg |
| 08df0979b54c5bcf1a2b9a9d6530c6431b6a646cf777c12da2016ace50b2d567 | 2026-02-02T00:00:00-05:00 | Non-Separable Halo Bias from High-Redshift Galaxy Clustering | arXiv:2506.07662v2 Announce Type: replace Abstract: The halo model provides a powerful framework for interpreting galaxy clustering by linking the spatial distribution of dark matter haloes to the underlying matter distribution. A key assumption within the halo bias approximation of the halo model is that, on sufficiently large scales, the halo bias between two halo populations is a separable function of the mass of each population. In this work, we test the validity of this approximation on quasi-linear scales using both simulations and observational data across a broad range of halo masses and redshifts. In particular, we define a separability function based on halo or galaxy cross-correlations to quantify deviations from halo bias separability, and measure it from N-body simulations. We find significant departures from separability on quasi-linear scales (\(\sim 1\text{--}5\,\mathrm{Mpc}\)) at high redshifts (\(z \geq 3\)), leading to a suppression in the scale-dependent halo bias and hence in halo cross-correlations by up to a factor of 2 -- or even higher. In contrast, deviations at low redshifts remain modest. Additionally, using high-redshift (\(z \sim 3.6\)) galaxy samples, we detect deviations from bias separability that closely align with simulation predictions. The breakdown of the separable bias approximation on quasi-linear scales at high redshifts underscores the importance of accounting for non-separability in models of the galaxy-halo connection in this regime. Furthermore, these results highlight the potential of high-redshift galaxy cross-correlations as a probe for improving the galaxy-halo connection from upcoming large-scale surveys. | https://arxiv.org/abs/2506.07662 | Academic Papers | svg |
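The halo-bias entry above defines a separability function from cross-correlations. One common construction, which may differ in detail from the paper's exact estimator, is $S = \xi_{12}/\sqrt{\xi_{11}\xi_{22}}$, which equals unity exactly when the bias is a separable function of the two masses:

```python
import numpy as np

def separability(xi_11, xi_12, xi_22):
    """Separability function built from two-point functions of two halo
    populations: S = xi_12 / sqrt(xi_11 * xi_22).  If the halo bias is a
    separable function of mass, xi_12^2 = xi_11 * xi_22 and S = 1;
    deviations from unity quantify non-separable bias."""
    return xi_12 / np.sqrt(xi_11 * xi_22)

# Toy check: with separable bias xi_ij = b_i * b_j * xi_mm, S == 1.
xi_mm, b1, b2 = 0.8, 1.5, 2.2
print(separability(b1*b1*xi_mm, b1*b2*xi_mm, b2*b2*xi_mm))  # -> 1.0
```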
| 7bb5ed98303a822db50ae2287f4dca7145a8fde9ab48a1427a8a2746043db17a | 2026-02-02T00:00:00-05:00 | The Effective Field Theory of Large Scale Structure for Mixed Dark Matter Scenarios | arXiv:2507.08792v3 Announce Type: replace Abstract: We initiate a systematic study of the perturbative nonlinear dynamics of cosmological fluctuations in dark sectors comprising a fraction of non-cold dark matter, for example ultra-light axions or light thermal relics. These mixed dark matter scenarios exhibit suppressed growth of perturbations below a characteristic, cosmologically relevant, scale associated with the microscopic nature of the non-cold species. As a consequence, the scale-free nonlinear solutions developed for pure cold dark matter and for massive neutrinos do not, in general, apply. We thus extend the Effective Field Theory of Large Scale Structure to model the coupled fluctuations of the cold and non-cold dark matter components, describing the latter as a perfect fluid with finite sound speed at linear level. We provide new analytical solutions wherever possible and devise an accurate and computationally tractable prescription for the evaluation of the one-loop galaxy power spectrum, which can be applied to probe mixed dark matter scenarios with current and upcoming galaxy survey data. As a first application of this framework, we derive updated constraints on the energy density in ultra-light axions using a combination of Planck and BOSS data. Our refined theoretical modeling leads to somewhat weaker bounds compared to previous analyses. | https://arxiv.org/abs/2507.08792 | Academic Papers | svg |
| 2e9433521ed52162f5efbfa61384b7dc469103df23d23d64c50ddfbe22e04722 | 2026-02-02T00:00:00-05:00 | Phantom crossing or dark interaction? | arXiv:2507.18274v2 Announce Type: replace Abstract: Recent results from DESI BAO measurements, together with Planck CMB and Pantheon+ data, suggest that there may be a `phantom' phase ($w_{\rm de}<-1$) in the expansion of the Universe. This inference follows when the $w_0, w_a$ parametrization for the dark energy equation of state $w_{\rm de}$ is used to fit the data. Since phantom dark energy in general relativity is unphysical, we investigate the possibility that the phantom behaviour is not intrinsic, but effective -- due to a non-gravitational interaction between dark matter and non-phantom dark energy. To this end, we assume a physically motivated thawing quintessence-like form of the intrinsic dark energy equation of state $w_{\rm de}$. Then we use a $w_0, w_a$ model for the \emph{effective} equation of state of dark energy. We find that the data favours a phantom crossing for the effective dark energy, but only at low significance. The intrinsic equation of state of dark energy is non-phantom, without imposing any non-phantom priors. A nonzero interaction is favoured at more than $3\sigma$ at $z\sim0.3$. The energy flows from dark matter to dark energy at early times and reverses at later times. | https://arxiv.org/abs/2507.18274 | Academic Papers | svg |
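Both this entry and the J-PAS/PFS entry above use the CPL parametrization $w(z) = w_0 + w_a\,z/(1+z)$. The redshift of the $w=-1$ (phantom) crossing follows from simple algebra; the sketch below uses the DESI DR2 best-fit values quoted in the J-PAS/PFS abstract, for which the crossing lands near $z\approx0.42$ (other dataset combinations, such as the one in the "Dark Degeneracy" entry further down, place it near $z\approx0.8$):

```python
def w_cpl(z, w0, wa):
    """CPL dark-energy equation of state: w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

def phantom_crossing_z(w0, wa):
    """Redshift where w(z) = -1: solve w0 + wa*z/(1+z) = -1 for z."""
    x = -(1.0 + w0) / wa          # x = z / (1 + z)
    return x / (1.0 - x) if 0 < x < 1 else None

# DESI DR2 best-fit values quoted in the J-PAS/PFS abstract above.
print(phantom_crossing_z(w0=-0.758, wa=-0.82))  # ~0.42 for these values
```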
| 2d5a9a295fc801d56ca65d475275dd1966c0c7a10da536f4308f35fd76d6ea72 | 2026-02-02T00:00:00-05:00 | PHECT: A lightweight computation tool for pulsar halo emission | arXiv:2508.13667v2 Announce Type: replace Abstract: $\gamma$-ray pulsar halos, most likely formed by inverse Compton scattering of electrons and positrons propagating in the pulsar-surrounding interstellar medium with background photons, serve as an ideal probe for Galactic cosmic-ray propagation on small scales (typically tens of parsecs). While the associated electron and positron propagation is often modeled using homogeneous and isotropic diffusion, termed here as standard diffusion, the actual transport process is expected to be more complex. This work introduces the Pulsar Halo Emission Computation Tool (PHECT), a lightweight software designed for modeling pulsar halo emission. PHECT incorporates multiple transport models extending beyond standard diffusion, accounting for different possible origins of pulsar halos. Users can conduct necessary computations simply by configuring a YAML file without manual code edits. Furthermore, the tool adopts finite-volume discretizations that remain stable on non-uniform grids and in the presence of discontinuous diffusion coefficients. PHECT is ready for the increasingly precise observational data and the rapidly growing sample of pulsar halos. | https://arxiv.org/abs/2508.13667 | Academic Papers | svg |
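PHECT, per the abstract above, generalizes "standard diffusion" (homogeneous and isotropic). That baseline has a closed-form Green's function for an impulsive point source; a minimal sketch, ignoring energy losses and using an assumed diffusion coefficient (this is not PHECT code or its YAML interface):

```python
import numpy as np

def diffusion_green_function(r_pc, t_kyr, D=1e27):
    """Electron density (arbitrary normalization) around an impulsive point
    source under homogeneous, isotropic diffusion -- the 'standard
    diffusion' baseline that PHECT generalizes.  Energy losses are
    ignored.  D in cm^2/s, r in parsecs, t in kyr."""
    pc, kyr = 3.086e18, 3.156e10          # cm per pc, s per kyr
    r, t = r_pc * pc, t_kyr * kyr
    return np.exp(-r**2 / (4 * D * t)) / (4 * np.pi * D * t) ** 1.5

# The profile is steeper at early times, flattening as particles spread.
for t in (10, 50):
    print([f"{diffusion_green_function(r, t):.2e}" for r in (5, 20, 50)])
```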
| df96b22df91f69f88e156964d3f2978af9e8156726a856b8f10e7bbc5816b9fd | 2026-02-02T00:00:00-05:00 | Lense-Thirring precession of neutron-star accretion flows: Relativistic versus classical precession | arXiv:2508.13777v2 Announce Type: replace Abstract: The vertical (Lense-Thirring) precession of the innermost accretion flows has been discussed as a sensitive indicator of the rotational properties of neutron stars (NSs) and their equation of state because it vanishes for a non-rotating star. In this work, we apply the Hartle-Thorne spacetimes to study the frequencies of the precession for both geodesic and non-geodesic (fluid) flows. We build on previous findings on the effect of the NS quadrupole moment, which revealed the importance of the interplay between the relativistic and classical precession. Because of this interplay, the widely used Lense-Thirring metric, linear in the NS angular momentum, is insufficient to calculate the behaviour of the precession frequency across an astrophysically relevant range of NS angular momentum values. We find that even for moderately oblate NSs, the dependencies of the precession frequency on the NS angular momentum at radii within the innermost accretion region have maxima that occur at relatively low values of the NS angular momentum. We conclude that very different groups of accreting NSs -- slow and fast rotators -- can display the same precession frequencies. This may explain the lack of evidence for a correlation between the frequencies of the observed low-frequency quasiperiodic oscillations and the NS spin. In our work, we provide a full, general description of precession behaviour, and also examples that assume specific NS and quark star (MIT bag) equations of state. Our calculations are reproducible using the associated Wolfram Mathematica notebook. | https://arxiv.org/abs/2508.13777 | Academic Papers | svg |
| 84a4a7b46f286b9ede3e624dc4ed328a3a4b0c47cad56d526b6149fca331a37a | 2026-02-02T00:00:00-05:00 | Dark Degeneracy in DESI DR2: Interacting or Evolving Dark Energy? | arXiv:2508.17955v2 Announce Type: replace Abstract: The standard $\Lambda$CDM model, despite its success, is challenged by persistent observational tensions in the Hubble constant ($H_0$) and the matter clustering amplitude ($S_8$), motivating the exploration of alternative cosmological scenarios. We investigate a dark energy model with a phenomenological interaction in the dark sector, constructed to be exactly degenerate at the background level with the Chevallier-Polarski-Linder (CPL) parameterization. This setup allows us to test whether models with identical expansion histories but distinct physical mechanisms can be distinguished by cosmological data. We perform a Bayesian analysis using a combination of recent datasets: DESI DR2 BAO measurements, DESY5 supernovae, and CMB data from Planck and ACT. We find that both the interacting model and the CPL model provide significantly better fits to the data than $\Lambda$CDM. Although indistinguishable in background observables, the interacting model predicts a distinct matter-sector evolution driven by a late-time sign change in the dark sector interaction at $z \approx 0.8$, corresponding to the $w=-1$ crossing in the CPL description. In this sense, the interacting picture may be considered more physical, since it avoids the problematic crossing by construction. The resulting decay of dark energy into dark matter lowers $S_8$, potentially alleviating the weak-lensing $S_8$ tension. At the same time, it predicts a sharp suppression of the growth rate $f\sigma_8(z)$ at $z \lesssim 0.8$, which is in tension with current measurements of structure formation. This indicates that the model may not simultaneously reconcile the expansion history and the observed growth of cosmic structure, highlighting the need for a more comprehensive analysis to fully assess its viability. | https://arxiv.org/abs/2508.17955 | Academic Papers | svg |
| 09c3f314840099ad2fd9c2da9c66bfe722751d70d94151a6122a25c61bb13082 | 2026-02-02T00:00:00-05:00 | An improved model for the effect of correlated Si-III absorption on the one-dimensional Lyman-$\alpha$ forest power spectrum | arXiv:2509.08613v3 Announce Type: replace Abstract: We present an analysis of Si III absorption and its effect on the 1D Ly$\alpha$ forest power spectrum using the Sherwood-Relics hydrodynamical simulation suite. In addition to oscillations from the Ly$\alpha$--Si III cross correlation that are damped toward smaller scales, we find an enhancement in small-scale power that has been ignored in previous studies. We therefore develop a new analytical fitting function that captures two critical effects that have previously been neglected: distinct Ly$\alpha$ and Si III line profiles, and a variable ratio for coeval Ly$\alpha$ and Si III optical depths. In contrast to earlier work, we also predict amplitudes for the Si III power spectrum and Ly$\alpha$--Si III cross power spectrum that decrease toward lower redshift due to the hardening metagalactic UV background spectrum at $z\lesssim 3.5$. The fitting function is validated by comparison against multiple simulated datasets at redshifts $2.2\leq z \leq 5.0$ and wavenumbers $k < 0.2\rm\,s\,km^{-1}$. Our model has little effect on existing warm dark matter constraints from the Ly$\alpha$ forest when adopting a physically motivated prior on the silicon abundance. It will, however, be an essential consideration for future, high precision Ly$\alpha$ forest power spectrum measurements. | https://arxiv.org/abs/2509.08613 | Academic Papers | svg |
| ffc45ff1fbd17938d34e80b49b23be3447c6d3e1edfefa636229052e16bd09be | 2026-02-02T00:00:00-05:00 | Resistive Scaling in the Magnetic Helicity-Driven Inverse Cascade | arXiv:2509.21141v3 Announce Type: replace Abstract: The inverse cascade in MHD turbulence plays a crucial role in various astrophysical processes such as galaxy cluster formation, solar and stellar dynamo mechanisms, and the evolution of primordial magnetic fields in the early universe. A standard numerical approach involves injecting magnetic helicity at intermediate length scales to generate a secondary, time-dependent spectral peak that gradually propagates toward larger scales. Previous simulations have already suggested a resistive dependence of inverse transfer rates and demonstrated the significant influence of magnetic helicity flux density $\epsilon_\mathrm{H}$ on this process. On dimensional grounds, we have $E_\mathrm{M}(k,t)=C_\mathrm{H} \epsilon_\mathrm{H}^{2/3} k^{-1}$ where $C_\mathrm{H}$ represents a potentially universal dimensionless coefficient analogous to the Kolmogorov constant. We present a summary of the 25 distinct simulations conducted with the \textsc{Pencil Code}, systematically varying the forcing wavenumber $k_\mathrm{f}$, magnetic Prandtl number $Pm$, grid resolution $N^3$, and Lundquist number $Lu$. We obtained $C_\mathrm{H}$ and corresponding error bars by calculating the compensated spectrum and investigated its dependence on $Lu$ and $k_\mathrm{f}$. For the $C_\mathrm{H}$--$Lu$ relationship, we observe strong correlations with power-law exponents of 1 and 2/3. In contrast, we find no significant correlation between $C_\mathrm{H}$ and $k_\mathrm{f}$. | https://arxiv.org/abs/2509.21141 | Academic Papers | svg |
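The resistive-scaling abstract gives $E_\mathrm{M}(k,t)=C_\mathrm{H}\,\epsilon_\mathrm{H}^{2/3}\,k^{-1}$ and extracts $C_\mathrm{H}$ from the compensated spectrum. A sketch of that extraction on a synthetic spectrum (illustrative numbers only, not the paper's simulation data):

```python
import numpy as np

def fit_C_H(k, E_M, eps_H, k_lo, k_hi):
    """Estimate the dimensionless coefficient C_H from
    E_M(k) = C_H * eps_H^(2/3) * k^(-1): compensate the spectrum by
    k / eps_H^(2/3) and average over a plateau range [k_lo, k_hi]."""
    compensated = E_M * k / eps_H ** (2.0 / 3.0)
    mask = (k >= k_lo) & (k <= k_hi)
    return compensated[mask].mean(), compensated[mask].std()

# Synthetic spectrum with C_H = 0.25 (illustrative numbers only).
k = np.logspace(0, 2, 64)
eps_H = 1e-3
E_M = 0.25 * eps_H ** (2 / 3) / k
print(fit_C_H(k, E_M, eps_H, k_lo=2.0, k_hi=50.0))  # -> (0.25, ~0)
```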
| 12c2b6a0575d7cac73662ee33ff36a71fbad52bc9af022490680392d38f86c5e | 2026-02-02T00:00:00-05:00 | BICEP/Keck XX: Component-separated maps of polarized CMB and thermal dust emission using Planck and BICEP/Keck Observations through the 2018 Observing Season | arXiv:2509.21648v2 Announce Type: replace Abstract: We present component-separated polarization maps of the cosmic microwave background (CMB) and Galactic thermal dust emission, derived using data from the BICEP/Keck experiments through the 2018 observing season and Planck. By employing a maximum-likelihood method that utilizes observing matrices, we produce unbiased maps of the CMB and dust signals. We outline the computational challenges and demonstrate an efficient implementation of the component map estimator. We show methods to compute and characterize power spectra of these maps, opening up an alternative way to infer the tensor-to-scalar ratio from our data. We compare the results of this map-based separation method with the baseline BICEP/Keck analysis. Our analysis demonstrates consistency between the two methods, finding an 84% correlation between the pipelines. | https://arxiv.org/abs/2509.21648 | Academic Papers | svg |
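The BICEP/Keck entry uses a maximum-likelihood component separation. Stripped of the observing-matrix machinery the paper describes, the per-pixel version is generalized least squares, $\hat{s} = (A^{T}N^{-1}A)^{-1}A^{T}N^{-1}d$; the mixing-matrix entries below are illustrative scalings, not the experiment's bandpass-integrated values:

```python
import numpy as np

# Per-pixel maximum-likelihood (GLS) component separation:
# d = A s + n  ->  s_hat = (A^T N^-1 A)^-1 A^T N^-1 d.
# A mixes [CMB, dust] into frequency maps; entries are illustrative.
A = np.array([[1.0, 0.3],    # 95 GHz-like channel
              [1.0, 1.0],    # 150 GHz-like channel
              [1.0, 3.5]])   # 220 GHz-like channel
N = np.diag([1.0, 0.8, 2.0])           # noise covariance per channel

s_true = np.array([0.5, 0.2])          # [CMB, dust] amplitudes
rng = np.random.default_rng(0)
d = A @ s_true + rng.multivariate_normal(np.zeros(3), N)

Ninv = np.linalg.inv(N)
s_hat = np.linalg.solve(A.T @ Ninv @ A, A.T @ Ninv @ d)
print(s_hat)   # unbiased estimate of [CMB, dust]
```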
| 55e6621356362d6c596433bd37f138ba4fd86fef41b7f678a215e7de5d1b3698 | 2026-02-02T00:00:00-05:00 | Exotic PeVatrons as sources of ultra-high-energy gamma rays | arXiv:2510.00254v2 Announce Type: replace Abstract: We explore novel classes of exotic astrophysical sources capable of producing ultra-high-energy gamma rays extending beyond the PeV scale, motivated by quantum gravity scenarios and dark matter phenomenology. These sources include ultra-spinning black hole vortex-string systems and exotic compact objects such as boson stars, axion stars, and Q-balls. Such exotica generate powerful magnetic fields through interactions with millicharged dark matter, enabling particle acceleration mechanisms that surpass the energy limits of conventional astrophysical sources like pulsar wind nebulae and supernova remnants. We demonstrate that such exotic PeVatrons could be distributed throughout our Galaxy and may be detectable by current (LHAASO, HAWC) and next-generation (CTA) gamma-ray observatories. | https://arxiv.org/abs/2510.00254 | Academic Papers | svg |
| d56108bcb084131af91116faba3a9e08f63a0903213badeec27873dca56e82e2 | 2026-02-02T00:00:00-05:00 | EIGER VIII: First stars signatures in the connection between OI absorption and Galaxies in the Epoch of Reionization | arXiv:2510.05220v2 Announce Type: replace Abstract: We investigate the association between galaxies and neutral OI absorption systems at z~6, which trace metal-enriched gas during the epoch of reionization. We identify 40 galaxies across six quasar fields, residing in 15 overdensities within 300 kpc of the background sightlines. Five OI absorption systems are associated with five of these overdensities, yielding a covering fraction of $0.27^{+0.13}_{-0.10}$ within 300 kpc. The absorption occurs beyond typical virial radii, indicating that the gas traces extended overdensity environments rather than individual galaxy halos, unlike the z~0 CGM which is largely bound to halos. These galaxy-associated absorbers account for $\sim35\%$ of all OI systems seen in blind quasar surveys, implying the remainder arise in lower-mass galaxies below our detection threshold or in dense neutral IGM pockets. The CGM around these galaxies contains $\gtrsim 2\times10^6~M_{\odot}$ of oxygen, comparable to the ISM oxygen mass of the galaxies themselves, suggesting that the surrounding environment holds as much metal mass as the galaxies. All five galaxy-associated systems show significantly higher $\log(N_{\rm CII}/N_{\rm OI})$ ratios than absorbers lacking galaxy associations. Furthermore, relative abundance ratios ([Si/O], [C/O]) reveal that four of the five exhibit enrichment patterns consistent with Population III nucleosynthesis at the outskirts of galaxy overdensities. These rare systems offer a unique window into the role of first-generation stars in shaping the early metal enrichment of galaxies and their environments. | https://arxiv.org/abs/2510.05220 | Academic Papers | svg |
| f74a3b4e8c2ae6bc5a18e2be66e538b766f08fe0ce6ce9f8f36d10d819ebf1ae | 2026-02-02T00:00:00-05:00 | Probing Primordial black holes with the distortion of Stochastic Gravitational Wave Background | arXiv:2510.13477v2 Announce Type: replace Abstract: The stochastic gravitational-wave background (SGWB), arising from the incoherent superposition of numerous compact binary coalescences, serves as a powerful probe of both astrophysical populations and fundamental physics. In this work, we investigate the influence of gravitational lensing on the SGWB, focusing on primordial black holes (PBHs) as potential lenses. Assuming PBHs as dark matter candidates with a broad cosmic distribution, we show that their lensing optical depth can be significantly enhanced, producing pronounced effects with relative deviations at the $10^{-1}$ level. By systematically varying the PBH mass ($M_{\rm PBH}$) and abundance ($f_{\rm PBH}$), we demonstrate that the mass predominantly determines the frequency-dependent diffraction features of the spectrum, while the abundance primarily amplifies the overall lensing-induced deviation. Although the SGWB from binary black holes has not yet been observed, our analytical results provide theoretical insight into the possible imprint of lensing on its spectrum and suggest that future detections could offer a novel avenue to constrain dark matter scenarios. | https://arxiv.org/abs/2510.13477 | Academic Papers | svg |
| d17c181f4f0e815b2e4fdf17345fe843ab95b7129858a7279440537f945b2c19 | 2026-02-02T00:00:00-05:00 | Gravitational-wave and electromagnetic detections in the context of the CosmoDC2 LSST synthetic catalog | arXiv:2510.18727v2 Announce Type: replace Abstract: We release CosmoDC2_BCO, a synthetic catalog of gravitational-wave events and electromagnetic counterparts associated with galaxies from CosmoDC2. The catalog provides intrinsic and extrinsic source parameters, signal-to-noise ratios, parameter uncertainties, sky localization areas, and kilonova apparent magnitudes in LSST filters. Our results show that third-generation detector networks substantially increase detection rates and improve parameter estimation. Second-generation detectors, when combined with third-generation ones, significantly enhance sky localization and distance precision, particularly for BNS mergers. Assuming a simplified Target of Opportunity strategy, we estimate that an LSST-like survey, partnered with the CE+ET+LVK network at 70% duty cycle, could detect about 5000 kilonovae with GW counterparts over a 10-year period on a 16000 deg^2 footprint, predominantly from low-mass BNS mergers that produce long-lived supermassive neutron star remnants. While this is a substantial number, it represents only a small fraction of the total neutron star mergers expected to be observed by third-generation networks. These projections rely on several simplifying assumptions, including the adopted merger rate, the kilonova luminosity distribution, and the configuration and scheduling of future surveys, which introduce notable uncertainties. Therefore, the estimated detection numbers should be interpreted with appropriate caution. | https://arxiv.org/abs/2510.18727 | Academic Papers | svg |
| 43b5d6b4950765608e93ac4f5af0c80a6b61f98d951969ee7d3dcd1f5091f2c5 | 2026-02-02T00:00:00-05:00 | Photometric Redshifts in JWST Deep Fields: A Pixel-Based Alternative with DeepDISC | arXiv:2510.27032v2 Announce Type: replace Abstract: Photo-z algorithms that utilize SED template fitting have matured, and are widely adopted for use on high-redshift near-infrared data that provides a unique window into the early universe. Alternative photo-z methods have been developed, largely within the context of low-redshift optical surveys. Machine learning based approaches have gained footing in this regime, including those that utilize raw pixel information instead of aperture photometry. However, the efficacy of image-based algorithms on high-redshift, near-infrared data remains underexplored. Here, we test the performance of Detection, Instance Segmentation and Classification with Deep Learning (DeepDISC) on photometric redshift estimation with NIRCam images from the JWST Advanced Deep Extragalactic Survey (JADES) program. DeepDISC is designed to produce probabilistic photometric redshift estimates directly from images, after detecting and deblending sources in a scene. Using NIRCam-only images and a compiled catalog of spectroscopic redshifts, we show that DeepDISC produces reliable photo-zs and uncertainties comparable to those estimated from template fitting using HST+JWST filters; DeepDISC even outperforms template fitting (lower scatter/fewer outliers) when the input photometric filters are matched. Compared with template fitting, DeepDISC does not require measured photometry from images, and can produce a catalog of 94000 photo-zs in ~4 minutes on a single NVIDIA A40 GPU. While current spectroscopic training samples are small and incomplete in color-magnitude space, this work demonstrates the potential of DeepDISC for increasingly larger image volumes and spectroscopic samples from ongoing and future programs. We discuss the impact of the training data on applications to broader samples and produce a catalog of photo-zs for all JADES DR2 photometric sources in the GOODS-S field, with quality flags indicating caveats. | https://arxiv.org/abs/2510.27032 | Academic Papers | svg |
| 65f4c3f460f81885f2592e667bcd952058f8beaa7d28b32f257cee67d48d9804 | 2026-02-02T00:00:00-05:00 | Sequential Fragmentation of C/2025 K1 (ATLAS) After Its Near-Sun Passage | arXiv:2511.19707v2 Announce Type: replace Abstract: Comet C/2025 K1 (ATLAS) reached perihelion at 0.33 au on 2025 October 8. Daily monitoring by the LCO Outbursting Objects Key Project revealed a major activity increase between November 2 and 4, accompanied by rapid changes in coma morphology. Serendipitous HST/STIS acquisition images obtained on November 8-10 captured the comet only days after this event and resolved five fragments, providing an early high-resolution view of a nucleus in the process of disruption. Fragment motions and morphologies indicate a hierarchical fragmentation sequence, including a slow secondary split of fragment II. Back extrapolation shows that both the primary and secondary breakups preceded their associated photometric outbursts by roughly one to three days. This consistent lag, together with the appearance of thin, short-lived arclets around fragment I in the first HST epoch, suggests that freshly exposed interior material warms rapidly but requires time before dust can be released efficiently. Given the comet's close perihelion passage, rotational instability driven by enhanced outgassing torques is a plausible contributor to nucleus disintegration and dust release, and may represent the primary source of the observed brightening. These combined ground- and space-based observations provide rare, time-resolved constraints on the thermal and structural evolution of a fragmented comet near perihelion and highlight the scientific value of capturing a nucleus within days of disruption, when thermal adjustment, dust mantle re-formation, and outgassing-driven torques jointly govern the onset of activity. | https://arxiv.org/abs/2511.19707 | Academic Papers | svg |
| a432822e1cd13d64f6dc4f8297ffd3850177db359aaf9a9e8f129218390f82a7 | 2026-02-02T00:00:00-05:00 | Defects and Inconsistencies in Solar Flare Data Sources: Implications for Machine Learning Forecasting | arXiv:2512.13417v2 Announce Type: replace Abstract: Machine learning models for forecasting solar flares have been trained and evaluated using a variety of data sources, including Space Weather Prediction Center (SWPC) operational and science-quality data. Typically, data from these sources is minimally processed before being used to train and validate a forecasting model. However, predictive performance can be affected if defects and inconsistencies between these data sources are ignored. For a set of commonly used data sources, along with the software that queries and outputs processed data, we identify their defects and inconsistencies, quantify their extent, and show how they can affect predictions from data-driven machine-learning forecasting models. We also outline procedures for fixing these issues or at least mitigating their impacts. Finally, based on thorough comparisons of the effects of data sources on the trained forecasting model's predictive skill scores, we offer recommendations for using different data products in operational forecasting. | https://arxiv.org/abs/2512.13417 | Academic Papers | svg |
| f1a38825df85f35848a012392a13770d05ac5b9c0a7d5f074c3d3e1985bddd52 | 2026-02-02T00:00:00-05:00 | Tales of stellar and binary co-evolution, told by stellar oscillations -- Binary demographics and their impact on stellar mass, orbits, and age estimates in main-sequence and red-giant stars | arXiv:2512.13581v2 Announce Type: replace Abstract: Red giants are increasingly used as stellar population tracers due to their well-understood evolution and the availability of asteroseismic observables. However, stellar binarity can alter observable properties and introduce strong biases. We aim to provide a holistic picture of the binary population and its evolution in the red giant phase by characterizing a sample of binaries hosting oscillating red giants from a combination of extensive asteroseismic, spectroscopic, and astrometric surveys. We investigate the binary properties of evolved stars in the APOKASC3 and APO-K2 catalogs, leveraging asteroseismic constraints and Gaia DR3 non-single-star solutions. We explore the mass distribution of red-giant binary systems and analyze the evolution of their binary fraction. For stars with M$\leq$1.8M$_\odot$, we find binary fractions $\sim$31% and $\sim$41% for oscillating and non-oscillating solar-like stars on the main-sequence (MS). Using the frequency of maximum power excess ($\nu_\mathrm{max}$) as a luminosity proxy, we detect a binary attrition of $\sim$69% and $\sim$81% on the low- and high-luminosity red-giant branch (RGB) and an additional $\sim$38% to the red clump (RC), with respect to the MS. Binaries hosting RC and secondary clump (2RC) stars are largely depleted at $P_\mathrm{orb}\lesssim$500 and $\lesssim$200 days, respectively. Mass-dependent differences in binary fractions and orbital properties point to more substantial binary attrition for stars with M $\leq$1.8 M$_\odot$. The distinct mass distributions and the depletion of short-period binaries during the red-giant phase underscore the impact of stellar expansion and binary interactions on stellar evolution. RC systems with $P_\mathrm{orb}\lesssim$800 to 1,000 days are likely shaped by past interactions, such as mass transfer or loss, which can lead to significantly biased age estimates if not accounted for. | https://arxiv.org/abs/2512.13581 | Academic Papers | svg |
| a96e6f52f7841b750fe08d8b786ed4d05817bcde7daeb62fd0b3daec22043636 | 2026-02-02T00:00:00-05:00 | How is Cold Gas Loaded into Galactic Nuclear Outflows? | arXiv:2512.14081v2 Announce Type: replace Abstract: The origin of the multiphase gas within the Fermi/eROSITA bubbles is crucial for understanding Galactic Center (GC) feedback. We use HI4PI data to investigate the kinematics and physical properties of high-velocity clouds (HVCs) toward the GC. Our results reveal that the HVCs exhibit a distinct asymmetric distribution, closely associated with the bar-driven tilted dust lanes and the distorted overshooting streams. We propose that powerful nuclear outflows interact with these gas-rich, off-plane structures, stripping and entraining cold gas from the outer Galactic regions (R_GC~0.5--1.7 kpc) rather than solely from the region of the central molecular zone (CMZ; R_GC<0.3 kpc). In this scenario, as the Galactic bar drives gas inflows along the dust lanes, nuclear outflows simultaneously break through the CMZ, sweeping up and ablating cold gas from the boundary layer of these pre-existing structures. This process naturally accounts for the observed high turbulence, complex spectral signatures, and anomalous spatial-kinematic gas patterns, as well as multiwavelength asymmetries of the bubbles. The HVCs are accelerated to about 230--340 km/s over a dynamical time of ~3--6 Myr. When the multiphase, inhomogeneous composition of the gas is included, the estimated gas outflow rate reaches ~1 Msun/yr. This value is comparable to the bar-driven inflow rate, indicating a tightly coupled gas cycle in the inner Galaxy. Our research highlights the critical role of bar-driven gas dynamics and nuclear feedback in the secular evolution of the Milky Way, offering a valuable paradigm for investigating gas cycles in external galaxies. | https://arxiv.org/abs/2512.14081 | Academic Papers | svg |
| 9c3f72879b212accadb868fbe172e6db3884b56cfd85f8c72fa103fa29cc9eca | 2026-02-02T00:00:00-05:00 | ExoMiner++ 2.0: Vetting TESS Full-Frame Image Transit Signals | arXiv:2601.14877v2 Announce Type: replace Abstract: The Transiting Exoplanet Survey Satellite (TESS) Full-Frame Images (FFIs) provide photometric time series for millions of stars, enabling transit searches beyond the limited set of pre-selected 2-minute targets. However, FFIs present additional challenges for transit identification and vetting. In this work, we apply ExoMiner++ 2.0, an adaptation of the ExoMiner++ framework originally developed for TESS 2-minute data, to FFI light curves. The model is used to perform large-scale planet versus non-planet classification of Threshold Crossing Events across the sectors analyzed in this study. We construct a uniform vetting catalog of all evaluated signals and assess model performance under different observing conditions. We find that ExoMiner++ 2.0 generalizes effectively to the FFI domain, providing robust discrimination between planetary signals, astrophysical false positives, and instrumental artifacts despite the limitations inherent to longer cadence data. This work extends the applicability of ExoMiner++ to the full TESS dataset and supports future population studies and follow-up prioritization. | https://arxiv.org/abs/2601.14877 | Academic Papers | svg |
| eb383ad295f3d258e81848a75671620fd6b016b193b7000c11477550b70c7ddf | 2026-02-02T00:00:00-05:00 | Lensing without mixing: Probing Baryonic Acoustic Oscillations and other scale-dependent features in cosmic shear surveys | arXiv:2601.19696v2 Announce Type: replace Abstract: Weak gravitational lensing tends to wash out scale- and time-dependent features of the clustering of matter, such as the Baryonic Acoustic Oscillations (BAO), which appear in the form of wiggles in the matter power spectrum but disappear in the analogous lensing $C_\ell$. This is a direct consequence of lensing being a projected effect. In this paper, we demonstrate how the noise complexity -- often deemed "erasing the signal" -- induced by a particular de-projection technique, the Bernardeau-Nishimichi-Taruya (BNT) transform arXiv:1312.0430, can be used to extract the BAO signal and non-Gaussian aperture-mass-like properties at chosen physical scales. We take into account parts of the data vectors that should effectively be without cosmological signature and also introduce an additional re-weighting designed to specifically highlight clustering features -- both at the probe (summary statistics) and map (amplitude of the field) level. We thus demonstrate why weak gravitational lensing by the large-scale structure of the Universe, though only in a tomographic setting, does not erase scale- and time-dependent features of the dynamics of matter, while providing a tool to effectively extract them from actual galaxy-shape measurements. | https://arxiv.org/abs/2601.19696 | Academic Papers | svg |
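The BNT transform referenced above (arXiv:1312.0430) re-weights tomographic bins so lensing kernels become localized in distance. For idealized delta-function source planes at comoving distances $\chi_i$, row $i$ combines bins $(i-2, i-1, i)$ with weights satisfying $\sum_j p_j = 0$ and $\sum_j p_j/\chi_j = 0$; a sketch under that idealization (real $n(z)$ bins replace these sums with integrals):

```python
import numpy as np

def bnt_matrix(chi):
    """Bernardeau-Nishimichi-Taruya weights for idealized delta-function
    source planes at comoving distances chi[i].  Row i combines bins
    (i-2, i-1, i) with weights p solving sum(p) = 0 and sum(p/chi) = 0
    (taking p_i = 1), which nulls the lensing kernel for lenses in
    front of bin i-2."""
    n = len(chi)
    M = np.eye(n)
    for i in range(2, n):
        a = np.array([[1.0, 1.0], [1.0 / chi[i-2], 1.0 / chi[i-1]]])
        b = -np.array([1.0, 1.0 / chi[i]])
        M[i, i-2], M[i, i-1] = np.linalg.solve(a, b)
    return M

chi_bins = np.array([800.0, 1500.0, 2100.0, 2600.0, 3000.0])  # Mpc, toy
M = bnt_matrix(chi_bins)
print(M @ np.ones(len(chi_bins)))   # rows >= 2 sum to ~0 by construction
```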
| 15bbd01eab93360f603645a62edd52dc6cbf432f4b14750afbcd5d117809fc88 | 2026-02-02T00:00:00-05:00 | Fast variability and circular polarization of the 6.7 GHz methanol maser in G33.641$-$0.228 | arXiv:2601.20371v2 Announce Type: replace Abstract: The 6.7 GHz methanol maser in a high-mass star-forming region G33.641$-$0.228 is known to exhibit burst-like flux variability due to an unknown mechanism. To investigate the burst mechanism, we conducted high-cadence flux and circular polarization monitoring observations, simultaneously using left- and right-hand circular polarizations. We found that the flux density increased and decreased on a short timescale of 0.3 d during a burst. We also found strong circular polarization, reaching up to 20\% in the component exhibiting the bursts. Circular polarization of 0--20\% was continuously observed from 2009 to 2016, even in the quiescent period. The polarization also varied on timescales of less than one day. When a burst occurred and the flux density increased, the circular polarization decreased to zero. To explain the observational properties of the flux variability and circular polarization, we propose a model in which an explosive event similar to a solar radio burst occurs on the line of sight behind the maser cloud, producing circularly polarized continuum emission due to gyro-synchrotron or gyro-resonance radiation, which is then amplified by the maser. | https://arxiv.org/abs/2601.20371 | Academic Papers | svg |
| 16f9d048630db0f36e79d65cba9e93f4dc1d01da3926fe46f48970880160be11 | 2026-02-02T00:00:00-05:00 | How well is the local Large Scale Structure of the Universe known? CosmicFlows vs. Biteau's Galaxy Catalog with Cloning | arXiv:2601.20808v2 Announce Type: replace Abstract: Knowledge of the actual density distribution of matter in the local universe is needed for a variety of purposes -- for instance, as a baseline model for ultrahigh energy cosmic ray sources in the continuum limit and for predicting the diffuse Dark Matter annihilation signal. Determining the local mass density and velocity distribution is the aim of the CosmicFlows project. An alternate approach is based on catalogs of galaxies, supplemented with some scheme for filling in for unseen galaxies. Here, we compare the density field proposed by Biteau (2021) with the quasi-linear density field of CosmicFlows2 (Hoffman et al. 2018) and the mean posterior field of CosmicFlows4 (Valade 2026). We find factor-two level differences in some regions and even greater differences toward the Galactic center zone of avoidance (ZoA) ($\lvert l\rvert < 30\deg$, $\lvert b\rvert < 20\deg$) as filled by Biteau using "cloning". Within 11 Mpc the density field is well determined by the Local Volume catalog (Karachentsev et al. 2018), which Biteau directly incorporates; at larger distances, Biteau (2021) should not be used in the ZoA, where "galaxies" are entirely fictitious, but otherwise is to be preferred over CosmicFlows for the direction and integrated mass of structures; the radial distribution of mass in Biteau (2021) is less robust due to line-of-sight peculiar velocities. The angular positions of structures in CosmicFlows are sometimes not congruent with evidence in the galaxy catalog. | https://arxiv.org/abs/2601.20808 | Academic Papers | svg |
| a0d31053fa2123f178bb28b220423a3b8219eb0b82ffb5a5f320030bf0b1e980 | 2026-02-02T00:00:00-05:00 | Radio-Near Infrared Imaging of Dual Active Galactic Nuclei Candidates | arXiv:2601.20984v2 Announce Type: replace Abstract: We report the results of a pilot study that searched for dual active galactic nuclei (AGN) in local ($z10^{8}$ K) radio sources that indicate the presence of either a parsec-scale-separation dual AGN ($d_{\text{sep}} \sim 90$ pc and $\sim 56$ pc, respectively) or a radio jet. Matched-resolution multi-band radio observations are necessary to further characterize the AGN activity in these systems. | https://arxiv.org/abs/2601.20984 | Academic Papers | svg |
| ee59bbe5ff47daf1cbabb7b667faa74212ebd1f6c4fc60c273018d36c69e5de0 | 2026-02-02T00:00:00-05:00 | A redshift survey of the nearby galaxy cluster Abell 2199: No upturn of the faint-end slope of galaxy luminosity function | arXiv:2601.21329v2 Announce Type: replace Abstract: We determine the galaxy luminosity function of cluster galaxies in the nearby galaxy cluster Abell 2199 (A2199), focusing on the faint-end slope down to $M_r \sim -14.5$. To achieve this, we augment the existing dataset by adding redshift data from our deep MMT/Hectospec survey and from the Dark Energy Spectroscopic Instrument (DESI), significantly improving the spectroscopic completeness down to $r_{\mathrm{petro},0} = 20.8$ within the central $30^\prime$ region. The resulting luminosity function is well described by a Schechter function with a characteristic magnitude $M^* = -21.30 \pm 0.27$ and a faint-end slope $\alpha = -1.23 \pm 0.05$. This faint-end slope is consistent with those measured in the nearby Coma and Virgo clusters and in a cluster from the TNG50 cosmological simulation, and is slightly shallower than that of field galaxies. These findings indicate that the previously claimed steep faint-end upturn (with $\alpha \sim -2$) in nearby galaxy clusters is not supported. Instead, they suggest that environmental processes in dense cluster cores do not trigger the formation or survival of low-mass galaxies, thereby preventing a steep faint-end upturn in the luminosity function. | https://arxiv.org/abs/2601.21329 | Academic Papers | svg |
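The A2199 entry fits a Schechter function with $M^* = -21.30$ and $\alpha = -1.23$. In absolute magnitudes the Schechter form is $\phi(M) = 0.4\ln(10)\,\phi^*\,x^{\alpha+1}e^{-x}$ with $x = 10^{-0.4(M-M^*)}$; a sketch evaluating it at the quoted best fit (normalization left arbitrary):

```python
import numpy as np

def schechter_mag(M, M_star=-21.30, alpha=-1.23, phi_star=1.0):
    """Schechter luminosity function in absolute magnitudes:
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x),  x = 10^(-0.4 (M - M*)).
    M_star and alpha default to the A2199 best-fit values quoted in the
    abstract; phi_star is an arbitrary normalization."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Faint end (M >> M*) behaves as a power law with slope alpha = -1.23.
for M in (-22.0, -20.0, -17.0, -14.5):
    print(M, f"{schechter_mag(M):.3e}")
```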
| 42a4a639d41a5dc6c175f82a4b762131dd49fb6fa62130971128e8db2135630a | 2026-02-02T00:00:00-05:00 | Fitting NANOGrav 15-year data and ACT data with modified inflation in entropic cosmology | arXiv:2510.20484v2 Announce Type: replace-cross Abstract: Recent evidence for a stochastic gravitational wave background (SGWB) from Pulsar Timing Array (PTA) observations hints at an alternative inflationary scenario, compared to the usual one, for describing the early stage of the universe in order to be compatible with the PTA data. Moreover, the Atacama Cosmology Telescope (combined with Planck 2018 and BAO data) now refines the constraints on inflationary observables compared to the Planck 2018 measurements alone. In the present work, we simultaneously address these two issues by incorporating a certain modification during inflation over the usual inflationary scenario. This modification amplifies the primordial tensor perturbation over the modes that are sensitive to the NANOGrav frequency region. For this purpose, we take the thermodynamic route of cosmology, where the entropy of the apparent horizon is given by a generalized form of entropy that generalizes the other known forms of horizon entropy for suitable representations. The constraints on the model parameters coming from the ACT data also fit the NANOGrav 15-year data (based on numerical analysis), revealing the model's compatibility with both the ACT and the PTA data. | https://arxiv.org/abs/2510.20484 | Academic Papers | svg |
| 69c5ad8bd1d012f0b804e7412099e0f39bb5d2bac2c298f6caa827936883c220 | 2026-02-02T00:00:00-05:00 | Multi-probe analysis of strong-field effects in $f(Q)$ gravity | arXiv:2512.03529v2 Announce Type: replace-cross Abstract: Covariant $f(Q)$ gravity is a viable extension of General Relativity; however, its strong-field predictions remain largely untested. Using the static, spherically symmetric black-hole solutions of the theory, we confront it with the most stringent probes available: black-hole shadows, Event Horizon Telescope (EHT) measurements, S2-star precession, and strong gravitational lensing. We show that the two admissible solution branches behave very differently: Case~I produces negligible deviations from the Schwarzschild solution, whereas Case~II yields significant, potentially observable corrections to the photon sphere and shadow size. From the EHT shadow diameters of M87* and Sgr~A*, we obtain tight bounds, which are further strengthened by strong-lensing coefficients. These results provide the sharpest strong-field constraints on covariant $f(Q)$ gravity to date, and point toward future tests using next-generation horizon-scale imaging and precision Galactic-center astrometry. | https://arxiv.org/abs/2512.03529 | Academic Papers | svg |
| ebea1beddbc7080d52b8beacb951da21502ce8691ad6c0b39aa14dc32189a805 | 2026-02-02T00:00:00-05:00 | Searching for axion dark matter with magnetic resonance force microscopy | arXiv:2512.12120v2 Announce Type: replace-cross Abstract: We propose a magnetic resonance force microscopy (MRFM) search for axion dark matter around 1 GHz. The experiment leverages the axion's derivative coupling to electrons, which induces an effective A.C. magnetic field on a sample of electron spins polarized by a D.C. magnetic field and a micromagnet. A second pump field at a nearby frequency enhances the signal, with the detuning matched to the resonant frequency of a magnet-loaded mechanical oscillator. The resulting spin-dependent force is detected with high sensitivity via optical interferometry. Accounting for the relevant noise sources, we show that current technology can be used to put constraints competitive with those from laboratory experiments with just a minute of integration time. Furthermore, varying the pump field frequency and D.C. magnetic field allows one to scan the axion mass. Finally, we explore this setup's capability to put constraints on other couplings between dark matter and the Standard Model. | https://arxiv.org/abs/2512.12120 | Academic Papers | svg |
| bcd3b2112d73fa507e153ac70eb955afafa897819e65d09bb3759b84f0c3f3be | 2026-02-02T00:00:00-05:00 | A framework for LISA population inference | arXiv:2601.04168v3 Announce Type: replace-cross Abstract: The Laser Interferometer Space Antenna (LISA) is expected to have a source-rich data stream containing signals from large numbers of sources of many different types. This will include both individually resolvable signals and overlapping stochastic backgrounds, a regime intermediate between current ground-based detectors and pulsar timing arrays. The resolved sources and backgrounds will be fitted together in a high-dimensional Global Fit. To extract information about the astrophysical populations to which the sources belong, we need to decode the information in the Global Fit, which requires new methodology that has not been required for the analysis of current gravitational wave detectors. Here, we present a hierarchical Bayesian framework to infer the properties of astrophysical populations directly from the output of a LISA Global Fit, consistently accounting for information encoded in both the resolved sources and the unresolved background. Using a simplified model of the Global Fit, we illustrate how the interplay between resolved and unresolved components affects population inference and highlight the impact of data analysis choices, such as the signal-to-noise threshold for resolved sources, on the results. Our approach provides a practical foundation for population inference using LISA data. | https://arxiv.org/abs/2601.04168 | Academic Papers | svg |
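Hierarchical population inference with a detection threshold, as in the LISA entry above, typically divides each per-source likelihood by a selection factor $\alpha(\lambda) = \int p_{\rm det}(\theta)\,p(\theta|\lambda)\,d\theta$. A toy Monte-Carlo estimate of $\alpha(\lambda)$ under an assumed SNR model (far simpler than the paper's joint treatment of resolved sources and the background):

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_fraction(lmbda, snr_threshold=8.0, n_mc=100_000):
    """Monte-Carlo estimate of alpha(lambda) = P(detected | population):
    draw sources from a toy population p(theta | lambda) and apply the
    SNR threshold that defines 'resolved' sources.  The mapping from
    source parameters to SNR here is purely illustrative."""
    theta = rng.lognormal(mean=lmbda, sigma=0.5, size=n_mc)
    snr = 3.0 * theta                       # assumed toy SNR model
    return np.mean(snr > snr_threshold)

# The selection correction enters the hierarchical likelihood as
# alpha^-N, so moving the threshold changes the inferred population.
for thr in (6.0, 8.0, 12.0):
    print(thr, detection_fraction(lmbda=1.0, snr_threshold=thr))
```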
| 1886c8de20d76dda832b1fac5ad45ab3332d45419034b15870aba87d2d4eed01 | 2026-02-02T00:00:00-05:00 | Spectrum of radiation from global strings and the relic axion density | arXiv:2601.19463v2 Announce Type: replace-cross Abstract: We discuss key aspects of the nature of radiation from global strings and its impact on the relic axion density. Using a simple model we demonstrate the dependence on the spectrum of radiation emitted by strings. We then study the radiation emitted by perturbed straight strings paying particular attention to the difference between the overall phase of the field and the small perturbations about the string solution which are the axions. We find that a significant correction is required to be sure that one is analyzing the axions and not the self-field of the string. Typically this requires one to excise a sizeable region around the string - something which is not usually done in the case of numerical field theory simulations of string networks. We have measured the spectrum of radiation from these strings and find that it is compatible with an exponential, as predicted by the Nambu-like Kalb-Ramond action, and in particular is not a ``hard'' spectrum often found in string network simulations. We conclude by attempting to assess the uncertainties on the relic density and find that this leads to a range of possible axion masses when compared to the measured density from the Cosmic Microwave Background, albeit typically higher than what is predicted by the Initial Misalignment Mechanism. If the decay is via a ``soft spectrum'' from loops produced close to the backreaction scale we find that $m_{\rm a}\approx 160\,\mu{\rm eV}$ and a detection frequency $f\approx 38\,{\rm GHz}$. If axions are emitted directly by the string network, and we use emission spectra reported in field theory simulations, then $m_{\rm a}\approx 4\,\mu{\rm eV}$ and $f\approx 1\,{\rm GHz}$, however this increases to $m_a \approx 125\,\mu{\rm eV}$ and $f\approx 30\,{\rm GHz}$ using our spectra for the case of an oscillating string. In all scenarios there are significant remaining uncertainties that we delineate. | https://arxiv.org/abs/2601.19463 | Academic Papers | svg |
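The axion masses and detection frequencies quoted above are related by $\nu = m_a c^2/h$, about 0.2418 GHz per $\mu$eV; a quick check reproduces the quoted pairs:

```python
# Axion mass <-> photon frequency: nu = m_a c^2 / h.
EV_PER_JOULE = 1.602176634e-19
H_PLANCK = 6.62607015e-34  # J s

def freq_ghz(m_a_ueV):
    """Detection frequency in GHz for an axion of mass m_a (micro-eV)."""
    return m_a_ueV * 1e-6 * EV_PER_JOULE / H_PLANCK / 1e9

for m in (4.0, 125.0, 160.0):
    print(f"{m:6.1f} ueV -> {freq_ghz(m):5.1f} GHz")
# ->  4.0 ueV -> 1.0 GHz; 125.0 ueV -> 30.2 GHz; 160.0 ueV -> 38.7 GHz
```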
| 9bb89e9acc83ac505d4d66dc4e1ba6ec04fb303b981833784eb76367094098f4 | 2026-02-02T00:00:00-05:00 | Endogenous Inequality Aversion: Decision criteria for triage and other ethical tradeoffs | arXiv:2601.22250v1 Announce Type: new Abstract: Medical ``Crisis Standards of Care'' call for a utilitarian allocation of scarce resources in emergencies, while favoring the worst-off under normal conditions. Inspired by such triage rules, we introduce social welfare functions whose distributive tradeoffs depend on the prevailing level of aggregate welfare. These functions are inherently self-referential: they take the welfare level as an input, even though that level is itself determined by the function. In our formulation, inequality aversion varies with welfare and is therefore self-referential. We provide an axiomatic foundation for a family of social welfare functions that move from Rawlsian to utilitarian criteria as overall welfare falls, thereby formalizing triage guidelines. We also derive the converse case, in which the social objective shifts from Rawlsianism toward utilitarianism as welfare increases. | https://arxiv.org/abs/2601.22250 | Academic Papers | svg |
0437ae58e997e47967ffab0757e14992850847b76560e03ce99d91d214e5be0c
|
2026-02-02T00:00:00-05:00
|
Model Selection in Panel Data Models: A Generalization of the Vuong Test
|
arXiv:2601.22354v1 Announce Type: new Abstract: This paper generalizes the classical Vuong (1989) test to panel data models by employing modified profile likelihoods and the Kullback-Leibler information criterion. Unlike the standard likelihood function, the profile likelihood lacks certain regularity properties, making modification necessary. We adopt a generalized panel data framework that incorporates group fixed effects for time and individual pairs, rather than traditional individual fixed effects. Applications of our approach include linear models with non-nested specifications of individual-time effects.
|
https://arxiv.org/abs/2601.22354
|
Academic Papers
|
svg
|
623faa1f0f3fa8309aaccccaae53a4968d9cb2953b697796c05daf00317f67f8
|
2026-02-02T00:00:00-05:00
|
Screening with Advertisements
|
arXiv:2601.22404v1 Announce Type: new Abstract: We investigate a seller's revenue-maximizing mechanism in a setting where a desirable good is sold together with an undesirable bad (e.g., advertisements) that generates third-party revenue. The buyer's private information is two-dimensional: valuation for the good and willingness to pay to avoid the bad. Following the duality framework of Daskalakis, Deckelbaum, and Tzamos (2017), whose results extend to our setting, we formulate the seller's problem using a transformed measure $\mu$ that depends on the third-party payment $k$. We provide a near-characterization for optimality of three pricing mechanisms commonly used in practice -- the Good-Only, Ad-Tiered, and Single-Bundle Posted Price -- and introduce a new class of tractable, interpretable two-dimensional orthant conditions on $\mu$ for sufficiency. Economically, $k$ yields a clean comparative static: low $k$ excludes the bad, intermediate $k$ separates ad-tolerant and ad-averse buyers, and high $k$ bundles ads for all types.
|
https://arxiv.org/abs/2601.22404
|
Academic Papers
|
svg
|
4d03c52de48632c09294a4df90509846368dd9a65a0fe29243dfec49c40c1170
|
2026-02-02T00:00:00-05:00
|
Using SVM to Estimate and Predict Binary Choice Models
|
arXiv:2601.22659v1 Announce Type: new Abstract: The support vector machine (SVM) has an asymptotic behavior that parallels that of the quasi-maximum likelihood estimator (QMLE) for binary outcomes generated by a binary choice model (BCM), although it is not a QMLE. We show that, under the linear conditional mean condition for covariates given the systematic component used in the QMLE slope consistency literature, the slope of the separating hyperplane given by the SVM consistently estimates the BCM slope parameter, as long as the class weight is used as required when binary outcomes are severely imbalanced. The SVM slope estimator is asymptotically equivalent to that of logistic regression in this sense. The finite-sample performance of the two estimators can be quite distinct depending on the distributions of covariates and errors, but neither dominates the other. The intercept parameter of the BCM can be consistently estimated once a consistent estimator of its slope parameter is obtained.
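A small illustrative sketch of the equivalence claim, comparing the hyperplane direction of a class-weighted linear SVM (hinge loss) with the logistic-regression slope on data simulated from a logit binary choice model; the sample size, coefficients, and the imbalance-inducing intercept are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n, beta = 20000, np.array([1.0, -2.0])
X = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-(-2.0 + X @ beta)))   # logit BCM; intercept makes outcomes imbalanced
y = rng.binomial(1, p)

svm = LinearSVC(loss="hinge", dual=True, class_weight="balanced",
                C=1.0, max_iter=50000).fit(X, y)
logit = LogisticRegression().fit(X, y)

# Compare slope *directions*: a hyperplane normal is only identified up to scale.
unit = lambda v: v / np.linalg.norm(v)
print("true    :", unit(beta))
print("svm     :", unit(svm.coef_[0]))
print("logistic:", unit(logit.coef_[0]))
```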
|
https://arxiv.org/abs/2601.22659
|
Academic Papers
|
svg
|
a2fdf6f8111c92b48730f55886fc1454b4423654f5191f18f8e370cd6be538b3
|
2026-02-02T00:00:00-05:00
|
A Real-Options-Aware Multi-Criteria Framework for Ex-Ante Real Estate Redevelopment Use Selection
|
arXiv:2601.22166v1 Announce Type: cross Abstract: A growing share of the existing real estate stock exhibits persistent underperformance that can no longer be explained by cyclical market phases or inadequate maintenance alone. In many cases, technically recoverable assets located in non-marginal contexts fail to generate economic value consistent with the capital immobilized. This condition reflects a structural misalignment between intended use and effective demand rather than episodic market weakness, and calls for a decision framework capable of integrating value, risk, complexity, and irreversibility in strategic use selection. This study proposes a decision-analytic framework for the ex-ante selection of intended use in real estate redevelopment processes. The framework integrates real-options logic on irreversibility and managerial flexibility with a multi-criteria decision-analysis structure, enabling comparative evaluation of expected economic value, market and operational risk, technical and managerial complexity, and time-to-income. By treating redevelopment primarily as a problem of strategic option selection rather than design or financial optimization, the framework operationalizes option value preservation through disciplined ex-ante screening. Illustrative cases demonstrate how this integration of real options reasoning and MCDA reduces over-complexification and misalignment across different asset types and urban contexts.
|
https://arxiv.org/abs/2601.22166
|
Academic Papers
|
svg
|
0039c9eb683667b2bf71245c4a7aba138af6cf08b861001da38538e96121c00b
|
2026-02-02T00:00:00-05:00
|
The Widening Profitability Gap between Renewable and Fossil Power Firms in Europe
|
arXiv:2601.22167v1 Announce Type: cross Abstract: Mobilising private capital is a critical bottleneck of the energy transition, yet recent crisis-driven windfall profits for fossil power firms suggest that market signals may still favour carbon-intensive assets. Here we analyse a panel of 900 European power firms (2001-2023) to resolve whether these profits reflect a durable profitability advantage or a crisis-driven anomaly. Using machine-learning clustering and Bayesian model averaging, we identify a structural divergence: wind and solar portfolios exhibit rising profitability, with return on assets among wind-dominated firms increasing by over 6% between 2014 and 2023. Conversely, higher fossil portfolio shares are increasingly associated with lower profitability, with marginal effects reaching -4% by 2023, while renewable-dominated firms match or outperform their fossil-heavy counterparts across most European regions. These findings suggest that the record profits of fossil incumbents were distinct outliers, masking an ongoing decline in the profitability of carbon-intensive business models.
|
https://arxiv.org/abs/2601.22167
|
Academic Papers
|
svg
|
e15f9fdec22ddd331136ef29b4b980e2f90ab13a4ca6dc20aecadd64cf70c425
|
2026-02-02T00:00:00-05:00
|
Realized Stochastic Volatility Model with Skew-t Distributions for Improved Volatility and Quantile Forecasting
|
arXiv:2401.13179v4 Announce Type: replace Abstract: Accurate forecasting of volatility and return quantiles is essential for evaluating financial tail risks such as value-at-risk and expected shortfall. This study proposes an extension of the traditional stochastic volatility model, termed the realized stochastic volatility model, that incorporates realized volatility as an efficient proxy for latent volatility. To better capture the stylized features of financial return distributions, particularly skewness and heavy tails, we introduce three variants of skewed t-distributions, two of which incorporate skew-normal components to flexibly model asymmetry. The models are estimated using a Bayesian Markov chain Monte Carlo approach and applied to daily returns and realized volatilities from major U.S. and Japanese stock indices. Empirical results demonstrate that incorporating both realized volatility and flexible return distributions substantially improves the accuracy of volatility and tail risk forecasts.
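For reference, one common parameterisation of a realized stochastic volatility model reads as below; the paper's contribution replaces the Gaussian return error with skew-t variants, so this block is a baseline sketch rather than the exact specification.

```latex
\begin{align*}
  y_t &= \exp(h_t/2)\,\epsilon_t, & \epsilon_t &\sim \mathcal{N}(0,1),\\
  \log \mathrm{RV}_t &= \xi + h_t + u_t, & u_t &\sim \mathcal{N}(0,\sigma_u^2),\\
  h_{t+1} &= \mu + \phi\,(h_t - \mu) + \eta_t, & \eta_t &\sim \mathcal{N}(0,\sigma_\eta^2),
\end{align*}
```

where $y_t$ is the daily return, $\mathrm{RV}_t$ the realized volatility with bias-correction term $\xi$, and $h_t$ the latent log-volatility.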
|
https://arxiv.org/abs/2401.13179
|
Academic Papers
|
svg
|
dde75ef446bfe09145d4c5618ca4004e6bde26342605ac0cb4a19debae005c0e
|
2026-02-02T00:00:00-05:00
|
Divide and Diverge
|
arXiv:2405.20564v5 Announce Type: replace Abstract: Political polarization can be beneficial to competing political parties. I study how electoral competition itself generates incentives to polarize voters, even when parties are ex ante identical and motivated purely by political power, interpreted as office rents or influence. I develop a probabilistic voting model with aggregate popularity shocks in which parties have decreasing marginal utility from political power. Equilibrium policy convergence fails. Platform differentiation provides insurance against electoral volatility by securing loyal voter bases and stabilizing political power. In a unidimensional policy space, parties' equilibrium payoffs rise as voters on opposite sides of the median become more extreme, including when polarization is driven by changes in the opponent's supporters. In a multidimensional setting, parties benefit from ideological coherence, the alignment of disagreements across issues. The results have implications for polarizing political communication, party identity, and electoral institutions.
|
https://arxiv.org/abs/2405.20564
|
Academic Papers
|
svg
|
cf890e1cd822b2eab67af3fe960dd94d501b319f77a8f6c373170bb04c3e6153
|
2026-02-02T00:00:00-05:00
|
Identity and Cooperation in Multicultural Societies: An Experimental Investigation
|
arXiv:2507.02511v2 Announce Type: replace Abstract: Immigration has shaped many nations, posing the challenge of integrating immigrants into society. While economists often focus on immigrants' economic outcomes compared to natives (such as education, labor market success, and health), social interactions between immigrants and natives are equally crucial. These interactions, from everyday exchanges to teamwork, often lack enforceable contracts and require cooperation to avoid conflicts and achieve efficient outcomes. However, socioeconomic, ethnic, and cultural differences can hinder cooperation. Thus, evaluating integration should also consider its impact on fostering cooperation across diverse groups. This paper studies how priming different identity dimensions affects cooperation between immigrant and native youth. Immigrant identity includes both ethnic ties to their country of origin and connections to the host country. We test whether cooperation improves by making salient a specific identity: Common identity (shared society), Multicultural identity (ethnic group within society), or Neutral identity. In a lab-in-the-field experiment with over 390 adolescents, participants were randomly assigned to one of these priming conditions and played a Public Good Game. Results show that immigrants are 13 percent more cooperative than natives at baseline. Natives increase cooperation by about 3 percentage points when their multicultural identity is primed, closing the initial gap with immigrant peers.
|
https://arxiv.org/abs/2507.02511
|
Academic Papers
|
svg
|
290287cf6c379e927760100c4503487aa8c5fa42b93053d39880d4f01c6181fd
|
2026-02-02T00:00:00-05:00
|
A Time-Varying Branching Process Approach to Model Self-Renewing Cells
|
arXiv:2601.22282v1 Announce Type: new Abstract: Stem cells, through their ability to produce daughter stem cells and differentiate into specialized cells, are essential in the growth, maintenance, and repair of biological tissues. Understanding the dynamics of cell populations in the proliferation process not only uncovers proliferative properties of stem cells, but also offers insight into tissue development under both normal conditions and pathological disruption. In this paper, we develop a continuous-time branching process model with a time-dependent offspring distribution to characterize the stem cell proliferation process. We derive analytical expressions for the mean, variance, and autocovariance of the stem cell counts, and develop likelihood-based inference procedures to estimate model parameters. In particular, we construct a forward-algorithm likelihood to handle situations in which some cell types cannot be directly observed. Simulation results demonstrate that our estimation method recovers the time-dependent division probabilities with good accuracy.
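A minimal simulation sketch of a continuous-time branching process with a time-dependent offspring distribution: each stem cell waits an exponential time, then divides symmetrically, differentiates, or dies with probabilities that depend on the current time. The rate and the probability schedule below are illustrative assumptions, not the paper's fitted values.

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)
RATE, T_END = 1.0, 5.0

def offspring_probs(t):
    """(symmetric division, differentiation, death); division fades over time."""
    p_div = 0.6 * np.exp(-0.3 * t)
    return p_div, 0.3, 1.0 - p_div - 0.3

def simulate(n0=10):
    stem, events = n0, [rng.exponential(1 / RATE) for _ in range(n0)]
    heapq.heapify(events)
    while events and events[0] < T_END:
        t = heapq.heappop(events)
        stem -= 1                          # the acting cell leaves its current state
        p_div, p_diff, _ = offspring_probs(t)
        u = rng.random()
        if u < p_div:                      # two daughter stem cells, schedule both
            stem += 2
            heapq.heappush(events, t + rng.exponential(1 / RATE))
            heapq.heappush(events, t + rng.exponential(1 / RATE))
        elif u < p_div + p_diff:
            pass                           # differentiated cell exits the stem pool
        # else: death, nothing to schedule
    return stem

print(np.mean([simulate() for _ in range(200)]))  # mean stem-cell count at T_END
```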
|
https://arxiv.org/abs/2601.22282
|
Academic Papers
|
svg
|
eaf2006b3a232c088753f97f05ea0779522c50c9d0224cb5d16ff69f54aa73bd
|
2026-02-02T00:00:00-05:00
|
Mixed Latent Position Cluster Models for Networks
|
arXiv:2601.22380v1 Announce Type: new Abstract: Over the last two decades, the Latent Position Model (LPM) has become a prominent tool to obtain model-based visualizations of networks. However, the geometric structure of the LPM is inherently symmetric, in the sense that outgoing and incoming edges are assumed to follow the same statistical distribution. As a consequence, the canonical LPM framework is not ideal for the analysis of directed networks. In addition, edges may be weighted to describe the duration or intensity of a connection. This can lead to disassortative patterns and other motifs that cannot be easily captured by the underlying geometry. To address these limitations, we develop a novel extension of the LPM, called the Mixed Latent Position Cluster Model (MLPCM), which can deal with asymmetry and non-Euclidean patterns, while providing new interpretations of the latent space. We dissect the directed edges of the network by formally disentangling how a node behaves from how it is perceived by others. This leads to a dual representation of a node's profile, identifying its ``overt'' and ``covert'' social positions. In order to efficiently estimate the parameters of our model, we develop a variational Bayes approach to approximate the posterior distribution. Unlike many existing variational frameworks, our algorithm does not require any additional numerical approximations. Model selection is performed by introducing a novel partially integrated complete likelihood criterion, which builds upon the literature on penalized likelihood methods. We demonstrate the accuracy of our proposed methodology using synthetic datasets, and we illustrate its practical utility with an application to a dataset of international arms transfers.
|
https://arxiv.org/abs/2601.22380
|
Academic Papers
|
svg
|
f1ee5d16295c482fcc512e78460afae5c22c0b1032481e2b2906181113a1582d
|
2026-02-02T00:00:00-05:00
|
Changepoint Detection As Model Selection: A General Framework
|
arXiv:2601.22481v1 Announce Type: new Abstract: This dissertation presents a general framework for changepoint detection based on L0 model selection. The core method, Iteratively Reweighted Fused Lasso (IRFL), improves upon the generalized lasso by adaptively reweighting penalties to enhance support recovery and minimize criteria such as the Bayesian Information Criterion (BIC). The approach allows for flexible modeling of seasonal patterns, linear and quadratic trends, and autoregressive dependence in the presence of changepoints. Simulation studies demonstrate that IRFL achieves accurate changepoint detection across a wide range of challenging scenarios, including those involving nuisance factors such as trends, seasonal patterns, and serially correlated errors. The framework is further extended to image data, where it enables edge-preserving denoising and segmentation, with applications spanning medical imaging and high-throughput plant phenotyping. Applications to real-world data demonstrate IRFL's utility. In particular, analysis of the Mauna Loa CO2 time series reveals changepoints that align with volcanic eruptions and ENSO events, yielding a more accurate trend decomposition than ordinary least squares. Overall, IRFL provides a robust, extensible tool for detecting structural change in complex data.
|
https://arxiv.org/abs/2601.22481
|
Academic Papers
|
svg
|
4f38ab2c38ac94a9d6fdee932061ef35b920d4ad4bd2860424a1f56a08086262
|
2026-02-02T00:00:00-05:00
|
Group Sequential Methods for the Win Ratio
|
arXiv:2601.22525v1 Announce Type: new Abstract: The win ratio is increasingly used in randomized trials due to its intuitive clinical interpretation, its ability to incorporate the relative importance of composite endpoints, and its capacity to combine different types of outcomes (e.g. time-to-event, binary, counts). There are open questions, however, about how to implement adaptive design approaches when the primary endpoint is a win ratio, including in group sequential designs. A key requirement allowing for straightforward application of classical group sequential methods is the independence of incremental interim test statistics. This paper derives the covariance structure of incremental U-statistics that evaluate the win ratio under its asymptotic distribution. The derived covariance shows that the independent increments assumption holds for the asymptotic distribution of U-statistics that test the win ratio. Simulations confirm that traditional $\alpha$-spending preserves Type I error across interim looks. A retrospective look at the IN.PACT SFA clinical trial data illustrates the potential for stopping early in a group sequential design using the win ratio. We have demonstrated that straightforward use of Lan-DeMets $\alpha$-spending is possible for randomized trials involving the win ratio under certain common conditions. Thus, existing software capable of computing traditional group sequential boundaries can be employed.
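A compact sketch of the pairwise-comparison logic underlying the win ratio, on made-up two-outcome data (survival prioritised over hospitalisation counts); the paper's group sequential machinery concerns the joint distribution of such U-statistics across interim looks, which this toy does not reproduce.

```python
def win_ratio(treat, control):
    """treat/control: lists of (death_time, hosp_count). Later death wins;
    if tied on survival, fewer hospitalisations win; full ties are discarded."""
    wins = losses = 0
    for dt, ht in treat:
        for dc, hc in control:
            if dt != dc:
                wins += dt > dc; losses += dt < dc    # priority 1: survival
            elif ht != hc:
                wins += ht < hc; losses += ht > hc    # priority 2: hospitalisations
    return wins / losses

treat   = [(10, 1), (8, 0), (12, 2)]
control = [(7, 2), (8, 1), (9, 0)]
print(win_ratio(treat, control))   # 8 wins vs 1 loss -> win ratio 8.0
```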
|
https://arxiv.org/abs/2601.22525
|
Academic Papers
|
svg
|
bab4bed08bc73537e5fbb1ed7daaa900c6b6741358158ae9908a236876af35f4
|
2026-02-02T00:00:00-05:00
|
Propensity score weighted Cox regression for survival outcomes in observational studies with multiple or factorial treatments
|
arXiv:2601.22572v1 Announce Type: new Abstract: In observational studies with survival or time-to-event outcomes, a propensity score weighted marginal Cox proportional hazard model with the treatment variable as the only predictor is commonly used to estimate the causal marginal hazard ratio between two treatments. Observational studies often have more than two treatments, but corresponding analysis methods are limited. In this paper, we combine the propensity score weighting method for multiple treatments and a marginal Cox model with indicators for each treatment to estimate the causal hazard ratios between multiple treatments and a common reference treatment. We illustrate two weighting schemes: inverse probability of treatment weighting and overlap weighting. We prove the consistency of the maximum weighted partial likelihood estimator of the causal marginal hazard ratio and derive a robust sandwich variance estimator. As an important special case of multiple treatments, we elaborate the Cox model for two-way factorial treatments. We apply the method to evaluate the real-world comparative effectiveness of three types of anti-obesity medications on heart failure. We develop an associated R package 'PSsurvival'.
|
https://arxiv.org/abs/2601.22572
|
Academic Papers
|
svg
|
1a47074edd31a66890184da4f15bd0a4af21f683ec9c6c5b30af688157973095
|
2026-02-02T00:00:00-05:00
|
Quadratic robust methods for causal mediation analysis
|
arXiv:2601.22592v1 Announce Type: new Abstract: Estimating natural effects is a core task in causal mediation analysis. Existing triply robust (TR) frameworks (Tchetgen Tchetgen & Shpitser 2012) and their extensions have been developed to estimate the natural effects. In this work, we introduce a new quadruply robust (QR) framework that enlarges the model class for unbiased identification. We study two modeling strategies. The first is a nonparametric modeling approach, under which we propose a general QR estimator that supports the use of machine learning methods for nuisance estimation. We also study high-dimensional settings, where the dimensions of covariates and mediators may both be large. In these settings, we adopt a parametric modeling strategy and develop a model quadruply robust (MQR) estimator to limit the impact of model misspecification. Simulation studies and a real data application demonstrate the finite-sample performance of the proposed methods.
|
https://arxiv.org/abs/2601.22592
|
Academic Papers
|
svg
|
a8e1fb3c82674d9822095016efb31e6bf34ba144e1d91083f9565f49f5a402fc
|
2026-02-02T00:00:00-05:00
|
Policy learning under constraint: Maximizing a primary outcome while controlling an adverse event
|
arXiv:2601.22717v1 Announce Type: new Abstract: A medical policy aims to support decision-making by mapping patient characteristics to individualized treatment recommendations. Standard approaches typically optimize a single outcome criterion. For example, recommending treatment according to the sign of the Conditional Average Treatment Effect (CATE) maximizes the policy "value" by exploiting treatment effect heterogeneity. This point of view shifts policy learning towards the challenge of learning a reliable CATE estimator. However, in multi-outcome settings, such strategies ignore the risk of adverse events, despite their relevance. PLUC (Policy Learning Under Constraint) addresses these challenges by learning an estimator of the CATE that yields smoothed policies controlling the probability of an adverse event in observational settings. Inspired by insights from EP-learning, PLUC involves the optimization of strongly convex Lagrangian criteria over a convex hull of functions. Its alternating procedure iteratively applies the Frank-Wolfe algorithm to minimize the current criterion, then performs a targeting step that updates the criterion so that its evaluations at previously visited landmarks become targeted estimators of the corresponding theoretical quantities. An R package PLUC-R provides a practical implementation. We illustrate PLUC's performance through a series of numerical experiments.
|
https://arxiv.org/abs/2601.22717
|
Academic Papers
|
svg
|
38c718616d989d30bfffea2ea7caeae418944f01fb19fc877853810691e82c86
|
2026-02-02T00:00:00-05:00
|
Optimal Sample Splitting for Observational Studies
|
arXiv:2601.22782v1 Announce Type: new Abstract: In observational studies of treatment effects, estimates may be biased by unmeasured confounders, which can potentially affect the validity of the results. Understanding sensitivity to such biases helps assess how unmeasured confounding impacts credibility. The design of an observational study strongly influences its sensitivity to bias. Previous work has shown that the sensitivity to bias can be reduced by dividing a dataset into a planning sample and a larger analysis sample, where the planning sample guides design decisions. But the choice of what fraction of the data to put in the planning sample vs. the analysis sample was ad hoc. Here, we develop an approach to find the optimal fraction using plasmode datasets. We show that our method works well in high-dimensional outcome spaces. We apply our method to study the effects of exposure to second-hand smoke in children. The OptimalSampling R package implementing our method is available at GitHub.
|
https://arxiv.org/abs/2601.22782
|
Academic Papers
|
svg
|
0c6cbc0a55e9204f0042bd31be6b1416d1c6f5bdc3c262a6601a7a67302b03bd
|
2026-02-02T00:00:00-05:00
|
Wasserstein Geometry of Information Loss in Nonlinear Dynamical Systems
|
arXiv:2601.22814v1 Announce Type: new Abstract: Time-delay embedding is a powerful technique for reconstructing the state space of nonlinear time series. However, the fidelity of reconstruction relies on the assumption that the time-delay map is an embedding, which is implicitly justified by Takens' embedding theorem but rarely scrutinised in practice. In this work, we argue that time-delay reconstruction is not always an embedding, and that the non-injectivity of the time-delay map induced by a given measurement function causes irreducible information loss, degrading downstream model performance. Our analysis reveals that this local self-overlap stems from inherent dynamical properties, governed by the competition between the dynamical and the curvature penalty, and that the irreducible information loss scales with the product of the geometric separation and the probability mass. We establish a measure-theoretic framework that lifts the dynamics to the space of probability measures, where the multi-valued evolution induced by the non-injectivity is quantified by how far the $n$-step conditional kernel $K^{n}(x, \cdot)$ deviates from a Dirac mass, and we introduce the intrinsic stochasticity $\mathcal{E}^{*}_{n}$, an almost-everywhere, data-driven certificate of deterministic closure, to quantify irreducible information loss without any prior information. We demonstrate that $\mathcal{E}^{*}_{n}$ improves reconstruction quality and downstream model performance on both synthetic and real-world nonlinear data sets.
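A minimal sketch of time-delay embedding together with a crude self-overlap diagnostic of my own construction (not the paper's $\mathcal{E}^{*}_{n}$ estimator): if the delay map is injective, points that are close in the reconstructed space but far apart in time should also have close futures; a large future discrepancy flags information loss. The signal, embedding dimension, and thresholds are illustrative.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Rows are (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

t = np.linspace(0, 120, 12000)
x = np.sin(t) + np.sin(2.31 * t)           # two incommensurate frequencies
emb = delay_embed(x, dim=2, tau=7)         # deliberately too low-dimensional

h = 40                                     # prediction horizon (samples)
n = len(emb) - h
idx = np.arange(0, n, 10)                  # subsample to keep the pairwise matrix small
d = np.linalg.norm(emb[idx, None, :] - emb[None, idx, :], axis=-1)
far_in_time = np.abs(idx[:, None] - idx[None, :]) > 500
i, j = np.where((d < 0.05) & far_in_time)  # close in embedding, far in time
spread = np.abs(x[idx[i] + h] - x[idx[j] + h])
print(f"{len(i)} overlapping pairs, median future discrepancy {np.median(spread):.3f}")
```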
|
https://arxiv.org/abs/2601.22814
|
Academic Papers
|
svg
|
61fa6eb701f437eada3ef062f8973108022ce28a652d1b7720ed727539c48993
|
2026-02-02T00:00:00-05:00
|
Depth-based estimation for multivariate functional data with phase variability
|
arXiv:2601.22884v1 Announce Type: new Abstract: In the context of multivariate functional data with individual phase variation, we develop a robust depth-based approach to estimate the main pattern function when cross-component time warping is also present. In particular, we consider the latent deformation model (Carroll and M\"uller, 2023) in which the different components of a multivariate functional variable are also time-distorted versions of a common template function. Rather than focusing on a particular functional depth measure, we discuss the necessary conditions on a depth function to be able to provide a consistent estimation of the central pattern, considering different model assumptions. We evaluate the method performance and its robustness against atypical observations and violations of the model assumptions through simulations, and illustrate its use on two real data sets.
|
https://arxiv.org/abs/2601.22884
|
Academic Papers
|
svg
|
379da6b81574ae3b0210b11e0375d71b9faf4b9e03dbcb1c55e1b23e6ff53355
|
2026-02-02T00:00:00-05:00
|
Dynamic modelling and evaluation of preclinical trials in acute leukaemia
|
arXiv:2601.22971v1 Announce Type: new Abstract: Dynamic models are widely used to mathematically describe biological phenomena that evolve over time. One important area of application is leukaemia research, where leukaemia cells are genetically modified in preclinical studies to explore new therapeutic targets for reducing leukaemic burden. In advanced experiments, these studies are often conducted in mice and generate time-resolved data, the analysis of which may reveal growth-inhibiting effects of the investigated gene modifications. However, the experimental data are often evaluated using statistical tests which compare measurements from only two different time points. This approach not only reduces the time series to two instances but also neglects biological knowledge about cell mechanisms. Such knowledge, translated into mathematical models, expands the power to investigate and understand effects of modifications on underlying mechanisms based on experimental data. We utilise two population growth models -- an exponential and a logistic growth model -- to capture cell dynamics over the whole experimental time horizon and to consider all measurement times jointly. This approach enables us to derive modification effects from estimated model parameters. We demonstrate that the exponential growth model recognises simulated scenarios more reliably than the other candidate model and than a statistical test. Moreover, we apply the population growth models to evaluate the efficacy of candidate gene knockouts in patient-derived xenograft (PDX) models of acute leukaemia.
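A short sketch of the two candidate models fitted to time-resolved burden data by least squares; the time grid and measurements below are made up, and the paper's likelihood-based analysis is richer than this curve fit. A knockout's growth-inhibiting effect would show up as a lower fitted growth rate (or carrying capacity) than in control mice.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, n0, r):
    return n0 * np.exp(r * t)

def logistic(t, n0, r, K):
    return K / (1 + (K / n0 - 1) * np.exp(-r * t))

t = np.array([0, 3, 6, 9, 12, 15, 18], dtype=float)    # days, illustrative
y = np.array([1.0, 2.1, 4.2, 8.5, 15.0, 22.0, 26.0])   # burden measurements, illustrative

p_exp, _ = curve_fit(exponential, t, y, p0=[1, 0.2])
p_log, _ = curve_fit(logistic, t, y, p0=[1, 0.3, 30])
print("exponential n0, r    :", p_exp)
print("logistic    n0, r, K :", p_log)
```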
|
https://arxiv.org/abs/2601.22971
|
Academic Papers
|
svg
|
ef419274271400000f6c0d68ba59b578f8a34125c74837d74f6e2a863b10c9b8
|
2026-02-02T00:00:00-05:00
|
Computationally efficient segmentation for non-stationary time series with oscillatory patterns
|
arXiv:2601.22999v1 Announce Type: new Abstract: We propose a novel approach for change-point detection and parameter learning in multivariate non-stationary time series exhibiting oscillatory behaviour. We approximate the process through a piecewise function defined by a sum of sinusoidal functions with unknown frequencies and amplitudes plus noise. The inference for this model is non-trivial. However, discretising the parameter space allows us to recast this complex estimation problem into a more tractable linear model, where the covariates are Fourier basis functions. Then, any change-point detection algorithms for segmentation can be used. The advantage of our proposal is that it bypasses the need for trans-dimensional Markov chain Monte Carlo algorithms used by state-of-the-art methods. Through simulations, we demonstrate that our method is significantly faster than existing approaches while maintaining comparable numerical accuracy. We also provide high probability bounds on the change-point localization error. We apply our methodology to climate and EEG sleep data.
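A minimal sketch of the linearisation idea: discretise candidate frequencies, build a sinusoidal design matrix, and reduce segmentation to changepoint detection in a linear model. The frequency grid and the brute-force single-changepoint scan below are illustrative simplifications; in practice any off-the-shelf changepoint routine can consume the design matrix.

```python
import numpy as np

def fourier_design(n, freqs):
    """Columns are sin/cos pairs at each candidate frequency (cycles per sample)."""
    t = np.arange(n)
    cols = []
    for f in freqs:
        cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
n, true_cp = 600, 300
t = np.arange(n)
y = np.where(t < true_cp, np.sin(2 * np.pi * 0.05 * t),
             2.0 * np.sin(2 * np.pi * 0.11 * t)) + 0.3 * rng.normal(size=n)

X = fourier_design(n, freqs=np.arange(2, 80, 4) / n)   # coarse Fourier grid

def rss(a, b):
    """Residual sum of squares of a least-squares fit on segment [a, b)."""
    beta, *_ = np.linalg.lstsq(X[a:b], y[a:b], rcond=None)
    r = y[a:b] - X[a:b] @ beta
    return r @ r

best = min(range(100, 500, 10), key=lambda s: rss(0, s) + rss(s, n))
print("estimated changepoint:", best)   # close to the true split at 300
```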
|
https://arxiv.org/abs/2601.22999
|
Academic Papers
|
svg
|
f24cca91b0afd4faa2ffabc43f6e22ca6586b69ae8a1065a84ed69cbea4296d9
|
2026-02-02T00:00:00-05:00
|
Differences in Performance of Bayesian Dynamic Borrowing and Synthetic Control Methods: A Case Study of Pediatric Atopic Dermatitis
|
arXiv:2601.23021v1 Announce Type: new Abstract: Bayesian dynamic borrowing (BDB) and synthetic control methods (SCM) are both used in clinical trial design when recruitment, retention, or allocation is a challenge. The performance of these approaches has not previously been directly compared due to differences in application, product, and measurement metrics. This study aims to conduct a comparison of power and type 1 error rates of BDB (using a meta-analytic predictive (MAP) prior) and SCM using a case study of Pediatric Atopic Dermatitis. Six historical randomised controlled trials were selected for use in both the creation of the MAP prior and the synthetic control arm. The R library RBesT was used to create a MAP prior and the R library Synthpop was used to create a synthetic control arm for the SCM. Power and type 1 error rate were used as comparison metrics. BDB produced a power of 0.580 and a type 1 error rate of 0.026. SCM produced a power of 0.641 and a type 1 error rate of 0.027. In this case study, the SCM model produced a higher power than the BDB method with a similar type 1 error rate. However, the decision to use SCM or BDB should come from the specific needs of the potential trial, since their power and type 1 error rate may differ on a case-by-case basis.
|
https://arxiv.org/abs/2601.23021
|
Academic Papers
|
svg
|
0cb70371be795f08ee475ed2408ccfadcfbcc2e00f400aaebceb8a7e8c0f4899
|
2026-02-02T00:00:00-05:00
|
Revisiting the Lost Submarine Problem: A Decision Theoretic Approach
|
arXiv:2601.23171v1 Announce Type: new Abstract: This article includes a discussion of the ``lost submarine problem'', following Morey \emph{et al} (2016). As the title of that paper suggests (\emph{The fallacy of placing confidence in confidence intervals}), the example is intended to illustrate the futility of relying on the confidence interval as a formal inference statement. In the view of this author, the misgivings expressed in Morey \emph{et al} (2016) can be resolved using a decision theoretic approach. While it is true that a variety of statistical methods lead to a variety of confidence intervals, once we precisely define their purpose, a single optimal choice emerges. Furthermore, distinct purposes lead to distinct optimal choices. Therefore, that a variety of procedures exist is an advantage rather than a liability.
|
https://arxiv.org/abs/2601.23171
|
Academic Papers
|
svg
|
6a89ea7117b758e0ffceff6a5a5ba0fb2365ba18f2ab913178c302b587416fee
|
2026-02-02T00:00:00-05:00
|
Robust, partially alive particle Metropolis-Hastings via the Frankenfilter
|
arXiv:2601.23173v1 Announce Type: new Abstract: When a hidden Markov model permits the conditional likelihood of an observation given the hidden process to be zero, all particle simulations from one observation time to the next could produce zeros. If so, the filtering distribution cannot be estimated and the estimated parameter likelihood is zero. The alive particle filter addresses this by simulating a random number of particles for each inter-observation interval, stopping after a target number of non-zero conditional likelihoods. For outlying observations or poor parameter values, a non-zero result can be extremely unlikely, and computational costs prohibitive. We introduce the Frankenfilter, a principled, partially alive particle filter that targets a user-defined amount of success whilst fixing lower and upper bounds on the number of simulations. The Frankenfilter produces unbiased estimators of the likelihood, suitable for pseudo-marginal Metropolis--Hastings (PMMH). We demonstrate that PMMH with the Frankenfilter is more robust to outliers and mis-specified initial parameter values than PMMH using standard particle filters, and is typically at least 2-3 times more efficient. We also provide advice for choosing the amount of success. In the case of $n$ exact observations, this is particularly simple: target $n$ successes.
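A sketch of one step of the *standard* alive particle filter that the paper builds on: keep proposing particles until a target number have non-zero conditional likelihood, and use the unbiased weight $(N-1)/(R-1)$, where $R$ is the total number of proposals. The Frankenfilter's lower/upper bounds on $R$ are the paper's refinement and are only hinted at here via MAX_SIMS; the toy transition and observation models are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
N_TARGET, MAX_SIMS = 100, 100_000

def propagate(x):
    """Toy transition: random walk (illustrative stand-in for the model)."""
    return x + rng.normal()

def nonzero_lik(x, y):
    """Toy observation: likelihood is zero unless the particle is within 1 of y."""
    return abs(x - y) < 1.0

def alive_step(particles, y):
    alive, total = [], 0
    while len(alive) < N_TARGET and total < MAX_SIMS:
        x = propagate(rng.choice(particles))
        total += 1
        if nonzero_lik(x, y):
            alive.append(x)
    # Unbiased estimate of P(non-zero likelihood) from the stopping rule;
    # hitting MAX_SIMS without success is the failure mode the paper targets.
    weight = (N_TARGET - 1) / (total - 1) if len(alive) == N_TARGET else 0.0
    return np.array(alive), weight

particles = rng.normal(size=N_TARGET)
new_particles, w = alive_step(particles, y=0.5)
print(len(new_particles), w)
```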
|
https://arxiv.org/abs/2601.23173
|
Academic Papers
|
svg
|
7088b9fb01787e02df30a5de35309fe1b57d0948a7232f00dbe5db9697698658
|
2026-02-02T00:00:00-05:00
|
Beyond the Null Effect: Unmasking the True Impact of Teacher-Child Interaction Quality on Child Outcomes in Early Head Start
|
arXiv:2601.23203v1 Announce Type: new Abstract: In Early Head Start (EHS), teacher-child interactions are widely believed to shape infant-toddler outcomes, yet large-scale studies often find only modest or null associations. This study addresses four methodological sources of attenuation -- item-level measurement error, center-level confounding, teacher- and classroom-level covariate imbalance, and overlooked nonlinearities -- to clarify classroom process quality's true influence on child development. Using data from the 2018 wave of the Early Head Start Family and Child Experiences Survey (Baby FACES), we applied a three-level generalized additive latent and mixed model (GALAMM) to distinguish genuine classroom-level variability in process quality, as measured by the Classroom Assessment Scoring System (CLASS) and Quality of Caregiver-Child Interactions for Infants and Toddlers (QCIT), from item-level noise and center-level effects. We then estimated dose-response relationships with children's language and socioemotional outcomes, employing covariate balancing weights and generalized additive models. Results show that nearly half of each item's variance reflects classroom-level processes, with the remainder tied to measurement error or center-wide influences, masking true classroom effects. After correcting for these biases, domain-focused dose-response analyses reveal robust linear associations between cognitive/language supports and children's English communicative skills, while emotional-behavioral supports better predict social-emotional competence. Some domains display plateaus when pushed to extremes, underscoring potential nonlinearities. These findings challenge the "null effect" narrative, demonstrating that rigorous methodology can uncover the critical, domain-specific impacts of teacher-child interaction quality, offering clearer guidance for targeted professional development and policy in EHS.
|
https://arxiv.org/abs/2601.23203
|
Academic Papers
|
svg
|
f0e943fc8262da6a4ad95f344a90c40bccca2413317c70873baacfcdba35154f
|
2026-02-02T00:00:00-05:00
|
Leaf clustering using circular densities
|
arXiv:2211.10547v2 Announce Type: replace Abstract: In botany, leaf shape recognition is an important task. One way of characterising the leaf shape is through the centroid contour distances (CCD). Each CCD path might have a different resolution, so normalisation is done by associating each contour to a circular density. Densities are rotated by subtracting the mean or mode preferred direction. Distance measures between densities are used to produce a hierarchical clustering method to cluster the leaves. We illustrate our approach with a motivating small dataset as well as a larger dataset.
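A short sketch of the centroid contour distance representation and a rotation normalisation by the modal direction (the mean-direction variant is analogous); the input format, an (n, 2) array of contour points, and the synthetic five-lobed "leaf" are assumptions for illustration. Pairwise distances between the resulting circular densities would then feed a hierarchical clustering, which this sketch omits.

```python
import numpy as np

def ccd(contour):
    """Angles (radians) and centroid-to-contour distances for an (n, 2) contour."""
    centred = contour - contour.mean(axis=0)
    return np.arctan2(centred[:, 1], centred[:, 0]), np.linalg.norm(centred, axis=1)

def rotate_to_mode(angles, dists):
    """Rotation-normalise by subtracting the modal preferred direction,
    i.e. the angle at which the centroid distance is largest."""
    mode_dir = angles[np.argmax(dists)]
    return np.mod(angles - mode_dir + np.pi, 2 * np.pi) - np.pi, dists

# A synthetic five-lobed "leaf" contour.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
r = 1 + 0.3 * np.cos(5 * theta)
leaf = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

angles, dists = rotate_to_mode(*ccd(leaf))
print(dists.min(), dists.max())   # lobe structure: radii between 0.7 and 1.3
```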
|
https://arxiv.org/abs/2211.10547
|
Academic Papers
|
svg
|
3ddfcbda44d0ac4a773c5c44990149edccb24a3d8d70e9cb9156877a3a2891f8
|
2026-02-02T00:00:00-05:00
|
A VAE Approach to Sample Multivariate Extremes
|
arXiv:2306.10987v2 Announce Type: replace Abstract: Generating accurate extremes from an observational data set is crucial when seeking to estimate risks associated with the occurrence of future extremes which could be larger than those already observed. Applications range from the occurrence of natural disasters to financial crashes. Generative approaches from the machine learning community do not apply to extreme samples without careful adaptation. Moreover, asymptotic results from extreme value theory (EVT) give a theoretical framework to model multivariate extreme events, especially through the notion of multivariate regular variation. Bridging these two fields, this paper details a variational autoencoder (VAE) approach for sampling multivariate heavy-tailed distributions, i.e., distributions likely to have extremes of particularly large intensities. We illustrate the relevance of our approach on a synthetic data set and on a real data set of discharge measurements along the Danube river network. The latter shows the potential of our approach for flood risk assessment. In addition to outperforming the standard VAE for the tested data sets, we also provide a comparison with a competing EVT-based generative approach. On the tested cases, our approach improves the learning of the dependency structure between extremes.
|
https://arxiv.org/abs/2306.10987
|
Academic Papers
|
svg
|
8d348dd5be042bf3e095fc825687c07a95f9299c171d905feaabf86e2b12a948
|
2026-02-02T00:00:00-05:00
|
Bayesian Strategies for Repulsive Spatial Point Processes
|
arXiv:2404.15133v3 Announce Type: replace Abstract: There is increasing interest in developing Bayesian inferential algorithms for point process models with intractable likelihoods. One purpose of this paper is to illustrate the utility of simulation-based strategies, including Approximate Bayesian Computation (ABC) and Markov Chain Monte Carlo (MCMC) methods, for this task. Shirota and Gelfand (2017) proposed an extended version of an ABC approach for Repulsive Spatial Point Processes (RSPP), but their algorithm was not correctly detailed. In this paper, we correct their method and, based on this, we propose a new ABC-MCMC algorithm that introduces a Markov property relative to a typical ABC method. Though the algorithm is generally impractical to use exactly, Monte Carlo approximations can be leveraged for the intractable terms. Another aspect of this paper is to explore the use of the exchange algorithm and the noisy Metropolis-Hastings algorithm (Alquier et al., 2016) on RSPP. Comparisons to ABC-MCMC methods are also provided. We find that the inferential approaches outlined above yield good performance for RSPP in both simulated and real data applications and should be considered as viable approaches for the analysis of these models.
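A generic ABC-MCMC sketch in the Marjoram style: propose a parameter with a symmetric random walk, accept on the prior ratio, then simulate synthetic data and retain the move only when the simulated summary statistic falls within epsilon of the observed one. The simulator below is a stand-in; for an RSPP it would generate a repulsive point pattern, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(theta):
    """Stand-in simulator: summary statistic of a synthetic sample."""
    return np.mean(rng.normal(theta, 1.0, size=50))

obs_stat, eps, n_iter = 1.3, 0.1, 5000
theta, chain = 0.0, []
for _ in range(n_iter):
    prop = theta + rng.normal(0, 0.5)                    # symmetric random-walk proposal
    prior_ratio = np.exp(-0.5 * (prop**2 - theta**2))    # standard normal prior
    if rng.random() < min(1.0, prior_ratio):             # MH step on the prior
        if abs(simulate(prop) - obs_stat) < eps:         # ABC acceptance
            theta = prop
    chain.append(theta)
print(np.mean(chain[1000:]))   # ABC posterior mean, shrunk slightly toward the prior
```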
|
https://arxiv.org/abs/2404.15133
|
Academic Papers
|
svg
|
d9f9df2c3431c41adc383dd7deffc1811209f825468105394fa0f7f4ce800bda
|
2026-02-02T00:00:00-05:00
|
Multivariate Bayesian Last Layer for Regression with Uncertainty Quantification and Decomposition
|
arXiv:2405.01761v2 Announce Type: replace Abstract: We present new Bayesian Last Layer neural network models in the setting of multivariate regression under heteroscedastic noise, and propose EM algorithms for parameter learning. Bayesian modeling of a neural network's final layer has the attractive property of uncertainty quantification with a single forward pass. The proposed framework is capable of disentangling the aleatoric and epistemic uncertainty, and can be used to enhance a canonically trained deep neural network with uncertainty-aware capabilities.
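A minimal sketch of the Bayesian-last-layer idea: freeze the network body as a feature map, place a Gaussian prior on the output weights, and use the conjugate posterior to split predictive variance into epistemic and aleatoric parts in a single forward pass. The random frozen body and homoscedastic univariate noise are simplifying assumptions; the paper treats heteroscedastic multivariate regression with EM-based learning.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d_in, d_feat, noise_var, prior_var = 200, 1, 50, 0.05, 1.0

X = rng.uniform(-3, 3, size=(n, d_in))
y = np.sin(X[:, 0]) + np.sqrt(noise_var) * rng.normal(size=n)

W1, b1 = rng.normal(size=(d_in, d_feat)), rng.normal(size=d_feat)
phi = lambda x: np.tanh(x @ W1 + b1)          # frozen "body" of the network

# Conjugate Gaussian posterior over the last-layer weights: N(mu, Sigma).
Phi = phi(X)
Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.eye(d_feat) / prior_var)
mu = Sigma @ Phi.T @ y / noise_var

x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
Pt = phi(x_test)
pred_mean = Pt @ mu
epistemic = np.einsum("ij,jk,ik->i", Pt, Sigma, Pt)   # variance from weight posterior
total_var = epistemic + noise_var                     # plus aleatoric noise
print(np.c_[pred_mean, epistemic, total_var])
```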
|
https://arxiv.org/abs/2405.01761
|
Academic Papers
|
svg
|
4a24825923040fab394e7e14d2fd5a64b578bf33876f71d59e9c805c837a9937
|
2026-02-02T00:00:00-05:00
|
CLE-SH: Comprehensive Literal Explanation package for SHapley values by statistical validity
|
arXiv:2409.12578v2 Announce Type: replace Abstract: Recently, SHapley Additive exPlanations (SHAP) has been widely utilized in various research domains. This is particularly evident in application fields, where SHAP analysis serves as a crucial tool for identifying biomarkers and assisting in result validation. However, despite its frequent usage, SHAP is often not applied in a manner that maximizes its potential contributions. A review of recent papers employing SHAP reveals that many studies subjectively select a limited number of features as 'important' and analyze SHAP values through informal visual inspection of plots, without assessing statistical significance. Such superficial application may hinder meaningful contributions to the applied fields. To address this, we propose a library package designed to simplify the interpretation of SHAP values. By simply inputting the original data and SHAP values, our library provides: 1) the number of important features to analyze, 2) the pattern of each feature via univariate analysis, and 3) the interaction between features. All information is extracted based on its statistical significance and presented in simple, comprehensible sentences, enabling users of all levels to understand the interpretations. We hope this library fosters a comprehensive understanding of statistically valid SHAP results.
|
https://arxiv.org/abs/2409.12578
|
Academic Papers
|
svg
|
5862441d3b70179e0a66068cea22b5686139ac3cfc9e7b437c7734c4daf6f44d
|
2026-02-02T00:00:00-05:00
|
Bayesian Transfer Learning for Artificially Intelligent Geospatial Systems: A Predictive Stacking Approach
|
arXiv:2410.09504v4 Announce Type: replace Abstract: Building artificially intelligent geospatial systems requires rapid delivery of spatial data analysis on massive scales with minimal human intervention. Depending upon their intended use, data analysis can also involve model assessment and uncertainty quantification. This article devises transfer learning frameworks for deployment in artificially intelligent systems, where a massive data set is split into smaller data sets that stream into the analytical framework to propagate learning and assimilate inference for the entire data set. Specifically, we introduce Bayesian predictive stacking for multivariate spatial data and demonstrate rapid and automated analysis of massive data sets. Furthermore, inference is delivered without human intervention and without excessively demanding hardware. We illustrate the effectiveness of our approach through extensive simulation experiments and in producing inference from a massive vegetation index dataset that is indistinguishable from traditional (and more expensive) statistical approaches.
|
https://arxiv.org/abs/2410.09504
|
Academic Papers
|
svg
|
4635f7af9666d0534e7a482dd1e27b081e2a0befd772ddecb24ae2bbb9b252eb
|
2026-02-02T00:00:00-05:00
|
Model-assisted inference for dynamic causal effects in staggered rollout cluster randomized experiments
|
arXiv:2502.10939v3 Announce Type: replace Abstract: Staggered rollout cluster randomized experiments (SR-CREs) involve sequential treatment adoption across clusters, requiring analysis methods that address a general class of dynamic causal effects, anticipation, and non-ignorable cluster-period sizes. Without imposing any outcome modeling assumptions, we study regression estimators using individual data, cluster-period averages, and scaled cluster-period totals, with and without covariate adjustment from a design-based perspective. We establish consistency and asymptotic normality of each estimator under a randomization-based framework and prove that the associated variance estimators are asymptotically conservative in the L\"{o}wner ordering. Furthermore, we conduct a unified efficiency comparison of the estimators and provide recommendations. We highlight the efficiency advantage of using estimators based on scaled cluster-period totals with covariate adjustment over their counterparts using individual-level data and cluster-period averages. Our results rigorously justify linear regression estimators as model-assisted methods to address an entire class of dynamic causal effects in SR-CREs.
|
https://arxiv.org/abs/2502.10939
|
Academic Papers
|
svg
|
3111d115538e17bb65e75c9df97a14f88573611a9b6706536d47954ad100bcb6
|
2026-02-02T00:00:00-05:00
|
Bayesian Kernel Machine Regression via Random Fourier Features for Estimating Joint Health Effects of Multiple Exposures
|
arXiv:2502.13157v2 Announce Type: replace Abstract: Environmental epidemiology has traditionally examined exposures one at a time. Advances in exposure assessment and statistical methods now enable studies of multiple exposures and their combined health impacts. Bayesian Kernel Machine Regression (BKMR) is a widely used approach that flexibly estimates joint, nonlinear effects of multiple exposures. But BKMR is computationally intensive for large datasets, as repeated kernel inversion in Markov chain Monte Carlo (MCMC) can be time-consuming and often infeasible in practice. To address this issue, we propose using supervised random Fourier basis functions to replace the Gaussian process random effects. This recasts the kernel machine regression as a linear mixed-effects model that facilitates computationally efficient estimation and prediction. Bayesian inference is conducted using MCMC with Hamiltonian Monte Carlo algorithms. Simulation studies demonstrate that our method yields results comparable to BKMR while significantly reducing the computation time. Our approach outperforms BKMR when the exposure-response surface has stronger dependency and when using the predictive process as an alternative approximation method. Finally, we applied this approach to analyze over 270,000 birth records, examining associations between multiple ambient air pollutants and birthweight in Georgia.
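A short sketch of the (unsupervised) Rahimi-Recht random Fourier feature construction that underlies this kind of kernel replacement: with $W$ drawn from the kernel's spectral density, $z(x) = \sqrt{2/D}\cos(Wx + b)$ satisfies $z(x)^\top z(x') \approx \exp(-\lVert x - x'\rVert^2 / 2\ell^2)$, turning kernel machine regression into a linear model in the features. The lengthscale and $D$ are illustrative, and the paper's supervised variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
d, D, lengthscale = 5, 500, 1.0

W = rng.normal(0, 1 / lengthscale, size=(D, d))   # spectral density of the RBF kernel
b = rng.uniform(0, 2 * np.pi, size=D)

def z(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = rng.normal(size=(3, d))
approx = z(X) @ z(X).T
exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1) / lengthscale**2)
print(np.round(approx, 3), np.round(exact, 3), sep="\n")   # entries agree closely
```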
|
https://arxiv.org/abs/2502.13157
|
Academic Papers
|
svg
|
bda3e536f1a046fdf6840517e785b5b9514db46af22996623682fad646382ca0
|
2026-02-02T00:00:00-05:00
|
A Zero-Inflated Poisson Latent Position Cluster Model
|
arXiv:2502.13790v2 Announce Type: replace Abstract: The latent position network model (LPM) is a popular approach for the statistical analysis of network data. A central aspect of this model is that it assigns nodes to random positions in a latent space, such that the probability of an interaction between each pair of individuals or nodes is determined by their distance in this latent space. A key feature of this model is that it allows one to visualize nuanced structures via the latent space representation. The LPM can be further extended to the Latent Position Cluster Model (LPCM), to accommodate the clustering of nodes by assuming that the latent positions are distributed following a finite mixture distribution. In this paper, we extend the LPCM to accommodate missing network data and apply this to non-negative discrete weighted social networks. By treating missing data as ``unusual'' zero interactions, we propose a combination of the LPCM with the zero-inflated Poisson distribution. Statistical inference is based on a novel partially collapsed Markov chain Monte Carlo algorithm, where a Mixture-of-Finite-Mixtures (MFM) model is adopted to automatically determine the number of clusters and optimal group partitioning. Our algorithm features a truncated absorb-eject move, which is a novel adaptation of an idea commonly used in collapsed samplers, within the context of MFMs. Another aspect of our work is that we illustrate our results on 3-dimensional latent spaces, maintaining clear visualizations while achieving more flexibility than 2-dimensional models. The performance of this approach is illustrated via three carefully designed simulation studies, as well as four different publicly available real networks, where some interesting new perspectives are uncovered.
|
https://arxiv.org/abs/2502.13790
|
Academic Papers
|
svg
|
f2fcbea7b894e7e38e7ad7859d273ad6791ecd6136191ee7aa0eac925662efbf
|
2026-02-02T00:00:00-05:00
|
Quantifying sleep apnea heterogeneity using hierarchical Bayesian modeling
|
arXiv:2503.11599v5 Announce Type: replace Abstract: Obstructive Sleep Apnea (OSA) is a breathing disorder during sleep that affects millions of people worldwide. The diagnosis of OSA often occurs through an overnight polysomnogram (PSG) sleep study that generates a massive amount of physiological data. However, despite the evidence of substantial heterogeneity in the expression and symptoms of OSA, diagnosis and scientific analysis of severity typically focus on a single summary statistic, the Apnea-Hypopnea Index (AHI). We address the limitations of this approach through hierarchical Bayesian modeling of PSG data. Our approach produces interpretable random effects for each patient, which govern sleep-stage dynamics, rates of OSA events, and impacts of OSA events on subsequent sleep-stage dynamics. We propose a novel approach for using these random effects to produce a Bayes optimal clustering of patients. We use the proposed approach to analyze data from the APPLES study. Our analysis produces clinically interesting groups of patients with sleep apnea and a novel finding of an association between OSA expression and cognitive performance that is missed by an AHI-based analysis.
|
https://arxiv.org/abs/2503.11599
|
Academic Papers
|
svg
|
447fcdc32793272a726e803b05528f42e9e8bbdc66107f989caf135e13682ec5
|
2026-02-02T00:00:00-05:00
|
Representation Learning for Extrapolation in Perturbation Modeling
|
arXiv:2504.18522v2 Announce Type: replace Abstract: We consider the problem of modeling the effects of perturbations, such as gene knockdowns or drugs, on measurements, such as single-cell RNA or protein counts. Given data for some perturbations, we aim to predict the distribution of measurements for new combinations of perturbations. To address this challenging extrapolation task, we posit that perturbations act additively in a suitable, unknown embedding space. We formulate the data-generating process as a latent variable model, in which perturbations amount to mean shifts in latent space and can be combined additively. We then prove that, given sufficiently diverse training perturbations, the representation and perturbation effects are identifiable up to orthogonal transformation and use this to characterize the class of unseen perturbations for which we obtain extrapolation guarantees. We establish a link between our model class and shift interventions in linear latent causal models. To estimate the model from data, we propose a new method, the perturbation distribution autoencoder (PDAE), which is trained by maximizing the distributional similarity between true and simulated perturbation distributions. The trained model can then be used to predict previously unseen perturbation distributions. Through simulations, we demonstrate that PDAE can accurately predict the effects of unseen but identifiable perturbations, supporting our theoretical results.
|
https://arxiv.org/abs/2504.18522
|
Academic Papers
|
svg
|
04ec84e042b0c70ce1eafaeba4d7818adcbd822172a7c539979befea98f6f998
|
2026-02-02T00:00:00-05:00
|
Discrimination performance in illness-death models with interval-censored disease data
|
arXiv:2504.19726v2 Announce Type: replace Abstract: In clinical studies, the illness-death model is often used to describe disease progression. A subject starts disease-free, may develop the disease and then die, or die directly. In clinical practice, disease can only be diagnosed at pre-specified follow-up visits, so the exact time of disease onset is often unknown, resulting in interval-censored data. This study examines the impact of ignoring this interval-censored nature of disease data on the discrimination performance of illness-death models, focusing on the time-specific Area Under the receiver operating characteristic Curve (AUC) in both incident/dynamic and cumulative/dynamic definitions. A simulation study with data simulated from Weibull transition hazards and disease state censored at regular intervals is conducted. Estimates are derived using different methods: the Cox model with a time-dependent binary disease marker, which ignores interval-censoring, and the illness-death model for interval-censored data estimated with three implementations - the piecewise-constant model from the msm package, the Weibull and M-spline models from the SmoothHazard package. These methods are also applied to a dataset of 2232 patients with high-grade soft tissue sarcoma, where the interval-censored disease state is the post-operative development of distant metastases. The results suggest that, in the presence of interval-censored disease times, it is important to account for interval-censoring not only when estimating the parameters of the model but also when evaluating the discrimination performance of the model.
|
https://arxiv.org/abs/2504.19726
|
Academic Papers
|
svg
|
d68faf6d0936898761cd733939c1e850f92060c5eb2a09e43f2145dd44fd44d9
|
2026-02-02T00:00:00-05:00
|
Uncertainty Quantification for Prior-Data Fitted Networks using Martingale Posteriors
|
arXiv:2505.11325v3 Announce Type: replace Abstract: Prior-data fitted networks (PFNs) have emerged as promising foundation models for prediction from tabular data sets, achieving state-of-the-art performance on small to moderate data sizes without tuning. While PFNs are motivated by Bayesian ideas, they do not provide any uncertainty quantification for predictive means, quantiles, or similar quantities. We propose a principled and efficient sampling procedure to construct Bayesian posteriors for such estimates based on Martingale posteriors, and prove its convergence. Several simulated and real-world data examples showcase the uncertainty quantification of our method in inference applications.
|
https://arxiv.org/abs/2505.11325
|
Academic Papers
|
svg
|
d937b449bd00ad5b2c0fc7352d5a0eacb31cca4763aadc90770e94348db7ca7c
|
2026-02-02T00:00:00-05:00
|
Two-Phase Treatment with Noncompliance: Identifying the Cumulative Average Treatment Effect via Multisite Instrumental Variables
|
arXiv:2506.03104v3 Announce Type: replace Abstract: When evaluating a two-phase intervention, the cumulative average treatment effect (ATE) is often the primary causal estimand of interest. However, some individuals who do not respond well to the Phase I treatment may subsequently display noncompliant behaviors. At the same time, exposure to the Phase I treatment is expected to directly influence an individual's potential outcomes, thereby violating the exclusion restriction. Building on an instrumental variable (IV) strategy for multisite trials, we clarify the conditions under which the cumulative ATE of a two-phase treatment can be identified by employing the random assignment of the Phase I treatment as the instrument. Our strategy relaxes both the conventional exclusion restriction and sequential ignorability assumptions. We assess the performance of the new strategy through simulation studies. Additionally, we reanalyze data from the Tennessee class size study, in which students and teachers were randomly assigned to either small or regular class types in kindergarten (Phase I) with noncompliance emerging in Grade 1 (Phase II). Applying our new strategy, we estimate the cumulative ATE of receiving two consecutive years of instruction in a small versus regular class.
|
https://arxiv.org/abs/2506.03104
|
Academic Papers
|
svg
|
9711966819d1448b51aa25e26d7decfbf704eaaf76c21d4963a4cf7347bcd6aa
|
2026-02-02T00:00:00-05:00
|
Post-selection inference with a single realization of a network
|
arXiv:2508.11843v2 Announce Type: replace Abstract: Given a dataset consisting of a single realization of a network, we consider conducting inference on a parameter selected from the data. In particular, we focus on the setting where the parameter of interest is a linear combination of the mean connectivities within and between estimated communities. Inference in this setting poses a challenge, since the communities are themselves estimated from the data. Furthermore, since only a single realization of the network is available, sample splitting is not possible. In this paper, we show that it is possible to split a single realization of a network consisting of $n$ nodes into two (or more) networks involving the same $n$ nodes; the first network can be used to select a data-driven parameter, and the second to conduct inference on that parameter. In the case of weighted networks with Poisson or Gaussian edges, we obtain two independent realizations of the network; by contrast, in the case of Bernoulli edges, the two realizations are dependent, and so extra care is required. We establish the theoretical properties of our estimators, in the sense of confidence intervals that attain the nominal (selective) coverage, and demonstrate their utility in numerical simulations and in application to a dataset representing the relationships among dolphins in Doubtful Sound, New Zealand.
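A compact sketch of the Poisson case of the splitting trick: binomial thinning of each edge weight yields two independent networks that share the same latent structure, one usable for selecting the parameter and one for inference (Gaussian edges admit a similar split; Bernoulli edges do not split independently, as the abstract notes). The thinning probability and network size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n, lam, p = 30, 2.0, 0.5

A = rng.poisson(lam, size=(n, n))    # one observed Poisson-weighted network
A_select = rng.binomial(A, p)        # thinning: A_select ~ Poisson(p * lam)
A_infer = A - A_select               # A_infer ~ Poisson((1-p) * lam), independent of A_select

print(A_select.mean(), A_infer.mean())   # both approximately p*lam = (1-p)*lam = 1.0
```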
|
https://arxiv.org/abs/2508.11843
|
Academic Papers
|
svg
|
446d6ba90ac9875ed5f17505e554c58d3ccc2c02e6daf2d4f64577c2c055a503
|
2026-02-02T00:00:00-05:00
|
A General Framework for Joint Multi-State Models
|
arXiv:2510.07128v3 Announce Type: replace Abstract: Conventional joint modeling approaches generally characterize the relationship between longitudinal biomarkers and discrete event occurrences within terminal, recurring or competing risk settings, thereby offering a limited representation of complex, multi-state trajectories. We propose a general multi-state joint modeling framework that unifies longitudinal biomarker dynamics with multi-state time-to-event processes defined on arbitrary directed graphs. The proposed framework also accommodates nonlinear longitudinal submodels. This formulation encompasses both Markovian and semi-Markovian transition structures, allowing recurrent cycles and terminal absorptions to be naturally represented. The longitudinal and event processes are linked through shared latent structures within nonlinear mixed-effects models, extending classical joint modeling formulations. We derive the complete likelihood and model selection criteria, and develop scalable inference procedures based on stochastic gradient descent to enable high-dimensional and large-scale applications. In addition, we formulate a dynamic prediction framework that provides individualized state-transition probabilities and personalized risk assessments along complex event trajectories. Through simulation and application to the PAQUID cohort, we demonstrate accurate parameter recovery and individualized prediction.
|
https://arxiv.org/abs/2510.07128
|
Academic Papers
|
svg
|
4d40bf61275b3232ea7d24ee5067875e17a9538b317b36a5c540923fa3d063af
|
2026-02-02T00:00:00-05:00
|
Calibrating Decision Robustness via Inverse Conformal Risk Control
|
arXiv:2510.07750v2 Announce Type: replace Abstract: Robust optimization safeguards decisions against uncertainty by optimizing against worst-case scenarios, yet their effectiveness hinges on a prespecified robustness level that is often chosen ad hoc, leading to either insufficient protection or overly conservative and costly solutions. Recent approaches using conformal prediction construct data-driven uncertainty sets with finite-sample coverage guarantees, but they still fix coverage targets a priori and offer little guidance for selecting robustness levels. We propose a new framework that provides distribution-free, finite-sample guarantees on both miscoverage and regret for any family of robust predict-then-optimize policies. Our method constructs valid estimators that trace out the miscoverage--regret Pareto frontier, enabling decision-makers to reliably evaluate and calibrate robustness levels according to their cost--risk preferences. The framework is simple to implement, broadly applicable across classical optimization formulations, and achieves sharper finite-sample performance. This paper offers a principled data-driven methodology for guiding robustness selection and empowers practitioners to balance robustness and conservativeness in high-stakes decision-making.
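A rough sketch of the calibration sweep, using the split-conformal quantile as the uncertainty-set radius and reporting empirical test miscoverage per robustness level; the paper's regret estimator for the downstream robust policy is more involved and is omitted here. All names are illustrative:

```python
import numpy as np

def miscoverage_sweep(scores_cal, scores_test, alphas):
    """For each target level alpha, the radius is the ceil((n+1)(1-alpha))-th
    smallest calibration score (capped at the maximum for tiny n); returns
    (alpha, radius, empirical test miscoverage) triples to trace a frontier."""
    s = np.sort(scores_cal)
    n = len(s)
    rows = []
    for a in alphas:
        k = min(int(np.ceil((n + 1) * (1 - a))), n)
        radius = s[k - 1]
        rows.append((a, radius, float(np.mean(scores_test > radius))))
    return rows
```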
|
https://arxiv.org/abs/2510.07750
|
Academic Papers
|
svg
|
3eaa35cc092e9b794c4c1733e6bd42f2b8d1574831e192f44603f1190e6d3fdc
|
2026-02-02T00:00:00-05:00
|
Physics-Informed Neural Networks and Neural Operators for Parametric PDEs
|
arXiv:2511.04576v3 Announce Type: replace Abstract: PDEs arise ubiquitously in science and engineering, where solutions depend on parameters (physical properties, boundary conditions, geometry). Traditional numerical methods require re-solving the PDE for each parameter, making parameter space exploration prohibitively expensive. Recent machine learning advances, particularly physics-informed neural networks (PINNs) and neural operators, have revolutionized parametric PDE solving by learning solution operators that generalize across parameter spaces. We critically analyze two main paradigms: (1) PINNs, which embed physical laws as soft constraints and excel at inverse problems with sparse data, and (2) neural operators (e.g., DeepONet, Fourier Neural Operator), which learn mappings between infinite-dimensional function spaces and achieve unprecedented generalization. Through comparisons across fluid dynamics, solid mechanics, heat transfer, and electromagnetics, we show neural operators can achieve computational speedups of $10^3$ to $10^5$ relative to traditional solvers for multi-query scenarios, while maintaining comparable accuracy. We provide practical guidance for method selection, discuss theoretical foundations (universal approximation, convergence), and identify critical open challenges: high-dimensional parameters, complex geometries, and out-of-distribution generalization. This work establishes a unified framework for understanding parametric PDE solvers via operator learning, offering a comprehensive, incrementally updated resource for this rapidly evolving field.
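A minimal PINN sketch in PyTorch for the 1D Poisson problem $u''(x) = -\pi^2 \sin(\pi x)$ on $[0,1]$ with $u(0)=u(1)=0$ (exact solution $\sin(\pi x)$), showing the soft-constraint residual loss the abstract describes; this is a toy single-instance solver, not a parametric or operator-learning model:

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)        # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde = (d2u + torch.pi**2 * torch.sin(torch.pi * x)).pow(2).mean()
    bc = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()  # soft BC penalty
    loss = pde + bc
    opt.zero_grad(); loss.backward(); opt.step()
```

A neural operator would instead take the PDE parameters (or the forcing function) as an input and be trained across many instances, which is where the quoted multi-query speedups come from.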
|
https://arxiv.org/abs/2511.04576
|
Academic Papers
|
svg
|
6f7692bec2e013e47139e0f4fd55ca549de82b026c47f26d50eb9c9f0fec49a0
|
2026-02-02T00:00:00-05:00
|
Standardized Descriptive Index for Measuring Deviation and Uncertainty in Psychometric Indicators
|
arXiv:2512.21399v2 Announce Type: replace Abstract: The use of descriptive statistics in pilot testing procedures requires objective, standard diagnostic tools that are feasible for small sample sizes. Current psychometric practice reports item-level statistics, but typically presents the raw descriptives separately rather than consolidating the mean and standard deviation into a single diagnostic tool that directly measures item quality. By leveraging the analytical properties of Cohen's d, this article repurposes it for scale development as a standardized item deviation index, which measures the extent of an item's raw deviation relative to its scale midpoint while accounting for the item's own uncertainty. Analytical properties such as boundedness, scale invariance, and bias are explored to further understand how the index values behave, which will aid future efforts to establish empirical thresholds that characterize redundancy among formative indicators and consistency among reflective indicators.
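One natural reading of the index, sketched under the assumption that it standardizes the item mean's deviation from the scale midpoint by the item's own standard deviation, in direct analogy with Cohen's d (the paper's exact definition may differ):

```python
import numpy as np

def item_deviation_index(responses, midpoint):
    """Cohen's-d-style index: (item mean - scale midpoint) / item SD,
    with ddof=1 appropriate for small pilot samples."""
    x = np.asarray(responses, float)
    return (x.mean() - midpoint) / x.std(ddof=1)

# e.g. a 5-point Likert item whose midpoint is 3
print(item_deviation_index([4, 5, 4, 3, 5, 4], midpoint=3))
```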
|
https://arxiv.org/abs/2512.21399
|
Academic Papers
|
svg
|
da28f7219d683216c2771ebfdba58a3ccf3dcb6cad87c73493b843d8e3e8b2f6
|
2026-02-02T00:00:00-05:00
|
CAOS: Conformal Aggregation of One-Shot Predictors
|
arXiv:2601.05219v2 Announce Type: replace Abstract: One-shot prediction enables rapid adaptation of pretrained foundation models to new tasks using only one labeled example, but lacks principled uncertainty quantification. While conformal prediction provides finite-sample coverage guarantees, standard split conformal methods are inefficient in the one-shot setting due to data splitting and reliance on a single predictor. We propose Conformal Aggregation of One-Shot Predictors (CAOS), a conformal framework that adaptively aggregates multiple one-shot predictors and uses a leave-one-out calibration scheme to fully exploit scarce labeled data. Despite violating classical exchangeability assumptions, we prove that CAOS achieves valid marginal coverage using a monotonicity-based argument. Experiments on one-shot facial landmarking and RAFT text classification tasks show that CAOS produces substantially smaller prediction sets than split conformal baselines while maintaining reliable coverage.
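A simplified sketch of the leave-one-out calibration idea with a uniform aggregate (the paper's adaptive predictor weighting and its exchangeability analysis are omitted); names are illustrative:

```python
import numpy as np

def caos_interval(x_new, labeled, predictors, alpha=0.1):
    """`predictors` is a list of callables x -> yhat (one-shot models);
    `labeled` is the scarce list of (x, y) pairs. Every labeled pair
    contributes a residual score of the aggregate, so no calibration
    split is sacrificed."""
    agg = lambda x: float(np.mean([f(x) for f in predictors]))
    scores = sorted(abs(y - agg(x)) for x, y in labeled)
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    q = scores[k - 1]
    center = agg(x_new)
    return center - q, center + q
```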
|
https://arxiv.org/abs/2601.05219
|
Academic Papers
|
svg
|
e6a800644b95befef6c287f82b5b91aa4a3667970fe764d97312300ade4bfd53
|
2026-02-02T00:00:00-05:00
|
Variational autoencoder for inference of nonlinear mixed effect models based on ordinary differential equations
|
arXiv:2601.17400v2 Announce Type: replace Abstract: We propose a variational autoencoder (VAE) approach for parameter estimation in nonlinear mixed-effects models based on ordinary differential equations (NLME-ODEs) using longitudinal data from multiple subjects. In moderate dimensions, likelihood-based inference via the stochastic approximation EM algorithm (SAEM) is widely used, but it relies on Markov chain Monte Carlo (MCMC) to approximate subject-specific posteriors. As model complexity increases or observations per subject become sparse and irregular, performance often deteriorates due to a complex, multimodal likelihood surface, which may lead to MCMC convergence difficulties. We instead estimate parameters by maximizing the evidence lower bound (ELBO), a regularized surrogate for the marginal likelihood. A VAE with a shared encoder amortizes inference of subject-specific random effects, avoiding per-subject optimization and the use of MCMC. Beyond pointwise estimation, we quantify parameter uncertainty using an observed-information-based variance estimator and verify that practical identifiability of the model parameters is not compromised by nuisance parameters introduced in the encoder. We evaluate the method in three simulation case studies (pharmacokinetics, humoral response to vaccination, and TGF-$\beta$ activation dynamics in asthmatic airways) and on a real-world antibody kinetics dataset, comparing against SAEM baselines.
|
https://arxiv.org/abs/2601.17400
|
Academic Papers
|
svg
|
43b1b782f0186888b601b573055f94f7d50a85fafb6c75bb3c4b1b41881d8fe4
|
2026-02-02T00:00:00-05:00
|
M-SGWR: Multiscale Similarity and Geographically Weighted Regression
|
arXiv:2601.19888v2 Announce Type: replace Abstract: The first law of geography is a cornerstone of spatial analysis, emphasizing that nearby and related locations tend to be more similar. However, defining what constitutes "near" and "related" remains challenging, as different phenomena exhibit distinct spatial patterns. Traditional local regression models, such as Geographically Weighted Regression (GWR) and Multiscale GWR (MGWR), quantify spatial relationships solely through geographic proximity. In an era of globalization and digital connectivity, however, geographic proximity alone may be insufficient to capture how locations are interconnected. To address this limitation, we propose a new multiscale local regression framework, termed M-SGWR, which characterizes spatial interaction across two dimensions: geographic proximity and attribute (variable) similarity. For each predictor, geographic and attribute-based weight matrices are constructed separately and then combined using an optimized parameter, alpha, which governs their relative contribution to local model fitting. Analogous to variable-specific bandwidths in MGWR, the optimal alpha varies by predictor, allowing the model to flexibly account for geographic, mixed, or non-spatial (remote similarity) effects. Results from two simulation experiments and one empirical application demonstrate that M-SGWR consistently outperforms GWR, SGWR, and MGWR across all goodness-of-fit metrics.
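A sketch of the local weighted fit with a single mixing parameter alpha and Gaussian kernels (both assumptions; the paper optimizes a separate alpha per predictor, analogous to per-variable bandwidths in MGWR):

```python
import numpy as np

def msgwr_fit_at(i, X, y, coords, alpha, bw_geo, bw_sim):
    """Local WLS at location i with weights mixing geographic proximity
    and attribute similarity: w = alpha * w_geo + (1 - alpha) * w_sim."""
    d_geo = np.linalg.norm(coords - coords[i], axis=1)
    d_sim = np.linalg.norm(X - X[i], axis=1)
    w = alpha * np.exp(-(d_geo / bw_geo) ** 2) \
        + (1 - alpha) * np.exp(-(d_sim / bw_sim) ** 2)
    Xd = np.column_stack([np.ones(len(X)), X])  # add intercept
    XtW = Xd.T * w                              # weighted design
    return np.linalg.solve(XtW @ Xd, XtW @ y)   # local coefficients
```

alpha = 1 recovers a purely geographic (GWR-style) fit and alpha = 0 a purely similarity-based one, which is the interpolation the abstract describes.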
|
https://arxiv.org/abs/2601.19888
|
Academic Papers
|
svg
|
ef8b3845f2c519d0056438cd5b70649c788f80a49f3efddf35ed6e13c83fea43
|
2026-02-02T00:00:00-05:00
|
Probing Entanglement and Symmetries in Random States Using a Superconducting Quantum Processor
|
arXiv:2601.22224v1 Announce Type: new Abstract: Quantum many-body systems display an extraordinary degree of complexity, yet many of their features are universal: they depend not on microscopic details, but on a few fundamental physical aspects such as symmetries. A central challenge is to distill these universal characteristics from model-specific ones. Random quantum states sampled from a uniform distribution, the Haar measure, provide a powerful framework for capturing this typicality. Here, we experimentally study the entanglement and symmetries of random many-body quantum states generated by evolving simple product states under ergodic Floquet models. We find excellent agreement with the predictions from the Haar-random state ensemble. First, we measure the R\'enyi-2 entanglement entropy as a function of the subsystem size, observing the Page curve. Second, we probe the subsystem symmetries using entanglement asymmetry. Finally, we measure the moments of partially transposed reduced density matrices obtained by tracing out part of the system in the generated ensembles, thereby revealing distinct entanglement phases. Our results offer an experimental perspective on the typical entanglement and symmetries of many-body quantum systems.
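The Page-curve measurement has a direct numerical counterpart: sample a Haar-random pure state (a normalized complex Gaussian vector) and compute $S_2 = -\log \mathrm{Tr}\,\rho_A^2$ across bipartitions. A self-contained sketch:

```python
import numpy as np

def renyi2(psi, n_a, n):
    """Renyi-2 entropy of the first n_a qubits of an n-qubit pure state."""
    M = psi.reshape(2**n_a, 2**(n - n_a))      # bipartition A|B
    rho_a = M @ M.conj().T                     # partial trace over B
    return -np.log(np.real(np.trace(rho_a @ rho_a)))

n, rng = 10, np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)                     # Haar-random pure state
for n_a in range(1, n):
    print(n_a, round(renyi2(psi, n_a, n), 3))  # rises then falls: Page curve
```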
|
https://arxiv.org/abs/2601.22224
|
Academic Papers
|
svg
|
0d86c1350b848763d0cd725ac7ff741adc57926d163b41a784abbc9237cbd102
|
2026-02-02T00:00:00-05:00
|
The Photonic Foundation of Temperature: Mechanisms of Thermal Equilibrium and Entropy Production
|
arXiv:2601.22247v1 Announce Type: new Abstract: We examine the physical foundations of temperature and thermal equilibrium by identifying photons as the fundamental agents that establish and maintain the characteristic energy scale $E_c = k_B T$ in ordinary matter. While classical thermodynamics successfully describes equilibrium phenomenologically, the realization of thermal distributions requires concrete microscopic mechanisms provided by quantum electrodynamics. We derive the Boltzmann distribution from a minimal differential scaling postulate and show that sustaining thermal equilibrium demands continuous photon exchange with average energy $\langle h\nu \rangle = 2.701\,E_c$, quantifying the energetic throughput necessary to counter radiative losses. Entropy production is shown to arise naturally from inelastic photon scattering that converts high-energy photons into many lower-energy quanta, thereby increasing accessible microstates and driving irreversible evolution toward equilibrium. We establish physical criteria distinguishing genuine thermal equilibrium from purely formal temperature assignments and demonstrate that the classical notion of an infinite thermal reservoir emerges as an effective idealization within a hierarchy of dynamically maintained photon baths. This photonic framework complements phenomenological thermodynamics by providing its microscopic foundation and clarifies the physical meaning of temperature as an emergent collective property of photon-mediated energy exchange.
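The quoted coefficient 2.701 is the mean energy per photon of a Planck (blackbody) photon gas, $\langle h\nu\rangle = \frac{3\zeta(4)}{\zeta(3)}\,k_B T \approx 2.701\,k_B T$; a one-line numerical check:

```python
from scipy.special import zeta

# Mean photon energy over the Planck spectrum: the ratio of the E^3 and E^2
# Bose integrals gives Gamma(4) zeta(4) / (Gamma(3) zeta(3)) = 3 zeta(4) / zeta(3).
print(3 * zeta(4) / zeta(3))  # ~2.7012, the coefficient in the abstract
```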
|
https://arxiv.org/abs/2601.22247
|
Academic Papers
|
svg
|
6c9186a4bf8260ec7dfaa23b17defae58a9d5ae8a3583788ff4e6a90cddf7b88
|
2026-02-02T00:00:00-05:00
|
Entanglement and discord classification via deep learning
|
arXiv:2601.22253v1 Announce Type: new Abstract: In this work, we propose a deep learning-based approach for quantum entanglement and discord classification using convolutional autoencoders. We train models to distinguish entangled from separable bipartite states for $d \times d$ systems with local dimension $d$ ranging from two to seven, which enables identification of bound and free entanglement. Through extensive numerical simulations across various quantum state families, we demonstrate that our model achieves high classification accuracy. Furthermore, we leverage the learned representations to generate samples of bound entangled states, the rarest form of entanglement and notoriously difficult to construct analytically. We separately train the same convolutional autoencoder architecture for detecting the presence of quantum discord and show that the model also exhibits high accuracy while requiring significantly less training time.
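Labels for such training data are commonly generated with the positive-partial-transpose (PPT) test, which is exact for 2x2 and 2x3 systems and only necessary in higher dimensions, where PPT entangled states are precisely the bound entangled ones; a sketch of this assumed labeling step (the paper may label states differently):

```python
import numpy as np

def is_ppt(rho, d):
    """Partial transpose on subsystem B of a d x d bipartite density matrix
    rho of shape (d*d, d*d); a negative eigenvalue certifies entanglement."""
    pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
    return bool(np.min(np.linalg.eigvalsh(pt)) >= -1e-12)
```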
|
https://arxiv.org/abs/2601.22253
|
Academic Papers
|
svg
|
d4727db85cb399b51bf26a263bf93452e4c9626810a378ecaf8f4339e4f705fa
|
2026-02-02T00:00:00-05:00
|
Some properties of coherent states with singular complex matrix argument
|
arXiv:2601.22258v1 Announce Type: new Abstract: In this paper we study the properties of a new version of coherent states whose argument is a linear combination of two special singular 2 x 2 matrices, each having a single nonzero element equal to 1, with two complex labeling variables as expansion coefficients. We show that this new version of coherent states satisfies all the conditions imposed on coherent states, both for pure states and for mixed (thermal) states characterized by the density operator. As applications, we examine the connection between these coherent states and the notions of qubits and von Neumann entropy.
|
https://arxiv.org/abs/2601.22258
|
Academic Papers
|
svg
|
aa810cc14ceca5528a6d2a56aae75ce0fa0f4c1b3b57d3eeb6ba518cd9d6bc72
|
2026-02-02T00:00:00-05:00
|
Local-oscillator-agnostic squeezing detection
|
arXiv:2601.22291v1 Announce Type: new Abstract: We address the problem of measuring nonclassicality in continuous-variable bosonic systems without having access to a known reference signal. To this end, we construct broader classes of criteria for nonclassicality which allow us to investigate quantum phenomena regardless of the quantumness of selected subsystems. Such witnesses are based on the notion of partial normal ordering. This approach is applied to balanced homodyne detection using arbitrary, potentially nonclassical local oscillator states, yet revealing only the probed signal's quantumness. Our framework is compared to standard techniques, and the robustness and enhanced sensitivity of our approach are demonstrated. Therefore, a widely applicable framework, well suited for applications in quantum metrology and quantum information, is derived to assess the quantum features of a photonic system when a well-defined coherent laser is not available as a reference state in the physical domain under study.
|
https://arxiv.org/abs/2601.22291
|
Academic Papers
|
svg
|
22e5e328c1ee667735d96db33b882be39bb19c05e4b8b5005a7fb2e082b25c9d
|
2026-02-02T00:00:00-05:00
|
Manjushri: A Tool for Equivalence Checking of Quantum Circuits
|
arXiv:2601.22372v1 Announce Type: new Abstract: Verifying whether two quantum circuits are equivalent is a central challenge in the compilation and optimization of quantum programs. We introduce \textsc{Manjushri}, a new automated framework for scalable quantum-circuit equivalence checking. \textsc{Manjushri} uses local projections as discriminative circuit fingerprints, implemented with weighted binary decision diagrams (WBDDs), yielding a compact and efficient symbolic representation of quantum behavior. We present an extensive experimental evaluation that, for random 1D Clifford+$T$ circuits, explores the trade-off between \textsc{Manjushri} and \textsc{ECMC}, a tool for equivalence checking based on a substantially different approach. \textsc{Manjushri} is much faster up to depth 30 (with the crossover point varying from 39--49, depending on the number of qubits and whether the input circuits are equivalent or inequivalent): when inputs are equivalent, \textsc{Manjushri} is about 10$\times$ faster (or more); when inputs are inequivalent, \textsc{Manjushri} is about 8$\times$ faster (or more). For both kinds of equivalence-checking outcomes, \textsc{ECMC}'s success rate out to depth 50 is impressive on 32- and 64-qubit circuits: on such circuits, \textsc{ECMC} is almost uniformly successful. However, \textsc{ECMC} struggled on 128-qubit circuits for some depths. \textsc{Manjushri} is almost uniformly successful out to about depth 38, before tailing off to about 75\% at depth 50 (falling to 0\% at depth 48 for 128-qubit circuits that are equivalent). These results establish that \textsc{Manjushri} is a practical and scalable solution for large-scale quantum-circuit verification, and would be the preferred choice unless clients need to check equivalence of circuits of depth $>$38.
|
https://arxiv.org/abs/2601.22372
|
Academic Papers
|
svg
|
b7c64700fea1c16402fdc74854e0ff3ed68155ad3305d4eaf8d7980a9bcbb82e
|
2026-02-02T00:00:00-05:00
|
Dicke States for Accelerated Two Two-Level Atoms
|
arXiv:2601.22479v1 Announce Type: new Abstract: We explore the formation of Dicke states. A system consisting of two two-level atoms located in the right Rindler wedge is investigated to determine the conditions under which the superradiant or subradiant state can be formed. The dynamics of N two-level atoms forming a symmetric state is also analyzed, showing that the probability of exciting any one atom of a collection of N atoms is related to the probability of exciting a single atom. We derive the analytical expression for the joint excitation probability, which demonstrates the interference effect. These findings provide new insights into the behavior of quantum systems in non-inertial frames and contribute to the broader understanding of relativistic quantum information theory.
|
https://arxiv.org/abs/2601.22479
|
Academic Papers
|
svg
|
287eb5a746ae8f5ca9d2e340b9a8908d9dc63e804115a315bdac127cc3e1a4a2
|
2026-02-02T00:00:00-05:00
|
Quantum-Enhanced Sensing Enabled by Scrambling-Induced Genuine Multipartite Entanglement
|
arXiv:2601.22503v1 Announce Type: new Abstract: Quantum sensing leverages quantum resources to surpass the standard quantum limit, yet many existing protocols rely on the preparation of complex entangled states and Hamiltonian engineering, posing challenges for universality and scalability. Here, we report an experimental realization of a universal protocol, known as Butterfly Metrology, proposed in [arXiv:2411.12794], demonstrating a scrambling-based approach for quantum-enhanced sensing on a superconducting quantum processor. By exploiting many-body information scrambling, we observe quantum-enhanced sensitivity to an encoded phase beyond the standard quantum limit, with a scaling within a factor of two of the Heisenberg limit for system sizes of up to 10 qubits. Importantly, we experimentally establish a connection between the enhanced sensitivity and the dynamics of the out-of-time-order correlator (OTOC), and show that the buildup of scrambling-induced genuine multipartite entanglement underlies the observed sensitivity enhancement. Our results demonstrate a scalable and practical approach for quantum-enhanced sensing in interacting many-body quantum systems.
|
https://arxiv.org/abs/2601.22503
|
Academic Papers
|
svg
|
51fe9ecfcc3e35c0d41ecab37d534106a8a70c7dafbf08d9c0b1732df052e0a3
|
2026-02-02T00:00:00-05:00
|
Analysis of self-thermalization dynamics in the Bose-Hubbard model by using the pseudoclassical approach
|
arXiv:2601.22553v1 Announce Type: new Abstract: We analyze the self-thermalization dynamics of the $M$-site Bose-Hubbard model in terms of the single-particle density matrix that is calculated by using the pseudoclassical approach. It is shown that a weak inter-particle interaction, which suffices to convert the integrable system of non-interacting bosons into a chaotic system, has a negligible effect on the thermal density matrix given by the Bose-Einstein distribution. This opens the door for equilibration, where two coupled Bose-Hubbard systems that are initially in different thermal states relax to the same thermal state. When we couple these two subsystems using a lattice of length $L\ll M$, we numerically calculate the quasi-stationary current of Bose particles across the lattice and show that its magnitude is consistent with the solution of the master equation for the boundary-driven $L$-site Bose-Hubbard model.
|
https://arxiv.org/abs/2601.22553
|
Academic Papers
|
svg
|
9ff2b17dac39083108c1755e356bd92dfda44e3d95ae68fb9620a611d635e551
|
2026-02-02T00:00:00-05:00
|
Towards Sample Efficient Entanglement Classification for 3 and 4 Qubit Systems: A Tailored CNN-BiLSTM Approach
|
arXiv:2601.22562v1 Announce Type: new Abstract: Accurate classification of multipartite entanglement in high-dimensional quantum systems is crucial for advancing quantum communication and information processing. However, conventional methods are resource-intensive, and even many machine-learning-based approaches necessitate large training datasets, creating a significant experimental bottleneck for data acquisition. To address this challenge, we propose a hybrid neural network architecture integrating Convolutional and Bidirectional Long Short-Term Memory networks (CNN-BiLSTM). This design leverages CNNs for local feature extraction and BiLSTMs for sequential dependency modeling, enabling robust feature learning from minimal training data. We investigate two fusion paradigms: Architecture 1 (flattening-based) and Architecture 2 (dimensionality-transforming). When trained on only 100 samples, Architecture 2 maintains classification accuracies exceeding 90% for both 3-qubit and 4-qubit systems, demonstrating rapid loss convergence within tens of epochs. Under full-data conditions (400,000 samples), both architectures achieve accuracies above 99.97%. Comparative benchmarks reveal that our CNN-BiLSTM models, especially Architecture 2, consistently outperform standalone CNNs, BiLSTMs, and MLPs in low-data regimes, albeit with increased training time. These results demonstrate that the tailored CNN-BiLSTM fusion significantly alleviates the experimental data acquisition burden, offering a practical pathway toward scalable entanglement verification in complex quantum systems.
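An illustrative PyTorch skeleton of the CNN-BiLSTM fusion (layer sizes, the flattened input encoding of the state, and the three-class head are assumptions, not the paper's exact Architecture 1 or 2):

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """1D conv feature extractor -> BiLSTM over the feature sequence ->
    classification head over entanglement classes."""
    def __init__(self, in_ch=1, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                    # x: (batch, in_ch, length)
        h = self.conv(x)                     # (batch, 64, length)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, length, 128)
        return self.head(h[:, -1])           # logits from the last step

print(CNNBiLSTM()(torch.randn(8, 1, 16)).shape)  # torch.Size([8, 3])
```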
|
https://arxiv.org/abs/2601.22562
|
Academic Papers
|
svg
|
ace4a10e61bf0332dae7e3309ff4fd062e00c1ad3ef0c3904cefccbc6fbe85c8
|
2026-02-02T00:00:00-05:00
|
Two-parameter bipartite entanglement measure
|
arXiv:2601.22568v1 Announce Type: new Abstract: Entanglement concurrence is an important bipartite entanglement measure that has found wide applications in quantum technologies. In this work, inspired by unified entropy, we introduce a two-parameter family of entanglement measures, referred to as the unified $(q,s)$-concurrence. Both the standard entanglement concurrence and the recently proposed $q$-concurrence emerge as special cases within this family. By combining the positive partial transposition and realignment criteria, we derive an analytical lower bound for this measure for arbitrary bipartite mixed states, revealing a connection to strong separability criteria. Explicit expressions are obtained for the unified $(q,s)$-concurrence in the cases of isotropic and Werner states under the constraint $q>1$ and $qs\geq 1$. Furthermore, we explore the monogamy properties of the unified $(q,s)$-concurrence for $q\geq 2$, $0\leq s\leq 1$ and $1\leq qs\leq 3$, in qubit systems. In addition, we derive an entanglement polygon inequality for the unified $(q,s)$-concurrence with $q\geq 1$ and $qs\geq 1$, which manifests the relationship among all the marginal entanglements in any multipartite qudit system.
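For reference, the standard unified $(q,s)$ entropy that underlies such constructions, together with the $q$-concurrence it is said to generalize; the paper's exact normalization of the unified $(q,s)$-concurrence may differ:

```latex
% Unified (q,s) entropy (q > 0, q \neq 1, s \neq 0); the limit s -> 0 gives
% the Renyi entropy and s = 1 gives the Tsallis entropy:
E_{q,s}(\rho) = \frac{\left(\operatorname{Tr}\rho^{q}\right)^{s} - 1}{(1-q)\,s}
% q-concurrence of a pure bipartite state, with \rho_A the reduced state:
\qquad C_{q}\left(|\psi\rangle\right) = 1 - \operatorname{Tr}\rho_{A}^{q}
```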
|
https://arxiv.org/abs/2601.22568
|
Academic Papers
|
svg
|
d8c5b7d5cd4d268e139ca6bfd44fdb9c0968dccc314787b680cf23ccb2730329
|
2026-02-02T00:00:00-05:00
|
Multipartite entanglement measures based on the thermodynamic framework
|
arXiv:2601.22583v1 Announce Type: new Abstract: In this work, we introduce a unified method to characterize and measure multipartite entanglement using the framework of thermodynamics. A family of new entanglement measures is proposed: \textit{ergotropic-gap concentratable entanglement}. Furthermore, we establish that ergotropic-gap concentratable entanglement constitutes a well-defined entanglement measure within a specific parameter regime, satisfying key properties including continuity, majorization monotonicity and monogamy. We demonstrate the utility of this measure by showing it effectively distinguishes between multi-qubit Greenberger-Horne-Zeilinger states and W states. It also proves effective in detecting entanglement in specific classes of four-partite star quantum network states.
|
https://arxiv.org/abs/2601.22583
|
Academic Papers
|
svg
|
5b3b2c280ba9b725b0bf17e7b5045ab7313c1afe17b6c5e1d5fbd0c1bca440b5
|
2026-02-02T00:00:00-05:00
|
A complex-linear reformulation of Hamilton--Jacobi theory and the emergence of quantum structure
|
arXiv:2601.22697v1 Announce Type: new Abstract: Classical mechanics admits multiple equivalent formulations, from Newton's equations to the variational Lagrange-Hamilton framework and the scalar Hamilton-Jacobi (HJ) theory. In the HJ formulation, classical ensembles evolve through the continuity equation for a real density $\rho = R^{2}$ coupled to Hamilton's principal function $S$. Here we develop a complementary formulation, the Hamilton-Jacobi-Schr\"odinger (HJS) theory, by embedding the pair $(R,S)$ into a single complex field. Starting from a completely general complex ansatz $\psi = f(R,S) e^{i g(R,S)}$, and imposing two minimal structural requirements, we obtain a unique map $\psi = R e^{iS/\kappa}$ together with a linear HJS equation whose $|\kappa| \to 0$ limit reproduces the HJ formulation exactly. Remarkably, when $\mathrm{Re}(\kappa)\neq 0$, essential features of quantum mechanics, including superposition, operator algebra, commutators, the Heisenberg uncertainty principle, Born's rule, and unitary evolution, arise naturally as consistency conditions. HJS thus provides a unified mathematical viewpoint in which classical and quantum dynamics appear as different limits of a single underlying structure.
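The $|\kappa| \to 0$ claim follows the familiar Madelung-type computation: substituting the map into a linear Schrödinger-form equation splits into a continuity equation plus the Hamilton-Jacobi equation with an $O(\kappa^2)$ quantum-potential correction (sketched here for a single particle in a potential $V$; the paper's contribution is that the general ansatz and two structural requirements single out this map):

```latex
\psi = R\,e^{iS/\kappa}, \qquad
i\kappa\,\partial_t \psi = -\frac{\kappa^2}{2m}\nabla^2\psi + V\psi
\;\;\Longrightarrow\;\;
\begin{cases}
\partial_t\!\left(R^2\right) + \nabla\!\cdot\!\left(R^2\,\dfrac{\nabla S}{m}\right) = 0,\\[6pt]
\partial_t S + \dfrac{|\nabla S|^2}{2m} + V = \dfrac{\kappa^2}{2m}\,\dfrac{\nabla^2 R}{R},
\end{cases}
```

so the right-hand side of the second equation vanishes as $|\kappa| \to 0$ and the classical HJ pair is recovered exactly.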
|
https://arxiv.org/abs/2601.22697
|
Academic Papers
|
svg
|
cd8183bec3f412e008e5fdb6a0f2ca3c233a2746e1e16ef98f9c84fac7a4f43e
|
2026-02-02T00:00:00-05:00
|
Orders of magnitude runtime reduction in quantum error mitigation
|
arXiv:2601.22785v1 Announce Type: new Abstract: Quantum error mitigation (QEM) infers noiseless expectation values by combining outcomes from intentionally modified, noisy variants of a target quantum circuit. Unlike quantum error correction, QEM requires no additional hardware resources and is therefore routinely employed in experiments on contemporary quantum processors. A central limitation of QEM is its substantial sampling overhead, which necessitates long execution times where device noise may drift, potentially compromising the reliability of standard mitigation protocols. QEM strategies based on agnostic noise amplification (ANA) are intrinsically resilient to such noise variations, but their sampling cost remains a major practical bottleneck. Here we introduce a mitigation framework that combines virtual noise scaling with a layered mitigation architecture, yielding orders of magnitude reduction in runtime overhead compared to conventional zero-noise extrapolation post-processing. The proposed approach is compatible with dynamic circuits and can be seamlessly integrated with error detection and quantum error correction schemes. In addition, it naturally extends to ANA-based mitigation of mid-circuit measurements and preparation errors. We validate our post-processing approach by applying it to previously reported experimental data, where we observe a substantial improvement in mitigation efficiency and accuracy.
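For context, the conventional zero-noise-extrapolation post-processing used as the baseline: measure the observable at several amplified noise levels and extrapolate the fit back to zero noise (a polynomial fit is one common choice; the data below are made up):

```python
import numpy as np

def zne(noise_factors, expvals, degree=2):
    """Fit a polynomial to (noise amplification, expectation value) pairs
    and evaluate the fit at zero noise."""
    return float(np.polyval(np.polyfit(noise_factors, expvals, degree), 0.0))

print(zne([1.0, 1.5, 2.0, 3.0], [0.80, 0.71, 0.63, 0.50]))  # extrapolated value
```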
|
https://arxiv.org/abs/2601.22785
|
Academic Papers
|
svg
|
0017ba32a9a2a7e99dea9e92d4295b6af0c252f4f22ae0745a0a1b8546d59523
|
2026-02-02T00:00:00-05:00
|
Steady-State Emission of Quantum-Correlated Light in the Telecom Band from a Single Atom
|
arXiv:2601.22821v1 Announce Type: new Abstract: We propose and investigate a scheme for the steady-state emission of quantum-correlated, telecom-band light from a single multilevel atom. By appropriately tuning the frequency of a pair of lasers, a two-photon transition is continually driven to an atomic excited state that emits photons at the desired wavelength. We show that resonantly coupling a cavity mode to the telecom transition can enhance the rate of emission while retaining the antibunched counting statistics that are characteristic of atomic light sources. We also explore coupling a second, independent cavity mode to the atom, which increases the telecom emission rate and introduces quantum correlations between the cavity modes. A model for the hyperfine structure of a single cesium atom is then described and numerically integrated to demonstrate the viability of implementing the scheme with a modern cavity QED system.
|
https://arxiv.org/abs/2601.22821
|
Academic Papers
|
svg
|
e215ee81db3bb240b8f55e450c1c185ab76ae6439e9d45f32638e22c963e4a51
|
2026-02-02T00:00:00-05:00
|
Are Bell's conditions for local realism general enough?
|
arXiv:2601.22833v1 Announce Type: new Abstract: Bell's conditions for local realism are critically revisited. In particular, for optical experiments, I criticize Bell's proposed response of detectors to signals as extremely idealized. More physical conditions are proposed, under which a realistic local model of an optical experiment becomes possible that violates the Clauser-Horne (Bell) inequality. The possibility rests on the existence of a coincidence-time loophole in the experiments.
|
https://arxiv.org/abs/2601.22833
|
Academic Papers
|
svg
|
099fc03d35c6243c9ddf9646946fa775670f0a3ed646e62dbb47da539f07fd36
|
2026-02-02T00:00:00-05:00
|
Dynamics of states of infinite quantum systems as a cornerstone of the second law of thermodynamics
|
arXiv:2601.22863v1 Announce Type: new Abstract: We improve on our version of the second law of thermodynamics as a deterministic theorem for quantum spin systems in two basic aspects. The first concerns the general statement of the second law: spontaneous changes in an adiabatically closed system will always be in the direction of increasing mean entropy, which rises to a maximal value. Two specific examples concern the transition from pure to mixed states in two different universality classes of dynamics in one dimension: one is the exponential model, the other the Dyson model; the dynamics of the latter exhibits strong graphical evidence of quantum chaos, as a consequence of the results of Albert and Kiessling on the Cloitre function.
|
https://arxiv.org/abs/2601.22863
|
Academic Papers
|
svg
|
7db17c2e44c05c7f4dbe2d914196df82fc9b7982059aa6cd40f228104a2092e3
|
2026-02-02T00:00:00-05:00
|
Fast magic state preparation by gauging higher-form transversal gates in parallel
|
arXiv:2601.22939v1 Announce Type: new Abstract: Magic states are a foundational resource for universal quantum computation. To survive in a realistic noisy environment, magic states must be prepared fault-tolerantly and protected by a quantum error-correcting code. The recent discovery of highly efficient quantum low-density parity-check codes, together with efficient logic gates, lays the groundwork for low-overhead fault-tolerant quantum computation. This motivates the search for fast and parallel protocols for logical magic state preparation to enable universal quantum computation. Here, we introduce a fast code surgery procedure that performs a fault-tolerant measurement of many transversal logic gates in parallel. This is achieved by performing a generalized gauging measurement on a quantum code that supports a higher-form transversal gate. The time overhead of our procedure is constant, and the qubit overhead is linear. The procedure inherits fault-tolerance properties from the base code and the structure of the higher-form transversal gate. When applied to codes that support higher-form Clifford gates, our procedure achieves fast and fault-tolerant preparation of many magic states in parallel. This motivates the search for good quantum low-density parity-check codes that support higher-form Clifford gates.
|
https://arxiv.org/abs/2601.22939
|
Academic Papers
|
svg
|
4323fab3e5900d30b2fccf3577a2b9bebe9421f5e0c8856272bf1e1fcf20c0b8
|
2026-02-02T00:00:00-05:00
|
Dicke superposition probes for noise-resilient Heisenberg and super-Heisenberg Metrology
|
arXiv:2601.23043v1 Announce Type: new Abstract: Phase sensing with entangled multiqubit states in the presence of noise is a central theme of modern quantum metrology. The present work investigates Dicke state superposition probes for quantum phase sensing under parameter encoding generated by one- and two-body interaction Hamiltonians. We identify a class of N-qubit Dicke superposition states that exhibit near-Heisenberg scaling of the quantum Fisher information under unitary encodings generated by one-body interaction Hamiltonians, while maintaining significantly enhanced robustness to dephasing noise compared to GHZ, W-superposition, and balanced Dicke states. For two-body interactions, Dicke superposition probes optimizing the quantum Fisher information are identified, and their performance under phase-damping, amplitude-damping, and global depolarizing noise is explored. Within this family, certain Dicke superpositions are found to combine super-Heisenberg scaling with improved resilience to phase damping relative to Fisher-information-optimal probes. These results establish tailored near-optimal Dicke-state superposition probes as versatile and noise-resilient resources for Heisenberg and super-Heisenberg quantum phase sensing governed by one- and two-body interactions.
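For a pure probe evolved as $U = e^{-iH\theta}$ the quantum Fisher information reduces to $F_Q = 4(\langle H^2\rangle - \langle H\rangle^2)$; a sketch that checks the Heisenberg value $N^2$ for a GHZ probe under collective $J_z$ encoding (a Dicke-superposition probe would be plugged into the same function):

```python
import numpy as np

def qfi_pure(psi, H):
    """QFI of a pure state under exp(-i H theta): 4 * Var_psi(H)."""
    h1 = np.vdot(psi, H @ psi).real
    h2 = np.vdot(psi, H @ (H @ psi)).real
    return 4 * (h2 - h1**2)

N = 4
z = np.array([1.0, -1.0])
diag_jz = 0.5 * sum(
    np.kron(np.kron(np.ones(2**k), z), np.ones(2**(N - k - 1)))
    for k in range(N))                       # diagonal of collective Jz
Jz = np.diag(diag_jz)
ghz = np.zeros(2**N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(qfi_pure(ghz, Jz))                     # 16.0 = N**2, the Heisenberg value
```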
|
https://arxiv.org/abs/2601.23043
|
Academic Papers
|
svg
|