| id | title | abstract | publish_date | update_date | categories | authors |
|---|---|---|---|---|---|---|
| 2601.00567 | Improving Scientific Document Retrieval with Academic Concept Index | Adapting general-domain retrievers to scientific domains is challenging due to the scarcity of large-scale domain-specific relevance annotations and the substantial mismatch in vocabulary and information needs. Recent approaches address these issues through two independent directions that leverage large language models (LLMs): (1) generating synthetic queries for fine-tuning, and (2) generating auxiliary contexts to support relevance matching. However, both directions overlook the diverse academic concepts embedded within scientific documents, often producing redundant or conceptually narrow queries and contexts. To address this limitation, we introduce an academic concept index, which extracts key concepts from papers and organizes them guided by an academic taxonomy. This structured index serves as a foundation for improving both directions. First, we enhance the synthetic query generation with concept coverage-based generation (CCQGen), which adaptively conditions LLMs on uncovered concepts to generate complementary queries with broader concept coverage. Second, we strengthen the context augmentation with concept-focused auxiliary contexts (CCExpand), which leverages a set of document snippets that serve as concise responses to the concept-aware CCQGen queries. Extensive experiments show that incorporating the academic concept index into both query generation and context augmentation leads to higher-quality queries, better conceptual alignment, and improved retrieval performance. | 2026-01-02 | 2026-01-05 | ["cs.IR", "cs.AI"] | Jeyun Lee, Junhyoung Lee, Wonbin Kweon, Bowen Jin, Yu Zhang, Susik Yoon, Dongha Lee, Hwanjo Yu, Jiawei Han, Seongku Kang |
| 2509.23323 | LLM Interpretability with Identifiable Temporal-Instantaneous Representation | Despite Large Language Models' remarkable capabilities, understanding their internal representations remains challenging. Mechanistic interpretability tools such as sparse autoencoders (SAEs) were developed to extract interpretable features from LLMs but lack temporal dependency modeling, instantaneous relation representation, and more importantly theoretical guarantees, undermining both the theoretical foundations and the practical confidence necessary for subsequent analyses. While causal representation learning (CRL) offers theoretically grounded approaches for uncovering latent concepts, existing methods cannot scale to LLMs' rich conceptual space due to inefficient computation. To bridge the gap, we introduce an identifiable temporal causal representation learning framework specifically designed for LLMs' high-dimensional concept space, capturing both time-delayed and instantaneous causal relations. Our approach provides theoretical guarantees and demonstrates efficacy on synthetic datasets scaled to match real-world complexity. By extending SAE techniques with our temporal causal framework, we successfully discover meaningful concept relationships in LLM activations. Our findings show that modeling both temporal and instantaneous conceptual relationships advances the interpretability of LLMs. | 2026-01-02 | 2026-01-06 | ["cs.LG"] | Xiangchen Song, Jiaqi Sun, Zijian Li, Yujia Zheng, Kun Zhang |
| 2601.00953 | Ultra Heavy Cosmic Rays from Magnetars | Matter ejected from the neutron star crust during a magnetar giant flare will undergo $r$-process nucleosynthesis during decompression. Ultra heavy ions ($Z \gg 26$) can be accelerated to cosmic ray energies by the reverse shock as the ejecta decelerates by interacting with the ambient environment. We investigate the contribution of magnetars to the local ultra heavy cosmic ray flux using semi-analytic Galactic transport calculations, demonstrating that they may be significant contributors throughout Galactic history depending on the giant flare rate and ion acceleration efficiency. Although neutron star mergers inject orders of magnitude more energy into cosmic rays, they rarely occur within the spallation-limited propagation horizon for ultra heavy species, reducing their local contributions. In contrast to lighter nuclei, which are dominantly accelerated by supernovae, the SuperTIGER experiment has presented tentative evidence for a distinct contribution to the cosmic ray abundances near and above the first $r$-process peak ($Z \approx 35\text{--}56$). We argue that current abundance data are consistent with either a magnetar giant flare or neutron star merger origin for these species. Measurements with single element resolution through the third $r$-process peak, expected from the upcoming TIGERISS experiment, may discriminate between these sources for the heaviest cosmic rays. | 2026-01-02 | 2026-01-06 | ["astro-ph.HE"] | Anirudh Patel, Rebecca Diesing, Brian Metzger |
| 2509.14057 | Navigating the safe harbor paradox in human-machine systems | When deploying artificial skills, decision-makers often assume that layering human oversight is a safe harbor that mitigates the risks of full automation in high-complexity tasks. This paper formally challenges the economic validity of this widespread assumption, arguing that the true bottom-line economic utility of a human-machine skill policy is highly contingent on situational and design factors. To investigate this gap, we develop an in-silico exploratory framework for policy analysis based on Monte Carlo simulations to quantify the economic impact of skill policies in the execution of tasks presenting varying levels of complexity across diverse setups. Our results show that in complex scenarios, a human-machine strategy can yield the highest economic utility, but only if genuine augmentation is achieved. In contrast, when failing to realize this synergy, the human-machine approach can perform worse than either the machine-exclusive or the human-exclusive policy, actively destroying value under the pressure of costs that are not sufficiently compensated by performance gains. This finding points to a key implication for decision-makers: when the context is complex and critical, simply allocating human and machine skills to a task may be insufficient, and far from being a silver-bullet solution or a low-risk compromise. Rather, it is a critical opportunity to boost competitiveness that demands a strong organizational commitment to enabling augmentation. Also, our findings show that improving the cost-effectiveness of machine skills over time, while useful, does not replace the fundamental need to focus on achieving augmentation when surprise is the norm, even when machines become more effective than humans in handling uncertainty. | 2026-01-02 | 2026-01-05 | ["econ.GN", "cs.AI", "q-fin.EC"] | Riccardo Zanardelli |
| 2506.11877 | Robust Molecular Property Prediction via Densifying Scarce Labeled Data | A widely recognized limitation of molecular prediction models is their reliance on structures observed in the training data, resulting in poor generalization to out-of-distribution compounds. Yet in drug discovery, the compounds most critical for advancing research often lie beyond the training set, making the bias toward the training data particularly problematic. This mismatch introduces substantial covariate shift, under which standard deep learning models produce unstable and inaccurate predictions. Furthermore, the scarcity of labeled data, stemming from the onerous and costly nature of experimental validation, further exacerbates the difficulty of achieving reliable generalization. To address these limitations, we propose a novel bilevel optimization approach that leverages unlabeled data to interpolate between in-distribution (ID) and out-of-distribution (OOD) data, enabling the model to learn how to generalize beyond the training distribution. We demonstrate significant performance gains on challenging real-world datasets with substantial covariate shift, supported by t-SNE visualizations highlighting our interpolation method. | 2026-01-02 | 2026-01-05 | ["cs.LG", "cs.AI"] | Jina Kim, Jeffrey Willette, Bruno Andreis, Sung Ju Hwang |
| 2601.00993 | WildIng: A Wildlife Image Invariant Representation Model for Geographical Domain Shift | Wildlife monitoring is crucial for studying biodiversity loss and climate change. Camera trap images provide a non-intrusive method for analyzing animal populations and identifying ecological patterns over time. However, manual analysis is time-consuming and resource-intensive. Deep learning, particularly foundation models, has been applied to automate wildlife identification, achieving strong performance when tested on data from the same geographical locations as their training sets. Yet, despite their promise, these models struggle to generalize to new geographical areas, leading to significant performance drops. For example, training an advanced vision-language model, such as CLIP with an adapter, on an African dataset achieves an accuracy of 84.77%. However, this performance drops significantly to 16.17% when the model is tested on an American dataset. This limitation partly arises because existing models rely predominantly on image-based representations, making them sensitive to geographical data distribution shifts, such as variation in background, lighting, and environmental conditions. To address this, we introduce WildIng, a Wildlife image Invariant representation model for geographical domain shift. WildIng integrates text descriptions with image features, creating a representation more robust to geographical domain shifts. By leveraging textual descriptions, our approach captures consistent semantic information, such as detailed descriptions of the appearance of the species, improving generalization across different geographical locations. Experiments show that WildIng enhances the accuracy of foundation models such as BioCLIP by 30% under geographical domain shift conditions. We evaluate WildIng on two datasets collected from different regions, namely America and Africa. The code and models are publicly available at https://github.com/Julian075/CATALOG/tree/WildIng. | 2026-01-02 | 2026-01-06 | ["cs.CV", "cs.AI"] | Julian D. Santamaria, Claudia Isaza, Jhony H. Giraldo |
| 2511.13067 | Analysis of the hidden-charm pentaquark candidates in the $J/\psi\Xi$ mass spectrum via the QCD sum rules | In this work, we construct the color $\bar{\mathbf{3}}\bar{\mathbf{3}}\bar{\mathbf{3}}$ type local five-quark currents with the light quarks $qss$ in the flavor octet, and study the $qssc\bar{c}$ pentaquark states via the QCD sum rules in a comprehensive way; we emphasize that we obtain two light-flavor octets. We obtain the mass spectrum of the hidden-charm-doubly-strange pentaquark states with the isospin-spin-parity $IJ^{P}=\frac{1}{2}{\frac{1}{2}}^-$, $\frac{1}{2}{\frac{3}{2}}^-$ and $\frac{1}{2}{\frac{5}{2}}^-$, which can be confronted with the experimental data in the future, especially in the process $\Xi_b^- \to P_{css}^-\pi \to J/\psi\Xi^-\pi$. As a byproduct, we observe that the lowest hidden-charm pentaquark states are not of the scalar-diquark-scalar-diquark-antiquark type; it is therefore wrong to refer to the scalar and axialvector diquarks as the "good" and "bad" diquarks, respectively. | 2026-01-02 | 2026-01-05 | ["hep-ph"] | Zhi-Gang Wang, Yang Liu |
| 2511.16956 | Distortion of charge distribution due to internal electric fields described by the drift-diffusion semiconductor model | In this paper, the initial value problem for the Debye--Hückel drift-diffusion equation is studied. This equation was introduced as a model describing plasma behavior and is also known as a simulation model of MOSFETs, so its solution describes a charge density. It is well known that, if the initial density is localized, the density becomes radially symmetric due to the linear diffusion. Consequently, the electric field is also governed by a radially symmetric potential, and its effects are expected to act radially symmetrically. The main result expresses the electric field and its effect on the charge density as concrete functions. It also describes the distortion of symmetry and the shift of scale of the density due to the internal electric field. Unlike the historical paper by Escobedo and Zuazua and subsequent works, the main result captures stronger nonlinearity than the logarithmic shift. | 2026-01-02 | 2026-01-05 | ["math.AP", "math-ph", "math.MP"] | Masakazu Yamamoto |
| 2601.00566 | Low Rank Comes with Low Security: Gradient Assembly Poisoning Attacks against Distributed LoRA-based LLM Systems | Low-Rank Adaptation (LoRA) has become a popular solution for fine-tuning large language models (LLMs) in federated settings, dramatically reducing update costs by introducing trainable low-rank matrices. However, when integrated with frameworks like FedIT, LoRA introduces a critical vulnerability: clients submit $A$ and $B$ matrices separately, while only their product $AB$ determines the model update, yet this composite is never directly verified. We propose Gradient Assembly Poisoning (GAP), a novel attack that exploits this blind spot by crafting individually benign $A$ and $B$ matrices whose product yields malicious updates. GAP operates without access to training data or inter-client coordination and remains undetected by standard anomaly detectors. We identify four systemic vulnerabilities in LoRA-based federated systems and validate GAP across LLaMA, ChatGLM, and GPT-2. GAP consistently induces degraded or biased outputs while preserving surface fluency, reducing BLEU by up to 14.5\%, increasing factual and grammatical errors by over 800\%, and maintaining 92.6\% long-form response length. These results reveal a new class of stealthy, persistent threats in distributed LoRA fine-tuning. | 2026-01-02 | 2026-01-05 | ["cs.CR"] | Yueyan Dong, Minghui Xu, Qin Hu, Yinhao Xiao, Qi Luo, Yechao Zhang, Yue Zhang, Xiuzhen Cheng |
| 2511.22871 | Which-crystal information and wave-particle duality in induced-coherence interferometry | We provide an operational reinterpretation of wave-particle complementarity in the low-gain Zou-Wang-Mandel (ZWM) induced-coherence interferometer. In the low-gain limit, each photon pair is emitted by one of two nonlinear crystals, preparing nonorthogonal conditional idler states that encode which-crystal information. While previous studies inferred distinguishability indirectly from the signal visibility with undetected idler photons, we show that the idler states naturally define a binary quantum hypothesis-testing problem. By performing optimal measurements on the idler, we analyze this task using both zero-error unambiguous discrimination (Ivanovic-Dieks-Peres, IDP) and minimum-error discrimination (the Helstrom bound). We show that the signal visibility equals the optimal inconclusive probability of unambiguous discrimination, while the Helstrom bound gives the optimal probability of identifying the emitting crystal. While the signal visibility is an ensemble-averaged expectation value, the IDP and Helstrom strategies correspond to optimal single-photon decision measurements on the idler. The decision problem concerns inferring a past source event from a present measurement outcome. This establishes wave-particle duality in induced coherence as a manifestation of optimal quantum decision strategies rather than a purely geometric constraint. We further extend the analysis to the presence of thermal photons introduced in the object arm, which render the conditional idler states mixed. In this case, both the visibility and the achievable distinguishability are reduced, reflecting the fundamental limitations imposed by mixed-state discrimination. The approach is model-independent and applies to general two-path interferometers with markers. | 2026-01-02 | 2026-01-05 | ["quant-ph"] | L. Theerthagiri |
| 2207.11982 | Quantum-critical transport in marginal Fermi liquids | We use the Kubo response functions to calculate the electrical and thermal conductivity and Seebeck coefficient at low temperatures and frequencies in the quantum-critical region for fermions on a lattice. The theory uses scattering of the fermions with the previously derived collective fluctuations due to topological defects of the quantum XY model coupled to fermions. The microscopic model is applicable to the fluctuations of the loop-current order in cuprates as well as to a class of quasi-two-dimensional heavy-fermion and other metallic antiferromagnets, and proposed recently also for the possible loop-current order in Moiré twisted bi-layer graphene and bilayer WSe$_2$. All these metals have a linear-in-temperature electrical resistivity in the quantum-critical region of their phase diagrams, often termed ``Planckian'' resistivity. The solution of the Kubo equation for transport shows that vertex renormalizations to the external fields, besides those caused by Aslamazov-Larkin (A-L) processes, are absent. The A-L contribution appears as an Umklapp scattering matrix, which gives a temperature-independent multiplicative factor for the electrical resistivity but does not affect the thermal conductivity. We also show that the mass renormalization, which gives a logarithmic enhancement of the marginal Fermi-liquid specific heat, does not appear in the electrical resistivity nor, more remarkably, in the thermal conductivity. On the other hand, the mass renormalization $\propto \ln \omega_c/T$ appears in the Seebeck coefficient. We also discuss in detail the conservation laws, which play a crucial role in all transport properties. We calculate exactly the numerical coefficients of the transport properties for a circular Fermi surface. The leading temperature dependences are shown to remain the same for a general Fermi surface, but the numerical coefficients are then too unwieldy to calculate. | 2026-01-02 | 2026-01-05 | ["cond-mat.str-el"] | Hideaki Maebashi, Chandra M. Varma |
| 2508.06319 | Towards Balanced Behavior Cloning from Imbalanced Datasets | Robots should be able to learn complex behaviors from human demonstrations. In practice, these human-provided datasets are inevitably imbalanced: i.e., the human demonstrates some subtasks more frequently than others. State-of-the-art methods default to treating each element of the human's dataset as equally important. So if -- for instance -- the majority of the human's data focuses on reaching a goal, and only a few state-action pairs move to avoid an obstacle, the learning algorithm will place greater emphasis on goal reaching. More generally, misalignment between the relative amounts of data and the importance of that data causes fundamental problems for imitation learning approaches. In this paper we analyze and develop learning methods that automatically account for mixed datasets. We formally prove that imbalanced data leads to imbalanced policies when each state-action pair is weighted equally; these policies emulate the most represented behaviors, and not the human's complex, multi-task demonstrations. We next explore algorithms that rebalance offline datasets (i.e., reweight the importance of different state-action pairs) without human oversight. Reweighting the dataset can enhance the overall policy performance. However, there is no free lunch: each method for autonomously rebalancing brings its own pros and cons. We formulate these advantages and disadvantages, helping other researchers identify when each type of approach is most appropriate. We conclude by introducing a novel meta-gradient rebalancing algorithm that addresses the primary limitations behind existing approaches. Our experiments show that dataset rebalancing leads to better downstream learning, improving the performance of general imitation learning algorithms without requiring additional data collection. See our project website: https://collab.me.vt.edu/data_curation/. | 2026-01-02 | 2026-01-06 | ["cs.RO"] | Sagar Parekh, Heramb Nemlekar, Dylan P. Losey |
| 2601.00731 | One-dimensional and time-dependent modelling of complex organic molecules in protostars | Complex organic molecules (COMs), the building blocks of life, have been extensively detected under various physical conditions, from quiescent clouds to star-forming regions. They therefore serve as excellent tracers for the local physical and chemical properties of these environments. Proper models capable of grasping the formation and destruction of COMs are crucial to understanding observations. However, given that distinct COMs may be detected from different locations and at varying times, we extend UCLCHEM, a gas-grain chemical code, to a one-dimensional, time-dependent model tailored to protostars. In this update, we examine two stages of a protostar, the prestellar and heating stages, incorporating a simple radiative mechanism for both the internal and external radiation fields of the cloud. This approach relies on the key assumption that the dust and gas temperatures are completely coupled. Ultimately, we apply the updated version of our model to interpret observations obtained with both single-dish telescopes and interferometers under varying conditions, including the SgrB2(N1) hot core, massive Galactic clumps and a hot core in Orion. We show that our model reproduces these observations well, highlighting that some COMs are positioned at higher temperatures in the envelope, whereas others originate from lower temperatures, potentially leading to misinterpretation when using a single-point model. In the particular case of SgrB2(N1), the best model indicates that the cosmic-ray ionisation rate significantly exceeds the value typically used for the standard interstellar medium. Our model thus serves as an efficient computational tool, particularly useful for better insights into observations of COMs. | 2026-01-02 | 2026-01-05 | ["astro-ph.GA"] | Le Ngoc Tram, Serena Viti, Katarzyna M. Dutkowska, Gijs Vermariën, Tobias Dijkhuis, Audrey Coutens, Timea Csengeri, Thiem Hoang |
| 2511.05257 | SU(n)-structures through quotient by torus actions | We show that if $(X,g,J,\omega)$ is a Kähler manifold with an $SU(n+s)$-structure and a Hamiltonian holomorphic action of a compact torus $T^s$, then the usual symplectic quotient $Y$ inherits an $SU(n)$-structure, provided special $1$-forms on $X$, called twist forms, exist. We then give several applications of our results: on complex projective spaces, on cones over Fano Kähler-Einstein manifolds and on toric $\mathbb{C}\mathbb{P}^1$ bundles. We also study the geometry behind these structures in the case $n=3$. | 2026-01-02 | 2026-01-05 | ["math.DG"] | Quentin Peres |
| 2512.24917 | Frequent subgraph-based persistent homology for graph classification | Persistent homology (PH) has recently emerged as a powerful tool for extracting topological features. Integrating PH into machine learning and deep learning models enhances topology awareness and interpretability. However, most PH methods on graphs rely on a limited set of filtrations, such as degree-based or weight-based filtrations, which overlook richer features like recurring information across the dataset and thus restrict expressive power. In this work, we propose a novel graph filtration called Frequent Subgraph Filtration (FSF), which is derived from frequent subgraphs and produces stable and information-rich frequency-based persistent homology (FPH) features. We study the theoretical properties of FSF and provide both proofs and experimental validation. Beyond persistent homology itself, we introduce two approaches for graph classification: an FPH-based machine learning model (FPH-ML) and a hybrid framework that integrates FPH with graph neural networks (FPH-GNNs) to enhance topology-aware graph representation learning. Our frameworks bridge frequent subgraph mining and topological data analysis, offering a new perspective on topology-aware feature extraction. Experimental results show that FPH-ML achieves competitive or superior accuracy compared with kernel-based and degree-based filtration methods. When integrated into graph neural networks, FPH yields relative performance gains ranging from 0.4 to 21 percent, with improvements of up to 8.2 percentage points over GCN and GIN backbones across benchmarks. | 2026-01-02 | 2026-01-05 | ["cs.LG", "math.AT"] | Xinyang Chen, Amaël Broustet, Guanyuan Zeng, Cheng He, Guoting Chen |
| 2512.07908 | Symmetry-Based Quantum Codes Beyond the Pauli Group | Typical stabilizer codes aim to solve the general problem of fault-tolerance without regard for the structure of a specific system. By incorporating a broader representation-theoretic perspective, we provide a generalized framework that allows the code designer to take this structure into account. For any representation of a finite group, we produce a quantum code with a code space invariant under the group action, providing passive error mitigation against errors belonging to the image of the representation. Furthermore, errors outside this scope are detected and diagnosed by performing a projective measurement onto the isotypic components corresponding to irreducible representations of the chosen group, effectively generalizing syndrome extraction to symmetry-resolved quantum measurements. We show that all stabilizer codes are a special case of this construction, including qudit stabilizer codes, and show that there is a natural one logical qubit code associated to the dihedral group. Thus we provide a unifying framework for existing codes while simultaneously facilitating symmetry-aware codes tailored to specific systems. | 2026-01-02 | 2026-01-05 | ["quant-ph", "math-ph", "math.MP"] | Zachary P. Bradshaw, Margarite L. LaBorde, Dillon Montero |
| 2601.00600 | Limiting Behavior of Non-Autonomous Stochastic Reversible Selkov Lattice Systems Driven by Locally Lipschitz Lévy Noises | This work investigates the long-term distributional behavior of the reversible Selkov lattice systems defined on the set $\mathbb{Z}$ and driven by locally Lipschitz \emph{Lévy noises}, which possess two pairs of oppositely signed nonlinear terms and whose nonlinear couplings can grow polynomially with any order $p \geq 1$. Firstly, based on the global-in-time well-posedness in $L^{2}(Ω, \ell^2 \times \ell^2)$, we define a \emph{continuous} non-autonomous dynamical system (NDS) on the metric space $(\mathcal{P}_{2}(\ell^2 \times \ell^2), d_{\mathcal{P}(\ell^2 \times \ell^2)})$, where $d_{\mathcal{P}(\ell^2 \times \ell^2)}$ is the dual-Lipschitz distance on $\mathcal{P}(\ell^2 \times \ell^2)$, the space of probability measures on $\ell^2 \times \ell^2$. Specifically, we establish that this non-autonomous dynamical system admits a unique pullback measure attractor, characterized via measure-valued complete solutions and orbits in the sense of Wang (DOI.org/10.1016/j.jde.2012.05.015). Moreover, when the deterministic external forcing terms are periodic in time, we demonstrate that the pullback measure attractors are also periodic. We also study the upper semicontinuity of pullback measure attractors as $(ε_1, ε_2, γ_1, γ_2) \rightarrow (0, 0, 0, 0)$. The main difficulty in proving the pullback asymptotic compactness of the NDS in $(\mathcal{P}_{2}(\ell^2 \times \ell^2), d_{\mathcal{P}(\ell^2 \times \ell^2)})$ is caused by the lack of compactness in infinite-dimensional lattice systems, which is overcome by using uniform tail-ends estimates. Moreover, the inherent structure of the Selkov system precludes the possibility of any unidirectional dissipative influence arising from the interaction between the two coupled equations, thereby obstructing the emergence of a dominant energy-dissipation mechanism along a single directional pathway. | 2026-01-02 | 2026-01-05 | ["math.AP"] | Guofu Li, Jianxin Wu, Yunshun Wu |
| 2601.00694 | A Vision-and-Knowledge Enhanced Large Language Model for Generalizable Pedestrian Crossing Behavior Inference | Existing paradigms for inferring pedestrian crossing behavior, ranging from statistical models to supervised learning methods, demonstrate limited generalizability and perform inadequately on new sites. Recent advances in Large Language Models (LLMs) offer a shift from numerical pattern fitting to semantic, context-aware behavioral reasoning, yet existing LLM applications lack domain-specific adaptation and visual context. This study introduces Pedestrian Crossing LLM (PedX-LLM), a vision-and-knowledge enhanced framework designed to transform pedestrian crossing inference from site-specific pattern recognition to generalizable behavioral reasoning. By integrating LLaVA-extracted visual features with textual data and transportation domain knowledge, PedX-LLM fine-tunes a LLaMA-2-7B foundation model via Low-Rank Adaptation (LoRA) to infer crossing decisions. PedX-LLM achieves 82.0% balanced accuracy, outperforming the best statistical and supervised learning methods. Results demonstrate that the vision-augmented module contributes a 2.9% performance gain by capturing the built environment and integrating domain knowledge yields an additional 4.1% improvement. To evaluate generalizability across unseen environments, cross-site validation was conducted using site-based partitioning. The zero-shot PedX-LLM configuration achieves 66.9% balanced accuracy on five unseen test sites, outperforming the baseline data-driven methods by at least 18 percentage points. Incorporating just five validation examples via few-shot learning to PedX-LLM further elevates the balanced accuracy to 72.2%. PedX-LLM demonstrates strong generalizability to unseen scenarios, confirming that vision-and-knowledge-enhanced reasoning enables the model to mimic human-like decision logic and overcome the limitations of purely data-driven methods. | 2026-01-02 | 2026-01-05 | ["cs.AI"] | Qingwen Pu, Kun Xie, Hong Yang, Guocong Zhai |
| 2601.00672 | Sparse FEONet: A Low-Cost, Memory-Efficient Operator Network via Finite-Element Local Sparsity for Parametric PDEs | In this paper, we study the finite element operator network (FEONet), an operator-learning method for parametric problems, originally introduced in J. Y. Lee, S. Ko, and Y. Hong, Finite Element Operator Network for Solving Elliptic-Type Parametric PDEs, SIAM J. Sci. Comput., 47(2), C501-C528, 2025. FEONet realizes the parameter-to-solution map on a finite element space and admits a training procedure that does not require training data, while exhibiting high accuracy and robustness across a broad class of problems. However, its computational cost increases and accuracy may deteriorate as the number of elements grows, posing notable challenges for large-scale problems. In this paper, we propose a new sparse network architecture motivated by the structure of the finite elements to address this issue. Through extensive numerical experiments, we show that the proposed sparse network achieves substantial improvements in computational cost and efficiency while maintaining comparable accuracy. We also establish theoretical results demonstrating that the sparse architecture can approximate the target operator effectively and provide a stability analysis ensuring reliable training and prediction. | 2026-01-02 | 2026-01-05 | ["math.NA", "cs.LG", "cs.NA"] | Seungchan Ko, Jiyeon Kim, Dongwook Shin |
| 2601.00530 | Cost-Performance Analysis of Cloud-Based Retail Point-of-Sale Systems: A Comparative Study of Google Cloud Platform and Microsoft Azure | Although there is little empirical research on platform-specific performance for retail workloads, the digital transformation of the retail industry has accelerated the adoption of cloud-based Point-of-Sale (POS) systems. This paper presents a systematic, repeatable comparison of POS workload deployments on Google Cloud Platform (GCP) and Microsoft Azure using real-time API endpoints and open-source benchmarking code. Using free-tier cloud resources, we offer a transparent methodology for POS workload evaluation that small retailers and researchers can use. Our approach measures important performance metrics like response latency, throughput, and scalability while estimating operational costs based on actual resource usage and current public cloud pricing, since there is no direct billing under free-tier usage. All the tables and figures in this study are generated directly from code outputs, ensuring that the experimental data and the reported results are consistent. Our analysis shows that GCP achieves 23.0% faster response times at baseline load, while Azure shows 71.9% higher cost efficiency for steady-state operations. We examine the architectural components that lead to these differences and provide a practical framework for merchants considering cloud point-of-sale implementation. This study establishes a robust, open benchmarking methodology for retail cloud applications and offers the first comprehensive, code-driven comparison of workloads unique to point-of-sale systems across leading cloud platforms. | 2026-01-02 | 2026-01-05 | ["cs.DC", "cs.SE"] | Ravi Teja Pagidoju |
2601.00712
|
Universal Outlier Hypothesis Testing via Mean- and Median-Based Tests
|
Universal outlier hypothesis testing refers to a hypothesis testing problem where one observes a large number of length-$n$ sequences -- the majority of which are distributed according to the typical distribution $π$ and a small number are distributed according to the outlier distribution $μ$ -- and one wishes to decide which of these sequences are outliers without having knowledge of $π$ and $μ$. In contrast to previous works, in this paper it is assumed that both the number of observation sequences and the number of outlier sequences grow with the sequence length. In this case, the typical distribution $π$ can be estimated by computing the mean over all observation sequences, provided that the number of outlier sequences is sublinear in the total number of sequences. It is demonstrated that, in this case, one can achieve the error exponent of the maximum likelihood test that has access to both $π$ and $μ$. However, this mean-based test performs poorly when the number of outlier sequences is proportional to the total number of sequences. For this case, a median-based test is proposed that estimates $π$ as the median of all observation sequences. It is demonstrated that the median-based test again achieves the error exponent of the maximum likelihood test that has access to both $π$ and $μ$, but only with probability approaching one. To formalize this case, the typical error exponent -- similar to the typical random coding exponent introduced in the context of random coding for channel coding -- is proposed.
| 2026-01-02
| 2026-01-05
|
[
"cs.IT",
"math.IT"
] |
Bernhard C. Geiger, Tobias Koch, Josipa Mihaljević, Maximilian Toller
|
2601.00568
|
Capital allocation and tail central moments for the multivariate normal mean-variance mixture distribution
|
Capital allocation is a procedure used to assess the risk contributions of individual risk components to the total risk of a portfolio. While the conditional tail expectation (CTE)-based capital allocation is arguably the most popular capital allocation method, its inability to reflect important tail behaviour of losses necessitates a more accurate approach. In this paper, we introduce a new capital allocation method based on the tail central moments (TCM), generalising the tail covariance allocation informed by the tail variance. We develop analytical expressions of the TCM as well as the TCM-based capital allocation for the class of normal mean-variance mixture distributions, which is widely used to model asymmetric and heavy-tailed data in finance and insurance. As demonstrated by a numerical analysis, the TCM-based capital allocation captures several significant patterns in the tail region of equity losses that remain undetected by the CTE, enhancing the understanding of the tail risk contributions of risk components.
| 2026-01-02
| 2026-01-05
|
[
"q-fin.PM"
] |
Enrique Calderín-Ojeda, Yuyu Chen, Soon Wei Tan
|
2601.00563
|
ASCNet: Research on all-sky camera images classification at the Muztagh-ata site
|
Cloud coverage is one of the crucial elements of site testing in astronomy. All-sky camera (ASC) images are beneficial for our research on cloud coverage. In this paper, we propose ASCNet, an innovative model specifically designed for classifying nighttime ASC images collected at the Muztagh-ata site from 2022 March to 2024 June. ASCNet integrates ResNet34 with an ASCModule, which employs Depthwise Dilated Convolution and embeds lightweight Squeeze-and-Excitation attention within its branches to extract fine-grained texture information from the luminance channel. The data set is partitioned by category, with 70% of images assigned to the training set and 30% to the test set. The model's performance is assessed by comparing its predictions on the test set with manually annotated labels, yielding a consistency rate of 92.7%. All evaluation metrics of ASCNet are as follows: Accuracy 92.66%, Precision 83.26%, Recall 84.25%, and F1-Score 83.67%, and both ablation and comparative experiments demonstrate significant superiority over other models. A confusion matrix is utilized to analyze the differences between manual classification and model classification. The statistical results demonstrate the model's excellent classification performance and its robust generalization ability, illustrating that ASCNet has potential for application in future astronomical image classifications.
| 2026-01-02
| 2026-01-05
|
[
"astro-ph.IM"
] |
Siqi Wang, Qi Fan, Wenbo Gu, Haozhi Wang, AYZADA Jumahali, Lixian Shen, Daiping Zhang, Liyong Liu, Ali Esamdin
|
2601.00619
|
High-Temperature Deformation Behavior of Co-Free Non-Equiatomic CrMnFeNi Alloy
|
Cobalt-free high-entropy alloys (HEAs) have garnered interest for nuclear structural applications due to their good mechanical performance, thermal stability, and resistance to radiation-induced degradation, while avoiding long-lived Co radioisotopes. This study presents an experimental and computational investigation of the plastic deformation behavior of a non-equiatomic CrMnFeNi alloy, designed to maintain fcc phase stability over a wide temperature range and to balance stacking fault (SF) energies for enhanced strain hardening and ductility. Tensile tests reveal a temperature-dependent reduction in mechanical strength, attributed to thermally activated deformation mechanisms and microstructural evolution. Molecular dynamics simulations of single- and polycrystals capture dislocation activity, SF formation, and twin nucleation as a function of strain and temperature. Electron backscatter diffraction (EBSD) confirms twin formation and grain boundary activity. Schmid factor mapping is used to interpret local slip activity and anisotropic deformation behavior. The absence of Co leads to enhanced high-temperature strength compared to the Cantor alloy.
| 2026-01-02
| 2026-01-05
|
[
"cond-mat.mtrl-sci",
"physics.comp-ph"
] |
F. J. Dominguez-Gutierrez, M. Frelek-Kozak, G. Markovic, M. A. Strozyk, A. Daramola, M. Traversier, A. Fraczkiewicz, A. Zaborowska, T. Khvan, I. Jozwik, L. Kurpaska
|
2601.00538
|
Parametrized Sharing for Multi-Agent Hybrid DRL for Multiple Multi-Functional RISs-Aided Downlink NOMA Networks
|
Multi-functional reconfigurable intelligent surface (MF-RIS) is conceived to improve communication efficiency, thanks to the extended signal coverage of its active RIS capability and its self-sustainability from energy harvesting (EH). We investigate an architecture of multiple MF-RISs assisting non-orthogonal multiple access (NOMA) downlink networks. We formulate an energy efficiency (EE) maximization problem by optimizing power allocation, transmit beamforming, and the MF-RIS configurations of amplitudes, phase-shifts, and EH ratios, as well as the positions of the MF-RISs, while satisfying constraints on available power, user rate requirements, and the self-sustainability property. We design a parametrized sharing scheme for multi-agent hybrid deep reinforcement learning (PMHRL), where multi-agent proximal policy optimization (PPO) and a deep Q-network (DQN) handle the continuous and discrete variables, respectively. Simulation results demonstrate that the proposed PMHRL achieves the highest EE compared to other benchmarks, including cases without parametrized sharing, pure PPO, and pure DQN. Moreover, the proposed multi-MF-RIS-aided downlink NOMA achieves the highest EE compared to scenarios with no EH/amplification, traditional RISs, and deployment without RISs/MF-RISs under different multiple access schemes.
| 2026-01-02
| 2026-01-05
|
[
"eess.SP",
"cs.AI"
] |
Chi-Te Kuo, Li-Hsiang Shen, Jyun-Jhe Huang
|
2508.07451
|
Cyclic Division Algebras of Odd Prime Degree are never Amitsur-Small
|
A division ring $D$ is Amitsur-Small if for every $n$ and every maximal left ideal $I$ in $D[x_1,\dots,x_n]$, $I \cap D[x_1,\dots,x_{n-1}]$ is maximal in $D[x_1,\dots,x_{n-1}]$. The goal of this note is to prove that cyclic division algebras of odd prime degree over their center are never Amitsur-Small.
| 2026-01-02
| 2026-01-06
|
[
"math.RA",
"math.AG"
] |
Adam Chapman, Ilan Levin, Marco Zaninelli
|
2601.00591
|
Coordination-driven magic numbers in protonated argon clusters
|
The structural properties of rare-gas clusters can be primarily described by a simple sphere packing model or by pairwise interactions. Remarkably, adding a single proton yields a large set of magic numbers that has remained unexplained. In this Letter, we unravel their origin by combining quantum Monte Carlo techniques with many-body ab initio potentials that correctly capture the proton's coordination environment. Thanks to this approach, we find that argon atoms are mainly localized around the classical minimum, resulting in a particularly rigid behavior in stark contrast to lighter rare-gas clusters. Moreover, as cluster size increases, we identify a clear structural transition from many-body coordination-driven stability to a regime dominated by two-body interactions, reflecting a reshaping of the underlying potential energy landscape.
| 2026-01-02
| 2026-01-05
|
[
"physics.atm-clus",
"cond-mat.mes-hall",
"quant-ph"
] |
Saajid Chowdhury, María Judit Montes de Oca-Estévez, Florian Foitzik, Elisabeth Gruber, Paul Scheier, Pablo Villarreal, Rita Prosmiti, Tomás González-Lezana, Jesús Pérez-Ríos
|
2412.06343
|
Diffusion on the circle and a stochastic correlation model
|
We develop diffusion models for time-varying correlation using stochastic processes defined on the unit circle. Specifically, we study Brownian motion on the circle and the von Mises diffusion, and propose their use as continuous-time models for correlation dynamics. The von Mises process, introduced by Kent (1975) as a characterization of the von Mises distribution in circular statistics, does not have a known closed-form transition density, which has limited its use in likelihood-based inference. We derive an accurate analytical approximation to the transition density of the von Mises diffusion, enabling practical likelihood-based estimation. We study inference for discretely observed circular diffusions, establish consistency and asymptotic normality of the resulting estimators, and propose a stochastic correlation model for financial applications. The methodology is illustrated through simulation studies and empirical applications to equity-foreign exchange market data.
| 2026-01-02
| 2026-01-05
|
[
"math.ST",
"q-fin.MF",
"stat.TH"
] |
Sourav Majumdar, Arnab Kumar Laha
|
2601.00954
|
Tidal perturbations of an extreme mass ratio inspiral around a Kerr black hole
|
We determine the metric of a Kerr black hole subject to external tidal fields using metric reconstruction techniques. Working within the Newman-Penrose formalism, we solve the Teukolsky master equation for static, quadrupolar modes associated with a slowly varying tidal environment, and reconstruct the corresponding metric perturbation in the outgoing radiation gauge. As an application, we derive the secular Hamiltonian governing the motion of a test particle in the tidally deformed Kerr spacetime and investigate long-term tidal effects relevant to extreme-mass-ratio inspirals. In particular, we compute tidal-induced shifts of the innermost stable circular orbit and the light ring. We find that these tidal corrections are strongly spin dependent, with significantly larger effects for retrograde orbits around rapidly rotating black holes. Our results provide a fully analytic framework for studying tidal interactions and secular dynamics in rotating black-hole spacetimes, with direct applications to gravitational-wave modeling and tests of gravity in the strong-field regime.
| 2026-01-02
| 2026-01-06
|
[
"gr-qc",
"hep-th"
] |
Marta Cocco, Gianluca Grignani, Troels Harmark, Marta Orselli, David Pereñiguez, Maarten van de Meent
|
2601.00737
|
Stochastic Actor-Critic: Mitigating Overestimation via Temporal Aleatoric Uncertainty
|
Off-policy actor-critic methods in reinforcement learning train a critic with temporal-difference updates and use it as a learning signal for the policy (actor). This design typically achieves higher sample efficiency than purely on-policy methods. However, critic networks tend to overestimate value estimates systematically. This is often addressed by introducing a pessimistic bias based on uncertainty estimates. Current methods employ ensembling to quantify the critic's epistemic uncertainty (uncertainty due to limited data and model ambiguity) to scale pessimistic updates. In this work, we propose a new algorithm called Stochastic Actor-Critic (STAC) that incorporates temporal (one-step) aleatoric uncertainty (uncertainty arising from stochastic transitions, rewards, and policy-induced variability in Bellman targets) to scale the pessimistic bias in temporal-difference updates, rather than relying on epistemic uncertainty. STAC uses a single distributional critic network to model the temporal return uncertainty, and applies dropout to both the critic and actor networks for regularization. Our results show that pessimism based on a distributional critic alone suffices to mitigate overestimation, and naturally leads to risk-averse behavior in stochastic environments. Introducing dropout further improves training stability and performance by means of regularization. With this design, STAC achieves improved computational efficiency using a single distributional critic network.
| 2026-01-02
| 2026-01-05
|
[
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY"
] |
Uğurcan Özalp
|
2601.00773
|
Variable Importance in Generalized Linear Models -- A Unifying View Using Shapley Values
|
Variable importance in regression analyses is of considerable interest in a variety of fields. There is no unique method for assessing variable importance. However, a substantial share of the available literature employs Shapley values, either explicitly or implicitly, to decompose a suitable goodness-of-fit measure, typically the classical $R^2$ in the linear regression model. Beyond linear regression, there is no generally accepted goodness-of-fit measure, only a variety of pseudo-$R^2$s. We formulate and discuss the desirable properties of goodness-of-fit measures that enable Shapley values to be interpreted in terms of relative, and even absolute, importance. We suggest using a pseudo-$R^2$ based on the Kullback-Leibler divergence, the Kullback-Leibler $R^2$, which has a convenient form for generalized linear models and permits us to unify and extend previous work on variable importance for linear and nonlinear models. Several examples are presented, using data from public health and insurance.
| 2026-01-02
| 2026-01-05
|
[
"stat.ME"
] |
Sinan Acemoglu, Christian Kleiber, Jörg Urban
|
2601.00718
|
Near-Contact Binaries on the Path to Contact Binaries
|
A comprehensive evolution study was conducted on a carefully selected sample of near-contact binaries (NCBs) with more massive components filling the Roche lobes, utilizing the best-known basic parameters and indications of ongoing mass transfer. The results and discussion highlight that several NCBs with total masses exceeding 2 solar masses survive only a short time after mass exchange as contact binaries (CBs), with both components eventually merging to form a rapidly rotating giant, akin to FK Com. Less massive NCBs transition into typical CBs and remain in this phase for up to 2 Gyr before ending their binary evolution as systems with extremely low mass ratios, susceptible to Darwin instability. However, this does not fully explain the existence of low-mass CBs with masses in the range of 1-1.5 solar masses. It is noted that there exists a population of low-mass binaries, nearly filling their Roche lobes. Their overall properties suggest that they could be progenitors of low-mass CBs.
| 2026-01-02
| 2026-01-05
|
[
"astro-ph.SR"
] |
K. Stępień
|
2601.00957
|
Algorithmic Applications of Tyshkevich's Graph Decomposition: A Primer and a Toolkit
|
A graph that is completely determined by its degree sequence is called a unigraph. In 2000, Regina Tyshkevich published one of the most important papers on unigraphs. There are two parts to the paper: a decomposition theorem that describes how every graph can be broken into a sequence of basic graphs and a complete classification of all basic unigraphs. Together, they reveal how every unigraph is constructed. We provide an informal overview of Tyshkevich's results and show how they enable the computation of various graph parameters of unigraphs in linear time. We also created a toolkit (https://chelseal11.github.io/tyshkevich_decomposition_toolkit/) that implements the algorithms described in this write-up.
| 2026-01-02
| 2026-01-06
|
[
"math.CO",
"cs.DM"
] |
Christine T. Cheng, Chelsea Ann Lambert
|
2506.01626
|
Safety, Relative Tightness and the Probabilistic Frame Rule
|
Probabilistic separation logic offers an approach to reasoning about imperative probabilistic programs in which a separating conjunction is used as a mechanism for expressing independence properties. Crucial to the effectiveness of the formalism is the frame rule, which enables modular reasoning about independent probabilistic state. We explore a semantic formulation of probabilistic separation logic, in which the frame rule has the same simple formulation as in separation logic, without further side conditions. This is achieved by building a notion of safety into specifications, using which we establish a crucial property of specifications, called relative tightness, from which the soundness of the frame rule follows.
| 2026-01-02
| 2026-01-06
|
[
"cs.LO"
] |
Janez Ignacij Jereb, Alex Simpson
|
2601.00952
|
On Cosmological Correlators at One Loop
|
We study equal-time in-in correlators of massless scalar fields in flat space at one loop. Using the time-ordered decomposition of correlators together with a cosmological analogue of the Baikov representation, we systematically construct relatively simple loop integrals and make manifest why, in this setting, loop corrections to correlators are simpler than those of wavefunction coefficients. As benchmark examples, we analyse the bubble and triangle diagrams. The bubble exhibits a UV divergence that can be removed by a local counterterm, while the triangle yields a finite result, which we evaluate explicitly in terms of dilogarithms using an integral transform for the Laplacian Green's function. We classify the kinematic singularities of these diagrams using Landau analysis, identifying novel types of singular behaviour, and validate this analysis against the explicit results. Finally, we derive a factorisation property of one-loop cosmological correlators at singular kinematics, relating them to flat-space loop amplitudes and lower-point tree-level correlators.
| 2026-01-02
| 2026-01-06
|
[
"hep-th",
"astro-ph.CO",
"hep-ph"
] |
Guilherme L. Pimentel, Tom Westerdijk
|
2601.00523
|
The CoinAlg Bind: Profitability-Fairness Tradeoffs in Collective Investment Algorithms
|
Collective Investment Algorithms (CoinAlgs) are increasingly popular systems that deploy shared trading strategies for investor communities. Their goal is to democratize sophisticated -- often AI-based -- investing tools. We identify and demonstrate a fundamental profitability-fairness tradeoff in CoinAlgs that we call the CoinAlg Bind: CoinAlgs cannot ensure economic fairness without losing profit to arbitrage. We present a formal model of CoinAlgs, with definitions of privacy (incomplete algorithm disclosure) and economic fairness (value extraction by an adversarial insider). We prove two complementary results that together demonstrate the CoinAlg Bind. First, privacy in a CoinAlg is a precondition for insider attacks on economic fairness. Conversely, in a game-theoretic model, lack of privacy, i.e., transparency, enables arbitrageurs to erode the profitability of a CoinAlg. Using data from Uniswap, a decentralized exchange, we empirically study both sides of the CoinAlg Bind. We quantify the impact of arbitrage against transparent CoinAlgs. We show the risks posed by a private CoinAlg: Even low-bandwidth covert-channel information leakage enables unfair value extraction.
| 2026-01-02
| 2026-01-05
|
[
"cs.GT",
"cs.CR"
] |
Andrés Fábrega, James Austgen, Samuel Breckenridge, Jay Yu, Amy Zhao, Sarah Allen, Aditya Saraf, Ari Juels
|
2601.00977
|
Spectral Analysis of the 2019 and 2022 Outbursts of SAX J1808.4-3658
|
The accreting millisecond pulsar SAX J1808.4-3658 went into outburst from July to November 2019 and from August to October 2022; both outbursts were observed by \textit{NICER} and \textit{NuSTAR}. In this paper, we first present the light curve for both outbursts using \textit{NICER} data. Several thermonuclear bursts occurred during these outbursts. We analyze the evolution of the spectra of two thermonuclear bursts that took place during the 2019 \textit{NuSTAR} observation. We proceed by analyzing the combined broad-band spectrum using \textit{NICER} and \textit{NuSTAR} for the first time for this source. We jointly modeled the combined quiescent spectra of both outbursts with a self-consistent reflection component. In our best-fit model, we find evidence of reflection, consistently constrain the inclination to $72°^{+1°}_{-4°}$ considering this reflection, and identify a 1 keV feature during persistent emission.
| 2026-01-02
| 2026-01-06
|
[
"astro-ph.HE"
] |
Katherine Bruce, Sachiko Tsuruta, Andrew C. Liebmann, Marcus Teter
|
2601.00968
|
Explainability-Guided Defense: Attribution-Aware Model Refinement Against Adversarial Data Attacks
|
The growing reliance on deep learning models in safety-critical domains such as healthcare and autonomous navigation underscores the need for defenses that are both robust to adversarial perturbations and transparent in their decision-making. In this paper, we identify a connection between interpretability and robustness that can be directly leveraged during training. Specifically, we observe that spurious, unstable, or semantically irrelevant features identified through Local Interpretable Model-Agnostic Explanations (LIME) contribute disproportionately to adversarial vulnerability. Building on this insight, we introduce an attribution-guided refinement framework that transforms LIME from a passive diagnostic into an active training signal. Our method systematically suppresses spurious features using feature masking, sensitivity-aware regularization, and adversarial augmentation in a closed-loop refinement pipeline. This approach does not require additional datasets or model architectures and integrates seamlessly into standard adversarial training. Theoretically, we derive an attribution-aware lower bound on adversarial distortion that formalizes the link between explanation alignment and robustness. Empirical evaluations on CIFAR-10, CIFAR-10-C, and CIFAR-100 demonstrate substantial improvements in adversarial robustness and out-of-distribution generalization.
| 2026-01-02
| 2026-01-06
|
[
"cs.LG"
] |
Longwei Wang, Mohammad Navid Nayyem, Abdullah Al Rakin, KC Santosh, Chaowei Zhang, Yang Zhou
|
2504.09372
|
On the uniqueness of a generalized quadrangle of order (4,16)
|
In the manuscript [v3], we prove the uniqueness of a generalized quadrangle of order (4,16).
| 2026-01-02
| 2026-01-05
|
[
"math.CO"
] |
Koichi Inoue
|
2503.06326
|
Finding All Solutions of qKZ Equations in Characteristic $p$
|
In [J. Lond. Math. Soc. 109 (2024), e12884, 22 pages, arXiv:2208.09721], the difference qKZ equations were considered modulo a prime number $p$ and a family of polynomial solutions of the qKZ equations modulo $p$ was constructed by an elementary procedure as suitable $p$-approximations of the hypergeometric integrals. In this paper, we study in detail the first family of nontrivial examples of the qKZ equations in characteristic $p$. We describe all solutions of these qKZ equations in characteristic $p$ by demonstrating that they all stem from the $p$-hypergeometric solutions. We also prove a Lagrangian property (called the orthogonality property) of the subbundle of the qKZ bundle spanned by the $p$-hypergeometric sections. This paper extends the results of [arXiv:2405.05159] on the differential KZ equations to the difference qKZ equations.
| 2026-01-02
| 2026-01-05
|
[
"math-ph",
"math.AG",
"math.MP",
"math.NT"
] |
Evgeny Mukhin, Alexander Varchenko
|
2601.00531
|
Fair Policy Learning under Bipartite Network Interference: Learning Fair and Cost-Effective Environmental Policies
|
Numerous studies have shown the harmful effects of airborne pollutants on human health. Vulnerable groups and communities often bear a disproportionately larger health burden due to exposure to airborne pollutants. Thus, there is a need to design policies that effectively reduce public health burdens while ensuring cost-effective policy interventions. Designing policies that optimally benefit the population while ensuring equity between groups under cost constraints is a challenging statistical and causal inference problem. In the context of environmental policy, this is further complicated by the fact that interventions target emission sources but health impacts occur in potentially distant communities due to atmospheric pollutant transport -- a setting known as bipartite network interference (BNI). To address these issues, we propose a fair policy learning approach under BNI. Our approach allows learning cost-effective policies under fairness constraints, even accounting for complex BNI data structures. We derive asymptotic properties and demonstrate finite sample performance via Monte Carlo simulations. Finally, we apply the proposed method to a real-world dataset linking power plant scrubber installations to Medicare health records for more than 2 million individuals in the U.S. Our method determines fair scrubber allocations that reduce mortality under fairness and cost constraints.
| 2026-01-02
| 2026-01-05
|
[
"stat.ME"
] |
Raphael C. Kim, Rachel C. Nethery, Kevin L. Chen, Falco J. Bargagli-Stoffi
|
2601.00647
|
Physio-DPO: Aligning Large Language Models with the Protein Energy Landscape to Eliminate Structural Hallucinations
|
Large Protein Language Models have shown strong potential for generative protein design, yet they frequently produce structural hallucinations, generating sequences with high linguistic likelihood that fold into thermodynamically unstable conformations. Existing alignment approaches such as Direct Preference Optimization are limited in this setting, as they model preferences as binary labels and ignore the continuous structure of the physical energy landscape. We propose Physio-DPO, a physics-informed alignment framework that grounds protein language models in thermodynamic stability. Physio-DPO introduces a magnitude-aware objective that scales optimization updates according to the energy gap between native structures and physics-perturbed hard negatives. Experiments show that Physio-DPO consistently outperforms strong baselines including SFT, PPO, and standard DPO, reducing self-consistency RMSD to 1.28 Å and increasing foldability to 92.8%. Qualitative analysis further demonstrates that Physio-DPO effectively mitigates structural hallucinations by recovering biophysical interactions such as hydrophobic core packing and hydrogen bond networks.
| 2026-01-02
| 2026-01-05
|
[
"cs.CL",
"cs.CE",
"q-bio.QM"
] |
QiWei Meng
|
2601.00671
|
Fast-weight Product Key Memory
|
Sequence modeling layers in modern language models typically face a trade-off between storage capacity and computational efficiency. While Softmax attention offers unbounded storage at prohibitive quadratic costs, linear variants provide efficiency but suffer from limited, fixed-size storage. We propose Fast-weight Product Key Memory (FwPKM), a novel architecture that resolves this tension by transforming the sparse Product Key Memory (PKM) from a static module into a dynamic, "fast-weight" episodic memory. Unlike PKM, FwPKM updates its parameters dynamically at both training and inference time via local chunk-level gradient descent, allowing the model to rapidly memorize and retrieve new key-value pairs from input sequences. Experiments reveal that FwPKM functions as an effective episodic memory that complements the semantic memory of standard modules, yielding significant perplexity reductions on long-context datasets. Notably, in Needle in a Haystack evaluations, FwPKM generalizes to 128K-token contexts despite being trained on only 4K-token sequences.
| 2026-01-02
| 2026-01-05
|
[
"cs.CL",
"cs.AI"
] |
Tianyu Zhao, Llion Jones
|
2407.15870
|
CIC: Circular Image Compression
|
Learned image compression (LIC) is currently the cutting-edge method. However, the inherent difference between testing and training images of LIC results in performance degradation to some extent. Especially for out-of-sample, out-of-distribution, or out-of-domain testing images, the performance of LIC degrades significantly. Classical LIC is a serial image compression (SIC) approach that utilizes an open-loop architecture with serial encoding and decoding units. Nevertheless, according to the principles of automatic control systems, a closed-loop architecture holds the potential to improve the dynamic and static performance of LIC. Therefore, a circular image compression (CIC) approach with closed-loop encoding and decoding elements is proposed to minimize the gap between testing and training images and improve the capability of LIC. The proposed CIC establishes a nonlinear loop equation and proves, via Taylor series expansion, that the steady-state error between reconstructed and original images is close to zero. The proposed CIC method is post-training and plug-and-play, and can be built on top of any existing advanced SIC method. Experimental results, including rate-distortion curves on five public image compression datasets, demonstrate that the proposed CIC outperforms eight competing state-of-the-art open-source SIC algorithms in reconstruction capacity. Experimental results further show that the proposed method is suitable for out-of-sample testing images with dark backgrounds, sharp edges, high contrast, grid shapes, or complex patterns.
| 2026-01-02
| 2026-01-05
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] |
Honggui Li, Sinan Chen, Dingtai Li, Zhengyang Zhang, Nahid Md Lokman Hossain, Xinfeng Xu, Yinlu Qin, Ruobing Wang, Maria Trocan, Dimitri Galayko, Amara Amara, Mohamad Sawan
|
2601.00730
|
Grading Handwritten Engineering Exams with Multimodal Large Language Models
|
Handwritten STEM exams capture open-ended reasoning and diagrams, but manual grading is slow and difficult to scale. We present an end-to-end workflow for grading scanned handwritten engineering quizzes with multimodal large language models (LLMs) that preserves the standard exam process (A4 paper, unconstrained student handwriting). The lecturer provides only a handwritten reference solution (100%) and a short set of grading rules; the reference is converted into a text-only summary that conditions grading without exposing the reference scan. Reliability is achieved through a multi-stage design with a format/presence check to prevent grading blank answers, an ensemble of independent graders, supervisor aggregation, and rigid templates with deterministic validation to produce auditable, machine-parseable reports. We evaluate the frozen pipeline in a clean-room protocol on a held-out real course quiz in Slovenian, including hand-drawn circuit schematics. With state-of-the-art backends (GPT-5.2 and Gemini-3 Pro), the full pipeline achieves $\approx$8-point mean absolute difference to lecturer grades with low bias and an estimated manual-review trigger rate of $\approx$17% at $D_{\max}=40$. Ablations show that trivial prompting and removing the reference solution substantially degrade accuracy and introduce systematic over-grading, confirming that structured prompting and reference grounding are essential.
| 2026-01-02
| 2026-01-05
|
[
"cs.CV"
] |
Janez Perš, Jon Muhovič, Andrej Košir, Boštjan Murovec
|
2601.00996
|
VEAT Quantifies Implicit Associations in Text-to-Video Generator Sora and Reveals Challenges in Bias Mitigation
|
Text-to-Video (T2V) generators such as Sora raise concerns about whether generated content reflects societal bias. We extend embedding-association tests from words and images to video by introducing the Video Embedding Association Test (VEAT) and Single-Category VEAT (SC-VEAT). We validate these methods by reproducing the direction and magnitude of associations from widely used baselines, including Implicit Association Test (IAT) scenarios and OASIS image categories. We then quantify race (African American vs. European American) and gender (women vs. men) associations with valence (pleasant vs. unpleasant) across 17 occupations and 7 awards. Sora videos associate European Americans and women more with pleasantness (both d>0.8). Effect sizes correlate with real-world demographic distributions: percent men and White in occupations (r=0.93, r=0.83) and percent male and non-Black among award recipients (r=0.88, r=0.99). Applying explicit debiasing prompts generally reduces effect-size magnitudes, but can backfire: two Black-associated occupations (janitor, postal service) become more Black-associated after debiasing. Together, these results reveal that easily accessible T2V generators can actually amplify representational harms if not rigorously evaluated and responsibly deployed.
| 2026-01-02
| 2026-01-06
|
[
"cs.CY",
"cs.AI"
] |
Yongxu Sun, Michael Saxon, Ian Yang, Anna-Maria Gueorguieva, Aylin Caliskan
|
2512.18689
|
Fusion of Multiscale Features Via Centralized Sparse-attention Network for EEG Decoding
|
Electroencephalography (EEG) signal decoding is a key technology that translates brain activity into executable commands, laying the foundation for direct brain-machine interfacing and intelligent interaction. To address the inherent spatiotemporal heterogeneity of EEG signals, this paper proposes a multi-branch parallel architecture, where each temporal scale is equipped with an independent spatial feature extraction module. To further enhance multi-branch feature fusion, we propose a Fusion of Multiscale Features via Centralized Sparse-attention Network (EEG-CSANet), a centralized sparse-attention network. It employs a main-auxiliary branch architecture, where the main branch models core spatiotemporal patterns via multiscale self-attention, and the auxiliary branch facilitates efficient local interactions through sparse cross-attention. Experimental results show that EEG-CSANet achieves state-of-the-art (SOTA) performance across five public datasets (BCIC-IV-2A, BCIC-IV-2B, HGD, SEED, and SEED-VIG), with accuracies of 88.54%, 91.09%, 97.15%, 96.03%, and 90.56%, respectively. Such performance demonstrates its strong adaptability and robustness across various EEG decoding tasks. Moreover, extensive ablation studies are conducted to enhance the interpretability of EEG-CSANet. In the future, we hope that EEG-CSANet could serve as a promising baseline model in the field of EEG signal decoding. The source code is publicly available at: https://github.com/Xiangrui-Cai/EEG-CSANet
| 2026-01-02
| 2026-01-05
|
[
"cs.LG",
"cs.AI"
] |
Xiangrui Cai, Shaocheng Ma, Lei Cao, Jie Li, Tianyu Liu, Yilin Dong
|
2601.00704
|
Exceptional Lines and Excitation of (Nearly) Double-Pole Quasinormal Modes: A Semi-Analytic Study in the Nariai Black Hole
|
We show that quasinormal modes (QNMs) of a massive scalar field in Kerr-de Sitter and Myers-Perry black holes exhibit an exceptional line (EL), which is a continuous set of exceptional points (EPs) in parameter space, at which two QNM frequencies and their associated solutions coincide. We find that the EL appears in the parameter space spanned by the scalar mass and the black hole spin parameter, and also in the Nariai limit, i.e., $r_{\rm c} - r_{\rm h} \to 0$, where $r_{\rm c}$ and $r_{\rm h}$ denote the radii of the cosmological and black hole horizons, respectively. We analytically study the amplitudes or excitation factors of QNMs near the EL. Such an analytic treatment becomes possible since, in the Nariai limit, the perturbation equation reduces to a wave equation with the Pöschl-Teller (PT) potential. We discuss the destructive excitation of QNMs and the stability of the ringdown near and at the EL. The transient linear growth of QNMs -- a characteristic excitation pattern near an EP or EL -- together with the conditions under which this linear growth dominates the early ringdown, is also studied analytically. Our conditions apply to a broad class of systems that involve the excitation of (nearly) double-pole QNMs.
| 2026-01-02
| 2026-01-05
|
[
"gr-qc",
"astro-ph.CO",
"hep-th"
] |
Nao Nakamoto, Naritaka Oshita
|
2510.26371
|
Unambiguous Acceptance of Thin Coalgebras
|
Automata admitting at most one accepting run per structure, known as unambiguous automata, find applications in verification of reactive systems as they extend the class of deterministic automata whilst maintaining some of their desirable properties. In this paper, we generalise a classical construction of unambiguous automata from thin trees to thin coalgebras for analytic functors. This achieves two goals: extending the existing construction to a larger class of structures, and providing conceptual clarity and parametricity to the construction by formalising it in the coalgebraic framework. As part of the construction, we link automaton acceptance of languages of thin coalgebras to language recognition via so-called coherent algebras, which were previously introduced for studying thin coalgebras. This link also allows us to establish an automata-theoretic characterisation of languages recognised by finite coherent algebras.
| 2026-01-02
| 2026-01-06
|
[
"cs.FL"
] |
Anton Chernev, Corina Cîrstea, Helle Hvid Hansen, Clemens Kupke
|
2503.13965
|
First-Order Projected Algorithms With the Same Linear Convergence Rate Bounds as Their Unconstrained Counterparts
|
In this paper, we propose a systematic approach for extending first-order optimization algorithms, originally designed for unconstrained strongly convex problems, to handle closed and convex set constraints. We show that the resulting projected algorithms retain the same linear convergence rate bounds, provided that the underlying unconstrained optimization algorithms admit a quadratic Lyapunov function obtained from integral quadratic constraint (IQC) analysis. The projected algorithms are constructed by applying a projection in the norm induced by the Lyapunov matrix, ensuring both constraint satisfaction and optimality at the fixed point. Furthermore, under a linear transformation associated with this matrix, the projection becomes non-expansive in the Euclidean norm, allowing the use of the contraction mapping theorem to establish convergence. Our results indicate that, when analyzing worst-case convergence rates or when synthesizing first-order optimization algorithms with potentially higher-order dynamics, it suffices to focus solely on the unconstrained dynamics, since the same parameters or stepsizes can be employed without retuning.
| 2026-01-02
| 2026-01-05
|
[
"math.OC"
] |
Mengmou Li, Ioannis Lestas, Masaaki Nagahara
|
2601.00662
|
Extended BMS representations and strings
|
We construct in detail the irreducible representations of the BMS group with super rotations in three and four dimensions that have the same rest frame momenta as the massive and massless Poincare point particles. We compare these representations to those of the Poincare group and also to the analogous representations of global BMS. We argue that these extended BMS representations are carried by a string rather than a point particle. The super rotations play a crucial role in our discussions.
| 2026-01-02
| 2026-01-05
|
[
"hep-th"
] |
Romain Ruzziconi, Peter West
|
2601.02408
|
A Combined Barrow Entropy and QCD Ghost Mechanism for Late-Time Cosmic Acceleration
|
We investigate a unified dark-energy scenario based on the combined effects of Barrow entropy corrections and the QCD ghost mechanism, referred to as the BH--QCDGDE model. The dark-energy density is constructed in a generalized holographic form that incorporates both Barrow-deformed entropy corrections and low-energy QCD vacuum effects within a single framework. The cosmological dynamics are analyzed in a spatially flat Friedmann--Lemaître--Robertson--Walker background. The model exhibits a smooth transition from a decelerated matter-dominated era to a late-time accelerated phase without crossing the phantom divide, indicating a viable background evolution. An equivalent scalar-field description of the effective dark-energy sector is reconstructed and shown to admit a quintessence-like behavior. The thermodynamic viability is examined by testing the generalized second law at the apparent horizon, which is found to be satisfied throughout the parameter space. The classical stability of the model is further investigated through the squared speed of sound, revealing the role of model parameters in shaping stable cosmological regimes. Overall, the BH--QCDGDE framework provides a consistent and physically viable description of late-time cosmic acceleration.
| 2026-01-02
| 2026-01-07
|
[
"physics.gen-ph"
] |
Aziza Altaibayeva, Ulbossyn Ualikhanova, Zhanar Umurzakhova, Surajit Chattopadhyay
|
2406.07746
|
Any-Time Regret-Guaranteed Algorithm for Control of Linear Quadratic Systems
|
We propose a computationally efficient algorithm that achieves anytime regret of order $\mathcal{O}(\sqrt{t})$, with explicit dependence on the system dimensions and on the solution of the Discrete Algebraic Riccati Equation (DARE). Our approach builds on the SDP-based framework of \cite{cohen2019learning}, using an appropriately tuned regularization and a sufficiently accurate initial estimate to construct confidence ellipsoids for control design. A carefully designed input-perturbation mechanism is incorporated to ensure anytime performance. We develop two variants of the algorithm. The first enforces a notion of strong sequential stability, requiring each policy to be stabilizing and successive policies to remain close. However, enforcing this notion results in a suboptimal regret scaling. The second removes the sequential-stability requirement and instead requires only that each generated policy be stabilizing. Closed-loop stability is then preserved through a dwell-time-inspired policy-update rule, adapting ideas from switched-systems control to carefully balance exploration and exploitation. This class of algorithms also addresses key shortcomings of most existing approaches including certainty-equivalence-based methods which typically guarantee stability only in the Lyapunov sense and lack explicit uniform high-probability bounds on the state trajectory expressed in system-theoretic terms. Our analysis explicitly characterizes the trade-off between state amplification and regret, and shows that partially relaxing the sequential-stability requirement yields optimal regret. Finally, our method eliminates the need for any a priori bound on the norm of the DARE solution, an assumption required by all existing computationally efficient optimism in the face of uncertainty (OFU) based algorithms, and thereby removes the reliance of regret guarantees on such external inputs.
| 2026-01-02
| 2026-01-06
|
[
"stat.ML",
"cs.LG",
"cs.SY",
"eess.SY"
] |
Jafar Abbaszadeh Chekan, Cedric Langbort
|
2510.06585
|
Reversible computations are computations
|
Causality serves as an abstract notion of time for concurrent systems. A computation is causal, or simply valid, if each observation of a computation event is preceded by the observation of its causes. The present work establishes that this simple requirement is equally relevant when the occurrence of an event is invertible. We propose a conservative extension of causal models for concurrency that accommodates reversible computations. We first model reversible computations using a symmetric residuation operation in the general model of configuration structures. We show that stable configuration structures, which correspond to prime algebraic domains, remain stable under the action of this residuation. We then derive a semantics of reversible computations for prime event structures, which is shown to coincide with a switch operation that dualizes conflict and causality.
| 2026-01-02
| 2026-01-06
|
[
"cs.LO",
"math.LO"
] |
Clément Aubert, Jean Krivine
|
2507.17669
|
A Further Generalization of the Gale-Nikaido-Kuhn-Debreu Market Equilibrium Theorem
|
We extend the important generalizations by Yannelis [25] and Cornet et al. [7] of the classical result of Gale, Nikaido, Kuhn and Debreu (the "GNKD theorem") regarding existence of market equilibrium, by broadening the applicability of their results, which apply only to economies whose commodity space can be modeled by a locally convex Hausdorff space, to the wider class of economies with commodity spaces describable by any Hausdorff topological vector space.
| 2026-01-02
| 2026-01-06
|
[
"math.FA"
] |
Ranjit Vohra
|
2512.07629
|
Sustainable Exploitation Equilibria for Dynamic Games
|
We introduce the Sustainable Exploitation Equilibrium (SEE), a refinement of Markov Perfect Equilibrium (MPE) for dynamic games with an exploiter-exploitee structure. SEE imposes renegotiation-proof exploiter-optimal selection on the set of rationally viable stationary Markov equilibria, where viability follows from sequential rationality when exiting a sustainability set entails catastrophic losses. Unlike MPE, SEE rules out equilibria in which the exploiter optimally drives the state to collapse despite positive continuation payoffs. The exploitee cannot exit, but retains a strategic effort margin affecting dynamics and payoffs. We establish existence under standard conditions, and the refinement is illustrated in a hegemon-client model of foreign politics.
| 2026-01-02
| 2026-01-05
|
[
"econ.TH"
] |
Nicholas H. Kirk
|
2601.00595
|
Asteroseismology study of a new faint ZZ Ceti J053009.62+594557.0 discovered in WFST
|
In this work, we present a detailed asteroseismological analysis of WFST J053009.62+594557.0, a faint pulsating white dwarf newly discovered by the Wide Field Survey Telescope (WFST), with a Gaia G magnitude of 19.13. Analysis of two nights of high-precision WFST g-band photometry reveals three significant pulsation frequencies with high signal-to-noise ratios. Follow-up P200/DBSP spectroscopy classifies the object as a DA white dwarf with Teff = 11,609 $\pm$ 605 K and M = 0.63 $\pm$ 0.22 $M_{\odot}$. To probe its internal structure, we construct asteroseismological models with the White Dwarf Evolution Code (WDEC). After exploring a sufficient set of matching models, the best-fitting solutions yield Teff = 11,850 $\pm$ 10 K and M = 0.600 $\pm$ 0.005 $M_{\odot}$, consistent with independent constraints from the Gaia color-magnitude diagram, the Gaia XP spectrum, P200 spectral fitting, SED fitting, and the Gaia parallax. The asteroseismological distance agrees with the Gaia parallax to within 1.45\%.
| 2026-01-02
| 2026-01-05
|
[
"astro-ph.SR"
] |
Yang Yonghui, Guo Jincheng, Lin Jie, Wang Tinggui, Jiang Ning, Wang Yibo, Fan Lulu, Fang Min, Li Bin, Li Feng, Liu Hao, Liang Ming, Luo Wentao, Tang Jinlong, Wang Hairen, Wang Jian, Xue Yongquan, Yao Dazhi, Zhang Hongfei
|
2601.00955
|
Is the Conventional Picture of Coherence Time Complete? Dark Matter Recoherence
|
The local solar gravitational potential forms a basin for ultralight dark matter (ULDM), with discrete energy levels. Even if barely populated, it introduces a new characteristic timescale in DM dynamics. This necessitates a generalization of the notion of coherence time. We find that, at long times, the phenomenon of recoherence emerges, whereby a subcomponent of ULDM exhibits a formally divergent coherence time. The fact that this generalized coherence time can significantly exceed the naive estimate implies an enhanced sensitivity for dark matter searches that accumulate data over extended observation periods.
| 2026-01-02
| 2026-01-06
|
[
"hep-ph",
"astro-ph.SR",
"quant-ph"
] |
Chaitanya Paranjape, Gilad Perez, Wolfram Ratzinger, Somasundaram Sankaranarayanan
|
2302.13905
|
Hamiltonian representation of isomonodromic deformations of twisted rational connections: The Painlevé $1$ hierarchy
|
In this paper, we build the Hamiltonian system and the corresponding Lax pairs associated to a twisted connection in $\mathfrak{gl}_2(\mathbb{C})$ admitting an irregular and ramified pole at infinity of arbitrary degree, hence corresponding to the Painlevé $1$ hierarchy. We provide explicit formulas for these Lax pairs and Hamiltonians in terms of the irregular times and standard $2g$ Darboux coordinates associated to the twisted connection. Furthermore, we obtain a map that reduces the space of irregular times to only $g$ non-trivial isomonodromic deformations. In addition, we perform a symplectic change of Darboux coordinates to obtain a set of symmetric Darboux coordinates in which Hamiltonians and Lax pairs are polynomial. Finally, we apply our general theory to the first cases of the hierarchy: the Airy case $(g=0)$, the Painlevé $1$ case $(g=1)$ and the next two elements of the Painlevé $1$ hierarchy.
| 2026-01-02
| 2026-01-05
|
[
"math-ph",
"hep-th",
"math.MP",
"math.SG",
"nlin.SI"
] |
Olivier Marchal, Mohamad Alameddine
|
2507.05466
|
Computer-aided analyses of stochastic first-order methods, via interpolation conditions for stochastic optimization
|
This work proposes a framework, embedded within the Performance Estimation framework (PEP), for obtaining worst-case performance guarantees on stochastic first-order methods. Given a first-order method, a function class, and a noise model with prescribed expectation and variance properties, we present a semidefinite program (SDP), whose size grows linearly with $N$, the number of iterations analyzed, and whose solution yields a convergence guarantee on the problem.
The framework accommodates a wide range of stochastic settings, with finite or infinite support, including the unstructured noise model with bounded variance, finite-sum optimization, and block-coordinate methods, in a unified manner, as guarantees apply to any setting consistent with the noise model, i.e., its expectation and variance. It covers both non-variance-reduced and variance-reduced methods. Using the framework, we analyze the stochastic gradient method under several noise models, and illustrate how the resulting numerical and analytical convergence rates connect with existing results. In particular, we provide improved convergence rates on the unstructured noise model with bounded variance and in the block-coordinate setting.
| 2026-01-02
| 2026-01-05
|
[
"math.OC"
] |
Anne Rubbens, Sébastien Colla, Julien M. Hendrickx
|
2512.24530
|
A Magnified View into Heterogeneous-ISA Thread Migration Performance without State Transformation
|
Heterogeneous-ISA processor designs have attracted considerable research interest. However, unlike their homogeneous-ISA counterparts, explicit software support for bridging ISA heterogeneity is required. The lack of a compilation toolchain ready to support heterogeneous-ISA targets has been a major factor hindering research in this exciting emerging area. For any such compiler, "getting right" the mechanics involved in state transformation upon migration and doing this efficiently is of critical importance. In particular, any runtime conversion of the current program stack from one architecture to another would be prohibitively expensive. In this paper, we design and develop Unifico, a new multi-ISA compiler that generates binaries that maintain the same stack layout during their execution on either architecture. Unifico avoids the need for runtime stack transformation, thus eliminating overheads associated with ISA migration. Additional responsibilities of the Unifico compiler backend include maintenance of a uniform ABI and virtual address space across ISAs. Unifico is implemented using the LLVM compiler infrastructure, and we are currently targeting the x86-64 and ARMv8 ISAs. We have evaluated Unifico across a range of compute-intensive NAS benchmarks and show its minimal impact on overall execution time, where less than 6% (10%) overhead is introduced on average for high-end (low-end) processors. We also analyze the performance impact of Unifico's key design features and demonstrate that they can be further optimized to mitigate this impact. When compared against the state-of-the-art Popcorn compiler, Unifico reduces binary size overhead from ~200% to ~10%, whilst eliminating the stack transformation overhead during ISA migration.
| 2026-01-02
| 2026-01-05
|
[
"cs.SE",
"cs.PF"
] |
Nikolaos Mavrogeorgis, Christos Vasiladiotis, Pei Mu, Amir Khordadi, Björn Franke, Antonio Barbalace
|
2601.00755
|
A formal theory on problem space as a semantic world model in systems engineering
|
Classic problem-space theory models problem solving as a navigation through a structured space of states, operators, goals, and constraints. Systems Engineering (SE) employs analogous constructs (functional analysis, operational analysis, scenarios, trade studies), yet still lacks a rigorous systems-theoretic representation of the problem space itself. In current practice, reasoning often proceeds directly from stakeholder goals to prescriptive artifacts. This makes foundational assumptions about the operational environment, admissible interactions, and contextual conditions implicit or prematurely embedded in architectures or requirements. This paper addresses that gap by formalizing the problem space as an explicit semantic world model containing theoretical constructs that are defined prior to requirements and solution commitments. These constructs, along with the developed axioms, theorems, and corollary, establish a rigorous criterion for unambiguous boundary semantics, context-dependent interaction traceability to successful stakeholder goal satisfaction, and sufficiency of problem-space specification over which disciplined reasoning can occur independent of solution design. It offers a clear distinction between what is true of the problem domain and what is chosen as a solution. The paper concludes by discussing the significance of the theory for practitioners and provides a dialogue-based hypothetical case study between a stakeholder and an engineer, demonstrating how the theory guides problem framing before designing any prescriptive artifacts.
| 2026-01-02
| 2026-01-05
|
[
"eess.SY",
"cs.SY"
] |
Mayuranath SureshKumar, Hanumanthrao Kannan
|
2601.00759
|
Unified Primitive Proxies for Structured Shape Completion
|
Structured shape completion recovers missing geometry as primitives rather than as unstructured points, which enables primitive-based surface reconstruction. Instead of following the prevailing cascade, we rethink how primitives and points should interact, and find it more effective to decode primitives in a dedicated pathway that attends to shared shape features. Following this principle, we present UniCo, which in a single feed-forward pass predicts a set of primitives with complete geometry, semantics, and inlier membership. To drive this unified representation, we introduce primitive proxies, learnable queries that are contextualized to produce assembly-ready outputs. To ensure consistent optimization, our training strategy couples primitives and points with online target updates. Across synthetic and real-world benchmarks with four independent assembly solvers, UniCo consistently outperforms recent baselines, lowering Chamfer distance by up to 50% and improving normal consistency by up to 7%. These results establish an attractive recipe for structured 3D understanding from incomplete data. Project page: https://unico-completion.github.io.
| 2026-01-02
| 2026-01-05
|
[
"cs.CV"
] |
Zhaiyu Chen, Yuqing Wang, Xiao Xiang Zhu
|
2503.07955
|
PLK-Calib: Single-shot and Target-less LiDAR-Camera Extrinsic Calibration using Plücker Lines
|
Accurate LiDAR-Camera (LC) calibration is challenging but crucial for autonomous systems and robotics. In this paper, we propose two single-shot and target-less algorithms to estimate the calibration parameters between LiDAR and camera using line features. The first algorithm constructs line-to-line constraints by defining points-to-line projection errors and minimizes the projection error. The second algorithm (PLK-Calib) utilizes the co-perpendicular and co-parallel geometric properties of lines in Plücker (PLK) coordinate, and decouples the rotation and translation into two constraints, enabling more accurate estimates. Our degenerate analysis and Monte Carlo simulation indicate that three nonparallel line pairs are the minimal requirements to estimate the extrinsic parameters. Furthermore, we collect an LC calibration dataset with varying extrinsic under three different scenarios and use it to evaluate the performance of our proposed algorithms.
| 2026-01-02
| 2026-01-05
|
[
"cs.RO"
] |
Yanyu Zhang, Jie Xu, Wei Ren
|
2601.00938
|
Rate-Distortion Analysis of Compressed Query Delegation with Low-Rank Riemannian Updates
|
Bounded-context agents fail when intermediate reasoning exceeds an effective working-memory budget. We study compressed query delegation (CQD): (i) compress a high-dimensional latent reasoning state into a low-rank tensor query, (ii) delegate the minimal query to an external oracle, and (iii) update the latent state via Riemannian optimization on fixed-rank manifolds. We give a math-first formulation: CQD is a constrained stochastic program with a query-budget functional and an oracle modeled as a noisy operator. We connect CQD to classical rate-distortion and information bottleneck principles, showing that spectral hard-thresholding is optimal for a natural constrained quadratic distortion problem, and we derive convergence guarantees for Riemannian stochastic approximation under bounded oracle noise and smoothness assumptions. Empirically, we report (A) a 2,500-item bounded-context reasoning suite (BBH-derived tasks plus curated paradox instances) comparing CQD against chain-of-thought baselines under fixed compute and context; and (B) a human "cognitive mirror" benchmark (N=200) measuring epistemic gain and semantic drift across modern oracles.
| 2026-01-02
| 2026-01-06
|
[
"cs.CL",
"math.OC"
] |
Faruk Alpay, Bugra Kilictas
|
2601.00620
|
A Reduction of the Reconstruction Conjecture using Domination and Vertex Pair Parameters
|
A graph is reconstructible if it is determined up to isomorphism from the collection of all its one-vertex-deleted subgraphs, known as the deck of G. The Reconstruction Conjecture (RC) posits that every finite simple graph with at least three vertices is reconstructible. In this paper, we prove that the class of graphs with domination number $γ(G)=2$ is recognizable from the deck $D(G)$. We also establish a new reduction of the RC: it holds if and only if all $2$-connected graphs $G$ with $γ(G)=2$ or $\operatorname{diam}(G)=\operatorname{diam}(\overline{G})=2$ are reconstructible. To aid reconstruction, we introduce two new parameters: $dv(G,k_1,k_2,k_3)$, which counts the number of non-adjacent vertex pairs in $G$ with $k_1$ common neighbours, $k_2$ neighbours exclusive to the first vertex, and $k_3$ exclusive to the second; and $dav(G,k_1,k_2,k_3)$, defined analogously for adjacent pairs. For connected graphs with at least $12$ vertices and $γ(G)\geq 3$, we show these parameters are reconstructible from $D(G)$ via recursive equations and induction. Finally, we prove that $k$-geodetic graphs of diameter two with $γ(G),γ(\overline{G})\geq 3$ are reconstructible under conditions where a vertex degree matches the size of a specific subset derived from these parameters.
| 2026-01-02
| 2026-01-05
|
[
"math.CO"
] |
J. Antony Aravind, S. Monikandan
|
2510.10537
|
Plastic metric spaces and groups
|
A metric space is plastic if all its non-expansive bijections are isometries. We prove three main results: (1) every countable dense subspace of a normed space is not plastic, (2) every $k$-crowded separable metric space contains a plastic dense subspace, and (3) every strictly convex separable metric group contains a plastic dense subgroup.
| 2026-01-02
| 2026-01-05
|
[
"math.GN",
"math.FA",
"math.GR"
] |
Taras Banakh, Oles Mazurenko, Olesia Zavarzina
|
2601.00702
|
DefVINS: Visual-Inertial Odometry for Deformable Scenes
|
Deformable scenes violate the rigidity assumptions underpinning classical visual-inertial odometry (VIO), often leading to over-fitting to local non-rigid motion or severe drift when deformation dominates visual parallax. We introduce DefVINS, a visual-inertial odometry framework that explicitly separates a rigid, IMU-anchored state from a non--rigid warp represented by an embedded deformation graph. The system is initialized using a standard VIO procedure that fixes gravity, velocity, and IMU biases, after which non-rigid degrees of freedom are activated progressively as the estimation becomes well conditioned. An observability analysis is included to characterize how inertial measurements constrain the rigid motion and render otherwise unobservable modes identifiable in the presence of deformation. This analysis motivates the use of IMU anchoring and informs a conditioning-based activation strategy that prevents ill-posed updates under poor excitation. Ablation studies demonstrate the benefits of combining inertial constraints with observability-aware deformation activation, resulting in improved robustness under non-rigid environments.
| 2026-01-02
| 2026-01-05
|
[
"cs.RO",
"cs.CV"
] |
Samuel Cerezo, Javier Civera
|
2509.09088
|
An entropy formula for the Deep Linear Network
|
We study the Riemannian geometry of the Deep Linear Network (DLN) as a foundation for a thermodynamic description of the learning process. The main tools are the use of group actions to analyze overparametrization and the use of Riemannian submersion from the space of parameters to the space of observables. The foliation of the balanced manifold in the parameter space by group orbits is used to define and compute a Boltzmann entropy. We also show that the Riemannian geometry on the space of observables defined in [2] is obtained by Riemannian submersion of the balanced manifold. The main technical step is an explicit construction of an orthonormal basis for the tangent space of the balanced manifold using the theory of Jacobi matrices.
| 2026-01-02
| 2026-01-06
|
[
"cs.LG",
"math.DG",
"math.DS"
] |
Govind Menon, Tianmin Yu
|
2504.02820
|
Excitation of the Glashow resonance without neutrino beams
|
The $s$-channel process $\bar{ν}_e e^-\rightarrow W^-$ (on-shell) is now referred to as the Glashow resonance and is being searched for at kilometer-scale neutrino ice/water detectors like IceCube, Baikal-GVD or KM3NeT. After over a decade of observations, IceCube has recorded only a few neutrino events with energies of interest, such that an independent confirmation of the existence of this resonant interaction would be of great importance for testing the Standard Model. One might therefore ask: are there reactions with the Glashow resonance that would not necessitate having initial (anti)neutrino beams? This article suggests a surprisingly affirmative answer to the question $-$ namely, that the process may proceed in electron-positron collisions at accelerator energies, occurring as $e^+e^-\rightarrow W^-ρ(770)^+$. Although the resonance appears somewhat disguised, the underlying physics is transparent, quite resembling the well-known radiative return: emission of $ρ^+$ from the initial state converts the incident $e^+$ into $\bar{ν}_e$. Likewise, the CP-conjugate channel, $ν_e e^+\rightarrow W^+$, takes the form $e^+e^-\rightarrow W^+ρ(770)^-$. Similar reactions with muons and other hadrons are also possible. From this viewpoint, future high-luminosity lepton colliders seem to be promising for excitation of the Glashow resonance in laboratory conditions.
| 2026-01-02
| 2026-01-05
|
[
"hep-ph"
] |
I. Alikhanov
|
2409.10211
|
Neutrino yield and neutron shielding calculations for a high-power target installed in an underground setting
|
With the ever increasing beam power at particle accelerator-based facilities for nuclear and particle physics, radioactive isotope production, and nuclear engineering, targets that can withstand this power, and shielding of secondary particles are becoming increasingly important. Here we present Monte Carlo (MC) calculations using the well-established Geant4 software to optimise and predict the antineutrino yield of a $^8$Li Decay-At-Rest (DAR) source. The source relies on 600~kW of beam power from a continuous wave proton beam impinging on a beryllium target, where spallation neutrons capture on $^7$Li to produce the $^8$Li. We further present an in-depth treatment of the neutron shielding surrounding this target. We show that we can produce the high antineutrino flux needed for the discovery-level experiment IsoDAR, searching for ``sterile'' neutrinos (predicted new fundamental particles) and other beyond standard model physics, while maintaining a neutron flux in the detector that is below natural backgrounds. The methods presented in this paper are easily transferable to other high-power targets and their associated shielding.
| 2026-01-02
| 2026-01-05
|
[
"hep-ex"
] |
Adriana Bungau, Jose Alonso, Roger Barlow, Larry Bartozsek, Janet Conrad, Michael Shaevitz, Joshua Spitz, Daniel Winklehner
|
2601.00789
|
Fusion-SSAT: Unleashing the Potential of Self-supervised Auxiliary Task by Feature Fusion for Generalized Deepfake Detection
|
In this work, we attempt to unleash the potential of self-supervised learning as an auxiliary task that can optimise the primary task of generalised deepfake detection. To explore this, we examined different combinations of training schemes for these tasks to identify the most effective one. Our findings reveal that fusing the feature representations from self-supervised auxiliary tasks yields a powerful feature representation for the problem at hand. Such a representation can leverage the ultimate potential of, and bring in a unique representation of, both the self-supervised and primary tasks, achieving better performance for the primary task. We experimented on a large set of datasets, including DF40, FaceForensics++, Celeb-DF, DFD, FaceShifter and UADFV, and our results showed better generalizability in cross-dataset evaluation when compared with current state-of-the-art detectors.
| 2026-01-02
| 2026-01-05
|
[
"cs.CV"
] |
Shukesh Reddy, Srijan Das, Abhijit Das
|
2601.00943
|
PhyEduVideo: A Benchmark for Evaluating Text-to-Video Models for Physics Education
|
Generative AI models, particularly Text-to-Video (T2V) systems, offer a promising avenue for transforming science education by automating the creation of engaging and intuitive visual explanations. In this work, we take a first step toward evaluating their potential in physics education by introducing a dedicated benchmark for explanatory video generation. The benchmark is designed to assess how well T2V models can convey core physics concepts through visual illustrations. Each physics concept in our benchmark is decomposed into granular teaching points, with each point accompanied by a carefully crafted prompt intended for visual explanation of the teaching point. T2V models are evaluated on their ability to generate accurate videos in response to these prompts. Our aim is to systematically explore the feasibility of using T2V models to generate high-quality, curriculum-aligned educational content, paving the way toward scalable, accessible, and personalized learning experiences powered by AI. Our evaluation reveals that current models produce visually coherent videos with smooth motion and minimal flickering, yet their conceptual accuracy is less reliable. Performance in areas such as mechanics, fluids, and optics is encouraging, but models struggle with electromagnetism and thermodynamics, where abstract interactions are harder to depict. These findings underscore the gap between visual quality and conceptual correctness in educational video generation. We hope this benchmark helps the community close that gap and move toward T2V systems that can deliver accurate, curriculum-aligned physics content at scale. The benchmark and accompanying codebase are publicly available at https://github.com/meghamariamkm/PhyEduVideo.
| 2026-01-02
| 2026-01-06
|
[
"cs.CV"
] |
Megha Mariam K. M, Aditya Arun, Zakaria Laskar, C. V. Jawahar
|
2601.00585
|
Microwave vortex beam lasing via photonic time crystals
|
Microwave lasing carrying orbital angular momentum (OAM) holds significant potential for advanced applications in fields such as high-capacity communications, precision sensing, and radar imaging. However, conventional approaches to masers fail to produce emission with embedded OAM. The recent emergence of photonic time crystals (PTCs), artificially structured media with periodically varying electromagnetic properties in time, offers a paradigm shift toward resonance-free lasing without the need for gain media. Yet, pioneering PTC designs have been based on three-dimensional bulk structures, which lack a surface-emitting configuration, and do not possess the capability to modulate OAM, thus hindering the realization of surface-emitted PTC masing that carries OAM. Here, we report the first experimental demonstration of non-resonant, gain medium-free, and surface-emitted microwave vortex beam lasing carrying OAM using ring-shaped PTCs. By developing a multiplier-driven time-varying metamaterial that achieves over 100% equivalent permittivity modulation depth, we establish momentum bandgaps (k gaps) with sufficient bandwidth to overcome intrinsic losses and enable self-sustained coherent microwave amplification. Furthermore, space-time modulation induces non-reciprocity between clockwise and counterclockwise k gap modes within the circularly symmetric PTC structure, facilitating the selective generation of microwave lasing carrying OAM, a capability beyond the reach of conventional maser technologies. Our work bridges PTC physics with coherent OAM-carrying microwave emission, establishing a transformative platform for next-generation wireless communications, advanced sensing systems, and OAM-based technologies.
| 2026-01-02
| 2026-01-05
|
[
"physics.optics"
] |
Lei Huang, Weixuan Zhang, Deyuan Zou, Jiacheng Bao, Fengxiao Di, Haoyu Qin, Long Qian, Houjun Sun, Xiangdong Zhang
|
2601.00720
|
Quantum Approaches to the Minimum Edge Multiway Cut Problem
|
We investigate the minimum edge multiway cut problem, a fundamental task in evaluating the resilience of telecommunication networks. This study benchmarks the problem across three quantum computing paradigms: quantum annealing on a D-Wave quantum processing unit, photonic variational quantum circuits simulated on Quandela's Perceval platform, and IBM's gate-based Quantum Approximate Optimization Algorithm (QAOA). We assess the comparative feasibility of these approaches for early-stage quantum optimization, highlighting trade-offs in circuit constraints, encoding overhead, and scalability. Our findings suggest that quantum annealing currently offers the most scalable performance for this class of problems, while photonic and gate-based approaches remain limited by hardware and simulation depth. These results provide actionable insights for designing quantum workflows targeting combinatorial optimization in telecom security and resilience analysis.
| 2026-01-02
| 2026-01-05
|
[
"quant-ph",
"cs.DM"
] |
Ali Abbassi, Yann Dujardin, Eric Gourdin, Philippe Lacomme, Caroline Prodhon
|
2509.10187
|
Initial Algebras of Domains via Quotient Inductive-Inductive Types
|
Domain theory has been developed as a mathematical theory of computation and to give a denotational semantics to programming languages. It helps us to fix the meaning of language concepts, to understand how programs behave and to reason about programs. At the same time it serves as a great theory to model various algebraic effects such as non-determinism, partial functions, side effects and numerous other forms of computation.
In the present paper, we present a general framework to construct algebraic effects in domain theory, where our domains are DCPOs: directed complete partial orders. We first describe so-called DCPO algebras for a signature, where the signature specifies the operations on the DCPO and the inequational theory they obey. This provides a method to represent various algebraic effects, like partiality. We then show that initial DCPO algebras exist by defining them as so-called Quotient Inductive-Inductive Types (QIITs), known from homotopy type theory. A quotient inductive-inductive type allows one to simultaneously define an inductive type and an inductive relation on that type, together with equations on the type. We illustrate our approach by showing that several well-known constructions of DCPOs fit our framework: coalesced sums, smash products and free DCPOs (partiality and power domains). Our work makes use of various features of homotopy type theory and is formalized in Cubical Agda.
| 2026-01-02
| 2026-01-06
|
[
"cs.LO"
] |
Simcha van Collem, Niels van der Weide, Herman Geuvers
|
2601.00573
|
Benchmarking ERP Analysis: Manual Features, Deep Learning, and Foundation Models
|
Event-related potential (ERP), a specialized paradigm of electroencephalography (EEG), reflects neurological responses to external stimuli or events, generally associated with the brain's processing of specific cognitive tasks. ERP plays a critical role in cognitive analysis, the detection of neurological diseases, and the assessment of psychological states. Recent years have seen substantial advances in deep learning-based methods for spontaneous EEG and other non-time-locked task-related EEG signals. However, their effectiveness on ERP data remains underexplored, and many existing ERP studies still rely heavily on manually extracted features. In this paper, we conduct a comprehensive benchmark study that systematically compares traditional manual features (followed by a linear classifier), deep learning models, and pre-trained EEG foundation models for ERP analysis. We establish a unified data preprocessing and training pipeline and evaluate these approaches on two representative tasks, ERP stimulus classification and ERP-based brain disease detection, across 12 publicly available datasets. Furthermore, we investigate various patch-embedding strategies within advanced Transformer architectures to identify embedding designs that better suit ERP data. Our study provides a landmark framework to guide method selection and tailored model design for future ERP analysis. The code is available at https://github.com/DL4mHealth/ERP-Benchmark.
| 2026-01-02
| 2026-01-05
|
[
"cs.NE",
"cs.CE"
] |
Yihe Wang, Zhiqiao Kang, Bohan Chen, Yu Zhang, Xiang Zhang
|
2601.00525
|
Optimizing LSTM Neural Networks for Resource-Constrained Retail Sales Forecasting: A Model Compression Study
|
Standard LSTM (Long Short-Term Memory) neural networks provide accurate predictions for sales data in the retail industry, but require a lot of computing power. This can be especially challenging for small and mid-sized retailers. This paper examines LSTM model compression by gradually reducing the number of hidden units from 128 to 16. We used the Kaggle Store Item Demand Forecasting dataset, which has 913,000 daily sales records from 10 stores and 50 items, to examine the trade-off between model size and prediction accuracy. Experiments show that reducing the number of hidden LSTM units to 64 not only maintains accuracy but improves it: the mean absolute percentage error (MAPE) drops from 23.6% for the full 128-unit model to 12.4% for the 64-unit model. The optimized model is 73% smaller (from 280KB to 76KB) and 47% more accurate. These results show that larger models do not always achieve better results.
| 2026-01-02
| 2026-01-05
|
[
"cs.LG",
"cs.AI"
] |
Ravi Teja Pagidoju
|
2506.01847
|
A Quantum-Inspired Framework for Subjective Evaluation: Cognitive Polarization and Entropic Measures
|
We propose a quantum-inspired framework to model subjective evaluation processes using state vectors in Hilbert space. In this approach, individual preferences are represented as cognitive states polarized between 'like' and 'dislike', enabling a continuous interpretation of evaluative attitudes. The evolution of these states is characterized on the Bloch sphere, and the cognitive coherence is interpreted geometrically. To further analyze the uncertainty and diversity in subjective preferences, we introduce both Shannon entropy (at the individual level) and Von Neumann entropy (at the group level) into the framework. A small-scale simulated dataset is used to conceptually demonstrate how these entropy measures can reveal internal indecisiveness and collective incoherence. The model offers a physically grounded and mathematically expressive tool for quantifying subjectivity.
| 2026-01-02
| 2026-01-05
|
[
"cond-mat.mes-hall"
] |
Bumned Soodchomshom
|
2510.13655
|
Four-charge static non-extremal black holes in the five-dimensional $\mathcal{N}=2$, $STU-W^2U$ supergravity
|
We construct, for the first time, new static non-extremal five-dimensional black hole solutions (without or with squashed horizons) endowed with four different electric charge parameters in the $D = 5$, $\mathcal{N} = 2$ supergravity coupled to three vector multiplets with a specific pre-potential $\mathcal{V} = STU -W^2U \equiv 1$. When the fourth charge parameter vanishes, the solution simply reduces to the three-charge static black hole solution previously presented in ref. [1], which belongs to the $D = 5$, $\mathcal{N} = 2$ supergravity coupled to two vector multiplets (also notably known as the $STU$ model). We parameterize the model in such a simple fashion that not only can one easily recover the static three-charge solution, but it is also very convenient to study the thermodynamic properties of the obtained black hole solutions in the case without a squashed horizon. We then show that the thermodynamic quantities perfectly obey both the differential first law and the integral Smarr formula of thermodynamics. Finally, we also present its generalizations with squashed horizons or with a nonzero cosmological constant.
| 2026-01-02
| 2026-01-05
|
[
"hep-th",
"gr-qc"
] |
Di Wu, Shuang-Qing Wu
|
2601.00621
|
Some lemmas on spectral radius of graphs: including an application
|
For a graph $G$, the spectral radius $\rho(G)$ of $G$ is the largest eigenvalue of its adjacency matrix. In this paper, we give three lemmas on $\rho(G)$ when $G$ contains a spanning complete bipartite graph. Moreover, an application is included at the end.
| 2026-01-02
| 2026-01-05
|
[
"math.CO"
] |
Wenqian Zhang
|
2512.07443
|
A multivariate extension of Azadkia-Chatterjee's rank coefficient
|
The Azadkia-Chatterjee coefficient is a rank-based measure of dependence between a random variable $Y \in \mathbb{R}$ and a random vector ${\boldsymbol Z} \in \mathbb{R}^{d_Z}$. In this paper, we propose a multivariate extension that measures the dependence between random vectors ${\boldsymbol Y} \in \mathbb{R}^{d_Y}$ and ${\boldsymbol Z} \in \mathbb{R}^{d_Z}$, based on $n$ i.i.d. samples. The proposed coefficient converges almost surely to a limit with the following properties: i) it lies in $[0, 1]$; ii) it is equal to zero if and only if ${\boldsymbol Y}$ and ${\boldsymbol Z}$ are independent; and iii) it is equal to one if and only if ${\boldsymbol Y}$ is almost surely a function of ${\boldsymbol Z}$. Remarkably, the only assumption required by this convergence is that ${\boldsymbol Y}$ is not almost surely a constant vector. We further prove that under the same mild condition and after a proper scaling, this coefficient converges in distribution to a standard normal random variable when ${\boldsymbol Y}$ and ${\boldsymbol Z}$ are independent. This asymptotic normality result allows us to construct a Wald-type hypothesis test of independence based on this coefficient. To compute this coefficient, we propose a merge sort based algorithm that runs in $O(n (\log n)^{d_Y})$. Finally, we show that it can be used to measure the conditional dependence between ${\boldsymbol Y}$ and ${\boldsymbol Z}$ conditional on a third random vector ${\boldsymbol X}$, and prove that the measure is monotonic with respect to the deviation from an independence distribution under certain model restrictions.
| 2026-01-02
| 2026-01-05
|
[
"math.ST",
"stat.ME",
"stat.TH"
] |
Wenjie Huang, Zonghan Li, Yuhao Wang
|
2504.11203
|
Braiding vineyards
|
In this work, we introduce and study what we believe is an intriguing and, to the best of our knowledge, previously unknown connection between two areas in computational topology, topological data analysis (TDA) and knot theory. Given a function from a topological space to $\mathbb{R}$, TDA provides tools to simplify and study the importance of topological features: in particular, the $l^{th}$-dimensional persistence diagram encodes, as a set of points in the plane, the $l$-homology of the sublevel set as the function value increases. Given a continuous one-parameter family of such functions, we can combine the persistence diagrams into an object known as a vineyard, which tracks the evolution of points in the persistence diagram. If we further restrict that family of functions to be periodic, we identify the two ends of the vineyard, yielding a closed vineyard. This allows the study of monodromy, which in this context means that following the family of functions for a period permutes the set of points in a non-trivial way. In this work, given a link and value $l$, we construct a topological space and periodic family of functions such that the closed $l$-vineyard contains this link. This shows that vineyards are topologically as rich as one could possibly hope. Importantly, it has at least two immediate consequences: First, monodromy of any periodicity can occur in a $l$-vineyard, answering a variant of a question by [Arya et al 2024]. To exhibit this, we also reformulate monodromy in a more geometric way, which may be of interest in itself. Second, distinguishing vineyards is likely to be difficult given the known difficulty of knot and link recognition, which have strong connections to many NP-hard problems.
| 2026-01-02
| 2026-01-05
|
[
"cs.CG",
"math.AT",
"math.GT"
] |
Erin Chambers, Christopher Fillmore, Elizabeth Stephenson, Mathijs Wintraecken
|
2507.13510
|
Strassen $2\times2$ Matrix Multiplication from a 3-dimensional Volume Form
|
The Strassen $2\times2$ matrix multiplication algorithm arises from the volume form on the 3-dimensional quotient space of the $2\times 2$ matrices by the multiples of identity.
| 2026-01-02
| 2026-01-05
|
[
"cs.DS",
"cs.CC"
] |
Benoit Jacob
|
2402.01966
|
The general solution to an autoregressive law of motion
|
We provide a complete description of the set of all solutions to a vector autoregressive law of motion. Every solution is shown to be the sum of three components, each corresponding to a directed flow of time. One component flows forward from the arbitrarily distant past; one flows backward from the arbitrarily distant future; and one flows outward from time zero. The three components are obtained by applying three complementary spectral projections to the solution, these corresponding to a separation of the eigenvalues of the autoregressive coefficient matrix according to whether they are inside, outside or on the unit circle. We establish a one-to-one correspondence between the set of all solutions and a finite-dimensional space of initial conditions.
| 2026-01-02
| 2026-01-05
|
[
"econ.EM"
] |
Brendan K. Beare, Massimo Franchi, Phil Howlett
|
2406.13546
|
Dual of the Geometric Lemma and the Second Adjointness Theorem for $p$-adic reductive groups
|
Let $P,Q$ be standard parabolic subgroups of a $p$-adic reductive group $G$. We study the smooth dual of the filtration on a parabolically induced module arising from the geometric lemma associated to the cosets $P\setminus G/Q$. We prove that the dual filtration coincides with the filtration associated to the cosets $P\setminus G/Q^-$ via the Bernstein-Casselman canonical pairing from the second adjointness of parabolic induction. This result generalizes a result of Bezrukavnikov-Kazhdan on the explicit description in the second adjointness. Along the way, we also establish some group-theoretic results.
| 2026-01-02
| 2026-01-05
|
[
"math.RT",
"math.NT"
] |
Kei Yuen Chan
|
2601.00726
|
Implicit Large Eddy Simulation of Nearly Incompressible Flows with a Discontinuous Galerkin-Boltzmann Formulation
|
We present a high-order implicit large eddy simulation (ILES) approach for simulating flows in the nearly incompressible regime. Our methodology is based on a nodal discontinuous Galerkin (DG) discretization of the Boltzmann equations. The compactness and low-dissipative nature of the discontinuous Galerkin method are leveraged to mimic traditional large eddy simulations with subgrid-scale models. One of the key requirements of ILES is to provide dissipation only within a narrow band of high wavenumbers. This is validated in detail through numerical experiments on the Taylor-Green Vortex problem at a Reynolds number where varying scales of coherent turbulent structures are present. Furthermore, the approach is validated for external aerodynamic configurations by simulating the flow over a sphere at a Reynolds number of $Re=3700$, capturing the laminar-turbulent transition and the complex multiscale vortex dynamics characteristic of this regime. The results demonstrate the capability of the high-order DG-Boltzmann formulation to accurately capture transitional and turbulent flow features without the use of explicit sub-grid scale modeling, highlighting its potential as a robust and physically consistent framework for ILES of nearly incompressible turbulent flows.
| 2026-01-02
| 2026-01-05
|
[
"physics.flu-dyn"
] |
Onur Ata, Atakan Aygun, Tim Warburton, Ali Karakus
|
2601.00663
|
Activity correlation and temporal variation of small-scale magnetic fields on young Sun-like stars
|
We aim to evaluate how well the variation of small-scale magnetic fields on the stellar surface can be monitored with time-series observations. Further, we aim to establish to what extent the measured total unsigned magnetic field traces other activity indicators. We measured the total unsigned magnetic field on four young Sun-like stars using Zeeman splitting of magnetically sensitive spectral lines from high-resolution spectra obtained with the spectropolarimeters ESPaDOnS at CFHT and NARVAL at TBL. We then characterised the magnetic field variations using both sinusoidal fitting and Lomb-Scargle periodograms. We evaluated how the rotational variation of the total unsigned magnetic field strength correlates with the activity indicators S-index, H$\alpha$-index, Ca IRT-index, and the large-scale magnetic field obtained from ZDI maps obtained in earlier studies. We find clear signals of rotational modulation of the total magnetic field on HIP 76768 and a tentative detection on Mel 25-5. This is supported both by the sinusoidal fitting and the periodogram. For the other stars, we find no modulation signals of the total magnetic field. We find positive correlations between the total magnetic field and activity indices on all four stars, indicating that indirect magnetic activity indicators trace the underlying magnetic field variability. However, comparing the activity-magnetic field relationship between the stars in our sample shows a significant deviation between activity level and measured magnetic field strength. Small-scale magnetic field variability can be traced using the Zeeman effect on magnetically sensitive lines, provided that the star is sufficiently active. It is also possible to self-consistently recover rotational periods from such measurements. The primary limit for the detection of magnetic field variations is the precision of Zeeman broadening and intensification measurements.
| 2026-01-02
| 2026-01-05
|
[
"astro-ph.SR"
] |
A. Hahlin, B. Zaire, C. P. Folsom, K. Al Moulla, A. Lavail
|
2503.00886
|
Algorithms for parabolic inductions and Jacquet modules in $\mathrm{GL}_n$
|
In this article, we present algorithms for computing parabolic inductions and Jacquet modules for the general linear group $G$ over a non-Archimedean local field. Given the Zelevinsky data or Langlands data of an irreducible smooth representation $\pi$ of $G$ and an essentially square-integrable representation $\sigma$, we explicitly determine the Jacquet module of $\pi$ with respect to $\sigma$ and the socle of the normalized parabolic induction $\sigma\times \pi$. Our result builds on and extends some previous work of Mœglin-Waldspurger, Jantzen, Mínguez, and Lapid-Mínguez, and also uses other methods such as sequences of derivatives and an exotic duality. As an application, we give a simple algorithm for computing the highest derivative multisegment and an algorithm for computing the Langlands parameter of the highest Bernstein-Zelevinsky derivatives.
| 2026-01-02
| 2026-01-05
|
[
"math.RT",
"math.NT"
] |
Kei Yuen Chan, Basudev Pattanayak
|
2512.22979
|
PoseStreamer: A Multi-modal Framework for 3D Tracking of Unseen Moving Objects
|
Six-degree-of-freedom (6DoF) pose estimation for novel objects is a critical task in computer vision, yet it faces significant challenges in high-speed and low-light scenarios where standard RGB cameras suffer from motion blur. While event cameras offer a promising solution due to their high temporal resolution, current 6DoF pose estimation methods typically yield suboptimal performance in scenarios with high-speed object motion. To address this gap, we propose PoseStreamer, a robust multi-modal 6DoF pose estimation framework designed specifically for high-speed scenarios. Our approach integrates three core components: an Adaptive Pose Memory Queue that utilizes historical orientation cues for temporal consistency, an Object-centric 2D Tracker that provides strong 2D priors to boost 3D center recall, and a Ray Pose Filter for geometric refinement along camera rays. Furthermore, we introduce MoCapCube6D, a novel multi-modal dataset constructed to benchmark performance under rapid motion. Extensive experiments demonstrate that PoseStreamer not only achieves superior accuracy in high-speed scenarios, but also exhibits strong generalizability as a template-free framework for unseen moving objects.
| 2026-01-02
| 2026-01-05
|
[
"cs.CV"
] |
Huiming Yang, Linglin Liao, Fei Ding, Sibo Wang, Zijian Zeng
|
2601.00610
|
Vision-based Goal-Reaching Control for Mobile Robots Using a Hierarchical Learning Framework
|
Reinforcement learning (RL) is effective in many robotic applications, but it requires extensive exploration of the state-action space, during which behaviors can be unsafe. This significantly limits its applicability to large robots with complex actuators operating on unstable terrain. Hence, to design a safe goal-reaching control framework for large-scale robots, this paper decomposes the whole system into a set of tightly coupled functional modules. 1) A real-time visual pose estimation approach is employed to provide accurate robot states to 2) an RL motion planner for goal-reaching tasks that explicitly respects robot specifications. The RL module generates real-time smooth motion commands for the actuator system, independent of its underlying dynamic complexity. 3) In the actuation mechanism, a supervised deep learning model is trained to capture the complex dynamics of the robot and provide this model to 4) a model-based robust adaptive controller that guarantees the wheels track the RL motion commands even on slip-prone terrain. 5) Finally, to reduce human intervention, a mathematical safety supervisor monitors the robot, stops it on unsafe faults, and autonomously guides it back to a safe inspection area. The proposed framework guarantees uniform exponential stability of the actuation system and safety of the whole operation. Experiments on a 6,000 kg robot in different scenarios confirm the effectiveness of the proposed framework.
| 2026-01-02
| 2026-01-05
|
[
"cs.RO"
] |
Mehdi Heydari Shahna, Pauli Mustalahti, Jouni Mattila
|
2511.19601
|
Classical Spin Transitions and Absorptive Scattering
|
We describe an on-shell, amplitudes-based approach to incorporating radiation absorption effects in the post-Minkowskian scattering of generic, compact, spinning bodies. Classical spinning observables are recovered by extrapolating results calculated with finite quantum spin-$s$ particles to large spin, using the properties of spin universality and Casimir interpolation. At leading order our results give a completely general and non-redundant parametrization of absorptive observables in terms of a finite number of Wilson coefficients associated with 3-particle mass and spin-magnitude changing on-shell amplitudes. We denote these semi-fictitious microscopic processes \textit{classical spin transitions}. Explicit results for the leading-order impulse due to the absorption of scalar, electromagnetic and gravitational radiation, for spin transitions $\Delta s = 0,\pm 1, \pm 2$, are given in a fully interpolated form up to $\mathcal{O}\left(S^2\right)$, and Casimir-independent contributions are given up to $\mathcal{O}\left(S^4\right)$. Our explicit results reveal some surprising universal patterns. We find that, up to identification of Wilson coefficients, the Casimir-independent contributions to the impulse for spinning up and spinning down by the same magnitude $|\Delta s|$ are identical. For processes where the quantum $\Delta s<0$ transition is forbidden, the corresponding classical observable is suppressed in powers of $S$ by a predictable amount. Additionally we find that, while for generic non-aligned spin configurations there is a non-zero scattering angle at leading order, for aligned spin, similar to non-spinning absorption, the scattering angle vanishes and the impulse is purely longitudinal.
| 2026-01-02
| 2026-01-05
|
[
"hep-th"
] |
Juan Pablo Gatica, Callum R. T. Jones
|
2504.09156
|
LEL: Lipschitz Continuity Constrained Ensemble Learning for Efficient EEG-Based Intra-subject Emotion Recognition
|
Accurate and efficient recognition of emotional states is critical for human social functioning, and impairments in this ability are associated with significant psychosocial difficulties. While electroencephalography (EEG) offers a powerful tool for objective emotion detection, existing EEG-based Emotion Recognition (EER) methods suffer from three key limitations: (1) insufficient model stability, (2) limited accuracy in processing high-dimensional nonlinear EEG signals, and (3) poor robustness against intra-subject variability and signal noise. To address these challenges, we introduce Lipschitz continuity-constrained Ensemble Learning (LEL), a novel framework that enhances EEG-based emotion recognition by enforcing Lipschitz continuity constraints on Transformer-based attention mechanisms, spectral extraction, and normalization modules. This constraint ensures model stability, reduces sensitivity to signal variability and noise, and improves generalization capability. Additionally, LEL employs a learnable ensemble fusion strategy that optimally combines decisions from multiple heterogeneous classifiers to mitigate single-model bias and variance. Extensive experiments on three public benchmark datasets (EAV, FACED, and SEED) demonstrate superior performance, achieving average recognition accuracies of 74.25%, 81.19%, and 86.79%, respectively. The official implementation codes are available at https://github.com/NZWANG/LEL.
| 2026-01-02
| 2026-01-05
|
[
"cs.CV"
] |
Shengyu Gong, Yueyang Li, Zijian Kang, Bo Chai, Weiming Zeng, Hongjie Yan, Zhiguo Zhang, Wai Ting Siok, Nizhuan Wang
|
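The LEL abstract does not detail how its Lipschitz constraints are enforced on the attention, spectral extraction, and normalization modules. As general background only, one standard way to bound the Lipschitz constant of a linear map (the idea behind spectral normalization) is to cap its largest singular value, estimated by power iteration. The sketch below illustrates that generic technique and is not the LEL implementation:

```python
import numpy as np

def spectral_norm(W, n_iter=50):
    """Estimate the largest singular value of W via power iteration."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)

def lipschitz_constrain(W, target=1.0):
    """Rescale W so that x -> W @ x is (at most) `target`-Lipschitz
    in the l2 sense, i.e. its spectral norm is <= target."""
    sigma = spectral_norm(W)
    return W if sigma <= target else W * (target / sigma)

# Toy example: a random matrix whose spectral norm is well above 1.
rng = np.random.default_rng(1)
W = rng.normal(size=(64, 32)) * 2.0
W_hat = lipschitz_constrain(W, target=1.0)
print(round(spectral_norm(W_hat), 3))  # approximately 1.0
```

Bounding the spectral norm of each linear map bounds the Lipschitz constant of their composition by the product of the per-layer bounds, which is the usual route to the stability and noise-robustness properties the abstract describes.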
2511.10684
|
SpiderGen: Towards Procedure Generation For Carbon Life Cycle Assessments with Generative AI
|
The effects of climate change and global warming caused by GHG emissions are a key concern worldwide, and a large share of these emissions stems from the production, use, and disposal of consumer products. It is therefore important to build tools to estimate the environmental impact of consumer goods, an essential part of which is conducting Life Cycle Assessments (LCAs). LCAs specify and account for the appropriate processes involved in the production, use, and disposal of a product. We present SpiderGen, an LLM-based workflow which integrates the taxonomy and methodology of traditional LCA with the reasoning capabilities and world knowledge of LLMs to generate graphical representations of the key procedural information used for LCA, known as Product Category Rules Process Flow Graphs (PCR PFGs). We additionally evaluate the output of SpiderGen by comparing it with 65 real-world LCA documents. We find that SpiderGen provides LCA process information that is either fully correct or has only minor errors, achieving an F1-Score of 65% across 10 sample data points, compared to 53% for a one-shot prompting method. We observe that the remaining errors occur primarily due to differences in detail between LCA documents, as well as differences in the "scope" of which auxiliary processes must also be included. We also demonstrate that SpiderGen performs better than several baseline techniques, such as chain-of-thought prompting and one-shot prompting. Finally, we highlight SpiderGen's potential to reduce the human effort and costs of estimating carbon impact, as it can produce LCA process information for less than \$1 USD in under 10 minutes, compared to the status quo LCA process, which can cost over \$25,000 USD and take up to 21 person-days.
| 2026-01-02
| 2026-01-05
|
[
"cs.CL",
"cs.CY"
] |
Anupama Sitaraman, Bharathan Balaji, Yuvraj Agarwal
|
2510.10304
|
Sample-Efficient Online Learning in LM Agents via Hindsight Trajectory Rewriting
|
Language model (LM) agents deployed in novel environments often exhibit poor sample efficiency when learning from sequential interactions. This significantly hinders the usefulness of such agents in environments where interaction is costly (for example, when they interact with humans or reset physical systems). While a number of existing LM agent architectures incorporate various mechanisms for experience storage and reflection, they make limited use of LMs' abilities to directly generate or reason about full counterfactual trajectories. We introduce ECHO (Experience Consolidation via Hindsight Optimization), a prompting framework that adapts hindsight experience replay from reinforcement learning for language model agents. ECHO generates optimized trajectories for alternative goals that could have been achieved during failed attempts, effectively creating synthetic positive examples from unsuccessful interactions. Our approach consists of two components: a hindsight rule that uses the language model itself to identify relevant subgoals and generate optimized trajectories, and an update rule that maintains compressed trajectory representations in memory. We evaluate ECHO on stateful versions of XMiniGrid, a text-based navigation and planning benchmark, and PeopleJoinQA, a collaborative information-gathering enterprise simulation. Across both domains, ECHO outperforms vanilla language agent baselines by up to 80%; in XMiniGrid, it also outperforms a number of sophisticated agent architectures including Reflexion and AWM, demonstrating faster adaptation to novel environments through more effective utilization of past experiences.
| 2026-01-02
| 2026-01-06
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] |
Michael Y. Hu, Benjamin Van Durme, Jacob Andreas, Harsh Jhamtani
|
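ECHO's hindsight rule uses the language model itself to identify subgoals and rewrite full trajectories; the abstract does not give that procedure in implementable detail. As background, the classic hindsight-relabelling idea it adapts from reinforcement learning can be sketched in a few lines: take a failed trajectory, treat the state it actually reached as the goal, and recompute rewards so the failure becomes a synthetic success. The toy 1-D navigation states and helper below are illustrative assumptions, not the paper's method:

```python
def hindsight_relabel(trajectory, original_goal):
    """Generic hindsight relabelling: pretend the state the failed
    trajectory actually reached was the goal all along.

    trajectory: list of (state, action) pairs.
    Returns (relabelled_goal, transitions as (state, action, goal, reward)).
    """
    achieved = trajectory[-1][0]  # final state becomes the new goal
    relabelled = []
    for state, action in trajectory:
        reward = 1.0 if state == achieved else 0.0
        relabelled.append((state, action, achieved, reward))
    return achieved, relabelled

# Toy 1-D navigation: the agent aimed for position 5 but stopped at 3.
traj = [(0, "right"), (1, "right"), (2, "right"), (3, "stay")]
goal, data = hindsight_relabel(traj, original_goal=5)
print(goal)      # 3
print(data[-1])  # (3, 'stay', 3, 1.0)
```

ECHO goes further than this sketch in two ways the abstract names: the relabelled trajectories are LM-generated and optimized rather than replayed verbatim, and a compression rule keeps the memory of trajectory representations small.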
2504.08349
|
A Proof-Theoretic Approach to the Semantics of Classical Linear Logic
|
Linear logic (LL) is a resource-aware, abstract logic programming language that refines both classical and intuitionistic logic. Linear logic semantics is typically presented in one of two ways: by associating each formula with the set of all contexts that can be used to prove it (e.g. phase semantics) or by assigning meaning directly to proofs (e.g. coherence spaces).
This work proposes a different perspective on assigning meaning to proofs by adopting a proof-theoretic perspective. More specifically, we employ base-extension semantics (BeS) to characterise proofs through the notion of base support.
Recent developments have shown that BeS is powerful enough to capture proof-theoretic notions in structurally rich logics such as intuitionistic linear logic. In this paper, we extend this framework to the classical case, presenting a proof-theoretic approach to the semantics of the multiplicative-additive fragment of linear logic (MALL).
| 2026-01-02
| 2026-01-06
|
[
"cs.LO",
"math.LO"
] |
Victor Barroso-Nascimento, Ekaterina Piotrovskaya, Elaine Pimentel
|
2601.00743
|
An Agentic Framework for Neuro-Symbolic Programming
|
Integrating symbolic constraints into deep learning models could make them more robust, interpretable, and data-efficient, yet it remains a time-consuming and challenging task. Existing frameworks like DomiKnowS ease this integration by providing a high-level declarative programming interface, but they still assume the user is proficient with the library's specific syntax. We propose AgenticDomiKnowS (ADS) to eliminate this dependency. ADS translates free-form task descriptions into a complete DomiKnowS program using an agentic workflow that creates and tests each DomiKnowS component separately. The workflow supports optional human-in-the-loop intervention, enabling users familiar with DomiKnowS to refine intermediate outputs. We show how ADS enables both experienced DomiKnowS users and non-users to rapidly construct neuro-symbolic programs, reducing development time from hours to 10-15 minutes.
| 2026-01-02
| 2026-01-05
|
[
"cs.AI"
] |
Aliakbar Nafar, Chetan Chigurupati, Danial Kamali, Hamid Karimian, Parisa Kordjamshidi
|
2504.16549
|
Exponential decay of correlations for random interval diffeomorphisms
|
We consider a finite number of orientation preserving $C^2$ interval diffeomorphisms and apply them randomly in such a way that the expected Lyapunov exponents at the boundary points are positive. We prove the exponential decay of correlations for Lipschitz observables with respect to the unique stationary measure supported on the interior of the interval. The key step is to show the exponential synchronization in average.
| 2026-01-02
| 2026-01-05
|
[
"math.DS"
] |
Klaudiusz Czudek
|
2601.00940
|
Learning to Segment Liquids in Real-world Images
|
Different types of liquids such as water, wine and medicine appear in all aspects of daily life. However, limited attention has been given to the task of segmenting liquids in images, hindering the ability of robots to avoid or interact with liquids safely. Segmenting liquids is difficult because they come in diverse appearances and shapes; moreover, they can be transparent or reflective, taking on the appearance of arbitrary objects and scenes from the background or surroundings. To take on this challenge, we construct a large-scale dataset of liquids named LQDS, consisting of 5000 real-world images annotated into 14 distinct classes, and design a novel liquid detection model named LQDM, which leverages cross-attention between a dedicated boundary branch and the main segmentation branch to enhance segmentation predictions. Extensive experiments demonstrate the effectiveness of LQDM on the test set of LQDS, outperforming state-of-the-art methods and establishing a strong baseline for the semantic segmentation of liquids.
| 2026-01-02
| 2026-01-06
|
[
"cs.CV"
] |
Jonas Li, Michelle Li, Luke Liu, Heng Fan
|
2509.15294
|
Classical and Quantum Heuristics for the Binary Paint Shop Problem
|
The Binary Paint Shop Problem (BPSP) is an $\mathsf{APX}$-hard optimisation problem in automotive manufacturing: given a sequence of $2n$ cars, comprising $n$ distinct models each appearing twice, the task is to decide which of two colours to paint each car so that the two occurrences of each model are painted differently, while minimising consecutive colour swaps. The key performance metric is the paint swap ratio, the average number of colour changes per car, which directly impacts production efficiency and cost. Prior work showed that the Quantum Approximate Optimisation Algorithm (QAOA) at depth $p=7$ achieves a paint swap ratio of $0.393$, outperforming the classical Recursive Greedy (RG) heuristic with an expected ratio of $0.4$ [Phys. Rev. A 104, 012403 (2021)]. More recently, the classical Recursive Star Greedy (RSG) heuristic was conjectured to achieve an expected ratio of $0.361$. In this study, we develop the theoretical foundations for applying QAOA to BPSP through a reduction of BPSP to weighted MaxCut, and use this framework to benchmark two state-of-the-art low-depth QAOA variants, eXpressive QAOA (XQAOA) and Recursive QAOA (RQAOA), at $p=1$ (denoted XQAOA$_1$ and RQAOA$_1$), against the strongest classical heuristics known to date. Across instances ranging from $2^7$ to $2^{12}$ cars, XQAOA$_1$ achieves an average ratio of $0.357$, surpassing RQAOA$_1$ and all classical heuristics, including the conjectured performance of RSG. Surprisingly, RQAOA$_1$ shows diminishing performance as size increases: despite using provably optimal QAOA$_1$ parameters at each recursion, it is outperformed by RSG on most $2^{11}$-car instances and all $2^{12}$-car instances. To our knowledge, this is the first study to report RQAOA$_1$'s performance degradation at scale. In contrast, XQAOA$_1$ remains robust, indicating strong potential to asymptotically surpass all known heuristics.
| 2026-01-02
| 2026-01-05
|
[
"quant-ph",
"cs.DS",
"cs.ET",
"math.OC"
] |
V Vijendran, Dax Enshan Koh, Ping Koy Lam, Syed M Assad
|