Dataset schema (field: type, observed value lengths):
id: string, 64-64 characters
published: string, 19-25 characters
title: string, 7-262 characters
description: string, 6-54.4k characters
link: string, 31-227 characters
category: string, 6 distinct classes
image: string, 3-247 characters
d3bfe270ede0d223402d8eafb74c4e03be0c5c819dbc3c8e062e4e6935cb8a86
2026-01-13T00:00:00-05:00
Benchmarking Small Language Models and Small Reasoning Language Models on System Log Severity Classification
arXiv:2601.07790v1 Announce Type: new Abstract: System logs are crucial for monitoring and diagnosing modern computing infrastructure, but their scale and complexity require reliable and efficient automated interpretation. Since severity levels are predefined metadata in system log messages, having a model merely classify them offers limited standalone practical value, revealing little about its underlying ability to interpret system logs. We argue that severity classification is more informative when treated as a benchmark for probing runtime log comprehension rather than as an end task. Using real-world journalctl data from Linux production servers, we evaluate nine small language models (SLMs) and small reasoning language models (SRLMs) under zero-shot, few-shot, and retrieval-augmented generation (RAG) prompting. The results reveal strong stratification. Qwen3-4B achieves the highest accuracy at 95.64% with RAG, while Gemma3-1B improves from 20.25% under few-shot prompting to 85.28% with RAG. Notably, the tiny Qwen3-0.6B reaches 88.12% accuracy despite weak performance without retrieval. In contrast, several SRLMs, including Qwen3-1.7B and DeepSeek-R1-Distill-Qwen-1.5B, degrade substantially when paired with RAG. Efficiency measurements further separate models: most Gemma and Llama variants complete inference in under 1.2 seconds per log, whereas Phi-4-Mini-Reasoning exceeds 228 seconds per log while achieving <10% accuracy. These findings suggest that (1) architectural design, (2) training objectives, and (3) the ability to integrate retrieved context under strict output constraints jointly determine performance. By emphasizing small, deployable models, this benchmark aligns with real-time requirements of digital twin (DT) systems and shows that severity classification serves as a lens for evaluating model competence and real-time deployability, with implications for root cause analysis (RCA) and broader DT integration.
https://arxiv.org/abs/2601.07790
Academic Papers
svg
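The benchmark above contrasts zero-shot, few-shot, and retrieval-augmented prompting for log severity classification. As a concrete illustration of the RAG setting, the sketch below assembles a prompt by retrieving the most lexically similar labeled log lines as in-context examples; the prompt wording, the toy overlap-based retriever, and the use of the standard syslog severity names are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch of RAG-style prompting for log severity classification.
# The retriever, prompt template, and label names (standard syslog levels) are
# illustrative assumptions; they are not taken from the benchmarked setup.
from collections import Counter

SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def token_overlap(a: str, b: str) -> int:
    # Crude lexical similarity, standing in for a real embedding-based retriever.
    return sum((Counter(a.lower().split()) & Counter(b.lower().split())).values())

def build_rag_prompt(log_line: str, labeled_store: list[tuple[str, str]], k: int = 3) -> str:
    # Retrieve the k most similar labeled messages and prepend them as examples.
    neighbors = sorted(labeled_store, key=lambda ex: token_overlap(ex[0], log_line), reverse=True)[:k]
    examples = "\n".join(f"Log: {msg}\nSeverity: {sev}" for msg, sev in neighbors)
    return (
        "Classify the severity of the final system log message. "
        f"Answer with exactly one of: {', '.join(SEVERITIES)}.\n\n"
        f"{examples}\n\nLog: {log_line}\nSeverity:"
    )

# Toy labeled store and usage.
store = [("Out of memory: Killed process 1234 (java)", "err"),
         ("Started Daily apt upgrade and clean activities.", "info")]
print(build_rag_prompt("Out of memory: Killed process 5678 (python3)", store, k=2))
```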
631fed5da3fa9d7f75c97aabe78116648d08d7efef1c3bcca9a2ae7df57ce6da
2026-01-13T00:00:00-05:00
Necessary and Sufficient Conditions for the Existence of an LU Factorization for General Rank Deficient Matrices
arXiv:2601.07791v1 Announce Type: new Abstract: We establish necessary and sufficient conditions for the existence of an LU factorization $A=LU$ for an arbitrary square matrix $A$, including singular and rank-deficient cases, without the use of row or column permutations. We prove that such a factorization exists if and only if the nullity of every leading principal submatrix is bounded by the sum of the nullities of the corresponding leading column and row blocks. While building upon the work of Okunev and Johnson, we present simpler, constructive proofs. Furthermore, we extend these results to characterize rank-revealing factorizations, providing explicit sparsity bounds for the factors $L$ and $U$. Finally, we derive analogous necessary and sufficient conditions for the existence of factorizations constrained to have unit lower or unit upper triangular factors.
https://arxiv.org/abs/2601.07791
Academic Papers
svg
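For the LU existence result above, the abstract's nullity condition can be written compactly in symbols. The block notation below (leading principal submatrix $A_k$, leading row block $R_k$, leading column block $C_k$) is our own shorthand for the objects the abstract names.

```latex
% Nullity condition from the abstract, with A_k = A(1:k, 1:k), R_k = A(1:k, 1:n),
% and C_k = A(1:n, 1:k):
\[
  A = LU \ \text{exists without permutations} \iff
  \operatorname{null}(A_k) \;\le\; \operatorname{null}(C_k) + \operatorname{null}(R_k)
  \quad \text{for } k = 1,\dots,n.
\]
```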
4dd3d404a212f96f148fdc21f104afe4e36c79490b89b451ee794c6d84fa680e
2026-01-13T00:00:00-05:00
Kinship Data Benchmark for Multi-hop Reasoning
arXiv:2601.07794v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly evaluated on their ability to perform multi-hop reasoning, i.e., to combine multiple pieces of information into a coherent inference. We introduce KinshipQA, a benchmark designed to probe this capability through reasoning over kinship relations. The central contribution of our work is a generative pipeline that produces, on demand, large-scale, realistic, and culture-specific genealogical data: collections of interconnected family trees that satisfy explicit marriage constraints associated with different kinship systems. This allows task difficulty, cultural assumptions, and relational depth to be systematically controlled and varied. From these genealogies, we derive textual inference tasks that require reasoning over implicit relational chains. We evaluate the resulting benchmark using six state-of-the-art LLMs, spanning both open-source and closed-source models, under a uniform zero-shot protocol with deterministic decoding. Performance is measured using exact-match and set-based metrics. Our results demonstrate that KinshipQA yields a wide spread of outcomes and exposes systematic differences in multi-hop reasoning across models and cultural settings.
https://arxiv.org/abs/2601.07794
Academic Papers
svg
834580f5794123a2bd809d56f31cf699359d86bf08750c9126e8f247eb8e095e
2026-01-13T00:00:00-05:00
Vision-Language Model for Accurate Crater Detection
arXiv:2601.07795v1 Announce Type: new Abstract: The European Space Agency (ESA), driven by its ambitions on planned lunar missions with the Argonaut lander, has a profound interest in reliable crater detection, since craters pose a risk to safe lunar landings. This task is usually addressed with automated crater detection algorithms (CDA) based on deep learning techniques. It is non-trivial due to the vast number of craters of various sizes and shapes, as well as challenging conditions such as varying illumination and rugged terrain. Therefore, we propose a deep-learning CDA based on the OWLv2 model, which is built on a Vision Transformer that has proven highly effective in various computer vision tasks. For fine-tuning, we utilize a manually labeled dataset from the IMPACT project, which provides crater annotations on high-resolution Lunar Reconnaissance Orbiter Camera Calibrated Data Record images. We insert trainable parameters using a parameter-efficient fine-tuning strategy with Low-Rank Adaptation, and optimize a combined loss function consisting of Complete Intersection over Union (CIoU) for localization and a contrastive loss for classification. We achieve satisfactory visual results, along with a maximum recall of 94.0% and a maximum precision of 73.1% on a test dataset from IMPACT. Our method achieves reliable crater detection across challenging lunar imaging conditions, paving the way for robust crater analysis in future lunar exploration.
https://arxiv.org/abs/2601.07795
Academic Papers
svg
4fce96f39111a2fd12b4329b0e9fd5ed7b678e0144a9f7f50448ec36f548205d
2026-01-13T00:00:00-05:00
Learning Through Dialogue: Unpacking the Dynamics of Human-LLM Conversations on Political Issues
arXiv:2601.07796v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used as conversational partners for learning, yet the interactional dynamics supporting users' learning and engagement are understudied. We analyze the linguistic and interactional features from both LLM and participant chats across 397 human-LLM conversations about socio-political issues to identify the mechanisms and conditions under which LLM explanations shape changes in political knowledge and confidence. Mediation analyses reveal that LLM explanatory richness partially supports confidence by fostering users' reflective insight, whereas its effect on knowledge gain operates entirely through users' cognitive engagement. Moderation analyses show that these effects are highly conditional and vary by political efficacy. Confidence gains depend on how high-efficacy users experience and resolve uncertainty. Knowledge gains depend on high-efficacy users' ability to leverage extended interaction, with longer conversations benefiting primarily reflective users. In summary, we find that learning from LLMs is an interactional achievement, not a uniform outcome of better explanations. The findings underscore the importance of aligning LLM explanatory behavior with users' engagement states to support effective learning in designing Human-AI interactive systems.
https://arxiv.org/abs/2601.07796
Academic Papers
svg
a1e5dd23829a9c25bc2405513992e880b3f97e3bfe00841f7a8949bdd11ce2a7
2026-01-13T00:00:00-05:00
Lossy Source Coding with Broadcast Side Information
arXiv:2601.07797v1 Announce Type: new Abstract: This paper considers the source coding problem with broadcast side information. The side information is sent to two receivers through a noisy broadcast channel. We provide an outer bound of the rate-distortion-bandwidth (RDB) quadruples and achievable RDB quadruples when the helper uses a separation-based scheme. Some special cases with full characterization are also provided. We then compare the separation-based scheme with the uncoded scheme in the quadratic Gaussian case.
https://arxiv.org/abs/2601.07797
Academic Papers
svg
ea52fb645045dea86706d759ff55af33778a314296c0617b2e935a3060004b8a
2026-01-13T00:00:00-05:00
Exchange Is All You Need for Remote Sensing Change Detection
arXiv:2601.07805v1 Announce Type: new Abstract: Remote sensing change detection fundamentally relies on the effective fusion and discrimination of bi-temporal features. Prevailing paradigms typically utilize Siamese encoders bridged by explicit difference computation modules, such as subtraction or concatenation, to identify changes. In this work, we challenge this complexity with SEED (Siamese Encoder-Exchange-Decoder), a streamlined paradigm that replaces explicit differencing with parameter-free feature exchange. By sharing weights across both Siamese encoders and decoders, SEED effectively operates as a single parameter set model. Theoretically, we formalize feature exchange as an orthogonal permutation operator and prove that, under pixel consistency, this mechanism preserves mutual information and Bayes optimal risk, whereas common arithmetic fusion methods often introduce information loss. Extensive experiments across five benchmarks, including SYSU-CD, LEVIR-CD, PX-CLCD, WaterCD, and CDD, and three backbones, namely SwinT, EfficientNet, and ResNet, demonstrate that SEED matches or surpasses state of the art methods despite its simplicity. Furthermore, we reveal that standard semantic segmentation models can be transformed into competitive change detectors solely by inserting this exchange mechanism, referred to as SEG2CD. The proposed paradigm offers a robust, unified, and interpretable framework for change detection, demonstrating that simple feature exchange is sufficient for high performance information fusion. Code and full training and evaluation protocols will be released at https://github.com/dyzy41/open-rscd.
https://arxiv.org/abs/2601.07805
Academic Papers
svg
25cc3aa101a27413ef11096e5062675d510b968c2062a6600c50b7315b3a618c
2026-01-13T00:00:00-05:00
The Confidence Trap: Gender Bias and Predictive Certainty in LLMs
arXiv:2601.07806v1 Announce Type: new Abstract: The increased use of Large Language Models (LLMs) in sensitive domains leads to growing interest in how their confidence scores correspond to fairness and bias. This study examines the alignment between LLM-predicted confidence and human-annotated bias judgments. Focusing on gender bias, the research investigates probability confidence calibration in contexts involving gendered pronoun resolution. The goal is to evaluate if calibration metrics based on predicted confidence scores effectively capture fairness-related disparities in LLMs. The results show that, among the six state-of-the-art models, Gemma-2 demonstrates the worst calibration according to the gender bias benchmark. The primary contribution of this work is a fairness-aware evaluation of LLMs' confidence calibration, offering guidance for ethical deployment. In addition, we introduce a new calibration metric, Gender-ECE, designed to measure gender disparities in resolution tasks.
https://arxiv.org/abs/2601.07806
Academic Papers
svg
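To make the calibration quantities above concrete, here is a minimal sketch of the standard expected calibration error (ECE) and a simple per-group ECE gap. The abstract does not define Gender-ECE, so the grouped variant below is only an assumed illustration of how such a disparity measure could be computed.

```python
# Standard ECE plus a simple per-group ECE gap. The grouped variant is a
# hypothetical illustration, not the paper's Gender-ECE definition.
import numpy as np

def ece(confidences, correct, n_bins=10):
    # Bin predictions by confidence and average |accuracy - confidence| per bin.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, err = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            err += mask.sum() / total * abs(correct[mask].mean() - confidences[mask].mean())
    return err

def grouped_ece_gap(conf, correct, group):
    # Spread of ECE across groups (e.g., gendered pronoun contexts).
    vals = [ece(conf[group == g], correct[group == g]) for g in np.unique(group)]
    return max(vals) - min(vals)

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)
correct = (rng.uniform(size=1000) < conf).astype(float)   # roughly calibrated toy data
group = rng.integers(0, 2, 1000)
print(round(ece(conf, correct), 3), round(grouped_ece_gap(conf, correct, group), 3))
```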
053276e809c0875976f89296d5166cd53d904b89fb55fdb8394fbe4ac2912a81
2026-01-13T00:00:00-05:00
More Images, More Problems? A Controlled Analysis of VLM Failure Modes
arXiv:2601.07812v1 Announce Type: new Abstract: Large Vision Language Models (LVLMs) have demonstrated remarkable capabilities, yet their proficiency in understanding and reasoning over multiple images remains largely unexplored. While existing benchmarks have initiated the evaluation of multi-image models, a comprehensive analysis of their core weaknesses and their causes is still lacking. In this work, we introduce MIMIC (Multi-Image Model Insights and Challenges), a new benchmark designed to rigorously evaluate the multi-image capabilities of LVLMs. Using MIMIC, we conduct a series of diagnostic experiments that reveal pervasive issues: LVLMs often fail to aggregate information across images and struggle to track or attend to multiple concepts simultaneously. To address these failures, we propose two novel complementary remedies. On the data side, we present a procedural data-generation strategy that composes single-image annotations into rich, targeted multi-image training examples. On the optimization side, we analyze layer-wise attention patterns and derive an attention-masking scheme tailored for multi-image inputs. In experiments, these remedies substantially improve cross-image aggregation while also enhancing performance on existing multi-image benchmarks, outperforming the prior state of the art across tasks. Data and code will be made available at https://github.com/anurag-198/MIMIC.
https://arxiv.org/abs/2601.07812
Academic Papers
svg
e074c892026a20f0272c7a3cd8d38fcae6b86f35e44baf8820d5e67bfd77815b
2026-01-13T00:00:00-05:00
Data-driven control of hydraulic impact hammers under strict operational and control constraints
arXiv:2601.07813v1 Announce Type: new Abstract: This paper presents a data-driven methodology for the control of static hydraulic impact hammers, also known as rock breakers, which are commonly used in the mining industry. The task addressed in this work is that of controlling the rock-breaker so its end-effector reaches arbitrary target poses, which is required in normal operation to place the hammer on top of rocks that need to be fractured. The proposed approach considers several constraints, such as unobserved state variables due to limited sensing and the strict requirement of using a discrete control interface at the joint level. First, the proposed methodology addresses the problem of system identification to obtain an approximate dynamic model of the hydraulic arm. This is done via supervised learning, using only teleoperation data. The learned dynamic model is then exploited to obtain a controller capable of reaching target end-effector poses. For policy synthesis, both reinforcement learning (RL) and model predictive control (MPC) algorithms are utilized and contrasted. As a case study, we consider the automation of a Bobcat E10 mini-excavator arm with a hydraulic impact hammer attached as end-effector. Using this machine, both the system identification and policy synthesis stages are studied in simulation and in the real world. The best RL-based policy consistently reaches target end-effector poses with position errors below 12 cm and pitch angle errors below 0.08 rad in the real world. Considering that the impact hammer has a 4 cm diameter chisel, this level of precision is sufficient for breaking rocks. Notably, this is accomplished by relying only on approximately 68 min of teleoperation data to train and 8 min to evaluate the dynamic model, and without performing any adjustments for a successful policy Sim2Real transfer. A demonstration of policy execution in the real world can be found in https://youtu.be/e-7tDhZ4ZgA.
https://arxiv.org/abs/2601.07813
Academic Papers
svg
626762d5fbe75a3571afeed4bdd4c8f6e104f7050c569827d1957ca98488720d
2026-01-13T00:00:00-05:00
Reference Games as a Testbed for the Alignment of Model Uncertainty and Clarification Requests
arXiv:2601.07820v1 Announce Type: new Abstract: In human conversation, both interlocutors play an active role in maintaining mutual understanding. When addressees are uncertain about what speakers mean, for example, they can request clarification. It is an open question for language models whether they can assume a similar addressee role, recognizing and expressing their own uncertainty through clarification. We argue that reference games are a good testbed to approach this question as they are controlled, self-contained, and make clarification needs explicit and measurable. To test this, we evaluate three vision-language models comparing a baseline reference resolution task to an experiment where the models are instructed to request clarification when uncertain. The results suggest that even in such simple tasks, models often struggle to recognize internal uncertainty and translate it into adequate clarification behavior. This demonstrates the value of reference games as testbeds for interaction qualities of (vision and) language models.
https://arxiv.org/abs/2601.07820
Academic Papers
svg
466920c0f9845c71209bbf7e648e89feec8fa114882956cd179c57a0a42cb074
2026-01-13T00:00:00-05:00
Failure-Aware RL: Reliable Offline-to-Online Reinforcement Learning with Self-Recovery for Real-World Manipulation
arXiv:2601.07821v1 Announce Type: new Abstract: Post-training algorithms based on deep reinforcement learning can push the limits of robotic models for specific objectives, such as generalizability, accuracy, and robustness. However, Intervention-requiring Failures (IR Failures) (e.g., a robot spilling water or breaking fragile glass) during real-world exploration happen inevitably, hindering the practical deployment of such a paradigm. To tackle this, we introduce Failure-Aware Offline-to-Online Reinforcement Learning (FARL), a new paradigm minimizing failures during real-world reinforcement learning. We create FailureBench, a benchmark that incorporates common failure scenarios requiring human intervention, and propose an algorithm that integrates a world-model-based safety critic and a recovery policy trained offline to prevent failures during online exploration. Extensive simulation and real-world experiments demonstrate the effectiveness of FARL in significantly reducing IR Failures while improving performance and generalization during online reinforcement learning post-training. FARL reduces IR Failures by 73.1% while elevating performance by 11.3% on average during real-world RL post-training. Videos and code are available at https://failure-aware-rl.github.io.
https://arxiv.org/abs/2601.07821
Academic Papers
svg
b18783ff60c6b0b43fa109d3dda9a906f513b24ea52ca845e0df79da73fd6090
2026-01-13T00:00:00-05:00
Video Generation Models in Robotics - Applications, Research Challenges, Future Directions
arXiv:2601.07823v1 Announce Type: new Abstract: Video generation models have emerged as high-fidelity models of the physical world, capable of synthesizing high-quality videos capturing fine-grained interactions between agents and their environments conditioned on multi-modal user inputs. Their impressive capabilities address many of the long-standing challenges faced by physics-based simulators, driving broad adoption in many problem domains, e.g., robotics. For example, video models enable photorealistic, physically consistent deformable-body simulation without making prohibitive simplifying assumptions, which is a major bottleneck in physics-based simulation. Moreover, video models can serve as foundation world models that capture the dynamics of the world in a fine-grained and expressive way. They thus overcome the limited expressiveness of language-only abstractions in describing intricate physical interactions. In this survey, we provide a review of video models and their applications as embodied world models in robotics, encompassing cost-effective data generation and action prediction in imitation learning, dynamics and rewards modeling in reinforcement learning, visual planning, and policy evaluation. Further, we highlight important challenges hindering the trustworthy integration of video models in robotics, which include poor instruction following, hallucinations such as violations of physics, and unsafe content generation, in addition to fundamental limitations such as significant data curation, training, and inference costs. We present potential future directions to address these open research challenges to motivate research and ultimately facilitate broader applications, especially in safety-critical settings.
https://arxiv.org/abs/2601.07823
Academic Papers
svg
87d83e2b187fdb0bb10f641e0ec8f56b4633e11f0460fa3e4f6f802cebf7a5a1
2026-01-13T00:00:00-05:00
Tensor Algebra Processing Primitives (TAPP): Towards a Standard for Tensor Operations
arXiv:2601.07827v1 Announce Type: new Abstract: To address the absence of a universal standard interface for tensor operations, we introduce the Tensor Algebra Processing Primitives (TAPP), a C-based interface designed to decouple the application layer from hardware-specific implementations. We provide a mathematical formulation of tensor contractions and a reference implementation to ensure correctness and facilitate the validation of optimized kernels. Developed through community consensus involving academic and industrial stakeholders, TAPP aims to enable performance portability and resolve dependency challenges. The viability of the standard is demonstrated through successful integrations with the TBLIS and cuTENSOR libraries, as well as the DIRAC quantum chemistry package.
https://arxiv.org/abs/2601.07827
Academic Papers
svg
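TAPP standardizes tensor operations such as general contractions, but the abstract does not spell out the C interface. The snippet below therefore only illustrates the mathematical operation being standardized, a binary tensor contraction, using numpy.einsum rather than any TAPP call.

```python
# A binary tensor contraction, C_{ab} = sum_{ij} A_{aij} B_{jib}, written with
# numpy.einsum purely to illustrate the operation class; no TAPP API is shown.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 5, 6))          # indices a, i, j
B = rng.normal(size=(6, 5, 3))          # indices j, i, b
C = np.einsum("aij,jib->ab", A, B)      # contract over the shared indices i and j
print(C.shape)                          # (4, 3)
```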
946b4ae44da84acfffca28489bf64247041cd4292ab546f36729bee1586eb68b
2026-01-13T00:00:00-05:00
Optimal Learning Rate Schedule for Balancing Effort and Performance
arXiv:2601.07830v1 Announce Type: new Abstract: Learning how to learn efficiently is a fundamental challenge for biological agents and a growing concern for artificial ones. To learn effectively, an agent must regulate its learning speed, balancing the benefits of rapid improvement against the costs of effort, instability, or resource use. We introduce a normative framework that formalizes this problem as an optimal control process in which the agent maximizes cumulative performance while incurring a cost of learning. From this objective, we derive a closed-form solution for the optimal learning rate, which has the form of a closed-loop controller that depends only on the agent's current and expected future performance. Under mild assumptions, this solution generalizes across tasks and architectures and reproduces numerically optimized schedules in simulations. In simple learning models, we can mathematically analyze how agent and task parameters shape learning-rate scheduling as an open-loop control solution. Because the optimal policy depends on expectations of future performance, the framework predicts how overconfidence or underconfidence influence engagement and persistence, linking the control of learning speed to theories of self-regulated learning. We further show how a simple episodic memory mechanism can approximate the required performance expectations by recalling similar past learning experiences, providing a biologically plausible route to near-optimal behaviour. Together, these results provide a normative and biologically plausible account of learning speed control, linking self-regulated learning, effort allocation, and episodic memory estimation within a unified and tractable mathematical framework.
https://arxiv.org/abs/2601.07830
Academic Papers
svg
f344c56bedb43f0ef7d4000f345d790589cf0cfc67f094f6a1b1df6ac93e8ed0
2026-01-13T00:00:00-05:00
MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head
arXiv:2601.07832v1 Announce Type: new Abstract: While the Transformer architecture dominates many fields, its quadratic self-attention complexity hinders its use in large-scale applications. Linear attention offers an efficient alternative, but its direct application often degrades performance, with existing fixes typically re-introducing computational overhead through extra modules (e.g., depthwise separable convolution) that defeat the original purpose. In this work, we identify a key failure mode in these methods: global context collapse, where the model loses representational diversity. To address this, we propose Multi-Head Linear Attention (MHLA), which preserves this diversity by computing attention within divided heads along the token dimension. We prove that MHLA maintains linear complexity while recovering much of the expressive power of softmax attention, and verify its effectiveness across multiple domains, achieving a 3.6% improvement on ImageNet classification, a 6.3% gain on NLP, a 12.6% improvement on image generation, and a 41% enhancement on video generation under the same time complexity.
https://arxiv.org/abs/2601.07832
Academic Papers
svg
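The MHLA abstract describes computing linear attention within heads divided along the token dimension. The numpy sketch below shows a standard kernelized linear attention and one literal reading of that token-dimension split (contiguous token groups attended independently); the feature map, grouping scheme, and head count are assumptions for illustration, not the paper's exact formulation.

```python
# Kernelized linear attention plus an assumed token-dimension head split,
# included only to make the mechanism in the abstract concrete.
import numpy as np

def phi(x):
    # Positive feature map commonly used for linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    # O(N d^2): accumulate K^T V and K^T 1 once, then apply them to every query.
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                       # (d, d_v)
    z = Kf.sum(axis=0)                  # (d,)
    return (Qf @ kv) / (Qf @ z[:, None] + eps)

def token_multihead_linear_attention(Q, K, V, heads=4):
    # Split the token axis into contiguous groups and attend within each group.
    outs = [linear_attention(q, k, v)
            for q, k, v in zip(np.array_split(Q, heads),
                               np.array_split(K, heads),
                               np.array_split(V, heads))]
    return np.concatenate(outs, axis=0)

N, d = 64, 16
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
print(token_multihead_linear_attention(Q, K, V, heads=4).shape)  # (64, 16)
```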
60e622288bab94a6776424db54df579322231bd4ec8650375ceb159c088f9c47
2026-01-13T00:00:00-05:00
Tuning-free Visual Effect Transfer across Videos
arXiv:2601.07833v1 Announce Type: new Abstract: We present RefVFX, a new framework that transfers complex temporal effects from a reference video onto a target video or image in a feed-forward manner. While existing methods excel at prompt-based or keyframe-conditioned editing, they struggle with dynamic temporal effects such as dynamic lighting changes or character transformations, which are difficult to describe via text or static conditions. Transferring a video effect is challenging, as the model must integrate the new temporal dynamics with the input video's existing motion and appearance. To address this, we introduce a large-scale dataset of triplets, where each triplet consists of a reference effect video, an input image or video, and a corresponding output video depicting the transferred effect. Creating this data is non-trivial, especially the video-to-video effect triplets, which do not exist naturally. To generate these, we propose a scalable automated pipeline that creates high-quality paired videos designed to preserve the input's motion and structure while transforming it based on some fixed, repeatable effect. We then augment this data with image-to-video effects derived from LoRA adapters and code-based temporal effects generated through programmatic composition. Building on our new dataset, we train our reference-conditioned model using recent text-to-video backbones. Experimental results demonstrate that RefVFX produces visually consistent and temporally coherent edits, generalizes across unseen effect categories, and outperforms prompt-only baselines in both quantitative metrics and human preference. See our website at https://tuningfreevisualeffects-maker.github.io/Tuning-free-Visual-Effect-Transfer-across-Videos-Project-Page/.
https://arxiv.org/abs/2601.07833
Academic Papers
svg
4b93f0d1f0101e8a19c51f2f5518ba521ee90e3d5f6f4c84e2126434060a6d5a
2026-01-13T00:00:00-05:00
SecureCAI: Injection-Resilient LLM Assistants for Cybersecurity Operations
arXiv:2601.07835v1 Announce Type: new Abstract: Large Language Models have emerged as transformative tools for Security Operations Centers, enabling automated log analysis, phishing triage, and malware explanation; however, deployment in adversarial cybersecurity environments exposes critical vulnerabilities to prompt injection attacks, where malicious instructions embedded in security artifacts manipulate model behavior. This paper introduces SecureCAI, a novel defense framework that extends Constitutional AI principles with security-aware guardrails, adaptive constitution evolution, and Direct Preference Optimization for unlearning unsafe response patterns, addressing the unique challenges of high-stakes security contexts where traditional safety mechanisms prove insufficient against sophisticated adversarial manipulation. Experimental evaluation demonstrates that SecureCAI reduces attack success rates by 94.7% compared to baseline models while maintaining 95.1% accuracy on benign security analysis tasks. The framework incorporates continuous red-teaming feedback loops that enable dynamic adaptation to emerging attack strategies, and it achieves constitution adherence scores exceeding 0.92 under sustained adversarial pressure. These results establish a foundation for trustworthy integration of language model capabilities into operational cybersecurity workflows and address a critical gap in current approaches to AI safety within adversarial domains.
https://arxiv.org/abs/2601.07835
Academic Papers
svg
7f75621e877a26e12fc96fc12915f0408ef0bc694a3b33d60a75aa9c4e337d13
2026-01-13T00:00:00-05:00
Certainty-Guided Reasoning in Large Language Models: A Dynamic Thinking Budget Approach
arXiv:2509.07820v1 Announce Type: cross Abstract: The rise of large reasoning language models (LRLMs) has unlocked new potential for solving complex tasks. These models operate with a thinking budget, that is, a predefined number of reasoning tokens used to arrive at a solution. We propose a novel approach, inspired by the generator/discriminator framework in generative adversarial networks, in which a critic model periodically probes its own reasoning to assess whether it has reached a confident conclusion. If not, reasoning continues until a target certainty threshold is met. This mechanism adaptively balances efficiency and reliability by allowing early termination when confidence is high, while encouraging further reasoning when uncertainty persists. Through experiments on the AIME2024 and AIME2025 datasets, we show that Certainty-Guided Reasoning (CGR) improves baseline accuracy while reducing token usage. Importantly, extended multi-seed evaluations over 64 runs demonstrate that CGR is stable, reducing variance across seeds and improving exam-like performance under penalty-based grading. Additionally, our token savings analysis shows that CGR can eliminate millions of tokens in aggregate, with tunable trade-offs between certainty thresholds and efficiency. Together, these findings highlight certainty as a powerful signal for reasoning sufficiency. By integrating confidence into the reasoning process, CGR makes large reasoning language models more adaptive, trustworthy, and resource efficient, paving the way for practical deployment in domains where both accuracy and computational cost matter.
https://arxiv.org/abs/2509.07820
Academic Papers
svg
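The CGR abstract describes periodically probing confidence during reasoning and stopping once a certainty threshold is reached or the thinking budget runs out. The loop below is only a schematic of that control flow with placeholder generator and confidence-probe functions; the chunk size, threshold, and probing mechanism are illustrative assumptions, not the paper's implementation.

```python
# Schematic certainty-guided reasoning loop: reason in fixed-size chunks, probe
# confidence after each chunk, and stop early when a certainty threshold is met
# or the budget is exhausted. Generator and probe are placeholder stand-ins.
def certainty_guided_reasoning(generate_chunk, probe_confidence,
                               budget_tokens=4096, chunk_tokens=256, threshold=0.9):
    trace, used = "", 0
    while used < budget_tokens:
        trace += generate_chunk(trace, chunk_tokens)   # extend the reasoning trace
        used += chunk_tokens
        if probe_confidence(trace) >= threshold:       # confident enough: stop early
            break
    return trace, used

# Toy stand-ins so the loop runs end to end.
toy_generate = lambda trace, n: f" step{len(trace) // 7}"
toy_probe = lambda trace: min(1.0, len(trace) / 50)
print(certainty_guided_reasoning(toy_generate, toy_probe))
```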
c262e8bc3ac2748ecd05dfa1ed59d24eb811541d2f2ffca8208cdba2b17271c4
2026-01-13T00:00:00-05:00
Aligning by Misaligning: Boundary-aware Curriculum Learning for Multimodal Alignment
arXiv:2511.08399v1 Announce Type: cross Abstract: Most multimodal models treat every negative pair alike, ignoring the ambiguous negatives that differ from the positive by only a small detail. We propose Boundary-Aware Curriculum with Local Attention (BACL), a lightweight add-on that turns these borderline cases into a curriculum signal. A Boundary-aware Negative Sampler gradually raises difficulty, while a Contrastive Local Attention loss highlights where the mismatch occurs. The two modules are fully differentiable and work with any off-the-shelf dual encoder. Theory predicts a fast O(1/n) error rate; practice shows up to +32% R@1 over CLIP and new SOTA on four large-scale benchmarks, all without extra labels.
https://arxiv.org/abs/2511.08399
Academic Papers
svg
579928b0c3559e11f886a62b30873fde7de837394e5324188d31029660d09254
2026-01-13T00:00:00-05:00
SmartSplat: Feature-Smart Gaussians for Scalable Compression of Ultra-High-Resolution Images
arXiv:2512.20377v1 Announce Type: cross Abstract: Recent advances in generative AI have accelerated the production of ultra-high-resolution visual content, posing significant challenges for efficient compression and real-time decoding on end-user devices. Inspired by 3D Gaussian Splatting, recent 2D Gaussian image models improve representation efficiency, yet existing methods struggle to balance compression ratio and reconstruction fidelity in ultra-high-resolution scenarios. To address this issue, we propose SmartSplat, a highly adaptive and feature-aware GS-based image compression framework that supports arbitrary image resolutions and compression ratios. SmartSplat leverages image-aware features such as gradients and color variances, introducing a Gradient-Color Guided Variational Sampling strategy together with an Exclusion-based Uniform Sampling scheme to improve the non-overlapping coverage of Gaussian primitives in pixel space. In addition, we propose a Scale-Adaptive Gaussian Color Sampling method to enhance color initialization across scales. Through joint optimization of spatial layout, scale, and color initialization, SmartSplat efficiently captures both local structures and global textures using a limited number of Gaussians, achieving high reconstruction quality under strong compression. Extensive experiments on DIV8K and a newly constructed 16K dataset demonstrate that SmartSplat consistently outperforms state-of-the-art methods at comparable compression ratios and exceeds their compression limits, showing strong scalability and practical applicability. The code is publicly available at https://github.com/lif314/SmartSplat.
https://arxiv.org/abs/2512.20377
Academic Papers
svg
cef3af540cca6c8b3fe8853593e0677a0dd2e8e4191b913f60c22379dd399e0e
2026-01-13T00:00:00-05:00
Personalized Spiking Neural Networks with Ferroelectric Synapses for EEG Signal Processing
arXiv:2601.00020v2 Announce Type: cross Abstract: Electroencephalography (EEG)-based brain-computer interfaces (BCIs) are strongly affected by non-stationary neural signals that vary across sessions and individuals, limiting the generalization of subject-agnostic models and motivating adaptive and personalized learning on resource-constrained platforms. Programmable memristive hardware offers a promising substrate for such post-deployment adaptation; however, practical realization is challenged by limited weight resolution, device variability, nonlinear programming dynamics, and finite device endurance. In this work, we show that spiking neural networks (SNNs) can be deployed on ferroelectric memristive synaptic devices for adaptive EEG-based motor imagery decoding under realistic device constraints. We fabricate, characterize, and model ferroelectric synapses. We evaluate a convolutional-recurrent SNN architecture under two complementary deployment strategies: (i) device-aware training using a ferroelectric synapse model, and (ii) transfer of software-trained weights followed by low-overhead on-device re-tuning. To enable efficient adaptation, we introduce a device-aware weight-update strategy in which gradient-based updates are accumulated digitally and converted into discrete programming events only when a threshold is exceeded, emulating nonlinear, state-dependent programming dynamics while reducing programming frequency. Both deployment strategies achieve classification performance comparable to state-of-the-art software-based SNNs. Furthermore, subject-specific transfer learning achieved by retraining only the final network layers improves classification accuracy. These results demonstrate that programmable ferroelectric hardware can support robust, low-overhead adaptation in spiking neural networks, opening a practical path toward personalized neuromorphic processing of neural signals.
https://arxiv.org/abs/2601.00020
Academic Papers
svg
412c22c91c96c067f45ddc9acaff37f7d62a3f352f506e0fdab7c0fd82dd8405
2026-01-13T00:00:00-05:00
Efficient GPU-computing simulation platform JAX-PF for differentiable phase field model
arXiv:2601.06079v1 Announce Type: cross Abstract: We present JAX-PF, an open-source, GPU-accelerated, and differentiable Phase Field (PF) software package, supporting both explicit and implicit time stepping schemes. Leveraging the modern computing architecture JAX, JAX-PF achieves high performance through array programming and GPU acceleration, delivering ~5x speedup over PRISMS-PF with MPI (24 CPU cores) for systems with ~4.19 million degrees of freedom using explicit schemes, and scaling efficiently with implicit schemes for large-size problems. Furthermore, a key feature of JAX-PF is automatic differentiation (AD), eliminating manual derivations of free-energy functionals and Jacobians. Beyond forward simulations, JAX-PF demonstrates its potential in inverse design by providing sensitivities for gradient-based optimization. We demonstrate, for the first time, the calibration of PF material parameters using AD-based sensitivities, highlighting its capability for high-dimensional inverse problems. By combining efficiency, flexibility, and full differentiability, JAX-PF offers a fast, practical, and integrated tool for forward simulation and inverse design, advancing co-designing of material and manufacturing processes and supporting the goals of the Materials Genome Initiative.
https://arxiv.org/abs/2601.06079
Academic Papers
svg
d55c2a15070868841b4da8e3598544f1032c983a9965c5d72026d1c704155b5a
2026-01-13T00:00:00-05:00
First Multi-Constellation Observations of Navigation Satellite Signals in the Lunar Domain by Post-Processing L1/L5 IQ Snapshots
arXiv:2601.06081v1 Announce Type: cross Abstract: The use of Global Navigation Satellite Systems (GNSS) to increase spacecraft autonomy for orbit determination has gained renewed momentum following the Lunar GNSS Receiver Experiment (LuGRE), which demonstrated feasible onboard GPS and Galileo signal reception and tracking at lunar distances. This work processes in-phase and quadrature (IQ) snapshots collected by the LuGRE receiver in cis-lunar space and on the lunar surface to assess multi-frequency, multi-constellation signal availability. Signals from additional systems beyond GPS and Galileo, including RNSS and SBAS constellations, are observable and successfully acquired exclusively in the recorded IQ snapshots. These observations provide the first experimental evidence that signals from multiple constellations, including systems not supported by LuGRE realtime operations, are detectable at unprecedented distances from Earth. Useful observables can be extracted from the IQ snapshots, despite minimal sampling rates, 4-bit quantization, and short durations (200 ms-2 s), through a hybrid coherent/non-coherent acquisition stage compensating for code Doppler. These observations are exploited to tune simulation tools and to perform extended simulation campaigns, showing that the inclusion of additional constellations significantly improves availability; for a 26 dB-Hz acquisition threshold, the fraction of epochs with at least four visible satellites increases from 11% to 46% of the total epoch count. These findings indicate that BeiDou, RNSS, and SBAS signals can substantially enhance GNSS-based autonomy for lunar and cislunar missions.
https://arxiv.org/abs/2601.06081
Academic Papers
svg
73d51db08f3526cf135c2a9fe5ed306b0310b6505bf89a5a9769f354d48141e5
2026-01-13T00:00:00-05:00
PriceSeer: Evaluating Large Language Models in Real-Time Stock Prediction
arXiv:2601.06088v1 Announce Type: cross Abstract: Stock prediction, a subject closely related to people's investment activities in fully dynamic and live environments, has been widely studied. Current large language models (LLMs) have shown remarkable potential in various domains, exhibiting expert-level performance through advanced reasoning and contextual understanding. In this paper, we introduce PriceSeer, a live, dynamic, and data-uncontaminated benchmark specifically designed for LLMs performing stock prediction tasks. Specifically, PriceSeer includes 110 U.S. stocks from 11 industrial sectors, with each containing 249 historical data points. Our benchmark implements both internal and external information expansion, where LLMs receive extra financial indicators, news, and fake news to perform stock price prediction. We evaluate six cutting-edge LLMs under different prediction horizons, demonstrating their potential in generating investment strategies after obtaining accurate price predictions for different sectors. Additionally, we provide analyses of LLMs' suboptimal performance in long-term predictions, including the vulnerability to fake news and specific industries. The code and evaluation data will be open-sourced at https://github.com/BobLiang2113/PriceSeer.
https://arxiv.org/abs/2601.06088
Academic Papers
svg
da432364dd22848596687967c3da06490a2568bd6dd5496651fb23fdcba604ce
2026-01-13T00:00:00-05:00
Auditory Filter Behavior and Updated Estimated Constants
arXiv:2601.06094v1 Announce Type: cross Abstract: Filters from the Gammatone family are often used to model auditory signal processing, but the filter constant values used to mimic human hearing are largely set to values based on historical psychoacoustic data collected several decades ago. Here, we move away from this long-standing convention, and estimate filter constants using a range of more recent reported filter characteristics (such as quality factors and ratios between quality factors and peak group delay) within a characteristics-based framework that clarifies how filter behavior is related to the underlying constants. Using a sharp-filter approximation that captures shared peak-region behavior across certain classes of filters, we analyze the range of behaviors accessible when the full degrees of freedom of the filter are utilized rather than fixing the filter order or exponent to historically prescribed values. Filter behavior is characterized using magnitude-based and phase-based characteristics and their ratios, which reveal which characteristics are informative for constraining filter constants and which are only weakly constraining. We show that these insights and estimation methods extend to multiple realizable filter classes from the Gammatone family and apply them, together with recent physiological and psychoacoustic observations, to derive constraints on and estimates for filter constants for human auditory filters. More broadly, this framework supports the design of auditory filters with arbitrary characteristic-level specifications and enables systematic assessment of how variations in filter characteristics influence auditory models, perceptual findings, and technologies that rely on auditory filterbanks.
https://arxiv.org/abs/2601.06094
Academic Papers
svg
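For context on the filter constants being re-estimated above, the standard Gammatone impulse response is shown below; the exact parameterization used in the paper, including which constants are freed rather than fixed to historical values, is not reproduced here.

```latex
% Standard Gammatone impulse response (for t >= 0): the order n and bandwidth
% parameter b are the kinds of filter constants the abstract re-estimates from
% reported characteristics such as quality factors and peak group delay.
\[
  g(t) \;=\; a\, t^{\,n-1} e^{-2\pi b t} \cos\!\left(2\pi f_c t + \phi\right),
  \qquad t \ge 0,
\]
% with amplitude a, centre frequency f_c, and phase \phi.
```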
9e473eb556f87dbdea61c0b01d51c41ad321abeae50d96ba46e545833c37df4d
2026-01-13T00:00:00-05:00
Emergent Complexity in Nuclear Reaction Networks: A Study of Stellar Nucleosynthesis through Chemical Organization Theory
arXiv:2601.06143v1 Announce Type: cross Abstract: We explore the emergence of complex structures within reaction networks, focusing on nuclear reaction networks relevant to stellar nucleosynthesis. The work presents a theoretical framework rooted in Chemical Organization Theory (COT) to characterize how stable, self-sustaining structures arise from the interactions of basic components. Key theoretical contributions include the formalization of atom sets as fundamental reactive units and the concept of synergy to describe the emergence of new reactions and species from the interaction of these units. The property of separability is defined to distinguish dynamically coupled systems from those that can be decomposed. This framework is then applied to the STARLIB nuclear reaction network database, analyzing how network structure, particularly the formation and properties of atom sets and semi-self-maintaining sets, changes as a function of temperature. Results indicate that increasing temperature generally enhances network cohesion, leading to fewer, larger atom sets. Critical temperatures are identified where significant structural reorganizations occur, such as the merging of distinct clusters of atom sets and the disappearance of small, isolated reactive units. The analysis reveals core clusters: large (containing more than 1000 reactions), semi-self-maintaining structures that appear to form the core of all potentially stable nucleosynthetic configurations at various temperatures. Overall, the paper provides insights into the structural underpinnings of stability and emergence in complex reaction networks, with specific implications for understanding stellar evolution and nucleosynthesis.
https://arxiv.org/abs/2601.06143
Academic Papers
svg
c445ebd7d312747153aaf1037556a6fa51ac86610523ad09f75dd6cf7125adb4
2026-01-13T00:00:00-05:00
Certificate for Orthogonal Equivalence of Real Polynomials by Polynomial-Weighted Principal Component Analysis
arXiv:2601.06148v1 Announce Type: cross Abstract: Suppose that $f(x) \in \mathbb{R}[x_1,\dots, x_n]$ and $g(x) \in \mathbb{R}[x_1,\dots, x_n]$ are two real polynomials of degree $d$ in $n$ variables. If the polynomials $f$ and $g$ are the same up to orthogonal symmetry, a natural question is what element of the orthogonal group induces the orthogonal symmetry; i.e., to find the element $R\in O(n)$ such that $f(Rx)=g(x)$. One may directly solve this problem by constructing a nonlinear system of equations induced by the relation $f(Rx)=g(x)$ along with the identities of the orthogonal group; however, this approach becomes quite computationally expensive for larger values of $n$ and $d$. To give an alternative and significantly more scalable solution to this problem, we introduce the concept of Polynomial-Weighted Principal Component Analysis (PW-PCA). We in particular show how PW-PCA can be effectively computed and how these techniques can be used to obtain a certificate of orthogonal equivalence; that is, we find the $R\in O(n)$ such that $f(Rx)=g(x)$.
https://arxiv.org/abs/2601.06148
Academic Papers
svg
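The abstract above contrasts PW-PCA with directly solving a nonlinear system for the orthogonal matrix. That direct formulation, which becomes expensive as $n$ and $d$ grow, can be stated as follows (the notation is ours).

```latex
% Direct (non-scalable) formulation: the n^2 entries of R are unknowns subject to
% orthogonality, and matching the coefficients of f(Rx) with those of g(x) supplies
% the remaining polynomial equations.
\[
  \text{find } R \in \mathbb{R}^{n \times n} \ \text{such that} \quad
  R^{\top} R = I_n \quad \text{and} \quad f(Rx) - g(x) \equiv 0 .
\]
```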
6f075702dc413ff7b5f222cd147afa4b595e1fd3586b79624b62b3d59a874e0e
2026-01-13T00:00:00-05:00
Deep Joint Source-Channel Coding for Wireless Video Transmission with Asymmetric Context
arXiv:2601.06170v1 Announce Type: cross Abstract: In this paper, we propose a high-efficiency deep joint source-channel coding (JSCC) method for video transmission based on conditional coding with asymmetric context. Conditional coding-based neural video compression requires predicting the encoding and decoding conditions from the same context, which includes the same reconstructed frames. However, in JSCC schemes, which fall under pseudo-analog transmission, the encoder cannot infer the same reconstructed frames as the decoder, even if a pipeline simulating the transmission is constructed at the encoder. In the proposed method, without such a pipeline, we guide and design neural networks to learn encoding and decoding conditions from asymmetric contexts. Additionally, we introduce feature propagation, which allows intermediate features to be independently propagated at the encoder and decoder and to help generate conditions, enabling the framework to greatly leverage temporal correlation while mitigating the problem of error accumulation. To further exploit the performance of the proposed transmission framework, we implement content-adaptive coding, which achieves variable bandwidth transmission using entropy models and masking mechanisms. Experimental results demonstrate that our method outperforms existing deep video transmission frameworks in terms of performance and effectively mitigates error accumulation. By mitigating error accumulation, our scheme can reduce the frequency of inserting intra-frame coding modes, further enhancing performance.
https://arxiv.org/abs/2601.06170
Academic Papers
svg
a19e8d9f9410c7a4c6b42fde0e35902c55c85a0fd8e2d03f5d45045457eba1dc
2026-01-13T00:00:00-05:00
A First Course in Sparse Optimization
arXiv:2601.06173v1 Announce Type: cross Abstract: This article aims to provide a comprehensive overview of sparse optimization, with a focus on both sparse signal recovery and sparse regularization techniques. We will begin by exploring the foundations of sparse optimization, delving into the mathematical tools and models that underpin sparse signal recovery and LASSO. We will then discuss key algorithms for both sparse recovery (e.g., basis pursuit, matching pursuit) and sparse regularization (e.g., LASSO, elastic net), along with their applications in real-world problems. Throughout the text, we balance intuitive explanations with rigorous mathematical formulations to provide a comprehensive resource for both newcomers and experts in the field. Our aim is twofold: to provide a self-contained entry point for students and researchers new to the field, and to offer a rigorous reference for practitioners seeking to apply sparse optimization in science and engineering.
https://arxiv.org/abs/2601.06173
Academic Papers
svg
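Since the abstract above names LASSO among the core sparse regularization problems, here is a minimal proximal-gradient (ISTA) sketch for it; the step size, regularization weight, and iteration count are illustrative choices rather than recommendations from the text.

```python
# Minimal ISTA sketch for the LASSO problem min_x 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse recovery example.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.count_nonzero(np.abs(ista(A, b, lam=0.1)) > 1e-3))   # small support recovered
```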
c8f40d566474665133d2d3dce5a87229ce4def4667d894af04eacea37f342ded
2026-01-13T00:00:00-05:00
FastSLM: Hierarchical Frame Q-Former for Effective Speech Modality Adaptation
arXiv:2601.06199v1 Announce Type: cross Abstract: Recent advances in large language models (LLMs) have demonstrated human-expert-level capabilities, driving significant interest in their potential for achieving artificial general intelligence (AGI). In particular, there is growing momentum in adapting LLMs to various modalities, including vision, video, and speech, through the development of multimodal LLMs (MLLMs). However, existing speech-language model (SLM) research has largely overlooked cost-effective adaptation strategies for leveraging LLMs in the speech domain. In this paper, we propose FastSLM, a lightweight yet efficient SLM designed for effective understanding and reasoning over long-form speech. To address the challenge of aligning high-frame-rate speech features with LLMs, we introduce the Hierarchical Frame Querying Transformer (HFQ-Former), which compresses frame-level speech features while capturing both local and global context. Furthermore, we present a novel three-stage training strategy that enhances generalization across a wide range of speech-related tasks. Experimental results demonstrate that FastSLM achieves competitive performance compared to existing state-of-the-art models, despite operating with significantly lower FLOPs and parameter counts, while representing speech with only 1.67 tokens per second. The source code and model checkpoints are available at https://huggingface.co/okestro-ai-lab/FastSLM.
https://arxiv.org/abs/2601.06199
Academic Papers
svg
b131bddeb2e248d08d17afb8453a8271320134aae009771efe6c60d2355d8556
2026-01-13T00:00:00-05:00
Real-Time Image Processing Algorithms for Embedded Systems
arXiv:2601.06243v1 Announce Type: cross Abstract: Embedded vision systems need efficient and robust image processing algorithms to perform in real time on resource-constrained hardware. This research investigates image processing algorithms, specifically edge detection, corner detection, and blob detection, implemented on embedded processors, including DSPs and FPGAs. To address the latency, accuracy, and power consumption issues noted in the image processing literature, optimized algorithm architectures and quantization techniques are employed. In addition, optimal techniques for inter-frame redundancy removal and adaptive frame averaging are used to improve throughput with reasonable image quality. Simulations and hardware trials of the proposed approaches show marked improvements in processing speed and energy efficiency compared to conventional implementations. The advances of this research facilitate a path toward scalable and inexpensive embedded imaging systems for the automotive, surveillance, and robotics sectors, and underscore the benefit of co-designing algorithms and hardware architectures for practical real-time embedded vision applications.
https://arxiv.org/abs/2601.06243
Academic Papers
svg
31c3aa1020dd8b590ed0014f5bdda1d11b75aa90f6480a07a647c3b1f03dedf9
2026-01-13T00:00:00-05:00
Hard Constraint Projection in a Physics Informed Neural Network
arXiv:2601.06244v1 Announce Type: cross Abstract: In this work, we embed hard constraints in a physics informed neural network (PINN) which predicts solutions to the 2D incompressible Navier Stokes equations. We extend the hard constraint method introduced by Chen et al. (arXiv:2012.06148) from a linear PDE to a strongly non-linear PDE. The PINN is used to estimate the stream function and pressure of the fluid, and by differentiating the stream function we can recover an incompressible velocity field. An unlearnable hard constraint projection (HCP) layer projects the predicted velocity and pressure to a hyperplane that admits only exact solutions to a discretised form of the governing equations.
https://arxiv.org/abs/2601.06244
Academic Papers
svg
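One common way to realize a hard constraint projection layer like the one described above is orthogonal projection onto the affine set of solutions of the discretised constraints; whether this matches the cited construction exactly is an assumption on our part.

```latex
% For linear discrete constraints A y = b on the network output y, the closest
% admissible point is the orthogonal projection
\[
  \Pi(y) \;=\; y - A^{\top}\left(A A^{\top}\right)^{-1}\left(A y - b\right),
\]
% which satisfies A\,\Pi(y) = b for every y and is differentiable in y, so it can
% be appended to the network as a fixed, unlearnable layer.
```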
7ae971ecd1d9cba2d9dc325b2865bd3277f35054fb5f4a431b7472f2367207e5
2026-01-13T00:00:00-05:00
Gamma2Patterns: Deep Cognitive Attention Region Identification and Gamma-Alpha Pattern Analysis
arXiv:2601.06257v1 Announce Type: cross Abstract: Deep cognitive attention is characterized by heightened gamma oscillations and coordinated visual behavior. Despite the physiological importance of these mechanisms, computational studies rarely synthesize these modalities or identify the neural regions most responsible for sustained focus. To address this gap, this work introduces Gamma2Patterns, a multimodal framework that characterizes deep cognitive attention by leveraging complementary Gamma and Alpha band EEG activity alongside Eye-tracking measurements. Using the SEED-IV dataset [1], we extract spectral power, burst-based temporal dynamics, and fixation-saccade-pupil signals across 62 channels or electrodes to analyze how neural activation differs between high-focus (Gamma-dominant) and low-focus (Alpha-dominant) states. Our findings reveal that frontopolar, temporal, anterior frontal, and parieto-occipital regions exhibit the strongest Gamma power and burst rates, indicating their dominant role in deep attentional engagement, while Eye-tracking signals confirm complementary contributions from frontal, frontopolar, and frontotemporal regions. Furthermore, we show that Gamma power and burst duration provide more discriminative markers of deep focus than Alpha power alone, demonstrating their value for attention decoding. Collectively, these results establish a multimodal, evidence-based map of cortical regions and oscillatory signatures underlying deep focus, providing a neurophysiological foundation for future brain-inspired attention mechanisms in AI systems.
https://arxiv.org/abs/2601.06257
Academic Papers
svg
f00f28cdb26f7354cf0af36ed24851195af8c58c713f4364b6c6fab574e2f10d
2026-01-13T00:00:00-05:00
Performance Analysis of DCT, Hadamard, and PCA in Block-Based Image Compression
arXiv:2601.06273v1 Announce Type: cross Abstract: Block based image compression relies on transform coding to concentrate signal energy into a small number of coefficients. While classical codecs use fixed transforms such as the Discrete Cosine Transform (DCT), data driven methods such as Principal Component Analysis (PCA) are theoretically optimal for decorrelation. This paper presents an experimental comparison of DCT, Hadamard, and PCA across multiple block sizes and compression rates. Using rate distortion and energy compaction analysis, we show that PCA outperforms fixed transforms only when block dimensionality is sufficiently large, while DCT remains near optimal for standard block sizes such as $8\times8$ and at low bit rates. These results explain the robustness of DCT in practical codecs and highlight the limitations of block wise learned transforms.
https://arxiv.org/abs/2601.06273
Academic Papers
svg
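As a toy version of the energy-compaction comparison reported above, the sketch below measures how much block energy the leading coefficients capture under a 2D DCT versus a PCA basis learned from the blocks; the synthetic test image and the top-m energy measure are illustrative choices, not the paper's experimental protocol.

```python
# Toy energy-compaction comparison: 2D DCT vs. PCA on 8x8 blocks of a synthetic image.
import numpy as np
from scipy.fft import dctn

def blocks(img, b=8):
    # Cut the image into non-overlapping b x b blocks, flattened to rows.
    h, w = img.shape
    return np.array([img[i:i + b, j:j + b].ravel()
                     for i in range(0, h - b + 1, b)
                     for j in range(0, w - b + 1, b)])

def energy_top_m(coeffs, m):
    # Fraction of total energy captured by the m largest coefficients per block.
    e = np.sort(coeffs ** 2, axis=1)[:, ::-1]
    return e[:, :m].sum() / e.sum()

rng = np.random.default_rng(0)
img = np.cumsum(np.cumsum(rng.normal(size=(256, 256)), 0), 1)   # smooth-ish test field
X = blocks(img)
Xc = X - X.mean(axis=0)                                          # center for a fair comparison

dct_coeffs = np.array([dctn(x.reshape(8, 8), norm="ortho").ravel() for x in Xc])
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)                # PCA basis: rows of Vt
pca_coeffs = Xc @ Vt.T

for m in (4, 8, 16):
    print(m, round(energy_top_m(dct_coeffs, m), 3), round(energy_top_m(pca_coeffs, m), 3))
```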
c4b8456ff0a46637562dbd1d3a9e26c389aa6197bc8b825a4e5c73506720af0e
2026-01-13T00:00:00-05:00
Timing Fragility Aware Selective Hardening of RISCV Soft Processors on SRAM Based FPGAs
arXiv:2601.06308v1 Announce Type: cross Abstract: Selective hardening is widely employed to improve the reliability of FPGA based soft processors while limiting the overhead of full redundancy. However, existing approaches primarily rely on architectural criticality or functional fault analysis, overlooking the impact of routing dependent timing sensitivity on processor robustness. This paper introduces a timing fragility aware selective hardening methodology for RISCV soft processors implemented on SRAM based FPGAs. Building on recent advances in in situ timing observability, the proposed approach quantifies the statistical timing sensitivity of pipeline components under controlled routing perturbations and uses this information to guide hardening decisions. Experimental results on a RISCV processor implemented on a commercial FPGA platform show that components exhibiting higher timing fragility also demonstrate increased vulnerability to routing induced delay effects. Leveraging this correlation, the proposed selective hardening strategy achieves robustness comparable to full hardening while significantly reducing area and timing overhead. These results demonstrate that timing fragility provides a practical and effective metric for reliability aware design optimization in FPGA based processor architectures.
https://arxiv.org/abs/2601.06308
Academic Papers
svg
cde8df50f6451a4136b002ba5a09d617c971bf1daa247ccc455ebd6a58a9d40d
2026-01-13T00:00:00-05:00
Computational Mapping of Reactive Stroma in Prostate Cancer Yields Interpretable, Prognostic Biomarkers
arXiv:2601.06360v1 Announce Type: cross Abstract: Current histopathological grading of prostate cancer relies primarily on glandular architecture, largely overlooking the tumor microenvironment. Here, we present PROTAS, a deep learning framework that quantifies reactive stroma (RS) in routine hematoxylin and eosin (H&E) slides and links stromal morphology to underlying biology. PROTAS-defined RS is characterized by nuclear enlargement, collagen disorganization, and transcriptomic enrichment of contractile pathways. PROTAS detects RS robustly in the external Prostate, Lung, Colorectal, and Ovarian (PLCO) dataset and, using domain-adversarial training, generalizes to diagnostic biopsies. In head-to-head comparisons, PROTAS outperforms pathologists for RS detection, and spatial RS features predict biochemical recurrence independently of established prognostic variables (c-index 0.80). By capturing subtle stromal phenotypes associated with tumor progression, PROTAS provides an interpretable, scalable biomarker to refine risk stratification.
https://arxiv.org/abs/2601.06360
Academic Papers
svg
ecbf3ad819407bc56096d92f1e5546e2ef1f196b5f613ec66f1716c57e79b46a
2026-01-13T00:00:00-05:00
The Replicator-Optimization Mechanism: A Scale-Relative Formalism for Persistence-Conditioned Dynamics with Application to Consent-Based Metaethics
arXiv:2601.06363v1 Announce Type: cross Abstract: This paper formalizes a widely used dynamical class--replicator-mutator dynamics and Price-style selection-and-transmission--and makes explicit the modeling choices (scale, atomic unit, interaction topology, transmission kernel) that determine how this class instantiates across domains. The backbone is known; we do not claim to have discovered selection. The novel contributions are threefold: (i) a scale-relative kernel parameterization where atomic units are themselves parameters, enabling systematic instantiation across physics, biology, economics, cognition, and social organization; (ii) a consent-friction instantiation for political philosophy, where friction is the primitive, legitimacy functions as survival probability, and belief-transfer functions as mutation kernel; and (iii) a derivation path from social contract theory rather than from biology or physics, arriving at the same formal structure via an independent route. We provide a bridge principle connecting descriptive dynamics to instrumental normativity: if agents prefer lower expected friction, then "ought" claims are shorthand for policies that reduce expected friction under the specified dynamics. This conditional structure avoids the is-ought fallacy while grounding normative discourse in empirically tractable dynamics. We address pathological cases (authoritarian stability, suppressed friction) through explicit modeling of latent versus observed friction. The framework generates testable predictions through operationalization of friction, legitimacy, and belief-transfer dynamics, and is falsifiable at the level of measurement apparatus rather than formal structure.
https://arxiv.org/abs/2601.06363
Academic Papers
svg
6ac8146a1b25172a5416139c7f79bf3826455819ef03b6f3ef04948547a30d3c
2026-01-13T00:00:00-05:00
A Linear Combination of Unitaries Decomposition for the Laplace Operator
arXiv:2601.06370v1 Announce Type: cross Abstract: We provide novel linear combination of unitaries decompositions for a class of discrete elliptic differential operators. Specifically, Poisson problems augmented with periodic, Dirichlet, Neumann, Robin, and mixed boundary conditions are considered on the unit interval and on higher-dimensional rectangular domains. The number of unitary terms required for our decomposition is independent of the number of grid points used in the discretization and scales linearly with the spatial dimension. Explicit circuit constructions for each unitary are given and their complexities analyzed. The worst case depth and elementary gate cost of any such circuit is shown to scale at most logarithmically with respect to number of grid points in the underlying discrete system. We also investigate the cost of using our method within the Variational Quantum Linear Solver algorithm and show favorable scaling. Finally, we extend the proposed decomposition technique to treat problems that include first-order derivative terms with variable coefficients.
https://arxiv.org/abs/2601.06370
Academic Papers
svg
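For the periodic boundary case, the linear-combination-of-unitaries structure asserted in the abstract above has a standard explicit form, shown below; this is the textbook identity for periodic boundaries, not necessarily the paper's construction for Dirichlet, Neumann, Robin, or mixed conditions.

```latex
% 1D periodic second-difference Laplacian on N grid points: a combination of
% three unitaries (the cyclic shift S, its adjoint, and the identity).
\[
  L_h \;=\; \frac{1}{h^2}\bigl(S + S^{\dagger} - 2I\bigr),
  \qquad (S u)_j = u_{(j-1) \bmod N}.
\]
% On a d-dimensional periodic rectangular grid the tensor structure gives
\[
  \Delta_h \;=\; \sum_{k=1}^{d} \frac{1}{h_k^2}
    \Bigl(I^{\otimes(k-1)} \otimes (S + S^{\dagger}) \otimes I^{\otimes(d-k)}\Bigr)
  \;-\; \Bigl(\sum_{k=1}^{d} \frac{2}{h_k^2}\Bigr) I,
\]
% i.e. 2d + 1 unitary terms, independent of the number of grid points and linear
% in the spatial dimension, consistent with the scaling stated in the abstract.
```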
7bf4553d37ed84e64896f132c06bb8d3db4b958b23a00f3de2bf0d073beb86f5
2026-01-13T00:00:00-05:00
Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing
arXiv:2601.06392v1 Announce Type: cross Abstract: We introduce CL-QAS, a continual quantum architecture search framework that mitigates the challenges of costly amplitude encoding and catastrophic forgetting in variational quantum circuits. The method uses Tensor-Train encoding to efficiently compress high-dimensional stochastic signals into low-rank quantum feature representations. A bi-loop learning strategy separates circuit parameter optimization from architecture exploration, while an Elastic Weight Consolidation regularization ensures stability across sequential tasks. We derive theoretical upper bounds on approximation, generalization, and robustness under quantum noise, demonstrating that CL-QAS achieves controllable expressivity, sample-efficient generalization, and smooth convergence without barren plateaus. Empirical evaluations on electrocardiogram (ECG)-based signal classification and financial time-series forecasting confirm substantial improvements in accuracy, balanced accuracy, F1 score, and reward. CL-QAS maintains strong forward and backward transfer and exhibits bounded degradation under depolarizing and readout noise, highlighting its potential for adaptive, noise-resilient quantum learning on near-term devices.
https://arxiv.org/abs/2601.06392
Academic Papers
svg
03d22126f97835a21c8fcdc340e9f7f752a67bae151ef15bb0c3ab80a7062416
2026-01-13T00:00:00-05:00
On a Gradient Approach to Chebyshev Center Problems with Applications to Function Learning
arXiv:2601.06434v1 Announce Type: cross Abstract: We introduce $\textsf{gradOL}$, the first gradient-based optimization framework for solving Chebyshev center problems, a fundamental challenge in optimal function learning and geometric optimization. $\textsf{gradOL}$ hinges on reformulating the semi-infinite problem as a finitary max-min optimization, making it amenable to gradient-based techniques. By leveraging automatic differentiation for precise numerical gradient computation, $\textsf{gradOL}$ ensures numerical stability and scalability, making it suitable for large-scale settings. Under strong convexity of the ambient norm, $\textsf{gradOL}$ provably recovers optimal Chebyshev centers while directly computing the associated radius. This addresses a key bottleneck in constructing stable optimal interpolants. Empirically, $\textsf{gradOL}$ achieves significant improvements in accuracy and efficiency on 34 benchmark Chebyshev center problems from a benchmark $\textsf{CSIP}$ library. Moreover, we extend $\textsf{gradOL}$ to general convex semi-infinite programming (CSIP), attaining up to $4000\times$ speedups over the state-of-the-art $\texttt{SIPAMPL}$ solver tested on the indicated $\textsf{CSIP}$ library containing 67 benchmark problems. Furthermore, we provide the first theoretical foundation for applying gradient-based methods to Chebyshev center problems, bridging rigorous analysis with practical algorithms. $\textsf{gradOL}$ thus offers a unified solution framework for Chebyshev centers and broader CSIPs.
https://arxiv.org/abs/2601.06434
Academic Papers
svg
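The max-min reformulation underlying gradOL can be illustrated on the simplest finite instance, the Chebyshev center of a point cloud; the subgradient loop below is only a toy sketch under that simplification, not the gradOL algorithm or its CSIP extension, and the step-size schedule is an assumption.

```python
# Toy max-min instance: the Chebyshev center of finitely many points solves
# min_c max_i ||c - x_i||. A subgradient of the inner max is the unit vector
# pointing away from the currently farthest point.
import numpy as np

def chebyshev_center(points, steps=2000, lr=0.5):
    c = points.mean(axis=0)                    # start at the centroid
    for t in range(1, steps + 1):
        d = np.linalg.norm(points - c, axis=1)
        i = int(np.argmax(d))                  # farthest point = active constraint
        g = (c - points[i]) / (d[i] + 1e-12)   # subgradient of the max
        c = c - (lr / np.sqrt(t)) * g          # diminishing-step descent
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius

rng = np.random.default_rng(1)
pts = rng.standard_normal((200, 3))
center, r = chebyshev_center(pts)
print(center, r)                               # center and Chebyshev radius
```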
47b1283e9d12612676b5c2b02aaae350d92eb51a42ebdae2df7f5cf34ed9b1e5
2026-01-13T00:00:00-05:00
Physics-informed Gaussian Process Regression in Solving Eigenvalue Problem of Linear Operators
arXiv:2601.06462v1 Announce Type: cross Abstract: Applying Physics-Informed Gaussian Process Regression to the eigenvalue problem $(\mathcal{L}-\lambda)u = 0$ poses a fundamental challenge, where the null source term results in a trivial predictive mean and a degenerate marginal likelihood. Drawing inspiration from system identification, we construct a transfer function-type indicator for the unknown eigenvalue/eigenfunction using the physics-informed Gaussian Process posterior. We demonstrate that the posterior covariance is only non-trivial when $\lambda$ corresponds to an eigenvalue of the partial differential operator $\mathcal{L}$, reflecting the existence of a non-trivial eigenspace, and any sample from the posterior lies in the eigenspace of the linear operator. We demonstrate the effectiveness of the proposed approach through several numerical examples with both linear and non-linear eigenvalue problems.
https://arxiv.org/abs/2601.06462
Academic Papers
svg
4abd207e4b1f06812f73d6cdcaeeea5c51fe75ea6047cd005cee750bac087792
2026-01-13T00:00:00-05:00
R$^3$D: Regional-guided Residual Radar Diffusion
arXiv:2601.06465v1 Announce Type: cross Abstract: Millimeter-wave radar enables robust environment perception in autonomous systems under adverse conditions yet suffers from sparse, noisy point clouds with low angular resolution. Existing diffusion-based radar enhancement methods either incur high learning complexity by modeling full LiDAR distributions or fail to prioritize critical structures due to uniform regional processing. To address these issues, we propose R3D, a regional-guided residual radar diffusion framework that integrates two components: residual diffusion modeling, which focuses on the concentrated LiDAR-radar residual encoding complementary high-frequency details to reduce learning difficulty, and sigma-adaptive regional guidance, which leverages radar-specific signal properties to generate attention maps and applies lightweight guidance only in low-noise stages to avoid gradient imbalance while refining key regions. Extensive experiments on the ColoRadar dataset demonstrate that R3D outperforms state-of-the-art methods, providing a practical solution for radar perception enhancement. Our anonymous code and pretrained models are released here: https://anonymous.4open.science/r/r3d-F836
https://arxiv.org/abs/2601.06465
Academic Papers
svg
42a8bd6e9ae1588e44ab89b483c9d5f642d94071e56cec87fec469500f1e210f
2026-01-13T00:00:00-05:00
Joint Impact of ADC and Fronthaul Quantization in Cell-Free Massive MIMO-OFDM Uplink
arXiv:2601.06483v1 Announce Type: cross Abstract: In the uplink of a cell-free massive MIMO system, quantization affects performance in two key domains: the time-domain distortion introduced by finite-resolution analog-to-digital converters (ADCs) at the access points (APs), and the fronthaul quantization of signals sent to the central processing unit (CPU). Although quantizing twice may seem redundant, the ADC quantization in orthogonal frequency-division multiplexing (OFDM) systems appears in the time domain, and one must then convert to the frequency domain, where quantization can be applied only to the signals at active subcarriers. This reduces fronthaul load and avoids unnecessary distortion, since the ADC output spans all OFDM samples while only a subset of subcarriers carries useful information. While both quantization effects have been extensively studied in narrowband systems, their joint impact in practical wideband OFDM-based cell-free massive MIMO remains largely unexplored. This paper addresses the gap by modeling the joint distortion and proposing a fronthaul strategy in which each AP processes the received signal to reduce quantization artifacts before transmission. We develop an efficient estimation algorithm that reconstructs the unquantized time-domain signal prior to fronthaul transmission and evaluate its effectiveness. The proposed design offers new insights for implementing efficient, quantization-aware uplink transmission in wideband cell-free architectures.
https://arxiv.org/abs/2601.06483
Academic Papers
svg
b79b99972aa84fc0a747c5dfb706beacb6c5aaa355678d8f0ef55dd47ca9b34d
2026-01-13T00:00:00-05:00
Cell-Free Massive MIMO with Hardware-Impaired Wireless Fronthaul
arXiv:2601.06486v1 Announce Type: cross Abstract: Cell-free massive MIMO (multiple-input multiple-output) enhances spectral and energy efficiency compared to conventional cellular networks by enabling joint transmission and reception across a large number of distributed access points (APs). Since these APs are envisioned to be low-cost and densely deployed, hardware impairments, stemming from non-ideal radio-frequency (RF) chains, are unavoidable. While existing studies primarily address hardware impairments on the access side, the impact of hardware impairments on the wireless fronthaul link has remained largely unexplored. In this work, we fill this important gap by introducing a novel amplify-and-forward (AF) based wireless fronthauling scheme tailored for cell-free massive MIMO. Focusing on the uplink, we develop an analytical framework that jointly models the hardware impairments at both the APs and the fronthaul transceivers, derives the resulting end-to-end distorted signal expression, and quantifies the individual contribution of each impairment to the spectral efficiency. Furthermore, we design distortion-aware linear combiners that optimally mitigate these effects. Numerical results demonstrate significant performance gains from distortion-aware processing and illustrate the potential of the proposed AF fronthauling scheme as a cost-effective enabler for future cell-free architectures.
https://arxiv.org/abs/2601.06486
Academic Papers
svg
d07d382848407c6c710631063222ddc02f0c8606aaf592386d0cc122d6659b44
2026-01-13T00:00:00-05:00
Inference-Time Alignment for Diffusion Models via Doob's Matching
arXiv:2601.06514v1 Announce Type: cross Abstract: Inference-time alignment for diffusion models aims to adapt a pre-trained diffusion model toward a target distribution without retraining the base score network, thereby preserving the generative capacity of the base model while enforcing desired properties at the inference time. A central mechanism for achieving such alignment is guidance, which modifies the sampling dynamics through an additional drift term. In this work, we introduce Doob's matching, a novel framework for guidance estimation grounded in Doob's $h$-transform. Our approach formulates guidance as the gradient of logarithm of an underlying Doob's $h$-function and employs gradient-penalized regression to simultaneously estimate both the $h$-function and its gradient, resulting in a consistent estimator of the guidance. Theoretically, we establish non-asymptotic convergence rates for the estimated guidance. Moreover, we analyze the resulting controllable diffusion processes and prove non-asymptotic convergence guarantees for the generated distributions in the 2-Wasserstein distance.
https://arxiv.org/abs/2601.06514
Academic Papers
svg
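Under the usual reverse-time SDE conventions, the guidance structure referenced in the abstract above takes the generic form below; this is the standard h-transform identity, not the paper's Doob's-matching estimator itself.

```latex
% Base reverse-time dynamics associated with dX_t = f(X_t,t) dt + g(t) dW_t :
\[
  dX_t = \bigl[f(X_t,t) - g(t)^2 \nabla_x \log p_t(X_t)\bigr]\,dt + g(t)\, d\bar W_t .
\]
% Doob's h-transform conditions the process by adding the gradient of log h:
\[
  dX_t = \Bigl[f(X_t,t) - g(t)^2\bigl(\nabla_x \log p_t(X_t)
         + \underbrace{\nabla_x \log h(X_t,t)}_{\text{guidance drift}}\bigr)\Bigr]dt
         + g(t)\, d\bar W_t ,
\]
% so inference-time alignment reduces to estimating \nabla_x \log h, which the
% abstract proposes to learn jointly with h via gradient-penalized regression.
```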
46c78926a18e0839577d4435b402cf3b01250c05c98c39ea1885705f41d4f46e
2026-01-13T00:00:00-05:00
Resource-constrained Project Scheduling with Time-of-Use Energy Tariffs and Machine States: A Logic-based Benders Decomposition Approach
arXiv:2601.06542v1 Announce Type: cross Abstract: In this paper, we investigate the Resource-Constrained Project Scheduling Problem (RCPSP) with time-of-use energy tariffs (TOU) and machine states, a variant of RCPSP for production scheduling where energy price is part of the criteria and one machine is highly energy-demanding and can be in one of the following three states: proc, idle, or off. The problem involves scheduling all tasks, respecting precedence constraints and resource limitations, while minimizing a combination of the overall makespan and the total energy cost (TEC), which varies according to the TOU pricing and can take negative values. We propose two novel approaches to solve it: a monolithic Constraint Programming (CP) approach and a Logic-Based Benders Decomposition (LBBD) approach. The latter combines a master problem dealing with energy cost, solved using Integer Linear Programming (ILP), with a subproblem handling the RCPSP, solved using CP. Both approaches surpass the monolithic compact ILP approach, but the LBBD significantly outperforms the CP approach when the ratio of energy-intensive tasks to overall tasks is moderate, allowing instances with up to 1600 tasks to be solved in the sparse case. Finally, we put forth a way of generalizing our LBBD approach to other problems sharing similar characteristics, and we apply it to an RCPSP variant with blocking times and a total weighted tardiness criterion, as well as to a flexible job shop problem.
https://arxiv.org/abs/2601.06542
Academic Papers
svg
e521b9fe1ab8430939f1a8dcce6ad5ef118f7229af740b2ee53e5fd1dd51d593
2026-01-13T00:00:00-05:00
Lightweight Resolution-Aware Audio Deepfake Detection via Cross-Scale Attention and Consistency Learning
arXiv:2601.06560v1 Announce Type: cross Abstract: Audio deepfake detection has become increasingly challenging due to rapid advances in speech synthesis and voice conversion technologies, particularly under channel distortions, replay attacks, and real-world recording conditions. This paper proposes a resolution-aware audio deepfake detection framework that explicitly models and aligns multi-resolution spectral representations through cross-scale attention and consistency learning. Unlike conventional single-resolution or implicit feature-fusion approaches, the proposed method enforces agreement across complementary time-frequency scales. The proposed framework is evaluated on three representative benchmarks: ASVspoof 2019 (LA and PA), the Fake-or-Real (FoR) dataset, and the In-the-Wild Audio Deepfake dataset under a speaker-disjoint protocol. The method achieves near-perfect performance on ASVspoof LA (EER 0.16%), strong robustness on ASVspoof PA (EER 5.09%), FoR rerecorded audio (EER 4.54%), and in-the-wild deepfakes (AUC 0.98, EER 4.81%), significantly outperforming single-resolution and non-attention baselines under challenging conditions. The proposed model remains lightweight and efficient, requiring only 159k parameters and less than 1 GFLOP per inference, making it suitable for practical deployment. Comprehensive ablation studies confirm the critical contributions of cross-scale attention and consistency learning, while gradient-based interpretability analysis reveals that the model learns resolution-consistent and semantically meaningful spectral cues across diverse spoofing conditions. These results demonstrate that explicit cross-resolution modeling provides a principled, robust, and scalable foundation for next-generation audio deepfake detection systems.
https://arxiv.org/abs/2601.06560
Academic Papers
svg
3191bce4fa3a71611169ebe21e8955115ca6d7923f37bed1fe9be4845c426dea
2026-01-13T00:00:00-05:00
Stereo Audio Rendering for Personal Sound Zones Using a Binaural Spatially Adaptive Neural Network (BSANN)
arXiv:2601.06621v1 Announce Type: cross Abstract: A binaural rendering framework for personal sound zones (PSZs) is proposed to enable multiple head-tracked listeners to receive fully independent stereo audio programs. Current PSZ systems typically rely on monophonic rendering and therefore cannot control the left and right ears separately, which limits the quality and accuracy of spatial imaging. The proposed method employs a Binaural Spatially Adaptive Neural Network (BSANN) to generate ear-optimized loudspeaker filters that reconstruct the desired acoustic field at each ear of multiple listeners. The framework integrates anechoically measured loudspeaker frequency responses, analytically modeled transducer directivity, and rigid-sphere head-related transfer functions (HRTFs) to enhance acoustic accuracy and spatial rendering fidelity. An explicit active crosstalk cancellation (XTC) stage further improves three-dimensional spatial perception. Experiments show significant gains in measured objective performance metrics, including inter-zone isolation (IZI), inter-program isolation (IPI), and crosstalk cancellation (XTC), with log-frequency-weighted values of 10.23/10.03 dB (IZI), 11.11/9.16 dB (IPI), and 10.55/11.13 dB (XTC), respectively, over 100-20,000 Hz. The combined use of ear-wise control, accurate acoustic modeling, and integrated active XTC produces a unified rendering method that delivers greater isolation performance, increased robustness to room asymmetry, and more faithful spatial reproduction in real acoustic environments.
https://arxiv.org/abs/2601.06621
Academic Papers
svg
e1f3e1358e3021b6a6e7ec014f15a2a692f2041faf4c90cc966494e161fedb75
2026-01-13T00:00:00-05:00
A Multimodal Deep Learning Framework for Predicting ICU Deterioration: Integrating ECG Waveforms with Clinical Data and Clinician Benchmarking
arXiv:2601.06645v1 Announce Type: cross Abstract: Artificial intelligence holds strong potential to support clinical decision making in intensive care units where timely and accurate risk assessment is critical. However, many existing models focus on isolated outcomes or limited data types, while clinicians integrate longitudinal history, real time physiology, and heterogeneous clinical information. To address this gap, we developed MDS ICU, a unified multimodal machine learning framework that fuses routinely collected data including demographics, biometrics, vital signs, laboratory values, ECG waveforms, surgical procedures, and medical device usage to provide continuous predictive support during ICU stays. Using 63001 samples from 27062 patients in MIMIC IV, we trained a deep learning architecture that combines structured state space S4 encoders for ECG waveforms with multilayer perceptron RealMLP encoders for tabular data to jointly predict 33 clinically relevant outcomes spanning mortality, organ dysfunction, medication needs, and acute deterioration. The model achieved strong discrimination with AUROCs of 0.90 for 24 hour mortality, 0.92 for sedative administration, 0.97 for invasive mechanical ventilation, and 0.93 for coagulation dysfunction. Calibration analysis showed close agreement between predicted and observed risks, with consistent gains from ECG waveform integration. Comparisons with clinicians and large language models showed that model predictions alone outperformed both, and that providing model outputs as decision support further improved their performance. These results demonstrate that multimodal AI can deliver clinically meaningful risk stratification across diverse ICU outcomes while augmenting rather than replacing clinical expertise, establishing a scalable foundation for precision critical care decision support.
https://arxiv.org/abs/2601.06645
Academic Papers
svg
05fdbf8dc4292e9bd5e51bfe72002ae6834c4f06e8efab2c2b8265c660730ae1
2026-01-13T00:00:00-05:00
Dereverberation Filter by Deconvolution with Frequency Bin Specific Faded Impulse Response
arXiv:2601.06662v1 Announce Type: cross Abstract: This work introduces a robust single-channel inverse filter for dereverberation of non-ideal recordings, validated on real audio. The developed method focuses on calculating and modifying a discrete impulse response in order to filter out the characteristics of a known digital single-channel recording setup and of the room, such as early reflections and reverberation. The aim is a drier and clearer signal reconstruction, which ideally would be the direct-path signal. The time-domain impulse response is calculated from the cepstral domain and faded by means of frequency-bin-specific exponential decay in the spectrum. The decay rates are obtained from blind estimates of the reverberation-time ratio between recorded output and test signals for each frequency bin. The modified impulse response is then used to filter a recorded audio signal by deconvolution. The blind estimation is well known and stands out for its robustness to noise and non-idealities. Estimation of a direct-path signal is key to many applications.
https://arxiv.org/abs/2601.06662
Academic Papers
svg
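A minimal sketch of the fade-and-deconvolve idea from the abstract above; the toy impulse response, per-bin decay rates, and regularization constant below are assumptions, and the blind per-bin reverberation-time estimation that the paper uses to set the decays is omitted.

```python
# Shorten an (assumed known or estimated) room impulse response with
# frequency-bin-specific exponential decays, then remove it from the recording
# by regularized spectral division.
import numpy as np
from scipy.signal import stft, istft

def fade_impulse_response(h, fs, decay_per_bin, nperseg=512):
    """Apply an exponential fade exp(-decay[k] * t) to each STFT bin of h."""
    _, t, H = stft(h, fs=fs, nperseg=nperseg)
    env = np.exp(-np.outer(decay_per_bin, t))       # (n_bins, n_frames)
    _, h_faded = istft(H * env, fs=fs, nperseg=nperseg)
    return h_faded[: len(h)]

def deconvolve(y, h, eps=1e-3):
    """Regularized inverse filtering: Y * conj(H) / (|H|^2 + eps)."""
    n = len(y) + len(h) - 1
    Y, H = np.fft.rfft(y, n), np.fft.rfft(h, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(X, n)[: len(y)]

fs = 16000
rng = np.random.default_rng(0)
h = rng.standard_normal(4096) * np.exp(-np.linspace(0, 8, 4096))  # toy IR
y = rng.standard_normal(fs)                                        # toy recording
decay = np.linspace(5.0, 40.0, 512 // 2 + 1)    # assumed per-bin decay rates
h_short = fade_impulse_response(h, fs, decay)
dry = deconvolve(y, h_short)                     # drier estimate of the input
```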
4068a0f55a9a28d425a6ca99e2bd80890ca65641d098ee45436977488f8a7a37
2026-01-13T00:00:00-05:00
Diffusion Models with Heavy-Tailed Targets: Score Estimation and Sampling Guarantees
arXiv:2601.06715v1 Announce Type: cross Abstract: Score-based diffusion models have become a powerful framework for generative modeling, with score estimation as a central statistical bottleneck. Existing guarantees for score estimation largely focus on light-tailed targets or rely on restrictive assumptions such as compact support, which are often violated by heavy-tailed data in practice. In this work, we study conventional (Gaussian) score-based diffusion models when the target distribution is heavy-tailed and belongs to a Sobolev class with smoothness parameter $\beta>0$. We consider both exponential and polynomial tail decay, indexed by a tail parameter $\gamma$. Using kernel density estimation, we derive sharp minimax rates for score estimation, revealing a qualitative dichotomy: under exponential tails, the rate matches the light-tailed case up to polylogarithmic factors, whereas under polynomial tails the rate depends explicitly on $\gamma$. We further provide sampling guarantees for the associated continuous reverse dynamics. In total variation, the generated distribution converges at the minimax optimal rate $n^{-\beta/(2\beta+d)}$ under exponential tails (up to logarithmic factors), and at a $\gamma$-dependent rate under polynomial tails. Whether the latter sampling rate is minimax optimal remains an open question. These results characterize the statistical limits of score estimation and the resulting sampling accuracy for heavy-tailed targets, extending diffusion theory beyond the light-tailed setting.
https://arxiv.org/abs/2601.06715
Academic Papers
svg
94fa27836c086f02e5fc8c6739c78e0d99210fbb9ecf841b55807da2e2df47b9
2026-01-13T00:00:00-05:00
USFetal: Tools for Fetal Brain Ultrasound Compounding
arXiv:2601.06726v1 Announce Type: cross Abstract: Ultrasound offers a safe, cost-effective, and widely accessible technology for fetal brain imaging, making it especially suitable for routine clinical use. However, it suffers from view-dependent artifacts, operator variability, and a limited field of view, which make interpretation and quantitative evaluation challenging. Ultrasound compounding aims to overcome these limitations by integrating complementary information from multiple 3D acquisitions into a single, coherent volumetric representation. This work provides four main contributions: (1) We present the first systematic categorization of computational strategies for fetal brain ultrasound compounding, including both classical techniques and modern learning-based frameworks. (2) We implement and compare representative methods across four key categories - multi-scale, transformation-based, variational, and deep learning approaches - emphasizing their core principles and practical advantages. (3) Motivated by the lack of full-view, artifact-free ground truth required for supervised learning, we focus on unsupervised and self-supervised strategies and introduce two new deep learning based approaches: a self-supervised compounding framework and an adaptation of unsupervised deep plug-and-play priors for compounding. (4) We conduct a comprehensive evaluation on ten multi-view fetal brain ultrasound datasets, using both expert radiologist scoring and standard quantitative image-quality metrics. We also release the USFetal Compounding Toolbox, publicly available to support benchmarking and future research. Keywords: Ultrasound compounding, fetal brain, deep learning, self-supervised, unsupervised.
https://arxiv.org/abs/2601.06726
Academic Papers
svg
0e74f316dd364b84d58c13c6cd8fcadb03af9c998dcfa90604fa8b2807e71eeb
2026-01-13T00:00:00-05:00
Non-Abelian qLDPC: TQFT Formalism, Addressable Gauging Measurement and Application to Magic State Fountain on 2D Product Codes
arXiv:2601.06736v1 Announce Type: cross Abstract: A fundamental problem of fault-tolerant quantum computation with quantum low-density parity-check (qLDPC) codes is the tradeoff between connectivity and universality. It is widely believed that in order to perform native logical non-Clifford gates, one needs to resort to 3D product-code constructions. In this work, we extend Kitaev's framework of non-Abelian topological codes on manifolds to non-Abelian qLDPC codes (realized as Clifford-stabilizer codes) and the corresponding combinatorial topological quantum field theories (TQFT) defined on Poincaré CW complexes and certain types of general chain complexes. We also construct the spacetime path integrals as topological invariants on these complexes. Remarkably, we show that native non-Clifford logical gates can be realized using constant-rate 2D hypergraph-product codes and their Clifford-stabilizer variants. This is achieved by a spacetime path integral effectively implementing the addressable gauging measurement of a new type of 0-form subcomplex symmetries, which correspond to addressable transversal Clifford gates and become higher-form symmetries when lifted to higher-dimensional CW complexes or manifolds. Building on this structure, we apply the gauging protocol to the magic state fountain scheme for parallel preparation of $O(\sqrt{n})$ disjoint CZ magic states with code distance of $O(\sqrt{n})$, using a total number of $n$ qubits.
https://arxiv.org/abs/2601.06736
Academic Papers
svg
b91e4097e2ba16390565543bec3eb75995ffba98ce8b450fbd84cad9e610b80a
2026-01-13T00:00:00-05:00
Water Demand Maximization: Quick Recovery of Nonlinear Physics Solutions
arXiv:2601.06755v1 Announce Type: cross Abstract: Determining the maximum demand a water distribution network can satisfy is crucial for ensuring reliable supply and planning network expansion. This problem, typically formulated as a mixed-integer nonlinear program (MINLP), is computationally challenging. A common strategy to address this challenge is to solve mixed-integer linear program (MILP) relaxations derived by partitioning variable domains and constructing linear over- and under-estimators to nonlinear constraints over each partition. While MILP relaxations are easier to solve up to a modest level of partitioning, their solutions often violate nonlinear water flow physics. Thus, recovering feasible MINLP solutions from the MILP relaxations is crucial for enhancing MILP-based approaches. In this paper, we propose a robust solution recovery method that efficiently computes feasible MINLP solutions from MILP relaxations, regardless of partition granularity. Combined with iterative partition refinement, our method generates a sequence of feasible solutions that progressively approach the optimum. Through extensive numerical experiments, we demonstrate that our method outperforms baseline methods and direct MINLP solves by consistently recovering high-quality feasible solutions with significantly reduced computation times.
https://arxiv.org/abs/2601.06755
Academic Papers
svg
f28c99f90f2a930c8d790988a06600a256fdeb705c205f8f848614284e5b4d80
2026-01-13T00:00:00-05:00
Dimension-reduced outcome-weighted learning for estimating individualized treatment regimes in observational studies
arXiv:2601.06782v1 Announce Type: cross Abstract: Individualized treatment regimes (ITRs) aim to improve clinical outcomes by assigning treatment based on patient-specific characteristics. However, existing methods often struggle with high-dimensional covariates, limiting accuracy, interpretability, and real-world applicability. We propose a novel sufficient dimension reduction approach that directly targets the contrast between potential outcomes and identifies a low-dimensional subspace of the covariates capturing treatment effect heterogeneity. This reduced representation enables more accurate estimation of optimal ITRs through outcome-weighted learning. To accommodate observational data, our method incorporates kernel-based covariate balancing, allowing treatment assignment to depend on the full covariate set and avoiding the restrictive assumption that the subspace sufficient for modeling heterogeneous treatment effects is also sufficient for confounding adjustment. We show that the proposed method achieves universal consistency, i.e., its risk converges to the Bayes risk, under mild regularity conditions. We demonstrate its finite sample performance through simulations and an analysis of intensive care unit sepsis patient data to determine who should receive transthoracic echocardiography.
https://arxiv.org/abs/2601.06782
Academic Papers
svg
b3f6dc811e9016ad70822f93965ea1bb52f9d432184bb072c1878e9d60bc384a
2026-01-13T00:00:00-05:00
Constrained Density Estimation via Optimal Transport
arXiv:2601.06830v1 Announce Type: cross Abstract: A novel framework for density estimation under expectation constraints is proposed. The framework minimizes the Wasserstein distance between the estimated density and a prior, subject to the constraints that the expected value of a set of functions adopts or exceeds given values. The framework is generalized to include regularization inequalities to mitigate the artifacts in the target measure. An annealing-like algorithm is developed to address non-smooth constraints, with its effectiveness demonstrated through both synthetic and proof-of-concept real world examples in finance.
https://arxiv.org/abs/2601.06830
Academic Papers
svg
3b07b88d81a943e7cdd9fdbcae42a30e55b42cccd7246c9937f30ad7b44ff92b
2026-01-13T00:00:00-05:00
Deep Learning Based Channel Extrapolation for Dual-Band Massive MIMO Systems
arXiv:2601.06858v1 Announce Type: cross Abstract: Future wireless communication systems will increasingly rely on the integration of millimeter wave (mmWave) and sub-6 GHz bands to meet heterogeneous demands for high-speed data transmission and extensive coverage. To fully exploit the benefits of mmWave bands in massive multiple-input multiple-output (MIMO) systems, highly accurate channel state information (CSI) is required. However, directly estimating the mmWave channel demands substantial pilot overhead due to the large CSI dimension and low signal-to-noise ratio (SNR) caused by severe path loss and blockage attenuation. In this paper, we propose an efficient Multi-Domain Fusion Channel Extrapolator (MDFCE) to extrapolate sub-6 GHz band CSI to mmWave band CSI, so as to reduce the pilot overhead for mmWave CSI acquisition in dual-band massive MIMO systems. Unlike traditional channel extrapolation methods based on mathematical modeling, the proposed MDFCE combines the mixture-of-experts framework and the multi-head self-attention mechanism to fuse multi-domain features of sub-6 GHz CSI, aiming to characterize the mapping from sub-6 GHz CSI to mmWave CSI effectively and efficiently. The simulation results demonstrate that MDFCE can achieve superior performance with fewer training pilots compared with existing methods across various antenna array scales and SNR levels while showing much higher computational efficiency.
https://arxiv.org/abs/2601.06858
Academic Papers
svg
01444dc943365259d39b027156f3223eb570bf45046c7a22c5636df84c15c863
2026-01-13T00:00:00-05:00
Surface Dean--Kawasaki equations
arXiv:2601.06863v1 Announce Type: cross Abstract: We consider stochastic particle dynamics on hypersurfaces represented in Monge gauge parametrization. Starting from the underlying Langevin system, we derive the surface Dean-Kawasaki (DK) equation and formulate it in the martingale sense. The resulting SPDE explicitly reflects the geometry of the hypersurface through the induced metric and its differential operators. Our framework accommodates both pairwise interactions and environmental potentials, and we extend the analysis to evolving hypersurfaces driven by an SDE that interacts with the particles, yielding the corresponding surface DK equation for the coupled surface-particle system. We establish a weak uniqueness result in the non-interacting case, and we develop a finite-volume discretization preserving the fluctuation-dissipation relation. Numerical experiments illustrate equilibrium properties and dynamical behavior influenced by surface geometry and external potentials.
https://arxiv.org/abs/2601.06863
Academic Papers
svg
93569ac1a93788f1802ec244eebf1d4a4d5e27818cd06b8a00a26cd366678508
2026-01-13T00:00:00-05:00
TagSpeech: End-to-End Multi-Speaker ASR and Diarization with Fine-Grained Temporal Grounding
arXiv:2601.06896v1 Announce Type: cross Abstract: We present TagSpeech, a unified LLM-based framework that utilizes Temporal Anchor Grounding for joint multi-speaker ASR and diarization. The framework is built on two key designs: (1) decoupled semantic and speaker streams fine-tuned via Serialized Output Training (SOT) to learn turn-taking dynamics; and (2) an interleaved time anchor mechanism that not only supports fine-grained timestamp prediction but also acts as a synchronization signal between semantic understanding and speaker tracking. Compared to previous works that primarily focus on speaker-attributed ASR or implicit diarization, TagSpeech addresses the challenge of fine-grained speaker-content alignment and explicitly models "who spoke what and when" in an end-to-end manner. Experiments on AMI and AliMeeting benchmarks demonstrate that our method achieves consistent improvements in Diarization Error Rate (DER) over strong end-to-end baselines, including Qwen-Omni and Gemini, particularly in handling complex speech overlaps. Moreover, TagSpeech employs a parameter-efficient training paradigm in which the LLM backbone is frozen and only lightweight projectors are trained, resulting in strong performance with low computational cost.
https://arxiv.org/abs/2601.06896
Academic Papers
svg
c44340d77d2692b461094a1949ca244e624c780df2877d188ad9de7b0626fa31
2026-01-13T00:00:00-05:00
The Impact of Anisotropic Covariance Structure on the Training Dynamics and Generalization Error of Linear Networks
arXiv:2601.06961v1 Announce Type: cross Abstract: The success of deep neural networks largely depends on the statistical structure of the training data. While learning dynamics and generalization on isotropic data are well-established, the impact of pronounced anisotropy on these crucial aspects is not yet fully understood. We examine the impact of data anisotropy, represented by a spiked covariance structure, a canonical yet tractable model, on the learning dynamics and generalization error of a two-layer linear network in a linear regression setting. Our analysis reveals that the learning dynamics proceed in two distinct phases, governed initially by the input-output correlation and subsequently by other principal directions of the data structure. Furthermore, we derive an analytical expression for the generalization error, quantifying how the alignment of the spike structure of the data with the learning task improves performance. Our findings offer deep theoretical insights into how data anisotropy shapes the learning trajectory and final performance, providing a foundation for understanding complex interactions in more advanced network architectures.
https://arxiv.org/abs/2601.06961
Academic Papers
svg
a05fe7213ace404d4361f43e37704882c71cc6a03834efbc014bfeaacc70eba6
2026-01-13T00:00:00-05:00
Benchmarking Autonomy in Scientific Experiments: A Hierarchical Taxonomy for Autonomous Large-Scale Facilities
arXiv:2601.06978v1 Announce Type: cross Abstract: The transition from automated data collection to fully autonomous discovery requires a shared vocabulary to benchmark progress. While the automotive industry relies on the SAE J3016 standard, current taxonomies for autonomous science presuppose an owner-operator model that is incompatible with the operational rigidities of Large-Scale User Facilities. Here, we propose the Benchmarking Autonomy in Scientific Experiments (BASE) Scale, a 6-level taxonomy (Levels 0-5) specifically adapted for these unique constraints. Unlike owner-operator models, User Facilities require zero-shot deployment where agents must operate immediately without extensive training periods. We define the specific technical requirements for each tier, identifying the Inference Barrier (Level 3) as the critical latency threshold where decisions shift from scalar feedback to semantic digital twins. Fundamentally, this level extends the decision manifold from spatial exploration to temporal gating, enabling the agent to synchronise acquisition with the onset of transient physical events. By establishing these operational definitions, the BASE Scale provides facility directors, funding bodies, and beamline scientists with a standardised metric to assess risk, define liability, and quantify the intelligence of experimental workflows.
https://arxiv.org/abs/2601.06978
Academic Papers
svg
e8545e0e9818e3d0641b795357884683368531b7881474dc02944c21b56625a5
2026-01-13T00:00:00-05:00
Match Made with Matrix Completion: Efficient Learning under Matching Interference
arXiv:2601.06982v1 Announce Type: cross Abstract: Matching markets face increasing needs to learn the matching qualities between demand and supply for effective design of matching policies. In practice, the matching rewards are high-dimensional due to the growing diversity of participants. We leverage a natural low-rank matrix structure of the matching rewards in these two-sided markets, and propose to utilize matrix completion to accelerate reward learning with limited offline data. A unique property for matrix completion in this setting is that the entries of the reward matrix are observed with matching interference -- i.e., the entries are not observed independently but dependently due to matching or budget constraints. Such matching dependence introduces unique technical challenges, such as sub-optimality or inapplicability of the existing analytical tools in the matrix completion literature, since they typically rely on sample independence. In this paper, we first show that standard nuclear norm regularization remains theoretically effective under matching interference. We provide a near-optimal Frobenius norm guarantee in this setting, coupled with a new analytical technique. Next, to guide certain matching decisions, we develop a novel "double-enhanced" estimator, based on the nuclear norm estimator, with a near-optimal entry-wise guarantee. Our double-enhancement procedure can apply to broader sampling schemes even with dependence, which may be of independent interest. Additionally, we extend our approach to online learning settings with matching constraints such as optimal matching and stable matching, and present improved regret bounds in matrix dimensions. Finally, we demonstrate the practical value of our methods using both synthetic data and real data of labor markets.
https://arxiv.org/abs/2601.06982
Academic Papers
svg
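For context, a generic soft-impute iteration for nuclear-norm-regularized completion is sketched below; it treats observations as if they were independent, so it does not model the matching interference or the double-enhanced refinement that are the paper's actual contributions, and the penalty level and toy data are placeholders.

```python
# Soft-impute: fill missing entries, soft-threshold the singular values, repeat.
import numpy as np

def soft_impute(M_obs, mask, lam=1.0, iters=200):
    """Nuclear-norm-regularized completion of M_obs on the observed mask."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        filled = np.where(mask, M_obs, X)          # keep observed entries fixed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)               # soft-threshold the spectrum
        X = (U * s) @ Vt
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 50))   # rank-4 truth
mask = rng.random(A.shape) < 0.3                                   # 30% observed
A_hat = soft_impute(A, mask, lam=0.5)
# Relative error on the unobserved entries.
print(np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask]))
```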
bcb0f0217142b44fb5d93602fdcf1ed786ff3f8f7a2967f530991448367e35de
2026-01-13T00:00:00-05:00
Unity Forests: Improving Interaction Modelling and Interpretability in Random Forests
arXiv:2601.07003v1 Announce Type: cross Abstract: Random forests (RFs) are widely used for prediction and variable importance analysis and are often believed to capture any types of interactions via recursive splitting. However, since the splits are chosen locally, interactions are only reliably captured when at least one involved covariate has a marginal effect. We introduce unity forests (UFOs), an RF variant designed to better exploit interactions involving covariates without marginal effects. In UFOs, the first few splits of each tree are optimized jointly across a random covariate subset to form a "tree root" capturing such interactions; the remainder is grown conventionally. We further propose the unity variable importance measure (VIM), which is based on out-of-bag split criterion values from the tree roots. Here, only a small fraction of tree root splits with the highest in-bag criterion values are considered per covariate, reflecting that covariates with purely interaction-based effects are discriminative only if a split in an interacting covariate occurred earlier in the tree. Finally, we introduce covariate-representative tree roots (CRTRs), which select representative tree roots per covariate and provide interpretable insight into the conditions - marginal or interactive - under which each covariate has its strongest effects. In a simulation study, the unity VIM reliably identified interacting covariates without marginal effects, unlike conventional RF-based VIMs. In a large-scale real-data comparison, UFOs achieved higher discrimination and predictive accuracy than standard RFs, with comparable calibration. The CRTRs reproduced the covariates' true effect types reliably in simulated data and provided interesting insights in a real data analysis.
https://arxiv.org/abs/2601.07003
Academic Papers
svg
8d033234e332752294a665ab6b0a74c13cb3eafe50ea91b45ffd43f3829ef9d9
2026-01-13T00:00:00-05:00
Conditional Normalizing Flows for Forward and Backward Joint State and Parameter Estimation
arXiv:2601.07013v1 Announce Type: cross Abstract: Traditional filtering algorithms for state estimation -- such as classical Kalman filtering, unscented Kalman filtering, and particle filters -- show performance degradation when applied to nonlinear systems whose uncertainty follows arbitrary non-Gaussian, and potentially multi-modal distributions. This study reviews recent approaches to state estimation via nonlinear filtering based on conditional normalizing flows, where the conditional embedding is generated by standard MLP architectures, transformers or selective state-space models (like Mamba-SSM). In addition, we test the effectiveness of an optimal-transport-inspired kinetic loss term in mitigating overparameterization in flows consisting of a large collection of transformations. We investigate the performance of these approaches on applications relevant to autonomous driving and patient population dynamics, paying special attention to how they handle time inversion and chained predictions. Finally, we assess the performance of various conditioning strategies for an application to real-world COVID-19 joint SIR system forecasting and parameter estimation.
https://arxiv.org/abs/2601.07013
Academic Papers
svg
5ec8fb7d3aa7d108231be4b2ec58b53971e05f9135d8e73b1fb14d9fe6856a43
2026-01-13T00:00:00-05:00
Local EGOP for Continuous Index Learning
arXiv:2601.07061v1 Announce Type: cross Abstract: We introduce the setting of continuous index learning, in which a function of many variables varies only along a small number of directions at each point. For efficient estimation, it is beneficial for a learning algorithm to adapt, near each point $x$, to the subspace that captures the local variability of the function $f$. We pose this task as kernel adaptation along a manifold with noise, and introduce Local EGOP learning, a recursive algorithm that utilizes the Expected Gradient Outer Product (EGOP) quadratic form as both a metric and inverse-covariance of our target distribution. We prove that Local EGOP learning adapts to the regularity of the function of interest, showing that under a supervised noisy manifold hypothesis, intrinsic dimensional learning rates are achieved for arbitrarily high-dimensional noise. Empirically, we compare our algorithm to the feature learning capabilities of deep learning. Additionally, we demonstrate improved regression quality compared to two-layer neural networks in the continuous single-index setting.
https://arxiv.org/abs/2601.07061
Academic Papers
svg
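The EGOP building block from the abstract above is straightforward to make concrete; the finite-difference sketch below estimates a global EGOP on a toy function whose variation is confined to two of ten coordinates, and is only a rough stand-in for the paper's local, recursive variant.

```python
# Estimate the Expected Gradient Outer Product E[grad f(x) grad f(x)^T]
# by central finite differences and inspect its dominant directions.
import numpy as np

def egop(f, X, h=1e-4):
    n, d = X.shape
    G = np.zeros((d, d))
    for x in X:
        grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                         for e in np.eye(d)])       # central differences
        G += np.outer(grad, grad)
    return G / n

# Toy target varying only along the first two of ten coordinates.
f = lambda x: np.sin(x[0]) + x[1] ** 2
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
evals, evecs = np.linalg.eigh(egop(f, X))
print(evals[::-1][:3])        # two dominant eigenvalues, the rest near zero
```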
5a75dc57c0b04c2374a767312d754b68ce7b2ff2cb2828ef041403b85754b9fd
2026-01-13T00:00:00-05:00
Robust Mean Estimation under Quantization
arXiv:2601.07074v1 Announce Type: cross Abstract: We consider the problem of mean estimation under quantization and adversarial corruption. We construct multivariate robust estimators that are optimal up to logarithmic factors in two different settings. The first is a one-bit setting, where each bit depends only on a single sample, and the second is a partial quantization setting, in which the estimator may use a small fraction of unquantized data.
https://arxiv.org/abs/2601.07074
Academic Papers
svg
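To make the one-bit setting tangible, the snippet below shows a standard dithered one-bit mean estimate in one dimension; it illustrates the measurement model only and has none of the robustness to adversarial corruption that the abstract's estimators provide.

```python
# With a uniform dither U ~ Unif[-B, B] and |X| <= B, E[B * sign(X + U)] = E[X],
# so one bit per sample already identifies the mean. (Here X is Gaussian, so the
# tail beyond B introduces a negligible bias.)
import numpy as np

rng = np.random.default_rng(0)
mu, B, n = 0.7, 5.0, 100_000
x = mu + rng.standard_normal(n)            # clean samples
u = rng.uniform(-B, B, n)                  # dither, independent of the data
bits = np.sign(x + u)                      # one bit per sample
print(B * bits.mean())                     # close to 0.7
```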
5f1246654042a4a0dd1d82af18bf2ab7ce8c871327d576024e61b0a3cba208ec
2026-01-13T00:00:00-05:00
Primal-Dual algorithms for Abstract convex functions with respect to quadratic functions
arXiv:2601.07076v1 Announce Type: cross Abstract: We consider the saddle point problem where the objective functions are abstract convex with respect to the class of quadratic functions. We propose primal-dual algorithms using the corresponding abstract proximal operator and investigate their convergence under certain restrictions. We test our algorithms on several numerical examples.
https://arxiv.org/abs/2601.07076
Academic Papers
svg
ec5f6127f559d09dfa7f2681e90f05f86bc11ae43f9fb90be94f0638fa84e8f4
2026-01-13T00:00:00-05:00
Adaptive Robust Control for Uncertain Systems with Ellipsoid-Set Learning
arXiv:2601.07079v1 Announce Type: cross Abstract: Despite the celebrated success of stochastic control approaches for uncertain systems, such approaches are limited in the ability to handle non-Gaussian uncertainties. This work presents an adaptive robust control for linear uncertain systems, whose process noise, observation noise, and system states are depicted by ellipsoid sets rather than Gaussian distributions. We design an ellipsoid-set learning method to estimate the boundaries of state sets, and incorporate the learned sets into the control law derivation to reduce conservativeness in robust control. Further, we consider the parametric uncertainties in state-space matrices. Particularly, we assign finite candidates for the uncertain parameters, and construct a bank of candidate-conditional robust control problems for each candidate. We derive the final control law by aggregating the candidate-conditional control laws. In this way, we separate the control scheme into parallel robust controls, decoupling the learning and control, which otherwise renders the control unattainable. We demonstrate the effectiveness of the proposed control through numerical simulations of linear quadratic regulation and tracking control.
https://arxiv.org/abs/2601.07079
Academic Papers
svg
057b524b04dc73df64b4fa74460ba56832283d452fe2a4e3e2b317f7cf71fe8a
2026-01-13T00:00:00-05:00
Robust Bayesian Optimization via Tempered Posteriors
arXiv:2601.07094v1 Announce Type: cross Abstract: Bayesian optimization (BO) iteratively fits a Gaussian process (GP) surrogate to accumulated evaluations and selects new queries via an acquisition function such as expected improvement (EI). In practice, BO often concentrates evaluations near the current incumbent, causing the surrogate to become overconfident and to understate predictive uncertainty in the region guiding subsequent decisions. We develop a robust GP-based BO via tempered posterior updates, which downweight the likelihood by a power $\alpha \in (0,1]$ to mitigate overconfidence under local misspecification. We establish cumulative regret bounds for tempered BO under a family of generalized improvement rules, including EI, and show that tempering yields strictly sharper worst-case regret guarantees than the standard posterior $(\alpha=1)$, with the most favorable guarantees occurring near the classical EI choice. Motivated by our theoretic findings, we propose a prequential procedure for selecting $\alpha$ online: it decreases $\alpha$ when realized prediction errors exceed model-implied uncertainty and returns $\alpha$ toward one as calibration improves. Empirical results demonstrate that tempering provides a practical yet theoretically grounded tool for stabilizing BO surrogates under localized sampling.
https://arxiv.org/abs/2601.07094
Academic Papers
svg
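For a Gaussian observation model, tempering the likelihood by a power alpha is equivalent to inflating the noise variance by a factor 1/alpha, so the tempered surrogate in the abstract above can be sketched directly; the RBF kernel, length scale, and alpha value below are illustrative assumptions, and the acquisition rule and prequential alpha update are not shown.

```python
# Tempered GP posterior for a Gaussian likelihood: downweighting the likelihood
# by alpha in (0, 1] is the same as using noise variance sigma^2 / alpha.
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def tempered_gp_posterior(Xtr, ytr, Xte, sigma2=1e-2, alpha=0.7, ls=0.5):
    noise = sigma2 / alpha                        # tempering <=> inflated noise
    K = rbf(Xtr, Xtr, ls) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr, ls)
    L = np.linalg.cholesky(K)
    beta = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mean = Ks @ beta                              # posterior mean at Xte
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(Xte, Xte, ls)) - (v ** 2).sum(axis=0)
    return mean, var                              # var is wider for smaller alpha

rng = np.random.default_rng(0)
Xtr = rng.uniform(-1, 1, size=(15, 1))
ytr = np.sin(3 * Xtr[:, 0]) + 0.1 * rng.standard_normal(15)
Xte = np.linspace(-1, 1, 5)[:, None]
print(tempered_gp_posterior(Xtr, ytr, Xte, alpha=0.7))
```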
9e7a023d2f1e7e5d21fe80eb20bf76f6ad403cbf0731bbf628783f221639e1ec
2026-01-13T00:00:00-05:00
Symmetry Breaking, Hysteresis, and Convergence to the Mean Voter in two-party Spatial Competition
arXiv:2601.07108v1 Announce Type: cross Abstract: Classical spatial models of two-party competition typically predict convergence to the median voter, yet real-world party systems often exhibit persistent and asymmetric polarization. We develop a spatial model of two-party competition in which voters evaluate parties through general satisfaction functions, and a width parameter $q$ captures how tolerant they are of ideological distance. This parameter governs the balance between centripetal and centrifugal incentives and acts as the bifurcation parameter governing equilibrium configurations. Under mild regularity assumptions, we characterize Nash equilibria through center-distance coordinates, which separate the endogenous political center from polarization. When the voter density is symmetric, the reduced equilibrium condition exhibits a generic supercritical pitchfork bifurcation at a critical value $q_{c}$. Above $q_{c}$, the unique stable equilibrium features convergence to the center, recovering the classical median voter result, whereas below it two symmetric polarized equilibria arise. Asymmetry in the voter distribution unfolds the pitchfork, producing drift in the endogenous center and asymmetric polarized equilibria. The resulting equilibrium diagram has an S-shaped geometry that generates hysteresis, allowing polarization to persist even after tolerance returns to levels that would support convergence in a symmetric environment. In the high-tolerance regime, we show that the unique non-polarized equilibrium converges to the mean of the voter distribution, while the median is recovered only under symmetry. Hence, unlike the Hotelling--Downs model, where convergence to the median is universal, the median voter appears here as an asymptotic benchmark rather than a robust predictor.
https://arxiv.org/abs/2601.07108
Academic Papers
svg
d66d7496459ff317e1224b69f2d25b6198df1e7059dce36f29d73cb964cd1154
2026-01-13T00:00:00-05:00
The Potential Impact of Neuromorphic Computing on Radio Telescope Observatories
arXiv:2601.07130v1 Announce Type: cross Abstract: Radio astronomy relies on bespoke, experimental and innovative computing solutions. This will continue as next-generation telescopes such as the Square Kilometre Array (SKA) and next-generation Very Large Array (ngVLA) take shape. Under increasingly demanding power consumption, and increasingly challenging radio environments, science goals may become intractable with conventional von Neumann computing due to related power requirements. Neuromorphic computing offers a compelling alternative, and combined with a desire for data-driven methods, Spiking Neural Networks (SNNs) are a promising real-time power-efficient alternative. Radio Frequency Interference (RFI) detection is an attractive use-case for SNNs where recent exploration holds promise. This work presents a comprehensive analysis of the potential impact of deploying varying neuromorphic approaches across key stages in radio astronomy processing pipelines for several existing and near-term instruments. Our analysis paves a realistic path from near-term FPGA deployment of SNNs in existing instruments, allowing the addition of advanced data-driven RFI detection for no capital cost, to neuromorphic ASICs for future instruments, finding that commercially available solutions could reduce the power budget for key processing elements by up to three orders of magnitude, transforming the operational budget of the observatory. High-data-rate spectrographic processing could be a well-suited target for the neuromorphic computing industry, as we cast radio telescopes as the world's largest in-sensor compute challenge.
https://arxiv.org/abs/2601.07130
Academic Papers
svg
f9172d240f6330fd74e4d614bbbcb26b0507f3a230d47075584296f2eee01cab
2026-01-13T00:00:00-05:00
Optimal Transport under Group Fairness Constraints
arXiv:2601.07144v1 Announce Type: cross Abstract: Ensuring fairness in matching algorithms is a key challenge in allocating scarce resources and positions. Focusing on Optimal Transport (OT), we introduce a novel notion of group fairness requiring that the probability of matching two individuals from any two given groups in the OT plan satisfies a predefined target. We first propose FairSinkhorn, a modified Sinkhorn algorithm to compute perfectly fair transport plans efficiently. Since exact fairness can significantly degrade matching quality in practice, we then develop two relaxation strategies. The first one involves solving a penalised OT problem, for which we derive novel finite-sample complexity guarantees. This result is of independent interest as it can be generalized to arbitrary convex penalties. Our second strategy leverages bilevel optimization to learn a ground cost that induces a fair OT solution, and we establish a bound guaranteeing that the learned cost yields fair matchings on unseen data. Finally, we present empirical results that illustrate the trade-offs between fairness and performance.
https://arxiv.org/abs/2601.07144
Academic Papers
svg
8686278beb113e291f4fbfb202aabf3ed455c613a1a2061b4beaec413893f378
2026-01-13T00:00:00-05:00
Approximate FKG inequalities for phase-bound spin systems
arXiv:2601.07169v1 Announce Type: cross Abstract: The FKG inequality is an invaluable tool in monotone spin systems satisfying the FKG lattice condition, which provides positive correlations for all coordinate-wise increasing functions of spins. However, the FKG lattice condition is somewhat brittle and is not preserved when confining a spin system to a particular phase. For instance, consider the Curie-Weiss model, which is a model of a ferromagnet with two phases at low temperature corresponding to positive and negative overall magnetization. It is not a priori clear if each phase internally has positive correlations for increasing functions, or if the positive correlations in the model arise primarily from the global choice of positive or negative magnetization. In this article, we show that the individual phases do indeed satisfy an approximate form of the FKG inequality in a class of generalized higher-order Curie-Weiss models (including the standard Curie-Weiss model as a special case), as well as in ferromagnetic exponential random graph models (ERGMs). To cover both of these settings, we present a general result which allows for the derivation of such approximate FKG inequalities in a straightforward manner from inputs related to metastable mixing; we expect that this general result will be widely applicable. In addition, we derive some consequences of the approximate FKG inequality, including a version of a useful covariance inequality originally due to Newman as well as Bulinski and Shabanovich. We use this to extend the proof of the central limit theorem for ERGMs within a phase at low temperatures, due to the second author, to the non-forest phase-coexistence regime, answering a question posed by Bianchi, Collet, and Magnanini for the edge-triangle model.
https://arxiv.org/abs/2601.07169
Academic Papers
svg
80255c6a150a35a14aa1f8512ad430c876f1df560b6a4188132dffc9d71c40ea
2026-01-13T00:00:00-05:00
On Lie Groups Preserving Subspaces of Degenerate Clifford Algebras
arXiv:2601.07191v1 Announce Type: cross Abstract: This paper introduces Lie groups in degenerate geometric (Clifford) algebras that preserve four fundamental subspaces determined by the grade involution and reversion under the adjoint and twisted adjoint representations. We prove that these Lie groups can be equivalently defined using norm functions of multivectors applied in the theory of spin groups. We also study the corresponding Lie algebras. Some of these Lie groups and algebras are closely related to Heisenberg Lie groups and algebras. The introduced groups are interesting for various applications in physics and computer science, in particular, for constructing equivariant neural networks.
https://arxiv.org/abs/2601.07191
Academic Papers
svg
e3fa7404553e5fd45da6f9158c1ad52a783254e7c36996e73fc2336e6d454f94
2026-01-13T00:00:00-05:00
The ICASSP 2026 Automatic Song Aesthetics Evaluation Challenge
arXiv:2601.07237v1 Announce Type: cross Abstract: This paper summarizes the ICASSP 2026 Automatic Song Aesthetics Evaluation (ASAE) Challenge, which focuses on predicting the subjective aesthetic scores of AI-generated songs. The challenge consists of two tracks: Track 1 targets the prediction of the overall musicality score, while Track 2 focuses on predicting five fine-grained aesthetic scores. The challenge attracted strong interest from the research community and received numerous submissions from both academia and industry. Top-performing systems significantly surpassed the official baseline, demonstrating substantial progress in aligning objective metrics with human aesthetic preferences. The outcomes establish a standardized benchmark and advance human-aligned evaluation methodologies for modern music generation systems.
https://arxiv.org/abs/2601.07237
Academic Papers
svg
ae4885c944809cc38d78361533f5bd00b5e89f0f66504eec27032ab0b4cc3ec2
2026-01-13T00:00:00-05:00
Multi-environment Invariance Learning with Missing Data
arXiv:2601.07247v1 Announce Type: cross Abstract: Learning models that can handle distribution shifts is a key challenge in domain generalization. Invariance learning, an approach that focuses on identifying features invariant across environments, improves model generalization by capturing stable relationships, which may represent causal effects when the data distribution is encoded within a structural equation model (SEM) and satisfies modularity conditions. This has led to a growing body of work that builds on invariance learning, leveraging the inherent heterogeneity across environments to develop methods that provide causal explanations while enhancing robust prediction. However, in many practical scenarios, obtaining complete outcome data from each environment is challenging due to the high cost or complexity of data collection. This limitation in available data hinders the development of models that fully leverage environmental heterogeneity, making it crucial to address missing outcomes to improve both causal insights and robust prediction. In this work, we derive an estimator from the invariance objective under missing outcomes. We establish non-asymptotic guarantees on variable selection property and $\ell_2$ error convergence rates, which are influenced by the proportion of missing data and the quality of imputation models across environments. We evaluate the performance of the new estimator through extensive simulations and demonstrate its application using the UCI Bike Sharing dataset to predict the count of bike rentals. The results show that despite relying on a biased imputation model, the estimator is efficient and achieves lower prediction error, provided the bias is within a reasonable range.
https://arxiv.org/abs/2601.07247
Academic Papers
svg
f62cecbab7decd4d21b8f0aeb20d72c2bb3c6156512bb6ba04fd7aa320e47be6
2026-01-13T00:00:00-05:00
Robust maximum hands-off optimal control: existence, maximum principle, and $L^{0}$-$L^1$ equivalence
arXiv:2601.07256v1 Announce Type: cross Abstract: This work advances the maximum hands-off sparse control framework by developing a robust counterpart for constrained linear systems with parametric uncertainties. The resulting optimal control problem minimizes an $L^{0}$ objective subject to an uncountable, compact family of constraints, and is therefore a nonconvex, nonsmooth robust optimization problem. To address this, we replace the $L^{0}$ objective with its convex $L^{1}$ surrogate and, using a nonsmooth variant of the robust Pontryagin maximum principle, show that the $L^{0}$ and $L^{1}$ formulations have identical sets of optimal solutions -- we call this the robust hands-off principle. Building on this equivalence, we propose an algorithmic framework -- drawing on numerically viable techniques from the semi-infinite robust optimization literature -- to solve the resulting problems. An illustrative example is provided to demonstrate the effectiveness of the approach.
https://arxiv.org/abs/2601.07256
Academic Papers
svg
922a16427c20aa9f728369d17b2e6bbfbde7bcfe166122f64cd590ff360a6834
2026-01-13T00:00:00-05:00
Covariance-Driven Regression Trees: Reducing Overfitting in CART
arXiv:2601.07281v1 Announce Type: cross Abstract: Decision trees are powerful machine learning algorithms, widely used in fields such as economics and medicine for their simplicity and interpretability. However, decision trees such as CART are prone to overfitting, especially when grown deep or the sample size is small. Conventional methods to reduce overfitting include pre-pruning and post-pruning, which constrain the growth of uninformative branches. In this paper, we propose a complementary approach by introducing a covariance-driven splitting criterion for regression trees (CovRT). This method is more robust to overfitting than the empirical risk minimization criterion used in CART, as it produces more balanced and stable splits and more effectively identifies covariates with true signals. We establish an oracle inequality of CovRT and prove that its predictive accuracy is comparable to that of CART in high-dimensional settings. We find that CovRT achieves superior prediction accuracy compared to CART in both simulations and real-world tasks.
https://arxiv.org/abs/2601.07281
Academic Papers
svg
a9559d429f5799df0ad0103e388eeea33c161c2703755cc934d32f0920b03f1e
2026-01-13T00:00:00-05:00
Condorcet's Paradox as Non-Orientability
arXiv:2601.07283v1 Announce Type: cross Abstract: Preference cycles are prevalent in problems of decision-making, and are contradictory when preferences are assumed to be transitive. This contradiction underlies Condorcet's Paradox, a pioneering result of Social Choice Theory, wherein intuitive and seemingly desirable constraints on decision-making necessarily lead to contradictory preference cycles. Topological methods have since broadened Social Choice Theory and elucidated existing results. However, characterisations of preference cycles in Topological Social Choice Theory are lacking. In this paper, we address this gap by introducing a framework for topologically modelling preference cycles that generalises Baryshnikov's existing topological model of strict, ordinal preferences on 3 alternatives. In our framework, the contradiction underlying Condorcet's Paradox topologically corresponds to the non-orientability of a surface homeomorphic to either the Klein Bottle or Real Projective Plane, depending on how preference cycles are represented. These findings allow us to reduce Arrow's Impossibility Theorem to a statement about the orientability of a surface. Furthermore, these results contribute to existing wide-ranging interest in the relationship between non-orientability, impossibility phenomena in Economics, and logical paradoxes more broadly.
https://arxiv.org/abs/2601.07283
Academic Papers
svg
3424cbd23f56093723541affdc5d50cbbf33d3bd92ffc1d4e53160cba3db295c
2026-01-13T00:00:00-05:00
Variational Approximations for Robust Bayesian Inference via Rho-Posteriors
arXiv:2601.07325v1 Announce Type: cross Abstract: The $\rho$-posterior framework provides universal Bayesian estimation with explicit contamination rates and optimal convergence guarantees, but has remained computationally difficult due to an optimization over reference distributions that precludes tractable posterior computation. We develop a PAC-Bayesian framework that recovers these theoretical guarantees through temperature-dependent Gibbs posteriors, deriving finite-sample oracle inequalities with explicit rates and introducing tractable variational approximations that inherit the robustness properties of exact $\rho$-posteriors. Numerical experiments demonstrate that this approach achieves theoretical contamination rates while remaining computationally feasible, providing the first practical implementation of $\rho$-posterior inference with rigorous finite-sample guarantees.
https://arxiv.org/abs/2601.07325
Academic Papers
svg
7d892a398f43aafbba9b91679235883b99212854384498b4b33f6b478dcf93fb
2026-01-13T00:00:00-05:00
Convergence Rate Analysis of the AdamW-Style Shampoo: Unifying One-sided and Two-Sided Preconditioning
arXiv:2601.07326v1 Announce Type: cross Abstract: This paper studies the AdamW-style Shampoo optimizer, an effective implementation of classical Shampoo that notably won the external tuning track of the AlgoPerf neural network training algorithm competition. Our analysis unifies one-sided and two-sided preconditioning and establishes the convergence rate $\frac{1}{K}\sum_{k=1}^K E\left[\|\nabla f(X_k)\|_*\right]\leq O(\frac{\sqrt{m+n}C}{K^{1/4}})$ measured by nuclear norm, where $K$ represents the iteration number, $(m,n)$ denotes the size of matrix parameters, and $C$ matches the constant in the optimal convergence rate of SGD. Theoretically, we have $\|\nabla f(X)\|_F\leq \|\nabla f(X)\|_*\leq \sqrt{m+n}\|\nabla f(X)\|_F$, supporting that our convergence rate can be considered to be analogous to the optimal $\frac{1}{K}\sum_{k=1}^KE\left[\|\nabla f(X_k)\|_F\right]\leq O(\frac{C}{K^{1/4}})$ convergence rate of SGD in the ideal case of $\|\nabla f(X)\|_*= \Theta(\sqrt{m+n})\|\nabla f(X)\|_F$.
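A quick numerical illustration of the norm sandwich quoted above, $\|\nabla f(X)\|_F \leq \|\nabla f(X)\|_* \leq \sqrt{m+n}\,\|\nabla f(X)\|_F$, on a random matrix; the matrix size and the use of NumPy are choices made here for illustration, not part of the paper.

```python
# Check the Frobenius / nuclear norm bounds on a random m x n "gradient" G.
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 32
G = rng.standard_normal((m, n))

fro = np.linalg.norm(G, "fro")
nuc = np.linalg.norm(G, "nuc")   # nuclear norm = sum of singular values

assert fro <= nuc <= np.sqrt(m + n) * fro
print(f"||G||_F = {fro:.2f}, ||G||_* = {nuc:.2f}, sqrt(m+n)*||G||_F = {np.sqrt(m + n) * fro:.2f}")
```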
https://arxiv.org/abs/2601.07326
Academic Papers
svg
93b046c0ac40a88dae82a7060f554446aa3ef4c1c81ec7b6158d97066017eafd
2026-01-13T00:00:00-05:00
Efficient Convolutional Forward Model for Passive Acoustic Mapping and Temporal Monitoring
arXiv:2601.07356v1 Announce Type: cross Abstract: Passive acoustic mapping (PAM) is a key imaging technique for characterizing cavitation activity in therapeutic ultrasound applications. Recent model-based beamforming algorithms offer high reconstruction quality and strong physical interpretability. However, their computational burden and limited temporal resolution restrict their use in applications with time-evolving cavitation. To address these challenges, we introduce a PAM beamforming framework based on a novel convolutional formulation in the time domain, which enables efficient computation. In this framework, PAM is formulated as an inverse problem in which the forward operator maps spatiotemporal cavitation activity to recorded radio-frequency signals accounting for time-of-flight delays defined by the acquisition geometry. We then formulate a regularized inversion algorithm that incorporates prior knowledge on cavitation activity. Experimental results demonstrate that our framework outperforms classical beamforming methods, providing higher temporal resolution than frequency-domain techniques while substantially reducing computational burden compared with iterative time-domain formulations.
https://arxiv.org/abs/2601.07356
Academic Papers
svg
fde661dc74c222a610b5edd093d4478c8c5422d3a81b2752d26710fc684b3b2e
2026-01-13T00:00:00-05:00
Optimizing the Design of a Simple Three-Sphere Magnetic Microswimmer
arXiv:2601.07370v1 Announce Type: cross Abstract: When swimming at low Reynolds numbers, inertial effects are negligible and reciprocal movements cannot induce net motion. Instead, symmetry breaking is necessary to achieve net propulsion. Directed swimming can be supported by magnetic fields, which simultaneously provide a versatile means of remote actuation. Thus, we analyze the motion of a straight microswimmer composed of three magnetizable beads connected by two elastic links. The swimming mechanism is based on oriented external magnetic fields that oscillate in magnitude. Through induced reversible hysteretic collapse of the two segments of the swimmer, the two pairs of beads jump into contact and separate nonreciprocally. Due to higher-order hydrodynamic interactions, net displacement results after each cycle. Different microswimmers can be tuned to different driving amplitudes and frequencies, allowing for simultaneous independent control by just one external magnetic field. The swimmer geometry and magnetic field shape are optimized for maximum swimming speed using an evolutionary optimization strategy. Thanks to the simple working principle, an experimental realization of such a microrobot seems feasible and may open new approaches for microinvasive medical interventions such as targeted drug delivery.
https://arxiv.org/abs/2601.07370
Academic Papers
svg
c76016c12ad878edb1e53f40405b8006189453480962d61d2c2be672a363f1a2
2026-01-13T00:00:00-05:00
Layerwise goal-oriented adaptivity for neural ODEs: an optimal control perspective
arXiv:2601.07397v1 Announce Type: cross Abstract: In this work, we propose a novel layerwise adaptive construction method for neural network architectures. Our approach is based on a goal-oriented dual-weighted residual technique for the optimal control of neural differential equations. This leads to an ordinary differential equation constrained optimization problem with controls acting as coefficients and a specific loss function. We implement our approach on the basis of a DG(0) Galerkin discretization of the neural ODE, leading to an explicit Euler time marching scheme. For the optimization we use steepest descent. Finally, we apply our method to the construction of neural networks for the classification of data sets, where we present results for a selection of well known examples from the literature.
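The abstract notes that a DG(0) discretization of the neural ODE reduces to explicit Euler time marching; the minimal sketch below illustrates that marching scheme only. Layer width, the tanh activation, and the random weights are assumptions, and this is not the authors' goal-oriented adaptivity code.

```python
# One explicit Euler step per layer for dx/dt = tanh(W(t) x + b(t)).
import numpy as np

def neural_ode_euler(x0, weights, biases, dt=0.1):
    """March x_{k+1} = x_k + dt * tanh(W_k x_k + b_k) through all layers."""
    x = x0
    for W, b in zip(weights, biases):
        x = x + dt * np.tanh(W @ x + b)   # one Euler step = one network layer
    return x

rng = np.random.default_rng(0)
d, n_layers = 4, 10
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(n_layers)]
biases = [np.zeros(d) for _ in range(n_layers)]
print(neural_ode_euler(rng.standard_normal(d), weights, biases))
```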
https://arxiv.org/abs/2601.07397
Academic Papers
svg
9a2a471f8af8b45a59e9e2f9cdd632f6b27cccc7283ad17538d6b3fe592fe8f7
2026-01-13T00:00:00-05:00
Position: Don't be Afraid of Over-Smoothing And Over-Squashing
arXiv:2601.07419v1 Announce Type: cross Abstract: Over-smoothing and over-squashing have been extensively studied in the literature on Graph Neural Networks (GNNs) over the past years. We challenge this prevailing focus in GNN research, arguing that these phenomena are less critical for practical applications than assumed. We suggest that performance decreases often stem from uninformative receptive fields rather than over-smoothing. We support this position with extensive experiments on several standard benchmark datasets, demonstrating that accuracy and over-smoothing are mostly uncorrelated and that optimal model depths remain small even with mitigation techniques, thus highlighting the negligible role of over-smoothing. Similarly, we challenge that over-squashing is always detrimental in practical applications. Instead, we posit that the distribution of relevant information over the graph frequently factorises and is often localised within a small k-hop neighbourhood, questioning the necessity of jointly observing entire receptive fields or engaging in an extensive search for long-range interactions. The results of our experiments show that architectural interventions designed to mitigate over-squashing fail to yield significant performance gains. This position paper advocates for a paradigm shift in theoretical research, urging a diligent analysis of learning tasks and datasets using statistics that measure the underlying distribution of label-relevant information to better understand their localisation and factorisation.
https://arxiv.org/abs/2601.07419
Academic Papers
svg
8111c571f9dfba03bd34921c551de2265a1663345f0c1d440a2e77819cef067f
2026-01-13T00:00:00-05:00
Nonquadratic global asymptotic stability certificates for saturated linear feedbacks
arXiv:2601.07431v1 Announce Type: cross Abstract: We establish sufficient conditions for positive (semi-)definiteness, with or without radial unboundedness, for nonquadratic Lyapunov functions constructed as sign-indefinite quadratic forms involving the state and the deadzone of a suitable input. We then use these conditions to build weak nonquadratic Lyapunov functions establishing global asymptotic stability of linear systems in feedback through a saturation, leveraging invariance principles. Our results are shown to be non-conservative (necessary and sufficient) for a family of well known prototypical examples of linear SISO feedbacks that are not globally exponentially stabilizable (the so-called ANCBI plants). Our multi-input extension leads to convex stability analysis tests, formulated as linear matrix inequalities that are applicable to ANCBI non-globally-exponentially-stabilizable plants.
https://arxiv.org/abs/2601.07431
Academic Papers
svg
2a09142500e6abf219d16daab54a9485f2bab328cb81751fd245e7a14fa5c286
2026-01-13T00:00:00-05:00
PIDT: Physics-Informed Digital Twin for Optical Fiber Parameter Estimation
arXiv:2601.07436v1 Announce Type: cross Abstract: We propose physics-informed digital twin (PIDT): a fiber parameter estimation approach that combines a parameterized split-step method with a physics-informed loss. PIDT improves accuracy and convergence speed with lower complexity compared to previous neural operators.
https://arxiv.org/abs/2601.07436
Academic Papers
svg
f8cceb57c5288d79952d09866fa3f4975ee723d2078eb80e5e9318d0dc0439f3
2026-01-13T00:00:00-05:00
Advanced computing for reproducibility of astronomy Big Data Science, with a showcase of AMIGA and the SKA Science prototype
arXiv:2601.07439v1 Announce Type: cross Abstract: The Square Kilometre Array Observatory (SKAO) faces unprecedented technological challenges due to the vast scale and complexity of its data. This paper provides an overview of research by the AMIGA group to address these computing and reproducibility challenges. We present advancements in semantic data models, analysis services integrated into federated infrastructures, and the application to astronomy studies of techniques that enhance research transparency. By showcasing this astronomy work, we demonstrate that achieving reproducible science in the Big Data era is feasible. However, we conclude that for the SKAO to succeed, the development of the SKA Regional Centre Network (SRCNet) must explicitly incorporate these reproducibility requirements into its fundamental architectural design. Embedding these standards is crucial to enable the global community to conduct verifiable and sustainable research within a federated environment.
https://arxiv.org/abs/2601.07439
Academic Papers
svg
66686a711979c3268f94daad1077bf13c16b243277e94fa733aa079a4e668e03
2026-01-13T00:00:00-05:00
Data-Driven Stochastic VRP: Integration of Forecast Duration into Optimization for Utility Workforce Management
arXiv:2601.07514v1 Announce Type: cross Abstract: This paper investigates the integration of machine learning forecasts of intervention durations into a stochastic variant of the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). In particular, we exploit tree-based gradient boosting (XGBoost) trained on eight years of gas meter maintenance data to produce point predictions and uncertainty estimates, which then drive a multi-objective evolutionary optimization routine. The methodology addresses uncertainty through sub-Gaussian concentration bounds for route-level risk buffers and explicitly accounts for competing operational KPIs through a multi-objective formulation. Empirical analysis of prediction residuals validates the sub-Gaussian assumption underlying the risk model. From an empirical point of view, our results report improvements around 20-25% in operator utilization and completion rates compared with plans computed using default durations. The integration of uncertainty quantification and risk-aware optimization provides a practical framework for handling stochastic service durations in real-world routing applications.
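The route-level risk buffers mentioned above follow the usual sub-Gaussian concentration pattern; the sketch below shows one generic way such a buffer can be derived from per-intervention sub-Gaussian parameters. The sigma values, the confidence level delta, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
# For independent sub-Gaussian duration errors with parameters sigma_i,
# P(total error > t) <= exp(-t^2 / (2 * sum sigma_i^2)), so a buffer of
# sqrt(2 * sum sigma_i^2 * log(1/delta)) covers the route w.p. >= 1 - delta.
import math

def route_buffer(sigmas, delta=0.05):
    var = sum(s ** 2 for s in sigmas)   # route-level variance proxy
    return math.sqrt(2.0 * var * math.log(1.0 / delta))

print(route_buffer([5.0, 8.0, 6.5], delta=0.05))  # slack, in the same units as sigma
```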
https://arxiv.org/abs/2601.07514
Academic Papers
svg
a970df61bd1e1e579e0980f62301596778ad38a5bc88c181ed0dea7710bb46ab
2026-01-13T00:00:00-05:00
Fast Multi-Stack Slice-to-Volume Reconstruction via Multi-Scale Unrolled Optimization
arXiv:2601.07519v1 Announce Type: cross Abstract: Fully convolutional networks have become the backbone of modern medical imaging due to their ability to learn multi-scale representations and perform end-to-end inference. Yet their potential for slice-to-volume reconstruction (SVR), the task of jointly estimating 3D anatomy and slice poses from misaligned 2D acquisitions, remains underexplored. We introduce a fast convolutional framework that fuses multiple orthogonal 2D slice stacks to recover coherent 3D structure and refines slice alignment through lightweight model-based optimization. Applied to fetal brain MRI, our approach reconstructs high-quality 3D volumes in under 10s, with 1s slice registration and accuracy on par with state-of-the-art iterative SVR pipelines, offering a substantial speedup. The framework uses non-rigid displacement fields to represent transformations, generalizing to other SVR problems like fetal body and placental MRI. Additionally, the fast inference time paves the way for real-time, scanner-side volumetric feedback during MRI acquisition.
https://arxiv.org/abs/2601.07519
Academic Papers
svg
edf67956bd2bfdef458e82cd26af9d9fdfb48768fb9a4ca528b882359258445c
2026-01-13T00:00:00-05:00
Nonparametric Kernel Clustering with Bandit Feedback
arXiv:2601.07535v1 Announce Type: cross Abstract: Clustering with bandit feedback refers to the problem of partitioning a set of items, where the clustering algorithm can sequentially query the items to receive noisy observations. The problem is formally posed as the task of partitioning the arms of an N-armed stochastic bandit according to their underlying distributions, grouping two arms together if and only if they share the same distribution, using samples collected sequentially and adaptively. This setting has gained attention in recent years due to its applicability in recommendation systems and crowdsourcing. Existing works on clustering with bandit feedback rely on a strong assumption that the underlying distributions are sub-Gaussian. As a consequence, the existing methods mainly cover settings with linearly-separable clusters, which has little practical relevance. We introduce a framework of "nonparametric clustering with bandit feedback", where the underlying arm distributions are not constrained to any parametric family, and hence, it is applicable for active clustering of real-world datasets. We adopt a kernel-based approach, which allows us to reformulate the nonparametric problem as the task of clustering the arms according to their kernel mean embeddings in a reproducing kernel Hilbert space (RKHS). Building on this formulation, we introduce the KABC algorithm with theoretical correctness guarantees and analyze its sampling budget. We introduce a notion of signal-to-noise ratio for this problem that depends on the maximum mean discrepancy (MMD) between the arm distributions and on their variance in the RKHS. Our algorithm is adaptive to this unknown quantity: it does not require it as an input yet achieves instance-dependent guarantees.
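The KABC algorithm itself is not described in the abstract; the sketch below only illustrates the kernel mean embedding distance it relies on, i.e. a plug-in MMD^2 estimate between two arms' samples with an RBF kernel. The bandwidth, sample sizes, and the biased-estimator choice are assumptions made here.

```python
# Biased plug-in estimate of MMD^2 = ||mu_x - mu_y||^2 in the RKHS.
import numpy as np

def rbf(x, y, gamma=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    return rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean() - 2 * rbf(x, y, gamma).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 1)), rng.normal(0, 1, (200, 1)))
diff = mmd2(rng.normal(0, 1, (200, 1)), rng.normal(2, 1, (200, 1)))
print(same, diff)   # diff >> same when the two arm distributions differ
```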
https://arxiv.org/abs/2601.07535
Academic Papers
svg
8bdc470a032e4ef1426eacbfaafad0120c43949bc795d5ff74ec3d4918fcebdb
2026-01-13T00:00:00-05:00
A Model of Artificial Jagged Intelligence
arXiv:2601.07573v1 Announce Type: cross Abstract: Generative AI systems often display highly uneven performance across tasks that appear "nearby": they can be excellent on one prompt and confidently wrong on another with only small changes in wording or context. We call this phenomenon Artificial Jagged Intelligence (AJI). This paper develops a tractable economic model of AJI that treats adoption as an information problem: users care about local reliability, but typically observe only coarse, global quality signals. In a baseline one-dimensional landscape, truth is a rough Brownian process, and the model "knows" scattered points drawn from a Poisson process. The model interpolates optimally, and the local error is measured by posterior variance. We derive an adoption threshold for a blind user, show that experienced errors are amplified by the inspection paradox, and interpret scaling laws as denser coverage that improves average quality without eliminating jaggedness. We then study mastery and calibration: a calibrated user who can condition on local uncertainty enjoys positive expected value even in domains that fail the blind adoption test. Modelling mastery as learning a reliability map via Gaussian process regression yields a learning-rate bound driven by information gain, clarifying when discovering "where the model works" is slow. Finally, we study how scaling interacts with discoverability: when calibrated signals and user mastery accelerate the harvesting of scale improvements, and when opacity can make gains from scaling effectively invisible.
https://arxiv.org/abs/2601.07573
Academic Papers
svg
280ae1b131071bdccb98d687be177e989ad0e9521a78beb0a94fe8722cb09429
2026-01-13T00:00:00-05:00
Large Language Models for Physics Instrument Design
arXiv:2601.07580v1 Announce Type: cross Abstract: We study the use of large language models (LLMs) for physics instrument design and compare their performance to reinforcement learning (RL). Using only prompting, LLMs are given task constraints and summaries of prior high-scoring designs and propose complete detector configurations, which we evaluate with the same simulators and reward functions used in RL-based optimization. Although RL yields stronger final designs, we find that modern LLMs consistently generate valid, resource-aware, and physically meaningful configurations that draw on broad pretrained knowledge of detector design principles and particle--matter interactions, despite having no task-specific training. Based on this result, as a first step toward hybrid design workflows, we explore pairing the LLMs with a dedicated trust region optimizer, serving as a precursor to future pipelines in which LLMs propose and structure design hypotheses while RL performs reward-driven optimization. Based on these experiments, we argue that LLMs are well suited as meta-planners: they can design and orchestrate RL-based optimization studies, define search strategies, and coordinate multiple interacting components within a unified workflow. In doing so, they point toward automated, closed-loop instrument design in which much of the human effort required to structure and supervise optimization can be reduced.
https://arxiv.org/abs/2601.07580
Academic Papers
svg
15977846df6c447e73053b2233304115d863da2fca06eaba57d05e7924aab553
2026-01-13T00:00:00-05:00
Machine learning nonequilibrium phase transitions in charge-density wave insulators
arXiv:2601.07583v1 Announce Type: cross Abstract: Nonequilibrium electronic forces play a central role in voltage-driven phase transitions but are notoriously expensive to evaluate in dynamical simulations. Here we develop a machine learning framework for adiabatic lattice dynamics coupled to nonequilibrium electrons, and demonstrate it for a gating induced insulator to metal transition out of a charge density wave state in the Holstein model. Although exact electronic forces can be obtained from nonequilibrium Green's function (NEGF) calculations, their high computational cost renders long time dynamical simulations prohibitively expensive. By exploiting the locality of the electronic response, we train a neural network to directly predict instantaneous local electronic forces from the lattice configuration, thereby bypassing repeated NEGF calculations during time evolution. When combined with Brownian dynamics, the resulting machine learning force field quantitatively reproduces domain wall motion and nonequilibrium phase transition dynamics obtained from full NEGF simulations, while achieving orders of magnitude gains in computational efficiency. Our results establish direct force learning as an efficient and accurate approach for simulating nonequilibrium lattice dynamics in driven quantum materials.
https://arxiv.org/abs/2601.07583
Academic Papers
svg
8ad7ebdc28eaff7b0d25164b7d5957db636a5f2ddfa9f83127d1ecc8afe732e0
2026-01-13T00:00:00-05:00
Temporal-Aligned Meta-Learning for Risk Management: A Stacking Approach for Multi-Source Credit Scoring
arXiv:2601.07588v1 Announce Type: cross Abstract: This paper presents a meta-learning framework for credit risk assessment of Italian Small and Medium Enterprises (SMEs) that explicitly addresses the temporal misalignment of credit scoring models. The approach aligns financial statement reference dates with evaluation dates, mitigating bias arising from publication delays and asynchronous data sources. It is based on a two-step temporal decomposition that at first estimates annual probabilities of default (PDs) anchored to balance-sheet reference dates (December 31st) through a static model. Then it models the monthly evolution of PDs using higher-frequency behavioral data. Finally, we employ stacking-based architecture to aggregate multiple scoring systems, each capturing complementary aspects of default risk, into a unified predictive model. In this way, first level model outputs are treated as learned representations that encode non-linear relationships in financial and behavioral indicators, allowing integration of new expert-based features without retraining base models. This design provides a coherent and interpretable solution to challenges typical of low-default environments, including heterogeneous default definitions and reporting delays. Empirical validation shows that the framework effectively captures credit risk evolution over time, improving temporal consistency and predictive stability relative to standard ensemble methods.
https://arxiv.org/abs/2601.07588
Academic Papers
svg
d264411e7b17cd10337955be99b9144c8bc3956db5f74b3419a26e1d2890f208
2026-01-13T00:00:00-05:00
Aggregating swarms through morphology handling design contingencies: from the sweet spot to a rich expressivity
arXiv:2601.07610v1 Announce Type: cross Abstract: Morphological computing, the use of the physical design of a robot to ease the realization of a given task, has been proven to be a relevant concept in the context of swarm robotics. Here we demonstrate, both experimentally and numerically, that the success of such a strategy may heavily rely on the type of policy adopted by the robots, as well as on the details of the physical design. To do so, we consider a swarm of robots, composed of Kilobots embedded in an exoskeleton, the design of which controls the propensity of the robots to align or anti-align with the direction of the external force they experience. We find experimentally that the contrast observed between the two morphologies in the success rate of a simple phototactic task, where the robots were programmed to stop when entering a light region, becomes dramatic if the robots are not allowed to stop and can only slow down. Building on a faithful physical model of the self-aligning dynamics of the robots, we perform numerical simulations and demonstrate, on the one hand, that a precise tuning of the self-aligning strength around a sweet spot is required to achieve an efficient phototactic behavior and, on the other hand, that exploring a range of self-alignment strengths allows for a rich expressivity of collective behaviors.
https://arxiv.org/abs/2601.07610
Academic Papers
svg
c5d585f210d215b0a7dcd98ab4b3d0235564f36529269b95fef43e2602b61e46
2026-01-13T00:00:00-05:00
Scattering at Interluminal Interfaces
arXiv:2601.06073v1 Announce Type: new Abstract: Scattering at interluminal modulation interfaces, where a sharp space-time perturbation moves at a velocity lying between the wave velocities of the two surrounding media, has remained an open problem for decades. This regime is somewhat reminiscent of the Cherenkov regime, in which the velocity of a charged particle exceeds the phase velocity of light in a medium. However, because it involves two media and a moving interface, it gives rise to richer and more complex scattering dynamics, with a single scattered wave when the incident wave propagates in the same direction as the interface and three scattered waves when they propagate in opposite directions. Existing studies address only limited non-magnetic configurations, and a general formulation has yet to be established. In this paper, we present a complete and general solution to scattering in the interluminal regime using a symmetric decomposition approach based on subluminal and superluminal limit interfaces, together with a space-time impulse response. This approach provides clear physical insight into the scattering features of the interluminal regime. Our results bridge the long-standing gap between the subluminal and superluminal regimes and elucidate the fundamental mechanisms underlying interluminal scattering.
https://arxiv.org/abs/2601.06073
Academic Papers
svg
3d9075e0d73f0980b3cfade93e6b0466580a44f726d1a7bada1e469fc3bd21ff
2026-01-13T00:00:00-05:00
A Polarization Hall Effect in Hydrated DNA
arXiv:2601.06089v1 Announce Type: new Abstract: Understanding how biological soft matter responds to electromagnetic fields under ambient conditions remains a central challenge, as thermal fluctuations are generally expected to suppress long-range organization. Here, we report that hydrated DNA exhibits a reproducible magnetic-field-induced transition characterized by a sharp transverse-voltage threshold (40-50 mV), followed by a regime of regular, phase-stable oscillations in the transverse polarization signal. These features emerge only beyond the threshold and display a pronounced temperature dependence, consistent with the formation of a collective mode within the hydrogen-bond network of the DNA-water interface. Motivated by recent studies of Hall-like responses carried by neutral excitations, including phonons, magnons, and excitons, we interpret the observed transverse signal in terms of coherent polarization dynamics of proton-proton-hole dipoles confined to a quasi-two-dimensional hydrated layer. Within this framework, the transverse response is attributed to a field-organized polarization mode; the measured transverse voltage arises from collective dipolar dynamics rather than steady carrier transport. These results identify hydrated DNA as a soft-matter system in which magnetic field and temperature jointly modulate collective polarization dynamics, providing a biologically relevant platform for exploring coherence and transverse responses in hydrogen-bonded media.
https://arxiv.org/abs/2601.06089
Academic Papers
svg
5ec8c29ae5d8fd858b1343347f1da16aa3d75c55a822a433280905b0d9728dc5
2026-01-13T00:00:00-05:00
Energetic vs Inference-Based Invisibility: Fisher-Information Analysis of Two-Layer Acoustic Near-Cloaks
arXiv:2601.06091v1 Announce Type: new Abstract: Near-cloaks based on passive coatings can strongly suppress scattered-field energy in a narrow frequency band, yet an observer's ability to infer object parameters from noisy measurements need not decrease proportionally. We develop a fully theoretical two-dimensional (2D) framework for a coated acoustic cylinder in an air background. Using an exact cylindrical-harmonic solution of the Helmholtz equation, we compute the modal scattering coefficients a_m(omega) for a core of radius a surrounded by two concentric effective-fluid layers, and we design the coating to cancel the dominant low-order multipoles (monopole m=0 and dipole m=+/-1) at a target frequency, yielding a narrowband near-cloak. Beyond the conventional energetic metric (total scattering width), we quantify information-based detectability through the Fisher information matrix (FIM) and the associated Cramer-Rao lower bounds (CRLBs) for joint estimation of the size-material parameter vector x=[a, rho1, c1]^T from noisy far-field data. A representative air-background study exhibits an approximately 25 dB reduction in total scattering width near the design frequency, while tr(FIM) decreases by only a few dB, demonstrating that energy-based and inference-based notions of invisibility are distinct objectives. We further provide a low-order analytic argument clarifying the mechanism behind this energetic-informational decoupling and report design-space and local-robustness diagnostics that highlight persistent trade-offs between scattering suppression and parameter identifiability.
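For readers unfamiliar with the inference-based metric, the sketch below shows the generic Gaussian-noise route from a measurement Jacobian to the FIM and the per-parameter CRLBs. The toy Jacobian, noise level, and sample count are assumptions and do not reproduce the paper's cylindrical-harmonic forward model.

```python
# For y = h(x) + N(0, sigma^2 I), FIM = J^T J / sigma^2 with J = dh/dx,
# and the CRLB for each parameter is the corresponding diagonal of FIM^{-1}.
import numpy as np

def fim_and_crlb(jacobian, sigma):
    fim = jacobian.T @ jacobian / sigma ** 2
    crlb = np.diag(np.linalg.inv(fim))   # per-parameter variance lower bounds
    return fim, crlb

rng = np.random.default_rng(0)
J = rng.standard_normal((50, 3))          # toy dh/d[a, rho1, c1] at 50 far-field samples
fim, crlb = fim_and_crlb(J, sigma=0.1)
print(np.trace(fim), crlb)
```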
https://arxiv.org/abs/2601.06091
Academic Papers
svg