f252dff2cbc6a0467e638e6b1bc6aa890f30a606153a815775e9ceee0f75474f
2026-02-02T00:00:00-05:00
Emotions Where Art Thou: Understanding and Characterizing the Emotional Latent Space of Large Language Models
arXiv:2510.22042v2 Announce Type: replace Abstract: This work investigates how large language models (LLMs) internally represent emotion by analyzing the geometry of their hidden-state space. The paper identifies a low-dimensional emotional manifold and shows that emotional representations are directionally encoded, distributed across layers, and aligned with interpretable dimensions. These structures are stable across depth and generalize to eight real-world emotion datasets spanning five languages. Cross-domain alignment yields low error and strong linear probe performance, indicating a universal emotional subspace. Within this space, internal emotion perception can be steered while preserving semantics using a learned intervention module, with especially strong control for basic emotions across languages. These findings reveal a consistent and manipulable affective geometry in LLMs and offer insight into how they internalize and process emotion.
https://arxiv.org/abs/2510.22042
Academic Papers
svg
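The steering described in the abstract above can be sketched minimally: an emotion is represented as a direction in hidden-state space, and internal perception is shifted by moving the hidden state along that direction. The function name `steer` and the scale `alpha` are illustrative assumptions, not the paper's learned intervention module.

```python
import numpy as np

# Hypothetical sketch of directional steering in a hidden-state space:
# an emotion is encoded as a direction v, and the hidden state is shifted
# along v by a chosen magnitude. Names and scaling are assumptions.
def steer(hidden, v, alpha):
    """Shift `hidden` by `alpha` along the unit direction of `v`."""
    v = v / np.linalg.norm(v)
    return hidden + alpha * v

h = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])
h2 = steer(h, v, 0.5)  # moves 0.5 along the normalized emotion direction
```

The paper instead learns the intervention; this linear shift only illustrates the "directionally encoded" claim.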
3b829bc6be9326106f5e259c54325235adb858a25cb8ed8ea6dc24e23e96f79a
2026-02-02T00:00:00-05:00
Batch Speculative Decoding Done Right
arXiv:2510.22876v2 Announce Type: replace Abstract: Speculative decoding must produce an output distribution identical to that of standard autoregressive generation; this output equivalence is not an optimization target but the defining criterion of valid speculative decoding. We demonstrate that all existing batch speculative decoding implementations violate this fundamental requirement, producing corrupted outputs ranging from repetitive tokens to gibberish. These failures stem from the ragged tensor problem: sequences in the same batch accept different numbers of draft tokens, desynchronizing position IDs, attention masks, and KV-cache state. We present the first authentic batch speculative decoding framework. We (1) formalize the synchronization invariants that valid batch speculative decoding must satisfy, (2) present EQSPEC, the first algorithm that guarantees output equivalence, and analyze its cost structure to show that alignment overhead grows superlinearly and consumes up to 40\% of computation, and (3) introduce EXSPEC, which reduces this overhead through cross-batch scheduling that dynamically groups same-length sequences. On SpecBench across Vicuna-7B/68M, Qwen3-8B/0.6B, and GLM-4-9B/0.6B pairs, our methods achieve up to 3x throughput improvement at batch size 8 while maintaining algorithmic correctness. Our methods achieve 95\% decoding-equivalence, with residual divergence attributable to floating-point non-determinism in GPU inference, not the synchronization failures that cause near-zero equivalence of prior methods. Our code is available at https://github.com/eBay/spec_dec.
https://arxiv.org/abs/2510.22876
Academic Papers
svg
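The "ragged tensor problem" in the abstract above comes from per-sequence acceptance: each sequence in the batch keeps a different number of draft tokens, so positions desynchronize. A minimal sketch of greedy acceptance counting, where `accept_lengths` is a hypothetical helper and not the paper's EQSPEC implementation:

```python
# Count, per sequence, how many leading draft tokens the target model
# agrees with. Differing counts across the batch are exactly what
# desynchronizes position ids, masks, and KV-cache state.
def accept_lengths(draft_batch, target_batch):
    out = []
    for draft, target in zip(draft_batch, target_batch):
        n = 0
        for d, t in zip(draft, target):
            if d != t:
                break
            n += 1
        out.append(n)
    return out

# Two sequences accepting 2 and 0 draft tokens -> ragged positions.
lens = accept_lengths([[5, 7, 9], [1, 2, 3]], [[5, 7, 8], [4, 2, 3]])
```

Any valid batch implementation must realign state after a step like this, which is where the paper's synchronization invariants apply.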
6004d982a376eb2ec83a9312d20d2fca3d82215e307bf15887f21c020f31371b
2026-02-02T00:00:00-05:00
Equivariant Neural Networks for General Linear Symmetries on Lie Algebras
arXiv:2510.22984v2 Announce Type: replace Abstract: Many scientific and geometric problems exhibit general linear symmetries, yet most equivariant neural networks are built for compact groups or simple vector features, limiting their reuse on matrix-valued data such as covariances, inertias, or shape tensors. We introduce Reductive Lie Neurons (ReLNs), an exactly GL(n)-equivariant architecture that natively supports matrix-valued and Lie-algebraic features. ReLNs resolve a central stability issue for reductive Lie algebras by introducing a non-degenerate adjoint (conjugation)-invariant bilinear form, enabling principled nonlinear interactions and invariant feature construction in a single architecture that transfers across subgroups without redesign. We demonstrate ReLNs on algebraic tasks with sl(3) and sp(4) symmetries, Lorentz-equivariant particle physics, uncertainty-aware drone state estimation via joint velocity-covariance processing, learning from 3D Gaussian-splat representations, and EMLP double-pendulum benchmark spanning multiple symmetry groups. ReLNs consistently match or outperform strong equivariant and self-supervised baselines while using substantially fewer parameters and compute, improving the accuracy-efficiency trade-off and providing a practical, reusable backbone for learning with broad linear symmetries. Project page: https://reductive-lie-neuron.github.io/
https://arxiv.org/abs/2510.22984
Academic Papers
svg
e541b9106acc35d1f9f0b0875cfbf5dbbbdd67e3822de21c868e8ed869cdc3bd
2026-02-02T00:00:00-05:00
Are Agents Probabilistic Automata? A Trace-Based, Memory-Constrained Theory of Agentic AI
arXiv:2510.23487v2 Announce Type: replace Abstract: This paper studies standard controller architectures for agentic AI and derives automata-theoretic models of their interaction behavior via trace semantics and abstraction. We model an agent implementation as a finite control program augmented with explicit memory primitives (bounded buffers, a call stack, or read/write external memory) and a stochastic policy component (e.g., an LLM) that selects among architecturally permitted actions. Instead of equating the concrete agent with a deterministic acceptor, we treat the agent-environment closed loop as inducing a probability distribution over finite interaction traces. Given an abstraction function $\mathrm{Abs}$ from concrete configurations to a finite abstract state space, we obtain a probabilistic trace language and an abstract probabilistic transition model $M_{\mathrm{Abs}}$ suitable for probabilistic model checking. Imposing explicit, framework-auditable restrictions on memory access and control flow, we prove that the support of the resulting trace language is regular for bounded-memory controllers, context-free for strict call-return controllers, and recursively enumerable for controllers equipped with unbounded read/write memory. These correspondences allow the reuse of existing verification methods for finite-state and pushdown systems, and they delineate precisely when undecidability barriers arise. The probabilistic semantics leads to quantitative analyses such as: what is the probability of entering an unsafe abstract region, and how can we bound this probability in the presence of environment nondeterminism?
https://arxiv.org/abs/2510.23487
Academic Papers
svg
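The abstract transition model described above can be estimated from traces by abstracting each concrete configuration and counting transitions between abstract states. This is a hedged sketch of that construction; the parity abstraction and function names are illustrative, not from the paper.

```python
from collections import Counter, defaultdict

# Estimate an abstract probabilistic transition model from concrete traces:
# map each configuration through the abstraction function, then normalize
# transition counts between abstract states.
def estimate_model(traces, abstraction):
    counts = defaultdict(Counter)
    for trace in traces:
        abstract = [abstraction(c) for c in trace]
        for s, t in zip(abstract, abstract[1:]):
            counts[s][t] += 1
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

# Concrete integer states abstracted by parity: the model is over {0, 1}.
model = estimate_model([[0, 1, 2, 3], [2, 3, 4]], lambda c: c % 2)
```

A model built this way can be fed to a probabilistic model checker to bound the probability of reaching an unsafe abstract region.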
b90a0178b989a97a2aac8e2f7f49e9b063e66593bb504802bd660a320f31a2dc
2026-02-02T00:00:00-05:00
Key and Value Weights Are Probably All You Need: On the Necessity of the Query, Key, Value Weight Triplet in Encoder-Only and Decoder-Only Transformers
arXiv:2510.23912v4 Announce Type: replace Abstract: We theoretically investigate whether the Query, Key, Value weight triplet can be reduced in encoder-only and decoder-only transformers. Under mild assumptions, we prove that Query weights are redundant and can be replaced with the identity matrix, reducing attention parameters by $25\%$. This also simplifies optimization: attention logits become linear rather than quadratic in learned weights. Validating on decoder-only GPT-style small models trained from scratch, we find that with adjusted attention scaling and weight decay, reduced models match baseline performance despite fewer parameters. Training remains stable at over $3\times$ lower weight decay, suggesting Query weight elimination provides implicit regularization. Our analysis has also led us to a structural expressivity boundary: in the mathematically tractable ReLU setting, skip connections push MLPs into a generically disjoint function class at fixed width. These findings motivate investigation across modalities and at scale, where the observed stability and efficiency gains may prove most consequential.
https://arxiv.org/abs/2510.23912
Academic Papers
svg
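The reduction claimed above is easy to state in code: with the Query weights set to the identity, the attention logits involve only one learned matrix, so they are linear rather than quadratic in learned weights. Shapes, scaling, and names below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Attention logits with identity Query weights: Q = X, K = X @ W_K,
# so logits = X (X W_K)^T / scale -- linear in the learned W_K.
def attn_logits_identity_q(X, W_K, scale):
    K = X @ W_K
    return (X @ K.T) / scale

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))    # 4 tokens, model width 8
W_K = rng.normal(size=(8, 8))  # the only learned matrix in the logits
logits = attn_logits_identity_q(X, W_K, np.sqrt(8))
```

In the standard parameterization the logits contain the product $W_Q W_K^T$, which is what makes them quadratic in learned weights.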
557309a50795aed335e79b16f3e4e6961d9f4e248e7172e0badf615e163aa28a
2026-02-02T00:00:00-05:00
Can Aha Moments Be Fake? Identifying True and Decorative Thinking Steps in Chain-of-Thought
arXiv:2510.24941v2 Announce Type: replace Abstract: Large language models can generate long chain-of-thought (CoT) reasoning, but it remains unclear whether the verbalized steps reflect the models' internal thinking. In this work, we propose a True Thinking Score (TTS) to quantify the causal contribution of each step in CoT to the model's final prediction. Our experiments show that LLMs often interleave between true-thinking steps (which are genuinely used to compute the final output) and decorative-thinking steps (which give the appearance of reasoning but have minimal causal influence). We reveal that only a small subset of the total reasoning steps causally drive the model's prediction: e.g., on AIME, only an average of 2.3% of reasoning steps in CoT have a TTS >= 0.7 (range: 0-1) for Qwen-2.5. Furthermore, we find that LLMs can be steered to internally follow or disregard specific steps in their verbalized CoT using the identified TrueThinking direction. We highlight that self-verification steps in CoT (i.e., aha moments) can be decorative, while steering along the TrueThinking direction can force internal reasoning over these steps. Overall, our work reveals that LLMs often verbalize reasoning steps without performing them internally, challenging the efficiency of LLM reasoning and the trustworthiness of CoT.
https://arxiv.org/abs/2510.24941
Academic Papers
svg
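A True-Thinking-Score-style measurement can be sketched as a causal ablation: score each CoT step by how much removing it changes the model's final-answer probability. `answer_prob` below is a stand-in for a model call, and the clamp to [0, 1] and the naming are assumptions for illustration, not the paper's exact definition.

```python
# Score each reasoning step by the drop in final-answer probability when
# that step is ablated from the chain of thought.
def true_thinking_scores(steps, answer_prob):
    full = answer_prob(steps)
    scores = []
    for i in range(len(steps)):
        ablated = steps[:i] + steps[i + 1:]
        drop = full - answer_prob(ablated)
        scores.append(max(0.0, min(1.0, drop)))  # clamp to the 0-1 range
    return scores

# Toy model: the answer is likely only if the "compute" step survives.
prob = lambda steps: 0.9 if "compute" in steps else 0.1
scores = true_thinking_scores(["restate", "compute", "verify"], prob)
```

Under this toy model only the "compute" step is true-thinking; "restate" and "verify" are decorative in the abstract's sense.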
8746b01a7f477299f3add446177cbb088ac8bdcbbd2650e4b767cd30747f3713
2026-02-02T00:00:00-05:00
An Analysis of Causal Effect Estimation using Outcome Invariant Data Augmentation
arXiv:2510.25128v2 Announce Type: replace Abstract: The technique of data augmentation (DA) is often used in machine learning for regularization purposes to better generalize under i.i.d. settings. In this work, we present a unifying framework with topics in causal inference to make a case for the use of DA beyond just the i.i.d. setting, but for generalization across interventions as well. Specifically, we argue that when the outcome generating mechanism is invariant to our choice of DA, then such augmentations can effectively be thought of as interventions on the treatment generating mechanism itself. This can potentially help to reduce bias in causal effect estimation arising from hidden confounders. In the presence of such unobserved confounding we typically make use of instrumental variables (IVs) -- sources of treatment randomization that are conditionally independent of the outcome. However, IVs may not be as readily available as DA for many applications, which is the main motivation behind this work. By appropriately regularizing IV based estimators, we introduce the concept of IV-like (IVL) regression for mitigating confounding bias and improving predictive performance across interventions even when certain IV properties are relaxed. Finally, we cast parameterized DA as an IVL regression problem and show that, when used in composition, it can simulate a worst-case application of such DA, further improving performance on causal estimation and generalization tasks beyond what simple DA may offer. This is shown both theoretically for the population case and via simulation experiments for the finite sample case using a simple linear example. We also present real data experiments to support our case.
https://arxiv.org/abs/2510.25128
Academic Papers
svg
ad5f426f19b739630c9df8f9357e1243d0b2886ab486392dff1274d4d4c49fb9
2026-02-02T00:00:00-05:00
An Aristotelian ontology of instrumental goals: Structural features to be managed and not failures to be eliminated
arXiv:2510.25471v2 Announce Type: replace Abstract: Instrumental goals such as resource acquisition, power-seeking, and self-preservation are key to contemporary AI alignment research, yet the phenomenon's ontology remains under-theorised. This article develops an ontological account of instrumental goals and draws out governance-relevant distinctions for advanced AI systems. After systematising the dominant alignment literature on instrumental goals we offer an exploratory Aristotelian framework that treats advanced AI systems as complex artefacts whose ends are externally imposed through design, training and deployment. On a structural reading, Aristotle's notion of hypothetical necessity explains why, given an imposed end pursued over extended horizons in particular environments, certain enabling conditions become conditionally required, thereby yielding robust instrumental tendencies. On a contingent reading, accidental causation and chance-like intersections among training regimes, user inputs, infrastructure and deployment contexts can generate instrumental-goal-like behaviours not entailed by the imposed end-structure. This dual-aspect ontology motivates governance and management approaches that treat instrumental goals as features of advanced AI systems to be managed rather than anomalies eliminable by technical interventions.
https://arxiv.org/abs/2510.25471
Academic Papers
svg
841922589d5be934af827c312b0ba0934f4654c7caac988984a8af4768a6bd29
2026-02-02T00:00:00-05:00
Right for the Right Reasons: Avoiding Reasoning Shortcuts via Prototypical Neurosymbolic AI
arXiv:2510.25497v3 Announce Type: replace Abstract: Neurosymbolic AI is growing in popularity thanks to its ability to combine neural perception and symbolic reasoning in end-to-end trainable models. However, recent findings reveal that these models are prone to shortcut reasoning, i.e., to learning unintended concepts--or neural predicates--which exploit spurious correlations to satisfy the symbolic constraints. In this paper, we address reasoning shortcuts at their root cause and we introduce Prototypical Neurosymbolic architectures. These models are able to satisfy the symbolic constraints (be right) because they have learnt the correct basic concepts (for the right reasons) and not because of spurious correlations, even in extremely low data regimes. Leveraging the theory of prototypical learning, we demonstrate that we can effectively avoid reasoning shortcuts by training the models to satisfy the background knowledge while taking into account the similarity of the input with respect to the handful of labelled datapoints. We extensively validate our approach on the recently proposed rsbench benchmark suite in a variety of settings and tasks with very scarce supervision: we show significant improvements in learning the right concepts both in synthetic tasks (MNIST-EvenOdd and Kand-Logic) and real-world, high-stakes ones (BDD-OIA). Our findings pave the way to prototype grounding as an effective, annotation-efficient strategy for safe and reliable neurosymbolic learning.
https://arxiv.org/abs/2510.25497
Academic Papers
svg
5ca2dbb6bebc4c9dcc5d6e47cd2e1febd545bd1ff18f309fd0e64d5a03be9a21
2026-02-02T00:00:00-05:00
Metis-SPECS: Decoupling Multimodal Learning via Self-distilled Preference-based Cold Start
arXiv:2510.25801v3 Announce Type: replace Abstract: Reinforcement learning (RL) with verifiable rewards has recently catalyzed a wave of "MLLM-r1" approaches that bring RL to vision language models. Most representative paradigms begin with a cold start, typically employing supervised fine-tuning (SFT), to initialize the policy before RL. However, SFT-based cold start adopts the reasoning paradigm intertwined with task solution and output format, which may induce instruction-style overfitting, weakens out-of-distribution generalization, and ultimately affects downstream RL. We revisit the cold start along two views, its training method and data construction, and introduce the Generalization Factor (GF) coefficient to quantify the generalization capability under different methods. Our empirical study finds that preference-based training methods (e.g. DPO) generalize better than SFT-based methods in cold start. Motivated by this, we propose SPECS, a Self-distilled Preference-based Cold Start framework that decouples multimodal learning: (1) generates introspective preference data pairs via self-distillation, avoiding reliance on larger teachers or manual annotation; (2) performs preference-based training that focuses on shallow, transferable surface-form criteria (format, structure, style) rather than memorizing content; and (3) hands off to RL with verifiable rewards for deep reasoning results. Experimental results across multiple multimodal benchmarks show that our decoupling learning framework yields consistent performance gains over strong baselines, improving MEGA-Bench by 4.1% and MathVista by 12.2%. Additional experiments indicate that SPECS contributes to reducing in-distribution "stuckness," improving exploration, stabilizing training, and raising the performance ceiling. Project Page: https://kwen-chen.github.io/SPECS-VL/
https://arxiv.org/abs/2510.25801
Academic Papers
svg
2301e80f84f474a563a3a6badd75920efd2de5685e8f665cee978b82dc2a7f03
2026-02-02T00:00:00-05:00
OneTrans: Unified Feature Interaction and Sequence Modeling with One Transformer in Industrial Recommender
arXiv:2510.26104v2 Announce Type: replace Abstract: In recommendation systems, scaling up feature-interaction modules (e.g., Wukong, RankMixer) or user-behavior sequence modules (e.g., LONGER) has achieved notable success. However, these efforts typically proceed on separate tracks, which not only hinders bidirectional information exchange but also prevents unified optimization and scaling. In this paper, we propose OneTrans, a unified Transformer backbone that simultaneously performs user-behavior sequence modeling and feature interaction. OneTrans employs a unified tokenizer to convert both sequential and non-sequential attributes into a single token sequence. The stacked OneTrans blocks share parameters across similar sequential tokens while assigning token-specific parameters to non-sequential tokens. Through causal attention and cross-request KV caching, OneTrans enables precomputation and caching of intermediate representations, significantly reducing computational costs during both training and inference. Experimental results on industrial-scale datasets demonstrate that OneTrans scales efficiently with increasing parameters, consistently outperforms strong baselines, and yields a 5.68% lift in per-user GMV in online A/B tests.
https://arxiv.org/abs/2510.26104
Academic Papers
svg
dfac74e5b608ec6e08065327395e60cdd42e84f9cfdc17a18a42ec27ce4e11ff
2026-02-02T00:00:00-05:00
BOTS: A Unified Framework for Bayesian Online Task Selection in LLM Reinforcement Finetuning
arXiv:2510.26374v3 Announce Type: replace Abstract: Reinforcement finetuning (RFT) is a key technique for aligning Large Language Models (LLMs) with human preferences and enhancing reasoning, yet its effectiveness is highly sensitive to which tasks are explored during training. Uniform task sampling is inefficient, wasting computation on tasks that are either trivial or unsolvable, while existing task selection methods often suffer from high rollout costs, poor adaptivity, or incomplete evidence. We introduce BOTS, a unified framework for Bayesian Online Task Selection in LLM reinforcement finetuning. Grounded in Bayesian inference, BOTS adaptively maintains posterior estimates of task difficulty as the model evolves. It jointly incorporates explicit evidence from direct evaluations of selected tasks and implicit evidence inferred from these evaluations for unselected tasks, with Thompson sampling ensuring a principled balance between exploration and exploitation for task selection. To make implicit evidence practical, we instantiate it with an ultra-light interpolation-based plug-in that estimates difficulties of tasks without extra rollouts, adding negligible overhead. Empirically, across diverse domains and LLM scales, BOTS consistently improves data efficiency and performance over baselines and ablations, providing a practical and extensible solution for dynamic task selection in RFT. Code is available at https://github.com/agentscope-ai/Trinity-RFT/tree/main/examples/bots.
https://arxiv.org/abs/2510.26374
Academic Papers
svg
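The Thompson-sampling selection described in the BOTS abstract can be sketched with Beta-Bernoulli posteriors over per-task solve rates: sample a difficulty estimate for each task and pick the task whose sample is closest to a target solve rate (neither trivial nor unsolvable). The target of 0.5 and all names below are illustrative assumptions, not the paper's exact criterion.

```python
import random

# Each task's solve rate has a Beta(a, b) posterior; Thompson sampling
# draws one sample per task and selects the task closest to a target rate.
def select_task(posteriors, target=0.5, rng=random):
    samples = {task: rng.betavariate(a, b) for task, (a, b) in posteriors.items()}
    return min(samples, key=lambda t: abs(samples[t] - target))

# Bayesian update from a binary solved/unsolved outcome.
def update(posteriors, task, solved):
    a, b = posteriors[task]
    posteriors[task] = (a + solved, b + (1 - solved))

posteriors = {"easy": (9, 1), "hard": (1, 9), "mid": (5, 5)}
task = select_task(posteriors, rng=random.Random(0))
update(posteriors, task, solved=1)
```

Randomness in the sampled estimates is what gives the exploration-exploitation balance the abstract attributes to Thompson sampling.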
bca4c8dd70d188828e44e804c2ebe875dff3f378201d6db96f6bc0a39153f802
2026-02-02T00:00:00-05:00
LoCoT2V-Bench: Benchmarking Long-Form and Complex Text-to-Video Generation
arXiv:2510.26412v2 Announce Type: replace Abstract: Recent advances in text-to-video generation have achieved impressive performance on short clips, yet evaluating long-form generation under complex textual inputs remains a significant challenge. In response to this challenge, we present LoCoT2V-Bench, a benchmark for long video generation (LVG) featuring multi-scene prompts with hierarchical metadata (e.g., character settings and camera behaviors), constructed from collected real-world videos. We further propose LoCoT2V-Eval, a multi-dimensional framework covering perceptual quality, text-video alignment, temporal quality, dynamic quality, and Human Expectation Realization Degree (HERD), with an emphasis on aspects such as fine-grained text-video alignment and temporal character consistency. Experiments on 13 representative LVG models reveal pronounced capability disparities across evaluation dimensions, with strong perceptual quality and background consistency but markedly weaker fine-grained text-video alignment and character consistency. These findings suggest that improving prompt faithfulness and identity preservation remains a key challenge for long-form video generation.
https://arxiv.org/abs/2510.26412
Academic Papers
svg
4e2080a2d00738392277092f137c53104832622350ae380af9eb4ffe64da35aa
2026-02-02T00:00:00-05:00
A Comprehensive Evaluation and Practice of System Penetration Testing
arXiv:2510.26555v2 Announce Type: replace Abstract: With the rapid advancement of information technology, the complexity of applications continues to increase, and the cybersecurity challenges we face are also escalating. This paper aims to investigate the methods and practices of system security penetration testing, exploring how to enhance system security through systematic penetration testing processes and technical approaches. It also examines existing penetration tools, analyzing their strengths, weaknesses, and applicable domains to guide penetration testers in tool selection. Furthermore, based on the penetration testing process outlined in this paper, appropriate tools are selected to replicate attack processes using target ranges and target machines. Finally, through practical case analysis, lessons learned from successful attacks are summarized to inform future research.
https://arxiv.org/abs/2510.26555
Academic Papers
svg
c44e6bbd1c17f4a742cae18f9f0ee0b39574c6906196fe20f51df5d7ea50ac78
2026-02-02T00:00:00-05:00
CATArena: Evaluating Evolutionary Capabilities of Code Agents via Iterative Tournaments
arXiv:2510.26852v2 Announce Type: replace Abstract: Current evaluation for Large Language Model (LLM) code agents predominantly focuses on generating functional code in single-turn scenarios, which fails to evaluate the agent's capability for continuous code optimization and multi-turn iterative development. To bridge this gap, we introduce CATArena, a framework designed to evaluate the evolutionary capabilities of code agents via iterative tournaments. Agents engage in multi-turn tournaments and continuously refine their code through self-reflection and peer-learning based on comprehensive execution feedback. For evaluation, we propose a dual-metric system to decouple static generation proficiency from evolutionary potential. Extensive experiments reveal that an agent's evolutionary potential is not strictly correlated with its initial proficiency. Our analysis further reveals that current agents struggle to concurrently leverage both peer-learning and self-reflection for effective performance gains. Furthermore, the results validate CATArena's high extensibility and resistance to task variance, establishing it as a continuous and reliable standard for assessing the evolutionary capability of LLM code agents.
https://arxiv.org/abs/2510.26852
Academic Papers
svg
d4a6e21760b36ef2039a7243d39322ef4062914aa9c57bdc949e5d69003a7f8f
2026-02-02T00:00:00-05:00
Closing the Expression Gap in LLM Instructions via Socratic Questioning
arXiv:2510.27410v3 Announce Type: replace Abstract: A fundamental bottleneck in human-AI collaboration is the "intention expression gap," the difficulty for humans to effectively convey complex, high-dimensional thoughts to AI. This challenge often traps users in inefficient trial-and-error loops and is exacerbated by the diverse expertise levels of users. We reframe this problem from passive instruction following to a Socratic collaboration paradigm, proposing an agent that actively probes for information to resolve its uncertainty about user intent. We name the proposed agent Nous, trained to acquire proficiency in this inquiry policy. The core mechanism of Nous is a training framework grounded in the first principles of information theory. Within this framework, we define the information gain from dialogue as an intrinsic reward signal, which is fundamentally equivalent to the reduction of Shannon entropy over a structured task space. This reward design enables us to avoid reliance on costly human preference annotations or external reward models. To validate our framework, we develop an automated simulation pipeline to generate a large-scale, preference-based dataset for the challenging task of scientific diagram generation. Comprehensive experiments, including ablations, subjective and objective evaluations, and tests across user expertise levels, demonstrate the effectiveness of our proposed framework. Nous achieves leading efficiency and output quality, while remaining robust to varying user expertise. In conclusion, our research provides a systematic methodology and a new perspective for addressing the issue of ambiguous intentions in complex human-machine collaboration.
https://arxiv.org/abs/2510.27410
Academic Papers
svg
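The reward described in the Nous abstract — information gain as reduction of Shannon entropy over a finite task space — has a direct expression in code. The toy posterior below (a question that rules out half of the candidate intents) is an assumption for illustration only.

```python
import math

# Shannon entropy of a discrete distribution, in bits.
def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

# Intrinsic reward: H(prior) - H(posterior) over the task space.
def information_gain(prior, posterior):
    return entropy(prior) - entropy(posterior)

# A question that eliminates half of four equally likely intents gains 1 bit.
gain = information_gain([0.25] * 4, [0.5, 0.5, 0.0, 0.0])
```

Because the reward is computed from distributions the agent itself maintains, no human preference labels or external reward model are needed, which is the point the abstract makes.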
d17d845c2865eafdd2bc2aeecb6e7dcee1663d72c951a6a87fc2d6e6c2327a02
2026-02-02T00:00:00-05:00
NegoCollab: A Common Representation Negotiation Approach for Heterogeneous Collaborative Perception
arXiv:2510.27647v3 Announce Type: replace Abstract: Collaborative perception improves task performance by expanding the perception range through information sharing among agents. Immutable heterogeneity poses a significant challenge in collaborative perception, as participating agents may employ different and fixed perception models. This leads to domain gaps in the intermediate features shared among agents, consequently degrading collaborative performance. Aligning the features of all agents to a common representation can eliminate domain gaps with low training cost. However, in existing methods, the common representation is designated as the representation of a specific agent, making it difficult for agents with significant domain discrepancies from this specific agent to achieve proper alignment. This paper proposes NegoCollab, a heterogeneous collaboration method based on the negotiated common representation. It introduces a negotiator during training to derive the common representation from the local representations of each modality's agent, effectively reducing the inherent domain gap with the various local representations. In NegoCollab, the mutual transformation of features between the local representation space and the common representation space is achieved by a pair of sender and receiver. To better align local representations to the common representation containing multimodal information, we introduce structural alignment loss and pragmatic alignment loss in addition to the distribution alignment loss to supervise the training. This enables the knowledge in the common representation to be fully distilled into the sender.
https://arxiv.org/abs/2510.27647
Academic Papers
svg
b6fd91b3dccb3e558f101e5b64616ff99b5ea2498bc9c6d7fb908385d7ff95a2
2026-02-02T00:00:00-05:00
Sharpness-Guided Group Relative Policy Optimization via Probability Shaping
arXiv:2511.00066v3 Announce Type: replace Abstract: Reinforcement learning with verifiable rewards (RLVR) has become a practical route to improve large language model reasoning, and Group Relative Policy Optimization (GRPO) is a widely used optimizer in this setting. However, RLVR training is typically performed with limited control over generalization. We revisit GRPO through a robustness-based generalization view, where the generalization loss is upper bounded by a combination of the empirical loss and a sharpness surrogate measured by the gradient norm. Building on this perspective, we propose Sharpness-Guided GRPO (GRPO-SG), a simple token-weighted variant of GRPO that downweights tokens likely to cause overly large gradients, reducing sharp updates and stabilizing optimization, thereby improving generalization. Experiments across mathematical reasoning, logic puzzles and tool-augmented question answering show consistent improvements over GRPO, along with smoother gradient-norm trajectories, supporting GRPO-SG as a simple and effective generalization-oriented upgrade to GRPO for RLVR.
https://arxiv.org/abs/2511.00066
Academic Papers
svg
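A token-weighted loss in the spirit of the GRPO-SG abstract can be sketched by downweighting tokens whose (proxy) gradient magnitude is large, so sharp updates contribute less. The `1/(1+g)` form and the normalization below are illustrative choices, not the paper's exact weighting.

```python
# Downweight tokens with large gradient-norm proxies so that sharp
# updates contribute less to the aggregate loss.
def token_weights(grad_norms):
    return [1.0 / (1.0 + g) for g in grad_norms]

# Weighted, normalized per-token loss aggregation.
def weighted_loss(token_losses, grad_norms):
    w = token_weights(grad_norms)
    return sum(wi * li for wi, li in zip(w, token_losses)) / sum(w)

# The third token has the largest gradient proxy, so it is downweighted most.
loss = weighted_loss([2.0, 1.0, 4.0], [0.0, 1.0, 3.0])
```

In the sharpness-bound view the abstract takes, suppressing such tokens shrinks the gradient-norm surrogate and, with it, the generalization bound.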
2694afc346a38ed8b13ac9bbf28a4422ac2296a5e6044a0cd0d7168f0bc3e22a
2026-02-02T00:00:00-05:00
Latent Domain Prompt Learning for Vision-Language Models
arXiv:2511.00067v2 Announce Type: replace Abstract: The objective of domain generalization (DG) is to enable models to be robust against domain shift. DG is crucial for deploying vision-language models (VLMs) in real-world applications, yet most existing methods rely on domain labels that may be unavailable and are often ambiguous. We instead study the DG setting where models must generalize well without access to explicit domain labels. Our key idea is to represent an unseen target domain as a combination of latent domains automatically discovered from training data, enabling the model to adaptively transfer knowledge across domains. To realize this, we perform latent domain clustering on image features and fuse domain-specific text features based on the similarity between the input image and each latent domain. Experiments on four benchmarks show that this strategy yields consistent gains over VLM-based baselines and provides new insights into improving robustness under domain shift.
https://arxiv.org/abs/2511.00067
Academic Papers
svg
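The fusion step described in the abstract above — weighting domain-specific text features by the input image's similarity to each latent domain — can be sketched as a softmax-weighted combination over cluster centroids. The softmax weighting, the temperature, and all names below are assumptions for illustration.

```python
import numpy as np

# Fuse per-domain text features, weighted by the image's similarity to
# each latent-domain centroid (softmax over dot-product similarities).
def fuse(image_feat, centroids, domain_text_feats, temp=1.0):
    sims = centroids @ image_feat        # similarity to each latent domain
    w = np.exp(sims / temp)
    w /= w.sum()                         # softmax weights over domains
    return w @ domain_text_feats         # weighted fusion of text features

centroids = np.array([[1.0, 0.0], [0.0, 1.0]])       # two latent domains
texts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]) # per-domain text features
fused = fuse(np.array([1.0, 0.0]), centroids, texts)
```

An image closest to the first latent domain draws most of its fused text representation from that domain, which is the adaptive transfer the abstract describes.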
ab6381dfbacfa0bcc125ee9cf4561d8e9034c795846c94338a78398e9814627a
2026-02-02T00:00:00-05:00
Reviving Stale Updates: Data-Free Knowledge Distillation for Asynchronous Federated Learning
arXiv:2511.00655v2 Announce Type: replace Abstract: Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data, yet its scalability is limited by synchronization overhead. Asynchronous federated learning (AFL) alleviates this issue by allowing clients to communicate independently, thereby improving wall-clock efficiency in large-scale, hardware-heterogeneous environments. However, asynchrony introduces updates computed on outdated global models (staleness) that can destabilize optimization and hinder convergence. We propose FedRevive, an AFL framework that revives stale updates through data-free knowledge distillation (DFKD). FedRevive integrates parameter-space aggregation with a lightweight, server-side DFKD process that transfers knowledge from stale client updates to the current global model without access to data. A meta-learned generator synthesizes pseudo-samples used for multi-teacher distillation. A hybrid aggregation scheme that combines raw with DFKD updates effectively mitigates staleness while retaining AFL scalability. Experiments on various vision and text benchmarks show that FedRevive achieves faster training by up to 38.4% and higher final accuracy by up to 16.5% than asynchronous baselines.
https://arxiv.org/abs/2511.00655
Academic Papers
svg
e4a35409375cb6314b16572ee8f75a5a64399eccf3f61073106878076dd1ca1b
2026-02-02T00:00:00-05:00
Multi-Step Knowledge Interaction Analysis via Rank-2 Subspace Disentanglement
arXiv:2511.01706v2 Announce Type: replace Abstract: Natural Language Explanations (NLEs) describe how Large Language Models (LLMs) make decisions by drawing on external Context Knowledge (CK) and Parametric Knowledge (PK). Understanding the interaction between these sources is key to assessing NLE grounding, yet these dynamics remain underexplored. Prior work has largely focused on (1) single-step generation and (2) modelled PK-CK interaction as a binary choice within a rank-1 subspace. This approach overlooks richer interactions and how they unfold over longer generations, such as complementary or supportive knowledge. We propose a novel rank-2 projection subspace that disentangles PK and CK contributions more accurately and use it for the first multi-step analysis of knowledge interactions across longer NLE sequences. Experiments across four QA datasets and three open-weight LLMs demonstrate that while rank-1 subspaces struggle to represent diverse interactions, our rank-2 formulation captures them effectively, highlighting PK alignment for supportive interactions and CK alignment for conflicting ones. Our multi-step analysis reveals, among other findings, that hallucinated generations exhibit strong alignment with the PK direction, whereas context-faithful generations maintain a more balanced alignment between PK and CK.
https://arxiv.org/abs/2511.01706
Academic Papers
svg
e6eafdd14b64c73f9090063165e352b7e88a2bea5c60b8bd26427c9bdeb0b7d6
2026-02-02T00:00:00-05:00
On the Coordination of Value-Maximizing Bidders
arXiv:2511.04993v2 Announce Type: replace Abstract: While the auto-bidding literature predominantly considers independent bidding, we investigate the coordination problem among multiple auto-bidders in online advertising platforms. Two motivating scenarios are: collaborative bidding among multiple bidders managed by a third-party bidding agent, and strategic bid selection for multiple ad campaigns managed by a single advertiser. We formalize this coordination problem as a theoretical model and investigate the coordination mechanism where only the highest-value bidder competes with outside bidders, while other coordinated bidders refrain from competing. We demonstrate that such a coordination mechanism dominates independent bidding, improving both Return-on-Spend (RoS) compliance and the total value accrued for the participating auto-bidders or ad campaigns, for a broad class of auto-bidding algorithms. Additionally, our simulations on synthetic and real-world datasets support the theoretical result that coordination outperforms independent bidding. These findings highlight both the theoretical potential and the practical robustness of coordinated auto-bidding in online auctions.
https://arxiv.org/abs/2511.04993
Academic Papers
svg
65f9cac853851f13f3360b19baa2766e91a1b55770ed2b2aab14a13d906eb94d
2026-02-02T00:00:00-05:00
Multi-agent Coordination via Flow Matching
arXiv:2511.05005v2 Announce Type: replace Abstract: This work presents MAC-Flow, a simple yet expressive framework for multi-agent coordination. We argue that requirements of effective coordination are twofold: (i) a rich representation of the diverse joint behaviors present in offline data and (ii) the ability to act efficiently in real time. However, prior approaches often sacrifice one for the other, i.e., denoising diffusion-based solutions capture complex coordination but are computationally slow, while Gaussian policy-based solutions are fast but brittle in handling multi-agent interaction. MAC-Flow addresses this trade-off by first learning a flow-based representation of joint behaviors, and then distilling it into decentralized one-step policies that preserve coordination while enabling fast execution. Across four different benchmarks, including $12$ environments and $34$ datasets, MAC-Flow alleviates the trade-off between performance and computational cost, specifically achieving about $\boldsymbol{\times14.5}$ faster inference compared to diffusion-based MARL methods, while maintaining good performance. At the same time, its inference speed is similar to that of prior Gaussian policy-based offline multi-agent reinforcement learning (MARL) methods.
https://arxiv.org/abs/2511.05005
Academic Papers
svg
8e1c7b383c8643fc7edc901a206577c802b81745675f83ba15282a9d417b0d96
2026-02-02T00:00:00-05:00
Omni-View: Unlocking How Generation Facilitates Understanding in Unified 3D Model based on Multiview images
arXiv:2511.07222v2 Announce Type: replace Abstract: This paper presents Omni-View, which extends the unified multimodal understanding and generation to 3D scenes based on multiview images, exploring the principle that "generation facilitates understanding". Consisting of understanding model, texture module, and geometry module, Omni-View jointly models scene understanding, novel view synthesis, and geometry estimation, enabling synergistic interaction between 3D scene understanding and generation tasks. By design, it leverages the spatiotemporal modeling capabilities of its texture module responsible for appearance synthesis, alongside the explicit geometric constraints provided by its dedicated geometry module, thereby enriching the model's holistic understanding of 3D scenes. Trained with a two-stage strategy, Omni-View achieves a state-of-the-art score of 55.4 on the VSI-Bench benchmark, outperforming existing specialized 3D understanding models, while simultaneously delivering strong performance in both novel view synthesis and 3D scene generation. The code and pretrained models are open-sourced at https://github.com/AIDC-AI/Omni-View.
https://arxiv.org/abs/2511.07222
Academic Papers
svg
f68669ddea819813ce2941c7b08c149d68db82d853e9b9bdc32ba555ddbdfd77
2026-02-02T00:00:00-05:00
Breaking the Adversarial Robustness-Performance Trade-off in Text Classification via Manifold Purification
arXiv:2511.07888v2 Announce Type: replace Abstract: A persistent challenge in text classification (TC) is that enhancing model robustness against adversarial attacks typically degrades performance on clean data. We argue that this challenge can be resolved by modeling the distribution of clean samples in the encoder embedding manifold. To this end, we propose the Manifold-Correcting Causal Flow (MC^2F), a two-module system that operates directly on sentence embeddings. A Stratified Riemannian Continuous Normalizing Flow (SR-CNF) learns the density of the clean data manifold. It identifies out-of-distribution embeddings, which are then corrected by a Geodesic Purification Solver. This solver projects adversarial points back onto the learned manifold via the shortest path, restoring a clean, semantically coherent representation. We conducted extensive evaluations on text classification across three datasets and multiple adversarial attacks. The results demonstrate that our method, MC^2F, not only establishes a new state-of-the-art in adversarial robustness but also fully preserves performance on clean data, even yielding modest gains in accuracy.
https://arxiv.org/abs/2511.07888
Academic Papers
svg
0c474b7c11692635e25c352a4f36c7d6f8a9698950566c2d5312175dc129c7e6
2026-02-02T00:00:00-05:00
MACEval: A Multi-Agent Continual Evaluation Network for Large Models
arXiv:2511.09139v2 Announce Type: replace Abstract: Hundreds of benchmarks dedicated to evaluating large models have been presented over the past few years. However, most of them remain closed-ended and are prone to overfitting due to the potential data contamination. Moreover, the increasing scale and scope of current benchmarks with transient metrics, as well as the heavily human-dependent curation procedure, pose significant challenges for timely maintenance and adaptation. In this paper, we introduce MACEval, a Multi-Agent Continual Evaluation network for dynamic evaluation of large models, and define new metrics to quantify performance longitudinally. MACEval employs an interactive and autonomous evaluation mode, utilizing role assignment, in-process data generation, and evaluation routing through a cascaded agent network. Extensive experiments on 23 large models demonstrate the effectiveness of MACEval, which also streamlines the evaluation process and substantially reduces overhead. We hope that MACEval can broaden future directions of large model evaluation. Project page: https://github.com/zijianchen98/MACEval.
https://arxiv.org/abs/2511.09139
Academic Papers
svg
dc0889be37bde620d6583dc771bff52bf5bca63413a08c0bd0d400516a6093db
2026-02-02T00:00:00-05:00
SiDGen: Structure-informed Diffusion for Generative modeling of Ligands for Proteins
arXiv:2511.09529v3 Announce Type: replace Abstract: Structure-based drug design (SBDD) faces a fundamental scaling-fidelity dilemma: rich pocket-aware conditioning captures interaction geometry but can be costly, often scaling quadratically ($O(L^2)$) or worse with protein length ($L$), while efficient sequence-only conditioning can miss key interaction structure. We propose SiDGen, a structure-informed discrete diffusion framework that resolves this trade-off through a Topological Information Bottleneck (TIB). SiDGen leverages a learned, soft assignment mechanism to compress residue-level protein representations into a compact bottleneck, enabling downstream pairwise computations on the coarse grid ($O(L^2/s^2)$). This design reduces memory and computational cost without compromising generative accuracy. Our approach achieves state-of-the-art performance on CrossDocked2020 and DUD-E benchmarks while significantly reducing pairwise-tensor memory. SiDGen bridges the gap between sequence-based efficiency and pocket-aware conditioning, offering a scalable path for high-throughput structure-based discovery.
https://arxiv.org/abs/2511.09529
Academic Papers
svg
57be49021535e02478c936f4aa7a084aeb3ea4633fe66959aa2474ee5e3ad82a
2026-02-02T00:00:00-05:00
Optimal Fairness under Local Differential Privacy
arXiv:2511.16377v2 Announce Type: replace Abstract: We investigate how to optimally design local differential privacy (LDP) mechanisms that reduce data unfairness and thereby improve fairness in downstream classification. We first derive a closed-form optimal mechanism for binary sensitive attributes and then develop a tractable optimization framework that yields the corresponding optimal mechanism for multi-valued attributes. As a theoretical contribution, we establish that for discrimination-accuracy optimal classifiers, reducing data unfairness necessarily leads to lower classification unfairness, thus providing a direct link between privacy-aware pre-processing and classification fairness. Empirically, we demonstrate that our approach consistently outperforms existing LDP mechanisms in reducing data unfairness across diverse datasets and fairness metrics, while maintaining accuracy close to that of non-private models. Moreover, compared with leading pre-processing and post-processing fairness methods, our mechanism achieves a more favorable accuracy-fairness trade-off while simultaneously preserving the privacy of sensitive attributes. Taken together, these results highlight LDP as a principled and effective pre-processing fairness intervention technique.
https://arxiv.org/abs/2511.16377
Academic Papers
svg
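The LDP fairness abstract above builds on local differential privacy for sensitive attributes; the classical building block for locally private binary attributes is randomized response. A minimal sketch for context (not the paper's optimized mechanism; the function name and interface are illustrative):

```python
import math
import random

def randomized_response(bit, epsilon, rng=random.random):
    """Report a binary attribute under epsilon-local differential privacy.

    The true bit is kept with probability e^eps / (1 + e^eps) and
    flipped otherwise; the ratio of report probabilities for the two
    inputs is bounded by e^eps, which is the epsilon-LDP guarantee.
    """
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng() < p_true else 1 - bit
```

The paper's contribution is choosing the mechanism's flip probabilities (and their multi-valued generalization) to also minimize data unfairness, rather than using the symmetric rates above.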
7e218333f63972021b89cf2bd526804db82747a43255fdaaa2a474b4304acd34
2026-02-02T00:00:00-05:00
Geometric-disentanglement Unlearning
arXiv:2511.17100v3 Announce Type: replace Abstract: Large language models (LLMs) can internalize private or harmful content, motivating unlearning that removes a forget set while preserving retaining knowledge. However, forgetting updates often cause collateral degradation on retaining knowledge, creating a persistent trade-off. Existing LLM unlearning methods are often heuristic, and other theoretical approaches rely on offline feature constructions that do not capture update-time forget-retain interaction in LLMs. To address this limitation, we aim to develop an LLM unlearning method that reduces the forget-retain trade-off with theoretical guarantees. We take a first-principles view by formalizing "no side effects" as local retain invariance under small parameter updates, and prove an equivalence under optimizer-induced geometry: the retain loss is locally invariant if and only if the update direction is orthogonal to the subspace spanned by retain gradients. Based on this insight, we propose Geometric-disentanglement Unlearning (GU), a lightweight and theoretically grounded projection that can be applied plug-and-play to existing gradient-based unlearning methods to mitigate forget-retain side effects. Experiments on TOFU, MUSE, and WMDP-cyber show that GU strengthens forgetting while reducing retain drift. When added to SimNPO, it achieves up to 62\% improved forgetting Extraction Strength (ES) and 31\% higher retain ES. Our code is open-sourced at https://github.com/Lemutisme/Geometric-Unlearning.
https://arxiv.org/abs/2511.17100
Academic Papers
svg
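The orthogonality condition in the GU abstract (the update direction must be orthogonal to the span of retain gradients) corresponds to a standard linear-algebra projection. A minimal numpy sketch under the assumption of flattened gradient vectors (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def orthogonal_update(forget_grad, retain_grads):
    """Project the forget-set gradient onto the orthogonal
    complement of the subspace spanned by retain gradients."""
    # Stack retain gradients as columns and build an orthonormal
    # basis for their span via a reduced QR decomposition.
    R = np.stack(retain_grads, axis=1)   # shape (d, k)
    Q, _ = np.linalg.qr(R)               # Q has orthonormal columns
    # Subtract the component of the forget gradient inside that span;
    # the result has zero inner product with every retain gradient.
    return forget_grad - Q @ (Q.T @ forget_grad)
```

Stepping along the returned direction leaves the retain loss locally unchanged to first order, which is the "no side effects" invariance the abstract formalizes.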
320a774c87b03eaef8b0657de4cc16073456b4c777722c4d4891aca511d9fd3e
2026-02-02T00:00:00-05:00
RoboArmGS: High-Quality Robotic Arm Splatting via B\'ezier Curve Refinement
arXiv:2511.17961v2 Announce Type: replace Abstract: Constructing photorealistic and controllable robotic arm digital assets from real observations is fundamental to robotic applications. Current approaches naively bind static 3D Gaussians according to URDF links, forcing them to follow a URDF-rigged motion passively. However, the idealized URDF-rigged motion cannot accurately model the actual motion captured in real-world observations, leading to severe rendering artifacts in 3D Gaussians. To address these challenges, we propose RoboArmGS, a novel hybrid representation that refines the URDF-rigged motion with learnable B\'ezier curves, enabling more accurate real-world motion modeling. To be more specific, we present a learnable B\'ezier Curve motion refiner that corrects per-joint residuals to address mismatches between real-world motion and URDF-rigged motion. RoboArmGS enables the learning of more accurate real-world motion while achieving a coherent binding of 3D Gaussians across arm parts. To support future research, we contribute a carefully collected dataset named RoboArm4D, which comprises several widely used robotic arms for evaluating the construction of high-quality digital assets. We evaluate our approach on RoboArm4D, and RoboArmGS achieves state-of-the-art performance in real-world motion modeling and rendering quality. The code and dataset will be released.
https://arxiv.org/abs/2511.17961
Academic Papers
svg
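RoboArmGS models per-joint residuals with learnable Bézier curves; the curve evaluation underneath such a refiner is the standard De Casteljau recursion, sketched here in numpy (the learnable control points and residual-correction machinery are not reproduced):

```python
import numpy as np

def bezier(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via the
    De Casteljau algorithm: repeated linear interpolation between
    consecutive control points until one point remains."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]
```

In a refiner of this kind, the control points would be the learned parameters and `t` a normalized time or joint-phase variable; the curve output is added as a residual to the URDF-rigged joint trajectory.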
ef55d30d85bc32e8c88427274f56b3bf9eb051163f40ffbbd4d0e3c81f33ded2
2026-02-02T00:00:00-05:00
What Helps Language Models Predict Human Beliefs: Demographics or Prior Stances?
arXiv:2511.18616v2 Announce Type: replace Abstract: Beliefs shape how people reason, communicate, and behave. Rather than existing in isolation, they exhibit a rich correlational structure--some connected through logical dependencies, others through indirect associations or social processes. As usage of large language models (LLMs) becomes more ubiquitous in our society, LLMs' ability to understand and reason through human beliefs has many implications from privacy issues to personalized persuasion and the potential for stereotyping. Yet how LLMs capture this interrelated landscape of beliefs remains unclear. For instance, when predicting someone's beliefs, what information affects the prediction most--who they are (demographics), what else they believe (prior stances), or a combination of both? We address these questions using data from an online debate platform, evaluating the ability of off-the-shelf open-weight LLMs to predict individuals' stance under four conditions: no context, demographics only, prior beliefs only, and both combined. We find that both types of information improve predictions over a blind baseline, with their combination yielding the best performance in most cases. However, the relative value of each varies substantially across belief domains. These findings reveal how current LLMs leverage different types of social information when reasoning about human beliefs, highlighting both their capabilities and limitations.
https://arxiv.org/abs/2511.18616
Academic Papers
svg
cfd075dc7ed37d84ed9d37f1cb1af337b4d31beb0523d3c3c21a94baae91ade7
2026-02-02T00:00:00-05:00
SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space
arXiv:2511.20102v2 Announce Type: replace Abstract: Sparse attention reduces the quadratic complexity of full self-attention but faces two challenges: (1) an attention gap, where applying sparse attention to full-attention-trained models causes performance degradation due to train-inference distribution mismatch, and (2) a capability gap, where models trained purely with sparse attention lack complete gradient flow, preventing them from matching full-attention performance. We propose SSA (Sparse Sparse Attention), a training framework that integrates both sparse and full attention with bidirectional attention-output alignment. We prove that the approximation error scales linearly with the attention mass dropped under sparse attention, and show that SSA's alignment objective substantially reduces this quantity compared to baselines. Experiments demonstrate that SSA achieves state-of-the-art performance under both inference modes, adapts smoothly to varying sparsity budgets, and shows superior long-context capabilities. The code is available at https://github.com/zhenyi4/ssa.
https://arxiv.org/abs/2511.20102
Academic Papers
svg
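SSA's error bound is stated in terms of the attention mass dropped by sparsification. A minimal single-query top-k sparse-attention sketch that also reports that dropped mass (numpy; purely illustrative, not the paper's implementation):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax; -inf entries get zero probability.
    e = np.exp(x - x.max())
    return e / e.sum()

def topk_sparse_attention(q, K, V, k):
    """Single-query top-k sparse attention (k >= 1).

    Returns the sparse attention output and the full-attention
    probability mass carried by the dropped keys, the quantity
    SSA's error bound is expressed in.
    """
    scores = (K @ q) / np.sqrt(len(q))
    full = softmax(scores)
    dropped = np.argsort(scores)[:-k]    # indices of the n-k smallest scores
    masked = scores.copy()
    masked[dropped] = -np.inf            # exclude dropped keys from the softmax
    return softmax(masked) @ V, full[dropped].sum()
```

With `k` equal to the number of keys nothing is dropped and the output matches full attention; shrinking `k` trades output fidelity for sparsity, with the reported mass tracking the bound's controlling term.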
ab37ccc094f6cef1f96609e721356a2a724ecb3be43bcc189223defe79434225
2026-02-02T00:00:00-05:00
HAFO: A Force-Adaptive Control Framework for Humanoid Robots in Intense Interaction Environments
arXiv:2511.20275v4 Announce Type: replace Abstract: Reinforcement learning (RL) controllers have made impressive progress in humanoid locomotion and light-weight object manipulation. However, achieving robust and precise motion control with intense force interaction remains a significant challenge. To address these limitations, this paper proposes HAFO, a dual-agent reinforcement learning framework that concurrently optimizes both a robust locomotion strategy and a precise upper-body manipulation strategy via coupled training. We employ a constrained residual action space to improve dual-agent training stability and sample efficiency. The external tension disturbances are explicitly modeled using a spring-damper system, allowing for fine-grained force control through manipulation of the virtual spring. In this process, the reinforcement learning policy autonomously generates a disturbance-rejection response by utilizing environmental feedback. The experimental results demonstrate that HAFO achieves whole-body control for humanoid robots across diverse force-interaction environments using a single dual-agent policy, delivering outstanding performance under load-bearing and thrust-disturbance conditions, while maintaining stable operation even in a rope-suspension state.
https://arxiv.org/abs/2511.20275
Academic Papers
svg
42b31fa9412e27395a1f598556040f71d180d583ab26d86692e2b3196332dc5b
2026-02-02T00:00:00-05:00
Readout-Side Bypass for Residual Hybrid Quantum-Classical Models
arXiv:2511.20922v3 Announce Type: replace Abstract: Quantum machine learning (QML) promises compact and expressive representations, but suffers from the measurement bottleneck - a narrow quantum-to-classical readout that limits performance and amplifies privacy risk. We propose a lightweight residual hybrid architecture that concatenates quantum features with raw inputs before classification, bypassing the bottleneck without increasing quantum complexity. Experiments show our model outperforms pure quantum and prior hybrid models in both centralized and federated settings. It achieves up to +55% accuracy improvement over quantum baselines, while retaining low communication cost and enhanced privacy robustness. Ablation studies confirm the effectiveness of the residual connection at the quantum-classical interface. Our method offers a practical, near-term pathway for integrating quantum models into privacy-sensitive, resource-constrained settings like federated edge learning.
https://arxiv.org/abs/2511.20922
Academic Papers
svg
bd6a1152e33f1775f25a5813630bf0038dac4a322a81cdcab96e03f7e49a8bd7
2026-02-02T00:00:00-05:00
Robust gene prioritization for Dietary Restriction via Fast-mRMR Feature Selection techniques
arXiv:2511.21211v2 Announce Type: replace Abstract: Gene prioritization (identifying genes potentially associated with a biological process) is increasingly tackled with Artificial Intelligence. However, existing methods struggle with the high dimensionality and incomplete labelling of biomedical data. This work proposes a more robust and efficient pipeline that leverages Fast-mRMR Feature Selection to retain only relevant, non-redundant features for classifiers, building simpler, more interpretable and more efficient models. Experiments in our domain of interest, prioritizing genes related to Dietary Restriction (DR), show significant improvements over existing methods and enable us to integrate heterogeneous biological feature sets for better performance, a strategy that previously degraded performance due to noise accumulation. This work focuses on DR given the availability of curated data and expert knowledge for validation, yet the pipeline is applicable to other biological processes, showing that feature selection is critical for reliable gene prioritization in high-dimensional omics.
https://arxiv.org/abs/2511.21211
Academic Papers
svg
b110be2792079cd34c7ca5e30bef1a50e3fcca68b08b18ae64e331ca44fb367c
2026-02-02T00:00:00-05:00
TALES: A Taxonomy and Analysis of Cultural Representations in LLM-generated Stories
arXiv:2511.21322v2 Announce Type: replace Abstract: Millions of users across the globe turn to AI chatbots for their creative needs, inviting widespread interest in understanding how they represent diverse cultures. However, evaluating cultural representations in open-ended tasks remains challenging and underexplored. In this work, we present TALES, an evaluation of cultural misrepresentations in LLM-generated stories for diverse Indian cultural identities. First, we develop TALES-Tax, a taxonomy of cultural misrepresentations by collating insights from participants with lived experiences in India through focus groups (N=9) and individual surveys (N=15). Using TALES-Tax, we evaluate 6 models through a large-scale annotation study spanning 2925 annotations from 108 annotators with lived experience and native language proficiency from across 71 regions in India and 14 languages. Concerningly, we find that 88% of the generated stories contain misrepresentations, and such errors are more prevalent in mid- and low-resourced languages and stories based in peri-urban regions in India. We also transform the annotations into TALES-QA, a standalone question bank to evaluate the cultural knowledge of models.
https://arxiv.org/abs/2511.21322
Academic Papers
svg
9ddeb71bdd376dc21cb28f05b074f81cd30aa71165f1c7879d2bcbc40ae5d8d4
2026-02-02T00:00:00-05:00
Age Optimal Sampling and Routing under Intermittent Links and Energy Constraints
arXiv:2512.00985v2 Announce Type: replace Abstract: Links in practical systems, such as satellite--terrestrial integrated networks, exhibit distinct delay distributions, intermittent availability, and heterogeneous energy costs. These characteristics pose significant challenges to maintaining timely and energy-efficient status updates. While link availability restricts feasible transmission routes, routing decisions determine the actual delay and energy expenditure. This paper tackles these challenges by jointly optimizing sampling and routing decisions to minimize monotonic, non-linear Age of Information (AoI). The proposed formulation incorporates key system features, including multiple routes with correlated random delays, stochastic link availability, and route-dependent energy consumption. We model the problem as an infinite-horizon Constrained Semi-Markov Decision Process (CSMDP) with a hybrid state--action space and develop an efficient nested algorithm, termed Bisec-\textsc{ReaVI}, to solve this problem. We analyze the structural properties of the solution and reveal a well-defined jointly optimal policy structure: (i) For general monotonic penalty functions, the optimal sampling policy is a piecewise linear waiting policy with at most $N$ breakpoints given $N$ routes; and (ii) under a derived Expected Penalty Ordering condition, the optimal routing policy is a monotonic threshold-based handover policy characterized by at most $\binom{N}{2}$ thresholds. Numerical experiments in a \textit{satellite--terrestrial} integrated routing scenario demonstrate that the proposed scheme efficiently balances energy usage and information freshness, and reveal a counter-intuitive insight: \textit{even routes with higher average delay, higher delay variance or lower availability can still play a critical role in minimizing monotonic functions of AoI}.
https://arxiv.org/abs/2512.00985
Academic Papers
svg
3c3d93c8ee86e42d7b48b631adaec6c217f1f3e34feda7f3c3e34340ebbc15d4
2026-02-02T00:00:00-05:00
Beyond Retrieval: A Modular Benchmark for Academic Deep Research Agents
arXiv:2512.00986v2 Announce Type: replace Abstract: A surge in academic publications calls for automated deep research (DR) systems, but accurately evaluating them is still an open problem. First, existing benchmarks often focus narrowly on retrieval while neglecting high-level planning and reasoning. Second, existing benchmarks favor general domains over the academic domains that are the core application for DR agents. To address these gaps, we introduce ADRA-Bank, a modular benchmark for Academic DR Agents. Grounded in academic literature, our benchmark is a human-annotated dataset of 200 instances across 10 academic domains, including both research and review papers. Furthermore, we propose a modular Evaluation Paradigm for Academic DR Agents (ADRA-Eval), which leverages the rich structure of academic papers to assess the core capabilities of planning, retrieval, and reasoning. It employs two complementary modes: an end-to-end evaluation for DR agents and an isolated evaluation for foundational LLMs as potential backbones. Results reveal uneven capabilities: while agents show specialized strengths, they struggle with multi-source retrieval and cross-field consistency. Moreover, improving high-level planning capability is the crucial factor for unlocking the reasoning potential of foundational LLMs as backbones. By exposing these actionable failure modes, ADRA-Bank provides a diagnostic tool to guide the development of more reliable automatic academic research assistants.
https://arxiv.org/abs/2512.00986
Academic Papers
svg
49ef5655ee6a80ce4a8e477075be623550f675545dc8ba1d3c4326d618bebe53
2026-02-02T00:00:00-05:00
ChartAnchor: Chart Grounding with Structural-Semantic Fidelity
arXiv:2512.01017v3 Announce Type: replace Abstract: Recent advances in multimodal large language models (MLLMs) highlight the need for benchmarks that rigorously evaluate structured chart comprehension. Chart grounding refers to the bidirectional alignment between a chart's visual appearance and its structured semantics. This task requires models to produce a symbolic specification that faithfully captures the chart's visual and structural intent, while also recovering the underlying tabular data with precise values and relationships. Chart grounding directly reflects a model's capabilities in numerical reasoning, multimodal alignment, and structural reconstruction, and has several important real-world applications. Existing benchmarks, constrained by narrow chart diversity, isolated tasks, and incomplete evaluation frameworks, fail to holistically assess grounding. To address this, we propose ChartAnchor, a comprehensive benchmark of 8k+ chart-table-code triples spanning 30 chart types drawn from diverse real-world and augmented sources. ChartAnchor introduces two complementary tasks: chart-to-code generation and controlled chart-to-table reconstruction, enabling cross-validation of visual and numerical fidelity. A multi-level evaluation framework integrates semantic validation, stylistic analysis, and perceptual metrics to assess both structural and content-level correctness. Extensive experiments on MLLMs reveal critical limitations in numerical precision and code synthesis, emphasizing the need for structured reasoning beyond surface-level perception. By unifying symbolic and data-driven grounding, ChartAnchor establishes a rigorous foundation for chart grounding, offering meaningful insights for advancing MLLMs in scientific, financial, and industrial domains.
https://arxiv.org/abs/2512.01017
Academic Papers
svg
7a9a0a606503d6e3e9393f3559b9e8f3ba5cdf8d7c9d36b2f1970b2c6882d805
2026-02-02T00:00:00-05:00
Structured Spectral Reasoning for Frequency-Adaptive Multimodal Recommendation
arXiv:2512.01372v3 Announce Type: replace Abstract: Multimodal recommendation aims to integrate collaborative signals with heterogeneous content such as visual and textual information, but remains challenged by modality-specific noise, semantic inconsistency, and unstable propagation over user-item graphs. These issues are often exacerbated by naive fusion or shallow modeling strategies, leading to degraded generalization and poor robustness. While recent work has explored the frequency domain as a lens to separate stable from noisy signals, most methods rely on static filtering or reweighting, lacking the ability to reason over spectral structure or adapt to modality-specific reliability. To address these challenges, we propose a Structured Spectral Reasoning (SSR) framework for frequency-aware multimodal recommendation. Our method follows a four-stage pipeline: (i) Decompose graph-based multimodal signals into spectral bands via graph-guided transformations to isolate semantic granularity; (ii) Modulate band-level reliability with spectral band masking, a training-time masking with a prediction-consistency objective that suppresses brittle frequency components; (iii) Fuse complementary frequency cues using hyperspectral reasoning with low-rank cross-band interaction; and (iv) Align modality-specific spectral features via contrastive regularization to promote semantic and structural consistency. Experiments on three real-world benchmarks show consistent gains over strong baselines, particularly under sparse and cold-start settings. Additional analyses indicate that structured spectral modeling improves robustness and provides clearer diagnostics of how different bands contribute to performance.
https://arxiv.org/abs/2512.01372
Academic Papers
svg
69cc86dbf8006a9568635dd457f10009af0fe89cb2f36b9af780c918b34d701b
2026-02-02T00:00:00-05:00
Tuning-Free Structured Sparse Recovery of Multiple Measurement Vectors using Implicit Regularization
arXiv:2512.03393v2 Announce Type: replace Abstract: Recovering jointly sparse signals in the multiple measurement vectors (MMV) setting is a fundamental problem in machine learning, but traditional methods often require careful parameter tuning or prior knowledge of the sparsity of the signal and/or noise variance. We propose a tuning-free framework that leverages implicit regularization (IR) from overparameterization to overcome this limitation. Our approach reparameterizes the estimation matrix into factors that decouple the shared row-support from individual vector entries and applies gradient descent to a standard least-squares objective. We prove that with a sufficiently small and balanced initialization, the optimization dynamics exhibit a "momentum-like" effect where the true support grows significantly faster. Leveraging a Lyapunov-based analysis of the gradient flow, we further establish formal guarantees that the solution trajectory converges towards an idealized row-sparse solution. Empirical results demonstrate that our tuning-free approach achieves performance comparable to optimally tuned established methods. Furthermore, our framework significantly outperforms these baselines in scenarios where accurate priors are unavailable to the baselines.
https://arxiv.org/abs/2512.03393
Academic Papers
svg
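The overparameterization idea above can be illustrated with a toy MMV instance: factor the estimate as a shared per-row scale times per-vector entries, initialize small, and run plain gradient descent on the unregularized least-squares loss. The factorization form below is a sketch of the decoupling idea, not necessarily the paper's exact parameterization.

```python
import numpy as np

# Toy MMV setup: L signals sharing one row-support, observed through A.
rng = np.random.default_rng(0)
m, d, L, s = 40, 80, 5, 4              # measurements, dim, vectors, sparsity
A = rng.normal(size=(m, d)) / np.sqrt(m)
X_true = np.zeros((d, L))
X_true[:s] = rng.normal(size=(s, L))
Y = A @ X_true

# Illustrative factorization X = (g**2)[:, None] * V: the shared scale g
# encodes the row-support, V holds individual vector entries.
alpha, lr = 0.01, 0.05                 # small balanced init, step size
g = alpha * np.ones(d)
V = alpha * rng.normal(size=(d, L))

def loss(g, V):
    return 0.5 * np.sum((A @ ((g**2)[:, None] * V) - Y) ** 2)

l0 = loss(g, V)
for _ in range(2000):
    G = A.T @ (A @ ((g**2)[:, None] * V) - Y)    # gradient w.r.t. X
    g = g - lr * 2 * g * np.sum(G * V, axis=1)   # chain rule through g**2
    V = V - lr * (g**2)[:, None] * G
print(loss(g, V) < l0)  # gradient descent on the factors reduces the loss
```

No sparsity penalty or tuning parameter appears anywhere; any sparsity in the solution comes from the implicit bias of the small-initialization dynamics.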
6c0aa1580c638413ff18c8c6e73f601e970747b3ce85e4f6f13690fe54611a91
2026-02-02T00:00:00-05:00
An Automated Framework for Large-Scale Graph-Based Cerebrovascular Analysis
arXiv:2512.03869v3 Announce Type: replace Abstract: We present CaravelMetrics, a computational framework for automated cerebrovascular analysis that models vessel morphology through skeletonization-derived graph representations. The framework integrates atlas-based regional parcellation, centerline extraction, and graph construction to compute fifteen morphometric, topological, fractal, and geometric features. The features can be estimated globally from the complete vascular network or regionally within arterial territories, enabling multiscale characterization of cerebrovascular organization. Applied to 570 3D TOF-MRA scans from the IXI dataset (ages 20-86), CaravelMetrics yields reproducible vessel graphs capturing age- and sex-related variations and education-associated increases in vascular complexity, consistent with findings reported in the literature. The framework provides a scalable and fully automated approach for quantitative cerebrovascular feature extraction, supporting normative modeling and population-level studies of vascular health and aging.
https://arxiv.org/abs/2512.03869
Academic Papers
svg
aaa191f3c86cb1758ade0c514ae33d8cdcca24d2a6722bea7184e68604386c7a
2026-02-02T00:00:00-05:00
EtCon: Edit-then-Consolidate for Reliable Knowledge Editing
arXiv:2512.04753v2 Announce Type: replace Abstract: Knowledge editing aims to update specific facts in large language models (LLMs) without full retraining. Prior efforts sought to tune the knowledge layers of LLMs, achieving improved performance in controlled, teacher-forced evaluations. However, they still encounter challenges in real-world autoregressive generation scenarios, which greatly limit their practical applicability. Our empirical analysis reveals two issues: (1) Most methods degrade pre-trained capabilities after injecting new knowledge; (2) They may exhibit a discrepancy between stored parametric knowledge and inference-time autoregressive generation behavior. To this end, we propose EtCon, an edit-then-consolidate paradigm that couples targeted edits with post-edit consolidation. Specifically, our framework comprises two stages: (1) Targeted Proximal Supervised Fine-Tuning (TPSFT) performs a constrained targeted edit to update parametric knowledge while controlling policy drift. (2) Group Relative Policy Optimization (GRPO) consolidates the edit by aligning autoregressive trajectories with the intended fact. Extensive experiments demonstrate that our EtCon improves editing reliability and real-world generalization, while better preserving pre-trained capabilities.
https://arxiv.org/abs/2512.04753
Academic Papers
svg
04f43e573718d8ab7c7300a55cd5e1c926a94dd2762783aa69defe15d524ac1d
2026-02-02T00:00:00-05:00
SHAP-Guided Kernel Actor-Critic for Explainable Reinforcement Learning
arXiv:2512.05291v2 Announce Type: replace Abstract: Actor-critic (AC) methods are a cornerstone of reinforcement learning (RL) but offer limited interpretability. Current explainable RL methods seldom use state attributions to assist training. Rather, they treat all state features equally, thereby neglecting the heterogeneous impacts of individual state dimensions on the reward. We propose RKHS-SHAP-based Advanced Actor-Critic (RSA2C), an attribution-aware, kernelized, two-timescale AC algorithm, including Actor, Value Critic, and Advantage Critic. The Actor is instantiated in a vector-valued reproducing kernel Hilbert space (RKHS) with a Mahalanobis-weighted operator-valued kernel, while the Value Critic and Advantage Critic reside in scalar RKHSs. These RKHS-enhanced components use sparsified dictionaries: the Value Critic maintains its own dictionary, while the Actor and Advantage Critic share one. State attributions, computed from the Value Critic via RKHS-SHAP (kernel mean embedding for on-manifold and conditional mean embedding for off-manifold expectations), are converted into Mahalanobis-gated weights that modulate Actor gradients and Advantage Critic targets. We derive a global, non-asymptotic convergence bound under state perturbations, showing stability through the perturbation-error term and efficiency through the convergence-error term. Empirical results on three continuous-control environments show that RSA2C achieves efficiency, stability, and interpretability.
https://arxiv.org/abs/2512.05291
Academic Papers
svg
627ac8534c4c4f6a98223e0ad77cfa841c40be846a40755c58eb4ea9389537ae
2026-02-02T00:00:00-05:00
Mitigating the Safety Alignment Tax with Null-Space Constrained Policy Optimization
arXiv:2512.11391v2 Announce Type: replace Abstract: As Large Language Models (LLMs) are increasingly deployed in real-world applications, it is important to ensure their behaviors align with human values, societal norms, and ethical principles. However, safety alignment under Reinforcement Learning (RL) often suffers from forgetting learned general abilities, which is also known as the alignment tax. To address this issue, we introduce Null-Space constrained Policy Optimization (NSPO), a novel RL framework that aligns LLMs for safety while preserving their core abilities. The safety policy gradients are geometrically projected into the null space of general tasks, thereby mitigating the safety alignment tax. In addition, we theoretically prove that NSPO preserves the model's original core capabilities, while still guaranteeing a descent direction for effective safety alignment. Extensive experiments demonstrate that NSPO outperforms existing methods by a large margin, achieving state-of-the-art safety performance without sacrificing accuracy on general tasks, including math, code, and instruction-following tasks. Notably, NSPO is data-efficient, requiring only 40% of the public human-annotated safety data from PKU-SafeRLHF to achieve promising safety performance, without the large amounts of mixed general-task data that existing alignment methods require.
https://arxiv.org/abs/2512.11391
Academic Papers
svg
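The null-space projection described above has a standard linear-algebra form: remove from the safety gradient its component in the subspace spanned by general-task gradients. A minimal numpy sketch (an illustration of the geometric idea, not the paper's implementation):

```python
import numpy as np

def null_space_project(g_safety, task_grads):
    """Project a safety gradient into the null space of general-task
    gradients, so the update is (to first order) orthogonal to every
    direction that would change general-task performance.

    g_safety:   (d,) safety-alignment gradient
    task_grads: (k, d) rows spanning the general-task gradient subspace
    """
    G = np.atleast_2d(task_grads)
    Q, _ = np.linalg.qr(G.T)               # orthonormal basis of row space
    return g_safety - Q @ (Q.T @ g_safety) # strip the in-subspace component

# Toy check: the projected gradient is orthogonal to each task gradient.
rng = np.random.default_rng(0)
task_grads = rng.normal(size=(3, 8))
g = rng.normal(size=8)
g_proj = null_space_project(g, task_grads)
print(np.abs(task_grads @ g_proj).max())   # ~0 up to float rounding
```

Because the projection only removes components, the result keeps a nonnegative inner product with the original gradient, which is the intuition behind the descent-direction guarantee mentioned in the abstract.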
13972954ba02658a2479adce7ee2e531982b29335a794a1dd70280993b662a93
2026-02-02T00:00:00-05:00
Bounding Hallucinations: Information-Theoretic Guarantees for RAG Systems via Merlin-Arthur Protocols
arXiv:2512.11614v2 Announce Type: replace Abstract: Retrieval-augmented generation (RAG) relies on retrieved context to guide large language models (LLMs), yet treats retrieval as a weak heuristic rather than verifiable evidence -- leading to unsupported answers, hallucinations, and reliance on spurious context. We introduce a novel training framework that treats the RAG pipeline as an interactive proof system by adapting the Merlin-Arthur (M/A) protocol: Arthur (the generator LLM) trains on questions with unknown context provenance and Merlin gives helpful evidence, while Morgana injects adversarial, misleading context. Both use an XAI method to identify and modify evidence most influential to Arthur. This trains Arthur to (1) answer when evidence supports the answer, (2) reject when evidence is insufficient, and (3) rely on the context spans that truly ground the answer. We further introduce a verification framework that disentangles explanation fidelity from model predictive errors, and introduce the Explained Information Fraction (EIF), which normalizes M/A mutual-information guarantees. Across three RAG datasets and multiple LLM families and sizes, M/A training makes LLMs more grounded in evidence, increases information-theoretic measures (soundness, completeness) and reject behavior with fewer hallucinations, without manually annotated unanswerable samples. Finally, the retriever also improves recall and MRR via automatically generated M/A hard positives and negatives. While high accuracy does not guarantee entropy flow from context to answer, our EIF results show that autonomous interactive-proof-style supervision enables RAG systems that treat retrieved documents as verifiable evidence.
https://arxiv.org/abs/2512.11614
Academic Papers
svg
5bfe11efc542be0a560a4addcd59eebd95bf613051e8bcbaa1baeb647d56dda6
2026-02-02T00:00:00-05:00
Video Deepfake Abuse: How Company Choices Predictably Shape Misuse Patterns
arXiv:2512.11815v2 Announce Type: replace Abstract: In 2022, AI image generators crossed a key threshold, enabling much more efficient and dynamic production of photorealistic deepfake images than before. This enabled opportunities for creative and positive uses of these models. However, it also enabled unprecedented opportunities for the low-effort creation of AI-generated non-consensual intimate imagery (AIG-NCII), including AI-generated child sexual abuse material (AIG-CSAM). Empirically, these harms were principally enabled by a small number of models that were trained on web data with pornographic content, released with open weights, and insufficiently safeguarded. In this paper, we observe ways in which the same patterns are emerging with video generation models in 2025. Specifically, we analyze how a small number of open-weight AI video generation models have become the dominant tools for videorealistic AIG-NCII video generation. We then analyze the literature on model safeguards and conclude that (1) developers who openly release the weights of capable video generation models without appropriate data curation and/or post-training safeguards foreseeably contribute to mitigatable downstream harm, and (2) model distribution platforms that do not proactively moderate individual misuse or models designed for AIG-NCII foreseeably amplify this harm. While there are no perfect defenses against AIG-NCII and AIG-CSAM from open-weight AI models, we argue that risk management by model developers and distributors, informed by emerging safeguard techniques, will substantially affect the future ease of creating AIG-NCII and AIG-CSAM with generative AI video tools.
https://arxiv.org/abs/2512.11815
Academic Papers
svg
29ec692da57fe9908c2dfaee023472616cc0a7b38513ecadf9217001b0776297
2026-02-02T00:00:00-05:00
ReGlove: A Soft Pneumatic Glove for Activities of Daily Living Assistance via Wrist-Mounted Vision
arXiv:2512.11824v2 Announce Type: replace Abstract: This paper presents ReGlove, a system that converts low-cost commercial pneumatic rehabilitation gloves into vision-guided assistive orthoses. Chronic upper-limb impairment affects millions worldwide, yet existing assistive technologies remain prohibitively expensive or rely on unreliable biological signals. Our platform integrates a wrist-mounted camera with an edge-computing inference engine (Raspberry Pi 5) to enable context-aware grasping without requiring reliable muscle signals. By adapting real-time YOLO-based computer vision models, the system achieves 96.73% grasp classification accuracy with sub-40.00 millisecond end-to-end latency. Physical validation using standardized benchmarks shows 82.71% success on YCB object manipulation and reliable performance across 27 Activities of Daily Living (ADL) tasks. With a total cost under $250 and exclusively commercial components, ReGlove provides a technical foundation for accessible, vision-based upper-limb assistance that could benefit populations excluded from traditional EMG-controlled devices.
https://arxiv.org/abs/2512.11824
Academic Papers
svg
191f1f3b11e6aa1583f65c9a14525400f7044e696317a86c9d0c2b30875bef9e
2026-02-02T00:00:00-05:00
From Tokens to Photons: Test-Time Physical Prompting for Vision-Language Models
arXiv:2512.12571v2 Announce Type: replace Abstract: To extend the application of vision-language models (VLMs) from web images to sensor-mediated physical environments, we propose Multi-View Physical-prompt for Test-Time Adaptation (MVP), a forward-only framework that moves test-time adaptation (TTA) from tokens to photons by treating the camera exposure triangle--ISO, shutter speed, and aperture--as physical prompts. At inference, MVP acquires a library of physical views per scene, selects the top-k sensor settings using a source-affinity score, evaluates each retained view under lightweight digital augmentations, filters the lowest-entropy subset of augmented views, and aggregates predictions with Zero-temperature softmax (i.e., hard voting). This selection-then-vote design is simple, calibration-friendly, and requires no gradients or model modifications. On ImageNet-ES and ImageNet-ES-Diverse, MVP consistently outperforms digital-only TTA on single Auto-Exposure captures, by up to 25.6 percentage points (pp), and delivers up to 3.4 pp additional gains over pipelines that combine conventional sensor control with TTA. MVP remains effective under reduced parameter candidate sets that lower capture latency, demonstrating practicality. These results support the main claim that, beyond post-capture prompting, measurement-time control--selecting and combining real physical views--substantially improves robustness for VLMs.
https://arxiv.org/abs/2512.12571
Academic Papers
svg
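The selection-then-vote step of MVP can be sketched compactly: score each view's predictive distribution by entropy, keep the most confident subset, and aggregate their argmax predictions by hard voting. The source-affinity pre-selection of sensor settings is omitted here; this is an assumption-laden simplification of the pipeline.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def select_then_vote(view_probs, keep=3):
    """Keep the `keep` lowest-entropy views, then hard-vote their argmax
    predictions -- the zero-temperature aggregation described above.

    view_probs: (n_views, n_classes) softmax outputs, one per view.
    """
    H = entropy(view_probs)
    kept = np.argsort(H)[:keep]                    # most confident views
    votes = view_probs[kept].argmax(axis=1)
    return np.bincount(votes, minlength=view_probs.shape[1]).argmax()

# Three confident views agree on class 2; the noisy view is filtered out.
probs = np.array([
    [0.05, 0.05, 0.90],
    [0.10, 0.10, 0.80],
    [0.02, 0.08, 0.90],
    [0.34, 0.33, 0.33],   # high-entropy view: dropped by selection
])
print(select_then_vote(probs, keep=3))  # 2
```

The design is forward-only: no gradients, no model modification, just post-hoc filtering and voting over captured views.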
1ed73776aeb2830522638fd6537bee16f0425c0561801e50ad6ea5be2412b5b7
2026-02-02T00:00:00-05:00
Dual-Phase Federated Deep Unlearning via Weight-Aware Rollback and Reconstruction
arXiv:2512.13381v2 Announce Type: replace Abstract: Federated Unlearning (FUL) leverages client data and computing power to offer a privacy-preserving solution. However, high computational demands, complex incentive mechanisms, and disparities in client-side computing power often lead to long unlearning times and higher costs. To address these challenges, many existing methods rely on server-side knowledge distillation that solely removes the updates of the target client, overlooking the privacy embedded in the contributions of other clients, which can lead to privacy leakage. In this work, we introduce DPUL, a novel server-side unlearning method that deeply unlearns all influential weights to prevent privacy pitfalls. Our approach comprises three components: (i) identifying high-weight parameters by filtering client update magnitudes and rolling them back to ensure deep removal; (ii) leveraging a variational autoencoder (VAE) to reconstruct and eliminate low-weight parameters; and (iii) utilizing a projection-based technique to recover the model. Experimental results on four datasets demonstrate that DPUL surpasses state-of-the-art baselines, providing a 1%-5% improvement in accuracy and up to 12x reduction in time cost.
https://arxiv.org/abs/2512.13381
Academic Papers
svg
1604e747f1b033f0147174f5f5915cf186bfec07a58a63d660850171e24fbae2
2026-02-02T00:00:00-05:00
Random-Bridges as Stochastic Transports for Generative Models
arXiv:2512.14190v2 Announce Type: replace Abstract: This paper motivates the use of random-bridges -- stochastic processes conditioned to take target distributions at fixed timepoints -- in the realm of generative modelling. Herein, random-bridges can act as stochastic transports between two probability distributions when appropriately initialized, and can display either Markovian or non-Markovian, and either continuous, discontinuous or hybrid patterns depending on the driving process. We show how one can start from general probabilistic statements and then branch out into specific representations for learning and simulation algorithms in terms of information processing. Our empirical results, built on Gaussian random bridges, produce high-quality samples in significantly fewer steps compared to traditional approaches, while achieving competitive Fréchet inception distance scores. Our analysis provides evidence that the proposed framework is computationally cheap and suitable for high-speed generation tasks.
https://arxiv.org/abs/2512.14190
Academic Papers
svg
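The simplest random-bridge is the Brownian bridge: a Brownian motion conditioned to hit a target value at a fixed time. A minimal simulation using its exact Gaussian transition (a textbook example of the Markovian, continuous case; the paper's Gaussian random bridges are more general):

```python
import numpy as np

def brownian_bridge(x0, x1, n_steps, sigma=1.0, rng=None):
    """Simulate a Brownian bridge pinned at x0 (t=0) and x1 (t=1).
    The conditional mean pulls toward the endpoint and the conditional
    variance shrinks to zero at t=1, so the target is hit at the end."""
    rng = rng or np.random.default_rng()
    ts = np.linspace(0.0, 1.0, n_steps + 1)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        t, t_next = ts[i], ts[i + 1]
        dt = t_next - t
        mean = x[i] + (x1 - x[i]) * dt / (1.0 - t)
        var = sigma**2 * dt * (1.0 - t_next) / (1.0 - t)
        x[i + 1] = mean + np.sqrt(var) * rng.normal()
    return x

path = brownian_bridge(0.0, 2.0, 200, rng=np.random.default_rng(1))
print(path[0], path[-1])  # endpoints pinned (up to float rounding)
```

Initializing the endpoint with a sample from a target distribution instead of a fixed value turns the same mechanics into a stochastic transport between distributions.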
6eafb6849c240e04b7a1794701adb5fb0722a9e6a9ea3574dffd8988b074e166
2026-02-02T00:00:00-05:00
Uni-Parser Technical Report
arXiv:2512.15098v3 Announce Type: replace Abstract: This technical report introduces Uni-Parser, an industrial-grade document parsing engine tailored for scientific literature and patents, delivering high throughput, robust accuracy, and cost efficiency. Unlike pipeline-based document parsing methods, Uni-Parser employs a modular, loosely coupled multi-expert architecture that preserves fine-grained cross-modal alignments across text, equations, tables, figures, and chemical structures, while remaining easily extensible to emerging modalities. The system incorporates adaptive GPU load balancing, distributed inference, dynamic module orchestration, and configurable modes that support either holistic or modality-specific parsing. Optimized for large-scale cloud deployment, Uni-Parser achieves a processing rate of up to 20 PDF pages per second on 8 x NVIDIA RTX 4090D GPUs, enabling cost-efficient inference across billions of pages. This level of scalability facilitates a broad spectrum of downstream applications, ranging from literature retrieval and summarization to the extraction of chemical structures, reaction schemes, and bioactivity data, as well as the curation of large-scale corpora for training next-generation large language models and AI4Science models.
https://arxiv.org/abs/2512.15098
Academic Papers
svg
c0b861ea5f454eecb9b9c70fb709b7f9edf5ccd1de83d1a5623bf8301b2d636b
2026-02-02T00:00:00-05:00
RefineBridge: Generative Bridge Models Improve Financial Forecasting by Foundation Models
arXiv:2512.21572v2 Announce Type: replace Abstract: Financial time series forecasting is particularly challenging for transformer-based time series foundation models (TSFMs) due to non-stationarity, heavy-tailed distributions, and high-frequency noise present in data. Low-rank adaptation (LoRA) has become a popular parameter-efficient method for adapting pre-trained TSFMs to downstream data domains. However, it still underperforms in financial data, as it preserves the network architecture and training objective of TSFMs rather than complementing the foundation model. To further enhance TSFMs, we propose a novel refinement module, RefineBridge, built upon a tractable Schrödinger Bridge (SB) generative framework. Given the forecasts of TSFM as generative prior and the observed ground truths as targets, RefineBridge learns context-conditioned stochastic transport maps to improve TSFM predictions, iteratively approaching the ground-truth target from even a low-quality prior. Simulations on multiple financial benchmarks demonstrate that RefineBridge consistently improves the performance of state-of-the-art TSFMs across different prediction horizons.
https://arxiv.org/abs/2512.21572
Academic Papers
svg
161789b3eafdbdfd91039a4fac887e550cc9f0ef13698abe31f263288d1a0a52
2026-02-02T00:00:00-05:00
Multi-agent Adaptive Mechanism Design
arXiv:2512.21794v2 Announce Type: replace Abstract: We study a sequential mechanism design problem in which a principal seeks to elicit truthful reports from multiple rational agents while starting with no prior knowledge of agents' beliefs. We introduce Distributionally Robust Adaptive Mechanism (DRAM), a general framework combining insights from both mechanism design and online learning to jointly address truthfulness and cost-optimality. Throughout the sequential game, the mechanism estimates agents' beliefs and iteratively updates a distributionally robust linear program with shrinking ambiguity sets to reduce payments while preserving truthfulness. Our mechanism guarantees truthful reporting with high probability while achieving $\tilde{O}(\sqrt{T})$ cumulative regret, and we establish a matching lower bound showing that no truthful adaptive mechanism can asymptotically do better. The framework generalizes to plug-in estimators, supporting structured priors and delayed feedback. To our knowledge, this is the first adaptive mechanism under general settings that maintains truthfulness and achieves optimal regret when incentive constraints are unknown and must be learned.
https://arxiv.org/abs/2512.21794
Academic Papers
svg
f54cb57ee54352450de6b14555888cbcacb86134c1fdeda4153f416ec2e049d7
2026-02-02T00:00:00-05:00
SLIM-Brain: A Data- and Training-Efficient Foundation Model for fMRI Data Analysis
arXiv:2512.21881v3 Announce Type: replace Abstract: Foundation models are emerging as a powerful paradigm for fMRI analysis, but current approaches face a dual bottleneck of data- and training-efficiency. Atlas-based methods aggregate voxel signals into fixed regions of interest, reducing data dimensionality but discarding fine-grained spatial details, and requiring extremely large cohorts to train effectively as general-purpose foundation models. Atlas-free methods, on the other hand, operate directly on voxel-level information - preserving spatial fidelity but are prohibitively memory- and compute-intensive, making large-scale pre-training infeasible. We introduce SLIM-Brain (Sample-efficient, Low-memory fMRI Foundation Model for Human Brain), a new atlas-free foundation model that simultaneously improves both data- and training-efficiency. SLIM-Brain adopts a two-stage adaptive design: (i) a lightweight temporal extractor captures global context across full sequences and ranks data windows by saliency, and (ii) a 4D hierarchical encoder (Hiera-JEPA) learns fine-grained voxel-level representations only from the top-$k$ selected windows, while discarding about 70% of masked patches. Extensive experiments across seven public benchmarks show that SLIM-Brain establishes new state-of-the-art performance on diverse tasks, while requiring only 4,000 pre-training sessions and approximately 30% of the GPU memory compared with traditional voxel-level methods.
https://arxiv.org/abs/2512.21881
Academic Papers
svg
05b9af5b2e666c43970bc3927760013c2d439cb02c5b2f292e9f109d9b9e56f8
2026-02-02T00:00:00-05:00
Towards a Benchmark for Dependency Decision-Making
arXiv:2601.00205v2 Announce Type: replace Abstract: AI coding agents increasingly modify real software repositories and make dependency decisions, including adding, removing, or updating third-party packages. These choices can materially affect security posture and maintenance burden, yet repository-level evaluations largely emphasize test passing and executability without explicitly scoring whether systems (i) reuse existing dependencies, (ii) avoid unnecessary additions, or (iii) select versions that satisfy security and policy constraints. We propose DepDec-Bench, a benchmark for evaluating dependency decision-making beyond functional correctness. To ground DepDec-Bench in real-world behavior, we conduct a preliminary study of 117,062 dependency changes from agent- and human-authored pull requests across seven ecosystems. We show that coding agents frequently make dependency decisions with security consequences that remain invisible to test-focused evaluation: agents select PR-time known-vulnerable versions (2.46%) and exhibit net-negative security impact overall (net impact -98 vs. +1,316 for humans). These observations inform DepDec-Bench task families and metrics that evaluate safe version selection, reuse discipline, and restraint against dependency bloat alongside test passing.
https://arxiv.org/abs/2601.00205
Academic Papers
svg
60e3bcad086ec02ade556057d38986da566d94e147449acd7a5dedac9bf8f160
2026-02-02T00:00:00-05:00
Deep Delta Learning
arXiv:2601.00417v2 Announce Type: replace Abstract: The effectiveness of deep residual networks hinges on the identity shortcut connection. While this mechanism alleviates the vanishing-gradient problem, it also imposes a strictly additive inductive bias on feature transformations, limiting the network's ability to model complex hidden state transitions. In this paper, we introduce \textbf{Deep Delta Learning (DDL)}, which generalizes the shortcut from a fixed identity map to a learnable, state-dependent linear operator. The resulting Delta Operator is a rank-1 perturbation of the identity, $\mathbf{A}(\mathbf{X}) = \mathbf{I} - \beta(\mathbf{X})\mathbf{k}(\mathbf{X})\mathbf{k}(\mathbf{X})^\top$, parameterized by a unit direction $\mathbf{k}(\mathbf{X})$ and a scalar gate $\beta(\mathbf{X})$. We provide a spectral analysis showing that $\beta(\mathbf{X})$ continuously interpolates the shortcut between identity ($\beta=0$), orthogonal projection ($\beta=1$), and Householder reflection ($\beta=2$). Furthermore, we rewrite the residual update as a synchronized rank-1 delta write: $\beta$ scales both the removal of the current $\mathbf{k}$-component and the injection of the new $\mathbf{k}$-component. This unification enables explicit control of the shortcut spectrum along a data-dependent direction while retaining stable training behavior. Empirically, replacing Transformer residual additions with DDL improves validation loss and perplexity, as well as downstream evaluation accuracy on language modeling tasks, with larger gains in the expanded-state setting.
https://arxiv.org/abs/2601.00417
Academic Papers
svg
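The spectral claim in the abstract is easy to verify numerically: for a unit vector $\mathbf{k}$, the operator $\mathbf{I} - \beta\,\mathbf{k}\mathbf{k}^\top$ has eigenvalue $1-\beta$ along $\mathbf{k}$ and $1$ on the orthogonal complement. A small check:

```python
import numpy as np

def delta_operator(k, beta):
    """Delta Operator A = I - beta * k k^T for a unit vector k:
    a rank-1 perturbation of the identity, as in the abstract."""
    k = k / np.linalg.norm(k)
    return np.eye(k.size) - beta * np.outer(k, k)

k = np.array([3.0, 4.0, 0.0])
for beta in (0.0, 1.0, 2.0):
    eigs = np.sort(np.linalg.eigvalsh(delta_operator(k, beta)))
    print(beta, eigs)
# Spectrum is {1 - beta, 1, ..., 1}:
#   beta = 0 -> identity
#   beta = 1 -> orthogonal projection along k (eigenvalue 0)
#   beta = 2 -> Householder reflection (eigenvalue -1)
```

Varying $\beta$ thus sweeps the shortcut continuously through identity, projection, and reflection along the learned direction.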
e0ba5da409d36f12ad546916fa5a9f19f9ecfa888816f3d09a392e62e8f78739
2026-02-02T00:00:00-05:00
IRPM: Intergroup Relative Preference Modeling for Pointwise Generative Reward Models
arXiv:2601.00677v2 Announce Type: replace Abstract: Generative Reward Models (GRMs) have demonstrated strong performance in reward modeling, due to their interpretability and potential for refinement through reinforcement learning (RL). However, widely used pairwise GRMs create a computational bottleneck in reinforcement learning from human feedback (RLHF), when calibrating or aggregating preference signals over n candidates, often incurring O(n^2) pairwise judgments. To address this issue, we propose Intergroup Relative Preference Modeling (IRPM), an RL-based method that extends the Bradley--Terry preference-learning paradigm via intergroup comparisons to train pointwise GRMs from pairwise preference data. IRPM derives pointwise reward for each response by contrasting groups of chosen vs. rejected samples, enabling pointwise scores comparable across candidate sets and O(n) reward evaluation for a variable number of candidates during RL training, while preserving interpretability and scalability. Experiments show that IRPM achieves state-of-the-art performance among pointwise GRMs on RM-Bench, JudgeBench and RewardBench, and approaches the performance of leading pairwise GRMs. In addition, IRPM achieves substantial gains in post-training evaluations, demonstrating its effectiveness.
https://arxiv.org/abs/2601.00677
Academic Papers
svg
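One plausible reading of the intergroup comparison above is a Bradley-Terry loss on the margin between the mean pointwise rewards of a chosen group and a rejected group; the paper's exact objective may differ, so the sketch below is purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def intergroup_bt_loss(r_chosen, r_rejected):
    """Bradley-Terry-style loss on the margin between group-mean
    pointwise rewards (an illustrative reading of 'intergroup
    comparison'; not necessarily the paper's exact objective)."""
    margin = np.mean(r_chosen) - np.mean(r_rejected)
    return -np.log(sigmoid(margin))

# Pointwise scores let n candidates be ranked with n reward calls (O(n))
# rather than O(n^2) pairwise judgments.
r_chosen, r_rejected = np.array([1.2, 0.8]), np.array([-0.5, 0.1])
print(intergroup_bt_loss(r_chosen, r_rejected))   # small: correct ordering
print(intergroup_bt_loss(r_rejected, r_chosen))   # large: wrong ordering
```

The key property is that the trained reward is pointwise, so scores remain comparable across candidate sets without re-running pairwise judgments.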
df7d7a7b239c62e5d650b979081ba5acbb2adf862b371fd3126e6db771952827
2026-02-02T00:00:00-05:00
AnimatedLLM: Explaining LLMs with Interactive Visualizations
arXiv:2601.04213v2 Announce Type: replace Abstract: Large language models (LLMs) are becoming central to natural language processing education, yet materials showing their mechanics are sparse. We present AnimatedLLM, an interactive web application that provides step-by-step visualizations of a Transformer language model. AnimatedLLM runs entirely in the browser, using pre-computed traces of open LLMs applied on manually curated inputs. The application is available at https://animatedllm.github.io, both as a teaching aid and for self-educational purposes.
https://arxiv.org/abs/2601.04213
Academic Papers
svg
22fa5844510494010dc23f9b4844a72b94c1ffe62737795817af12b37be0df5d
2026-02-02T00:00:00-05:00
Emergent Coordination in Multi-Agent Systems via Pressure Fields and Temporal Decay
arXiv:2601.08129v3 Announce Type: replace Abstract: Current multi-agent LLM frameworks rely on explicit orchestration patterns borrowed from human organizational structures: planners delegate to executors, managers coordinate workers, and hierarchical control flow governs agent interactions. These approaches suffer from coordination overhead that scales poorly with agent count and task complexity. We propose a fundamentally different paradigm inspired by natural coordination mechanisms: agents operate locally on a shared artifact, guided only by pressure gradients derived from measurable quality signals, with temporal decay preventing premature convergence. We formalize this as optimization over a pressure landscape and prove convergence guarantees under mild conditions. Empirically, on meeting room scheduling across 1,350 trials, pressure-field coordination outperforms all baselines: 48.5% aggregate solve rate versus 12.6% for conversation-based coordination, 1.5% for hierarchical control, and 0.4% for sequential and random baselines (all pairwise comparisons p < 0.001). Temporal decay is essential: disabling it reduces solve rate by 10 percentage points. On easy problems, pressure-field achieves 86.7% solve rate. The approach maintains consistent performance from 1 to 4 agents. Implicit coordination through shared pressure gradients outperforms explicit hierarchical control, suggesting that constraint-driven emergence offers a simpler and more effective foundation for multi-agent AI.
https://arxiv.org/abs/2601.08129
Academic Papers
svg
49b4f9196184a4eefe6e8877b0a4b3703e7985a5e04aac6c5120d9eb1e5f04eb
2026-02-02T00:00:00-05:00
DeepResearch Bench II: Diagnosing Deep Research Agents via Rubrics from Expert Report
arXiv:2601.08536v2 Announce Type: replace Abstract: Deep Research Systems (DRS) aim to help users search the web, synthesize information, and deliver comprehensive investigative reports. However, how to rigorously evaluate these systems remains under-explored. Existing deep-research benchmarks often fall into two failure modes. Some do not adequately test a system's ability to analyze evidence and write coherent reports. Others rely on evaluation criteria that are either overly coarse or directly defined by LLMs (or both), leading to scores that can be biased relative to human experts and are hard to verify or interpret. To address these issues, we introduce Deep Research Bench II, a new benchmark for evaluating DRS-generated reports. It contains 132 grounded research tasks across 22 domains; for each task, a system must produce a long-form research report that is evaluated by a set of 9430 fine-grained binary rubrics in total, covering three dimensions: information recall, analysis, and presentation. All rubrics are derived from carefully selected expert-written investigative articles and are constructed through a four-stage LLM+human pipeline that combines automatic extraction with over 400 human-hours of expert review, ensuring that the criteria are atomic, verifiable, and aligned with human expert judgment. We evaluate several state-of-the-art deep-research systems on Deep Research Bench II and find that even the strongest models satisfy fewer than 50% of the rubrics, revealing a substantial gap between current DRSs and human experts.
https://arxiv.org/abs/2601.08536
Academic Papers
svg
5974c3a9728977232d8bcbf206086fe4f2933dbd93c48492050c776089bdbe0f
2026-02-02T00:00:00-05:00
ATOD: An Evaluation Framework and Benchmark for Agentic Task-Oriented Dialogue Systems
arXiv:2601.11854v2 Announce Type: replace Abstract: Recent advances in task-oriented dialogue (TOD) systems, driven by large language models (LLMs) with extensive API and tool integration, have enabled conversational agents to coordinate interleaved goals, maintain long-horizon context, and act proactively through asynchronous execution. These capabilities extend beyond traditional TOD systems, yet existing benchmarks lack systematic support for evaluating such agentic behaviors. To address this gap, we introduce ATOD, a benchmark and synthetic dialogue generation pipeline that produces richly annotated conversations requiring long-term reasoning. ATOD captures key characteristics of advanced TOD, including multi-goal coordination, dependency management, memory, adaptability, and proactivity. Building on ATOD, we propose ATOD-Eval, a holistic evaluation framework that translates these dimensions into fine-grained metrics and supports reproducible offline and online evaluation. We further present a strong agentic memory-based evaluator for benchmarking on ATOD. Experiments show that ATOD-Eval enables comprehensive assessment across task completion, agentic capability, and response quality, and that the proposed evaluator offers a better accuracy-efficiency tradeoff compared to existing memory- and LLM-based approaches under this evaluation setting.
https://arxiv.org/abs/2601.11854
Academic Papers
svg
feb2571557e56c58eecee0e589673a2a74becef774b7a02bceb26c6ae7bf5209
2026-02-02T00:00:00-05:00
A combined criterion of surface free energy and roughness to predict the wettability of non-ideal low-energy surfaces
arXiv:2601.22172v1 Announce Type: new Abstract: The significance of wettability between solid and liquid substances in different fields encourages scientists to develop accurate models to estimate the resultant apparent contact angles. Surface free energy (SFE), which is principally defined for ideal (flat) surfaces, is not applicable for predicting the wettability of real (rough) surfaces. This paper introduces a new parameter, namely normalized surface free energy (NSFE), as a combination of SFE and roughness, to predict the contact angle of liquids on non-ideal low-energy surfaces. The remarkable consistency of the predicted and measured contact angles of liquids on several rough surfaces also confirms the validity of the approach.
https://arxiv.org/abs/2601.22172
Academic Papers
svg
66b5f4e207124eb027edbc30426e49ae4e2a82feaa3cea6df20045189b916fbb
2026-02-02T00:00:00-05:00
Universal rapid machine learning models for predicting unconvoluted and convoluted X-ray Absorption Spectra
arXiv:2601.22173v1 Announce Type: new Abstract: X-ray absorption near edge structure (XANES) is an essential tool for elucidating the atomic-scale, local three-dimensional (3D) structure of given materials and molecules. The rapid computation of XANES based on molecular 3D structures constitutes a vital element of quantitative XANES analysis. Here, we present an XANES prediction model. It takes 3D structures as input and generates either unconvoluted XANES or convoluted spectra as output, demonstrating excellent generalizability across diverse instrumental broadening. This model has validated its predictive capability for both hard X-ray XAS (exemplified by the K-edges of 3d and 4d metals and lanthanides) and soft X-ray XAS (using the S K-edge as an example). Adopting the model, XANES spectra of multiple elements can be predicted using a single unified model. We also present a highly efficient 3D structure fitting algorithm based on this unconvoluted XANES prediction model, aiming to serve as an online data analysis method suitable for XAS beamlines.
https://arxiv.org/abs/2601.22173
Academic Papers
svg
246aa6daf0bfcb64e44ed3383d8350c8de56f787bc020c6a7fddbb85f361f369
2026-02-02T00:00:00-05:00
Resonant Coupling Between Electromagnetic Waves and Protein Conformational Dynamics Revealed by Molecular Dynamics Simulations
arXiv:2601.22180v1 Announce Type: new Abstract: The biological effects of electromagnetic fields on proteins remain controversial beyond well-established thermal mechanisms, particularly with respect to frequency-dependent responses. Here, we propose that electromagnetic waves can modulate protein conformation through resonant coupling with intrinsic protein dynamics. Molecular dynamics simulations were employed to characterize spontaneous conformational fluctuations in the absence of external fields, and a tiered screening strategy combined with fast Fourier transform analysis was used to identify dominant intrinsic frequencies associated with periodically fluctuating non-covalent atom or residue pairs. Oscillating external electric fields were subsequently applied at resonant and off-resonant frequencies to evaluate conformational responses across diverse protein systems. The results demonstrate that resonant excitation induces significantly enhanced backbone conformational deviations compared to off-resonant conditions, with the effect becoming more pronounced in structurally flexible and multichain proteins. These findings provide atomistic evidence for frequency-specific resonance between electromagnetic fields and protein conformational dynamics, offering mechanistic insight into frequency-dependent electromagnetic effects and a computational framework for electromagnetic wave-based modulation of protein function.
https://arxiv.org/abs/2601.22180
Academic Papers
svg
8c53e3fcc5a69e8efff7137abfd93408b8400bd2bf6dd3c9368d9542061c66a1
2026-02-02T00:00:00-05:00
Performance evaluation of an offshore wave measurement buoy in monochromatic waves
arXiv:2601.22186v1 Announce Type: new Abstract: The accurate measurement of waves underpins marine energy resource characterization, device design, and project development. Datawell wave buoys are widely deployed around the world and have long served as a trusted standard for wave measurements. We quantify the measurement performance, including wave elevation and energy flux estimation, of a Datawell DWR-MkIII buoy using prescribed monochromatic heave motions on a large-amplitude six-degree-of-freedom motion platform at the National Laboratory of the Rockies, assuming the buoy behaves as an ideal wave follower. Commanded motions were validated with an optical motion tracking system while buoy elevation and raw acceleration were recorded. Wave elevations were propagated to wave energy flux estimation using four methods, including one frequency-domain method and three time-domain methods. Bayesian optimization was applied for the design of experiments, and records from three test sites were also applied and evaluated in the present study. Results show two error regions within the nominal period range of 1.6s to 30s. For wave periods between 5s and 25s, the buoy provides accurate wave height measurements. For short periods less than 5s, the 1.28Hz sampling frequency induces sub-Nyquist artifacts that bias elevation and can drive maximum energy flux estimation errors above 100%. For long periods exceeding 25s, the buoy-reported elevation is underpredicted, with error depending on period but relatively independent of wave height, with maximum wave height and wave energy flux errors reaching 64% and 87%, respectively. The analysis of field data also indicates that the currently recommended method for estimating wave energy flux may underestimate the wave energy flux.
https://arxiv.org/abs/2601.22186
Academic Papers
svg
68cedcd9b72cbd59977b129932210d12b6b28d6666633724f25bec1cd57600fc
2026-02-02T00:00:00-05:00
The Beta-Bound: Drift constraints for Gated Quantum Probabilities
arXiv:2601.22188v1 Announce Type: new Abstract: Quantum mechanics provides extraordinarily accurate probabilistic predictions, yet the framework remains silent on what distinguishes quantum systems from definite measurement outcomes. This paper develops a measurement-theoretic framework for projective gating. The central object is the $\beta$-bound, an inequality that controls how much probability assignments can drift when gating and measurement fail to commute. For a density operator $\rho$, projector $F$, and effect $E$, with gate-passage probability $s = {\rm Tr}(\rho F)$ and commutator norm $\varepsilon = \|[F, E]\|$, the symmetric partial-gating drift satisfies $|\Delta p_F(E)| \leq 2 \sqrt{(1 - s)/s} \cdot \varepsilon$. The constant 2 is sharp. We introduce two diagnostic quantities: the coherence witness $W(\rho, F) = \|F \rho (I - F)\|_1$, measuring cross-boundary coherence, and the record fidelity gap $\Delta_T(\rho_F, R)$, measuring expectation-value change under symmetrisation. Three experimental vignettes demonstrate falsifiability: Hong--Ou--Mandel interferometry, atomic energy-basis dephasing, and decoherence-induced classicality. The framework is operational and interpretation-neutral, compatible with Everettian, Bohmian, QBist, and collapse approaches. It provides quantitative structure that any interpretation must accommodate, along with a template for experimental tests.
https://arxiv.org/abs/2601.22188
Academic Papers
svg
3435d77522d109a697c9117ccc00b99371fdcc69bec81dc86c5ecce59ddcd786
2026-02-02T00:00:00-05:00
Chitosan/alginate bionanocomposites adorned with mesoporous silica nanoparticles for bone tissue engineering
arXiv:2601.22192v1 Announce Type: new Abstract: The regeneration of oral and craniofacial bone defects ranging from minor periodontal and peri-implant defects to large and critical lesions imposes a substantial global health burden. Conventional therapies are associated with several limitations, highlighting the development of a unique treatment strategy, such as tissue engineering. A well-designed scaffold for bone tissue engineering should possess biocompatibility, biodegradability, mechanical strength, and osteoconductivity. For this purpose, mesoporous silica nanoparticles (MSNs) were synthesized and incorporated at different ratios (10, 20, and 30%) into alginate/chitosan (Alg/Chit)-based porous composite scaffolds fabricated through the freeze-drying method. The MSN incorporation significantly improved the mechanical strength of the scaffolds while showing a negligible decreasing effect on the porosity. All of the samples showed desirable swelling behaviors, which is beneficial for cell attachment and proliferation. The MSN-containing scaffolds indicated a decreased hydrolytic degradation in an MSN percentage-dependent manner. The fabricated scaffolds did not depict cytotoxic characteristics. The Alg/Chit/MSN30 scaffolds not only showed noncytotoxic properties, but also increased the cell viability significantly compared to the control group. The biomineralization properties of the MSN-containing nanocomposite scaffolds were significantly higher than the Alg/Chit composite, suggesting the potential of these nanoparticles for bone tissue engineering applications. Taken together, it is concluded that the Alg/Chit/ MSN30 scaffolds are considerable substances for bone tissue regeneration, and MSN has a great tissue engineering potential in addition to its extensive biomedical applications.
https://arxiv.org/abs/2601.22192
Academic Papers
svg
13d3123a8cdeafb1064756ca4a94d5e47af6a8581f6ec8a243771ea6765232d6
2026-02-02T00:00:00-05:00
Zero-information limit of a collective olfactory search model
arXiv:2601.22233v1 Announce Type: new Abstract: We address the problem of how individuals can efficiently integrate their private behavior with information provided by others within a group. To this end, we consider the model of collective search introduced in [https://doi.org/10.1103/PhysRevE.102.012402], under a minimal setting with no olfactory information. Agents combine a private exploratory behavior and a social imitation consisting of aligning with their neighbors, and weigh the two contributions with a single ``trust" parameter that controls their relative influence. We find that an optimal trust parameter exists even in the absence of olfactory information, as was observed in the original model. Optimality is dictated by the need to explore the minimal region of space that contains the target. An optimal trust parameter emerges from this constraint because it tunes imitation, which induces a collective mechanism of inertia affecting the size and path of the swarm. We predict the optimal trust parameter for cohesive groups where all agents interact with one another. We show how optimality depends on the initialization of the agents and the unknown location of the target, in close agreement with numerical simulations. Our results may be leveraged to optimize the design of swarm robotics or to understand information integration in organisms with decentralized nervous systems such as cephalopods.
https://arxiv.org/abs/2601.22233
Academic Papers
svg
8a66b0f508d87a2565e69dcfd02d779eb2cc0d6fc7479fbfc58db32de2d70750
2026-02-02T00:00:00-05:00
Time-domain optical coherence tomography at 2 $\mu\mathrm{m}$ using GaSb-based broadband superluminescent diode
arXiv:2601.22261v1 Announce Type: new Abstract: We report a time-domain optical coherence tomography (TD-OCT) system operating in the 2 $\mu\mathrm{m}$ spectral region, enabled by a GaSb-based superluminescent diode (SLD). The spectrum emitted by the SLD exhibits a full-width half-maximum (FWHM) of $\sim$80 nm centred near 2.1 $\mu\mathrm{m}$. For OCT operation, stable amplified spontaneous emission with low spectral ripple ($<20\%$) is maintained at drive currents below 150 mA. The SLD is fiber coupled and integrated into a fiber-based Michelson interferometer. In the OCT system, the measured coherence envelope yields an axial resolution of approximately 300 $\mu$m in air and enables depth-resolved imaging of scattering paint-based coating samples. In contrast to OCT implementations at 2 $\mu\mathrm{m}$ wavelength region that commonly rely on supercontinuum sources, the use of GaSb-based SLDs offers a compact practical alternative, leveraging the maturity and scalability of electrically driven semiconductor light sources packaged in a standard "butterfly" module. This report represents the first demonstration of TD-OCT imaging at 2 $\mu\mathrm{m}$ using a GaSb-based SLD source and establishes its suitability for compact and scalable mid-IR OCT instrumentation targeting non-biological, low-water-content materials.
https://arxiv.org/abs/2601.22261
Academic Papers
svg
9ef1db9b96c92047ddd1c684e079e95426ea3a4874f564082ad092aee00bbe79
2026-02-02T00:00:00-05:00
Distinguishable spreading dynamics in microbial communities
arXiv:2601.22293v1 Announce Type: new Abstract: A packed community of exponentially proliferating microbes will spread in size exponentially. However, due to nutrient depletion, mechanical constraints, or other limitations, exponential proliferation is not indefinite, and the spreading slows. Here, we theoretically explore a fundamental question: is it possible to infer the dominant limitation type from the spreading dynamics? Using a continuum active fluid model, we consider three limitations to cell proliferation: intrinsic growth arrest (e.g., due to sporulation), pressure from other cells, and nutrient access. We find that memoryless growth arrest still results in superlinear (accelerating) spreading, but at a reduced rate. In contrast, pressure-limited growth results in linear (constant-speed) spreading in the long-time limit. We characterize how the expansion speed depends on the maximum growth rate, the limiting pressure value, and the effective fluid friction. Interestingly, nutrient-limited growth results in a phase transition: depending on the nutrient supply and how efficiently nutrient is converted to biomass, the spreading can be either superlinear or sublinear (decelerating). We predict the phase boundary in terms of these parameters and confirm with simulations. Thus, our results suggest that when an expansion slowdown is observed, its dominant cause is likely nutrient depletion. More generally, our work suggests that cell-level growth limitations can be inferred from population-level dynamics, and it offers a methodology for connecting these two scales.
https://arxiv.org/abs/2601.22293
Academic Papers
svg
b0e40c23d94ce6163a68afc730e3e522b82be3074fb5c9b583a10b9c04bcf2aa
2026-02-02T00:00:00-05:00
Online unsupervised Hebbian learning in deep photonic neuromorphic networks
arXiv:2601.22300v1 Announce Type: new Abstract: While software implementations of neural networks have driven significant advances in computation, the von Neumann architecture imposes fundamental limitations on speed and energy efficiency. Neuromorphic networks, with structures inspired by the brain's architecture, offer a compelling solution with the potential to approach the extreme energy efficiency of neurobiological systems. Photonic neuromorphic networks (PNNs) are particularly attractive because they leverage the inherent advantages of light, namely high parallelism, low latency, and exceptional energy efficiency. Previous PNN demonstrations have largely focused on device-level functionalities or system-level implementations reliant on supervised learning and inefficient optical-electrical-optical (OEO) conversions. Here, we introduce a purely photonic deep PNN architecture that enables online, unsupervised learning. We propose a local feedback mechanism operating entirely in the optical domain that implements a Hebbian learning rule using non-volatile phase-change material synapses. We experimentally demonstrate this approach on a non-trivial letter recognition task using a commercially available fiber-optic platform and achieve a 100 percent recognition rate, showcasing an all-optical solution for efficient, real-time information processing. This work unlocks the potential of photonic computing for complex artificial intelligence applications by enabling direct, high-throughput processing of optical information without intermediate OEO signal conversions.
https://arxiv.org/abs/2601.22300
Academic Papers
svg
160be70a546db066e367cde44bae56607925ef8359ea16a714ef7437f9a8d9c3
2026-02-02T00:00:00-05:00
Correcting temporal bias in mobility data using time-use surveys
arXiv:2601.22330v1 Announce Type: new Abstract: GPS mobility data is a valuable source of behavioral measurement which is subject to systematic biases including the over- or under-representation of demographic groups, and variations in the quality of location sampling across time. In this paper, we address the challenge of temporal bias in mobility data, which can skew the representation of mobility behaviors due to the event-based nature of location data sampling. We use the American Time Use Survey (ATUS) to assess the accuracy of a place-based measure of economic segregation drawn from large-scale mobility data across 11 U.S. cities. We show that comparisons with high quality time use surveys such as the ATUS can validate behavioral insights from mobility data, while quantifying uncertainty and highlighting areas of relative instability in analytical findings. We also propose a temporal re-weighting method that can complement existing bias-mitigation techniques to improve the accuracy of conclusions drawn from GPS-based mobility data.
https://arxiv.org/abs/2601.22330
Academic Papers
svg
21d889f15d178b5dc4d5da1f4f01df8ecae757972b673eda44a539399d62ea33
2026-02-02T00:00:00-05:00
High-resolution calorimetric sample platforms for cryogenic thermodynamic studies with multimodal synchrotron x-ray compatibility
arXiv:2601.22342v1 Announce Type: new Abstract: X-ray calorimetric sample platforms combining specific heat and synchrotron x-ray measurements provide a powerful means to investigate fundamental material properties. Calorimeter cell designs featuring a compact heater and thermometer arranged in a sidecar geometry, with the sample positioned directly above the heater at the center of a silicon nitride membrane, are presented. High-yield, wafer-level batch fabrication of precision calorimetric sensor chips, beamline and laboratory cryostat plugins with sensor mounting and packaging are described. Using our calorimetric sensors, we present specific heat measurements on samples with masses ranging from 4 {\mu}g to 145 {\mu}g. The sample and reference cells are characterized with relaxation and ac steady-state measurements. The thermal response is captured using lock-in detection at carefully optimized measurement frequencies, with phase-lag correction ensuring precise extraction of heat capacity. The reference cell's background heat capacity was measured to be under 320 nJ/K at 300 K, decreasing to just 0.4 nJ/K at 0.7 K. The calorimeter performance is illustrated by studying the specific heat of small samples of superconducting Nb and a 4 {\mu}g piece of superconducting Al under different magnetic field strengths. The determination of fundamental thermodynamic quantities from low-temperature electronic and lattice specific heat measurements is discussed. These versatile, high-throughput sample platforms are engineered for small-sample calorimetry across a broad cryogenic temperature range, and they support scalable integration with a wide range of cryostats, including beamline cryostats at the Advanced Photon Source. They accommodate multimodal geometries and enable operation under ultra-high vacuum, millikelvin temperatures, magnetic fields, and x-ray illumination.
https://arxiv.org/abs/2601.22342
Academic Papers
svg
642d4d418d797fe63a1aa42cebd2c1ae8e66766371047df5c58acf274fc24682
2026-02-02T00:00:00-05:00
Low energy elastic scattering of H, D and T on $^{3}$He and $^{4}$He
arXiv:2601.22360v1 Announce Type: new Abstract: Motivated by the needs of atomic tritium sources for neutrino mass experiments, we present calculations of energy-dependent elastic scattering cross sections of hydrogen isotopes (H, D and T) on helium isotopes ($^3$He and $^4$He) in the temperature range 1~mK to 300~K. The tritium-on-helium cross sections are found to be enhanced over their hydrogen-on-helium counterparts by a near-threshold resonant s-wave bound state at low energy, similar to that predicted in the triplet T-T system. While the energy-dependent cross sections span a wide range at low energy due to this s-wave enhancement, they tend toward a common value at high energy where the scattering becomes effectively geometric in nature.
https://arxiv.org/abs/2601.22360
Academic Papers
svg
f30ca7ba8f4c085a05f83ae9545c0bae985f413dd145c4e8f375639bf4fe9905
2026-02-02T00:00:00-05:00
PPG-Based Heart Rate Accuracy in Diverse Populations: Investigating Inequities Across Body Composition and Skin Tones
arXiv:2601.22377v1 Announce Type: new Abstract: Wearable devices are widely used for heart rate (HR) monitoring, yet their accuracy across diverse body compositions and skin tones remains uncertain. This study evaluated four wrist-worn devices (Apple, Fitbit, Samsung, Garmin) in 58 Hispanic adults with Fitzpatrick skin types III to V during a cycling protocol alternating moderate (0.64 to 0.76 HRmax) and vigorous (0.77 to 0.95 HRmax) intensities. Criterion HR was obtained using a Polar H10 ECG, and accuracy was assessed using mean absolute error, mean absolute percentage error (MAPE), bias, and intraclass correlation coefficients. All devices showed significant deviation from criterion measures. Apple and Garmin demonstrated the lowest error, whereas Fitbit and Samsung exhibited greater inaccuracies. Higher BMI and darker skin tones were associated with increased MAPE. These biases disproportionately affect higher-risk populations, underscoring the need for improved algorithms to ensure equitable health monitoring.
https://arxiv.org/abs/2601.22377
Academic Papers
svg
431e8b6ea013dcd6f1a404bdc1dc77edca69e5b85316a2ef87a6677368a5d0c8
2026-02-02T00:00:00-05:00
Convergent Discovery of Critical Phenomena Mathematics Across Disciplines: A Cross-Domain Analysis
arXiv:2601.22389v1 Announce Type: new Abstract: Techniques for detecting critical phenomena -- phase transitions where correlation length diverges and small perturbations have large effects -- have been developed across at least eight fields of application over nine decades. We document this convergence pattern. The physicist's correlation length $\xi$, the cardiologist's DFA scaling exponent $\alpha$, the financial analyst's Hurst exponent $H$, and the machine learning engineer's spectral radius $\chi$ all measure correlation decay rate, detecting the same critical signatures under different notation. Citation analysis reveals minimal cross-domain awareness during the formative period (1987--2010): researchers in biomedicine, finance, machine learning, power systems, and traffic flow developed equivalent techniques independently, each with distinct notation and terminology. We present Metatron Dynamics, a framework derived from distributed systems engineering, as a candidate ninth independent discovery -- strengthening the convergence pattern while acknowledging that as authors of both the framework and this analysis, external validation would strengthen this claim. Correspondence testing on the 2D Ising model confirms that measures from multiple frameworks correctly identify the critical regime at $T_c = 2.269$. We argue that repeated independent discovery establishes criticality mathematics as fundamental public knowledge, with implications for cross-disciplinary education and research accessibility. Because these findings affect fields beyond mathematics and physics, we include a plain-language summary in Appendix B for non-specialist readers.
https://arxiv.org/abs/2601.22389
Academic Papers
svg
84114fe3f837d1d41730b374d2009866d18b9047290660625e850af635decaa5
2026-02-02T00:00:00-05:00
Body Fat, Skin Tone, and the Accuracy of Smartwatch Caloric Expenditure Estimates
arXiv:2601.22391v1 Announce Type: new Abstract: Smartwatches are widely used to estimate caloric expenditure for weight management, clinical decision making, and public health monitoring. These devices combine photoplethysmography, accelerometry, and proprietary algorithms. However, prior studies report substantial error, and the influence of moderators such as skin tone and body fat percentage (BF) remains underexamined. This study tested whether smartwatch brand, BF, and Fitzpatrick skin type (III to V) predict caloric expenditure error relative to indirect calorimetry. Fifty-eight Hispanic adults completed a single laboratory visit including a ten-minute recumbent cycling protocol with alternating two-minute moderate- and vigorous-intensity intervals, bracketed by rest and recovery. Participants wore four consumer devices: Apple Watch Series 8, Fitbit Sense 2, Samsung Galaxy Watch 5, and Garmin Forerunner 955. Energy expenditure was measured using a COSMED K5 metabolic system. After device-specific data quality filtering, valid participant-device pairings ranged from 44 to 52 per brand. One-sample tests showed significant mean bias for three devices: Apple, Garmin, and Samsung. Fitbit showed no significant overall bias, although this depended on device-specific outlier removal. Mean bias varied by brand, with Garmin and Samsung showing the largest overestimations. Mixed-effects models revealed significant effects of device and BF, as well as a device-by-BF interaction, with physical activity energy expenditure error increasing as adiposity increased. Overall, common smartwatches substantially misestimate caloric expenditure compared with indirect calorimetry. Error varies by brand and worsens with higher body fat, highlighting limitations of current consumer wearables and the need for improved accuracy across diverse body types.
https://arxiv.org/abs/2601.22391
Academic Papers
svg
15ddc93f9f0184a73662e8892c1b9177c7ca6fe0e9c5c237502db2586af3af5f
2026-02-02T00:00:00-05:00
Active Learning vs Traditional Lecturing in Introductory Mechanics: A Pooled Pass-Rate Benchmark Under Common Departmental Assessments from a Latin American Institutional Change Initiative
arXiv:2601.22428v1 Announce Type: new Abstract: Improving student success in introductory physics remains a persistent challenge despite substantial progress from research-based instructional practices. Evidence from the Latin American context remains limited, where resources for instructional change are often constrained. This study reports a transparent benchmark of student passing outcomes in \textit{Elementary Mechanics I} at a large public university in M\'exico, comparing sections using Active Learning (AL) with those using Traditional Lecturing (TL). The labels AL and TL are operational, referring to section-level implementations by individual instructors rather than standardized protocols. Using aggregated counts from coordinator reports and common departmental assessments -- written by a committee independent of instructional modality -- we estimated pooled student-level pass probabilities for the first and second midterm exams, the global exam, and the final mark. Modality differences are summarized primarily by the risk difference, $RD_a=p_{\mathrm{AL},a}-p_{\mathrm{TL},a}$ (percentage points), with uncertainty quantified using Wilson confidence intervals and a Bayesian reference analysis with Jeffreys priors for binomial proportions. Across assessments, pooled pass rates were higher under AL than under TL, with the strongest separation observed for the global exam and the final mark. For these outcomes, the $95\%$ confidence intervals excluded zero, including under a random-intercept Bayesian model. We emphasize a constrained interpretation: the results provide a student-weighted benchmark of ``AL as implemented'' versus ``TL as implemented'' in this setting, without isolating the causal effect of individual instructional techniques. Implications are discussed for departmental decision-making and feasible next steps in evaluation, including improved student data collection and more robust qualitative analysis.
https://arxiv.org/abs/2601.22428
Academic Papers
svg
6edae6879ecf067f29222ba82ddc6c18f4754caf49aeeb54d5bc3aa0e9444273
2026-02-02T00:00:00-05:00
Correlation-Based Diagnostics of Social Contagion Dynamics in Multiplex Networks
arXiv:2601.22459v1 Announce Type: new Abstract: Multiplex contagion dynamics display localization phenomena in which spreading activity concentrates on a subset of layers, as well as delocalized regimes where layers behave collectively. We investigate how these regimes are encoded in temporal correlations of node activity. By deriving a closed-form mean-field expression for node autocorrelations in a contact-based social contagion multiplex model and validating it through simulations, we show that lag-one autocorrelations act as sensitive indicators of both activation and localization transitions. Our results establish temporal correlations as lightweight, structure-agnostic probes of multiplex spreading dynamics, particularly valuable in partially observable systems.
https://arxiv.org/abs/2601.22459
Academic Papers
svg
913a78bc1ee57293a1ad2771d205b3197674fe18b1803c99bd40eddb1b2a1fc1
2026-02-02T00:00:00-05:00
Observation of Janus Chirality for Coherent Thermal Emission from Metasurfaces
arXiv:2601.22506v1 Announce Type: new Abstract: Metasurfaces have emerged as a powerful tool for controlling thermal radiation, yet achieving coherent emission with opposite circular handednesses remains a highly challenging problem. Here, we demonstrate experimentally the Janus chiral thermal emission from metasurfaces with opposite circular handednesses on either side of a single device. We employ anisotropic metasurfaces supporting high-Q resonances with photonic flatbands, enabling near-unity circular dichroism through in-plane symmetry control. Our experiments confirm the Janus coherent emission, and they are validated by the results of the coupled-mode theory. The flatband resonant metasurfaces enabling control of chiral thermal emission provide an efficient platform for spin-controlled light-matter interaction.
https://arxiv.org/abs/2601.22506
Academic Papers
svg
7be7cf4815309045f9983227c10d9c58496aa050e9ab280d77be7e8360da278c
2026-02-02T00:00:00-05:00
Linking Extratropical Forecast Degradation to Tropical Cyclones in Physical and AI Models
arXiv:2601.22540v1 Announce Type: new Abstract: Global medium-range weather forecasts suffer occasional failures ("busts"), often linked to tropical cyclones (TCs). We systematically investigate the TC influences by clustering historical TC tracks and comparing skill of forecasts from a physics-based model (ECMWF-IFS) and an AI-physics hybrid model (Google-NGCM) initialized near TC genesis. Case analysis shows both models exhibit similar large-scale error growth in the extratropics, suggesting prediction skill bounded by similar limits despite model differences in spatial resolution and parameterized physics. Aggregated statistics reveal that low skill of Week-2 forecasts may occur after TC genesis, regardless of whether they recurve or not. While recurving tracks are established error sources, zonal-track clusters can be associated with similarly profound forecast degradation, acting through Rossby wave dynamics and remote moisture transport mechanisms. Furthermore, the stochastic NGCM generally outperforms its deterministic counterpart and suggests that TC-related forecast degradation is more pronounced for Europe than elsewhere in the Northern Hemisphere.
https://arxiv.org/abs/2601.22540
Academic Papers
svg
3e2221b0a516c1199e1d45eda3b84e6a001e82466369daf166f546e4f0c7746c
2026-02-02T00:00:00-05:00
Strong Coupling Between RF Photons and Plasmons of Electrons on Liquid Helium
arXiv:2601.22552v1 Announce Type: new Abstract: Plasmons, arising from the collective motion of electrons, can interact strongly with electromagnetic fields or photons; this capability has been exploited across a broad range of applications, from chemical reactivity to biosensing. Recently, there has been growing interest in plasmons for applications in quantum information processing. Electrons floating on liquid helium provide an exceptionally clean, disorder-free system and have emerged as a promising platform for this purpose. In this work, we establish this system as a tunable plasmon-photon hybrid platform. We demonstrate strong coupling between floating-electron plasmons and radio-frequency (RF) photons confined in an LC resonator. Time-resolved measurements reveal coherent oscillatory energy exchange between the plasmonic and photonic modes, providing direct evidence of their coherent coupling. These results represent a step towards cavity quantum electrodynamics with a floating-electron plasmon coupled to a resonator. Furthermore, the LC resonator serves as a sensitive probe of electron-on-helium physics, enabling the observation of the Wigner crystal transition and a quantitative study of the temperature-dependent plasmon decay arising from ripplon-induced scattering.
https://arxiv.org/abs/2601.22552
Academic Papers
svg
51d8cb1fa8604a98f46dd0a3af0c013d18677b0638c41ebf6667344185804512
2026-02-02T00:00:00-05:00
Cross-feeding yields high-dimensional chaos and coexistence of species beyond exclusion principle
arXiv:2601.22564v1 Announce Type: new Abstract: Species interactions through cross-feeding via leakage and uptake of chemicals are important in microbial communities, and play an essential role in the coexistence of diverse species. Here, we study a simple dynamical model of a microbial community in which species interact by competing for the uptake of common metabolites that are leaked by other species. The model includes coupled dynamics of species populations and chemical concentrations in the medium, allowing for a variety of uptake and leakage networks among species. Depending on the structure of these networks, the system exhibits different attractors, including fixed points, limit cycles, low-dimensional chaos, and high-dimensional chaos. In the fixed-point and limit-cycle cases, the number of coexisting species is bounded by the number of exchangeable chemicals, consistent with the well-known competitive exclusion principle. In contrast, in the low-dimensional chaotic regime, the number of coexisting species exhibits noticeable but limited excess over this limit. Remarkably, in the high-dimensional chaotic regime, a much larger number of species beyond this limit coexist persistently over time. In this case, the rank-abundance distribution is broader than exponential, as often observed in real ecosystems. The population dynamics displays intermittent switching among quasi-stationary states, while the chemical dynamics explore most of the high dimensions. We find that such high-dimensional chaos is ubiquitous when the number of uptake chemicals is moderately larger than the number of leaked chemicals. Our results identify high-dimensional chaos with intermittent switching as a generic dynamical mechanism that stabilizes coexistence in interacting systems. We discuss its relevance to sustaining diverse microbial communities with leak-uptake cross-feeding.
https://arxiv.org/abs/2601.22564
Academic Papers
svg
7ed5b95a6823ed2cb7d61c0a4116d627775d5101200a1a5966175afe9ecc4959
2026-02-02T00:00:00-05:00
Sculpting of Martian brain terrain reveals the drying of ancient Mars
arXiv:2601.22606v1 Announce Type: new Abstract: The Martian brain terrain (MBT), characterized by its unique brain-like morphology, is a potential geological archive for finding hints of paleoclimatic conditions during its formation period. The morphological similarity of MBT to self-organized patterned ground on Earth suggests a shared formation mechanism. However, the lack of quantitative descriptions and robust physical modeling of self-organized stone transport jointly limits the study of the thermal and aqueous conditions governing MBT's formation. Here we established a specialized quantitative system for extracting the morphological features of MBT, taking a typical region located in the northern Arabia Terra as an example, and then employed a numerical model to investigate its formation mechanisms. Our simulation results accurately replicate the observed morphology of MBT, matching its key geometric metrics with deviations $<10\%$. Crucially, however, we find that the self-organized transport can solely produce relief $<0.5$ m, insufficient to explain the formation of MBT with average relief of $3.29 \pm 0.65$ m. We attribute this discrepancy to sculpting driven by late-stage sublimation, constraining cumulative subsurface ice loss in this region to $\sim 3$ meters over the past $\sim 3$ Ma. These findings demonstrate that MBT's formation is a multi-stage process: initial patterning driven by freeze-thaw cycles (implying liquid water) followed by vertical sculpting via sublimation (requiring a dry environment). This evolution provides physical evidence for the transition of the ancient Martian climate from a wetter period to a colder hyper-arid state.
https://arxiv.org/abs/2601.22606
Academic Papers
svg
cbec9155bed08e83e75b61013e9d14fd1c291598dc518b91c9a7fa59c12b7898
2026-02-02T00:00:00-05:00
Fluid transport by a single active filament in a three-dimensional two-phase flow
arXiv:2601.22698v1 Announce Type: new Abstract: Micro-scale cilia play a vital role in mucociliary clearance (MCC) in the human respiratory airways. In this numerical study, we examine fluid transport driven by the active beating of a single filament immersed in a three-dimensional two-phase flow. The cilium is modeled as an elastic filament actuated by a time-varying basal angle. The two-phase flow is resolved using the Shan-Chen model in a lattice Boltzmann solver, while the two-way coupling between the filament and the fluid is treated by the immersed boundary method. Pathological conditions such as cystic fibrosis and chronic obstructive pulmonary disease are associated with drastic alterations of MCC properties, including changes in periciliary layer (PCL) thickness and the viscosity ratio between the PCL and the mucus layer (ML). Here, we systematically investigate the effects of these parameters, along with filament bending stiffness, on the beating pattern and fluid transport. Within the parameter ranges investigated, a moderate PCL thickness and viscosity ratio, together with high bending stiffness, tend to yield higher net flow rate and transport efficiency. The underlying hydrodynamic mechanisms are characterized through analyses of the beating pattern, filament dynamics, energy partition, and flow-field evolution. Two competing mechanisms are identified: the drag-elastic force balance and the viscous diffusion of momentum. Furthermore, quantitative relationships are established between flow rate and beating pattern, expressed in terms of tip amplitude and beating asymmetry.
https://arxiv.org/abs/2601.22698
Academic Papers
svg
fedc44c5549812761269d0b94d4f6bfddae49143557007c99ef7e0e86eed409b
2026-02-02T00:00:00-05:00
Hybrid MCP-PMT characterisation on a testbeam with Cherenkov setup
arXiv:2601.22713v1 Announce Type: new Abstract: A novel photodetector based on a MCP-PMT vacuum tube with encapsulated CMOS ASIC has been tested at the CERN SPS high energy hadron beam, allowing single-photon Cherenkov detection at a gain of 10$^4$ and with a timing resolution of about 280~ps.
https://arxiv.org/abs/2601.22713
Academic Papers
svg
8d079739673ad2d82685c65260a9890fc666fd29a6396cc6ef05f58f36c30bd1
2026-02-02T00:00:00-05:00
A Wide Bandwidth Trans-impedance Amplifier for Picosecond-Scale SiPM Characterization in a Wide Temperature Range
arXiv:2601.22727v1 Announce Type: new Abstract: Future high-energy physics experiments using SiPMs as photosensitive elements may require operation at low temperatures (down to 80 K) to measure single photons with high time resolution in a highly radioactive environment. This calls for a complete characterization of these sensors over a wide temperature range to find the best compromise between detector performance and cooling requirements. This paper presents the design of a transimpedance amplifier featuring high gain ($\sim 7500$ $\mathrm{V/A}$), very high speed ($ < 500$ $\mathrm{ps}$ rise time) and low input noise ($\lesssim 0.2$ $\mathrm{pA/\sqrt{Hz}}$), able to faithfully reproduce all the features of SiPM signals with very low noise and time jitter. These features make the amplifier suitable for precise measurements of the time-of-arrival of single-photon signals, as well as gain and recovery time. This article provides a detailed and thorough analysis of the circuit. The network was simulated and measured in two configurations that differ in their open-loop gain and dominant pole frequencies. After selecting the best configuration for our purposes, the amplifier was characterized in detail at ambient temperature and at 80 K. Finally, we evaluated the amplifier using a SiPM operated at low over-voltage. While SiPMs are typically characterized at high over-voltage to enhance gain and minimize timing jitter, testing at low over-voltage allowed us to assess the amplifier's performance under more challenging and realistic conditions for single-photon timing.
https://arxiv.org/abs/2601.22727
Academic Papers
svg
04275f096ff73f56c8ea2d89ea81661e928a5c220ae6477975af266ac00b27d5
2026-02-02T00:00:00-05:00
Combining quasi-static and high frequency experiments for the viscoelastic characterization of brain tissue
arXiv:2601.22743v1 Announce Type: new Abstract: Mechanical models of brain tissue are a beneficial tool to simulate neurosurgical interventions, disease progression, or brain development. However, the accuracy and predictive capacity of such a model relies on a precise experimental characterization of the tissue's mechanical behavior. Such a characterization remains limited by inconsistent or contradictory experimental responses reported in the literature, particularly when measurements are performed at different time or length scales. Although brain tissue has been extensively investigated in previous studies, the combination of experimental findings from different scales has received limited attention. In this study, we combine ex vivo mechanical responses of porcine brain tissue obtained at different time scales in a mechanical model. We investigated the mechanical behavior of three different brain regions in the quasi-static domain with multi-modal large strain rheometer measurements and at high frequencies with magnetic resonance elastography (MRE). A comparative analysis of the mechanical parameters obtained from both experimental techniques demonstrated consistent regional variations in the viscoelastic behavior across the two domains. However, the mechanical behavior changes from a higher elasticity in the quasi-static and low frequency domain to a dominating viscosity at high frequencies. Based on the quasi-static and the high frequency behavior, we calibrated a fractional Kelvin-Voigt model and consequently unified the two responses in a single mechanical model to obtain a comprehensive characterization of the tissue's mechanical behavior.
https://arxiv.org/abs/2601.22743
Academic Papers
svg
4b110181ff489ab22b6408121cd39ba07e93d661eb23b47a5e957bdb6cd5e4cc
2026-02-02T00:00:00-05:00
Femtosecond Nonadiabatic Confinement of Molecular Dication Yield
arXiv:2601.22750v1 Announce Type: new Abstract: Doubly charged molecular cations often carry signatures of electronic correlation and electron-nuclear entanglement present in the parent cation. Here, we produce ethylene dications using a combination of an extreme ultraviolet pump and near-infrared probe pulses, observing a peak in the dication yield at a pump-probe delay of approximately 15 fs. Ab-initio calculations, which explicitly take into account coupled electron-nuclear dynamics induced by the pump and the multiphoton nature of the probe-induced ionization step, reproduced the observed delay in the yield. It originates from resonant enhancement of the multiphoton ionization of the electronically excited ethylene cation as the carbon-carbon double bond expands. However, this effect is tempered by rapid nonadiabatic relaxation of the excited ionic states. Our results suggest a general mechanism whereby ultrafast nonadiabatic relaxation of a molecular ion can compete with its strong-field ionization rate, confining the dication yield to a narrow temporal window of a few femtoseconds.
https://arxiv.org/abs/2601.22750
Academic Papers
svg
5f13ff7f1fc88fc555319940ebd386641e560c7e646586befd332c15a8956375
2026-02-02T00:00:00-05:00
Electroactive morphing effects on the aerodynamic performance through wobulation around an A320 wing with vibrating trailing edge at high Reynolds number
arXiv:2601.22768v1 Announce Type: new Abstract: This study aims to investigate the effects of electroactive morphing on a 70cm chord A320 wing by means of near trailing edge slight deformation and vibration. Wing morphing is performed by Macro Fiber Composites (MFC) mini-piezoelectric actuators distributed along the span of the ''Reduced Scale'' (RS) A320 prototype of the H2020 No 723402 European research project SMS, ''Smart Morphing and Sensing for aeronautical configurations'', (https://cordis.europa.eu/project/id/723402 and www.smartwing.org/SMS/EU). The configuration studied corresponds to a low-subsonic regime (Mach number 0.063) with a 10 degree incidence and a Reynolds number of 1 Million. The numerical simulations are carried out with the Navier-Stokes Multi-Block (NSMB) solver, which takes into account the deformation of the rear part of the wing implemented experimentally with the piezoelectric actuators. A detailed physical analysis of the morphing effects on the wake dynamics and on the aerodynamic performance is conducted with a constant amplitude of 0.7cm over a wide range of actuation frequencies [10-600]Hz. Optimal vibration ranges of [180-192]Hz and [205-215]Hz were found to respectively provide a 1% drag reduction and a 2% lift-to-drag ratio increase compared to the non-morphing (static) configuration. The natural frequencies associated with the shear layer Kelvin-Helmholtz (KH) vortices and the Von-Karman (VK) vortex shedding were found to play a central role in the modification of the wake dynamics by morphing as well as in the increase of the aerodynamic performance. Actuating at (or close to) the upper shear layer (USL) natural frequency (~185Hz) provides a drag reduction and a lift-to-drag ratio increase each of order 1%, while actuating at (or close to) the lower shear layer (LSL) natural frequency (~208Hz) provides a lift increase of order 8% and a lift-to-drag ratio increase of order 2%.
Furthermore, the linear variation of the actuation frequency over time, called wobulation, was shown to have significant effects. This approach demonstrated, through an appropriate mapping, the ability to quickly and efficiently detect optimal constant actuation frequency ranges providing aerodynamic performance increase and simultaneously reducing the amplitude of the main instability modes.
https://arxiv.org/abs/2601.22768
Academic Papers
svg
506e117f52bbc94cf462af903102f38e26d3348241555f321bbc5703a9ba458b
2026-02-02T00:00:00-05:00
Fast Eikonal Phase Retrieval for High-Throughput Beamlines
arXiv:2601.22793v1 Announce Type: new Abstract: We introduce a fast Eikonal Phase Retrieval (EPR) formulation that accelerates eikonal phase retrieval by more than two orders of magnitude while retaining controlled accuracy. The method is derived from a second-order asymptotic expansion in the propagation distance $L$ and complemented by the leading Wentzel--Kramers--Brillouin (WKB) wave-optics correction, yielding an efficient iterative correction scheme preconditioned by FFT-diagonal, energy-dependent inverse operators (Paganin-type filters). To ensure robustness across practical experimental regimes, we combine two complementary solvers: (i) a local $O(L^2)$ closure that is accurate when eikonal shifts remain sub-pixel, and (ii) a non-local formulation for multi-pixel shifts, in which intensity is propagated through an explicit eikonal ray mapping using a mass-conserving bilinear redistribution on the detector grid, and detector residuals are transferred back to the object grid by the corresponding adjoint (transpose), implemented as bilinear interpolation, before applying an approximate FFT-diagonal preconditioner to accelerate convergence. The same framework supports polychromatic data through a compact spectral discretisation, allowing energy-dependent transport and inversion while keeping the iteration GPU/FFT efficient. Overall, this unified approach enables accurate and computationally efficient phase retrieval across propagation conditions relevant to high-throughput PPC-$\mu$CT experiments.
https://arxiv.org/abs/2601.22793
Academic Papers
svg
4570341d48b44e72d6a69d753044593e7bfefec61da959fd936a71c388c82650
2026-02-02T00:00:00-05:00
Batch Bayesian optimization of attosecond betatron pulses from laser wakefield acceleration
arXiv:2601.22794v1 Announce Type: new Abstract: Laser wakefield acceleration can generate a femtosecond-scale broadband X-ray betatron radiation pulse from electrons accelerated by an intense laser pulse in a plasma. The micrometer-scale of the source makes wakefield betatron radiation well-suited for advanced imaging techniques, including diffraction and phase-contrast imaging. Recent progress in laser technology can expand these capabilities into the attosecond regime, where the practical applications would significantly benefit from the increased energy contained within the pulse. Here we use numerical simulations combined with batch Bayesian optimization to enhance the radiation produced by an attosecond betatron source. The method enables an efficient exploration of a multi-parameter space and identifies a regime in which a plasma density spike triggers the generation of a high-charge electron beam. This results in an improvement of more than one order of magnitude in the on-axis time-averaged power within the central time containing half of the radiated energy, compared to the reference case without the density spike.
https://arxiv.org/abs/2601.22794
Academic Papers
svg
2d5a8e1b609cc1a42f5ee79bfff4067e68f2145bd7c2e63c411fa1bde51cf2b8
2026-02-02T00:00:00-05:00
Inverse Design of the Topology Bandwidth Tradeoff in Valley Photonic Crystals
arXiv:2601.22958v1 Announce Type: new Abstract: Integrated on-chip photonics increasingly relies on wave propagation that remains stable in the presence of fabrication imperfections, tight bends, and dense routing. Valley photonic crystals (VPCs) offer an attractive path: by opening a gap at the Dirac points of a hexagonal lattice, one can engineer guided modes confined to domain walls that thread around corners with reduced backreflection. We develop a design framework that co-optimizes the photonic bulk band gap and valley Chern number using a modified particle-swarm optimization (PSO), while evaluating the photonic band structure via plane-wave expansion and the topological characteristics using a gauge-invariant lattice discretization to compute the Berry-curvature. The optimized structures exhibit a clean valley-Hall gap with edge bands traversing the gap and high interface transmission in full-wave simulations. These results consolidate topology-aware geometry optimization for robust on-chip guiding.
https://arxiv.org/abs/2601.22958
Academic Papers
svg
5b5b9d231b6b1530906e56a9efa67df719d28f4b8649e76bba2c86e6145692a0
2026-02-02T00:00:00-05:00
Simulation and optimization of the Active Magnetic Shield of the n2EDM experiment
arXiv:2601.22960v1 Announce Type: new Abstract: The n2EDM experiment at the Paul Scherrer Institute aims to conduct a high-sensitivity search for the electric dipole moment of the neutron. Magnetic stability and control are achieved through a combination of passive shielding, provided by a magnetically shielded room (MSR), and a surrounding active field compensation system by an Active Magnetic Shield (AMS). The AMS is a feedback-controlled system of eight coils spanned on an irregular grid, designed to provide magnetic stability to the enclosed volume by actively suppressing external magnetic disturbances. It can compensate static and variable magnetic fields up to $\pm 50$ $\mu$T (homogeneous components) and $\pm 5$ $\mu$T/m (first-order gradients), suppressing them to a few $\mu$T in the sub-Hertz frequency range. We present a full finite element simulation of magnetic fields generated by the AMS in the presence of the MSR. This simulation is of sufficient accuracy to approach our measurements. We demonstrate how the simulation can be used with an example, obtaining an optimal number and placement of feedback sensors using genetic algorithms.
https://arxiv.org/abs/2601.22960
Academic Papers
svg
875da95a96206b08909b3b2ffebbbcfb128aa6df00670b4a4b4afd0bbe36828c
2026-02-02T00:00:00-05:00
Wichmann-Kroll Correction in Muonic Atoms and Hydrogen-Like Electronic Ions: a Comparative Study of Two Methods
arXiv:2601.22979v1 Announce Type: new Abstract: Wichmann-Kroll corrections are calculated in both hydrogen-like electronic ions and muonic systems ($Z = \{36$--$92\}$) using two independent methods. The Gaussian finite basis set approach, enhanced with dual basis construction, analytical large-distance corrections, and $B$-spline representations, provides computational efficiency. The Green function method, based on semi-analytical construction from Dirac solutions with Fermi nuclear charge distributions, offers higher systematic accuracy and freedom from basis-dependent artifacts. Results are consistent with the literature values, providing reliable reference data for precision spectroscopy of exotic atoms.
https://arxiv.org/abs/2601.22979
Academic Papers
svg
c87321e83da049db4ab89be8a490fd8d20a7e2b57d5587f60310cec3a1401247
2026-02-02T00:00:00-05:00
Direct observation of the optical Magnus effect with a trapped ion
arXiv:2601.22981v1 Announce Type: new Abstract: We directly observe and spatially map an optical analog of the Magnus effect, where intrinsic spin-orbit-like coupling of light generates a spin-dependent transverse displacement of the atom-light interaction profile for a $^{40}$Ca$^+$ ion. Probed on a quadrupole transition using a tightly focused beam, we observe displacements of the maximum in the profile of the effective interaction by several 100 nm originating from intrinsic longitudinal electric field components beyond the paraxial approximation. The tight focus of the beam induces additional transverse polarization gradients, which we characterize through a phase-sensitive measurement and spatial maps for different beam configurations. The results establish the physical basis of polarization-gradient interactions relevant to optical tweezer-based quantum control.
https://arxiv.org/abs/2601.22981
Academic Papers
svg
8d8e6fd5eeb8c420ce340eda4c70cda5b189f55e50283f72515d63bdd0005789
2026-02-02T00:00:00-05:00
Unambiguous Vector Magnetometry with Structured Light in Atomic Vapor
arXiv:2601.22998v1 Announce Type: new Abstract: Absorption profiles of vector light upon interaction with atomic vapor carry distinct signatures of the external magnetic field vector. However, this signature becomes ambiguous for antiparallel magnetic field vectors of equal magnitude, which makes their absorption profiles visually indistinguishable. To resolve this ambiguity, we present a theoretical analysis of the interaction of vector light with optically polarized atoms immersed in reference and test magnetic fields. Furthermore, we demonstrate the complete characterization of the arbitrarily oriented (transverse) test magnetic field via Fourier analysis of the absorption profile. This analysis reveals a one-to-one correspondence between the magnetic field properties and the profile's contrast and rotational angle. Our findings open an avenue to design an optical vector atomic magnetometer based on structured light fields.
https://arxiv.org/abs/2601.22998
Academic Papers
svg
ba0896fcdfc86944af7947128ae2d7c1dfc9ef9826682025ff274c8e0639b57c
2026-02-02T00:00:00-05:00
Perturbative Born theory for light scattering by time-modulated scatterers
arXiv:2601.23003v1 Announce Type: new Abstract: We present a theoretical framework for electromagnetic scattering by particles with a permittivity that is periodically varying in time, based on a perturbative approach. Within this framework, we derive explicit expressions for the scattering matrix of the dynamic system in a first-order Born approximation, relating it directly to the corresponding static problem. We show that inelastic scattering amplitudes are governed by overlap integrals between static modes at the input and output frequencies. Using this insight, we analyze scattering from a time-modulated, isotropic, dielectric sphere and a high-permittivity dielectric cylinder, and demonstrate how modal orthogonality can suppress inelastic channels, while appropriate tuning of geometric parameters can significantly enhance them. In particular, we show that cylindrical resonators support strong inelastic scattering when resonance-to-resonance optical transitions, induced by the temporal variation, involve a high-Q supercavity mode. Comparison with full time-Floquet calculations confirms that the first-order Born approximation remains quantitatively accurate for modest modulation amplitudes and provides clear physical intuition for frequency conversion and resonance-mediated scattering processes in time-modulated photonic resonators.
https://arxiv.org/abs/2601.23003
Academic Papers
svg
5495e48b4e440b2a0cbac9716523f6a52bfcc8155a8279a8ab11c83bca35c344
2026-02-02T00:00:00-05:00
Dancing rivulets in an air-filled Hele-Shaw cell
arXiv:2601.23025v1 Announce Type: new Abstract: We study the behaviour of a thin fluid filament (a rivulet) flowing in an air-filled Hele-Shaw cell. Transverse and longitudinal deformations can propagate on this rivulet, although both are linearly attenuated in the parameter range we use. On this seemingly simple system, we impose an external acoustic forcing, homogeneous in space and harmonic in time. When the forcing amplitude exceeds a given threshold, the rivulet responds nonlinearly, adopting a peculiar pattern. We investigate the dance of the rivulet both experimentally, using spatiotemporal measurements, and theoretically, using a model based on depth-averaged Navier-Stokes equations. The instability is due to a three-wave resonant interaction between waves along the rivulet, the resonance condition fixing the pattern wavelength. Although the forcing is additive, the amplification of transverse and longitudinal waves is effectively parametric, being mediated by the linear response of the system to the homogeneous forcing. Our model successfully explains the mode selection and phase-locking between the waves; notably, it allows us to predict the frequency dependence of the instability threshold. The dominant spatiotemporal features of the generated pattern are understood through a multiple-scale analysis.
https://arxiv.org/abs/2601.23025
Academic Papers
svg