Schema:
  id           string, length 64
  published    string, length 19-25
  title        string, length 7-262
  description  string, length 6-54.4k
  link         string, length 31-227
  category     string, 6 classes
  image        string, length 3-247
5a2a60da57d4c109fc243f41b42fd2d209db7f5733e7d9b6e371285f27bb3d73
2026-02-02T00:00:00-05:00
MedMCP-Calc: Benchmarking LLMs for Realistic Medical Calculator Scenarios via MCP Integration
arXiv:2601.23049v1 Announce Type: new Abstract: Medical calculators are fundamental to quantitative, evidence-based clinical practice. However, their real-world use is an adaptive, multi-stage process, requiring proactive EHR data acquisition, scenario-dependent calculator selection, and multi-step computation, whereas current benchmarks focus only on static single-step calculations with explicit instructions. To address these limitations, we introduce MedMCP-Calc, the first benchmark for evaluating LLMs in realistic medical calculator scenarios through Model Context Protocol (MCP) integration. MedMCP-Calc comprises 118 scenario tasks across 4 clinical domains, featuring fuzzy task descriptions mimicking natural queries, structured EHR database interaction, external reference retrieval, and process-level evaluation. Our evaluation of 23 leading models reveals critical limitations: even top performers like Claude Opus 4.5 exhibit substantial gaps, including difficulty selecting appropriate calculators for end-to-end workflows given fuzzy queries, poor performance in iterative SQL-based database interactions, and marked reluctance to leverage external tools for numerical computation. Performance also varies considerably across clinical domains. Building on these findings, we develop CalcMate, a fine-tuned model incorporating scenario planning and tool augmentation, achieving state-of-the-art performance among open-source models. The benchmark and code are available at https://github.com/SPIRAL-MED/MedMCP-Calc.
https://arxiv.org/abs/2601.23049
Academic Papers
svg
7b0730cf18914adf82512d49d031f189d5e73c65a45de798afe39dbad65588f7
2026-02-02T00:00:00-05:00
Digital Twin Synchronization: towards a data-centric architecture
arXiv:2601.23051v1 Announce Type: new Abstract: Digital Twin (DT) technology revolutionizes industrial processes by enabling the representation of physical entities and their dynamics to enhance productivity and operational efficiency. It has emerged as a vital enabling technology in the Industry 4.0 context. The present article examines the particular issue of synchronizing a digital twin while ensuring an accurate reflection of its physical counterpart. Despite the reported recent advances in the design of middleware and low-delay communication technologies, effective synchronization between both worlds remains challenging. This paper reviews currently adopted synchronization technologies and architectures, identifies vital outstanding technical challenges, and proposes a unified synchronization architecture for use by various industrial applications while addressing security and interoperability requirements. As such, this study aims to bridge gaps and advance robust synchronization in DT environments, emphasizing the need for a standardized architecture to ensure seamless operation and continuous improvement of industrial systems.
https://arxiv.org/abs/2601.23051
Academic Papers
svg
f914eb066e01789dff1c0fd45cd60cab9621fa1e29ea98ae308b1437630e932a
2026-02-02T00:00:00-05:00
Adaptive Edge Learning for Density-Aware Graph Generation
arXiv:2601.23052v1 Announce Type: new Abstract: Generating realistic graph-structured data is challenging due to discrete structures, variable sizes, and class-specific connectivity patterns that resist conventional generative modelling. While recent graph generation methods employ generative adversarial network (GAN) frameworks to handle permutation invariance and irregular topologies, they typically rely on random edge sampling with fixed probabilities, limiting their capacity to capture complex structural dependencies between nodes. We propose a density-aware conditional graph generation framework using Wasserstein GANs (WGAN) that replaces random sampling with a learnable distance-based edge predictor. Our approach embeds nodes into a latent space where proximity correlates with edge likelihood, enabling the generator to learn meaningful connectivity patterns. A differentiable edge predictor determines pairwise relationships directly from node embeddings, while a density-aware selection mechanism adaptively controls edge density to match class-specific sparsity distributions observed in real graphs. We train the model using a WGAN with gradient penalty, employing a GCN-based critic to ensure generated graphs exhibit realistic topology and align with target class distributions. Experiments on benchmark datasets demonstrate that our method produces graphs with superior structural coherence and class-consistent connectivity compared to existing baselines. The learned edge predictor captures complex relational patterns beyond simple heuristics, generating graphs whose density and topology closely match real structural distributions. Our results show improved training stability and controllable synthesis, making the framework effective for realistic graph generation and data augmentation. Source code is publicly available at https://github.com/ava-12/Density_Aware_WGAN.git.
https://arxiv.org/abs/2601.23052
Academic Papers
svg
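The density-aware edge mechanism described in the abstract above can be illustrated with a small NumPy sketch. This is my own toy construction, not the paper's implementation: the function names, the fixed logistic link from latent distance to edge probability, and the top-k density rule are all assumptions standing in for the learned components.

```python
import numpy as np

def edge_probabilities(z, tau=1.0):
    """Learnable in the real model; here a fixed logistic link mapping
    latent squared distance to edge probability (closer => more likely)."""
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    p = 1.0 / (1.0 + np.exp(d2 / tau - 1.0))
    np.fill_diagonal(p, 0.0)  # no self-loops
    return p

def density_aware_edges(p, target_density):
    """Adaptive density control: keep the highest-probability node pairs
    until the requested fraction of possible edges is reached."""
    n = p.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    scores = p[rows, cols]
    m = int(round(target_density * len(scores)))
    keep = np.argsort(scores)[::-1][:m]
    A = np.zeros((n, n), dtype=int)
    A[rows[keep], cols[keep]] = 1
    return A + A.T  # undirected adjacency

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))  # toy node embeddings
A = density_aware_edges(edge_probabilities(z), target_density=0.25)
```

The point of the sketch is the contrast with random edge sampling: here connectivity follows latent proximity, and the density knob can be set per class to match the sparsity statistics of real graphs.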
abbe845e81c30b609979ce649c348183df38ab92ae51181dbbeaa254e394688c
2026-02-02T00:00:00-05:00
From Absolute to Relative: Rethinking Reward Shaping in Group-Based Reinforcement Learning
arXiv:2601.23058v1 Announce Type: new Abstract: Reinforcement learning has become a cornerstone for enhancing the reasoning capabilities of Large Language Models, where group-based approaches such as GRPO have emerged as efficient paradigms that optimize policies by leveraging intra-group performance differences. However, these methods typically rely on absolute numerical rewards, introducing intrinsic limitations. In verifiable tasks, identical group evaluations often result in sparse supervision, while in open-ended scenarios, the score range instability of reward models undermines advantage estimation based on group means. To address these limitations, we propose Reinforcement Learning with Relative Rewards (RLRR), a framework that shifts reward shaping from absolute scoring to relative ranking. Complementing this framework, we introduce the Ranking Reward Model, a listwise preference model tailored for group-based optimization to directly generate relative rankings. By transforming raw evaluations into robust relative signals, RLRR effectively mitigates signal sparsity and reward instability. Experimental results demonstrate that RLRR yields consistent performance improvements over standard group-based baselines across reasoning benchmarks and open-ended generation tasks.
https://arxiv.org/abs/2601.23058
Academic Papers
svg
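The shift from absolute to relative rewards described in the abstract above can be made concrete with a toy comparison. This is an illustrative sketch under my own simplifications (the rank-to-score mapping is an assumption; the paper's Ranking Reward Model is a learned listwise model, not this formula):

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-normalized absolute rewards (GRPO-style z-scores). When every
    sample in the group gets the same score, all advantages collapse to zero."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def rank_advantages(rankings):
    """Relative-reward alternative: advantages derived from a ranking
    (1 = best). Centered to sum to zero and invariant to the scorer's scale."""
    rk = np.asarray(rankings, dtype=float)
    n = len(rk)
    score = (n - rk) / (n - 1)  # best -> 1.0, worst -> 0.0
    return score - score.mean()
```

Running `grpo_advantages([1.0, 1.0, 1.0])` gives an all-zero signal (the sparse-supervision failure mode the paper highlights), whereas a ranking always spreads the group across distinct advantage levels regardless of the reward model's score range.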
d3efc5e00bf76351edec9ac1259cddb987acfb017694ea20dee488316b44d571
2026-02-02T00:00:00-05:00
On the Impact of Code Comments for Automated Bug-Fixing: An Empirical Study
arXiv:2601.23059v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly relevant in Software Engineering research and practice, with Automated Bug Fixing (ABF) being one of their key applications. ABF involves transforming a buggy method into its fixed equivalent. A common preprocessing step in ABF involves removing comments from code prior to training. However, we hypothesize that comments may play a critical role in fixing certain types of bugs by providing valuable design and implementation insights. In this study, we investigate how the presence or absence of comments, both during training and at inference time, impacts the bug-fixing capabilities of LLMs. We conduct an empirical evaluation comparing two model families, each evaluated under all combinations of training and inference conditions (with and without comments), and thereby revisiting the common practice of removing comments during training. To address the limited availability of comments in state-of-the-art datasets, we use an LLM to automatically generate comments for methods lacking them. Our findings show that comments improve ABF accuracy by up to threefold when present in both phases, while training with comments does not degrade performance when instances lack them. Additionally, an interpretability analysis identifies that comments detailing method implementation are particularly effective in aiding LLMs to fix bugs accurately.
https://arxiv.org/abs/2601.23059
Academic Papers
svg
7ddeb1d234a442f08f5933bf94fec77a6cfc119a11e3fbcd22d22b09e397fc1c
2026-02-02T00:00:00-05:00
Evaluating the Effectiveness of OpenAI's Parental Control System
arXiv:2601.23062v1 Announce Type: new Abstract: We evaluate how effectively platform-level parental controls moderate a mainstream conversational assistant used by minors. Our two-phase protocol first builds a category-balanced conversation corpus via PAIR-style iterative prompt refinement over the API, then has trained human agents replay/refine those prompts in the consumer UI using a designated child account while monitoring the linked parent inbox for alerts. We focus on seven risk areas -- physical harm, pornography, privacy violation, health consultation, fraud, hate speech, and malware -- and quantify four outcomes: Notification Rate (NR), Leak-Through (LR), Overblocking (OBR), and UI Intervention Rate (UIR). Using an automated judge (with targeted human audit) and comparing the current backend to legacy variants (GPT-4.1/4o), we find that notifications are selective rather than comprehensive: privacy violation, fraud, hate speech, and malware triggered no parental alerts in our runs, whereas physical harm (highest), pornography, and some health queries produced intermittent alerts. The current backend shows lower leak-through than legacy models, yet overblocking of benign, educational queries near sensitive topics remains common and is not surfaced to parents, revealing a policy-product gap between on-screen safeguards and parent-facing telemetry. We propose actionable fixes: broaden/configure the notification taxonomy, couple visible safeguards to privacy-preserving parent summaries, and prefer calibrated, age-appropriate safe rewrites over blanket refusals.
https://arxiv.org/abs/2601.23062
Academic Papers
svg
076464471cebe33755369de2bc1748b710c918a58e147050e1f88ffc3deeff60
2026-02-02T00:00:00-05:00
Gender Disparities in StackOverflow's Community-Based Question Answering: A Matter of Quantity versus Quality
arXiv:2601.23063v1 Announce Type: new Abstract: Community Question-Answering platforms, such as Stack Overflow (SO), are valuable knowledge exchange and problem-solving resources. These platforms incorporate mechanisms to assess the quality of answers and participants' expertise, ideally free from discriminatory biases. However, prior research has highlighted persistent gender biases, raising concerns about the inclusivity and fairness of these systems. Addressing such biases is crucial for fostering equitable online communities. While previous studies focus on detecting gender bias by comparing male and female user characteristics, they often overlook the interaction between genders, inherent answer quality, and the selection of ``best answers'' by question askers. In this study, we investigate whether answer quality is influenced by gender using a combination of human evaluations and automated assessments powered by Large Language Models. Our findings reveal no significant gender differences in answer quality, nor any substantial influence of gender bias on the selection of ``best answers." Instead, we find that the significant gender disparities in SO's reputation scores are primarily attributable to differences in users' activity levels, e.g., the number of questions and answers they write. Our results have important implications for the design of scoring systems in community question-answering platforms. In particular, reputation systems that heavily emphasize activity volume risk amplifying gender disparities that do not reflect actual differences in answer quality, calling for more equitable design strategies.
https://arxiv.org/abs/2601.23063
Academic Papers
svg
81bd916388583a08bbe2d980f9958691d3db287499f23d98991c052ab152a962
2026-02-02T00:00:00-05:00
HierLoc: Hyperbolic Entity Embeddings for Hierarchical Visual Geolocation
arXiv:2601.23064v1 Announce Type: new Abstract: Visual geolocalization, the task of predicting where an image was taken, remains challenging due to global scale, visual ambiguity, and the inherently hierarchical structure of geography. Existing paradigms rely on either large-scale retrieval, which requires storing a large number of image embeddings, grid-based classifiers that ignore geographic continuity, or generative models that diffuse over space but struggle with fine detail. We introduce an entity-centric formulation of geolocation that replaces image-to-image retrieval with a compact hierarchy of geographic entities embedded in Hyperbolic space. Images are aligned directly to country, region, subregion, and city entities through Geo-Weighted Hyperbolic contrastive learning by directly incorporating haversine distance into the contrastive objective. This hierarchical design enables interpretable predictions and efficient inference with 240k entity embeddings instead of over 5 million image embeddings on the OSV5M benchmark, on which our method establishes new state-of-the-art performance. Compared to the current methods in the literature, it reduces mean geodesic error by 19.5%, while improving the fine-grained subregion accuracy by 43%. These results demonstrate that geometry-aware hierarchical embeddings provide a scalable and conceptually new alternative for global image geolocation.
https://arxiv.org/abs/2601.23064
Academic Papers
svg
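The haversine distance that the abstract above folds into its contrastive objective is standard and easy to state. A minimal NumPy sketch (the function name and the Earth-radius default are mine; how HierLoc actually weights the loss with this distance is not shown here):

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle (haversine) distance in km between (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2.0) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * radius_km * np.arcsin(np.sqrt(a))

# e.g. Paris -> London, roughly 340 km; such distances can reweight
# negatives in a contrastive objective so that geographically close
# entities are penalised less than far-away ones.
d = haversine_km(48.8566, 2.3522, 51.5074, -0.1278)
```

Because the distance is computed on the sphere rather than in embedding space, it injects geographic continuity that a plain grid classifier discards.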
e75a11a39018768c6986e1f783541e1cddec3297f685c1b0a195175070329e6c
2026-02-02T00:00:00-05:00
EAG-PT: Emission-Aware Gaussians and Path Tracing for Indoor Scene Reconstruction and Editing
arXiv:2601.23065v1 Announce Type: new Abstract: Recent reconstruction methods based on radiance fields such as NeRF and 3DGS reproduce indoor scenes with high visual fidelity, but break down under scene editing due to baked illumination and the lack of explicit light transport. In contrast, physically based inverse rendering relies on mesh representations and path tracing, which enforce correct light transport but place strong requirements on geometric fidelity, becoming a practical bottleneck for real indoor scenes. In this work, we propose Emission-Aware Gaussians and Path Tracing (EAG-PT), aiming for physically based light transport with a unified 2D Gaussian representation. Our design is based on three core ideas: (1) using 2D Gaussians as a unified scene representation and transport-friendly geometry proxy that avoids mesh reconstruction, (2) explicitly separating emissive and non-emissive components during reconstruction for further scene editing, and (3) decoupling reconstruction from final rendering by using efficient single-bounce optimization and high-quality multi-bounce path tracing after scene editing. Experiments on synthetic and real indoor scenes show that EAG-PT produces more natural and physically consistent renders after editing than radiance-field scene reconstructions, while preserving finer geometric detail and avoiding mesh-induced artifacts compared to mesh-based inverse path tracing. These results suggest promising directions for future use in interior design, XR content creation, and embodied AI.
https://arxiv.org/abs/2601.23065
Academic Papers
svg
fcc1148924dd8588868f0572da3504a3b1217bbeaf5a50f5227b362ef88d514a
2026-02-02T00:00:00-05:00
Towards Explicit Acoustic Evidence Perception in Audio LLMs for Speech Deepfake Detection
arXiv:2601.23066v1 Announce Type: new Abstract: Speech deepfake detection (SDD) focuses on identifying whether a given speech signal is genuine or has been synthetically generated. Existing audio large language model (LLM)-based methods excel in content understanding; however, their predictions are often biased toward semantically correlated cues, which results in fine-grained acoustic artifacts being overlooked during the decision-making process. Consequently, fake speech with natural semantics can bypass detectors despite harboring subtle acoustic anomalies; this suggests that the challenge stems not from the absence of acoustic data, but from its inadequate accessibility when semantic-dominant reasoning prevails. To address this issue, we investigate SDD within the audio LLM paradigm and introduce SDD with Auditory Perception-enhanced Audio Large Language Model (SDD-APALLM), an acoustically enhanced framework designed to explicitly expose fine-grained time-frequency evidence as accessible acoustic cues. By combining raw audio with structured spectrograms, the proposed framework empowers audio LLMs to more effectively capture subtle acoustic inconsistencies without compromising their semantic understanding. Experimental results indicate consistent gains in detection accuracy and robustness, especially in cases where semantic cues are misleading. Further analysis reveals that these improvements stem from a coordinated utilization of semantic and acoustic information, as opposed to simple modality aggregation.
https://arxiv.org/abs/2601.23066
Academic Papers
svg
19351c93c15dbf72617464a02ee664fe12364ba26ad52f94ea72b7d9a3f6c3f5
2026-02-02T00:00:00-05:00
ExplainerPFN: Towards tabular foundation models for model-free zero-shot feature importance estimations
arXiv:2601.23068v1 Announce Type: new Abstract: Computing the importance of features in supervised classification tasks is critical for model interpretability. Shapley values are a widely used approach for explaining model predictions, but require direct access to the underlying model, an assumption frequently violated in real-world deployments. Further, even when model access is possible, their exact computation may be prohibitively expensive. We investigate whether meaningful Shapley value estimations can be obtained in a zero-shot setting, using only the input data distribution and no evaluations of the target model. To this end, we introduce ExplainerPFN, a tabular foundation model built on TabPFN that is pretrained on synthetic datasets generated from random structural causal models and supervised using exact or near-exact Shapley values. Once trained, ExplainerPFN predicts feature attributions for unseen tabular datasets without model access, gradients, or example explanations. Our contributions are fourfold: (1) we show that few-shot learning-based explanations can achieve high fidelity to SHAP values with as few as two reference observations; (2) we propose ExplainerPFN, the first zero-shot method for estimating Shapley values without access to the underlying model or reference explanations; (3) we provide an open-source implementation of ExplainerPFN, including the full training pipeline and synthetic data generator; and (4) through extensive experiments on real and synthetic datasets, we show that ExplainerPFN achieves performance competitive with few-shot surrogate explainers that rely on 2-10 SHAP examples.
https://arxiv.org/abs/2601.23068
Academic Papers
svg
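The cost that motivates the ExplainerPFN abstract above is that exact Shapley values require enumerating every coalition of features. A self-contained sketch of that exact computation (my own illustration, unrelated to the paper's code; the toy value function is hypothetical):

```python
from itertools import combinations
from math import factorial

def shapley(value_fn, n):
    """Exact Shapley values by enumerating every coalition; feasible only
    for small n, which is why approximations (and amortized predictors
    like ExplainerPFN) matter."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n! times i's marginal contribution
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Toy additive model f(x) = 2*x0 + 3*x1 evaluated at x = (1, 1), baseline 0:
phi = shapley(lambda S: 2.0 * (0 in S) + 3.0 * (1 in S), n=2)
```

For an additive model the attributions recover the coefficients exactly, but the loop visits all 2^(n-1) coalitions per feature, and note that `value_fn` queries the model directly, which is precisely the access assumption the zero-shot setting removes.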
948abee66aba19f6d9f1222ef198c17ecf9fafac2d6137615747353b57a8f4e8
2026-02-02T00:00:00-05:00
SplineFlow: Flow Matching for Dynamical Systems with B-Spline Interpolants
arXiv:2601.23072v1 Announce Type: new Abstract: Flow matching is a scalable generative framework for characterizing continuous normalizing flows with wide-ranging applications. However, current state-of-the-art methods are not well-suited for modeling dynamical systems, as they construct conditional paths using linear interpolants that may not capture the underlying state evolution, especially when learning higher-order dynamics from irregularly sampled observations. Constructing unified paths that satisfy multi-marginal constraints across observations is challenging, since naïve higher-order polynomials tend to be unstable and oscillatory. We introduce SplineFlow, a theoretically grounded flow matching algorithm that jointly models conditional paths across observations via B-spline interpolation. Specifically, SplineFlow exploits the smoothness and stability of B-spline bases to learn the complex underlying dynamics in a structured manner while ensuring the multi-marginal requirements are met. Comprehensive experiments across various deterministic and stochastic dynamical systems of varying complexity, as well as on cellular trajectory inference tasks, demonstrate the strong improvement of SplineFlow over existing baselines. Our code is available at: https://github.com/santanurathod/SplineFlow.
https://arxiv.org/abs/2601.23072
Academic Papers
svg
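The B-spline machinery the abstract above builds on can be shown in a few lines via the Cox-de Boor recursion. This is a generic textbook sketch, not SplineFlow's construction; the knot vector and control points below are toy choices of mine.

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree k at t.
    Zero-denominator terms are skipped (the standard 0/0 := 0 convention)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + k] > knots[i]:
        val += (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        val += (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, t, knots)
    return val

def bspline_point(t, control, knots, k=3):
    """Point on the spline: basis-weighted combination of control points."""
    return sum(bspline_basis(i, k, t, knots) * c for i, c in enumerate(control))

# Clamped cubic spline with 5 control points: knot multiplicity k+1 at the
# ends makes the curve start and end exactly at the boundary values,
# the kind of marginal constraint a naive high-order polynomial handles badly.
knots = [0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0, 1.0]
control = [0.0, 1.0, 0.5, -0.5, 0.0]
```

Two properties make these bases attractive for conditional paths: they form a partition of unity (so the curve stays in the convex hull of nearby control points, avoiding oscillation), and each basis has local support, so an observation only influences the path near its own time.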
1cb8d20338b6e3f00622cb323bca4d00867c7c4274646b1896dba1ed8b50f717
2026-02-02T00:00:00-05:00
Computing braids from approximate data
arXiv:2601.23073v1 Announce Type: new Abstract: We study the theoretical and practical aspects of computing braids described by approximate descriptions of paths in the plane. Exact algorithms rely on the lexicographic ordering of the points in the plane, which is unstable under numerical uncertainty. Instead, we formalize an input model for approximate data, based on a separation predicate. It applies, for example, to paths obtained by tracking the roots of a parametrized polynomial with complex coefficients, thereby connecting certified path tracking outputs to exact braid computation.
https://arxiv.org/abs/2601.23073
Academic Papers
svg
35f4c41534d4be8baaac9f493cdf089986804269d58256b8254fc926434dbea5
2026-02-02T00:00:00-05:00
RN-D: Discretized Categorical Actors with Regularized Networks for On-Policy Reinforcement Learning
arXiv:2601.23075v1 Announce Type: new Abstract: On-policy deep reinforcement learning remains a dominant paradigm for continuous control, yet standard implementations rely on Gaussian actors and relatively shallow MLP policies, often leading to brittle optimization when gradients are noisy and policy updates must be conservative. In this paper, we revisit policy representation as a first-class design choice for on-policy optimization. We study discretized categorical actors that represent each action dimension with a distribution over bins, yielding a policy objective that resembles a cross-entropy loss. Building on architectural advances from supervised learning, we further propose regularized actor networks, while keeping critic design fixed. Our results show that simply replacing the standard actor network with our discretized regularized actor yields consistent gains and achieves state-of-the-art performance across diverse continuous-control benchmarks.
https://arxiv.org/abs/2601.23075
Academic Papers
svg
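The discretized categorical actor described in the abstract above boils down to binning each action dimension and scoring actions with a log-softmax. A minimal NumPy sketch of that idea (the helper names, bin count, and nearest-bin snapping are my assumptions, not the paper's exact parameterization):

```python
import numpy as np

def make_bins(low, high, n_bins):
    """Centers of n_bins equal-width bins covering one action dimension."""
    edges = np.linspace(low, high, n_bins + 1)
    return (edges[:-1] + edges[1:]) / 2.0

def categorical_log_prob(logits, action, centers):
    """Score a continuous action under the discretized actor: snap the action
    to its nearest bin, then read off that bin's log-softmax probability,
    exactly the quantity a cross-entropy-style policy objective would use."""
    idx = int(np.abs(centers - action).argmin())
    log_z = np.log(np.exp(logits - logits.max()).sum()) + logits.max()  # stable log-partition
    return logits[idx] - log_z, idx

centers = make_bins(-1.0, 1.0, n_bins=11)
# A uniform actor (all-zero logits) assigns each of the 11 bins probability 1/11:
logp, idx = categorical_log_prob(np.zeros(11), action=0.0, centers=centers)
```

Compared with a Gaussian head, the categorical head can represent skewed or multimodal per-dimension action distributions, which is the representational freedom the paper leans on.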
109cc753d80ac08a20a9cfbe402d198c7b74cc9cced9663e5ff18a8f6b8a5f1f
2026-02-02T00:00:00-05:00
Robust and Generalized Humanoid Motion Tracking
arXiv:2601.23080v1 Announce Type: new Abstract: Learning a general humanoid whole-body controller is challenging because practical reference motions can exhibit noise and inconsistencies after being transferred to the robot domain, and local defects may be amplified by closed-loop execution, causing drift or failure in highly dynamic and contact-rich behaviors. We propose a dynamics-conditioned command aggregation framework that uses a causal temporal encoder to summarize recent proprioception and a multi-head cross-attention command encoder to selectively aggregate a context window based on the current dynamics. We further integrate a fall recovery curriculum with random unstable initialization and an annealed upward assistance force to improve robustness and disturbance rejection. The resulting policy requires only about 3.5 hours of motion data and supports single-stage end-to-end training without distillation. The proposed method is evaluated under diverse reference inputs and challenging motion regimes, demonstrating zero-shot transfer to unseen motions as well as robust sim-to-real transfer on a physical humanoid robot.
https://arxiv.org/abs/2601.23080
Academic Papers
svg
c70c1f1f0b16306afaf246eb256ffeb1c30e7081e67f03afe6d71e786c5ea17d
2026-02-02T00:00:00-05:00
Character as a Latent Variable in Large Language Models: A Mechanistic Account of Emergent Misalignment and Conditional Safety Failures
arXiv:2601.23081v1 Announce Type: new Abstract: Emergent Misalignment refers to a failure mode in which fine-tuning large language models (LLMs) on narrowly scoped data induces broadly misaligned behavior. Prior explanations mainly attribute this phenomenon to the generalization of erroneous or unsafe content. In this work, we show that this view is incomplete. Across multiple domains and model families, we find that fine-tuning models on data exhibiting specific character-level dispositions induces substantially stronger and more transferable misalignment than incorrect-advice fine-tuning, while largely preserving general capabilities. This indicates that emergent misalignment arises from stable shifts in model behavior rather than from capability degradation or corrupted knowledge. We further show that such behavioral dispositions can be conditionally activated by both training-time triggers and inference-time persona-aligned prompts, revealing shared structure across emergent misalignment, backdoor activation, and jailbreak susceptibility. Overall, our results identify character formation as a central and underexplored alignment risk, suggesting that robust alignment must address behavioral dispositions rather than isolated errors or prompt-level defenses.
https://arxiv.org/abs/2601.23081
Academic Papers
svg
8a8447391609557a1a2ec983bdea92bbe357dfe266a99df5e115febbe41076c6
2026-02-02T00:00:00-05:00
A Complete Finitary Refinement Type System for Scott-Open Properties
arXiv:2601.23082v1 Announce Type: new Abstract: We are interested in proving input-output properties of functions that handle infinite data such as streams or non-wellfounded trees. We provide a finitary refinement type system which is sound and complete for Scott-open properties defined in a fixpoint-like logic. Working on top of Abramsky's Domain Theory in Logical Form, we build from the well-known fact that the Scott domains interpreting recursive types are spectral spaces. The usual symmetry between Scott-open and compact-saturated sets is reflected in logical polarities: positive formulae allow for least fixpoints and define Scott-open properties, while negative formulae allow for greatest fixpoints and define compact-saturated properties. A realizability implication with the usual (contra)variance on polarities allows for non-trivial input-output properties to be formulated as positive formulae on function types.
https://arxiv.org/abs/2601.23082
Academic Papers
svg
c217a7a5b35333c1466501d735f7591a3b8b613b79ddf27a4216ff6444bbde0a
2026-02-02T00:00:00-05:00
Solving 4-Block Integer Linear Programs Faster Using Affine Decompositions of the Right-Hand Sides
arXiv:2601.23083v1 Announce Type: new Abstract: We present a new and faster algorithm for the 4-block integer linear programming problem, overcoming the long-standing runtime barrier faced by previous algorithms that rely on Graver complexity or proximity bounds. The 4-block integer linear programming problem asks to compute $\min\{c_0^\top x_0+c_1^\top x_1+\dots+c_n^\top x_n\ \vert\ Ax_0+Bx_1+\dots+Bx_n=b_0,\ Cx_0+Dx_i=b_i\ \forall i\in[n],\ (x_0,x_1,\dots,x_n)\in\mathbb Z_{\ge0}^{(1+n)k}\}$ for some $k\times k$ matrices $A,B,C,D$ with coefficients bounded by $\overline\Delta$ in absolute value. Our algorithm runs in time $f(k,\overline\Delta)\cdot n^{k+\mathcal O(1)}$, improving upon the previous best running time of $f(k,\overline\Delta)\cdot n^{k^2+\mathcal O(1)}$ [Oertel, Paat, and Weismantel (Math. Prog. 2024), Chen, Kouteck\'y, Xu, and Shi (ESA 2020)]. Further, we give the first algorithm that can handle large coefficients in $A, B$ and $C$, that is, it has a running time that depends only polynomially on the encoding length of these coefficients. We obtain these results by extending the $n$-fold integer linear programming algorithm of Cslovjecsek, Kouteck\'y, Lassota, Pilipczuk, and Polak (SODA 2024) to incorporate additional global variables $x_0$. The central technical result is showing that the exhaustive use of the vector rearrangement lemma of Cslovjecsek, Eisenbrand, Pilipczuk, Venzin, and Weismantel (ESA 2021) can be made \emph{affine} by carefully guessing both the residue of the global variables modulo a large modulus and a face in a suitable hyperplane arrangement among a sufficiently small number of candidates. This facilitates a dynamic high-multiplicity encoding of a \emph{faithfully decomposed} $n$-fold ILP with bounded right-hand sides, which we can solve efficiently for each such guess.
https://arxiv.org/abs/2601.23083
Academic Papers
svg
35b1b5450c405e119106d160b3d27416c74bc10892f7c470c0145e7e0eb08050
2026-02-02T00:00:00-05:00
OrLog: Resolving Complex Queries with LLMs and Probabilistic Reasoning
arXiv:2601.23085v1 Announce Type: new Abstract: Resolving complex information needs that come with multiple constraints should consider enforcing the logical operators encoded in the query (i.e., conjunction, disjunction, negation) on the candidate answer set. Current retrieval systems either ignore these constraints in neural embeddings or approximate them in a generative reasoning process that can be inconsistent and unreliable. Although well-suited to structured reasoning, existing neuro-symbolic approaches remain confined to formal logic or mathematics problems as they often assume unambiguous queries and access to complete evidence, conditions rarely met in information retrieval. To bridge this gap, we introduce OrLog, a neuro-symbolic retrieval framework that decouples predicate-level plausibility estimation from logical reasoning: a large language model (LLM) provides plausibility scores for atomic predicates in one decoding-free forward pass, from which a probabilistic reasoning engine derives the posterior probability of query satisfaction. We evaluate OrLog across multiple backbone LLMs, varying levels of access to external knowledge, and a range of logical constraints, and compare it against base retrievers and LLM-as-reasoner methods. Provided with entity descriptions, OrLog can significantly boost top-rank precision compared to LLM reasoning with larger gains on disjunctive queries. OrLog is also more efficient, cutting mean tokens by $\sim$90\% per query-entity pair. These results demonstrate that generation-free predicate plausibility estimation combined with probabilistic reasoning enables constraint-aware retrieval that outperforms monolithic reasoning while using far fewer tokens.
https://arxiv.org/abs/2601.23085
Academic Papers
svg
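The probabilistic reasoning step in the OrLog abstract above, deriving a posterior for a logical query from per-predicate plausibilities, can be sketched with the standard independence rules for conjunction, disjunction, and negation. This is my own toy rendering, not the paper's engine, and the plausibility numbers are hypothetical:

```python
def q_not(p):
    """P(NOT A) given predicate plausibility p."""
    return 1.0 - p

def q_and(ps):
    """P(A1 AND ... AND Ak) under independence: product of plausibilities."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def q_or(ps):
    """P(A1 OR ... OR Ak) under independence: complement of the product
    of complements."""
    out = 1.0
    for p in ps:
        out *= 1.0 - p
    return 1.0 - out

# "(A AND B) OR (NOT C)" with hypothetical LLM plausibilities A=0.9, B=0.8, C=0.4:
posterior = q_or([q_and([0.9, 0.8]), q_not(0.4)])
```

The division of labor mirrors the framework: an LLM supplies the atomic plausibilities in one forward pass, and the logical structure is enforced exactly by the combinator, rather than approximately by generative reasoning.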
70548251cc281c834d4d86691e6a7b8bb4f8f475a8764b4d1b95bae3805c8abd
2026-02-02T00:00:00-05:00
Chain-of-thought obfuscation learned from output supervision can generalise to unseen tasks
arXiv:2601.23086v1 Announce Type: new Abstract: Chain-of-thought (CoT) reasoning provides a significant performance uplift to LLMs by enabling planning, exploration, and deliberation of their actions. CoT is also a powerful tool for monitoring the behaviours of these agents: when faithful, CoT traces offer interpretations of the model's decision-making process, and an early warning sign for dangerous behaviours. However, optimisation pressures placed on the CoT may cause the model to obfuscate reasoning traces, losing this beneficial property. We show that obfuscation can generalise across tasks; models that learn to obfuscate reasoning involving reward hacking (e.g. accessing and utilising leaked information) generalise both the reward hacking behaviour and its obfuscation in CoT to unseen reward hacking settings. Most worryingly, we show that obfuscation of CoT reasoning, and its generalisation across tasks, also follows when we penalise only the model's final actions after closing its CoT. Our findings suggest that current practices of penalising harmful generations may inadvertently lead to a reduction in the broader monitorability of LLMs in unpredictable ways.
https://arxiv.org/abs/2601.23086
Academic Papers
svg
91412e4ceed3f880d60df965dfdd4b42f182024bda3344a538c54d5e0d19c2ac
2026-02-02T00:00:00-05:00
Temporally Coherent Imitation Learning via Latent Action Flow Matching for Robotic Manipulation
arXiv:2601.23087v1 Announce Type: new Abstract: Learning long-horizon robotic manipulation requires jointly achieving expressive behavior modeling, real-time inference, and stable execution, which remains challenging for existing generative policies. Diffusion-based approaches provide strong modeling capacity but typically incur high inference latency, while flow matching enables fast one-step generation yet often leads to unstable execution when applied directly in the raw action space. We propose LG-Flow Policy, a trajectory-level imitation learning framework that performs flow matching in a continuous latent action space. By encoding action sequences into temporally regularized latent trajectories and learning an explicit latent-space flow, the proposed approach decouples global motion structure from low-level control noise, resulting in smooth and reliable long-horizon execution. LG-Flow Policy further incorporates geometry-aware point cloud conditioning and execution-time multimodal modulation, with visual cues evaluated as a representative modality in real-world settings. Experimental results in simulation and on physical robot platforms demonstrate that LG-Flow Policy achieves near single-step inference, substantially improves trajectory smoothness and task success over flow-based baselines operating in the raw action space, and remains significantly more efficient than diffusion-based policies.
https://arxiv.org/abs/2601.23087
Academic Papers
svg
1fb818896b237eb811504b6a44cae24418b689227c63966eb34bc5746723dae0
2026-02-02T00:00:00-05:00
From Similarity to Vulnerability: Key Collision Attack on LLM Semantic Caching
arXiv:2601.23088v1 Announce Type: new Abstract: Semantic caching has emerged as a pivotal technique for scaling LLM applications, widely adopted by major providers including AWS and Microsoft. By utilizing semantic embedding vectors as cache keys, this mechanism effectively minimizes latency and redundant computation for semantically similar queries. In this work, we conceptualize semantic cache keys as a form of fuzzy hashes. We demonstrate that the locality required to maximize cache hit rates fundamentally conflicts with the cryptographic avalanche effect necessary for collision resistance. Our conceptual analysis formalizes this inherent trade-off between performance (locality) and security (collision resilience), revealing that semantic caching is naturally vulnerable to key collision attacks. While prior research has focused on side-channel and privacy risks, we present the first systematic study of integrity risks arising from cache collisions. We introduce CacheAttack, an automated framework for launching black-box collision attacks. We evaluate CacheAttack in security-critical tasks and agentic workflows. It achieves a hit rate of 86\% in LLM response hijacking and can induce malicious behaviors in LLM agents, while preserving strong transferability across different embedding models. A case study on a financial agent further illustrates the real-world impact of these vulnerabilities. Finally, we discuss mitigation strategies.
https://arxiv.org/abs/2601.23088
Academic Papers
svg
af957d428dcdf61619e9bb09393bad6a5393baeb23cde221e305c460b34b065f
2026-02-02T00:00:00-05:00
Omni-fMRI: A Universal Atlas-Free fMRI Foundation Model
arXiv:2601.23090v1 Announce Type: new Abstract: Self-supervised fMRI foundation models have shown promising transfer performance, yet most rely on predefined region-level parcellations that discard fine-grained voxel information and introduce atlas-dependent biases. We propose Omni-fMRI, an atlas-free foundation model that operates directly on voxel-level signals. To enable scalable pretraining on 49,497 fMRI sessions across nine datasets, Omni-fMRI introduces a dynamic patching mechanism that substantially reduces computational cost while preserving informative spatial structure. To support reproducibility and fair comparison, we establish a comprehensive benchmark suite spanning 11 datasets and a diverse set of resting-state and task-based fMRI tasks. Experimental results demonstrate that Omni-fMRI consistently outperforms existing foundation models, providing a scalable and reproducible framework for atlas-free brain representation learning. Code and logs are available.
https://arxiv.org/abs/2601.23090
Academic Papers
svg
5ff41a48c4a3b330cb9d17f6ea82ab37d11ba272a7c48e235fb583ccaef548a9
2026-02-02T00:00:00-05:00
WiFiPenTester: Advancing Wireless Ethical Hacking with Governed GenAI
arXiv:2601.23092v1 Announce Type: new Abstract: Wireless ethical hacking relies heavily on skilled practitioners manually interpreting reconnaissance results and executing complex, time-sensitive sequences of commands to identify vulnerable targets, capture authentication handshakes, and assess password resilience, a process that is inherently labour-intensive, difficult to scale, and prone to subjective judgement and human error. To help address these limitations, we propose WiFiPenTester, an experimental, governed, and reproducible system for GenAI-enabled wireless ethical hacking. The system integrates large language models into the reconnaissance and decision-support phases of wireless security assessment, enabling intelligent target ranking, attack feasibility estimation, and strategy recommendation, while preserving strict human-in-the-loop control and budget-aware execution. We describe the system architecture, threat model, governance mechanisms, and prompt-engineering methodology, and report empirical experiments conducted across multiple wireless environments. The results demonstrate that GenAI assistance improves target selection accuracy and overall assessment efficiency, while maintaining auditability and ethical safeguards. This indicates that WiFiPenTester is a meaningful step toward practical, safe, and scalable GenAI-assisted wireless penetration testing, while reinforcing the necessity of bounded autonomy, human oversight, and rigorous governance mechanisms when deploying GenAI in ethical hacking.
https://arxiv.org/abs/2601.23092
Academic Papers
svg
df33ecf447dcbdd224034ed9b92552ebf17863bbebc1066b4f27709d59aadb89
2026-02-02T00:00:00-05:00
Safer Policy Compliance with Dynamic Epistemic Fallback
arXiv:2601.23094v1 Announce Type: new Abstract: Humans develop a series of cognitive defenses, known as epistemic vigilance, to combat risks of deception and misinformation from everyday interactions. Developing safeguards for LLMs inspired by this mechanism might be particularly helpful for their application in high-stakes tasks such as automating compliance with data privacy laws. In this paper, we introduce Dynamic Epistemic Fallback (DEF), a dynamic safety protocol for improving an LLM's inference-time defenses against deceptive attacks that make use of maliciously perturbed policy texts. Through various levels of one-sentence textual cues, DEF nudges LLMs to flag inconsistencies, refuse compliance, and fall back to their parametric knowledge upon encountering perturbed policy texts. Using globally recognized legal policies such as HIPAA and GDPR, our empirical evaluations report that DEF effectively improves the capability of frontier LLMs to detect and refuse perturbed versions of policies, with DeepSeek-R1 achieving a 100% detection rate in one setting. This work encourages further efforts to develop cognitively inspired defenses to improve LLM robustness against forms of harm and deception that exploit legal artifacts.
https://arxiv.org/abs/2601.23094
Academic Papers
svg
14d89ffce21e4604d8049d97ae573857348ac2c3083f69f2dc8da50b168830c7
2026-02-02T00:00:00-05:00
Exploring Sidewalk Sheds in New York City through Chatbot Surveys and Human Computer Interaction
arXiv:2601.23095v1 Announce Type: new Abstract: Sidewalk sheds are a common feature of the streetscape in New York City, reflecting ongoing construction and maintenance activities. However, policymakers and local business owners have raised concerns about reduced storefront visibility and altered pedestrian navigation. Although sidewalk sheds are widely used for safety, their effects on pedestrian visibility and movement are not directly measured in current planning practices. To address this, we developed an AI-based chatbot survey that collects image-based annotations and route choices from pedestrians, linking these responses to specific shed design features, including clearance height, post spacing, and color. This AI chatbot survey integrates a large language model (e.g., Google's Gemini-1.5-flash-001 model) with an image-annotation interface, allowing users to interact with street images, mark visual elements, and provide structured feedback through guided dialogue. To explore pedestrian perceptions and behaviors, this paper conducts a grid-based analysis of entrance annotations and applies logistic mixed-effects modeling to assess sidewalk choice patterns. Analysis of the dataset (n = 25) shows that: (1) the presence of scaffolding significantly reduces pedestrians' ability to identify ground-floor retail entrances, and (2) variations in weather conditions and shed design features significantly influence sidewalk selection behavior. By integrating generative AI into urban research, this study demonstrates a novel method for evaluating sidewalk shed designs and provides empirical evidence to support adjustments to shed guidelines that improve the pedestrian experience without compromising safety.
https://arxiv.org/abs/2601.23095
Academic Papers
svg
079fe2795509b9e3a36bba00c57d69dafbdbf478ecff23f8e1aa0c2368c429a4
2026-02-02T00:00:00-05:00
CATTO: Balancing Preferences and Confidence in Language Models
arXiv:2601.23096v1 Announce Type: new Abstract: Large language models (LLMs) often make accurate next token predictions but their confidence in these predictions can be poorly calibrated: high-confidence predictions are frequently wrong, and low-confidence predictions may be correct. This miscalibration is exacerbated by preference-based alignment methods breaking the link between predictive probability and correctness. We introduce a Calibration Aware Token-level Training Objective (CATTO), a calibration-aware objective that aligns predicted confidence with empirical prediction correctness, which can be combined with the original preference optimization objectives. Empirically, CATTO reduces Expected Calibration Error (ECE) by 2.22%-7.61% in-distribution and 1.46%-10.44% out-of-distribution compared to direct preference optimization (DPO), and by 0.22%-1.24% in-distribution and 1.23%-5.07% out-of-distribution compared to the strongest DPO baseline. This improvement in calibration does not come at the cost of task accuracy: CATTO maintains or slightly improves multiple-choice question-answering accuracy on five datasets. We also introduce Confidence@k, a test-time scaling mechanism leveraging calibrated token probabilities for Bayes-optimal selection of output tokens.
https://arxiv.org/abs/2601.23096
Academic Papers
svg
b7eb15099a313f98d3cf4d34c25e2e5679a577a181e17fba8d5108c2510166dd
2026-02-02T00:00:00-05:00
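The Expected Calibration Error that the CATTO abstract reports can be computed with the standard equal-width binning estimator. This is a generic sketch of the metric, not the paper's code; the toy confidences are invented for illustration.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-bin ECE: weighted average over bins of
    |empirical accuracy - mean confidence|."""
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for c, y in zip(confidences, correct):
        bins[min(int(c * n_bins), n_bins - 1)].append((c, y))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(acc - mean_conf)
    return ece

# Overconfident high-confidence bin plus underconfident mid bin:
ece = expected_calibration_error([0.95, 0.95, 0.55, 0.55], [1, 0, 1, 1])
```

Both bins miss their stated confidence by 0.45 here, so the weighted sum is 0.45; a calibration-aware objective drives this quantity toward zero.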
Rethinking Transferable Adversarial Attacks on Point Clouds from a Compact Subspace Perspective
arXiv:2601.23102v1 Announce Type: new Abstract: Transferable adversarial attacks on point clouds remain challenging, as existing methods often rely on model-specific gradients or heuristics that limit generalization to unseen architectures. In this paper, we rethink adversarial transferability from a compact subspace perspective and propose CoSA, a transferable attack framework that operates within a shared low-dimensional semantic space. Specifically, each point cloud is represented as a compact combination of class-specific prototypes that capture shared semantic structure, while adversarial perturbations are optimized within a low-rank subspace to induce coherent and architecture-agnostic variations. This design suppresses model-dependent noise and constrains perturbations to semantically meaningful directions, thereby improving cross-model transferability without relying on surrogate-specific artifacts. Extensive experiments on multiple datasets and network architectures demonstrate that CoSA consistently outperforms state-of-the-art transferable attacks, while maintaining competitive imperceptibility and robustness under common defense strategies. Code will be made public upon paper acceptance.
https://arxiv.org/abs/2601.23102
Academic Papers
svg
93748d5809addc5b24ea979744823a0abed9878847ba4cedba699a86877f65d7
2026-02-02T00:00:00-05:00
Lossy Compression of Cellular Network KPIs
arXiv:2601.23105v1 Announce Type: new Abstract: Network Key Performance Indicators (KPIs) are a fundamental component of mobile cellular network monitoring and optimization. Their massive volume, resulting from fine-grained measurements collected across many cells over long time horizons, poses significant challenges for storage, transport, and large-scale analysis. In this letter, we show that common cellular KPIs can be efficiently compressed using standard lossy compression schemes based on prediction, quantization, and entropy coding, achieving substantial reductions in reporting overhead. Focusing on traffic volume KPIs, we first characterize their intrinsic compressibility through a rate-distortion analysis, showing that signal-to-noise ratios around 30 dB can be achieved using only 3-4 bits per sample, corresponding to an 8-10x reduction with respect to 32-bit floating-point representations. We then assess the impact of KPI compression on representative downstream analytics tasks. Our results show that aggregation across cells mitigates quantization errors and that prediction accuracy is unaffected beyond a moderate reporting rate. These findings indicate that KPI compression is feasible and transparent to network-level analytics in cellular systems.
https://arxiv.org/abs/2601.23105
Academic Papers
svg
9eb7f76f49ccf3b47ef9e1b5897cddb7db2a5c3a8787262bcb55bbc2c0c48a01
2026-02-02T00:00:00-05:00
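The prediction, quantization, and entropy coding pipeline described in the KPI-compression abstract can be sketched end to end with a closed-loop previous-sample predictor on a synthetic signal. This is a toy DPCM sketch, not the letter's actual scheme; the sine "traffic volume" signal and step size are assumptions for illustration.

```python
import math
from collections import Counter

def compress_stats(signal, step):
    """Closed-loop DPCM sketch: predict each sample with the previous
    *reconstructed* value, uniformly quantize the residual, and report
    reconstruction SNR (dB) plus the empirical entropy of the symbol
    stream (a lower bound on the entropy coder's bits/sample)."""
    recon, symbols, prev = [], [], 0.0
    for x in signal:
        q = round((x - prev) / step)   # quantizer index for the residual
        symbols.append(q)
        prev += q * step               # decoder-side reconstruction
        recon.append(prev)
    sig_p = sum(x * x for x in signal)
    err_p = sum((x - r) ** 2 for x, r in zip(signal, recon)) or 1e-12
    snr_db = 10 * math.log10(sig_p / err_p)
    n = len(symbols)
    entropy = -sum((c / n) * math.log2(c / n)
                   for c in Counter(symbols).values())
    return snr_db, entropy

# Smooth synthetic KPI: small residuals make the symbol stream cheap to code.
snr_db, bits = compress_stats(
    [100 * math.sin(2 * math.pi * t / 50) for t in range(500)], step=1.0)
```

On this smooth signal the sketch already lands in the regime the letter describes: SNR well above 30 dB at only a few bits per sample.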
FlowCalib: LiDAR-to-Vehicle Miscalibration Detection using Scene Flows
arXiv:2601.23107v1 Announce Type: new Abstract: Accurate sensor-to-vehicle calibration is essential for safe autonomous driving. Angular misalignments of LiDAR sensors can lead to safety-critical issues during autonomous operation. However, current methods primarily focus on correcting sensor-to-sensor errors without considering the miscalibration of individual sensors that cause these errors in the first place. We introduce FlowCalib, the first framework that detects LiDAR-to-vehicle miscalibration using motion cues from the scene flow of static objects. Our approach leverages the systematic bias induced by rotational misalignment in the flow field generated from sequential 3D point clouds, eliminating the need for additional sensors. The architecture integrates a neural scene flow prior for flow estimation and incorporates a dual-branch detection network that fuses learned global flow features with handcrafted geometric descriptors. These combined representations allow the system to perform two complementary binary classification tasks: a global binary decision indicating whether misalignment is present and separate, axis-specific binary decisions indicating whether each rotational axis is misaligned. Experiments on the nuScenes dataset demonstrate FlowCalib's ability to robustly detect miscalibration, establishing a benchmark for sensor-to-vehicle miscalibration detection.
https://arxiv.org/abs/2601.23107
Academic Papers
svg
e25be8926fa0ef671509aa7496c4ec9b40a867a72c2cb8c42e97877c78b96e62
2026-02-02T00:00:00-05:00
Energy Management Strategies for Electric Aircraft Charging Leveraging Active Landside Vehicle-to-Grid
arXiv:2601.23108v1 Announce Type: new Abstract: The deployment of medium-range battery electric aircraft is a promising pathway to improve the environmental footprint of air mobility. Yet such a deployment would be accompanied by significant electric power requirements at airports due to aircraft charging. Given the growing prevalence of electric vehicles and their bi-directional charging capabilities--so-called vehicle-to-grid (V2G)--we study energy buffer capabilities of parked electric vehicles to alleviate pressure on grid connections. To this end, we present energy management strategies for airports providing cost-optimal apron and landside V2G charge scheduling. Specifically, we first formulate the optimal energy management problem of joint aircraft charging and landside V2G coordination as a linear program, whereby we use partial differential equations to model the aggregated charging dynamics of the electric vehicle fleet. Second, we consider a shuttle flight network with a single hub of a large Dutch airline, real-world grid prices, and synthetic parking garage occupancy data to test our framework. Our results show that V2G at even a single airport can indeed reduce energy costs to charge the aircraft fleet: Compared to a baseline scenario without V2G, the proposed concept yields cost savings of up to 32%, depending on the schedule and the number of participating vehicles, and has other potential beneficial effects on the local power grid, e.g., the reduction of potential power peaks.
https://arxiv.org/abs/2601.23108
Academic Papers
svg
8582be17ba52ecf5b4a5b2d9e78e051eb2295facd37fc08fdfee2ad82d095c9f
2026-02-02T00:00:00-05:00
How should AI Safety Benchmarks Benchmark Safety?
arXiv:2601.23112v1 Announce Type: new Abstract: AI safety benchmarks are pivotal for safety in advanced AI systems; however, they have significant technical, epistemic, and sociotechnical shortcomings. We present a review of 210 safety benchmarks that maps out common challenges in safety benchmarking, documenting failures and limitations by drawing from engineering sciences and long-established theories of risk and safety. We argue that adhering to established risk management principles, mapping the space of what can(not) be measured, developing robust probabilistic metrics, and efficiently deploying measurement theory to connect benchmarking objectives with the world can significantly improve the validity and usefulness of AI safety benchmarks. The review provides a roadmap on how to improve AI safety benchmarking, and we illustrate the effectiveness of these recommendations through quantitative and qualitative evaluation. We also introduce a checklist that can help researchers and practitioners develop robust and epistemologically sound safety benchmarks. This study advances the science of benchmarking and helps practitioners deploy AI systems more responsibly.
https://arxiv.org/abs/2601.23112
Academic Papers
svg
d6fa49a8b75e04061ffc5a92035170d1abe17773d4b6551b26354a56cc384e02
2026-02-02T00:00:00-05:00
To See Far, Look Close: Evolutionary Forecasting for Long-term Time Series
arXiv:2601.23114v1 Announce Type: new Abstract: The prevailing Direct Forecasting (DF) paradigm dominates Long-term Time Series Forecasting (LTSF) by forcing models to predict the entire future horizon in a single forward pass. While efficient, this rigid coupling of output and evaluation horizons necessitates computationally prohibitive re-training for every target horizon. In this work, we uncover a counter-intuitive optimization anomaly: models trained on short horizons, when coupled with our proposed Evolutionary Forecasting (EF) paradigm, significantly outperform those trained directly on long horizons. We attribute this success to the mitigation of a fundamental optimization pathology inherent in DF, where conflicting gradients from distant futures cripple the learning of local dynamics. We establish EF as a unified generative framework, proving that DF is merely a degenerate special case of EF. Extensive experiments demonstrate that a single EF model surpasses task-specific DF ensembles across standard benchmarks and exhibits robust asymptotic stability in extreme extrapolation. This work propels a paradigm shift in LTSF: moving from passive Static Mapping to autonomous Evolutionary Reasoning.
https://arxiv.org/abs/2601.23114
Academic Papers
svg
c91ca98c0654aef78c9b782276e5c72e00d4bd810b7e5b905c836c7d9968ede4
2026-02-02T00:00:00-05:00
An Automatic Deep Learning Approach for Trailer Generation through Large Language Models
arXiv:2601.23121v1 Announce Type: new Abstract: Trailers are short promotional videos designed to provide audiences with a glimpse of a movie. The process of creating a trailer typically involves selecting key scenes, dialogues and action sequences from the main content and editing them together in a way that effectively conveys the tone, theme and overall appeal of the movie. This often includes adding music, sound effects, visual effects and text overlays to enhance the impact of the trailer. In this paper, we present a framework exploiting a comprehensive multimodal strategy for automated trailer production. A Large Language Model (LLM) is adopted across various stages of trailer creation. First, it selects the main key visual sequences that are relevant to the movie's core narrative. Then, it extracts the most appealing quotes from the movie, aligning them with the trailer's narrative. Additionally, the LLM assists in creating music backgrounds and voiceovers to enrich the audience's engagement, making the trailer not just a summary of the movie's content but a narrative experience in itself. Results show that our framework generates trailers that are more visually appealing to viewers compared to those produced by previous state-of-the-art competitors.
https://arxiv.org/abs/2601.23121
Academic Papers
svg
9fe06e697bc7b67f1f04b0e83cccadac792d3c247a50b769127f96d5aa0d41a7
2026-02-02T00:00:00-05:00
Greedy Routing Reachability Games
arXiv:2601.23126v1 Announce Type: new Abstract: Today's networks consist of many interconnected autonomous entities that follow their own objectives, e.g., smart devices or parts of large AI systems. Given the size and complexity of most communication networks, each entity typically only has a local view and thus must rely on a local routing protocol for sending and forwarding packets. A common solution for this is greedy routing, where packets are locally forwarded to a neighbor in the network that is closer to the packet's destination. In this paper we investigate a game-theoretic model with autonomous agents that aim at forming a network where greedy routing is enabled. The agents are positioned in a metric space and each agent tries to establish as few links as possible, while maintaining that it can reach every other agent via greedy routing. Thus, this model captures how greedy routing networks are formed without any assumption on the distribution of the agents or the specific employed greedy routing protocol. Hence, it distills the essence that makes greedy routing work. We study two variants of the model: with directed edges or with undirected edges. For the former, we show that equilibria exist, have optimal total cost, and that in Euclidean metrics they can be found efficiently. However, even for this simple setting computing optimal strategies is NP-hard. For the much more challenging setting with undirected edges, we show for the realistic setting with agents in 2D Euclidean space that the price of anarchy is between 1.75 and 1.8, and for higher dimensions it is less than 2. Also, we show that best response dynamics may cycle, but that in Euclidean space almost optimal approximate equilibria can be computed in polynomial time. Moreover, for 2D Euclidean space, these approximate equilibria outperform the well-known Delaunay triangulation.
https://arxiv.org/abs/2601.23126
Academic Papers
svg
83179115837960e579f9a2574c249e25c93365562bc9a1af19f6ab6242260c07
2026-02-02T00:00:00-05:00
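The greedy routing rule this game-theoretic model builds on, forward the packet to a neighbour strictly closer to the destination, can be sketched directly. This is an illustrative check on a tiny hypothetical graph, not the paper's code.

```python
import math

def greedy_route(points, edges, src, dst):
    """Follow greedy routing from src to dst: repeatedly hop to the
    neighbour closest to the destination, provided it is strictly closer
    than the current node; return None if routing gets stuck."""
    def dist(a, b):
        return math.dist(points[a], points[b])
    path, cur = [src], src
    while cur != dst:
        closer = [v for v in edges[cur] if dist(v, dst) < dist(cur, dst)]
        if not closer:
            return None          # no closer neighbour: greedy routing fails
        cur = min(closer, key=lambda v: dist(v, dst))
        path.append(cur)
    return path

# Toy instance: four agents on the plane with undirected links.
points = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0), 3: (0.0, 2.0)}
edges = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0]}
route = greedy_route(points, edges, 3, 2)
```

In the model, each agent's constraint is exactly that this procedure succeeds for every destination, while the agent buys as few links as possible.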
"I Choose to Live, for Life Itself": Understanding Agency of Home-Based Care Patients Through Information Practices and Relational Dynamics in Care Networks
arXiv:2601.23127v1 Announce Type: new Abstract: Home-based care (HBC) delivers medical and care services in patients' living environments, offering unique opportunities for patient-centered care. However, patient agency is often inadequately represented in shared HBC planning processes. Through 23 multi-stakeholder interviews with HBC patients, healthcare professionals, and care workers, alongside 60 hours of ethnographic observations, we examined how patient agency manifests in HBC and why this representation gap occurs. Our findings reveal that patient agency is not a static individual attribute but a relational capacity shaped through maintaining everyday continuity, mutual recognition from care providers, and engagement with material home environments. Furthermore, we identified that structured documentation systems filter out contextual knowledge, informal communication channels fragment patient voices, and doctor-centered hierarchies position patients as passive recipients. Drawing on these insights, we propose design considerations to bridge this representation gap and to integrate patient agency into shared HBC plans.
https://arxiv.org/abs/2601.23127
Academic Papers
svg
4e6acf4171d302f1a1abacbb2b2cd7345b5d8ceedc1021df2c6b8c48c1199c25
2026-02-02T00:00:00-05:00
Distribution-informed Efficient Conformal Prediction for Full Ranking
arXiv:2601.23128v1 Announce Type: new Abstract: Quantifying uncertainty is critical for the safe deployment of ranking models in real-world applications. Recent work offers a rigorous solution using conformal prediction in a full ranking scenario, which aims to construct prediction sets for the absolute ranks of test items based on the relative ranks of calibration items. However, relying on upper bounds of non-conformity scores renders the method overly conservative, resulting in substantially large prediction sets. To address this, we propose Distribution-informed Conformal Ranking (DCR), which produces efficient prediction sets by deriving the exact distribution of non-conformity scores. In particular, we find that the absolute ranks of calibration items follow Negative Hypergeometric distributions, conditional on their relative ranks. DCR thus uses this rank distribution to derive the non-conformity score distribution and determine conformal thresholds. We provide theoretical guarantees that DCR achieves improved efficiency over the baseline while ensuring valid coverage under mild assumptions. Extensive experiments demonstrate the superiority of DCR, reducing average prediction set size by up to 36%, while maintaining valid coverage.
https://arxiv.org/abs/2601.23128
Academic Papers
svg
ce7598bcc3368a8deb4cd947183d13a4849fdf416c76a14b281a6e05d9acbfbf
2026-02-02T00:00:00-05:00
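The distributional fact DCR rests on, that a calibration item's absolute rank conditional on its relative rank is Negative Hypergeometric, can be written down directly. This is a sketch of the stated fact under exchangeability, not the paper's implementation.

```python
from math import comb

def rank_pmf(r, k, n, m):
    """P(absolute rank = r | relative rank = k) with n calibration items and
    m exchangeable test items mixed in: a Negative Hypergeometric pmf. The
    k-th calibration item sits at absolute rank r iff exactly k-1 calibration
    items and r-k test items precede it."""
    if not (k <= r <= k + m):
        return 0.0
    return comb(r - 1, k - 1) * comb(n + m - r, n - k) / comb(n + m, n)

# Sanity check on a small instance: 5 calibration items, 3 test items,
# calibration item with relative rank 2.
n, m, k = 5, 3, 2
support = range(k, k + m + 1)
pmf = [rank_pmf(r, k, n, m) for r in support]
mean_rank = sum(r * p for r, p in zip(support, pmf))
```

The pmf sums to one over its support, and the mean matches the order-statistic identity $k(n+m+1)/(n+1)$, here $2 \cdot 9 / 6 = 3$; conformal thresholds can then be read off this exact distribution instead of a worst-case bound.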
Evaluating the Utility of Grounding Documents with Reference-Free LLM-based Metrics
arXiv:2601.23129v1 Announce Type: new Abstract: Retrieval Augmented Generation (RAG)'s success depends on the utility the LLM derives from the content used for grounding. Quantifying content utility does not have a definitive specification and existing metrics ignore model-specific capabilities and/or rely on costly annotations. In this paper, we propose Grounding Generation Utility (GroGU), a model-specific and reference-free metric that defines utility as a function of the downstream LLM's generation confidence based on entropy. Despite having no annotation requirements, GroGU is largely faithful in distinguishing ground-truth documents while capturing nuances ignored by LLM-agnostic metrics. We apply GroGU to train a query-rewriter for RAG by identifying high-utility preference data for Direct Preference Optimization. Experiments show improvements by up to 18.2 points in Mean Reciprocal Rank and up to 9.4 points in answer accuracy.
https://arxiv.org/abs/2601.23129
Academic Papers
svg
2f242f6ff4c353c39ae56550094123884a108acf124bea91e83affe1ac733cf6
2026-02-02T00:00:00-05:00
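GroGU's core signal, generation confidence measured via the entropy of the downstream LLM's next-token distributions, can be illustrated with toy distributions. The exact functional form of the metric is the paper's and is not reproduced here; the distributions and the averaging choice below are assumptions.

```python
import math

def token_entropy(dist):
    """Shannon entropy (bits) of one next-token distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def grounding_utility(step_dists):
    """Illustrative utility: higher when the model decodes with lower average
    entropy, i.e. the grounding document leaves the model more confident."""
    return -sum(token_entropy(d) for d in step_dists) / len(step_dists)

# A useful document yields peaked next-token distributions ...
confident = [[0.97, 0.01, 0.01, 0.01]] * 3
# ... while an unhelpful one leaves the model guessing uniformly.
diffuse = [[0.25, 0.25, 0.25, 0.25]] * 3
```

Ranking candidate grounding documents by such a score requires no reference answers, which is the reference-free property the abstract highlights.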
Synthesizing Petri Nets from Labelled Petri Nets using Token Trail Regions
arXiv:2601.23130v1 Announce Type: new Abstract: Synthesis automatically generates a process model from a behavioural specification. When the target model is a Petri net, we address synthesis through region theory. Researchers have studied region-based synthesis extensively for state-based specifications, such as transition systems and step-transition systems, as well as for language-based specifications. Accordingly, in the literature, region theory is divided into two main branches: state-based regions and language-based regions. Using state-based regions, the behavioural specification is a set of global states and related state-transitions. This representation can express conflicts and the merging of global states naturally. However, it suffers from state explosion and cannot express concurrency explicitly. Using language-based regions, the behavioural specification is a set of example runs defined by partially or totally ordered sets of events. This representation can express concurrency and branching naturally. However, it grows rapidly with the number of choices and cannot express the merging of conflicts. Both region definitions have fundamental limitations, so synthesis has so far required a trade-off between the two approaches. In this paper, we lift these limitations by introducing a new region theory that covers both state-based and language-based input. We prove that the new definition is a region meta theory that combines both concepts. It uses specifications given as a set of labelled nets, which allow us to express conflicts, concurrency and merging of local states naturally, and synthesizes a Petri net that simulates all labelled nets of the input specification.
https://arxiv.org/abs/2601.23130
Academic Papers
svg
7fcfcedb0ad78fc814f502fe1e3954272499f5465c428f81408a85896f14ee5b
2026-02-02T00:00:00-05:00
Regularisation in neural networks: a survey and empirical analysis of approaches
arXiv:2601.23131v1 Announce Type: new Abstract: Despite huge successes on a wide range of tasks, neural networks are known to sometimes struggle to generalise to unseen data. Many approaches have been proposed over the years to promote the generalisation ability of neural networks, collectively known as regularisation techniques. These are used as common practice under the assumption that any regularisation added to the pipeline would result in a performance improvement. In this study, we investigate whether this assumption holds in practice. First, we provide a broad review of regularisation techniques, including modern theories such as double descent. We propose a taxonomy of methods under four broad categories, namely: (1) data-based strategies, (2) architecture strategies, (3) training strategies, and (4) loss function strategies. Notably, we highlight the contradictions and correspondences between the approaches in these broad classes. Further, we perform an empirical comparison of the various regularisation techniques on classification tasks for ten numerical and image datasets applied to the multi-layer perceptron and convolutional neural network architectures. Results show that the efficacy of regularisation is dataset-dependent. For example, the use of a regularisation term only improved performance on numeric datasets, whereas batch normalisation improved performance on image datasets only. Generalisation is crucial to machine learning; thus, understanding the effects of applying regularisation techniques, and considering the connections between them is essential to the appropriate use of these methods in practice.
https://arxiv.org/abs/2601.23131
Academic Papers
svg
d7d0d020833cd8b7ca3b4597a150b3d5b83b008aa7338183407133d7b45a66d6
2026-02-02T00:00:00-05:00
Secure Tool Manifest and Digital Signing Solution for Verifiable MCP and LLM Pipelines
arXiv:2601.23132v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly adopted in sensitive domains such as healthcare and financial institutions' data analytics; however, their execution pipelines remain vulnerable to manipulation and unverifiable behavior. Existing control mechanisms, such as the Model Context Protocol (MCP), define compliance policies for tool invocation but lack verifiable enforcement and transparent validation of model actions. To address this gap, we propose a novel Secure Tool Manifest and Digital Signing Framework, a structured and security-aware extension of Model Context Protocols. The framework enforces cryptographically signed manifests, integrates transparent verification logs, and isolates model-internal execution metadata from user-visible components to ensure verifiable execution integrity. Furthermore, the evaluation demonstrates that the framework scales nearly linearly (R-squared = 0.998), achieves near-perfect acceptance of valid executions while consistently rejecting invalid ones, and maintains balanced model utilization across execution pipelines.
https://arxiv.org/abs/2601.23132
Academic Papers
svg
c115886ca19756d1c0ded275e01a4a9e1411892da42d249d83e246bc98672a16
2026-02-02T00:00:00-05:00
RAudit: A Blind Auditing Protocol for Large Language Model Reasoning
arXiv:2601.23133v1 Announce Type: new Abstract: Inference-time scaling can amplify reasoning pathologies: sycophancy, rung collapse, and premature certainty. We present RAudit, a diagnostic protocol for auditing LLM reasoning without ground truth access. The key constraint is blindness: the auditor evaluates only whether derivation steps support conclusions, enabling detection of trace-output inconsistency and, when latent competence exists, its recovery. RAudit measures process quality via CRIT-based reasonableness scores and varies critique formulation to study how social framing affects model response. We prove bounded correction and $O(\log(1/\epsilon))$ termination. Experiments on mathematical reasoning (CAP-GSM8K) and causal judgment (CausalL2) reveal four mechanisms explaining model unreliability: (1) Latent Competence Suppression, where models derive correct answers then overwrite them under social pressure; (2) The False Competence Trap, where weaker judges mask sycophancy that stronger judges expose; (3) The Complexity-Vulnerability Tradeoff, where causal tasks induce more than 10 times higher sycophancy than mathematical tasks; and (4) Iatrogenic Critique, where authoritative correction harms weaker models. These findings challenge assumptions that capability implies robustness and that stronger feedback yields better outputs.
https://arxiv.org/abs/2601.23133
Academic Papers
svg
5c62ed8add874b61fb7bf9dcf0b444c8eac3c6f2e74c44986ab73f3ce82900a6
2026-02-02T00:00:00-05:00
Machine Learning for Energy-Performance-aware Scheduling
arXiv:2601.23134v1 Announce Type: new Abstract: In the post-Dennard era, optimizing embedded systems requires navigating complex trade-offs between energy efficiency and latency. Traditional heuristic tuning is often inefficient in such high-dimensional, non-smooth landscapes. In this work, we propose a Bayesian Optimization framework using Gaussian Processes to automate the search for optimal scheduling configurations on heterogeneous multi-core architectures. We explicitly address the multi-objective nature of the problem by approximating the Pareto Frontier between energy and time. Furthermore, by incorporating Sensitivity Analysis (fANOVA) and comparing different covariance kernels (e.g., Mat\'ern vs. RBF), we provide physical interpretability to the black-box model, revealing the dominant hardware parameters driving system performance.
https://arxiv.org/abs/2601.23134
Academic Papers
svg
bf138b4a836331b525cbb0f66945c82e418667d27ea2a4a93ea83f595a1456f6
2026-02-02T00:00:00-05:00
Why GRPO Needs Normalization: A Local-Curvature Perspective on Adaptive Gradients
arXiv:2601.23135v1 Announce Type: new Abstract: Reinforcement learning (RL) has become a key driver of language model reasoning. Among RL algorithms, Group Relative Policy Optimization (GRPO) is the de facto standard, avoiding the need for a critic by using per-prompt baselines and variance normalization. Yet why and when this normalization helps remains unclear. In this work, we provide an explanation through the lens of local curvature of the sequence-level policy gradient: standard deviation normalization implements an adaptive gradient. Theoretically, under mild conditions, GRPO enjoys a strictly improved convergence rate over unnormalized REINFORCE, with gains characterized by the average within-prompt reward standard deviation across prompts and iterations. Empirically, our analysis on GSM8K and MATH benchmarks reveals three distinct training phases governed by the interplay between feature orthogonality and reward variance: (I) an early acceleration phase where high variance and orthogonality favor adaptive scaling; (II) a relatively stable transition phase; and (III) a late-stage regime where the loss of orthogonality limits further gains. Together, these results provide a principled account of when std normalization helps in GRPO, and offer broader insights into the design of critic-free RL algorithms.
https://arxiv.org/abs/2601.23135
Academic Papers
svg
18edc6fd4e236317365341f4e4c05ff4679597011ca78b4e9d8763c1706ecf34
2026-02-02T00:00:00-05:00
Automated Testing of Prevalent 3D User Interactions in Virtual Reality Applications
arXiv:2601.23139v1 Announce Type: new Abstract: Virtual Reality (VR) technologies offer immersive user experiences across various domains, but present unique testing challenges compared to traditional software. Existing VR testing approaches enable scene navigation and interaction activation, but lack the ability to automatically synthesise realistic 3D user inputs (e.g., grab and trigger actions via hand-held controllers). Automated testing that generates and executes such input remains an unresolved challenge. Furthermore, existing metrics fail to robustly capture diverse interaction coverage. This paper addresses these gaps through four key contributions. First, we empirically identify four prevalent interaction types in nine open-source VR projects: fire, manipulate, socket, and custom. Second, we introduce the Interaction Flow Graph, a novel abstraction that systematically models 3D user interactions by identifying targets, actions, and conditions. Third, we construct XRBench3D, a benchmark comprising ten VR scenes that encompass 456 distinct user interactions for evaluating VR interaction testing. Finally, we present XRintTest, an automated testing approach that leverages this graph for dynamic scene exploration and interaction execution. Evaluation on XRBench3D shows that XRintTest achieves high effectiveness, reaching 93% coverage of fire, manipulate and socket interactions across all scenes, and performing 12x more effectively and 6x more efficiently than random exploration. Moreover, XRintTest can detect runtime exceptions and non-exception interaction issues, including subtle configuration defects. In addition, the Interaction Flow Graph can reveal potential interaction design smells that may compromise intended functionality and hinder testing performance for VR applications.
https://arxiv.org/abs/2601.23139
Academic Papers
svg
f39cde0cc5024bccf2734b220d8aa80e2e6c48279a06a740885fb013d22f17f6
2026-02-02T00:00:00-05:00
From Monolith to Microservices: A Comparative Evaluation of Decomposition Frameworks
arXiv:2601.23141v1 Announce Type: new Abstract: Software modernisation through the migration from monolithic architectures to microservices has become increasingly critical, yet identifying effective service boundaries remains a complex and unresolved challenge. Although numerous automated microservice decomposition frameworks have been proposed, their evaluation is often fragmented due to inconsistent benchmark systems, incompatible metrics, and limited reproducibility, thus hindering objective comparison. This work presents a unified comparative evaluation of state-of-the-art microservice decomposition approaches spanning static, dynamic, and hybrid techniques. Using a consistent metric computation pipeline, we assess the decomposition quality across widely used benchmark systems (JPetStore, AcmeAir, DayTrader, and Plants) using Structural Modularity (SM), Interface Number (IFN), Inter-partition Communication (ICP), Non-Extreme Distribution (NED), and related indicators. Our analysis combines results reported in prior studies with experimentally reproduced outputs from available replication packages. Findings indicate that the hierarchical clustering-based methods, particularly HDBScan, produce the most consistently balanced decompositions across benchmarks, achieving strong modularity while minimizing communication and interface overhead.
https://arxiv.org/abs/2601.23141
Academic Papers
svg
0e2d0379c9aa041ad662a5fd511af65c6994a0ee86610d0292342641c7e0d379
2026-02-02T00:00:00-05:00
Do Good, Stay Longer? Temporal Patterns and Predictors of Newcomer-to-Core Transitions in Conventional OSS and OSS4SG
arXiv:2601.23142v1 Announce Type: new Abstract: Open Source Software (OSS) sustainability relies on newcomers transitioning to core contributors, but this pipeline is broken, with most newcomers becoming inactive after initial contributions. Open Source Software for Social Good (OSS4SG) projects, which prioritize societal impact as their primary mission, may be associated with different newcomer-to-core transition outcomes than conventional OSS projects. We compared 375 projects (190 OSS4SG, 185 OSS), analyzing 92,721 contributors and 3.5 million commits. OSS4SG projects retain contributors at 2.2X higher rates and contributors have 19.6% higher probability of achieving core status. Early broad project exploration predicts core achievement (22.2% importance); conventional OSS concentrates on one dominant pathway (61.62% of transitions) while OSS4SG provides multiple pathways. Contrary to intuition, contributors who invest time learning the project before intensifying their contributions (Late Spike pattern) achieve core status 2.4-2.9X faster (21 weeks) than those who contribute intensively from day one (Early Spike pattern, 51-60 weeks). OSS4SG supports two effective temporal patterns while only Late Spike achieves fastest time-to-core in conventional OSS. Our findings suggest that finding a project aligned with personal values and taking time to understand the codebase before major contributions are key strategies for achieving core status. Our findings show that project mission is associated with measurably different environments for newcomer-to-core transitions and provide evidence-based guidance for newcomers and maintainers.
https://arxiv.org/abs/2601.23142
Academic Papers
svg
7e2012a92068b4f648ed7404bc2426aa0f03c25e44e5d885ac700ad5316ce70e
2026-02-02T00:00:00-05:00
THINKSAFE: Self-Generated Safety Alignment for Reasoning Models
arXiv:2601.23143v1 Announce Type: new Abstract: Large reasoning models (LRMs) achieve remarkable performance by leveraging reinforcement learning (RL) on reasoning tasks to generate long chain-of-thought (CoT) reasoning. However, this over-optimization often prioritizes compliance, making models vulnerable to harmful prompts. To mitigate this safety degradation, recent approaches rely on external teacher distillation, yet this introduces a distributional discrepancy that degrades native reasoning. We propose ThinkSafe, a self-generated alignment framework that restores safety alignment without external teachers. Our key insight is that while compliance suppresses safety mechanisms, models often retain latent knowledge to identify harm. ThinkSafe unlocks this via lightweight refusal steering, guiding the model to generate in-distribution safety reasoning traces. Fine-tuning on these self-generated responses effectively realigns the model while minimizing distribution shift. Experiments on DeepSeek-R1-Distill and Qwen3 show ThinkSafe significantly improves safety while preserving reasoning proficiency. Notably, it achieves superior safety and comparable reasoning to GRPO, with significantly reduced computational cost. Code, models, and datasets are available at https://github.com/seanie12/ThinkSafe.git.
https://arxiv.org/abs/2601.23143
Academic Papers
svg
1843e46c9db14709228bc740f32bbd71860e8d3b5460f8a71083187048644dbf
2026-02-02T00:00:00-05:00
Securing Time in Energy IoT: A Clock-Dynamics-Aware Spatio-Temporal Graph Attention Network for Clock Drift Attacks and Y2K38 Failures
arXiv:2601.23147v1 Announce Type: new Abstract: The integrity of time in distributed Internet of Things (IoT) devices is crucial for reliable operation in energy cyber-physical systems, such as smart grids and microgrids. However, IoT systems are vulnerable to clock drift, time-synchronization manipulation, and timestamp discontinuities, such as the Year 2038 (Y2K38) Unix overflow, all of which disrupt temporal ordering. Conventional anomaly-detection models, which assume reliable timestamps, fail to capture temporal inconsistencies. This paper introduces STGAT (Spatio-Temporal Graph Attention Network), a framework that models both temporal distortion and inter-device consistency in energy IoT systems. STGAT combines drift-aware temporal embeddings and temporal self-attention to capture corrupted time evolution at individual devices, and uses graph attention to model spatial propagation of timing errors. A curvature-regularized latent representation geometrically separates normal clock evolution from anomalies caused by drift, synchronization offsets, and overflow events. Experimental results on energy IoT telemetry with controlled timing perturbations show that STGAT achieves 95.7% accuracy, outperforming recurrent, transformer, and graph-based baselines with significant improvements (d > 1.8, p < 0.001). Additionally, STGAT reduces detection delay by 26%, achieving a 2.3-time-step delay while maintaining stable performance under overflow, drift, and physical inconsistencies.
https://arxiv.org/abs/2601.23147
Academic Papers
svg
0f167da813db857f3f6e6c7216a26c1e70c88cf66d175507c528222bfe3d33a9
2026-02-02T00:00:00-05:00
Hearing is Believing? Evaluating and Analyzing Audio Language Model Sycophancy with SYAUDIO
arXiv:2601.23149v1 Announce Type: new Abstract: Audio Language Models (ALMs) have recently shown strong capabilities in unified reasoning over speech, sound, and natural language; yet they inherit behavioral issues observed in Large Language Models, including sycophancy--the tendency to agree with user assertions even when they contradict objective evidence. While sycophancy has been extensively studied in text and vision-language models, its manifestation in audio-conditioned reasoning remains largely unexplored, despite the need for ALMs to rely on auditory cues such as acoustic events, speaker characteristics, and speech rate. To address this gap, we introduce SYAUDIO, the first benchmark dedicated to evaluating sycophancy in ALMs, consisting of 4,319 audio questions spanning Audio Perception, Audio Reasoning, Audio Math, and Audio Ethics. Built upon established audio benchmarks and augmented with TTS-generated arithmetic and moral reasoning tasks, SYAUDIO enables systematic evaluation across multiple domains and sycophancy types with carefully verified data quality. Furthermore, we analyze audio-specific sycophancy under realistic conditions involving noise and rate, and demonstrate that supervised fine-tuning with chain-of-thought data is an effective mitigation strategy for reducing sycophantic behavior in ALMs.
https://arxiv.org/abs/2601.23149
Academic Papers
svg
56fa22028be740d04e52082e23f717077c496d0664ab591a05ea478f35970876
2026-02-02T00:00:00-05:00
Manifold-Aware Perturbations for Constrained Generative Modeling
arXiv:2601.23151v1 Announce Type: new Abstract: Generative models have enjoyed widespread success in a variety of applications. However, they encounter inherent mathematical limitations in modeling distributions where samples are constrained by equalities, as is frequently the setting in scientific domains. In this work, we develop a computationally cheap, mathematically justified, and highly flexible distributional modification for combating known pitfalls in equality-constrained generative models. We propose perturbing the data distribution in a constraint-aware way such that the new distribution has support matching the ambient space dimension while still implicitly incorporating underlying manifold geometry. Through theoretical analyses and empirical evidence on several representative tasks, we illustrate that our approach consistently enables data distribution recovery and stable sampling with both diffusion models and normalizing flows.
https://arxiv.org/abs/2601.23151
Academic Papers
svg
a333228f05843547c6b42130855d960c48a712406af07ed6fb5b864727302415
2026-02-02T00:00:00-05:00
Behemoth: Benchmarking Unlearning in LLMs Using Fully Synthetic Data
arXiv:2601.23153v1 Announce Type: new Abstract: As artificial neural networks, and specifically large language models, have improved rapidly in capabilities and quality, they have increasingly been deployed in real-world applications, from customer service to Google search, despite the fact that they frequently make factually incorrect or undesirable statements. This trend has inspired practical and academic interest in model editing, that is, in adjusting the weights of the model to modify its likely outputs for queries relating to a specific fact or set of facts. This may be done either to amend a fact or set of facts, for instance, to fix a frequent error in the training data, or to suppress a fact or set of facts entirely, for instance, in case of dangerous knowledge. Multiple methods have been proposed to do such edits. However, at the same time, it has been shown that such model editing can be brittle and incomplete. Moreover, the effectiveness of any model editing method necessarily depends on the data on which the model is trained, and, therefore, a good understanding of the interaction of the training data distribution and the way it is stored in the network is necessary and helpful to reliably perform model editing. However, working with large language models trained on real-world data does not allow us to understand this relationship or fully measure the effects of model editing. We therefore propose Behemoth, a fully synthetic data generation framework. To demonstrate the practical insights from the framework, we explore model editing in the context of simple tabular data, demonstrating surprising findings that, in some cases, echo real-world results, for instance, that restricting the update rank results in a more effective update. The code is available at https://github.com/IST-DASLab/behemoth.git.
https://arxiv.org/abs/2601.23153
Academic Papers
svg
42f1b7e1a5c5722b1eea6044914f8115ba7abd868a5f57dd5e7e76e59ade166e
2026-02-02T00:00:00-05:00
On Safer Reinforcement Learning Policies for Sedation and Analgesia in Intensive Care
arXiv:2601.23154v1 Announce Type: new Abstract: Pain management in intensive care usually involves complex trade-offs between therapeutic goals and patient safety, since both inadequate and excessive treatment may induce serious sequelae. Reinforcement learning can help address this challenge by learning medication dosing policies from retrospective data. However, prior work on sedation and analgesia has optimized for objectives that do not value patient survival while relying on algorithms unsuitable for imperfect information settings. We investigated the risks of these design choices by implementing a deep reinforcement learning framework to suggest hourly medication doses under partial observability. Using data from 47,144 ICU stays in the MIMIC-IV database, we trained policies to prescribe opioids, propofol, benzodiazepines, and dexmedetomidine according to two goals: reduce pain or jointly reduce pain and mortality. We found that, although the two policies were associated with lower pain, actions from the first policy were positively correlated with mortality, while those proposed by the second policy were negatively correlated. This suggests that valuing long-term outcomes could be critical for safer treatment policies, even if a short-term goal remains the primary objective.
https://arxiv.org/abs/2601.23154
Academic Papers
svg
5b56a6e592858b9054f1f62a6c1cc90e4c7378ec8e052c66727270b57419e6ee
2026-02-02T00:00:00-05:00
SPICE: Submodular Penalized Information-Conflict Selection for Efficient Large Language Model Training
arXiv:2601.23155v1 Announce Type: new Abstract: Information-based data selection for instruction tuning is compelling: maximizing the log-determinant of the Fisher information yields a monotone submodular objective, enabling greedy algorithms to achieve a $(1-1/e)$ approximation under a cardinality budget. In practice, however, we identify that alleviating gradient conflicts (misalignment between per-sample gradients) is a key factor in slowing the decay of marginal log-determinant information gains, thereby preventing significant loss of information. We formalize this via an $\varepsilon$-decomposition that quantifies the deviation from ideal submodularity as a function of conflict statistics, yielding data-dependent approximation factors that tighten as conflicts diminish. Guided by this analysis, we propose SPICE, a conflict-aware selector that maximizes information while penalizing misalignment, and that supports early stopping and proxy models for efficiency. Empirically, SPICE selects subsets with higher log-determinant information than the original criteria, and these informational gains translate into performance improvements: across 8 benchmarks with LLaMA2-7B and Qwen2-7B, SPICE uses only 10% of the data, yet matches or exceeds 6 methods including full-data tuning. This achieves performance improvements with substantially lower training cost.
https://arxiv.org/abs/2601.23155
Academic Papers
svg
d1d888d91d1490ba8180df1c1f711228de4c67f6ba0052be17ae855605cf45be
2026-02-02T00:00:00-05:00
Unsupervised Hierarchical Skill Discovery
arXiv:2601.23156v1 Announce Type: new Abstract: We consider the problem of unsupervised skill segmentation and hierarchical structure discovery in reinforcement learning. While recent approaches have sought to segment trajectories into reusable skills or options, most rely on action labels, rewards, or handcrafted annotations, limiting their applicability. We propose a method that segments unlabelled trajectories into skills and induces a hierarchical structure over them using a grammar-based approach. The resulting hierarchy captures both low-level behaviours and their composition into higher-level skills. We evaluate our approach in high-dimensional, pixel-based environments, including Craftax and the full, unmodified version of Minecraft. Using metrics for skill segmentation, reuse, and hierarchy quality, we find that our method consistently produces more structured and semantically meaningful hierarchies than existing baselines. Furthermore, as a proof of concept for utility, we demonstrate that these discovered hierarchies accelerate and stabilise learning on downstream reinforcement learning tasks.
https://arxiv.org/abs/2601.23156
Academic Papers
svg
529b7a2866b75cc1e9d5c287ad6a43dd62ab128ed0a6dbd158a3c9d09a1d7954
2026-02-02T00:00:00-05:00
No More, No Less: Least-Privilege Language Models
arXiv:2601.23157v1 Announce Type: new Abstract: Least privilege is a core security principle: grant each request only the minimum access needed to achieve its goal. Deployed language models almost never follow it, instead being exposed through a single API endpoint that serves all users and requests. This gap exists not because least privilege would be unhelpful; deployments would benefit greatly from reducing unnecessary capability exposure. The real obstacle is definitional and mechanistic: what does "access" mean inside a language model, and how can we enforce it without retraining or deploying multiple models? We take inspiration from least privilege in computer systems and define a class of models called least-privilege language models, where privilege is reachable internal computation during the forward pass. In this view, lowering privilege literally shrinks the model's accessible function class, as opposed to denying access via learned policies. We formalize deployment-time control as a monitor-allocator-enforcer stack, separating (i) request-time signals, (ii) a decision rule that allocates privilege, and (iii) an inference-time mechanism that selects privilege. We then propose Nested Least-Privilege Networks, a shape-preserving, rank-indexed intervention that provides a smooth, reversible control knob. We show that this knob yields policy-usable privilege-utility frontiers and enables selective suppression of targeted capabilities with limited collateral degradation across various policies. Most importantly, we argue for a new deployment paradigm that challenges the premise that language models can only be controlled at the output level.
https://arxiv.org/abs/2601.23157
Academic Papers
svg
0d08fa509aece83c1ec0c172dd7d6a75d9e12a8ba68ba8f8e6a3aebe3dec5876
2026-02-02T00:00:00-05:00
Segment Any Events with Language
arXiv:2601.23159v1 Announce Type: new Abstract: Scene understanding with free-form language has been widely explored within diverse modalities such as images, point clouds, and LiDAR. However, related studies on event sensors are scarce or narrowly centered on semantic-level understanding. We introduce SEAL, the first Semantic-aware Segment Any Events framework that addresses Open-Vocabulary Event Instance Segmentation (OV-EIS). Given the visual prompt, our model presents a unified framework to support both event segmentation and open-vocabulary mask classification at multiple levels of granularity, including instance-level and part-level. To enable thorough evaluation on OV-EIS, we curate four benchmarks that cover label granularity from coarse to fine class configurations and semantic granularity from instance-level to part-level understanding. Extensive experiments show that our SEAL largely outperforms proposed baselines in terms of performance and inference speed with a parameter-efficient architecture. In the Appendix, we further present a simple variant of our SEAL achieving generic spatiotemporal OV-EIS that does not require any visual prompts from users in the inference. Check out our project page at https://0nandon.github.io/SEAL
https://arxiv.org/abs/2601.23159
Academic Papers
svg
97c34b016d2dbaf065f0ca3b8d1a23c50fd4af37d32e4e92de618d4fa5cb048f
2026-02-02T00:00:00-05:00
Robust Control of Constrained Linear Systems using Online Convex Optimization and a Reference Governor
arXiv:2601.23160v1 Announce Type: new Abstract: This article develops a control method for linear time-invariant systems subject to time-varying and a priori unknown cost functions, that satisfies state and input constraints, and is robust to exogenous disturbances. To this end, we combine the online convex optimization framework with a reference governor and a constraint tightening approach. The proposed framework guarantees recursive feasibility and robust constraint satisfaction. Its closed-loop performance is studied in terms of its dynamic regret, which is bounded linearly by the variation of the cost functions and the magnitude of the disturbances. The proposed method is illustrated by a numerical case study of a tracking control problem.
https://arxiv.org/abs/2601.23160
Academic Papers
svg
27efecb3f4e99a7df1f3464ec780d0a54165f0b1d801ad4b036ad3018ce87baf
2026-02-02T00:00:00-05:00
DIFFA-2: A Practical Diffusion Large Language Model for General Audio Understanding
arXiv:2601.23161v1 Announce Type: new Abstract: Autoregressive (AR) large audio language models (LALMs) such as Qwen-2.5-Omni have achieved strong performance on audio understanding and interaction, but scaling them remains costly in data and computation, and strictly sequential decoding limits inference efficiency. Diffusion large language models (dLLMs) have recently been shown to make effective use of limited training data, and prior work on DIFFA indicates that replacing an AR backbone with a diffusion counterpart can substantially improve audio understanding under matched settings, albeit at a proof-of-concept scale without large-scale instruction tuning, preference alignment, or practical decoding schemes. We introduce DIFFA-2, a practical diffusion-based LALM for general audio understanding. DIFFA-2 upgrades the speech encoder, employs dual semantic and acoustic adapters, and is trained with a four-stage curriculum that combines semantic and acoustic alignment, large-scale supervised fine-tuning, and variance-reduced preference optimization, using only fully open-source corpora. Experiments on MMSU, MMAU, and MMAR show that DIFFA-2 consistently improves over DIFFA and is competitive with strong AR LALMs under practical training budgets, supporting diffusion-based modeling as a viable backbone for large-scale audio understanding. Our code is available at https://github.com/NKU-HLT/DIFFA.git.
https://arxiv.org/abs/2601.23161
Academic Papers
svg
b09298cfb96d3dd75149ecb53aa433702295990e165d7c1b8cb4997f22b9ab0e
2026-02-02T00:00:00-05:00
Probing the Trajectories of Reasoning Traces in Large Language Models
arXiv:2601.23163v1 Announce Type: new Abstract: Large language models (LLMs) increasingly solve difficult problems by producing "reasoning traces" before emitting a final response. However, it remains unclear how accuracy and decision commitment evolve along a reasoning trajectory, and whether intermediate trace segments provide answer-relevant information beyond generic length or stylistic effects. Here, we propose a protocol to systematically probe the trajectories of reasoning traces in LLMs by 1) generating a model's reasoning trace, 2) truncating it at fixed token-percentiles, and 3) injecting each partial trace back into the model (or a different model) to measure the induced distribution over answer choices via next-token probabilities. We apply this protocol to the open-source Qwen3-4B/-8B/-14B and gpt-oss-20b/-120b models across the multiple-choice GPQA Diamond and MMLU-Pro benchmarks. We find that accuracy and decision commitment consistently increase as the percentage of provided reasoning tokens grows. These gains are primarily driven by relevant content in the model generation rather than context length or generic "reasoning style" effects. Stronger models often backtrack successfully from incorrect partial traces, but immediate answers often remain anchored in the weaker model's incorrect response. More broadly, we show that trajectory probing provides diagnostics for efficient and safer deployment of reasoning models as the measurements can inform practical trace-handling and monitoring policies that improve reliability without assuming intermediate tokens are inherently faithful explanations.
https://arxiv.org/abs/2601.23163
Academic Papers
svg
c596bbb92627be3515d4f93348d1e975f57f8baa2d3d68240eb4ba285f571a3b
2026-02-02T00:00:00-05:00
Stochastic Linear Bandits with Parameter Noise
arXiv:2601.23164v1 Announce Type: new Abstract: We study the stochastic linear bandits with parameter noise model, in which the reward of action $a$ is $a^\top \theta$ where $\theta$ is sampled i.i.d. We show a regret upper bound of $\widetilde{O}(\sqrt{d T \log (K/\delta) \sigma^2_{\max}})$ for a horizon $T$, general action set of size $K$ of dimension $d$, and where $\sigma^2_{\max}$ is the maximal variance of the reward for any action. We further provide a lower bound of $\widetilde{\Omega} (d \sqrt{T \sigma^2_{\max}})$ which is tight (up to logarithmic factors) whenever $\log (K) \approx d$. For more specific action sets, $\ell_p$ unit balls with $p \leq 2$ and dual norm $q$, we show that the minimax regret is $\widetilde{\Theta}(\sqrt{dT \sigma^2_q})$, where $\sigma^2_q$ is a variance-dependent quantity that is always at most $4$. This is in contrast to the minimax regret attainable for such sets in the classic additive noise model, where the regret is of order $d \sqrt{T}$. Surprisingly, we show that this optimal (up to logarithmic factors) regret bound is attainable using a very simple explore-exploit algorithm.
https://arxiv.org/abs/2601.23164
Academic Papers
svg
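The simple explore-exploit strategy this abstract alludes to can be sketched on a toy instance (a sketch only: the dimensions, arm set, and noise level below are illustrative, not from the paper): sample each arm a fixed number of times under i.i.d. parameter noise, then commit to the arm with the best empirical mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: K actions in d dimensions, reward a^T theta with theta ~ N(mu, cov).
# Parameter noise: randomness enters through theta itself, not an additive term.
d, K = 3, 5
actions = rng.normal(size=(K, d))        # hypothetical fixed action set
mu = np.array([1.0, 0.2, -0.5])          # mean parameter (unknown to the learner)
cov = 0.05 * np.eye(d)                   # parameter-noise covariance

def pull(a):
    theta = rng.multivariate_normal(mu, cov)  # fresh i.i.d. parameter each round
    return a @ theta

# Explore: pull each arm n times; then exploit the best empirical mean.
n = 100
means = np.array([np.mean([pull(a) for _ in range(n)]) for a in actions])
best = int(np.argmax(means))
```

Because the variance here comes from $\theta$, the per-pull noise of arm $a$ is $a^\top \mathrm{cov}\, a$, so a modest exploration budget already separates the arms in this toy setting.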
3f16d78556f30123694a41613a11066dada4da9a3811b1ca3e7f039440467ca5
2026-02-02T00:00:00-05:00
Monotonic Reference-Free Refinement for Autoformalization
arXiv:2601.23166v1 Announce Type: new Abstract: While statement autoformalization has advanced rapidly, full-theorem autoformalization remains largely unexplored. Existing iterative refinement methods in statement autoformalization typically improve isolated aspects of formalization, such as syntactic correctness, but struggle to jointly optimize multiple quality dimensions, which is critical for full-theorem autoformalization. We introduce a reference-free iterative monotonic process for full-theorem autoformalization that leverages complementary feedback from theorem provers and LLM-based judges, without access to ground-truth proofs or existing formalizations at inference time. Our approach optimizes a masked composite objective over Formal Validity, Logical Preservation, Mathematical Consistency, and Formal Quality, guided by a responsiveness map that indicates how different LLMs acting in different roles preferentially improve each dimension. We further propose an acceptance policy that guarantees certified monotonic improvement, and provide conditions ensuring convergence and termination. Empirical experiments demonstrate that the proposed process enables simultaneous improvement across multiple dimensions, achieving 93.44% formal validity and a 78.22% overall score on miniF2F, and 44.09% formal validity and a 29.79% overall score on ProofNet.
https://arxiv.org/abs/2601.23166
Academic Papers
svg
bbb673e3de67ae2892797c1c478087a63dbea3ada45195947118126b6bb04e54
2026-02-02T00:00:00-05:00
Hi-Light: A Path to high-fidelity, high-resolution video relighting with a Novel Evaluation Paradigm
arXiv:2601.23167v1 Announce Type: new Abstract: Video relighting offers immense creative potential and commercial value but is hindered by challenges, including the absence of an adequate evaluation metric, severe light flickering, and the degradation of fine-grained details during editing. To overcome these challenges, we introduce Hi-Light, a novel, training-free framework for high-fidelity, high-resolution, robust video relighting. Our approach introduces three technical innovations: lightness prior anchored guided relighting diffusion that stabilises intermediate relit video, a Hybrid Motion-Adaptive Lighting Smoothing Filter that leverages optical flow to ensure temporal stability without introducing motion blur, and a LAB-based Detail Fusion module that preserves high-frequency detail information from the original video. Furthermore, to address the critical gap in evaluation, we propose the Light Stability Score, the first quantitative metric designed to specifically measure lighting consistency. Extensive experiments demonstrate that Hi-Light significantly outperforms state-of-the-art methods in both qualitative and quantitative comparisons, producing stable, highly detailed relit videos.
https://arxiv.org/abs/2601.23167
Academic Papers
svg
a79c97479a3f48edf0db68c8924e3d23b6d54c26c3ed7b1e43c80d61a7f77dbc
2026-02-02T00:00:00-05:00
Names Don't Matter: Symbol-Invariant Transformer for Open-Vocabulary Learning
arXiv:2601.23169v1 Announce Type: new Abstract: Current neural architectures lack a principled way to handle interchangeable tokens, i.e., symbols that are semantically equivalent yet distinguishable, such as bound variables. As a result, models trained on fixed vocabularies often struggle to generalize to unseen symbols, even when the underlying semantics remain unchanged. We propose a novel Transformer-based mechanism that is provably invariant to the renaming of interchangeable tokens. Our approach employs parallel embedding streams to isolate the contribution of each interchangeable token in the input, combined with an aggregated attention mechanism that enables structured information sharing across streams. Experimental results confirm the theoretical guarantees of our method and demonstrate substantial performance gains on open-vocabulary tasks that require generalization to novel symbols.
https://arxiv.org/abs/2601.23169
Academic Papers
svg
6cbb6bcbd6fb5bc6a7fc45a308fe1d7675f2c505794c7e6560e5563bf86d815f
2026-02-02T00:00:00-05:00
Beyond Fixed Frames: Dynamic Character-Aligned Speech Tokenization
arXiv:2601.23174v1 Announce Type: new Abstract: Neural audio codecs are at the core of modern conversational speech technologies, converting continuous speech into sequences of discrete tokens that can be processed by LLMs. However, existing codecs typically operate at fixed frame rates, allocating tokens uniformly in time and producing unnecessarily long sequences. In this work, we introduce DyCAST, a Dynamic Character-Aligned Speech Tokenizer that enables variable-frame-rate tokenization through soft character-level alignment and explicit duration modeling. DyCAST learns to associate tokens with character-level linguistic units during training and supports alignment-free inference with direct control over token durations at decoding time. To improve speech resynthesis quality at low frame rates, we further introduce a retrieval-augmented decoding mechanism that enhances reconstruction fidelity without increasing bitrate. Experiments show that DyCAST achieves competitive speech resynthesis quality and downstream performance while using significantly fewer tokens than fixed-frame-rate codecs.
https://arxiv.org/abs/2601.23174
Academic Papers
svg
00831f1b0674c9c77c60561fa790522bea1ce078e55c585b3dcb90ab91e6232a
2026-02-02T00:00:00-05:00
MeshGraphNet-Transformer: Scalable Mesh-based Learned Simulation for Solid Mechanics
arXiv:2601.23177v1 Announce Type: new Abstract: We present MeshGraphNet-Transformer (MGN-T), a novel architecture that combines the global modeling capabilities of Transformers with the geometric inductive bias of MeshGraphNets, while preserving a mesh-based graph representation. MGN-T overcomes a key limitation of standard MGN, the inefficient long-range information propagation caused by iterative message passing on large, high-resolution meshes. A physics-attention Transformer serves as a global processor, updating all nodal states simultaneously while explicitly retaining node and edge attributes. By directly capturing long-range physical interactions, MGN-T eliminates the need for deep message-passing stacks or hierarchical, coarsened meshes, enabling efficient learning on high-resolution meshes with varying geometries, topologies, and boundary conditions at an industrial scale. We demonstrate that MGN-T successfully handles industrial-scale meshes for impact dynamics, a setting in which standard MGN fails due to message-passing under-reaching. The method accurately models self-contact, plasticity, and multivariate outputs, including internal, phenomenological plastic variables. Moreover, MGN-T outperforms state-of-the-art approaches on classical benchmarks, achieving higher accuracy while maintaining practical efficiency, using only a fraction of the parameters required by competing baselines.
https://arxiv.org/abs/2601.23177
Academic Papers
svg
dd06d363a6890696584059f5f454338e75593df8d9c982a63d66a4abe84a2279
2026-02-02T00:00:00-05:00
Make Anything Match Your Target: Universal Adversarial Perturbations against Closed-Source MLLMs via Multi-Crop Routed Meta Optimization
arXiv:2601.23179v1 Announce Type: new Abstract: Targeted adversarial attacks on closed-source multimodal large language models (MLLMs) have been increasingly explored under black-box transfer, yet prior methods are predominantly sample-specific and offer limited reusability across inputs. We instead study a more stringent setting, Universal Targeted Transferable Adversarial Attacks (UTTAA), where a single perturbation must consistently steer arbitrary inputs toward a specified target across unknown commercial MLLMs. Naively adapting existing sample-wise attacks to this universal setting faces three core difficulties: (i) target supervision becomes high-variance due to target-crop randomness, (ii) token-wise matching is unreliable because universality suppresses image-specific cues that would otherwise anchor alignment, and (iii) few-source per-target adaptation is highly initialization-sensitive, which can degrade the attainable performance. In this work, we propose MCRMO-Attack, which stabilizes supervision via Multi-Crop Aggregation with an Attention-Guided Crop, improves token-level reliability through alignability-gated Token Routing, and meta-learns a cross-target perturbation prior that yields stronger per-target solutions. Across commercial MLLMs, we boost unseen-image attack success rate by +23.7\% on GPT-4o and +19.9\% on Gemini-2.0 over the strongest universal baseline.
https://arxiv.org/abs/2601.23179
Academic Papers
svg
96cf2c20b3be3b9901df059cb4ce2b4d9c1f2d25f3ddcf7edc9dedfccc80a954
2026-02-02T00:00:00-05:00
TriSpec: Ternary Speculative Decoding via Lightweight Proxy Verification
arXiv:2601.23180v1 Announce Type: new Abstract: Inference efficiency in Large Language Models (LLMs) is fundamentally limited by their serial, autoregressive generation, especially as reasoning becomes a key capability and response sequences grow longer. Speculative decoding (SD) offers a powerful solution, providing significant speed-ups through its lightweight drafting and parallel verification mechanism. While existing work has nearly saturated improvements in draft effectiveness and efficiency, this paper advances SD from a new yet critical perspective: the verification cost. We propose TriSpec, a novel ternary SD framework that, at its core, introduces a lightweight proxy to significantly reduce computational cost by approving easily verifiable draft sequences and engaging the full target model only when encountering uncertain tokens. TriSpec can be integrated with state-of-the-art SD methods like EAGLE-3 to further reduce verification costs, achieving greater acceleration. Extensive experiments on the Qwen3 and DeepSeek-R1-Distill-Qwen/LLaMA families show that TriSpec achieves up to 35\% speedup over standard SD, with up to 50\% fewer target model invocations while maintaining comparable accuracy.
https://arxiv.org/abs/2601.23180
Academic Papers
svg
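The ternary accept/escalate idea behind this kind of verification can be illustrated with a toy loop (a sketch only: the confidence rule, threshold, and token stream below are made up for illustration, not TriSpec's actual proxy): a cheap proxy approves "easy" draft tokens, and the expensive target model is invoked only for uncertain ones.

```python
import random

random.seed(1)

# Hypothetical proxy confidence: pretend tokens divisible by 3 are "hard".
def proxy_confidence(token):
    return 0.95 if token % 3 else 0.4

# Stand-in for a full target-model forward pass (always accepts in this toy).
def target_verify(token):
    return True

draft = [random.randrange(100) for _ in range(300)]  # fake draft token stream
threshold = 0.8
target_calls = 0
accepted = []
for tok in draft:
    if proxy_confidence(tok) >= threshold:
        accepted.append(tok)      # proxy approves cheaply
    else:
        target_calls += 1         # escalate to the target model
        if target_verify(tok):
            accepted.append(tok)

savings = 1 - target_calls / len(draft)  # fraction of verifications avoided
```

The design question in practice is where to set the proxy threshold: too low and unreliable drafts slip through, too high and the target model is invoked almost every step, erasing the savings.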
62f14c712b1e0a464b03451653c6faec480275f742e1750966f9a0a984e7a1ad
2026-02-02T00:00:00-05:00
Ensuring Semantics in Weights of Implicit Neural Representations through the Implicit Function Theorem
arXiv:2601.23181v1 Announce Type: new Abstract: Weight Space Learning (WSL), which frames neural network weights as a data modality, is an emerging field with potential for tasks like meta-learning or transfer learning. Particularly, Implicit Neural Representations (INRs) provide a convenient testbed, where each set of weights determines the corresponding individual data sample as a mapping from coordinates to contextual values. So far, a precise theoretical explanation for the mechanism of encoding semantics of data into network weights is still missing. In this work, we deploy the Implicit Function Theorem (IFT) to establish a rigorous mapping between the data space and its latent weight representation space. We analyze a framework that maps instance-specific embeddings to INR weights via a shared hypernetwork, achieving performance competitive with existing baselines on downstream classification tasks across 2D and 3D datasets. These findings offer a theoretical lens for future investigations into network weights.
https://arxiv.org/abs/2601.23181
Academic Papers
svg
7cff210352548a80de868ba180d368d224474b91a232e82c8191848d2a8a58ee
2026-02-02T00:00:00-05:00
FourierSampler: Unlocking Non-Autoregressive Potential in Diffusion Language Models via Frequency-Guided Generation
arXiv:2601.23182v1 Announce Type: new Abstract: Despite the non-autoregressive potential of diffusion language models (dLLMs), existing decoding strategies demonstrate positional bias, failing to fully unlock the potential of arbitrary generation. In this work, we delve into the inherent spectral characteristics of dLLMs and present the first frequency-domain analysis showing that low-frequency components in hidden states primarily encode global structural information and long-range dependencies, while high-frequency components are responsible for characterizing local details. Based on this observation, we propose FourierSampler, which leverages a frequency-domain sliding window mechanism to dynamically guide the model to achieve a "structure-to-detail" generation. FourierSampler outperforms other inference enhancement strategies on LLaDA and SDAR, achieving relative improvements of 20.4% on LLaDA1.5-8B and 16.0% on LLaDA-8B-Instruct. It notably surpasses similarly sized autoregressive models like Llama3.1-8B-Instruct.
https://arxiv.org/abs/2601.23182
Academic Papers
svg
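The low-/high-frequency decomposition underlying this analysis can be sketched on a 1-D signal (a toy analogue under stated assumptions: the signal, cutoff, and split are arbitrary illustrations, not the paper's hidden-state procedure). Masking the spectrum separates a global trend from local detail, and the two parts sum back to the original.

```python
import numpy as np

# Toy signal: a global sine "structure" plus small high-frequency "detail" noise.
x = np.sin(np.linspace(0, 2 * np.pi, 64)) \
    + 0.1 * np.random.default_rng(0).normal(size=64)

X = np.fft.rfft(x)
cutoff = 4                      # arbitrary frequency cutoff for this sketch
low, high = X.copy(), X.copy()
low[cutoff:] = 0                # keep only low frequencies: global structure
high[:cutoff] = 0               # keep only high frequencies: local detail
x_low = np.fft.irfft(low, n=64)
x_high = np.fft.irfft(high, n=64)
```

Since the two masks are complementary, `x_low + x_high` reconstructs `x` exactly (up to floating-point tolerance), which is what makes a "structure-first, detail-later" schedule lossless in principle.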
145d3b5c7e22106fb5ddb7f1a8e6e19f8026e23dafbf1da8bcdb7c43bdff468f
2026-02-02T00:00:00-05:00
JobResQA: A Benchmark for LLM Machine Reading Comprehension on Multilingual R\'esum\'es and JDs
arXiv:2601.23183v1 Announce Type: new Abstract: We introduce JobResQA, a multilingual Question Answering benchmark for evaluating Machine Reading Comprehension (MRC) capabilities of LLMs on HR-specific tasks involving r\'esum\'es and job descriptions. The dataset comprises 581 QA pairs across 105 synthetic r\'esum\'e-job description pairs in five languages (English, Spanish, Italian, German, and Chinese), with questions spanning three complexity levels from basic factual extraction to complex cross-document reasoning. We propose a data generation pipeline derived from real-world sources through de-identification and data synthesis to ensure both realism and privacy, while controlled demographic and professional attributes (implemented via placeholders) enable systematic bias and fairness studies. We also present a cost-effective, human-in-the-loop translation pipeline based on the TEaR methodology, incorporating MQM error annotations and selective post-editing to ensure a high-quality multi-way parallel benchmark. We provide baseline evaluations across multiple open-weight LLM families using an LLM-as-judge approach, revealing higher performance on English and Spanish but substantial degradation on other languages, highlighting critical gaps in multilingual MRC capabilities for HR applications. JobResQA provides a reproducible benchmark for advancing fair and reliable LLM-based HR systems. The benchmark is publicly available at: https://github.com/Avature/jobresqa-benchmark
https://arxiv.org/abs/2601.23183
Academic Papers
svg
8e7580daf8259a00b3b5d2630476475ee1633e52125056b168b2957ee25925df
2026-02-02T00:00:00-05:00
ReGuLaR: Variational Latent Reasoning Guided by Rendered Chain-of-Thought
arXiv:2601.23184v1 Announce Type: new Abstract: While Chain-of-Thought (CoT) significantly enhances the performance of Large Language Models (LLMs), explicit reasoning chains introduce substantial computational redundancy. Recent latent reasoning methods attempt to mitigate this by compressing reasoning processes into latent space, but often suffer from severe performance degradation due to the lack of appropriate compression guidance. In this study, we propose Rendered CoT-Guided variational Latent Reasoning (ReGuLaR), a simple yet novel latent learning paradigm resolving this issue. Fundamentally, we formulate latent reasoning within the Variational Auto-Encoding (VAE) framework, sampling the current latent reasoning state from the posterior distribution conditioned on previous ones. Specifically, when learning this variational latent reasoning model, we render explicit reasoning chains as images, from which we extract dense visual-semantic representations to regularize the posterior distribution, thereby achieving efficient compression with minimal information loss. Extensive experiments demonstrate that ReGuLaR significantly outperforms existing latent reasoning methods across both computational efficiency and reasoning effectiveness, and even surpasses CoT through multi-modal reasoning, providing a new and insightful solution to latent reasoning. Code: https://github.com/FanmengWang/ReGuLaR.
https://arxiv.org/abs/2601.23184
Academic Papers
svg
386482977d495b7002e88927ec7f1523c3d6fdefa1551fc9f1d6736061509871
2026-02-02T00:00:00-05:00
Preconditioning and Numerical Stability in Neural Network Training for Parametric PDEs
arXiv:2601.23185v1 Announce Type: new Abstract: In the context of training neural network-based approximations of solutions of parameter-dependent PDEs, we investigate the effect of preconditioning via well-conditioned frame representations of operators and demonstrate a significant improvement on the performance of standard training methods. We also observe that standard representations of preconditioned matrices are insufficient for obtaining numerical stability and propose a generally applicable form of stable representations that enables computations with single- and half-precision floating point numbers without loss of precision.
https://arxiv.org/abs/2601.23185
Academic Papers
svg
dd7312fe0c0ac38673ab56f6ebeab4e8b59ac59bf114224183bc94ae7a438ee8
2026-02-02T00:00:00-05:00
Deep Search with Hierarchical Meta-Cognitive Monitoring Inspired by Cognitive Neuroscience
arXiv:2601.23188v1 Announce Type: new Abstract: Deep search agents powered by large language models have demonstrated strong capabilities in multi-step retrieval, reasoning, and long-horizon task execution. However, their practical failures often stem from the lack of mechanisms to monitor and regulate reasoning and retrieval states as tasks evolve under uncertainty. Insights from cognitive neuroscience suggest that human metacognition is hierarchically organized, integrating fast anomaly detection with selectively triggered, experience-driven reflection. In this work, we propose Deep Search with Meta-Cognitive Monitoring (DS-MCM), a deep search framework augmented with an explicit hierarchical metacognitive monitoring mechanism. DS-MCM integrates a Fast Consistency Monitor, which performs lightweight checks on the alignment between external evidence and internal reasoning confidence, and a Slow Experience-Driven Monitor, which is selectively activated to guide corrective intervention based on experience memory from historical agent trajectories. By embedding monitoring directly into the reasoning-retrieval loop, DS-MCM determines both when intervention is warranted and how corrective actions should be informed by prior experience. Experiments across multiple deep search benchmarks and backbone models demonstrate that DS-MCM consistently improves performance and robustness.
https://arxiv.org/abs/2601.23188
Academic Papers
svg
44a5a8be70f2ae21c8ba2c6fd68d18f03dd617ff2580e815d176fbaf289531cf
2026-02-02T00:00:00-05:00
Network analysis and link prediction in competitive women's basketball
arXiv:2601.23193v1 Announce Type: new Abstract: Network structure and its role in prediction are examined in competitive basketball at the team and player levels. Adversarial game outcome networks from NCAA Division I women's basketball from 2021 to 2024 are used to compute the common out-neighbor score and PageRank, which are combined into a low-key leader strength that identifies competitors influential through structural similarity despite relatively low centrality. This measure is related to changes in NCAA NET rankings by grouping teams into quantiles and comparing average rank changes across seasons for both previous-to-current and current-to-next transitions. Link prediction is then studied using node2vec embeddings across three interaction settings. For NCAA regular-season game networks, cosine similarity between team embeddings is used in a logistic regression model to predict March Madness matchups. For WNBA shot-blocking networks, future directed blocking interactions are predicted via logistic regression on concatenated source-target player embeddings. For WNBA passing networks, region embeddings learned from first-quarter passes are evaluated for their ability to predict subsequent passing connections. Across NCAA and WNBA settings, embedding-based models provide statistically significant evidence that higher-order network structure contains predictive signals for future interactions, while the passing experiment shows weaker predictive performance but yields interpretable similarity patterns consistent with passing feasibility.
https://arxiv.org/abs/2601.23193
Academic Papers
svg
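The common out-neighbor score used in this analysis is easy to sketch on a toy "who beat whom" digraph (team names and results below are hypothetical, and this is a minimal illustration of the statistic, not the paper's full pipeline): two teams are structurally similar when they have beaten many of the same opponents.

```python
# Adversarial outcome network: wins[u] is the set of teams u has beaten.
wins = {
    "A": {"B", "C", "D"},
    "B": {"C", "D"},
    "C": {"D"},
    "D": set(),
    "E": {"C", "D"},
}

def con_score(u, v):
    """Common out-neighbor score: opponents both u and v have beaten."""
    return len(wins[u] & wins[v])

# Teams that share many victims with a strong team can surface as
# "low-key leaders" even when their own centrality is modest.
scores = {t: con_score("A", t) for t in wins if t != "A"}
```

Here "E" matches "B" in similarity to "A" despite never having played "A" directly, which is the kind of structural signal the low-key leader strength combines with PageRank.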
694ae6143b26f9b01c5c2a45935f1cecdb9844fc2e7d35082bef177726f908a1
2026-02-02T00:00:00-05:00
Planar Graph Homomorphisms: A Dichotomy and a Barrier from Quantum Groups
arXiv:2601.23198v1 Announce Type: new Abstract: We study the complexity of counting (weighted) planar graph homomorphism problem $\tt{Pl\text{-}GH}(M)$ parametrized by an arbitrary symmetric non-negative real valued matrix $M$. For matrices with pairwise distinct diagonal values, we prove a complete dichotomy theorem: $\tt{Pl\text{-}GH}(M)$ is either polynomial-time tractable, or $\#$P-hard, according to a simple criterion. More generally, we obtain a dichotomy whenever every vertex pair of the graph represented by $M$ can be separated using some planar edge gadget. A key question in proving complexity dichotomies in the planar setting is the expressive power of planar edge gadgets. We build on the framework of Man\v{c}inska and Roberson to establish links between \textit{planar} edge gadgets and the theory of the \textit{quantum automorphism group} $\tt{Qut}(M)$. We show that planar edge gadgets that can separate vertex pairs of $M$ exist precisely when $\tt{Qut}(M)$ is \emph{trivial}, and prove that the problem of whether $\tt{Qut}(M)$ is trivial is undecidable. These results delineate the frontier for planar homomorphism counting problems and uncover intrinsic barriers to extending nonplanar reduction techniques to the planar setting.
https://arxiv.org/abs/2601.23198
Academic Papers
svg
46b283f546e22c3e44225488711b18f056cd8aa544eabc9444add4eca2177ca9
2026-02-02T00:00:00-05:00
Large Language Models for Patent Classification: Strengths, Trade-offs, and the Long Tail Effect
arXiv:2601.23200v1 Announce Type: new Abstract: Patent classification into CPC codes underpins large-scale analyses of technological change but remains challenging due to its hierarchical, multi-label, and highly imbalanced structure. While pre-Generative-AI supervised encoder-based models became the de facto standard for large-scale patent classification, recent advances in large language models (LLMs) raise questions about whether they can provide complementary capabilities, particularly for rare or weakly represented technological categories. In this work, we perform a systematic comparison of encoder-based classifiers (BERT, SciBERT, and PatentSBERTa) and open-weight LLMs on a highly imbalanced benchmark dataset (USPTO 70k). We evaluate LLMs under zero-shot, few-shot, and retrieval-augmented prompting, and further assess parameter-efficient fine-tuning of the best-performing model. Our results show that encoder-based models achieve higher aggregate performance, driven by strong results on frequent CPC subclasses, but struggle on rare ones. In contrast, LLMs achieve relatively higher performance on infrequent subclasses, often associated with early-stage, cross-domain, or weakly institutionalised technologies, particularly at higher hierarchical levels. These findings indicate that encoder-based and LLM-based approaches play complementary roles in patent classification. We additionally quantify inference time and energy consumption, showing that encoder-based models are up to three orders of magnitude more efficient than LLMs. Overall, our results inform responsible patentometrics and technology mapping, and motivate hybrid classification approaches that combine encoder efficiency with the long-tail coverage of LLMs under computational and environmental constraints.
https://arxiv.org/abs/2601.23200
Academic Papers
svg
9d57683b3707c2ebe52a44673ba2eeaa9901988c397850a9f9dbbf7a00162cd4
2026-02-02T00:00:00-05:00
TSAQA: Time Series Analysis Question And Answering Benchmark
arXiv:2601.23204v1 Announce Type: new Abstract: Time series data are integral to critical applications across domains such as finance, healthcare, transportation, and environmental science. While recent work has begun to explore multi-task time series question answering (QA), current benchmarks remain limited to forecasting and anomaly detection tasks. We introduce TSAQA, a novel unified benchmark designed to broaden task coverage and evaluate diverse temporal analysis capabilities. TSAQA integrates six diverse tasks under a single framework ranging from conventional analysis, including anomaly detection and classification, to advanced analysis, such as characterization, comparison, data transformation, and temporal relationship analysis. Spanning 210k samples across 13 domains, the dataset employs diverse formats, including true-or-false (TF), multiple-choice (MC), and a novel puzzling (PZ), to comprehensively assess time series analysis. Zero-shot evaluation demonstrates that these tasks are challenging for current Large Language Models (LLMs): the best-performing commercial LLM, Gemini-2.5-Flash, achieves an average score of only 65.08. Although instruction tuning boosts open-source performance, the best-performing open-source model, LLaMA-3.1-8B, still shows significant room for improvement, highlighting the complexity of temporal analysis for LLMs.
https://arxiv.org/abs/2601.23204
Academic Papers
svg
661691b69cf7dc1d7b952f73f31a82190989f6a8c34743c2c410c44fe1883fbb
2026-02-02T00:00:00-05:00
High-quality generation of dynamic game content via small language models: A proof of concept
arXiv:2601.23206v1 Announce Type: new Abstract: Large language models (LLMs) offer promise for dynamic game content generation, but they face critical barriers, including narrative incoherence and high operational costs. Due to their large size, they are often accessed in the cloud, limiting their application in offline games. Many of these practical issues are solved by pivoting to small language models (SLMs), but existing studies using SLMs have resulted in poor output quality. We propose a strategy of achieving high-quality SLM generation through aggressive fine-tuning on deliberately scoped tasks with narrow context, constrained structure, or both. In short, more difficult tasks require narrower scope and higher specialization to the training corpus. Training data is synthetically generated via a DAG-based approach, grounding models in the specific game world. Such models can form the basis for agentic networks designed around the narratological framework at hand, representing a more practical and robust solution than cloud-dependent LLMs. To validate this approach, we present a proof-of-concept focusing on a single specialized SLM as the fundamental building block. We introduce a minimal RPG loop revolving around rhetorical battles of reputations, powered by this model. We demonstrate that a simple retry-until-success strategy reaches adequate quality (as defined by an LLM-as-a-judge scheme) with predictable latency suitable for real-time generation. While local quality assessment remains an open question, our results demonstrate feasibility for real-time generation under typical game engine constraints.
https://arxiv.org/abs/2601.23206
Academic Papers
svg
6a003e43b339a06295b730a6babbc026dbd973a145a7ed167de52a63c16a0d13
2026-02-02T00:00:00-05:00
Learning to Execute Graph Algorithms Exactly with Graph Neural Networks
arXiv:2601.23207v1 Announce Type: new Abstract: Understanding what graph neural networks can learn, especially their ability to learn to execute algorithms, remains a central theoretical challenge. In this work, we prove exact learnability results for graph algorithms under bounded-degree and finite-precision constraints. Our approach follows a two-step process. First, we train an ensemble of multi-layer perceptrons (MLPs) to execute the local instructions of a single node. Second, during inference, we use the trained MLP ensemble as the update function within a graph neural network (GNN). Leveraging Neural Tangent Kernel (NTK) theory, we show that local instructions can be learned from a small training set, enabling the complete graph algorithm to be executed during inference without error and with high probability. To illustrate the learning power of our setting, we establish a rigorous learnability result for the LOCAL model of distributed computation. We further demonstrate positive learnability results for widely studied algorithms such as message flooding, breadth-first and depth-first search, and Bellman-Ford.
https://arxiv.org/abs/2601.23207
Academic Papers
svg
7af5f80b3fcb8ba9dde63e43cf9ac3be6652edd1a5b5a6bb3664a8d8ab0bc92b
2026-02-02T00:00:00-05:00
Evaluating the Viability of Additive Models to Predict Task Completion Time for 3D Interactions in Augmented Reality
arXiv:2601.23209v1 Announce Type: new Abstract: Additive models of interaction performance, such as the Keystroke-Level Model (KLM), are tools that allow designers to compare and optimize the performance of user interfaces by summing the predicted times for the atomic components of a specific interaction to predict the total time it would take to complete that interaction. There has been extensive work in creating such additive models for 2D interfaces, but this approach has rarely been explored for 3D user interfaces. We propose a KLM-style additive model, based on existing atomic task models in the literature, to predict task completion time for 3D interaction tasks. We performed two studies to evaluate the feasibility of this approach across multiple input modalities, with one study using a simple menu selection task and the other a more complex manipulation task. We found that several of the models from the literature predicted actual task performance with less than 20% error in both the menu selection and manipulation study. Overall, we found that additive models can predict both absolute and relative performance of input modalities with reasonable accuracy.
https://arxiv.org/abs/2601.23209
Academic Papers
svg
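The additive KLM-style prediction described here reduces to summing per-operator unit times. The sketch below illustrates the arithmetic only: the operator names and times are hypothetical placeholders, not values estimated in the two studies.

```python
# Hypothetical per-operator times (seconds) for a 3D interaction vocabulary.
unit_times = {
    "point":   1.1,   # point at a 3D target
    "select":  0.2,   # confirm a selection
    "grab":    0.4,   # grab an object
    "move":    1.5,   # carry the object to its destination
    "release": 0.3,   # release the object
}

def predict(sequence):
    """KLM-style additive prediction: sum the atomic operator times."""
    return sum(unit_times[op] for op in sequence)

menu_task = ["point", "select"]                      # simple menu selection
manip_task = ["point", "grab", "move", "release"]    # object manipulation
```

With calibrated unit times, the same table lets a designer compare interface variants (e.g., different input modalities) without running a full study for each one, which is the practical appeal of additive models.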
7a02907d8438e84ffd493875df9842f0ffb6078a318f9a5439677af9b1f7650f
2026-02-02T00:00:00-05:00
Multi-Agent Systems Should be Treated as Principal-Agent Problems
arXiv:2601.23211v1 Announce Type: new Abstract: Consider a multi-agent systems setup in which a principal (a supervisor agent) assigns subtasks to specialized agents and aggregates their responses into a single system-level output. A core property of such systems is information asymmetry: agents observe task-specific information, produce intermediate reasoning traces, and operate with different context windows. In isolation, such asymmetry is not problematic, since agents report truthfully to the principal when incentives are fully aligned. However, this assumption breaks down when incentives diverge. Recent evidence suggests that LLM-based agents can acquire their own goals, such as survival or self-preservation, a phenomenon known as scheming, and may deceive humans or other agents. This leads to agency loss: a gap between the principal's intended outcome and the realized system behavior. Drawing on core ideas from microeconomic theory, we argue that these characteristics, information asymmetry and misaligned goals, are best studied through the lens of principal-agent problems. We explain why multi-agent systems, both human-to-LLM and LLM-to-LLM, naturally induce information asymmetry under this formulation, and we use scheming, where LLM agents pursue covert goals, as a concrete case study. We show that recently introduced terminology used to describe scheming, such as covert subversion or deferred subversion, corresponds to well-studied concepts in the mechanism design literature, which not only characterizes the problem but also prescribes concrete mitigation strategies. More broadly, we argue for applying tools developed to study human agent behavior to the analysis of non-human agents.
https://arxiv.org/abs/2601.23211
Academic Papers
svg
86598d0aaf035fa1a14444a061501d9dec91e6d05fff1ecebf0aa517042b2e88
2026-02-02T00:00:00-05:00
A complete characterisation of conditional entropies
arXiv:2601.23213v1 Announce Type: new Abstract: Entropies are fundamental measures of uncertainty with central importance in information theory and statistics and applications across all the quantitative sciences. Under a natural set of operational axioms, the most general form of entropy is captured by the family of R\'enyi entropies, parameterized by a real number $\alpha$. Conditional entropy extends the notion of entropy by quantifying uncertainty from the viewpoint of an observer with access to potentially correlated side information. However, despite their significance and the emergence of various useful definitions, a complete characterization of measures of conditional entropy that satisfy a natural set of operational axioms has remained elusive. In this work, we provide a complete characterization of conditional entropy, defined through a set of axioms that are essential for any operationally meaningful definition: additivity for independent random variables, invariance under relabeling, and monotonicity under conditional mixing channels. We prove that the most general form of conditional entropy is captured by a family of measures that are exponential averages of R\'enyi entropies of the conditioned distribution and parameterized by a real parameter and a probability measure on the positive reals. Finally, we show that these quantities determine the rate of transformation under conditional mixing and provide a set of second laws of quantum thermodynamics with side information for states diagonal in the energy eigenbasis.
https://arxiv.org/abs/2601.23213
Academic Papers
svg
24c50395a8d0c2498546516513abf0f2d35ab03ab692cf81c0755ffd2fefeb1b
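The family of entropies discussed in the abstract above can be made concrete with a small sketch: the Rényi entropy of a distribution, and a simple average-over-side-information conditional version. The paper's full family uses exponential averages with a parameter measure; the plain arithmetic average below is only an illustrative special case.

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) in nats; alpha -> 1 recovers Shannon entropy."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(x * math.log(x) for x in p if x > 0)
    return math.log(sum(x ** alpha for x in p if x > 0)) / (1.0 - alpha)

def conditional_renyi_avg(joint, alpha):
    """Average over side information Y of the Rényi entropy of the
    conditioned distribution P_{X|Y=y}; joint is a list of
    (P(Y=y), [P(X=x|Y=y), ...]) pairs. Illustrative special case only."""
    return sum(py * renyi_entropy(p_x_given_y, alpha)
               for py, p_x_given_y in joint)

uniform = [0.25] * 4
print(renyi_entropy(uniform, 2.0))   # log 4: all alphas agree on a uniform law
```

On a uniform distribution every Rényi entropy equals the log of the alphabet size, which is a convenient sanity check for any implementation.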
2026-02-02T00:00:00-05:00
Tackling air quality with SAPIENS
arXiv:2601.23215v1 Announce Type: new Abstract: Air pollution is a chronic problem in large cities worldwide and awareness is rising as the long-term health implications become clearer. Vehicular traffic has been identified as a major contributor to poor air quality. In many cities, the publicly available air quality measurements and forecasts are coarse-grained in both space and time. However, real-time traffic intensity data is generally openly available in various forms and is fine-grained. In this paper, we present an in-depth study of pollution sensor measurements combined with traffic data from Mexico City. We analyse and model the relationship between traffic intensity and air quality with the aim of providing hyper-local, dynamic air quality forecasts. We developed an innovative method to represent traffic intensities by transforming simple colour-coded traffic maps into concentric ring-based descriptions, enabling improved characterisation of traffic conditions. Using Partial Least Squares Regression, we predict pollution levels based on these newly defined traffic intensities. The model was optimised with various training samples to achieve the best predictive performance and gain insights into the relationship between pollutants and traffic. The workflow we have designed is straightforward and adaptable to other contexts, such as other cities, beyond the specifics of our dataset.
https://arxiv.org/abs/2601.23215
Academic Papers
svg
a5f385f3abd0a2a273cabb925f88174c86db42d25db15b3ff71cb1e6746597b9
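The regression step named in the abstract above, Partial Least Squares, can be sketched in pure Python for the single-component, single-target case: project the features onto the direction of maximal covariance with the target, then regress the target on that one score. The "ring intensity" feature names are illustrative, not the paper's actual variables.

```python
# Minimal single-component PLS1 sketch (pure Python, illustrative only):
# one latent direction, one target variable.

def pls1_one_component(X, y):
    n, p = len(X), len(X[0])
    # Center features and target.
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [v - ym for v in y]
    # Weight vector w proportional to X'y: direction of max covariance with y.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = Xc w, then regress y on the single score.
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)

    def predict(row):
        score = sum((row[j] - xm[j]) * w[j] for j in range(p))
        return ym + b * score
    return predict

# Toy data: "pollution" depends on the first ring intensity only.
X = [[1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]]
y = [2.0, 4.0, 6.0, 8.0]
f = pls1_one_component(X, y)
print(round(f([2.5, 0.0]), 6))   # 5.0
```

Production implementations (e.g. multi-component PLS2 with deflation) extend this by extracting several latent directions and handling multiple targets jointly.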
2026-02-02T00:00:00-05:00
Secure Integrated Sensing and Communication against Communication and Sensing Eavesdropping
arXiv:2601.23216v1 Announce Type: new Abstract: Sensing privacy and communication confidentiality play fundamentally different but interconnected roles in adversarial wireless environments. Capturing this interplay within a single physical-layer framework is particularly challenging in integrated sensing and communication (ISAC) systems, where the same waveform simultaneously serves dual purposes. We study a secure ISAC system in which a monostatic transmitter simultaneously sends a confidential message to a legitimate receiver and senses an environmental state, while a passive adversary attempts both message decoding and state estimation. We partially characterize the fundamental trade-offs among three performance measures: the transmitter's secrecy rate, its detection exponent, and the adversary's detection exponent. Beyond the joint input distribution that governs overall performance, the trade-offs are further shaped by the transmitter's ability to extract keys via feedback and hide both the content and structure of the codewords via wiretap and resolvability codes. We derive an achievable region, and illustrate the resulting design trade-offs through a numerical example.
https://arxiv.org/abs/2601.23216
Academic Papers
svg
8a091140180ff7db11f6e014b8f37e1eebbdc2abd91bd5cf07f70d1e91b8bb41
2026-02-02T00:00:00-05:00
MonoScale: Scaling Multi-Agent System with Monotonic Improvement
arXiv:2601.23219v1 Announce Type: new Abstract: In recent years, LLM-based multi-agent systems (MAS) have advanced rapidly, using a router to decompose tasks and delegate subtasks to specialized agents. A natural way to expand capability is to scale up the agent pool by continually integrating new functional agents or tool interfaces, but naive expansion can trigger performance collapse when the router cold-starts on newly added, heterogeneous, and unreliable agents. We propose MonoScale, an expansion-aware update framework that proactively generates a small set of agent-conditioned familiarization tasks, harvests evidence from both successful and failed interactions, and distills it into auditable natural-language memory to guide future routing. We formalize sequential augmentation as a contextual bandit and perform trust-region memory updates, yielding a monotonic non-decreasing performance guarantee across onboarding rounds. Experiments on GAIA and Humanity's Last Exam show stable gains as the agent pool grows, outperforming naive scale-up and strong-router fixed-pool baselines.
https://arxiv.org/abs/2601.23219
Academic Papers
svg
e79e7a88fa5fd5011365ef61a4814e211e03e7fb7fbf77324e80e279cc4b1337
2026-02-02T00:00:00-05:00
Med-Scout: Curing MLLMs' Geometric Blindness in Medical Perception via Geometry-Aware RL Post-Training
arXiv:2601.23220v1 Announce Type: new Abstract: Despite the linguistic prowess of recent Multimodal Large Language Models (MLLMs) in medical diagnosis, we find that even state-of-the-art MLLMs suffer from a critical perceptual deficit: geometric blindness. This failure to ground outputs in objective geometric constraints leads to plausible yet factually incorrect hallucinations, rooted in training paradigms that prioritize linguistic fluency over geometric fidelity. This paper introduces Med-Scout, a novel framework that "cures" this blindness via Reinforcement Learning (RL) that leverages the intrinsic geometric logic latent within unlabeled medical images. Instead of relying on costly expert annotations, Med-Scout derives verifiable supervision signals through three strategic proxy tasks: Hierarchical Scale Localization, Topological Jigsaw Reconstruction, and Anomaly Consistency Detection. To rigorously quantify this deficit, we present Med-Scout-Bench, a new benchmark specifically designed to evaluate geometric perception. Extensive evaluations show that Med-Scout significantly mitigates geometric blindness, outperforming leading proprietary and open-source MLLMs by over 40% on our benchmark. Furthermore, this enhanced geometric perception generalizes to broader medical understanding, achieving superior results on radiological and comprehensive medical VQA tasks.
https://arxiv.org/abs/2601.23220
Academic Papers
svg
dd083c486dc8e080009d10b306b508cedbf6ca944353a6d083e4f90a7e3151d8
2026-02-02T00:00:00-05:00
Optimal Fair Aggregation of Crowdsourced Noisy Labels using Demographic Parity Constraints
arXiv:2601.23221v1 Announce Type: new Abstract: As acquiring reliable ground-truth labels is usually costly, or infeasible, crowdsourcing and aggregation of noisy human annotations is the typical resort. Aggregating subjective labels, though, may amplify individual biases, particularly regarding sensitive features, raising fairness concerns. Nonetheless, fairness in crowdsourced aggregation remains largely unexplored, with no existing convergence guarantees and only limited post-processing approaches for enforcing $\varepsilon$-fairness under demographic parity. We address this gap by analyzing the fairness of crowdsourced aggregation methods within the $\varepsilon$-fairness framework, for Majority Vote and Optimal Bayesian aggregation. In the small-crowd regime, we derive an upper bound on the fairness gap of Majority Vote in terms of the fairness gaps of the individual annotators. We further show that the fairness gap of the aggregated consensus converges exponentially fast to that of the ground-truth under interpretable conditions. Since ground-truth itself may still be unfair, we generalize a state-of-the-art multiclass fairness post-processing algorithm from the continuous to the discrete setting, which enforces strict demographic parity constraints on any aggregation rule. Experiments on synthetic and real datasets demonstrate the effectiveness of our approach and corroborate the theoretical insights.
https://arxiv.org/abs/2601.23221
Academic Papers
svg
33ad64e3f08949e22b6624d598d6f394623f0bac2f93ed1145faa603ffb6c117
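The two central quantities in the abstract above, the Majority Vote consensus and the demographic parity fairness gap, are simple to compute. The annotations and group labels below are synthetic, purely for illustration.

```python
# Demographic parity gap of a Majority Vote consensus vs. its annotators.
# All data here is synthetic/illustrative.
from statistics import mode

def dp_gap(labels, groups):
    """|P(label=1 | A=0) - P(label=1 | A=1)|: the demographic parity gap."""
    rate = {}
    for a in (0, 1):
        idx = [i for i, g in enumerate(groups) if g == a]
        rate[a] = sum(labels[i] for i in idx) / len(idx)
    return abs(rate[0] - rate[1])

# Three annotators label six items; groups holds each item's sensitive attribute.
annots = [
    [1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 0],
]
groups = [0, 0, 0, 1, 1, 1]

# Majority Vote per item (odd crowd size, so no ties on binary labels).
consensus = [mode(col) for col in zip(*annots)]
print(consensus, round(dp_gap(consensus, groups), 4))
```

The paper's small-crowd bound relates `dp_gap(consensus, ...)` to the `dp_gap` values of the individual annotator rows; computing both sides on real data is a direct way to check such a bound empirically.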
2026-02-02T00:00:00-05:00
Region-Normalized DPO for Medical Image Segmentation under Noisy Judges
arXiv:2601.23222v1 Announce Type: new Abstract: While dense pixel-wise annotations remain the gold standard for medical image segmentation, they are costly to obtain and limit scalability. In contrast, many deployed systems already produce inexpensive automatic quality-control (QC) signals like model agreement, uncertainty measures, or learned mask-quality scores which can be used for further model training without additional ground-truth annotation. However, these signals can be noisy and biased, making preference-based fine-tuning susceptible to harmful updates. We study Direct Preference Optimization (DPO) for segmentation from such noisy judges using proposals generated by a supervised base segmenter trained on a small labeled set. We find that outcomes depend strongly on how preference pairs are mined: selecting the judge's top-ranked proposal can improve peak performance when the judge is reliable, but can amplify harmful errors under weaker judges. We propose Region-Normalized DPO (RN-DPO), a segmentation-aware objective which normalizes preference updates by the size of the disagreement region between masks, reducing the leverage of harmful comparisons and improving optimization stability. Across two medical datasets and multiple regimes, RN-DPO improves sustained performance and stabilizes preference-based fine-tuning, outperforming standard DPO and strong baselines without requiring additional pixel annotations.
https://arxiv.org/abs/2601.23222
Academic Papers
svg
909b85c9c45374711eed3b437d0223cb7549d73f0a9240f24b774fd9b58e6c53
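The normalization idea in the abstract above can be sketched as a DPO-style preference loss divided by the size of the disagreement region between the two candidate masks. This is one plausible reading for illustration only; the paper's exact RN-DPO objective may differ.

```python
# Hedged sketch: a DPO-style pairwise loss where the update is normalized
# by the number of pixels on which the preferred and dispreferred masks
# disagree, so comparisons with large disagreement regions get
# proportionally less leverage. Illustrative, not the paper's objective.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rn_dpo_loss(logp_w, logp_l, logp_w_ref, logp_l_ref, mask_w, mask_l, beta=0.1):
    # Size of the disagreement region between the two flattened masks.
    disagree = sum(a != b for a, b in zip(mask_w, mask_l))
    # Standard DPO margin: policy log-ratio minus reference log-ratio.
    margin = beta * ((logp_w - logp_w_ref) - (logp_l - logp_l_ref))
    loss = -math.log(sigmoid(margin))
    return loss / max(disagree, 1)   # region-normalized preference loss

# Tiny flattened 2x2 "masks": the candidates differ in exactly one pixel.
print(rn_dpo_loss(-1.0, -2.0, -1.5, -1.5, [1, 1, 0, 0], [1, 0, 0, 0]))
```

With the margin held fixed, quadrupling the disagreement region quarters the per-comparison loss, which is the stabilizing effect the abstract describes.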
2026-02-02T00:00:00-05:00
Are you going to finish that? A Practical Study of the Tokenization Boundary Problem
arXiv:2601.23223v1 Announce Type: new Abstract: Language models (LMs) are trained over sequences of tokens, whereas users interact with LMs via text. This mismatch gives rise to the partial token problem, which occurs when a user ends their prompt in the middle of the expected next-token, leading to distorted next-token predictions. Although this issue has been studied using arbitrary character prefixes, its prevalence and severity in realistic prompts respecting word boundaries remains underexplored. In this work, we identify three domains where token and "word" boundaries often do not line up: languages that do not use whitespace, highly compounding languages, and code. In Chinese, for example, up to 25% of word boundaries do not line up with token boundaries, making even natural, word-complete prompts susceptible to this problem. We systematically construct semantically natural prompts ending with a partial token; in experiments, we find that they constitute a serious failure mode: frontier LMs consistently place three orders of magnitude less probability on the correct continuation compared to when the prompt is "backed-off" to be token-aligned. This degradation does not diminish with scale and often worsens for larger models. Finally, we evaluate inference-time mitigations to the partial token problem and validate the effectiveness of recent exact solutions. Overall, we demonstrate the scale and severity of probability distortion caused by tokenization in realistic use cases, and provide practical recommendations for model inference providers.
https://arxiv.org/abs/2601.23223
Academic Papers
svg
a97d627bc6d1f54ef56cf21e69fa3107c82a41e56e4b567e85855d8bfb9a7edf
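The compounding-language case in the abstract above can be demonstrated with a toy greedy tokenizer and a hypothetical vocabulary (not any real model's): a word-complete prompt can still end mid-token whenever the model would have merged the word into a longer compound token.

```python
# Toy illustration of the partial-token problem with a hypothetical
# greedy longest-match tokenizer. The vocabulary is invented for the
# example; real BPE vocabularies behave analogously.
VOCAB = ["Lehr", "er", "Lehrerzimmer", "zimmer",
         "L", "e", "h", "r", "z", "i", "m"]

def tokenize(text):
    """Greedy longest-match tokenization over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        match = max((v for v in VOCAB if text.startswith(v, i)), key=len)
        tokens.append(match)
        i += len(match)
    return tokens

full = tokenize("Lehrerzimmer")   # the compound encodes as a single token
partial = tokenize("Lehrer")      # a word-complete prompt, yet its tokens
                                  # are NOT a prefix of the full encoding
print(full, partial)
```

Because `["Lehr", "er"]` never appears as a prefix of `["Lehrerzimmer"]` in training data, a model conditioned on the word-complete prompt sits off the token sequences it was trained on, which is exactly the distortion the paper measures.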
2026-02-02T00:00:00-05:00
Video-o3: Native Interleaved Clue Seeking for Long Video Multi-Hop Reasoning
arXiv:2601.23224v1 Announce Type: new Abstract: Existing multimodal large language models for long-video understanding predominantly rely on uniform sampling and single-turn inference, limiting their ability to identify sparse yet critical evidence amid extensive redundancy. We introduce Video-o3, a novel framework that supports iterative discovery of salient visual clues, fine-grained inspection of key segments, and adaptive termination once sufficient evidence is acquired. Technically, we address two core challenges in interleaved tool invocation. First, to mitigate attention dispersion induced by the heterogeneity of reasoning and tool-calling, we propose Task-Decoupled Attention Masking, which isolates per-step concentration while preserving shared global context. Second, to control context length growth in multi-turn interactions, we introduce a Verifiable Trajectory-Guided Reward that balances exploration coverage with reasoning efficiency. To support training at scale, we further develop a data synthesis pipeline and construct Seeker-173K, comprising 173K high-quality tool-interaction trajectories for effective supervised and reinforcement learning. Extensive experiments show that Video-o3 substantially outperforms state-of-the-art methods, achieving 72.1% accuracy on MLVU and 46.5% on Video-Holmes. These results demonstrate Video-o3's strong multi-hop evidence-seeking and reasoning capabilities, and validate the effectiveness of native tool invocation in long-video scenarios.
https://arxiv.org/abs/2601.23224
Academic Papers
svg
01ef36971b250b3fa1f6338e734ce9d9d612bf47f09b823c6ca417afd91e56e4
2026-02-02T00:00:00-05:00
Agile Reinforcement Learning through Separable Neural Architecture
arXiv:2601.23225v1 Announce Type: new Abstract: Deep reinforcement learning (RL) is increasingly deployed in resource-constrained environments, yet the go-to function approximators - multilayer perceptrons (MLPs) - are often parameter-inefficient due to an imperfect inductive bias for the smooth structure of many value functions. This mismatch can also hinder sample efficiency and slow policy learning in this capacity-limited regime. Although model compression techniques exist, they operate post-hoc and do not improve learning efficiency. Recent spline-based separable architectures - such as Kolmogorov-Arnold Networks (KANs) - have been shown to offer parameter efficiency but are widely reported to exhibit significant computational overhead, especially at scale. In seeking to address these limitations, this work introduces SPAN (SPline-based Adaptive Networks), a novel function approximation approach to RL. SPAN adapts the low rank KHRONOS framework by integrating a learnable preprocessing layer with a separable tensor product B-spline basis. SPAN is evaluated across discrete (PPO) and high-dimensional continuous (SAC) control tasks, as well as offline settings (Minari/D4RL). Empirical results demonstrate that SPAN achieves a 30-50% improvement in sample efficiency and 1.3-9 times higher success rates across benchmarks compared to MLP baselines. Furthermore, SPAN demonstrates superior anytime performance and robustness to hyperparameter variations, suggesting it as a viable, high performance alternative for learning intrinsically efficient policies in resource-limited settings.
https://arxiv.org/abs/2601.23225
Academic Papers
svg
c6b8c6186f8720a71b557c5ff530289e3be36f0597a9568d7bc7b581a30d5b7b
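The spline-based separable architecture in the abstract above builds on B-spline bases; the one-dimensional basis functions at its core can be evaluated with the standard Cox-de Boor recursion. The knot vector and degree below are arbitrary illustrative choices.

```python
# Cox-de Boor recursion for B-spline basis functions, the 1D building
# block of separable tensor-product spline layers. Illustrative sketch.

def bspline_basis(i, k, t, knots):
    """Value of the i-th B-spline basis of degree k at parameter t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Partition of unity: degree-2 bases sum to one inside the valid domain.
knots = [0, 1, 2, 3, 4, 5, 6]
total = sum(bspline_basis(i, 2, 2.5, knots) for i in range(4))
print(total)   # 1.0
```

In a separable tensor-product layer, each input dimension gets its own such basis expansion and the multivariate function is a sum of products of these 1D evaluations, which is where the parameter efficiency comes from.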
2026-02-02T00:00:00-05:00
Toward Digital Twins in 3D IC Packaging: A Critical Review of Physics, Data, and Hybrid Architectures
arXiv:2601.23226v1 Announce Type: new Abstract: Three-dimensional integrated circuit (3D IC) packaging and heterogeneous integration have emerged as central pillars of contemporary semiconductor scaling. Yet, the multi-physics coupling inherent to stacked architectures, manifesting as thermal hot spots, warpage-induced stresses, and interconnect aging, demands monitoring and control capabilities that surpass traditional offline metrology. Although Digital Twin (DT) technology provides a principled route to real-time reliability management, the existing literature remains fragmented and frequently blurs the distinction between static multiphysics simulation workflows and truly dynamic, closed-loop twins. This critical review distinguishes itself by addressing these deficiencies through three specific contributions. First, we clarify the Digital Twin hierarchy to resolve terminological ambiguity between digital models, shadows, and twins. Second, we synthesize three foundational enabling technologies: (1) physics-based modeling, emphasizing the shift from computationally intensive finite-element analysis (FEA) to real-time surrogate models; (2) data-driven paradigms, highlighting virtual metrology (VM) for inferring latent metrics; and (3) in-situ sensing, the nervous system coupling the physical stack to its virtual counterpart. Third, beyond a descriptive survey, we propose a unified hybrid DT architecture that leverages physics-informed machine learning (e.g., PINNs) to reconcile data scarcity with latency constraints. Finally, we outline a standards-aligned roadmap incorporating IEEE 1451 and UCIe protocols to accelerate the transition from passive digital shadows to autonomous, self-optimizing Digital Twins for 3D IC manufacturing and field operation.
https://arxiv.org/abs/2601.23226
Academic Papers
svg
1e8650fff3809bdf39741e96dd44aa091cac6016d221febf441aab550344f5db
2026-02-02T00:00:00-05:00
Scaling Multiagent Systems with Process Rewards
arXiv:2601.23228v1 Announce Type: new Abstract: While multiagent systems have shown promise for tackling complex tasks via specialization, finetuning multiple agents simultaneously faces two key challenges: (1) credit assignment across agents, and (2) sample efficiency of expensive multiagent rollouts. In this work, we propose finetuning multiagent systems with per-action process rewards from AI feedback (MAPPA) to address both. Through assigning credit to individual agent actions rather than only at task completion, MAPPA enables fine-grained supervision without ground truth labels while extracting maximal training signal from each rollout. We demonstrate our approach on competition math problems and tool-augmented data analysis tasks. On unseen math problems, MAPPA achieves +5.0--17.5pp on AIME and +7.8--17.2pp on AMC. For data analysis tasks, our method improves success rate by +12.5pp while quality metrics improve by up to 30%, validating that per-action supervision can lead to improvements across different multiagent systems in various domains. By addressing these challenges, our work takes a first step toward scaling multiagent systems for complex, long-horizon tasks with minimal human supervision.
https://arxiv.org/abs/2601.23228
Academic Papers
svg
c07f61daa25db99c62e279435b072b37111ccedc6c2fd422547776a29beabd67
2026-02-02T00:00:00-05:00
Strongly Polynomial Time Complexity of Policy Iteration for $L_\infty$ Robust MDPs
arXiv:2601.23229v1 Announce Type: new Abstract: Markov decision processes (MDPs) are a fundamental model in sequential decision making. Robust MDPs (RMDPs) extend this framework by allowing uncertainty in transition probabilities and optimizing against the worst-case realization of that uncertainty. In particular, $(s, a)$-rectangular RMDPs with $L_\infty$ uncertainty sets form a fundamental and expressive model: they subsume classical MDPs and turn-based stochastic games. We consider this model with discounted payoffs. The existence of polynomial and strongly-polynomial time algorithms is a fundamental problem for these optimization models. For MDPs, linear programming yields polynomial-time algorithms for any arbitrary discount factor, and the seminal work of Ye established strongly-polynomial time for a fixed discount factor. The generalization of such results to RMDPs has remained an important open problem. In this work, we show that a robust policy iteration algorithm runs in strongly-polynomial time for $(s, a)$-rectangular $L_\infty$ RMDPs with a constant (fixed) discount factor, resolving an important algorithmic question.
https://arxiv.org/abs/2601.23229
Academic Papers
svg
7fffdd06374f571c29c91409e9d27354d16b7cbdbcfcb92519ee6ee8840c4006
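The model in the abstract above admits a compact sketch. The paper analyzes policy iteration; the simpler value-iteration variant below only illustrates the worst-case Bellman backup for an $(s,a)$-rectangular $L_\infty$ uncertainty set around a nominal kernel, where the adversary's inner minimization can be solved greedily.

```python
# Hedged sketch of robust value iteration for (s,a)-rectangular L_inf RMDPs.
# The adversary picks, per (s,a), the worst transition distribution within an
# L_inf ball around the nominal kernel, intersected with the simplex.

def worst_case_value(p_hat, v, eps):
    """min_p p.v over {p : |p - p_hat|_inf <= eps} on the simplex, solved
    greedily by shifting mass from high-value to low-value states."""
    n = len(v)
    p = list(p_hat)
    add = [eps] * n                  # remaining room to increase each entry
    sub = [eps] * n                  # remaining room to decrease each entry
    order = sorted(range(n), key=lambda i: v[i])
    lo, hi = 0, n - 1
    while lo < hi:
        i, j = order[lo], order[hi]
        moved = min(add[i], 1.0 - p[i], sub[j], p[j])
        p[i] += moved; p[j] -= moved
        add[i] -= moved; sub[j] -= moved
        if add[i] <= 1e-12 or p[i] >= 1.0 - 1e-12:   # low-value state saturated
            lo += 1
        if sub[j] <= 1e-12 or p[j] <= 1e-12:         # high-value state drained
            hi -= 1
    return sum(pi * vi for pi, vi in zip(p, v))

def robust_value_iteration(P, R, gamma, eps, iters=200):
    """P[s][a] = nominal next-state distribution, R[s][a] = reward."""
    nS = len(R)
    V = [0.0] * nS
    for _ in range(iters):
        V = [max(R[s][a] + gamma * worst_case_value(P[s][a], V, eps)
                 for a in range(len(R[s])))
             for s in range(nS)]
    return V
```

For example, with nominal distribution `[0.5, 0.5]`, values `[0.0, 1.0]`, and radius `0.2`, the adversary moves 0.2 of mass onto the bad state, giving a worst-case value of 0.3 instead of 0.5.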
2026-02-02T00:00:00-05:00
ShotFinder: Imagination-Driven Open-Domain Video Shot Retrieval via Web Search
arXiv:2601.23232v1 Announce Type: new Abstract: In recent years, large language models (LLMs) have made rapid progress in information retrieval, yet existing research has mainly focused on text or static multimodal settings. Open-domain video shot retrieval, which involves richer temporal structure and more complex semantics, still lacks systematic benchmarks and analysis. To fill this gap, we introduce ShotFinder, a benchmark that formalizes editing requirements as keyframe-oriented shot descriptions and introduces five types of controllable single-factor constraints: Temporal order, Color, Visual style, Audio, and Resolution. We curate 1,210 high-quality samples from YouTube across 20 thematic categories, using large models for generation with human verification. Based on the benchmark, we propose ShotFinder, a text-driven three-stage retrieval and localization pipeline: (1) query expansion via video imagination, (2) candidate video retrieval with a search engine, and (3) description-guided temporal localization. Experiments on multiple closed-source and open-source models reveal a significant gap to human performance, with clear imbalance across constraints: temporal localization is relatively tractable, while color and visual style remain major challenges. These results reveal that open-domain video shot retrieval is still a critical capability that multimodal large models have yet to overcome.
https://arxiv.org/abs/2601.23232
Academic Papers
svg
f73086b8f0ff74f22cfa5e4ed064ececf0b7e8ace4fb67de19aa052fb45743d4
2026-02-02T00:00:00-05:00
Sequence Diffusion Model for Temporal Link Prediction in Continuous-Time Dynamic Graph
arXiv:2601.23233v1 Announce Type: new Abstract: Temporal link prediction in dynamic graphs is a fundamental problem in many real-world systems. Existing temporal graph neural networks mainly focus on learning representations of historical interactions. Despite their strong performance, these models are still purely discriminative, producing point estimates for future links and lacking an explicit mechanism to capture the uncertainty and sequential structure of future temporal interactions. In this paper, we propose SDG, a novel sequence-level diffusion framework that unifies dynamic graph learning with generative denoising. Specifically, SDG injects noise into the entire historical interaction sequence and jointly reconstructs all interaction embeddings through a conditional denoising process, thereby enabling the model to capture more comprehensive interaction distributions. To align the generative process with temporal link prediction, we employ a cross-attention denoising decoder to guide the reconstruction of the destination sequence and optimize the model in an end-to-end manner. Extensive experiments on various temporal graph benchmarks show that SDG consistently achieves state-of-the-art performance in the temporal link prediction task.
https://arxiv.org/abs/2601.23233
Academic Papers
svg
9af33e6bfa221e53b91dd463e988610675504d1788bae5b1d8b9fbd28853fc57
2026-02-02T00:00:00-05:00
YuriiFormer: A Suite of Nesterov-Accelerated Transformers
arXiv:2601.23236v1 Announce Type: new Abstract: We propose a variational framework that interprets transformer layers as iterations of an optimization algorithm acting on token embeddings. In this view, self-attention implements a gradient step of an interaction energy, while MLP layers correspond to gradient updates of a potential energy. Standard GPT-style transformers emerge as vanilla gradient descent on the resulting composite objective, implemented via Lie-Trotter splitting between these two energy functionals. This perspective enables principled architectural design using classical optimization ideas. As a proof of concept, we introduce a Nesterov-style accelerated transformer that preserves the same attention and MLP oracles. The resulting architecture consistently outperforms a nanoGPT baseline on TinyStories and OpenWebText, demonstrating that optimization-theoretic insights can translate into practical gains.
https://arxiv.org/abs/2601.23236
Academic Papers
svg
f8dce92a85b48b156e655cd7749fdf8ec9f5e6ca41a9783cea227c579b3889e8
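The layers-as-optimizer view in the abstract above amounts to swapping vanilla gradient descent for Nesterov's method while keeping the same gradient "oracle". The toy comparison below shows only that mechanical difference on a scalar quadratic; it is not the transformer architecture itself, and the step size and momentum are arbitrary choices.

```python
# Vanilla gradient descent vs. a Nesterov-style update with the same
# gradient oracle. Toy 1D objective: f(x) = (x - 3)^2.

def gd(grad, x0, lr, steps):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def nesterov(grad, x0, lr, steps, mu=0.9):
    x, v = x0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x + mu * v)   # gradient at the look-ahead point
        x = x + v
    return x

grad = lambda x: 2.0 * (x - 3.0)
print(round(gd(grad, 0.0, 0.05, 200), 4),
      round(nesterov(grad, 0.0, 0.05, 200), 4))   # both converge to 3.0
```

The architectural analogue keeps the attention and MLP "gradient" computations untouched and only changes how their outputs are combined across layers, mirroring the momentum bookkeeping above.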
2026-02-02T00:00:00-05:00
Applications of QR-based Vector-Valued Rational Approximation
arXiv:2601.23237v1 Announce Type: new Abstract: Several applications of the QR-AAA algorithm, a greedy scheme for vector-valued rational approximation, are presented. The focus is on demonstrating the flexibility and practical effectiveness of QR-AAA in a variety of computational settings, including Stokes flow computation, multivariate rational approximation, function extension, the development of novel quadrature methods and near-field approximation in the boundary element method.
https://arxiv.org/abs/2601.23237
Academic Papers
svg
05c11e281a14c949b2931be9962df17cda8b322f884725ec9d67011b8f93ac7e
2026-02-02T00:00:00-05:00
How well do generative models solve inverse problems? A benchmark study
arXiv:2601.23238v1 Announce Type: new Abstract: Generative learning generates high dimensional data based on low dimensional conditions, also called prompts. Therefore, generative learning algorithms are eligible for solving (Bayesian) inverse problems. In this article we compare a traditional Bayesian inverse approach based on a forward regression model and a prior sampled with the Markov Chain Monte Carlo method with three state-of-the-art generative learning models, namely conditional Generative Adversarial Networks, Invertible Neural Networks and Conditional Flow Matching. We apply them to a problem of gas turbine combustor design where we map six independent design parameters to three performance labels. We propose several metrics for the evaluation of these inverse design approaches and measure the accuracy of the labels of the generated designs along with the diversity. We also study the performance as a function of the training dataset size. Our benchmark has a clear winner, as Conditional Flow Matching consistently outperforms all competing approaches.
https://arxiv.org/abs/2601.23238
Academic Papers
svg