Dataset schema:
- id: string, length 64
- published: string, length 19–25
- title: string, length 7–262
- description: string, length 6–54.4k
- link: string, length 31–227
- category: string, 6 classes
- image: string, length 3–247
70e0533314d64cb0a4646d605446ddec8ec6d66495fb736dcdb1bece2e1f3a5c
2026-02-02T00:00:00-05:00
On the convergence and efficiency of splitting schemes for the Cahn-Hilliard-Biot model
arXiv:2601.22854v1 Announce Type: new Abstract: In this paper, we present a novel solution strategy for the Cahn-Hilliard-Biot model, a three-way coupled system that features the interplay of solid phase separation, fluid dynamics, and elastic deformations in porous media. It is a phase-field model that combines the Cahn-Hilliard regularized interface equation and Biot's equations of poroelasticity. Solving the system poses significant challenges due to its coupled, nonlinear, and non-convex nature. The main goal of this work is to provide a consistent and efficient solution strategy. With this in mind, we introduce a semi-implicit time discretization such that the resulting discrete system is equivalent to a convex minimization problem. Then, using abstract theory for convex problems, we prove the convergence of an alternating minimization method to the time-discrete system. The solution strategy is relatively flexible in terms of spatial discretization, although we require standard inverse inequalities for the guaranteed convergence of the alternating minimization method. Finally, we perform some numerical experiments that show the promise of the proposed solution strategy, both in terms of efficiency and robustness.
https://arxiv.org/abs/2601.22854
Academic Papers
svg
0b6cf5aaca5d6d27e9d13ee9da853672192366f857bc31785342dd03b7f55c04
2026-02-02T00:00:00-05:00
OptiMAG: Structure-Semantic Alignment via Unbalanced Optimal Transport
arXiv:2601.22856v1 Announce Type: new Abstract: Multimodal Attributed Graphs (MAGs) have been widely adopted for modeling complex systems by integrating multi-modal information, such as text and images, on nodes. However, we identify a discrepancy between the implicit semantic structure induced by different modality embeddings and the explicit graph structure. For instance, neighbors in the explicit graph structure may be close in one modality but distant in another. Since existing methods typically perform message passing over the fixed explicit graph structure, they inadvertently aggregate dissimilar features, introducing modality-specific noise and impeding effective node representation learning. To address this, we propose OptiMAG, an Unbalanced Optimal Transport-based regularization framework. OptiMAG employs the Fused Gromov-Wasserstein distance to explicitly guide cross-modal structural consistency within local neighborhoods, effectively mitigating structural-semantic conflicts. Moreover, a KL divergence penalty enables adaptive handling of cross-modal inconsistencies. This framework can be seamlessly integrated into existing multimodal graph models, acting as an effective drop-in regularizer. Experiments demonstrate that OptiMAG consistently outperforms baselines across multiple tasks, ranging from graph-centric tasks (e.g., node classification, link prediction) to multimodal-centric generation tasks (e.g., graph2text, graph2image). The source code will be available upon acceptance.
https://arxiv.org/abs/2601.22856
Academic Papers
svg
cd44945bd918c1e7e042ada3a99efaa0d99b017591dad7710620b0923be250ec
2026-02-02T00:00:00-05:00
Learning to Build Shapes by Extrusion
arXiv:2601.22858v1 Announce Type: new Abstract: We introduce Text Encoded Extrusion (TEE), a text-based representation that expresses mesh construction as sequences of face extrusions rather than polygon lists, and a method for generating 3D meshes from TEE using a large language model (LLM). By learning extrusion sequences that assemble a mesh, similar to the way artists create meshes, our approach naturally supports arbitrary output face counts and produces manifold meshes by design, in contrast to recent transformer-based models. The learnt extrusion sequences can also be applied to existing meshes - enabling editing in addition to generation. To train our model, we decompose a library of quadrilateral meshes with non-self-intersecting face loops into constituent loops, which can be viewed as their building blocks, and finetune an LLM on the steps for reassembling the meshes by performing a sequence of extrusions. We demonstrate that our representation enables reconstruction, novel shape synthesis, and the addition of new features to existing meshes.
https://arxiv.org/abs/2601.22858
Academic Papers
svg
df2ffd83371bcfbf30147d2e160b3e992c02323c43e94aa2f66093028b1ebf39
2026-02-02T00:00:00-05:00
MEnvAgent: Scalable Polyglot Environment Construction for Verifiable Software Engineering
arXiv:2601.22859v1 Announce Type: new Abstract: The evolution of Large Language Model (LLM) agents for software engineering (SWE) is constrained by the scarcity of verifiable datasets, a bottleneck stemming from the complexity of constructing executable environments across diverse languages. To address this, we introduce MEnvAgent, a Multi-language framework for automated Environment construction that facilitates scalable generation of verifiable task instances. MEnvAgent employs a multi-agent Planning-Execution-Verification architecture to autonomously resolve construction failures and integrates a novel Environment Reuse Mechanism that reduces computational overhead by incrementally patching historical environments. Evaluations on MEnvBench, a new benchmark comprising 1,000 tasks across 10 languages, demonstrate that MEnvAgent outperforms baselines, improving Fail-to-Pass (F2P) rates by 8.6% while reducing time costs by 43%. Additionally, we demonstrate the utility of MEnvAgent by constructing MEnvData-SWE, the largest open-source polyglot dataset of realistic verifiable Docker environments to date, alongside solution trajectories that enable consistent performance gains on SWE tasks across a wide range of models. Our code, benchmark, and dataset are available at https://github.com/ernie-research/MEnvAgent.
https://arxiv.org/abs/2601.22859
Academic Papers
svg
0ea559fc4952fb1b04c8f191c9d2fa8b1bc8b77a3c8b407228b7f33171c8e5ca
2026-02-02T00:00:00-05:00
Bayesian Interpolating Neural Network (B-INN): a scalable and reliable Bayesian model for large-scale physical systems
arXiv:2601.22860v1 Announce Type: new Abstract: Neural networks and machine learning models for uncertainty quantification suffer from limited scalability and poor reliability compared to their deterministic counterparts. In industry-scale active learning settings, where generating a single high-fidelity simulation may require days or weeks of computation and produce data volumes on the order of gigabytes, they quickly become impractical. This paper proposes a scalable and reliable Bayesian surrogate model, termed the Bayesian Interpolating Neural Network (B-INN). The B-INN combines high-order interpolation theory with tensor decomposition and an alternating direction algorithm to enable effective dimensionality reduction without compromising predictive accuracy. We theoretically show that the function space of a B-INN is a subset of that of Gaussian processes, while its Bayesian inference exhibits linear complexity, $\mathcal{O}(N)$, with respect to the number of training samples. Numerical experiments demonstrate that B-INNs can be 20 to 10,000 times faster, with robust uncertainty estimation, compared to Bayesian neural networks and Gaussian processes. These capabilities make B-INN a practical foundation for uncertainty-driven active learning in large-scale industrial simulations, where computational efficiency and robust uncertainty calibration are paramount.
https://arxiv.org/abs/2601.22860
Academic Papers
svg
3bb20934a1a9054c6fa7faedd8e49a6d4367758385a8ec16a5f1882b821eab3b
2026-02-02T00:00:00-05:00
Under-Canopy Terrain Reconstruction in Dense Forests Using RGB Imaging and Neural 3D Reconstruction
arXiv:2601.22861v1 Announce Type: new Abstract: Mapping the terrain and understory hidden beneath dense forest canopies is of great interest for numerous applications such as search and rescue, trail mapping, forest inventory tasks, and more. Existing solutions rely on specialized sensors: either heavy, costly airborne LiDAR, or Airborne Optical Sectioning (AOS), which uses thermal synthetic aperture photography and is tailored for person detection. We introduce a novel approach for the reconstruction of canopy-free, photorealistic ground views using only conventional RGB images. Our solution is based on the celebrated Neural Radiance Fields (NeRF), a recent 3D reconstruction method. Additionally, we include specific image capture considerations, which dictate the illumination needed to successfully expose the scene beneath the canopy. To better cope with the poorly lit understory, we employ a low-light loss. Finally, we propose two complementary approaches to remove occluding canopy elements by controlling the per-ray integration procedure. To validate the value of our approach, we present two possible downstream tasks. For the task of search and rescue (SAR), we demonstrate that our method enables person detection which achieves promising results compared to thermal AOS (using only RGB images). Additionally, we show the potential of our approach for forest inventory tasks like tree counting. These results position our approach as a cost-effective, high-resolution alternative to specialized sensors for SAR, trail mapping, and forest-inventory tasks.
https://arxiv.org/abs/2601.22861
Academic Papers
svg
b32b6859b1af715ebcaaa33e45e48f6845d395a4d41bd9ee855032ccfda3a914
2026-02-02T00:00:00-05:00
Design of a GPU with Heterogeneous Cores for Graphics
arXiv:2601.22862v1 Announce Type: new Abstract: Heterogeneous architectures can deliver higher performance and energy efficiency than symmetric counterparts by using multiple architectures tuned to different types of workloads. While previous works focused on CPUs, this work extends the concept of heterogeneity to GPUs by proposing KHEPRI, a heterogeneous GPU architecture for graphics applications. Scenes in graphics applications showcase diversity, as they consist of many objects with varying levels of complexity. As a result, computational intensity and memory bandwidth requirements differ significantly across different regions of each scene. To address this variability, our proposal includes two types of cores: cores optimized for high ILP (compute-specialized) and cores that tolerate a higher number of simultaneously outstanding cache misses (memory-specialized). A key component of the proposed architecture is a novel work scheduler that dynamically assigns each part of a frame (i.e., a tile) to the most suitable core. Designing this scheduler is particularly challenging, as it must preserve data locality; otherwise, the benefits of heterogeneity may be offset by the penalty of additional cache misses. Additionally, the scheduler requires knowledge of each tile's characteristics before rendering it. For this purpose, KHEPRI leverages frame-to-frame coherence to predict the behavior of each tile based on that of the corresponding tile in the previous frame. Evaluations across a wide range of commercial animated graphics applications show that, compared to a traditional homogeneous GPU, KHEPRI achieves an average performance improvement of 9.2%, a throughput increase (frames per second) of 7.3%, and a total GPU energy reduction of 4.8%. Importantly, these benefits are achieved without any hardware overhead.
https://arxiv.org/abs/2601.22862
Academic Papers
svg
8f031129be91438e1be438aecdcdafbc8205ddc07a12cbed661437faedf7a26d
2026-02-02T00:00:00-05:00
μTouch: Enabling Accurate, Lightweight Self-Touch Sensing with Passive Magnets
arXiv:2601.22864v1 Announce Type: new Abstract: Self-touch gestures (e.g., nuanced facial touches and subtle finger scratches) provide rich insights into human behaviors, from hygiene practices to health monitoring. However, existing approaches fall short in detecting such micro gestures due to their diverse movement patterns. This paper presents μTouch, a novel magnetic sensing platform for self-touch gesture recognition. μTouch features (1) a compact hardware design with low-power magnetometers and magnetic silicon, (2) a lightweight semi-supervised framework requiring minimal user data, and (3) an ambient field detection module to mitigate environmental interference. We evaluated μTouch in two representative applications in user studies with 11 and 12 participants. μTouch requires only three seconds of fine-tuning data for each gesture, and new users need less than one minute before starting to use the system. μTouch can distinguish eight different face-touching behaviors with an average accuracy of 93.41%, and reliably detect body-scratch behaviors with an average accuracy of 94.63%. μTouch demonstrates accurate and robust sensing performance even after a month, showcasing its potential as a practical tool for hygiene monitoring and dermatological health applications.
https://arxiv.org/abs/2601.22864
Academic Papers
svg
49a5e8ce2a54201e594b2fdb7e33e7919936fe96c5beffbecb12dd6ef4c7e92b
2026-02-02T00:00:00-05:00
Degradation-Aware Frequency Regulation of a Heterogeneous Battery Fleet via Reinforcement Learning
arXiv:2601.22865v1 Announce Type: new Abstract: Battery energy storage systems are increasingly deployed as fast-responding resources for grid balancing services such as frequency regulation and for mitigating renewable generation uncertainty. However, repeated charging and discharging induces cycling degradation and reduces battery lifetime. This paper studies the real-time scheduling of a heterogeneous battery fleet that collectively tracks a stochastic balancing signal subject to per-battery ramp-rate and capacity constraints, while minimizing long-term cycling degradation. Cycling degradation is fundamentally path-dependent: it is determined by charge-discharge cycles formed by the state-of-charge (SoC) trajectory and is commonly quantified via rainflow cycle counting. This non-Markovian structure makes it difficult to express degradation as an additive per-time-step cost, complicating classical dynamic programming approaches. We address this challenge by formulating the fleet scheduling problem as a Markov decision process (MDP) with constrained action space and designing a dense proxy reward that provides informative feedback at each time step while remaining aligned with long-term cycle-depth reduction. To scale learning to large state-action spaces induced by fine-grained SoC discretization and asymmetric per-battery constraints, we develop a function-approximation reinforcement learning method using an Extreme Learning Machine (ELM) as a random nonlinear feature map combined with linear temporal-difference learning. We evaluate the proposed approach on a toy Markovian signal model and on a Markovian model trained from real-world regulation signal traces obtained from the University of Delaware, and demonstrate consistent reductions in cycle-depth occurrence and degradation metrics compared to baseline scheduling policies.
https://arxiv.org/abs/2601.22865
Academic Papers
svg
c95ab55e2e95d03cfe53cce359acf77eae9f8f9e9e492a0784e6d39a4dc8cc66
2026-02-02T00:00:00-05:00
Randomized Methods for Kernelized DMD
arXiv:2601.22867v1 Announce Type: new Abstract: Dynamic Mode Decomposition (DMD) is a data-driven method related to Koopman operator theory that extracts information about dominant dynamics from data snapshots. In this paper we examine techniques to accelerate the application of DMD to large-scale data sets with an eye on randomized techniques. Randomized techniques exploit low-rank matrix approximations at a much smaller computational cost, therefore permitting the use of increased data set sizes. In particular, we propose the application of the RPCholesky algorithm in the setting of kernelized DMD (KDMD). This algorithm relies on adaptive randomized sampling to approximate positive semidefinite kernel matrices and provides better stability guarantees than previously implemented randomized methods for KDMD. Differences between existing competitive randomized techniques and our proposed implementation are discussed with a focus on numerical stability and the tradeoff between exploration and exploitation of information obtained from data. The efficacy of this new combination of algorithms is demonstrated on well-established benchmark problems from the DMD literature of increasing problem dimension.
https://arxiv.org/abs/2601.22867
Academic Papers
svg
70213006c96044f7bc089684d8901b3ee3f967882bb607b0f2aa28abb3b9d36c
2026-02-02T00:00:00-05:00
When Anomalies Depend on Context: Learning Conditional Compatibility for Anomaly Detection
arXiv:2601.22868v1 Announce Type: new Abstract: Anomaly detection is often formulated under the assumption that abnormality is an intrinsic property of an observation, independent of context. This assumption breaks down in many real-world settings, where the same object or action may be normal or anomalous depending on latent contextual factors (e.g., running on a track versus on a highway). We revisit "contextual anomaly detection", classically defined as context-dependent abnormality, and operationalize it in the visual domain, where anomaly labels depend on subject-context compatibility rather than intrinsic appearance. To enable systematic study of this setting, we introduce CAAD-3K, a benchmark that isolates contextual anomalies by controlling subject identity while varying context. We further propose a conditional compatibility learning framework that leverages vision-language representations to model subject-context relationships under limited supervision. Our method substantially outperforms existing approaches on CAAD-3K and achieves state-of-the-art performance on MVTec-AD and VisA, demonstrating that modeling context dependence complements traditional structural anomaly detection. Our code and dataset will be publicly released.
https://arxiv.org/abs/2601.22868
Academic Papers
svg
4813987de54b28fb647834cd35b8d86ef30c88239e48fdc5f065dafd276ad1b8
2026-02-02T00:00:00-05:00
Eroding the Truth-Default: A Causal Analysis of Human Susceptibility to Foundation Model Hallucinations and Disinformation in the Wild
arXiv:2601.22871v1 Announce Type: new Abstract: As foundation models (FMs) approach human-level fluency, distinguishing synthetic from organic content has become a key challenge for Trustworthy Web Intelligence. This paper presents JudgeGPT and RogueGPT, a dual-axis framework that decouples "authenticity" from "attribution" to investigate the mechanisms of human susceptibility. Analyzing 918 evaluations across five FMs (including GPT-4 and Llama-2), we employ Structural Causal Models (SCMs) as a principal framework for formulating testable causal hypotheses about detection accuracy. Contrary to partisan narratives, we find that political orientation shows a negligible association with detection performance ($r=-0.10$). Instead, "fake news familiarity" emerges as a candidate mediator ($r=0.35$), suggesting that exposure may function as adversarial training for human discriminators. We identify a "fluency trap" where GPT-4 outputs (HumanMachineScore: 0.20) bypass Source Monitoring mechanisms, rendering them indistinguishable from human text. These findings suggest that "pre-bunking" interventions should target cognitive source monitoring rather than demographic segmentation to ensure trustworthy information ecosystems.
https://arxiv.org/abs/2601.22871
Academic Papers
svg
2375c497daef2457edb39841455e2e2e6e8e66035771388690671fd08da83902
2026-02-02T00:00:00-05:00
From Labels to Facets: Building a Taxonomically Enriched Turkish Learner Corpus
arXiv:2601.22875v1 Announce Type: new Abstract: In terms of annotation structure, most learner corpora rely on holistic flat label inventories which, even when extensive, do not explicitly separate multiple linguistic dimensions. This makes linguistically deep annotation difficult and complicates fine-grained analyses aimed at understanding why and how learners produce specific errors. To address these limitations, this paper presents a semi-automated annotation methodology for learner corpora, built upon a recently proposed faceted taxonomy, and implemented through a novel annotation extension framework. The taxonomy provides a theoretically grounded, multi-dimensional categorization that captures the linguistic properties underlying each error instance, thereby enabling standardized, fine-grained, and interpretable enrichment beyond flat annotations. The annotation extension tool, implemented based on the proposed extension framework for Turkish, automatically extends existing flat annotations by inferring additional linguistic and metadata information as facets within the taxonomy to provide richer learner-specific context. It was systematically evaluated and yielded promising performance results, achieving a facet-level accuracy of 95.86%. The resulting taxonomically enriched corpus offers enhanced querying capabilities and supports detailed exploratory analyses across learner corpora, enabling researchers to investigate error patterns through complex linguistic and pedagogical dimensions. This work introduces the first collaboratively annotated and taxonomically enriched Turkish Learner Corpus, a manual annotation guideline with a refined tagset, and an annotation extender. As the first corpus designed in accordance with the recently introduced taxonomy, we expect our study to pave the way for subsequent enrichment efforts of existing error-annotated learner corpora.
https://arxiv.org/abs/2601.22875
Academic Papers
svg
c80373841d52fd5417e82ee8af489a50bc02494fa87590082b94e63885f28064
2026-02-02T00:00:00-05:00
Matterhorn: Efficient Analog Sparse Spiking Transformer Architecture with Masked Time-To-First-Spike Encoding
arXiv:2601.22876v1 Announce Type: new Abstract: Spiking neural networks (SNNs) have emerged as a promising candidate for energy-efficient LLM inference. However, current energy evaluations for SNNs primarily focus on counting accumulate operations, and fail to account for real-world hardware costs such as data movement, which can consume nearly 80% of the total energy. In this paper, we propose Matterhorn, a spiking transformer that integrates a novel masked time-to-first-spike (M-TTFS) encoding method to reduce spike movement and a memristive synapse unit (MSU) to eliminate weight access overhead. M-TTFS employs a masking strategy that reassigns the zero-energy silent state (a spike train of all 0s) to the most frequent membrane potential rather than the lowest. This aligns the coding scheme with the data distribution, minimizing spike movement energy without information loss. We further propose a "dead zone" strategy that maximizes sparsity by mapping all values within a given range to the silent state. At the hardware level, the MSU utilizes compute-in-memory (CIM) technology to perform analog integration directly within memory, effectively removing weight access costs. On the GLUE benchmark, Matterhorn establishes a new state-of-the-art, surpassing existing SNNs by 1.42% in average accuracy while delivering a 2.31 times improvement in energy efficiency.
https://arxiv.org/abs/2601.22876
Academic Papers
svg
73be6cc5478baf86ca1b41f7a823b9890adb75946652ce1b876f35361fcef58a
2026-02-02T00:00:00-05:00
Synthetic Time Series Generation via Complex Networks
arXiv:2601.22879v1 Announce Type: new Abstract: Time series data are essential for a wide range of applications, particularly in developing robust machine learning models. However, access to high-quality datasets is often limited due to privacy concerns, acquisition costs, and labeling challenges. Synthetic time series generation has emerged as a promising solution to address these constraints. In this work, we present a framework for generating synthetic time series by leveraging complex network mappings. Specifically, we investigate whether time series transformed into Quantile Graphs (QG) -- and then reconstructed via inverse mapping -- can produce synthetic data that preserve the statistical and structural properties of the original. We evaluate the fidelity and utility of the generated data using both simulated and real-world datasets, and compare our approach against state-of-the-art Generative Adversarial Network (GAN) methods. Results indicate that our quantile graph-based methodology offers a competitive and interpretable alternative for synthetic time series generation.
https://arxiv.org/abs/2601.22879
Academic Papers
svg
138d33118d62f7f159568f88fdf3369fb80740d6882a1c2d9041e7e61942d084
2026-02-02T00:00:00-05:00
Reinforcement Learning-Based Co-Design and Operation of Chiller and Thermal Energy Storage for Cost-Optimal HVAC Systems
arXiv:2601.22880v1 Announce Type: new Abstract: We study the joint operation and sizing of cooling infrastructure for commercial HVAC systems using reinforcement learning, with the objective of minimizing life-cycle cost over a 30-year horizon. The cooling system consists of a fixed-capacity electric chiller and a thermal energy storage (TES) unit, jointly operated to meet stochastic hourly cooling demands under time-varying electricity prices. The life-cycle cost accounts for both capital expenditure and discounted operating cost, including electricity consumption and maintenance. A key challenge arises from the strong asymmetry in capital costs: increasing chiller capacity by one unit is far more expensive than an equivalent increase in TES capacity. As a result, identifying the right combination of chiller and TES sizes, while ensuring zero loss-of-cooling-load under optimal operation, is a non-trivial co-design problem. To address this, we formulate the chiller operation problem for a fixed infrastructure configuration as a finite-horizon Markov Decision Process (MDP), in which the control action is the chiller part-load ratio (PLR). The MDP is solved using a Deep Q Network (DQN) with a constrained action space. The learned DQN RL policy minimizes electricity cost over historical traces of cooling demand and electricity prices. For each candidate chiller-TES sizing configuration, the trained policy is evaluated. We then restrict attention to configurations that fully satisfy the cooling demand and perform a life-cycle cost minimization over this feasible set to identify the cost-optimal infrastructure design. Using this approach, we determine the optimal chiller and thermal energy storage capacities to be 700 and 1500, respectively.
https://arxiv.org/abs/2601.22880
Academic Papers
svg
1e7e3aca99d327411eb448081fd16e7576442a411e1422dd76b2398b188071ed
2026-02-02T00:00:00-05:00
AnoMod: A Dataset for Anomaly Detection and Root Cause Analysis in Microservice Systems
arXiv:2601.22881v1 Announce Type: new Abstract: Microservice systems (MSS) have become a predominant architectural style for cloud services. Yet the community still lacks high-quality, publicly available datasets for anomaly detection (AD) and root cause analysis (RCA) in MSS. Most benchmarks emphasize performance-related faults and provide only one or two monitoring modalities, limiting research on broader failure modes and cross-modal methods. To address these gaps, we introduce a new multimodal anomaly dataset built on two open-source microservice systems: SocialNetwork and TrainTicket. We design and inject four categories of anomalies (Ano): performance-level, service-level, database-level, and code-level, to emulate realistic anomaly modes. For each scenario, we collect five modalities (Mod): logs, metrics, distributed traces, API responses, and code coverage reports, offering a richer, end-to-end view of system state and inter-service interactions. Reflecting these properties, we name our dataset AnoMod. This dataset enables (1) evaluation of cross-modal anomaly detection and fusion/ablation strategies, and (2) fine-grained RCA studies across service and code regions, supporting end-to-end troubleshooting pipelines that jointly consider detection and localization.
https://arxiv.org/abs/2601.22881
Academic Papers
svg
8f51ddd32789b43970899505f4150f86ebd5de1c174761f4b490c9f6270c8fba
2026-02-02T00:00:00-05:00
Leveraging LLMs For Turkish Skill Extraction
arXiv:2601.22885v1 Announce Type: new Abstract: Skill extraction is a critical component of modern recruitment systems, enabling efficient job matching, personalized recommendations, and labor market analysis. Despite Türkiye's significant role in the global workforce, Turkish, a morphologically complex language, lacks both a skill taxonomy and a dedicated skill extraction dataset, resulting in underexplored research in skill extraction for Turkish. This article seeks the answers to three research questions: 1) How can skill extraction be effectively performed for this language, in light of its low-resource nature? 2) What is the most promising model? 3) What is the impact of different Large Language Models (LLMs) and prompting strategies on skill extraction (i.e., dynamic vs. static few-shot samples, varying context information, and encouraging causal reasoning)? The article introduces the first Turkish skill extraction dataset and performance evaluations of automated skill extraction using LLMs. The manually annotated dataset contains 4,819 labeled skill spans from 327 job postings across different occupation areas. The use of LLMs outperforms supervised sequence labeling when used in an end-to-end pipeline, aligning extracted spans with standardized skills in the ESCO taxonomy more effectively. The best-performing configuration, utilizing Claude Sonnet 3.7 with dynamic few-shot prompting for skill identification, embedding-based retrieval, and LLM-based reranking for skill linking, achieves an end-to-end performance of 0.56, positioning Turkish alongside similar studies in other languages, which are few in the literature. Our findings suggest that LLMs can improve skill extraction performance in low-resource settings, and we hope that our work will accelerate similar research on skill extraction for underrepresented languages.
https://arxiv.org/abs/2601.22885
Academic Papers
svg
3162e45d3087289f9c94bd95753626d04df32da786dd154d05d51db1588299b0
2026-02-02T00:00:00-05:00
MoVE: Mixture of Value Embeddings -- A New Axis for Scaling Parametric Memory in Autoregressive Models
arXiv:2601.22887v1 Announce Type: new Abstract: Autoregressive sequence modeling stands as the cornerstone of modern Generative AI, powering results across diverse modalities ranging from text generation to image generation. However, a fundamental limitation of this paradigm is the rigid structural coupling of model capacity to computational cost: expanding a model's parametric memory -- its repository of factual knowledge or visual patterns -- traditionally requires deepening or widening the network, which incurs a proportional rise in active FLOPs. In this work, we introduce MoVE (Mixture of Value Embeddings), a mechanism that breaks this coupling and establishes a new axis for scaling capacity. MoVE decouples memory from compute by introducing a global bank of learnable value embeddings shared across all attention layers. For every step in the sequence, the model employs a differentiable soft gating mechanism to dynamically mix retrieved concepts from this bank into the standard value projection. This architecture allows parametric memory to be scaled independently of network depth by simply increasing the number of embedding slots. We validate MoVE through strictly controlled experiments on two representative applications of autoregressive modeling: Text Generation and Image Generation. In both domains, MoVE yields consistent performance improvements over standard and layer-wise memory baselines, enabling the construction of "memory-dense" models that achieve lower perplexity and higher fidelity than their dense counterparts at comparable compute budgets.
https://arxiv.org/abs/2601.22887
Academic Papers
svg
4a35e68315af4855be5ccdf1fcc0410529d0cec87563accc098cb79ea54a166b
2026-02-02T00:00:00-05:00
Should LLMs, $\textit{like}$, Generate How Users Talk? Building Dialect-Accurate Dialog[ue]s Beyond the American Default with MDial
arXiv:2601.22888v1 Announce Type: new Abstract: More than 80% of the 1.6 billion English speakers do not use Standard American English (SAE) and experience higher failure rates and stereotyped responses when interacting with LLMs as a result. Yet multi-dialectal performance remains underexplored. We introduce $\textbf{MDial}$, the first large-scale framework for generating multi-dialectal conversational data encompassing the three pillars of written dialect -- lexical (vocabulary), orthographic (spelling), and morphosyntactic (grammar) features -- for nine English dialects. Partnering with native linguists, we design an annotated and scalable rule-based LLM transformation to ensure precision. Our approach challenges the assumption that models should mirror users' morphosyntactic features, showing that up to 90% of the grammatical features of a dialect should not be reproduced by models. Independent evaluations confirm data quality, with annotators preferring MDial outputs over prior methods in 98% of pairwise comparisons for dialect naturalness. Using this pipeline, we construct the dialect-parallel $\textbf{MDialBench}$mark with 50k+ dialogs, resulting in 97k+ QA pairs, and evaluate 17 LLMs on dialect identification and response generation tasks. Even frontier models achieve under 70% accuracy, fail to reach 50% for Canadian English, and systematically misclassify non-SAE dialects as American or British. As dialect identification underpins natural language understanding, these errors risk cascading failures into downstream tasks.
https://arxiv.org/abs/2601.22888
Academic Papers
svg
f24f7f1018ff0ec31041c3c630ef143931f1f2290e8ad2fedd6619e963cdd470
2026-02-02T00:00:00-05:00
DiffuSpeech: Silent Thought, Spoken Answer via Unified Speech-Text Diffusion
arXiv:2601.22889v1 Announce Type: new Abstract: Current speech language models generate responses directly without explicit reasoning, leading to errors that cannot be corrected once audio is produced. We introduce "Silent Thought, Spoken Answer" -- a paradigm where speech LLMs generate internal text reasoning alongside spoken responses, with thinking traces informing speech quality. To realize this, we present DiffuSpeech, the first diffusion-based speech-text language model supporting both understanding and generation, unifying discrete text and tokenized speech under a single masked diffusion framework. Unlike autoregressive approaches, DiffuSpeech jointly generates reasoning traces and speech tokens through iterative denoising, with modality-specific masking schedules. We also construct the first speech QA dataset with paired text reasoning traces, containing 26K samples totaling 319 hours. Experiments show DiffuSpeech achieves state-of-the-art speech-to-speech QA accuracy, outperforming the best baseline by up to 9 points, while attaining the best TTS quality among generative models (6.2% WER) and preserving language understanding (66.2% MMLU). Ablations confirm that both the diffusion architecture and thinking traces contribute to these gains.
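One way to realize the modality-specific masking schedules mentioned above is to give text and speech tokens separate mask probabilities as a function of diffusion time; the power-law schedules below are a sketch under that assumption, not the paper's actual schedules.

```python
import random

def mask_sequence(tokens, modalities, t, text_power=1.0, speech_power=2.0,
                  mask_id=-1, rng=None):
    """Masked-diffusion corruption with modality-specific schedules.

    t in [0, 1] is the diffusion time; each modality gets its own mask
    probability p_m(t). The power-law forms here are assumptions -- the
    abstract only states that the schedules differ per modality.
    """
    rng = rng or random.Random(0)
    p = {"text": t ** text_power, "speech": t ** speech_power}
    return [mask_id if rng.random() < p[m] else tok
            for tok, m in zip(tokens, modalities)]
```

At t = 0 nothing is masked and at t = 1 everything is; between the endpoints, speech tokens are masked more gently than text under the exponents chosen here.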
https://arxiv.org/abs/2601.22889
Academic Papers
svg
471335597d780c31e3d741870970137d8ae1123554aabd7e3dcd128667fdcbc4
2026-02-02T00:00:00-05:00
PlatoLTL: Learning to Generalize Across Symbols in LTL Instructions for Multi-Task RL
arXiv:2601.22891v1 Announce Type: new Abstract: A central challenge in multi-task reinforcement learning (RL) is to train generalist policies capable of performing tasks not seen during training. To facilitate such generalization, linear temporal logic (LTL) has recently emerged as a powerful formalism for specifying structured, temporally extended tasks to RL agents. While existing approaches to LTL-guided multi-task RL demonstrate successful generalization across LTL specifications, they are unable to generalize to unseen vocabularies of propositions (or "symbols"), which describe high-level events in LTL. We present PlatoLTL, a novel approach that enables policies to zero-shot generalize not only compositionally across LTL formula structures, but also parametrically across propositions. We achieve this by treating propositions as instances of parameterized predicates rather than discrete symbols, allowing policies to learn shared structure across related propositions. We propose a novel architecture that embeds and composes predicates to represent LTL specifications, and demonstrate successful zero-shot generalization to novel propositions and tasks across challenging environments.
https://arxiv.org/abs/2601.22891
Academic Papers
svg
8b3bbeae0cd7b6257c033e053b3ffb8b6fbd2bd54871d99e8087ef9b94f3e7bf
2026-02-02T00:00:00-05:00
Assessing the Real-World Impact of Post-Quantum Cryptography on WPA-Enterprise Networks
arXiv:2601.22892v1 Announce Type: new Abstract: The advent of large-scale quantum computers poses a significant threat to contemporary network security protocols, including Wi-Fi Protected Access (WPA)-Enterprise authentication. To mitigate this threat, the adoption of Post-Quantum Cryptography (PQC) is critical. In this work, we investigate the performance impact of PQC algorithms on WPA-Enterprise-based authentication. To this end, we conduct an experimental evaluation of authentication latency using a testbed built with the open-source tools FreeRADIUS and hostapd, measuring the time spent at the client, access point, and RADIUS server. We evaluate multiple combinations of PQC algorithms and analyze their performance overhead in comparison to currently deployed cryptographic schemes. Beyond performance, we assess the security implications of these algorithm choices by relating authentication mechanisms to the quantum effort required for their exploitation. This perspective enables a systematic categorization of PQ-relevant weaknesses in WPA-Enterprise according to their practical urgency. The evaluation results show that, although PQC introduces additional authentication latency, combinations such as ML-DSA-65 and Falcon-1024 used in conjunction with ML-KEM provide a favorable trade-off between security and performance. Furthermore, we demonstrate that the resulting overhead can be effectively mitigated through session resumption. Overall, this work presents a first real-world performance evaluation of PQC-enabled WPA-Enterprise authentication and demonstrates its practical feasibility for enterprise Wi-Fi deployments.
https://arxiv.org/abs/2601.22892
Academic Papers
svg
59cb1ab240846dc38951a886a7990676bdab2fb2602dcc55087ffc1afe1cf560
2026-02-02T00:00:00-05:00
When Machines Get It Wrong: Large Language Models Perpetuate Autism Myths More Than Humans Do
arXiv:2601.22893v1 Announce Type: new Abstract: As Large Language Models become ubiquitous sources of health information, understanding their capacity to accurately represent stigmatized conditions is crucial for responsible deployment. This study examines whether leading AI systems perpetuate or challenge misconceptions about Autism Spectrum Disorder, a condition particularly vulnerable to harmful myths. We administered a 30-item instrument measuring autism knowledge to 178 participants and three state-of-the-art LLMs: GPT-4, Claude, and Gemini. Contrary to expectations that AI systems would leverage their vast training data to outperform humans, we found the opposite pattern: human participants endorsed significantly fewer myths than LLMs (36.2% vs. 44.8% error rate; z = -2.59, p = .0048). In 18 of the 30 evaluated items, humans significantly outperformed AI systems. These findings reveal a critical blind spot in current AI systems and have important implications for human-AI interaction design, the epistemology of machine knowledge, and the need to center neurodivergent perspectives in AI development.
https://arxiv.org/abs/2601.22893
Academic Papers
svg
80e2d6fa76370fbde38e3bdfcb80a8c3c723629bc7db45ed9347f92ec1b5e0d5
2026-02-02T00:00:00-05:00
Calibrated Multivariate Distributional Regression with Pre-Rank Regularization
arXiv:2601.22895v1 Announce Type: new Abstract: The goal of probabilistic prediction is to issue predictive distributions that are as informative as possible, subject to being calibrated. Despite substantial progress in the univariate setting, achieving multivariate calibration remains challenging. Recent work has introduced pre-rank functions, scalar projections of multivariate forecasts and observations, as flexible diagnostics for assessing specific aspects of multivariate calibration, but their use has largely been limited to post-hoc evaluation. We propose a regularization-based calibration method that enforces multivariate calibration during training of multivariate distributional regression models using pre-rank functions. We further introduce a novel PCA-based pre-rank that projects predictions onto principal directions of the predictive distribution. Through simulation studies and experiments on 18 real-world multi-output regression datasets, we show that the proposed approach substantially improves multivariate pre-rank calibration without compromising predictive accuracy, and that the PCA pre-rank reveals dependence-structure misspecifications that are not detected by existing pre-ranks.
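A PCA-based pre-rank of the kind the abstract proposes can be sketched as: project the predictive ensemble and the observation onto the leading principal direction of the ensemble, then take the observation's rank among the projected samples. The pure-Python power iteration below is an illustrative stand-in for a proper eigendecomposition.

```python
def leading_direction(samples, iters=200):
    """First principal direction of the sample cloud via power iteration."""
    d, n = len(samples[0]), len(samples)
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    centered = [[s[i] - mean[i] for i in range(d)] for s in samples]
    v = [1.0] * d
    for _ in range(iters):
        # w = Cov @ v, computed as X^T (X v) / n without forming Cov
        proj = [sum(c[i] * v[i] for i in range(d)) for c in centered]
        w = [sum(p * c[i] for p, c in zip(proj, centered)) / n
             for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def pca_prerank(samples, obs):
    """Rank of the observation's projection among the ensemble's projections."""
    mean, v = leading_direction(samples)
    proj = lambda x: sum((xi - mi) * vi for xi, mi, vi in zip(x, mean, v))
    p_obs = proj(obs)
    return sum(proj(s) < p_obs for s in samples)  # rank in 0..len(samples)
```

During training, the paper's regularizer would penalize departures of this rank's distribution from uniformity over mini-batches; the sketch covers only the pre-rank computation itself.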
https://arxiv.org/abs/2601.22895
Academic Papers
svg
69d06cded02e9eef4aaf6e3bd71e70c83ddf50d2fa324a10493d99472b092e84
2026-02-02T00:00:00-05:00
Game-Theoretic Co-Evolution for LLM-Based Heuristic Discovery
arXiv:2601.22896v1 Announce Type: new Abstract: Large language models (LLMs) have enabled rapid progress in automatic heuristic discovery (AHD), yet most existing methods are predominantly limited by static evaluation against fixed instance distributions, leading to potential overfitting and poor generalization under distributional shifts. We propose Algorithm Space Response Oracles (ASRO), a game-theoretic framework that reframes heuristic discovery as a program level co-evolution between solver and instance generator. ASRO models their interaction as a two-player zero-sum game, maintains growing strategy pools on both sides, and iteratively expands them via LLM-based best-response oracles against mixed opponent meta-strategies, thereby replacing static evaluation with an adaptive, self-generated curriculum. Across multiple combinatorial optimization domains, ASRO consistently outperforms static-training AHD baselines built on the same program search mechanisms, achieving substantially improved generalization and robustness on diverse and out-of-distribution instances.
https://arxiv.org/abs/2601.22896
Academic Papers
svg
a4e2bceb79e7693589f3f555ca1a3814d663997558229b6d8b0a9be97815f231
2026-02-02T00:00:00-05:00
Uncertainty-Aware Extrapolation in Bayesian Oblique Trees
arXiv:2601.22899v1 Announce Type: new Abstract: Decision trees are widely used due to their interpretability and efficiency, but they struggle in regression tasks that require reliable extrapolation and well-calibrated uncertainty. Piecewise-constant leaf predictions are bounded by the training targets and often become overconfident under distribution shift. We propose a single-tree Bayesian model that extends VSPYCT by equipping each leaf with a GP predictor. Bayesian oblique splits provide uncertainty-aware partitioning of the input space, while GP leaves model local functional behaviour and enable principled extrapolation beyond the observed target range. We present an efficient inference and prediction scheme that combines posterior sampling of split parameters with GP posterior predictions, and a gating mechanism that activates GP-based extrapolation when inputs fall outside the training support of a leaf. Experiments on benchmark regression tasks show improvements in predictive performance compared to standard variational oblique trees, and substantial performance gains in extrapolation scenarios.
https://arxiv.org/abs/2601.22899
Academic Papers
svg
4afd0a2fda5a8a84486e81e15e201f1178ba3904e6f58ec311a276450552107f
2026-02-02T00:00:00-05:00
MulFeRL: Enhancing Reinforcement Learning with Verbal Feedback in a Multi-turn Loop
arXiv:2601.22900v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) is widely used to improve reasoning in multiple domains, yet outcome-only scalar rewards are often sparse and uninformative, especially on failed samples, where they merely indicate failure and provide no insight into why the reasoning fails. In this paper, we investigate how to leverage richer verbal feedback to guide RLVR training on failed samples, and how to convert such feedback into a trainable learning signal. Specifically, we propose a multi-turn feedback-guided reinforcement learning framework. It builds on three mechanisms: (1) dynamic multi-turn regeneration guided by feedback, triggered only on failed samples, (2) two complementary learning signals for within-turn and cross-turn optimization, and (3) structured feedback injection into the model's reasoning process. Trained on sampled OpenR1-Math, the approach outperforms supervised fine-tuning and RLVR baselines in-domain and generalizes well out-of-domain.
https://arxiv.org/abs/2601.22900
Academic Papers
svg
798f9b9c4d167da764f0ffa69865e2e0c1e7bba4ddc79722432eda3489a7b33b
2026-02-02T00:00:00-05:00
Status Updating via Integrated Sensing and Communication: Freshness Optimisation
arXiv:2601.22901v1 Announce Type: new Abstract: This paper studies strategic design in an integrated sensing and communication (ISAC) architecture for status updating of remotely navigating agents. We consider an ISAC-enabled base station that can sense the state of a remote source and communicate this information back to the source. Both sensing and communication succeed with given probabilities and incur distinct costs. The objective is to optimise a long-term cost that captures information freshness, measured by the age of information (AoI), at the source together with sensing and communication overheads. The resulting sequential decision problem is formulated as a discounted infinite-horizon Markov decision process with a two-dimensional AoI state, representing information freshness at the source and at the base station. We prove that the optimal stationary policy admits a monotone threshold structure characterised by a nondecreasing switching curve in the AoI state space. Our numerical analysis illustrates the structures of the value function and the optimal decision map. These results demonstrate that freshness-based objectives can be naturally integrated into ISAC design, while yielding interpretable and implementable strategies.
https://arxiv.org/abs/2601.22901
Academic Papers
svg
ce4960604df8de455cade8562631a019ad5198dc15c453ae9e0df7559970aeaa
2026-02-02T00:00:00-05:00
DINO-SAE: DINO Spherical Autoencoder for High-Fidelity Image Reconstruction and Generation
arXiv:2601.22904v1 Announce Type: new Abstract: Recent studies have explored using pretrained Vision Foundation Models (VFMs) such as DINO for generative autoencoders, showing strong generative performance. Unfortunately, existing approaches often suffer from limited reconstruction fidelity due to the loss of high-frequency details. In this work, we present the DINO Spherical Autoencoder (DINO-SAE), a framework that bridges semantic representation and pixel-level reconstruction. Our key insight is that semantic information in contrastive representations is primarily encoded in the direction of feature vectors, while forcing strict magnitude matching can hinder the encoder from preserving fine-grained details. To address this, we introduce Hierarchical Convolutional Patch Embedding module that enhances local structure and texture preservation, and Cosine Similarity Alignment objective that enforces semantic consistency while allowing flexible feature magnitudes for detail retention. Furthermore, leveraging the observation that SSL-based foundation model representations intrinsically lie on a hypersphere, we employ Riemannian Flow Matching to train a Diffusion Transformer (DiT) directly on this spherical latent manifold. Experiments on ImageNet-1K demonstrate that our approach achieves state-of-the-art reconstruction quality, reaching 0.37 rFID and 26.2 dB PSNR, while maintaining strong semantic alignment to the pretrained VFM. Notably, our Riemannian Flow Matching-based DiT exhibits efficient convergence, achieving a gFID of 3.47 at 80 epochs.
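The Cosine Similarity Alignment objective, matching feature direction while leaving magnitude free, can be sketched per patch; the loss form below (1 minus mean cosine similarity) is an assumed instantiation of the idea the abstract describes.

```python
import math

def cosine_alignment_loss(student_feats, teacher_feats):
    """1 - mean cosine similarity between encoder and VFM patch features.

    Penalizes direction mismatch only, leaving feature magnitudes free,
    which is the property the abstract argues preserves fine detail.
    """
    def cos(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / (na * nb)
    sims = [cos(s, t) for s, t in zip(student_feats, teacher_feats)]
    return 1.0 - sum(sims) / len(sims)
```

A strict L2 match between the same feature pairs would also penalize magnitude differences; dropping that term is exactly the flexibility the abstract claims helps detail retention.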
https://arxiv.org/abs/2601.22904
Academic Papers
svg
9188953788724ebb49f89ef7d945a073f5d194d4168e0f76fea0e39c78d8331a
2026-02-02T00:00:00-05:00
FlexLoRA: Entropy-Guided Flexible Low-Rank Adaptation
arXiv:2601.22905v1 Announce Type: new Abstract: Large pre-trained models achieve remarkable success across diverse domains, yet fully fine-tuning incurs prohibitive computational and memory costs. Parameter-efficient fine-tuning (PEFT) has thus become a mainstream paradigm. Among them, Low-Rank Adaptation (LoRA) introduces trainable low-rank matrices and shows strong performance, nevertheless, its fixed-rank design limits flexibility. Dynamic rank allocation methods mitigate this issue by pruning redundant directions; however, they often rely on heuristic, element-level metrics that globally sort rank directions without matrix-wise distinction, and they lack mechanisms to expand capacity in layers requiring additional adaptation. To overcome these limitations, we propose FlexLoRA, an entropy-guided flexible low-rank adaptation framework that (i) evaluates matrix importance via spectral energy entropy, (ii) supports rank pruning and expansion under a global budget, and (iii) employs zero-impact initialization for newly added singular directions to ensure stability. By addressing granularity, flexibility, and stability limitations, FlexLoRA provides a more principled solution for PEFT. Extensive experiments show that FlexLoRA consistently outperforms state-of-the-art baselines across benchmarks. Codes are available at https://github.com/Chongjie-Si/Subspace-Tuning.
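The spectral energy entropy used above to rank matrix importance can be sketched from a matrix's singular values: normalize the squared singular values into a distribution and take its entropy. Normalizing by the log of the number of directions is an assumption made here to keep the score in [0, 1].

```python
import math

def spectral_energy_entropy(singular_values):
    """Normalized entropy of the squared-singular-value energy distribution.

    Low entropy: energy concentrated in few directions (candidate for
    pruning). High entropy: energy spread out (candidate for expansion).
    The normalization is an assumption; the paper may define it differently.
    """
    energies = [s * s for s in singular_values]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(singular_values))  # scaled into [0, 1]
```

Under a global rank budget, matrices scoring near 0 would donate rank directions and matrices scoring near 1 would receive them, which is the prune-and-expand behaviour the abstract describes.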
https://arxiv.org/abs/2601.22905
Academic Papers
svg
d55af531935ca73a1d1839b74ce995d09098ed07563a622d596d5019cdc782ef
2026-02-02T00:00:00-05:00
Feedback Control via Integrated Sensing and Communication: Uncertainty Optimisation
arXiv:2601.22912v1 Announce Type: new Abstract: This paper studies strategic design in an integrated sensing and communication (ISAC) architecture for feedback control of cyber-physical systems. We focus on a setting in which the regulation of a physical process (i.e., remote source) is performed via an ISAC-enabled base station. The base station can alternate between tracking the state of the source and delivering control-relevant information back to the source. For a Gauss-Markov source subject to i.i.d. Bernoulli sensing and communication links, under a finite-horizon linear-quadratic-Gaussian cost, we rigorously characterise the optimal policies through an uncertainty-aware synthesis. We establish that the optimal switching policy, for the ISAC system at the base station, is threshold-based in terms of the source and base-station estimation covariances, while the optimal control policy, for the actuator at the source, is linear in the source state estimate. We show that the threshold region, defined as the set of estimation covariance pairs for which communication is preferred over sensing, expands with increasing source uncertainty and contracts with increasing base-station uncertainty.
https://arxiv.org/abs/2601.22912
Academic Papers
svg
d17e0ad63ef34dd47d3e6d93f3980fa0734b14c77075a3568604ca4080fd8291
2026-02-02T00:00:00-05:00
Multi-Cue Anomaly Detection and Localization under Data Contamination
arXiv:2601.22913v1 Announce Type: new Abstract: Visual anomaly detection in real-world industrial settings faces two major limitations. First, most existing methods are trained on purely normal data or on unlabeled datasets assumed to be predominantly normal, presuming the absence of contamination, an assumption that is rarely satisfied in practice. Second, they assume no access to labeled anomaly samples, limiting the model from learning discriminative characteristics of true anomalies. Therefore, these approaches often struggle to distinguish anomalies from normal instances, resulting in reduced detection and weak localization performance. In real-world applications, where training data are frequently contaminated with anomalies, such methods fail to deliver reliable performance. In this work, we propose a robust anomaly detection framework that integrates limited anomaly supervision into the adaptive deviation learning paradigm. We introduce a composite anomaly score that combines three complementary components: a deviation score capturing statistical irregularity, an entropy-based uncertainty score reflecting predictive inconsistency, and a segmentation-based score highlighting spatial abnormality. This unified scoring mechanism enables accurate detection and supports gradient-based localization, providing intuitive and explainable visual evidence of anomalous regions. Following the few-anomaly paradigm, we incorporate a small set of labeled anomalies during training while simultaneously mitigating the influence of contaminated samples through adaptive instance weighting. Extensive experiments on the MVTec and VisA benchmarks demonstrate that our framework outperforms state-of-the-art baselines and achieves strong detection and localization performance, interpretability, and robustness under various levels of data contamination.
https://arxiv.org/abs/2601.22913
Academic Papers
svg
473bada8a5ec644c60e108dadaf5ea0dd85d2f014606b6bee841deeacc99e2d1
2026-02-02T00:00:00-05:00
LLMDR: Large language model driven framework for missing data recovery in mixed data under low resource regime
arXiv:2601.22916v1 Announce Type: new Abstract: The missing data problem is one of the key issues to address in achieving data quality. While imputation-based methods are designed to achieve data completeness, their efficacy diminishes as the missingness percentage increases. Further, extant approaches often struggle to handle mixed-type datasets, typically supporting only numerical or categorical data. In this work, we propose LLMDR, an automatic data recovery framework that operates in two stages: in Stage I, the DBSCAN clustering algorithm is employed to select the most representative samples; in Stage II, multiple LLMs are employed for data recovery, considering both local and global representative samples. The framework then invokes a consensus algorithm that recommends a more accurate value based on the LLMs' outputs over the local and global representative samples. Experimental results demonstrate that the proposed framework works effectively on various mixed datasets in terms of Accuracy, KS-Statistic, SMAPE, and MSE. We also show the advantage of the consensus mechanism for the final recommendation on mixed-type data.
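The consensus step can be illustrated with a simple rule, majority vote for categorical cells and median for numerical ones; this is a hypothetical instantiation, since the abstract describes the consensus algorithm only at a high level.

```python
from collections import Counter
from statistics import median

def consensus(candidates):
    """Combine per-LLM recovered values for one missing cell.

    Median for numerical candidates (robust to a single outlying LLM),
    majority vote for categorical ones. Illustrative rule only.
    """
    if all(isinstance(c, (int, float)) for c in candidates):
        return median(candidates)
    return Counter(candidates).most_common(1)[0][0]
```

The median makes a single wildly wrong numerical suggestion harmless, which is the kind of robustness a multi-LLM consensus is meant to buy.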
https://arxiv.org/abs/2601.22916
Academic Papers
svg
fa80dc72847edc12c727c4e62a45e9a1bf26d90fcacaec3c677b87c037594acf
2026-02-02T00:00:00-05:00
Deep in the Jungle: Towards Automating Chimpanzee Population Estimation
arXiv:2601.22917v1 Announce Type: new Abstract: The estimation of abundance and density in unmarked populations of great apes relies on statistical frameworks that require animal-to-camera distance measurements. In practice, acquiring these distances depends on labour-intensive manual interpretation of animal observations across large camera trap video corpora. This study introduces and evaluates an only sparsely explored alternative: the integration of computer vision-based monocular depth estimation (MDE) pipelines directly into ecological camera trap workflows for great ape conservation. Using a real-world dataset of 220 camera trap videos documenting a wild chimpanzee population, we combine two MDE models, Dense Prediction Transformers and Depth Anything, with multiple distance sampling strategies. These components are used to generate detection distance estimates, from which population density and abundance are inferred. Comparative analysis against manually derived ground-truth distances shows that calibrated DPT consistently outperforms Depth Anything. This advantage is observed in both distance estimation accuracy and downstream density and abundance inference. Nevertheless, both models exhibit systematic biases. We show that, given complex forest environments, they tend to overestimate detection distances and consequently underestimate density and abundance relative to conventional manual approaches. We further find that failures in animal detection across distance ranges are a primary factor limiting estimation accuracy. Overall, this work provides a case study that shows MDE-driven camera trap distance sampling is a viable and practical alternative to manual distance estimation. The proposed approach yields population estimates within 22% of those obtained using traditional methods.
https://arxiv.org/abs/2601.22917
Academic Papers
svg
0926889d1a0182cb09f430a757be52830368e0d60061989aa0aea619f01cd8e0
2026-02-02T00:00:00-05:00
A Serverless Edge-Native Data Processing Architecture for Autonomous Driving Training
arXiv:2601.22919v1 Announce Type: new Abstract: Data is both the key enabler and a major bottleneck for machine learning in autonomous driving. Effective model training requires not only large quantities of sensor data but also balanced coverage that includes rare yet safety-critical scenarios. Capturing such events demands extensive driving time and efficient selection. This paper introduces the Lambda framework, an edge-native platform that enables on-vehicle data filtering and processing through user-defined functions. The framework provides a serverless-inspired abstraction layer that separates application logic from low-level execution concerns such as scheduling, deployment, and isolation. By adapting Function-as-a-Service (FaaS) principles to resource-constrained automotive environments, it allows developers to implement modular, event-driven filtering algorithms while maintaining compatibility with ROS 2 and existing data recording pipelines. We evaluate the framework on an NVIDIA Jetson Orin Nano and compare it against native ROS 2 deployments. Results show competitive performance, reduced latency and jitter, and confirm that lambda-based abstractions can support real-time data processing in embedded autonomous driving systems. The source code is available at https://github.com/LASFAS/jblambda.
https://arxiv.org/abs/2601.22919
Academic Papers
svg
97bf0573e0a5bc9e94fcb318d4d022c0aebacfbba238f8768e9af8fe1718a3b9
2026-02-02T00:00:00-05:00
Q-Hawkeye: Reliable Visual Policy Optimization for Image Quality Assessment
arXiv:2601.22920v1 Announce Type: new Abstract: Image Quality Assessment (IQA) predicts perceptual quality scores consistent with human judgments. Recent RL-based IQA methods built on MLLMs focus on generating visual quality descriptions and scores, ignoring two key reliability limitations: (i) although the model's prediction stability varies significantly across training samples, existing GRPO-based methods apply uniform advantage weighting, thereby amplifying noisy signals from unstable samples in gradient updates; (ii) most works emphasize text-grounded reasoning over images while overlooking the model's visual perception ability of image content. In this paper, we propose Q-Hawkeye, an RL-based reliable visual policy optimization framework that redesigns the learning signal through unified Uncertainty-Aware Dynamic Optimization and Perception-Aware Optimization. Q-Hawkeye estimates predictive uncertainty using the variance of predicted scores across multiple rollouts and leverages this uncertainty to reweight each sample's update strength, stabilizing policy optimization. To strengthen perceptual reliability, we construct paired inputs of degraded images and their original images and introduce an Implicit Perception Loss that constrains the model to ground its quality judgments in genuine visual evidence. Extensive experiments demonstrate that Q-Hawkeye outperforms state-of-the-art methods and generalizes better across multiple datasets. The code and models will be made available.
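The uncertainty-aware reweighting can be sketched as: estimate per-sample uncertainty as the variance of predicted scores across rollouts, then shrink that sample's update strength accordingly. The 1/(1 + var) weighting below is an assumed form; the abstract does not give the exact function.

```python
from statistics import pvariance

def uncertainty_weight(rollout_scores, tau=1.0):
    """Down-weight samples whose predicted scores vary across rollouts.

    weight = 1 / (1 + var / tau) is an illustrative choice; tau sets how
    quickly high-variance (unstable) samples are suppressed.
    """
    var = pvariance(rollout_scores)
    return 1.0 / (1.0 + var / tau)

def reweighted_advantages(advantages, rollout_scores_per_sample):
    """Scale each sample's GRPO-style advantage by its stability weight."""
    return [a * uncertainty_weight(s)
            for a, s in zip(advantages, rollout_scores_per_sample)]
```

A perfectly stable sample keeps its full advantage while an unstable one contributes a much smaller gradient, instead of the uniform weighting the abstract identifies as the problem.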
https://arxiv.org/abs/2601.22920
Academic Papers
svg
0bd6330b48b7efd7b67f2d3e54717e6ad25ad235cf0001fdec3c4866dce43d92
2026-02-02T00:00:00-05:00
Evaluating Large Language Models for Security Bug Report Prediction
arXiv:2601.22921v1 Announce Type: new Abstract: Early detection of security bug reports (SBRs) is critical for timely vulnerability mitigation. We present an evaluation of prompt-based engineering and fine-tuning approaches for predicting SBRs using Large Language Models (LLMs). Our findings reveal a distinct trade-off between the two approaches. Prompted proprietary models demonstrate the highest sensitivity to SBRs, achieving a G-measure of 77% and a recall of 74% on average across all the datasets, albeit at the cost of a higher false-positive rate, resulting in an average precision of only 22%. Fine-tuned models, by contrast, exhibit the opposite behavior, attaining a lower overall G-measure of 51% but substantially higher precision of 75% at the cost of reduced recall of 36%. Though a one-time investment in building fine-tuned models is necessary, the inference on the largest dataset is up to 50 times faster than that of proprietary models. These findings suggest that further investigations to harness the power of LLMs for SBR prediction are necessary.
https://arxiv.org/abs/2601.22921
Academic Papers
svg
2343c5b8f2d4fc6fd88c7c4ce95a9a6753bea21672602d9a63af5a007b90d913
2026-02-02T00:00:00-05:00
BEAR: Towards Beam-Search-Aware Optimization for Recommendation with Large Language Models
arXiv:2601.22925v1 Announce Type: new Abstract: Recent years have witnessed a rapid surge in research leveraging Large Language Models (LLMs) for recommendation. These methods typically employ supervised fine-tuning (SFT) to adapt LLMs to recommendation scenarios, and utilize beam search during inference to efficiently retrieve $B$ top-ranked recommended items. However, we identify a critical training-inference inconsistency: while SFT optimizes the overall probability of positive items, it does not guarantee that such items will be retrieved by beam search even if they possess high overall probabilities. Due to the greedy pruning mechanism, beam search can prematurely discard a positive item once its prefix probability is insufficient. To address this inconsistency, we propose BEAR (Beam-SEarch-Aware Regularization), a novel fine-tuning objective that explicitly accounts for beam search behavior during training. Rather than directly simulating beam search for each instance during training, which is computationally prohibitive, BEAR enforces a relaxed necessary condition: each token in a positive item must rank within the top-$B$ candidate tokens at each decoding step. This objective effectively mitigates the risk of incorrect pruning while incurring negligible computational overhead compared to standard SFT. Extensive experiments across four real-world datasets demonstrate that BEAR significantly outperforms strong baselines. Code will be released upon acceptance.
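BEAR's relaxed necessary condition can be checked directly: at each decoding step, the positive item's token must appear among the top-B candidate tokens. A minimal sketch of that check follows; turning it into a differentiable training penalty is the paper's contribution and is not reproduced here.

```python
def beam_safe(step_logits, positive_tokens, beam_size):
    """Check the relaxed condition: at every decoding step, the positive
    item's token ranks within the top-B candidates.

    step_logits    : per-step dicts mapping token id -> score
    positive_tokens: token id of the positive item at each step
    """
    for logits, tok in zip(step_logits, positive_tokens):
        top_b = sorted(logits, key=logits.get, reverse=True)[:beam_size]
        if tok not in top_b:
            return False  # beam search could prune the positive item here
    return True
```

A sequence can have high overall probability yet fail this check at a single step, which is exactly the training-inference inconsistency the abstract identifies.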
https://arxiv.org/abs/2601.22925
Academic Papers
svg
345d5bb5b7f0bbb5f4bdcd1bed7681849a2fe9c2822feafd23cc0c26c81f894f
2026-02-02T00:00:00-05:00
Toward Fully Autonomous Driving: AI, Challenges, Opportunities, and Needs
arXiv:2601.22927v1 Announce Type: new Abstract: Automated driving (AD) is promising, but the transition to fully autonomous driving is, among other things, subject to the real, ever-changing open world and the resulting challenges. However, research in the field of AD demonstrates the ability of artificial intelligence (AI) to outperform classical approaches, handle higher complexities, and reach a new level of autonomy. At the same time, the use of AI raises further questions of safety and transferability. To identify the challenges and opportunities arising from AI concerning autonomous driving functionalities, we have analyzed the current state of AD, outlined limitations, and identified foreseeable technological possibilities. Thereby, various further challenges are examined in the context of prospective developments. In this way, this article reconsiders fully autonomous driving with respect to advancements in the field of AI and carves out the respective needs and resulting research questions.
https://arxiv.org/abs/2601.22927
Academic Papers
svg
aa59bfa77b7ded7eae2cbfdf2548529ec50a38c1292926ff39e457509b367c9d
2026-02-02T00:00:00-05:00
LLMs Explain't: A Post-Mortem on Semantic Interpretability in Transformer Models
arXiv:2601.22928v1 Announce Type: new Abstract: Large Language Models (LLMs) are becoming increasingly popular in pervasive computing due to their versatility and strong performance. However, despite their ubiquitous use, the exact mechanisms underlying their outstanding performance remain unclear. Different methods for LLM explainability exist, and many are, as a method, not fully understood themselves. We started with the question of how linguistic abstraction emerges in LLMs, aiming to detect it across different LLM modules (attention heads and input embeddings). For this, we used methods well-established in the literature: (1) probing for token-level relational structures, and (2) feature-mapping using embeddings as carriers of human-interpretable properties. Both attempts failed for different methodological reasons: Attention-based explanations collapsed once we tested the core assumption that later-layer representations still correspond to tokens. Property-inference methods applied to embeddings also failed because their high predictive scores were driven by methodological artifacts and dataset structure rather than meaningful semantic knowledge. These failures matter because both techniques are widely treated as evidence for what LLMs supposedly understand, yet our results show such conclusions are unwarranted. These limitations are particularly relevant in pervasive and distributed computing settings where LLMs are deployed as system components and interpretability methods are relied upon for debugging, compression, and explaining models.
https://arxiv.org/abs/2601.22928
Academic Papers
svg
7b42ebad1cfac184177a07c5ad9b784d2914dff530ab4442cc25e2ce4dca7f37
2026-02-02T00:00:00-05:00
Semantic Leakage from Image Embeddings
arXiv:2601.22929v1 Announce Type: new Abstract: Image embeddings are generally assumed to pose limited privacy risk. We challenge this assumption by formalizing semantic leakage as the ability to recover semantic structures from compressed image embeddings. Surprisingly, we show that semantic leakage does not require exact reconstruction of the original image. Preserving local semantic neighborhoods under embedding alignment is sufficient to expose the intrinsic vulnerability of image embeddings. Crucially, this preserved neighborhood structure allows semantic information to propagate through a sequence of lossy mappings. Based on this conjecture, we propose Semantic Leakage from Image Embeddings (SLImE), a lightweight inference framework that reveals semantic information from standalone compressed image embeddings, incorporating a locally trained semantic retriever with off-the-shelf models, without training task-specific decoders. We thoroughly validate each step of the framework empirically, from aligned embeddings to retrieved tags, symbolic representations, and grammatical and coherent descriptions. We evaluate SLImE across a range of open and closed embedding models, including GEMINI, COHERE, NOMIC, and CLIP, and demonstrate consistent recovery of semantic information across diverse inference tasks. Our results reveal a fundamental vulnerability in image embeddings, whereby the preservation of semantic neighborhoods under alignment enables semantic leakage, highlighting challenges for privacy preservation.
https://arxiv.org/abs/2601.22929
Academic Papers
svg
7f638ce33e6b104e78e1bed5f63fd9cbd4872a4a615e2832e864b89445d3cbff
2026-02-02T00:00:00-05:00
MTDrive: Multi-turn Interactive Reinforcement Learning for Autonomous Driving
arXiv:2601.22930v1 Announce Type: new Abstract: Trajectory planning is a core task in autonomous driving, requiring the prediction of safe and comfortable paths across diverse scenarios. Integrating Multi-modal Large Language Models (MLLMs) with Reinforcement Learning (RL) has shown promise in addressing "long-tail" scenarios. However, existing methods are constrained to single-turn reasoning, limiting their ability to handle complex tasks requiring iterative refinement. To overcome this limitation, we present MTDrive, a multi-turn framework that enables MLLMs to iteratively refine trajectories based on environmental feedback. MTDrive introduces Multi-Turn Group Relative Policy Optimization (mtGRPO), which mitigates reward sparsity by computing relative advantages across turns. We further construct an interactive trajectory understanding dataset from closed-loop simulation to support multi-turn training. Experiments on the NAVSIM benchmark demonstrate superior performance compared to existing methods, validating the effectiveness of our multi-turn reasoning paradigm. Additionally, we implement system-level optimizations to reduce data transfer overhead caused by high-resolution images and multi-turn sequences, achieving 2.5x training throughput. Our data, models, and code will be made available soon.
https://arxiv.org/abs/2601.22930
Academic Papers
svg
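The mtGRPO abstract above says relative advantages are computed across turns; the exact formulation is not given, but a GRPO-style normalization applied over the turns of one rollout would look roughly like this (a sketch under that assumption; the function name is mine):

```python
import numpy as np

def turn_relative_advantages(turn_rewards, eps=1e-8):
    """Group-relative advantage in the GRPO style, computed across the
    turns of a single rollout rather than across independent samples.
    This is a guess at the spirit of mtGRPO, not the paper's exact form."""
    r = np.asarray(turn_rewards, dtype=float)
    # standardize within the group of turns; eps guards degenerate groups
    return (r - r.mean()) / (r.std() + eps)
```

Normalizing within the turn group gives later, higher-reward refinement turns positive advantage even when absolute rewards are sparse.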
d1d7429da063c66a201d52c9d0de98fc7e77bfa46506c98c4b5baba274d0f7f5
2026-02-02T00:00:00-05:00
Benchmarking Machine Translation on Chinese Social Media Texts
arXiv:2601.22931v1 Announce Type: new Abstract: The prevalence of rapidly evolving slang, neologisms, and highly stylized expressions in informal user-generated text, particularly on Chinese social media, poses significant challenges for Machine Translation (MT) benchmarking. Specifically, we identify two primary obstacles: (1) data scarcity, as high-quality parallel data requires bilingual annotators familiar with platform-specific slang and stylistic cues in both languages; and (2) metric limitations, where traditional evaluators like COMET often fail to capture stylistic fidelity and nonstandard expressions. To bridge these gaps, we introduce CSM-MTBench, a benchmark covering five Chinese-foreign language directions and consisting of two expert-curated subsets: Fun Posts, featuring context-rich, slang- and neologism-heavy content, and Social Snippets, emphasizing concise, emotion- and style-driven expressions. Furthermore, we propose tailored evaluation approaches for each subset: measuring the translation success rate of slang and neologisms in Fun Posts, while assessing tone and style preservation in Social Snippets via a hybrid of embedding-based metrics and LLM-as-a-judge. Experiments on over 20 models reveal substantial variation in how current MT systems handle semantic fidelity and informal, social-media-specific stylistic cues. CSM-MTBench thus serves as a rigorous testbed for advancing MT systems capable of mastering real-world Chinese social media texts.
https://arxiv.org/abs/2601.22931
Academic Papers
svg
c77577c0a43819693a714507b0bfacedfaa6924df277c19166b172d7e1711b51
2026-02-02T00:00:00-05:00
DC-LA: Difference-of-Convex Langevin Algorithm
arXiv:2601.22932v1 Announce Type: new Abstract: We study a sampling problem whose target distribution is $\pi \propto \exp(-f-r)$ where the data fidelity term $f$ is Lipschitz smooth while the regularizer term $r=r_1-r_2$ is a non-smooth difference-of-convex (DC) function, i.e., $r_1,r_2$ are convex. By leveraging the DC structure of $r$, we can smooth out $r$ by applying Moreau envelopes to $r_1$ and $r_2$ separately. In line of DC programming, we then redistribute the concave part of the regularizer to the data fidelity and study its corresponding proximal Langevin algorithm (termed DC-LA). We establish convergence of DC-LA to the target distribution $\pi$, up to discretization and smoothing errors, in the $q$-Wasserstein distance for all $q \in \mathbb{N}^*$, under the assumption that $V$ is distant dissipative. Our results improve previous work on non-log-concave sampling in terms of a more general framework and assumptions. Numerical experiments show that DC-LA produces accurate distributions in synthetic settings and reliably provides uncertainty quantification in a real-world Computed Tomography application.
https://arxiv.org/abs/2601.22932
Academic Papers
svg
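The smoothing in the DC-LA abstract above follows a standard construction. Assuming the usual Moreau-envelope identities and the DC-programming redistribution of the concave part (the symbols $\lambda$ for the smoothing parameter and $\gamma$ for the step size are my own choices, not necessarily the paper's), the scheme can be sketched as:

```latex
% Moreau envelope of the convex r_i with parameter \lambda > 0, and its gradient:
r_i^{\lambda}(x) = \min_{y}\, r_i(y) + \tfrac{1}{2\lambda}\|x-y\|^2,
\qquad
\nabla r_i^{\lambda}(x) = \tfrac{1}{\lambda}\bigl(x - \operatorname{prox}_{\lambda r_i}(x)\bigr).

% Smoothed potential with the concave part -r_2^{\lambda} redistributed to the
% data-fidelity side, and the resulting unadjusted Langevin step:
V^{\lambda}(x) = \bigl(f(x) - r_2^{\lambda}(x)\bigr) + r_1^{\lambda}(x),
\qquad
x_{k+1} = x_k - \gamma\,\nabla V^{\lambda}(x_k) + \sqrt{2\gamma}\,\xi_k,
\quad \xi_k \sim \mathcal{N}(0, I).
```

The paper's contribution lies in the convergence analysis of this iteration in $q$-Wasserstein distance under distant dissipativity; the precise error terms are given there.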
a9c9e2d86f9468666da8093b7ccda6945abdb0aab92c74a4d9bf9c9f2548a699
2026-02-02T00:00:00-05:00
Protecting Private Code in IDE Autocomplete using Differential Privacy
arXiv:2601.22935v1 Announce Type: new Abstract: Modern Integrated Development Environments (IDEs) increasingly leverage Large Language Models (LLMs) to provide advanced features like code autocomplete. While powerful, training these models on user-written code introduces significant privacy risks, making the models themselves a new type of data vulnerability. Malicious actors can exploit this by launching attacks to reconstruct sensitive training data or infer whether a specific code snippet was used for training. This paper investigates the use of Differential Privacy (DP) as a robust defense mechanism for training an LLM for Kotlin code completion. We fine-tune a \texttt{Mellum} model using DP and conduct a comprehensive evaluation of its privacy and utility. Our results demonstrate that DP provides a strong defense against Membership Inference Attacks (MIAs), reducing the attack's success rate close to a random guess (AUC from 0.901 to 0.606). Furthermore, we show that this privacy guarantee comes at a minimal cost to model performance, with the DP-trained model achieving utility scores comparable to its non-private counterpart, even when trained on 100x less data. Our findings suggest that DP is a practical and effective solution for building private and trustworthy AI-powered IDE features.
https://arxiv.org/abs/2601.22935
Academic Papers
svg
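The abstract above does not spell out the DP training mechanism; DP-SGD (per-example gradient clipping plus Gaussian noise) is the standard choice for fine-tuning under DP, and a minimal sketch of that aggregation step is shown below. All names here are illustrative, not from the paper.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng):
    """One DP-SGD aggregation: clip each example's gradient to clip_norm,
    sum, add Gaussian noise with std noise_mult * clip_norm, then average.
    Sketch of the common mechanism; the paper does not state which it uses."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Clipping bounds each example's influence on the update, which is what limits membership-inference signal; the noise multiplier then controls the formal (epsilon, delta) guarantee.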
03d7350565e9102697c108c6172eafe9083142a28535131e55ca250b519d1eb3
2026-02-02T00:00:00-05:00
A Real-Time Privacy-Preserving Behavior Recognition System via Edge-Cloud Collaboration
arXiv:2601.22938v1 Announce Type: new Abstract: As intelligent sensing expands into high-privacy environments such as restrooms and changing rooms, the field faces a critical privacy-security paradox. Traditional RGB surveillance raises significant concerns regarding visual recording and storage, while existing privacy-preserving methods, ranging from physical desensitization to traditional cryptographic or obfuscation techniques, often compromise semantic understanding capabilities or fail to guarantee mathematical irreversibility against reconstruction attacks. To address these challenges, this study presents a novel privacy-preserving perception technology based on the AI Flow theoretical framework and an edge-cloud collaborative architecture. The proposed methodology integrates source desensitization with irreversible feature mapping. Leveraging Information Bottleneck theory, the edge device performs millisecond-level processing to transform raw imagery into abstract feature vectors via non-linear mapping and stochastic noise injection. This process constructs a unidirectional information flow that strips identity-sensitive attributes, rendering the reconstruction of original images impossible. Subsequently, the cloud platform utilizes multimodal family models to perform joint inference solely on these abstract vectors to detect abnormal behaviors. This approach fundamentally severs the path to privacy leakage at the architectural level, achieving a breakthrough from video surveillance to de-identified behavior perception and offering a robust solution for risk management in high-sensitivity public spaces.
https://arxiv.org/abs/2601.22938
Academic Papers
svg
7fc2f733477c474211d394ff9208ef99f30702272d20c1cdc1ee0dc95948a537
2026-02-02T00:00:00-05:00
FNWoS: Fractional Neural Walk-on-Spheres Methods for High-Dimensional PDEs Driven by $\alpha$-stable L\'{e}vy Process on Irregular Domains
arXiv:2601.22942v1 Announce Type: new Abstract: In this paper, we develop a highly parallel and derivative-free fractional neural walk-on-spheres method (FNWoS) for solving high-dimensional fractional Poisson equations on irregular domains. We first propose a simplified fractional walk-on-spheres (FWoS) scheme that replaces the high-dimensional normalized weight integral with a constant weight and adopts a correspondingly simpler sampling density, substantially reducing per-trajectory cost. To mitigate the slow convergence of standard Monte Carlo sampling, FNWoS is then proposed via integrating this simplified FWoS estimator, derived from the Feynman-Kac representation, with a neural network surrogate. By amortizing sampling effort over the entire domain during training, FNWoS achieves more accurate evaluation at arbitrary query points with dramatically fewer trajectories than classical FWoS. To further enhance efficiency in regimes where the fractional order $\alpha$ is close to 2 and trajectories become excessively long, we introduce a truncated path strategy with a prescribed maximum step count. Building on this, we propose a buffered supervision mechanism that caches training pairs and progressively refines their Monte Carlo targets during training, removing the need to precompute a highly accurate training set and yielding the buffered fractional neural walk-on-spheres method (BFNWoS). Extensive numerical experiments, including tests on irregular domains and problems with dimensions up to $1000$, demonstrate the accuracy, scalability, and computational efficiency of the proposed methods.
https://arxiv.org/abs/2601.22942
Academic Papers
svg
a19bbd8d732dff4ea4d8884dc9b035b0c61d0350447f8ff07d679676ddae9935
2026-02-02T00:00:00-05:00
Scalable Topology-Preserving Graph Coarsening with Graph Collapse
arXiv:2601.22943v1 Announce Type: new Abstract: Graph coarsening reduces the size of a graph while preserving certain properties. Most existing methods preserve either spectral or spatial characteristics. Recent research has shown that preserving topological features helps maintain the predictive performance of graph neural networks (GNNs) trained on the coarsened graph but suffers from exponential time complexity. To address these problems, we propose Scalable Topology-Preserving Graph Coarsening (STPGC) by introducing the concepts of graph strong collapse and graph edge collapse extended from algebraic topology. STPGC comprises three new algorithms, GStrongCollapse, GEdgeCollapse, and NeighborhoodConing based on these two concepts, which eliminate dominated nodes and edges while rigorously preserving topological features. We further prove that STPGC preserves the GNN receptive field and develop approximate algorithms to accelerate GNN training. Experiments on node classification with GNNs demonstrate the efficiency and effectiveness of STPGC.
https://arxiv.org/abs/2601.22943
Academic Papers
svg
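The graph strong collapse in the STPGC abstract above eliminates dominated nodes. In the simplicial setting, a node v is dominated when its closed neighborhood N[v] is contained in N[u] for some other node u, and removing it preserves the homotopy type of the clique complex. A minimal detection sketch (function name mine, not the paper's):

```python
def dominated_nodes(adj):
    """Nodes v dominated by some u != v, i.e. closed neighborhood
    N[v] is a subset of N[u]. Strong collapse removes such nodes.
    adj: dict mapping each node to its set of neighbors."""
    closed = {v: set(nbrs) | {v} for v, nbrs in adj.items()}
    out = set()
    for v in adj:
        for u in adj:
            if u != v and closed[v] <= closed[u]:
                out.add(v)
                break
    return out
```

On a path 0-1-2 both endpoints are dominated by the middle node, so the path collapses toward a point, as expected topologically.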
5a66453773a526677ad1db769832da0f48f24aee4df632c52ce42fb9406f1011
2026-02-02T00:00:00-05:00
Environment-Conditioned Tail Reweighting for Total Variation Invariant Risk Minimization
arXiv:2601.22944v1 Announce Type: new Abstract: Out-of-distribution (OOD) generalization remains challenging when models simultaneously encounter correlation shifts across environments and diversity shifts driven by rare or hard samples. Existing invariant risk minimization (IRM) methods primarily address spurious correlations at the environment level, but often overlook sample-level heterogeneity within environments, which can critically impact OOD performance. In this work, we propose \emph{Environment-Conditioned Tail Reweighting for Total Variation Invariant Risk Minimization} (ECTR), a unified framework that augments TV-based invariant learning with environment-conditioned tail reweighting to jointly address both types of distribution shift. By integrating environment-level invariance with within-environment robustness, the proposed approach makes these two mechanisms complementary under mixed distribution shifts. We further extend the framework to scenarios without explicit environment annotations by inferring latent environments through a minimax formulation. Experiments across regression, tabular, time-series, and image classification benchmarks under mixed distribution shifts demonstrate consistent improvements in both worst-environment and average OOD performance.
https://arxiv.org/abs/2601.22944
Academic Papers
svg
fa335b29339e979a6e81978862fdfa31cc39a569b6962f51dbc5899856af5cc1
2026-02-02T00:00:00-05:00
From Data Leak to Secret Misses: The Impact of Data Leakage on Secret Detection Models
arXiv:2601.22946v1 Announce Type: new Abstract: Machine learning models are increasingly used for software security tasks. These models are commonly trained and evaluated on large Internet-derived datasets, which often contain duplicated or highly similar samples. When such samples are split across training and test sets, data leakage may occur, allowing models to memorize patterns instead of learning to generalize. We investigate duplication in a widely used benchmark dataset of hard coded secrets and show how data leakage can substantially inflate the reported performance of AI-based secret detectors, resulting in a misleading picture of their real-world effectiveness.
https://arxiv.org/abs/2601.22946
Academic Papers
svg
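The leakage mechanism in the abstract above is duplicated samples straddling the train/test split. An exact-duplicate check after light normalization is the minimal diagnostic; the paper also concerns near-duplicates, which need fuzzier matching than this sketch (names are mine):

```python
import hashlib

def cross_split_overlap(train, test_set):
    """Fraction of test samples whose whitespace/case-normalized form
    also appears in the training set -- a minimal exact-duplicate
    leakage check (near-duplicates require fuzzier matching)."""
    def key(s):
        return hashlib.sha256(" ".join(s.split()).lower().encode()).hexdigest()
    train_keys = {key(s) for s in train}
    hits = sum(1 for s in test_set if key(s) in train_keys)
    return hits / len(test_set)
```

A nonzero overlap means reported test metrics partly measure memorization rather than generalization, which is exactly the inflation the paper documents.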
97eaa59b1f7c5c4e2dbb8b40b0869d6255ce7d79e0630af963bb8a8468243cbb
2026-02-02T00:00:00-05:00
Relaxing Positional Alignment in Masked Diffusion Language Models
arXiv:2601.22947v1 Announce Type: new Abstract: Masked diffusion language models (MDLMs) have emerged as a promising alternative to dominant autoregressive approaches. Although they achieve competitive performance on several tasks, a substantial gap remains in open-ended text generation. We hypothesize that one cause of this gap is that strict positional prediction makes MDLM decoding highly sensitive to token misalignment, and we show through controlled interventions that a one-position shift can severely disrupt semantics. This observation suggests that enforcing strict positional supervision during training is misaligned with the irreversible denoising dynamics of MDLM decoding. Motivated by this mismatch, we adopt an alignment-flexible supervision strategy during fine-tuning. Specifically, we introduce a special token via the connectionist temporal classification objective. We apply this approach to the widely used MDLM model and conduct experiments on five open-ended text generation benchmarks. Our method consistently outperforms the original model and improves robustness to positional shifts, indicating that relaxing strict positional supervision is an important factor in improving generation quality in MDLMs.
https://arxiv.org/abs/2601.22947
Academic Papers
svg
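The CTC objective mentioned in the abstract above owes its alignment flexibility to the collapse map: many token-level alignments reduce to the same output, so supervision no longer pins each token to one position. The collapse itself is simple to state (a sketch; how the paper wires CTC into MDLM training is more involved):

```python
def ctc_collapse(path, blank="-"):
    """CTC's collapse map: merge consecutive repeats, then drop blanks.
    Many alignments map to one output string, which is the source of
    the positional flexibility CTC-style supervision provides."""
    out = []
    prev = None
    for ch in path:
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)
```

Because "a-b" and "aab" both collapse to "ab", a one-position shift in the model's output no longer forces every downstream token to be counted wrong.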
86fe35b47c200ef38fc0687f3114f39ccf2933df1e637fffff44819f6da0dec9
2026-02-02T00:00:00-05:00
Alignment among Language, Vision and Action Representations
arXiv:2601.22948v1 Announce Type: new Abstract: A fundamental question in cognitive science and AI concerns whether different learning modalities (language, vision, and action) give rise to distinct or shared internal representations. Traditional views assume that models trained on different data types develop specialized, non-transferable representations. However, recent evidence suggests unexpected convergence: models optimized for distinct tasks may develop similar representational geometries. We investigate whether this convergence extends to embodied action learning by training a transformer-based agent to execute goal-directed behaviors in response to natural language instructions. Using behavioral cloning on the BabyAI platform, we generated action-grounded language embeddings shaped exclusively by sensorimotor control requirements. We then compared these representations with those extracted from state-of-the-art large language models (LLaMA, Qwen, DeepSeek, BERT) and vision-language models (CLIP, BLIP). Despite substantial differences in training data, modality, and objectives, we observed robust cross-modal alignment. Action representations aligned strongly with decoder-only language models and BLIP (precision@15: 0.70-0.73), approaching the alignment observed among language models themselves. Alignment with CLIP and BERT was significantly weaker. These findings indicate that linguistic, visual, and action representations converge toward partially shared semantic structures, supporting modality-independent semantic organization and highlighting potential for cross-domain transfer in embodied AI systems.
https://arxiv.org/abs/2601.22948
Academic Papers
svg
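The precision@15 scores in the abstract above are nearest-neighbor overlap between embedding spaces. A minimal version of that alignment metric (the paper's exact protocol, similarity measure, and tie handling may differ; names are mine):

```python
import numpy as np

def knn_precision_at_k(emb_a, emb_b, k):
    """Average overlap of k-nearest-neighbor sets for the same items
    under two embedding spaces -- a precision@k alignment score
    (sketch; the paper's exact protocol may differ)."""
    def knn(emb):
        emb = np.asarray(emb, dtype=float)
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # never count an item as its own neighbor
        return np.argsort(d, axis=1)[:, :k]
    na, nb = knn(emb_a), knn(emb_b)
    overlaps = [len(set(na[i]) & set(nb[i])) / k for i in range(len(na))]
    return float(np.mean(overlaps))
```

A score of 1.0 means the two spaces induce identical local neighborhoods; the 0.70-0.73 reported for action vs. decoder-only language representations indicates strong but partial geometric agreement.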
459298c616aa538fde77b677a3b9cabe4b486452a0315f4d988c2ef3eed35d95
2026-02-02T00:00:00-05:00
Autonomous Chain-of-Thought Distillation for Graph-Based Fraud Detection
arXiv:2601.22949v1 Announce Type: new Abstract: Graph-based fraud detection on text-attributed graphs (TAGs) requires jointly modeling rich textual semantics and relational dependencies. However, existing LLM-enhanced GNN approaches are constrained by predefined prompting and decoupled training pipelines, limiting reasoning autonomy and weakening semantic-structural alignment. We propose FraudCoT, a unified framework that advances TAG-based fraud detection through autonomous, graph-aware chain-of-thought (CoT) reasoning and scalable LLM-GNN co-training. To address the limitations of predefined prompts, we introduce a fraud-aware selective CoT distillation mechanism that generates diverse reasoning paths and enhances semantic-structural understanding. These distilled CoTs are integrated into node texts, providing GNNs with enriched, multi-hop semantic and structural cues for fraud detection. Furthermore, we develop an efficient asymmetric co-training strategy that enables end-to-end optimization while significantly reducing the computational cost of naive joint training. Extensive experiments on public and industrial benchmarks demonstrate that FraudCoT achieves up to 8.8% AUPRC improvement over state-of-the-art methods and delivers up to 1,066x speedup in training throughput, substantially advancing both detection performance and efficiency.
https://arxiv.org/abs/2601.22949
Academic Papers
svg
fabf5abca8cfb683a48c568e192ef9be19b6035dc5f1db9497e9b5f052de157f
2026-02-02T00:00:00-05:00
Perplexity Cannot Always Tell Right from Wrong
arXiv:2601.22950v1 Announce Type: new Abstract: Perplexity -- a function measuring a model's overall level of "surprise" when encountering a particular output -- has gained significant traction in recent years, both as a loss function and as a simple-to-compute metric of model quality. Prior studies have pointed out several limitations of perplexity, often in an empirical manner. Here we leverage recent results on Transformer continuity to show in a rigorous manner how perplexity may be an unsuitable metric for model selection. Specifically, we prove that, if there is any sequence that a compact decoder-only Transformer model predicts accurately and confidently -- a necessary pre-requisite for strong generalisation -- it must imply the existence of another sequence with very low perplexity, but not predicted correctly by that same model. Further, by analytically studying iso-perplexity plots, we find that perplexity will not always select for the more accurate model -- rather, any increase in model confidence must be accompanied by a commensurate rise in accuracy for the new model to be selected.
https://arxiv.org/abs/2601.22950
Academic Papers
svg
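Perplexity, as discussed in the abstract above, is the exponential of the mean negative log-likelihood of a sequence. A minimal definition from per-token probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence given the model's probability for each
    token: exp of the mean negative log-likelihood. Lower means less
    'surprised'; the paper's point is that low perplexity need not
    mean the sequence is predicted correctly."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)
```

A model assigning probability 0.5 to every token of a sequence has perplexity exactly 2, regardless of whether its argmax predictions at each step are right, which is the gap the paper's model-selection argument exploits.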
b48ecb12f47e7af47285abd45c45dc2a84e98938cbc460696b0897a197950291
2026-02-02T00:00:00-05:00
Sifting the Noise: A Comparative Study of LLM Agents in Vulnerability False Positive Filtering
arXiv:2601.22952v1 Announce Type: new Abstract: Static Application Security Testing (SAST) tools are essential for identifying software vulnerabilities, but they often produce a high volume of false positives (FPs), imposing a substantial manual triage burden on developers. Recent advances in Large Language Model (LLM) agents offer a promising direction by enabling iterative reasoning, tool use, and environment interaction to refine SAST alerts. However, the comparative effectiveness of different LLM-based agent architectures for FP filtering remains poorly understood. In this paper, we present a comparative study of three state-of-the-art LLM-based agent frameworks, i.e., Aider, OpenHands, and SWE-agent, for vulnerability FP filtering. We evaluate these frameworks using the vulnerabilities from the OWASP Benchmark and real-world open-source Java projects. The experimental results show that LLM-based agents can remove the majority of SAST noise, reducing an initial FP detection rate of over 92% on the OWASP Benchmark to as low as 6.3% in the best configuration. On the real-world dataset, the best configuration of LLM-based agents can achieve an FP identification rate of up to 93.3% on CodeQL alerts. However, the benefits of agents are strongly backbone- and CWE-dependent: agentic frameworks significantly outperform vanilla prompting for stronger models such as Claude Sonnet 4 and GPT-5, but yield limited or inconsistent gains for weaker backbones. Moreover, aggressive FP reduction can come at the cost of suppressing true vulnerabilities, highlighting important trade-offs. Finally, we observe large disparities in computational cost across agent frameworks. Overall, our study demonstrates that LLM-based agents are a powerful but non-uniform solution for SAST FP filtering, and that their practical deployment requires careful consideration of agent design, backbone model choice, vulnerability category, and operational cost.
https://arxiv.org/abs/2601.22952
Academic Papers
svg
554307864dc8da7c4d50855dd2a8eeb270d9a26398eec4cca4c3805b13144675
2026-02-02T00:00:00-05:00
Residual Context Diffusion Language Models
arXiv:2601.22954v1 Announce Type: new Abstract: Diffusion Large Language Models (dLLMs) have emerged as a promising alternative to purely autoregressive language models because they can decode multiple tokens in parallel. However, state-of-the-art block-wise dLLMs rely on a "remasking" mechanism that decodes only the most confident tokens and discards the rest, effectively wasting computation. We demonstrate that recycling computation from the discarded tokens is beneficial, as these tokens retain contextual information useful for subsequent decoding iterations. In light of this, we propose Residual Context Diffusion (RCD), a module that converts these discarded token representations into contextual residuals and injects them back for the next denoising step. RCD uses a decoupled two-stage training pipeline to bypass the memory bottlenecks associated with backpropagation. We validate our method on both long CoT reasoning (SDAR) and short CoT instruction following (LLaDA) models. We demonstrate that a standard dLLM can be efficiently converted to the RCD paradigm with merely ~1 billion tokens. RCD consistently improves frontier dLLMs by 5-10 points in accuracy with minimal extra computation overhead across a wide range of benchmarks. Notably, on the most challenging AIME tasks, RCD nearly doubles baseline accuracy and attains up to 4-5x fewer denoising steps at equivalent accuracy levels.
https://arxiv.org/abs/2601.22954
Academic Papers
svg
cd33e8403464bac7bd5302f714ac969e436bac114c1cdcf0c26fa97769786ae0
2026-02-02T00:00:00-05:00
SWE-Manager: Selecting and Synthesizing Golden Proposals Before Coding
arXiv:2601.22956v1 Announce Type: new Abstract: Large language model (LLM) research in software engineering has largely focused on tasks such as code generation and bug repair. In practice, teams often draft multiple candidate proposals for fixing an issue and then deliberate on one golden proposal for implementation. This selection requires not only assessing the issue's scope, impact, and urgency, but also a clear understanding of each proposal's strengths and weaknesses. A good selection could make issue resolution more reliable while reducing regression and operational risk, whereas a poor choice can increase risk and even cause unpredictable failures. We first conduct a manual study of real-world issues to characterize the rationales maintainers use when selecting among competing proposals. Motivated by these findings, we introduce SWE-Manager, a joint selection and synthesis approach that selects the best proposal and synthesizes a golden proposal. SWE-Manager is an 8B model trained via reinforcement learning (RL) to compare proposals, justify its choice, and synthesize a golden proposal for implementation. We view proposal selection as a reasoning task, mirroring how technical managers review competing proposals by weighing issue context and each proposal's solution without executing code or running tests. On the SWE-Lancer Manager benchmark, SWE-Manager achieves 53.21 selection accuracy and 57.75 earn rate, earning 152,750 dollars and outperforming strong baselines including GPT-5. To further evaluate the effectiveness of SWE-Manager in real-world issue resolution, we design the P2A framework, which simulates a real-world workflow where multiple proposals are drafted, reviewed, and a golden proposal is selected for implementation ...
https://arxiv.org/abs/2601.22956
Academic Papers
svg
67d59e5476ce531d56c61bc5d624e8c01241af3f83e6b810dc4a78ef99ee24eb
2026-02-02T00:00:00-05:00
Triage: Hierarchical Visual Budgeting for Efficient Video Reasoning in Vision-Language Models
arXiv:2601.22959v1 Announce Type: new Abstract: Vision-Language Models (VLMs) face significant computational challenges in video processing due to massive data redundancy, which creates prohibitively long token sequences. To address this, we introduce Triage, a training-free, plug-and-play framework that reframes video reasoning as a resource allocation problem via hierarchical visual budgeting. Its first stage, Frame-Level Budgeting, identifies keyframes by evaluating their visual dynamics and relevance, generating a strategic prior based on their importance scores. Guided by this prior, the second stage, Token-Level Budgeting, allocates tokens in two phases: it first secures high-relevance Core Tokens, followed by diverse Context Tokens selected with an efficient batched Maximal Marginal Relevance (MMR) algorithm. Extensive experiments demonstrate that Triage improves inference speed and reduces memory footprint, while maintaining or surpassing the performance of baselines and other methods on various video reasoning benchmarks.
https://arxiv.org/abs/2601.22959
Academic Papers
svg
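The Context Token selection in the Triage abstract above uses Maximal Marginal Relevance. The greedy form of MMR is shown below; Triage's batched variant amortizes the inner argmax, which this sketch does not attempt (names and the default lambda are mine):

```python
import numpy as np

def mmr_select(relevance, sim, k, lam=0.5):
    """Greedy Maximal Marginal Relevance: repeatedly pick the candidate
    maximizing lam * relevance - (1 - lam) * (max similarity to the
    already-selected set), trading relevance off against diversity."""
    selected = [int(np.argmax(relevance))]
    candidates = set(range(len(relevance))) - set(selected)
    while len(selected) < k and candidates:
        def score(i):
            return lam * relevance[i] - (1 - lam) * max(sim[i][j] for j in selected)
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With lam near 1 the selection degenerates to top-k by relevance; lowering it forces the Context Tokens to cover visually distinct regions rather than restating the Core Tokens.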
8b37fa63a85e5d87e955c10cdb56441e1a3aaf218a49849d7e463bce78c2a5b7
2026-02-02T00:00:00-05:00
Improving Supervised Machine Learning Performance in Optical Quality Control via Generative AI for Dataset Expansion
arXiv:2601.22961v1 Announce Type: new Abstract: Supervised machine learning algorithms play a crucial role in optical quality control within industrial production. These approaches require representative datasets for effective model training. However, while non-defective components are frequent, defective parts are rare in production, resulting in highly imbalanced datasets that adversely impact model performance. Existing strategies to address this challenge, such as specialized loss functions or traditional data augmentation techniques, have limitations, including the need for careful hyperparameter tuning or the alteration of only simple image features. Therefore, this work explores the potential of generative artificial intelligence (GenAI) as an alternative method for expanding limited datasets and enhancing supervised machine learning performance. Specifically, we investigate Stable Diffusion and CycleGAN as image generation models, focusing on the segmentation of combine harvester components in thermal images for subsequent defect detection. Our results demonstrate that dataset expansion using Stable Diffusion yields the most significant improvement, enhancing segmentation performance by 4.6 %, resulting in a Mean Intersection over Union (Mean IoU) of 84.6 %.
https://arxiv.org/abs/2601.22961
Academic Papers
svg
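The Mean IoU figure in the abstract above is the standard segmentation metric: per-class intersection over union of predicted and ground-truth masks, averaged over classes. A minimal pixel-label version (evaluation details such as class ignoring may differ from the paper's):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union over pixel label maps: per class,
    |pred == c AND gt == c| / |pred == c OR gt == c|, averaged over
    classes that appear in either map."""
    pred, target = np.asarray(pred), np.asarray(target)
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

Averaging over classes rather than pixels is what makes the metric sensitive to rare defect classes, which is precisely where the dataset expansion is meant to help.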
35bda6274c082482e1c317327856932c8142ea7c6bff0ae402d055e0443a2a22
2026-02-02T00:00:00-05:00
ERA: Epoch-Resolved Arbitration for Duelling Admins in Group Management CRDTs
arXiv:2601.22963v1 Announce Type: new Abstract: Conflict-Free Replicated Data Types (CRDTs) are used in a range of fields for their coordination-free replication with strong eventual consistency. By prioritising availability over consistency under partition, nodes accumulate events in different orders, and rely on an associative, commutative and idempotent merge function to present a materialised view of the CRDT. Under some circumstances, the state of the materialised view over time can appear to "roll back" previously applied events. When the materialised view is used to manage group permissions such as ones found in instant messaging applications, this can lead to surprising behaviour. This can occur when there are multiple concurrent events, such as in the Duelling Admins problem where two equally permissioned admins concurrently revoke each other's permissions. Who wins? This article argues that a Byzantine admin can exploit concurrency to win the duel. As a result, an external arbiter is required to arbitrate an immutable happens-before relation between concurrent events. Arbitration occurs asynchronously in batches via optional "epoch events", preserving availability. This introduces a bounded total order within epochs, and the resulting "finality" improves on the level of consistency CRDTs can provide.
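The Duelling Admins anomaly is easy to reproduce with a toy two-phase-set permission CRDT (a deliberate simplification of the group-management CRDTs the paper targets):

```python
from dataclasses import dataclass, field

@dataclass
class AdminSet:
    """Toy permission CRDT: grow-only add/remove sets, union merge."""
    added: set = field(default_factory=set)
    removed: set = field(default_factory=set)

    def revoke(self, who):
        self.removed.add(who)

    def merge(self, other):
        # Union merge: associative, commutative, idempotent.
        self.added |= other.added
        self.removed |= other.removed

    def admins(self):
        return self.added - self.removed

a = AdminSet(added={"alice", "bob"})
b = AdminSet(added={"alice", "bob"})
a.revoke("bob")    # alice's replica revokes bob...
b.revoke("alice")  # ...while bob's replica concurrently revokes alice
a.merge(b)
# After merging, a.admins() is empty: neither admin survives, which is
# the kind of outcome ERA's external arbiter and epoch events resolve.
```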
https://arxiv.org/abs/2601.22963
Academic Papers
svg
525833a1c8a89e4146d74d67b77beff566b4311d9fcdaf444da4950fea868ae2
2026-02-02T00:00:00-05:00
EvoClinician: A Self-Evolving Agent for Multi-Turn Medical Diagnosis via Test-Time Evolutionary Learning
arXiv:2601.22964v1 Announce Type: new Abstract: Prevailing medical AI operates on an unrealistic "one-shot" model, diagnosing from a complete patient file. However, real-world diagnosis is an iterative inquiry where clinicians sequentially ask questions and order tests to strategically gather information while managing cost and time. To address this, we first propose Med-Inquire, a new benchmark designed to evaluate an agent's ability to perform multi-turn diagnosis. Built upon a dataset of real-world clinical cases, Med-Inquire simulates the diagnostic process by hiding a complete patient file behind specialized Patient and Examination agents. They force the agent to proactively ask questions and order tests to gather information piece by piece. To tackle the challenges posed by Med-Inquire, we then introduce EvoClinician, a self-evolving agent that learns efficient diagnostic strategies at test time. Its core is a "Diagnose-Grade-Evolve" loop: an Actor agent attempts a diagnosis; a Process Grader agent performs credit assignment by evaluating each action for both clinical yield and resource efficiency; finally, an Evolver agent uses this feedback to update the Actor's strategy by evolving its prompt and memory. Our experiments show EvoClinician outperforms continual learning baselines and other self-evolving agents like memory agents. The code is available at https://github.com/yf-he/EvoClinician
https://arxiv.org/abs/2601.22964
Academic Papers
svg
d0e1c58f1ab59bf32faf5dd6dc4448653df101b35013d76b34220d10a12047ac
2026-02-02T00:00:00-05:00
Self-Imitated Diffusion Policy for Efficient and Robust Visual Navigation
arXiv:2601.22965v1 Announce Type: new Abstract: Diffusion policies (DP) have demonstrated significant potential in visual navigation by capturing diverse multi-modal trajectory distributions. However, standard imitation learning (IL), which most DP methods rely on for training, often inherits sub-optimality and redundancy from expert demonstrations, thereby necessitating a computationally intensive "generate-then-filter" pipeline that relies on auxiliary selectors during inference. To address these challenges, we propose Self-Imitated Diffusion Policy (SIDP), a novel framework that learns improved planning by selectively imitating a set of trajectories sampled from itself. Specifically, SIDP introduces a reward-guided self-imitation mechanism that encourages the policy to consistently produce high-quality trajectories efficiently, rather than outputs of inconsistent quality, thereby reducing reliance on extensive sampling and post-filtering. During training, we employ a reward-driven curriculum learning paradigm to mitigate inefficient data utility, and goal-agnostic exploration for trajectory augmentation to improve planning robustness. Extensive evaluations on a comprehensive simulation benchmark show that SIDP significantly outperforms previous methods, with real-world experiments confirming its effectiveness across multiple robotic platforms. On Jetson Orin Nano, SIDP delivers a 2.5$\times$ faster inference than the baseline NavDP, i.e., 110 ms vs. 273 ms, enabling efficient real-time deployment.
https://arxiv.org/abs/2601.22965
Academic Papers
svg
8aeedf6c928182e9ac02142cc2ab3d5378a986d135b098669846cce7b7fbf83d
2026-02-02T00:00:00-05:00
A Unified View of Attention and Residual Sinks: Outlier-Driven Rescaling is Essential for Transformer Training
arXiv:2601.22966v1 Announce Type: new Abstract: We investigate the functional role of emergent outliers in large language models, specifically attention sinks (a few tokens that consistently receive large attention logits) and residual sinks (a few fixed dimensions with persistently large activations across most tokens). We hypothesize that these outliers, in conjunction with the corresponding normalizations (e.g., softmax attention and RMSNorm), effectively rescale other non-outlier components. We term this phenomenon outlier-driven rescaling and validate this hypothesis across different model architectures and training token counts. This view unifies the origin and mitigation of both sink types. Our main conclusions and observations include: (1) Outliers function jointly with normalization: removing normalization eliminates the corresponding outliers but degrades training stability and performance; directly clipping outliers while retaining normalization leads to degradation, indicating that outlier-driven rescaling contributes to training stability. (2) Outliers serve more as rescale factors rather than contributors, as the final contributions of attention and residual sinks are significantly smaller than those of non-outliers. (3) Outliers can be absorbed into learnable parameters or mitigated via explicit gated rescaling, leading to improved training performance (average gain of 2 points) and enhanced quantization robustness (1.2 points degradation under W4A4 quantization).
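The rescaling effect of a residual sink is visible even in a two-line RMSNorm experiment (an illustration of the mechanism, not the paper's measurement setup):

```python
import numpy as np

def rmsnorm(x, eps=1e-6):
    """RMSNorm without the learnable gain: x / sqrt(mean(x^2) + eps)."""
    return x / np.sqrt(np.mean(x * x) + eps)

plain = rmsnorm(np.ones(4))                        # every coordinate stays ~1.0
spiky = rmsnorm(np.array([1.0, 1.0, 1.0, 100.0]))  # one residual-sink dimension
# The outlier inflates the RMS, shrinking the three non-outlier coordinates
# by roughly 50x: outlier plus normalization acts as a rescale factor.
```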
https://arxiv.org/abs/2601.22966
Academic Papers
svg
ca9e1ffe36a841f38df0c79bafa6e4d8d103468901783a16ad3e0a4e8df70703
2026-02-02T00:00:00-05:00
Improved Algorithms for Nash Welfare in Linear Bandits
arXiv:2601.22969v1 Announce Type: new Abstract: Nash regret has recently emerged as a principled fairness-aware performance metric for stochastic multi-armed bandits, motivated by the Nash Social Welfare objective. Although this notion has been extended to linear bandits, existing results suffer from suboptimality in ambient dimension $d$, stemming from proof techniques that rely on restrictive concentration inequalities. In this work, we resolve this open problem by introducing new analytical tools that yield an order-optimal Nash regret bound in linear bandits. Beyond Nash regret, we initiate the study of $p$-means regret in linear bandits, a unifying framework that interpolates between fairness and utility objectives and strictly generalizes Nash regret. We propose a generic algorithmic framework, FairLinBandit, that works as a meta-algorithm on top of any linear bandit strategy. We instantiate this framework using two bandit algorithms: Phased Elimination and Upper Confidence Bound, and prove that both achieve sublinear $p$-means regret for the entire range of $p$. Extensive experiments on linear bandit instances generated from real-world datasets demonstrate that our methods consistently outperform the existing state-of-the-art baseline.
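For reference, the $p$-means welfare underlying the regret notion interpolates between the arithmetic mean ($p = 1$), the geometric mean, i.e. Nash welfare ($p \to 0$), and the minimum ($p \to -\infty$). A direct sketch of the objective (not of the regret bound itself):

```python
import math

def p_mean(xs, p):
    """Generalized p-mean of positive values: (mean(x^p))^(1/p).
    The p -> 0 limit is the geometric mean, the Nash Social Welfare
    objective; large negative p approaches the minimum (max-min fairness)."""
    if p == 0:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)
```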
https://arxiv.org/abs/2601.22969
Academic Papers
svg
86eb0c322e923a837750155782a916bafea9421996cb6c290d262ab204de6e42
2026-02-02T00:00:00-05:00
Stabilizing the Q-Gradient Field for Policy Smoothness in Actor-Critic
arXiv:2601.22970v1 Announce Type: new Abstract: Policies learned via continuous actor-critic methods often exhibit erratic, high-frequency oscillations, making them unsuitable for physical deployment. Current approaches attempt to enforce smoothness by directly regularizing the policy's output. We argue that this approach treats the symptom rather than the cause. In this work, we theoretically establish that policy non-smoothness is fundamentally governed by the differential geometry of the critic. By applying implicit differentiation to the actor-critic objective, we prove that the sensitivity of the optimal policy is bounded by the ratio of the Q-function's mixed-partial derivative (noise sensitivity) to its action-space curvature (signal distinctness). To empirically validate this theoretical insight, we introduce PAVE (Policy-Aware Value-field Equalization), a critic-centric regularization framework that treats the critic as a scalar field and stabilizes its induced action-gradient field. PAVE rectifies the learning signal by minimizing the Q-gradient volatility while preserving local curvature. Experimental results demonstrate that PAVE achieves smoothness and robustness comparable to policy-side smoothness regularization methods, while maintaining competitive task performance, without modifying the actor.
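One way to read the "Q-gradient volatility" quantity is as the movement of the critic's action-gradient field under a small state perturbation. A finite-difference sketch (the perturbation scheme and scale are assumptions, not the paper's estimator):

```python
import numpy as np

def action_grad(Q, s, a, eps=1e-5):
    """Central finite-difference gradient of Q(s, a) w.r.t. the action a."""
    g = np.zeros_like(a)
    for i in range(len(a)):
        d = np.zeros_like(a)
        d[i] = eps
        g[i] = (Q(s, a + d) - Q(s, a - d)) / (2 * eps)
    return g

def grad_volatility(Q, s, a, noise=1e-2):
    """How far the action-gradient field moves when the state is nudged:
    a proxy for the mixed partial d^2Q/(ds da) that the abstract identifies
    as the driver of policy non-smoothness."""
    ds = noise * np.ones_like(s)
    diff = action_grad(Q, s + ds, a) - action_grad(Q, s, a)
    return float(np.linalg.norm(diff) ** 2)
```

In a practical critic-side regularizer this penalty would be computed with autodiff and minimized during critic updates.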
https://arxiv.org/abs/2601.22970
Academic Papers
svg
bbcb9860ed72e3c0d8b698a9c88af86bc72e3fedbbdadf449b7a003d4b467ab5
2026-02-02T00:00:00-05:00
MiTa: A Hierarchical Multi-Agent Collaboration Framework with Memory-integrated and Task Allocation
arXiv:2601.22974v1 Announce Type: new Abstract: Recent advances in large language models (LLMs) have substantially accelerated the development of embodied agents. LLM-based multi-agent systems mitigate the inefficiency of single agents in complex tasks. However, they still suffer from issues such as memory inconsistency and agent behavioral conflicts. To address these challenges, we propose MiTa, a hierarchical, memory-integrated task-allocation framework to enhance collaborative efficiency. MiTa organizes agents into a manager-member hierarchy, where the manager incorporates additional allocation and summary modules that enable (1) global task allocation and (2) episodic memory integration. The allocation module enables the manager to allocate tasks from a global perspective, thereby avoiding potential inter-agent conflicts. The summary module, triggered by task progress updates, performs episodic memory integration by condensing recent collaboration history into a concise summary that preserves long-horizon context. By combining task allocation with episodic memory, MiTa attains a clearer understanding of the task and facilitates globally consistent task distribution. Experimental results confirm that MiTa achieves superior efficiency and adaptability in complex multi-agent cooperation over strong baseline methods.
https://arxiv.org/abs/2601.22974
Academic Papers
svg
575a1e93c64f999a2223a90b59032a2ec87921c9439b959346a7e8a549f88b9e
2026-02-02T00:00:00-05:00
Golden Goose: A Simple Trick to Synthesize Unlimited RLVR Tasks from Unverifiable Internet Text
arXiv:2601.22975v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has become a cornerstone for unlocking complex reasoning in Large Language Models (LLMs). Yet, scaling up RL is bottlenecked by limited existing verifiable data, where improvements increasingly saturate over prolonged training. To overcome this, we propose Golden Goose, a simple trick to synthesize unlimited RLVR tasks from unverifiable internet text by constructing a multiple-choice question-answering version of the fill-in-the-middle task. Given a source text, we prompt an LLM to identify and mask key reasoning steps, then generate a set of diverse, plausible distractors. This enables us to leverage reasoning-rich unverifiable corpora typically excluded from prior RLVR data construction (e.g., science textbooks) to synthesize GooseReason-0.7M, a large-scale RLVR dataset with over 0.7 million tasks spanning mathematics, programming, and general scientific domains. Empirically, GooseReason effectively revives models saturated on existing RLVR data, yielding robust, sustained gains under continuous RL and achieving new state-of-the-art results for 1.5B and 4B-Instruct models across 15 diverse benchmarks. Finally, we deploy Golden Goose in a real-world setting, synthesizing RLVR tasks from raw FineWeb scrapes for the cybersecurity domain, where no prior RLVR data exists. Training Qwen3-4B-Instruct on the resulting data GooseReason-Cyber sets a new state-of-the-art in cybersecurity, surpassing a 7B domain-specialized model with extensive domain-specific pre-training and post-training. This highlights the potential of automatically scaling up RLVR data by exploiting abundant, reasoning-rich, unverifiable internet text.
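The "simple trick" of masking a reasoning step and attaching distractors can be sketched in a few lines (the function name, option count, and output fields are illustrative; in the paper the masked span and distractors are produced by an LLM over unverifiable text):

```python
import random

def make_mcq(text, span, distractors, seed=0):
    """Golden-Goose-style task construction (sketch): mask a key step of
    the source text and turn it into a verifiable multiple-choice question.
    `span` must occur verbatim in `text`; three distractors give 4 options."""
    masked = text.replace(span, "____", 1)
    options = list(distractors) + [span]
    random.Random(seed).shuffle(options)
    answer = "ABCD"[options.index(span)]
    return {"question": masked, "options": options, "answer": answer}
```

The answer letter makes the task verifiable by exact match, which is what RLVR needs as a reward signal.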
https://arxiv.org/abs/2601.22975
Academic Papers
svg
b64f5ffae810293c072add9bed988dc4a3c3ae51652cde42bdb66a219798f896
2026-02-02T00:00:00-05:00
Quantifying Model Uniqueness in Heterogeneous AI Ecosystems
arXiv:2601.22977v1 Announce Type: new Abstract: As AI systems evolve from isolated predictors into complex, heterogeneous ecosystems of foundation models and specialized adapters, distinguishing genuine behavioral novelty from functional redundancy becomes a critical governance challenge. Here, we introduce a statistical framework for auditing model uniqueness based on In-Silico Quasi-Experimental Design (ISQED). By enforcing matched interventions across models, we isolate intrinsic model identity and quantify uniqueness as the Peer-Inexpressible Residual (PIER), i.e. the component of a target's behavior strictly irreducible to any stochastic convex combination of its peers, with vanishing PIER characterizing when such a routing-based substitution becomes possible. We establish the theoretical foundations of ecosystem auditing through three key contributions. First, we prove a fundamental limitation of observational logs: uniqueness is mathematically non-identifiable without intervention control. Second, we derive a scaling law for active auditing, showing that our adaptive query protocol achieves minimax-optimal sample efficiency ($d\sigma^2\gamma^{-2}\log(Nd/\delta)$). Third, we demonstrate that cooperative game-theoretic methods, such as Shapley values, fundamentally fail to detect redundancy. We implement this framework via the DISCO (Design-Integrated Synthetic Control) estimator and deploy it across diverse ecosystems, including computer vision models (ResNet/ConvNeXt/ViT), large language models (BERT/RoBERTa), and city-scale traffic forecasters. These results move trustworthy AI beyond explaining single models: they establish a principled, intervention-based science of auditing and governing heterogeneous model ecosystems.
https://arxiv.org/abs/2601.22977
Academic Papers
svg
db89fe82c1696cdcbfe3adba73f22f33cee3aebc4933734349c4b0fc5ee8376b
2026-02-02T00:00:00-05:00
SpecIBT: Formally Verified Protection Against Speculative Control-Flow Hijacking
arXiv:2601.22978v1 Announce Type: new Abstract: This paper introduces SpecIBT, a formally verified defense against Spectre BTB, RSB, and PHT that combines CET-style hardware-assisted control-flow integrity with compiler-inserted speculative load hardening (SLH). SpecIBT is based on the novel observation that in the presence of CET-style protection, we can precisely detect BTB misspeculation for indirect calls and set the SLH misspeculation flag. We formalize SpecIBT as a transformation in Rocq and provide a machine-checked proof that it achieves relative security: any transformed program running with speculation leaks no more than what the source program leaks without speculation. This strong security guarantee applies to arbitrary programs, even those not following the cryptographic constant-time programming discipline.
https://arxiv.org/abs/2601.22978
Academic Papers
svg
35a5eb4982af4dbe47df5acb0d390d0b73cb811586a53b00c8ba07b0c5867bba
2026-02-02T00:00:00-05:00
Learnable Permutation for Structured Sparsity on Transformer Models
arXiv:2601.22980v1 Announce Type: new Abstract: Structured sparsity has emerged as a popular model pruning technique, widely adopted in various architectures, including CNNs, Transformer models, and especially large language models (LLMs) in recent years. A promising direction to further improve post-pruning performance is weight permutation, which reorders model weights into patterns more amenable to pruning. However, the exponential growth of the permutation search space with the scale of Transformer architectures forces most methods to rely on greedy or heuristic algorithms, limiting the effectiveness of reordering. In this work, we propose a novel end-to-end learnable permutation framework. Our method introduces a learnable permutation cost matrix to quantify the cost of swapping any two input channels of a given weight matrix, a differentiable bipartite matching solver to obtain the optimal binary permutation matrix given a cost matrix, and a sparsity optimization loss function to directly optimize the permutation operator. We extensively validate our approach on vision and language Transformers, demonstrating that our method achieves state-of-the-art permutation results for structured sparsity.
https://arxiv.org/abs/2601.22980
Academic Papers
svg
e73297f4295a2b170710168df5b0a79d0523a4f25adf115bd1251a23538efea1
2026-02-02T00:00:00-05:00
About an Automating Annotation Method for Robot Markers
arXiv:2601.22982v1 Announce Type: new Abstract: Factory automation has become increasingly important due to labor shortages, leading to the introduction of autonomous mobile robots for tasks such as material transportation. Markers are commonly used for robot self-localization and object identification. In the RoboCup Logistics League (RCLL), ArUco markers are employed both for robot localization and for identifying processing modules. Conventional recognition relies on OpenCV-based image processing, which detects black-and-white marker patterns. However, these methods often fail under noise, motion blur, defocus, or varying illumination conditions. Deep-learning-based recognition offers improved robustness under such conditions, but requires large amounts of annotated data. Annotation must typically be done manually, as the type and position of objects cannot be detected automatically, making dataset preparation a major bottleneck. In contrast, the ArUco library provides built-in recognition routines that return both marker ID and position, enabling automatic annotation. This paper proposes an automated annotation method for training deep-learning models on ArUco marker images. By leveraging marker detection results obtained from the ArUco module, the proposed approach eliminates the need for manual labeling. A YOLO-based model is trained using the automatically annotated dataset, and its performance is evaluated under various conditions. Experimental results demonstrate that the proposed method improves recognition performance compared with conventional image-processing techniques, particularly for images affected by blur or defocus. Automatic annotation also reduces human effort and ensures consistent labeling quality. Future work will investigate the relationship between confidence thresholds and recognition performance.
https://arxiv.org/abs/2601.22982
Academic Papers
svg
901ac667134ce5941ce1e1e33616be9cbd616d53a60505deedf1006d93cc5e2b
2026-02-02T00:00:00-05:00
PIDSMaker: Building and Evaluating Provenance-based Intrusion Detection Systems
arXiv:2601.22983v1 Announce Type: new Abstract: Recent provenance-based intrusion detection systems (PIDSs) have demonstrated strong potential for detecting advanced persistent threats (APTs) by applying machine learning to system provenance graphs. However, evaluating and comparing PIDSs remains difficult: prior work uses inconsistent preprocessing pipelines, non-standard dataset splits, and incompatible ground-truth labeling and metrics. These discrepancies undermine reproducibility, impede fair comparison, and impose substantial re-implementation overhead on researchers. We present PIDSMaker, an open-source framework for developing and evaluating PIDSs under consistent protocols. PIDSMaker consolidates eight state-of-the-art systems into a modular, extensible architecture with standardized preprocessing and ground-truth labels, enabling consistent experiments and apples-to-apples comparisons. A YAML-based configuration interface supports rapid prototyping by composing components across systems without code changes. PIDSMaker also includes utilities for ablation studies, hyperparameter tuning, multi-run instability measurement, and visualization, addressing methodological gaps identified in prior work. We demonstrate PIDSMaker through concrete use cases and release it with preprocessed datasets and labels to support shared evaluation for the PIDS community.
https://arxiv.org/abs/2601.22983
Academic Papers
svg
db95fc504b9ebf2e3618834fc470c860d7d41f2161ef3f11fc2e40ee4ced06a7
2026-02-02T00:00:00-05:00
Why Your Deep Research Agent Fails? On Hallucination Evaluation in Full Research Trajectory
arXiv:2601.22984v1 Announce Type: new Abstract: Diagnosing the failure mechanisms of Deep Research Agents (DRAs) remains a critical challenge. Existing benchmarks predominantly rely on end-to-end evaluation, obscuring critical intermediate hallucinations, such as flawed planning, that accumulate throughout the research trajectory. To bridge this gap, we propose a shift from outcome-based to process-aware evaluation by auditing the full research trajectory. We introduce the PIES Taxonomy to categorize hallucinations along functional components (Planning vs. Summarization) and error properties (Explicit vs. Implicit). We instantiate this taxonomy into a fine-grained evaluation framework that decomposes the trajectory to rigorously quantify these hallucinations. Leveraging this framework to isolate 100 distinctively hallucination-prone tasks including adversarial scenarios, we curate DeepHalluBench. Experiments on six state-of-the-art DRAs reveal that no system achieves robust reliability. Furthermore, our diagnostic analysis traces the etiology of these failures to systemic deficits, specifically hallucination propagation and cognitive biases, providing foundational insights to guide future architectural optimization. Data and code are available at https://github.com/yuhao-zhan/DeepHalluBench.
https://arxiv.org/abs/2601.22984
Academic Papers
svg
f06284cef77dac121be017d9341edcef3498db84933b7c9b5c75f776b1ca1dea
2026-02-02T00:00:00-05:00
dgMARK: Decoding-Guided Watermarking for Diffusion Language Models
arXiv:2601.22985v1 Announce Type: new Abstract: We propose dgMARK, a decoding-guided watermarking method for discrete diffusion language models (dLLMs). Unlike autoregressive models, dLLMs can generate tokens in arbitrary order. While an ideal conditional predictor would be invariant to this order, practical dLLMs exhibit strong sensitivity to the unmasking order, creating a new channel for watermarking. dgMARK steers the unmasking order toward positions whose high-reward candidate tokens satisfy a simple parity constraint induced by a binary hash, without explicitly reweighting the model's learned probabilities. The method is plug-and-play with common decoding strategies (e.g., confidence, entropy, and margin-based ordering) and can be strengthened with a one-step lookahead variant. Watermarks are detected via elevated parity-matching statistics, and a sliding-window detector ensures robustness under post-editing operations including insertion, deletion, substitution, and paraphrasing.
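The parity constraint and its detection statistic can be sketched with a keyed hash (the hash choice and statistic shape are assumptions; the paper's construction additionally steers the unmasking order rather than hashing tokens directly):

```python
import hashlib

def parity_bit(token, key="secret"):
    """Pseudorandom 0/1 parity assigned to a token by a keyed hash."""
    digest = hashlib.sha256((key + token).encode()).digest()
    return digest[0] & 1

def parity_match_rate(tokens, key="secret"):
    """Detection statistic: watermarked generations over-represent tokens
    whose parity is 1, so this rate drifts above the ~0.5 expected for
    unmarked text. Computing it over sliding windows keeps detection
    robust to local insertions, deletions, and substitutions."""
    return sum(parity_bit(t, key) for t in tokens) / len(tokens)
```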
https://arxiv.org/abs/2601.22985
Academic Papers
svg
d13e0a0e4a2eb8a6e80bf031fc138040da0d94366d7745226366e614c4a2dec8
2026-02-02T00:00:00-05:00
ArabicDialectHub: A Cross-Dialectal Arabic Learning Resource and Platform
arXiv:2601.22987v1 Announce Type: new Abstract: We present ArabicDialectHub, a cross-dialectal Arabic learning resource comprising 552 phrases across six varieties (Moroccan Darija, Lebanese, Syrian, Emirati, Saudi, and MSA) and an interactive web platform. Phrases were generated using LLMs and validated by five native speakers, stratified by difficulty, and organized thematically. The open-source platform provides translation exploration, adaptive quizzing with algorithmic distractor generation, cloud-synchronized progress tracking, and cultural context. Both the dataset and complete platform source code are released under MIT license. Platform: https://arabic-dialect-hub.netlify.app.
https://arxiv.org/abs/2601.22987
Academic Papers
svg
8c575dccc62d0d7e4cafc6a505d408a37d359ec6baecffbdd98c149023be8a0a
2026-02-02T00:00:00-05:00
Learning Geometrically-Grounded 3D Visual Representations for View-Generalizable Robotic Manipulation
arXiv:2601.22988v1 Announce Type: new Abstract: Real-world robotic manipulation demands visuomotor policies capable of robust spatial scene understanding and strong generalization across diverse camera viewpoints. While recent advances in 3D-aware visual representations have shown promise, they still suffer from several key limitations, including reliance on multi-view observations during inference which is impractical in single-view restricted scenarios, incomplete scene modeling that fails to capture holistic and fine-grained geometric structures essential for precise manipulation, and lack of effective policy training strategies to retain and exploit the acquired 3D knowledge. To address these challenges, we present MethodName, a unified representation-policy learning framework for view-generalizable robotic manipulation. MethodName introduces a single-view 3D pretraining paradigm that leverages point cloud reconstruction and feed-forward gaussian splatting under multi-view supervision to learn holistic geometric representations. During policy learning, MethodName performs multi-step distillation to preserve the pretrained geometric understanding and effectively transfer it to manipulation skills. We conduct experiments on 12 RLBench tasks, where our approach outperforms the previous state-of-the-art method by 12.7% in average success rate. Further evaluation on six representative tasks demonstrates strong zero-shot view generalization, with success rate drops of only 22.0% and 29.7% under moderate and large viewpoint shifts respectively, whereas the state-of-the-art method suffers larger decreases of 41.6% and 51.5%.
https://arxiv.org/abs/2601.22988
Academic Papers
svg
17ec96cfa32388b6469baa2ed5204104c266198cf960c3fe351a9f762c7db668
2026-02-02T00:00:00-05:00
Self-Supervised Slice-to-Volume Reconstruction with Gaussian Representations for Fetal MRI
arXiv:2601.22990v1 Announce Type: new Abstract: Reconstructing 3D fetal MR volumes from motion-corrupted stacks of 2D slices is a crucial and challenging task. Conventional slice-to-volume reconstruction (SVR) methods are time-consuming and require multiple orthogonal stacks for reconstruction. While learning-based SVR approaches have significantly reduced the time required at the inference stage, they heavily rely on ground truth information for training, which is inaccessible in practice. To address these challenges, we propose GaussianSVR, a self-supervised framework for slice-to-volume reconstruction. GaussianSVR represents the target volume using 3D Gaussian representations to achieve high-fidelity reconstruction. It leverages a simulated forward slice acquisition model to enable self-supervised training, alleviating the need for ground-truth volumes. Furthermore, to enhance both accuracy and efficiency, we introduce a multi-resolution training strategy that jointly optimizes Gaussian parameters and spatial transformations across different resolution levels. Experiments show that GaussianSVR outperforms the baseline methods on fetal MR volumetric reconstruction. Code will be available upon acceptance.
https://arxiv.org/abs/2601.22990
Academic Papers
svg
1f039a8249f2d32907ab3d9069f72032dc29c325df317a98bfff35f2e4b4cecc
2026-02-02T00:00:00-05:00
Value-at-Risk Constrained Policy Optimization
arXiv:2601.22993v1 Announce Type: new Abstract: We introduce the Value-at-Risk Constrained Policy Optimization algorithm (VaR-CPO), a sample efficient and conservative method designed to optimize Value-at-Risk (VaR) constraints directly. Empirically, we demonstrate that VaR-CPO is capable of safe exploration, achieving zero constraint violations during training in feasible environments, a critical property that baseline methods fail to uphold. To overcome the inherent non-differentiability of the VaR constraint, we employ the one-sided Chebyshev inequality to obtain a tractable surrogate based on the first two moments of the cost return. Additionally, by extending the trust-region framework of the Constrained Policy Optimization (CPO) method, we provide rigorous worst-case bounds for both policy improvement and constraint violation during the training process.
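A plausible form of the moment-based surrogate: by Cantelli's one-sided Chebyshev inequality, $P(C \ge \mu + t) \le \sigma^2/(\sigma^2 + t^2)$, so requiring $\mu + \sigma\sqrt{\alpha/(1-\alpha)} \le d$ is sufficient for $P(C > d) \le 1 - \alpha$, i.e. for the $\alpha$-VaR constraint. (The exact parameterization in the paper may differ.)

```python
import math

def var_surrogate(mu, sigma, alpha):
    """Differentiable upper bound on the alpha-VaR of the cost return,
    built from its first two moments via Cantelli's inequality:
        VaR_alpha(C) <= mu + sigma * sqrt(alpha / (1 - alpha)).
    Constraining this bound below a limit d is conservative but smooth,
    sidestepping the non-differentiability of the quantile itself."""
    return mu + sigma * math.sqrt(alpha / (1.0 - alpha))
```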
https://arxiv.org/abs/2601.22993
Academic Papers
svg
0747c0a13328df018ba9b4732f5ccd7a0e3f665e4f2f7a746f24fd300e10e572
2026-02-02T00:00:00-05:00
Competitive Non-Clairvoyant KV-Cache Scheduling for LLM Inference
arXiv:2601.22996v1 Announce Type: new Abstract: Large Language Model (LLM) inference presents a unique scheduling challenge due to the Key-Value (KV) cache, where a job's memory footprint grows linearly with the number of decoded tokens. This growth couples scheduling decisions with feasibility: a scheduler must minimize latency under a hard memory budget, yet the response lengths of requests are inherently unknown. While recent works have explored this problem either assuming clairvoyance -- exact knowledge of response lengths -- or relying on machine-learned predictions, obtaining robust performance guarantees without any prior knowledge of job sizes remains a theoretically fundamental and practically important open problem. In this work, we propose the Geometric Slicing Algorithm (GSA), the first non-clairvoyant policy to achieve a constant competitive ratio for this problem in the offline batch setting. GSA manages uncertainty through a geometric phase structure that periodically restarts jobs to bound memory exposure, combined with a staggered pipeline mechanism that enables high concurrency by smoothing aggregate memory consumption. We prove that GSA achieves a competitive ratio of at most 61.92 for general instances, improving to 32 in the large-memory regime. Our algorithmic framework also yields a clairvoyant counterpart, the Geometric Batching Algorithm (GBA), which achieves an approximation ratio of 10.67 for general instances and 6.75 in the large-memory regime -- significantly improving upon the best previously known bound of over 9000. Numerical experiments on real request traces demonstrate that our algorithms perform robustly while preserving these worst-case guarantees.
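The geometric phase structure behind GSA can be sketched as a restart schedule (the doubling base and exact bookkeeping are assumptions for illustration):

```python
def geometric_phases(max_len, base=2):
    """Phase lengths 1, base, base^2, ... that cover a job of unknown
    length up to max_len. Restarting a job at each phase boundary bounds
    how much KV-cache memory any single job can hold at once, at the cost
    of at most a constant factor of repeated decoding work."""
    phases, total, p = [], 0, 1
    while total < max_len:
        phases.append(p)
        total += p
        p *= base
    return phases
```

With base 2 the total work across phases is at most twice the true length, which is the kind of bounded overhead a constant competitive ratio requires.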
https://arxiv.org/abs/2601.22996
Academic Papers
svg
87c0fecd753b42be5995f3bc5307b194a1c2c7ed8b07af4035c26434cbf4f9ce
2026-02-02T00:00:00-05:00
TriCEGAR: A Trace-Driven Abstraction Mechanism for Agentic AI
arXiv:2601.22997v1 Announce Type: new Abstract: Agentic AI systems act through tools and evolve their behavior over long, stochastic interaction traces. This setting complicates assurance, because behavior depends on nondeterministic environments and probabilistic model outputs. Prior work introduced runtime verification for agentic AI via Dynamic Probabilistic Assurance (DPA), learning an MDP online and model checking quantitative properties. A key limitation is that developers must manually define the state abstraction, which couples verification to application-specific heuristics and increases adoption friction. This paper proposes TriCEGAR, a trace-driven abstraction mechanism that automates state construction from execution logs and supports online construction of an agent behavioral MDP. TriCEGAR represents abstractions as predicate trees learned from traces and refined using counterexamples. We describe a framework-native implementation that (i) captures typed agent lifecycle events, (ii) builds abstractions from traces, (iii) constructs an MDP, and (iv) performs probabilistic model checking to compute bounds such as Pmax(success) and Pmin(failure). We also show how run likelihoods enable anomaly detection as a guardrailing signal.
https://arxiv.org/abs/2601.22997
Academic Papers
svg
039ae1f27370f6fe5f7264ed893cdd5b1c8aae721b10c7285d0d1cccfec91a94
2026-02-02T00:00:00-05:00
Mano: Restriking Manifold Optimization for LLM Training
arXiv:2601.23000v1 Announce Type: new Abstract: While large language models (LLMs) have emerged as a significant advancement in artificial intelligence, the hardware and computational costs for training LLMs are also significantly burdensome. Among the state-of-the-art optimizers, AdamW relies on diagonal curvature estimates and ignores structural properties, while Muon applies global spectral normalization at the expense of losing curvature information. In this study, we restrike manifold optimization methods for training LLMs, which may address both optimizers' limitations; conventional manifold optimization methods have been largely overlooked due to their poor performance in large-scale model optimization. By innovatively projecting the momentum onto the tangent space of model parameters and constraining it on a rotational Oblique manifold, we propose a novel, powerful, and efficient optimizer **Mano** that is the first to bridge the performance gap between manifold optimization and modern optimizers. Extensive experiments on the LLaMA and Qwen3 models demonstrate that Mano consistently and significantly outperforms AdamW and Muon even with less memory consumption and computational complexity, respectively, suggesting an expanded Pareto frontier in terms of space and time efficiency.
https://arxiv.org/abs/2601.23000
Academic Papers
svg
1c6c0f917bde1a81c911def8a2c9432df684c821b78d5d0e5cef97ef03687e1d
2026-02-02T00:00:00-05:00
Bias Beyond Borders: Political Ideology Evaluation and Steering in Multilingual LLMs
arXiv:2601.23001v1 Announce Type: new Abstract: Large Language Models (LLMs) increasingly shape global discourse, making fairness and ideological neutrality essential for responsible AI deployment. Despite growing attention to political bias in LLMs, prior work largely focuses on high-resource, Western languages or narrow multilingual settings, leaving cross-lingual consistency and safe post-hoc mitigation underexplored. To address this gap, we present a large-scale multilingual evaluation of political bias spanning 50 countries and 33 languages. We introduce a complementary post-hoc mitigation framework, Cross-Lingual Alignment Steering (CLAS), designed to augment existing steering methods by aligning ideological representations across languages and dynamically regulating intervention strength. This method aligns latent ideological representations induced by political prompts into a shared ideological subspace, ensuring cross-lingual consistency, while the adaptive mechanism prevents over-correction and preserves coherence. Experiments demonstrate substantial bias reduction along both economic and social axes with minimal degradation in response quality. The proposed framework establishes a scalable and interpretable paradigm for fairness-aware multilingual LLM governance, balancing ideological neutrality with linguistic and cultural diversity.
https://arxiv.org/abs/2601.23001
Academic Papers
svg
43821f32ce4aa6989755a0fa80c3ec8defaf8aa5ba44fcbf932f9eaab693f362
2026-02-02T00:00:00-05:00
InstructDiff: Domain-Adaptive Data Selection via Differential Entropy for Efficient LLM Fine-Tuning
arXiv:2601.23006v1 Announce Type: new Abstract: Supervised fine-tuning (SFT) is fundamental to adapting large language models, yet training on complete datasets incurs prohibitive costs with diminishing returns. Existing data selection methods suffer from severe domain specificity: techniques optimized for general instruction-following fail on reasoning tasks, and vice versa. We observe that measuring entropy differences between base models and minimally instruction-tuned calibrated models reveals a pattern -- samples with the lowest differential entropy consistently yield optimal performance across domains, yet this principle manifests domain-adaptively: reasoning tasks favor entropy increase (cognitive expansion), while general tasks favor entropy decrease (cognitive compression). We introduce InstructDiff, a unified framework that operationalizes differential entropy as a domain-adaptive selection criterion through warmup calibration, bi-directional NLL filtering, and entropy-based ranking. Extensive experiments show that InstructDiff achieves 17\% relative improvement over full data training on mathematical reasoning and 52\% for general instruction-following, outperforming prior baselines while using only 10\% of the data.
https://arxiv.org/abs/2601.23006
Academic Papers
svg
0f0cb1221ff77adfb5bc85a8130c278d69cedce2f90fe084b913f41c44542163
2026-02-02T00:00:00-05:00
Leveraging Multi-Rater Annotations to Calibrate Object Detectors in Microscopy Imaging
arXiv:2601.23007v1 Announce Type: new Abstract: Deep learning-based object detectors have achieved impressive performance in microscopy imaging, yet their confidence estimates often lack calibration, limiting their reliability for biomedical applications. In this work, we introduce a new approach to improve model calibration by leveraging multi-rater annotations. We propose to train separate models on the annotations from single experts and aggregate their predictions to emulate consensus. This improves upon label sampling strategies, where models are trained on mixed annotations, and offers a more principled way to capture inter-rater variability. Experiments on a colorectal organoid dataset annotated by two experts demonstrate that our rater-specific ensemble strategy improves calibration performance while maintaining comparable detection accuracy. These findings suggest that explicitly modelling rater disagreement can lead to more trustworthy object detectors in biomedical imaging.
https://arxiv.org/abs/2601.23007
Academic Papers
svg
d5ad266c344045516ac6560f8a2179352b9846f0245feaa24614713c1058b90a
2026-02-02T00:00:00-05:00
SolAgent: A Specialized Multi-Agent Framework for Solidity Code Generation
arXiv:2601.23009v1 Announce Type: new Abstract: Smart contracts are the backbone of the decentralized web, yet ensuring their functional correctness and security remains a critical challenge. While Large Language Models (LLMs) have shown promise in code generation, they often struggle with the rigorous requirements of smart contracts, frequently producing code that is buggy or vulnerable. To address this, we propose SolAgent, a novel tool-augmented multi-agent framework that mimics the workflow of human experts. SolAgent integrates a \textbf{dual-loop refinement mechanism}: an inner loop using the \textit{Forge} compiler to ensure functional correctness, and an outer loop leveraging the \textit{Slither} static analyzer to eliminate security vulnerabilities. Additionally, the agent is equipped with file system capabilities to resolve complex project dependencies. Experiments on the SolEval+ Benchmark, a rigorous suite derived from high-quality real-world projects, demonstrate that SolAgent achieves a Pass@1 rate of up to \textbf{64.39\%}, significantly outperforming state-of-the-art LLMs ($\sim$25\%), AI IDEs (e.g., GitHub Copilot), and existing agent frameworks. Moreover, it reduces security vulnerabilities by up to \textbf{39.77\%} compared to human-written baselines. Finally, we demonstrate that the high-quality trajectories generated by SolAgent can be used to distill smaller, open-source models, democratizing access to secure smart contract generation. We release our data and code at https://github.com/openpaperz/SolAgent.
https://arxiv.org/abs/2601.23009
Academic Papers
svg
2a7bd0a346f4d73341c6a70404f1890bf07ab3378fb6b52e5929d573c730f5e4
2026-02-02T00:00:00-05:00
Automatic Constraint Policy Optimization based on Continuous Constraint Interpolation Framework for Offline Reinforcement Learning
arXiv:2601.23010v1 Announce Type: new Abstract: Offline Reinforcement Learning (RL) relies on policy constraints to mitigate extrapolation error, where both the constraint form and constraint strength critically shape performance. However, most existing methods commit to a single constraint family: weighted behavior cloning, density regularization, or support constraints, without a unified principle that explains their connections or trade-offs. In this work, we propose Continuous Constraint Interpolation (CCI), a unified optimization framework in which these three constraint families arise as special cases along a common constraint spectrum. The CCI framework introduces a single interpolation parameter that enables smooth transitions and principled combinations across constraint types. Building on CCI, we develop Automatic Constraint Policy Optimization (ACPO), a practical primal--dual algorithm that adapts the interpolation parameter via a Lagrangian dual update. Moreover, we establish a maximum-entropy performance difference lemma and derive performance lower bounds for both the closed-form optimal policy and its parametric projection. Experiments on D4RL and NeoRL2 demonstrate robust gains across diverse domains, achieving state-of-the-art performance overall.
https://arxiv.org/abs/2601.23010
Academic Papers
svg
3a3addac26f21448ab03c05f2ff3b26d24fb5b21d9b3a4e40742c00ff4a19e6f
2026-02-02T00:00:00-05:00
Leveraging Convolutional Sparse Autoencoders for Robust Movement Classification from Low-Density sEMG
arXiv:2601.23011v1 Announce Type: new Abstract: Reliable control of myoelectric prostheses is often hindered by high inter-subject variability and the clinical impracticality of high-density sensor arrays. This study proposes a deep learning framework for accurate gesture recognition using only two surface electromyography (sEMG) channels. The method employs a Convolutional Sparse Autoencoder (CSAE) to extract temporal feature representations directly from raw signals, eliminating the need for heuristic feature engineering. On a 6-class gesture set, our model achieved a multi-subject F1-score of 94.3% $\pm$ 0.3%. To address subject-specific differences, we present a few-shot transfer learning protocol that improved performance on unseen subjects from a baseline of 35.1% $\pm$ 3.1% to 92.3% $\pm$ 0.9% with minimal calibration data. Furthermore, the system supports functional extensibility through an incremental learning strategy, allowing for expansion to a 10-class set with a 90.0% $\pm$ 0.2% F1-score without full model retraining. By combining high precision with minimal computational and sensor overhead, this framework provides a scalable and efficient approach for the next generation of affordable and adaptive prosthetic systems.
https://arxiv.org/abs/2601.23011
Academic Papers
svg
7068c2cd1aeddc5661fa6feb978f68bae033e70dfddddb480344d9737acbe743
2026-02-02T00:00:00-05:00
Mem-T: Densifying Rewards for Long-Horizon Memory Agents
arXiv:2601.23014v1 Announce Type: new Abstract: Memory agents, which depart from predefined memory-processing pipelines by endogenously managing the processing, storage, and retrieval of memories, have garnered increasing attention for their autonomy and adaptability. However, existing training paradigms remain constrained: agents often traverse long-horizon sequences of memory operations before receiving sparse and delayed rewards, which hinders truly end-to-end optimization of memory management policies. To address this limitation, we introduce Mem-T, an autonomous memory agent that interfaces with a lightweight hierarchical memory database to perform dynamic updates and multi-turn retrieval over streaming inputs. To effectively train long-horizon memory management capabilities, we further propose MoT-GRPO, a tree-guided reinforcement learning framework that transforms sparse terminal feedback into dense, step-wise supervision via memory operation tree backpropagation and hindsight credit assignment, thereby enabling the joint optimization of memory construction and retrieval. Extensive experiments demonstrate that Mem-T is (1) high-performing, surpassing frameworks such as A-Mem and Mem0 by up to $14.92\%$, and (2) economical, operating on a favorable accuracy-efficiency Pareto frontier and reducing inference tokens per query by $\sim24.45\%$ relative to GAM without sacrificing performance.
https://arxiv.org/abs/2601.23014
Academic Papers
svg
243573adcb34f9ef110dca9fe3f8684ceae6e0162e75bb2f965e447192120a03
2026-02-02T00:00:00-05:00
Integrating Multi-Label Classification and Generative AI for Scalable Analysis of User Feedback
arXiv:2601.23018v1 Announce Type: new Abstract: In highly competitive software markets, user experience (UX) evaluation is crucial for ensuring software quality and fostering long-term product success. Such UX evaluations typically combine quantitative metrics from standardized questionnaires with qualitative feedback collected through open-ended questions. While open-ended feedback offers valuable insights for improvement and helps explain quantitative results, analyzing large volumes of user comments is challenging and time-consuming. In this paper, we present techniques developed during a long-term UX measurement project at a major software company to efficiently process and interpret extensive volumes of user comments. To provide a high-level overview of the collected comments, we employ a supervised machine learning approach that assigns meaningful, pre-defined topic labels to each comment. Additionally, we demonstrate how generative AI (GenAI) can be leveraged to create concise and informative summaries of user feedback, facilitating effective communication of findings to the organization and especially upper management. Finally, we investigate whether the sentiment expressed in user comments can serve as an indicator for overall product satisfaction. Our results show that sentiment analysis alone does not reliably reflect user satisfaction. Instead, product satisfaction needs to be assessed explicitly in surveys to measure the user's perception of the product.
https://arxiv.org/abs/2601.23018
Academic Papers
svg
d7bffc6ef85a824fae8213558943d3c5d627f1ba471ccf12e87913f7b5524a25
2026-02-02T00:00:00-05:00
Uncovering Hidden Inclusions of Vulnerable Dependencies in Real-World Java Projects
arXiv:2601.23020v1 Announce Type: new Abstract: Open-source software (OSS) dependencies are a dominant component of modern software code bases. Using proven and well-tested OSS components lets developers reduce development time and cost while improving quality. However, heavy reliance on open-source software also introduces significant security risks, including the incorporation of known vulnerabilities into the codebase. To mitigate these risks, metadata-based dependency scanners, which are lightweight and fast, and code-centric scanners, which enable the detection of modified dependencies hidden from metadata-based approaches, have been developed. In this paper, we present Unshade, a hybrid approach towards dependency scanning in Java that combines the efficiency of metadata-based scanning with the ability to detect modified dependencies of code-centric approaches. Unshade first augments a Java project's software bill of materials (SBOM) by identifying modified and hidden dependencies via a bytecode-based fingerprinting mechanism. This augmented SBOM is then passed to a metadata-based vulnerability scanner to identify known vulnerabilities in both declared and newly revealed dependencies. Leveraging Unshade's high scalability, we conducted a large-scale study of the 1,808 most popular open-source Java Maven projects on GitHub. The results show that nearly 50% of these projects contain at least one modified, hidden dependency associated with a known vulnerability. On average, each affected project includes more than eight such hidden vulnerable dependencies, all missed by traditional metadata-based scanners. Overall, Unshade identified 7,712 unique CVEs in hidden dependencies that would remain undetected when relying on metadata-based scanning alone.
https://arxiv.org/abs/2601.23020
Academic Papers
svg
75831f69c1cfb0a8ba8d637f122a3388be7ca30890397443fd876cbc889d61c4
2026-02-02T00:00:00-05:00
DimABSA: Building Multilingual and Multidomain Datasets for Dimensional Aspect-Based Sentiment Analysis
arXiv:2601.23022v1 Announce Type: new Abstract: Aspect-Based Sentiment Analysis (ABSA) focuses on extracting sentiment at a fine-grained aspect level and has been widely applied across real-world domains. However, existing ABSA research relies on coarse-grained categorical labels (e.g., positive, negative), which limits its ability to capture nuanced affective states. To address this limitation, we adopt a dimensional approach that represents sentiment with continuous valence-arousal (VA) scores, enabling fine-grained analysis at both the aspect and sentiment levels. To this end, we introduce DimABSA, the first multilingual, dimensional ABSA resource annotated with both traditional ABSA elements (aspect terms, aspect categories, and opinion terms) and newly introduced VA scores. This resource contains 76,958 aspect instances across 42,590 sentences, spanning six languages and four domains. We further introduce three subtasks that combine VA scores with different ABSA elements, providing a bridge from traditional ABSA to dimensional ABSA. Given that these subtasks involve both categorical and continuous outputs, we propose a new unified metric, continuous F1 (cF1), which incorporates VA prediction error into standard F1. We provide a comprehensive benchmark using both prompted and fine-tuned large language models across all subtasks. Our results show that DimABSA is a challenging benchmark and provides a foundation for advancing multilingual dimensional ABSA.
https://arxiv.org/abs/2601.23022
Academic Papers
svg
4c616c8531bd48e7905a9e04508f131b11cbc01351b55bf640dfdf0f3b52c283
2026-02-02T00:00:00-05:00
Causal Characterization of Measurement and Mechanistic Anomalies
arXiv:2601.23026v1 Announce Type: new Abstract: Root cause analysis of anomalies aims to identify those features that cause the deviation from the normal process. Existing methods ignore, however, that anomalies can arise through two fundamentally different processes: measurement errors, where data was generated normally but one or more values were recorded incorrectly, and mechanism shifts, where the causal process generating the data changed. While measurement errors can often be safely corrected, mechanistic anomalies require careful consideration. We define a causal model that explicitly captures both types by treating outliers as latent interventions on latent ("true") and observed ("measured") variables. We show that they are identifiable, and propose a maximum likelihood estimation approach to put this to practice. Experiments show that our method matches state-of-the-art performance in root cause localization, while it additionally enables accurate classification of anomaly types, and remains robust even when the causal DAG is unknown.
https://arxiv.org/abs/2601.23026
Academic Papers
svg
ca4ad2708770aa3c7c2cb5bffb940117a5378697cae0acf40d0278a59b7ec1b7
2026-02-02T00:00:00-05:00
Divide-and-Conquer CoT: RL for Reducing Latency via Parallel Reasoning
arXiv:2601.23027v1 Announce Type: new Abstract: Long chain-of-thought reasoning (Long CoT) is now fundamental to state-of-the-art LLMs, especially in mathematical reasoning. However, LLM generation is highly sequential, and long CoTs lead to a high latency. We propose to train Divide-and-Conquer CoT (DC-CoT) to reduce the latency. With DC-CoT, the model can act as a director that identifies distinct subtasks that can be performed in parallel in its reasoning process, and then spawns workers to execute the subtasks. Our goal is to achieve high accuracy, with a low longest path length, which is a theoretical measure of the latency needed for the response. We start with a long CoT base model (DeepScaleR-1.5B-Preview), and first use SFT with a small curated demonstration set to initialize its ability to spawn workers in a certain format. Because SFT degrades the accuracy significantly, we design a multi-stage RL algorithm, with various data filtering strategies, to recover the accuracy while decreasing the longest path length. Across several benchmarks including AIME 2024 and HMMT 2025, DC-CoT achieves similar accuracy as DeepScaleR-1.5B-Preview while decreasing longest path length by 35-40%. Our code, SFT dataset and models are publicly available at https://github.com/amahankali10/DC_CoT_RL_for_Low_Latency_CoT_with_Parallel_Reasoning.
https://arxiv.org/abs/2601.23027
Academic Papers
svg
338b14f4db699b0ec3b68b03036789c2ddfd8b658b68515754e3653904585cbd
2026-02-02T00:00:00-05:00
Guided by Trajectories: Repairing and Rewarding Tool-Use Trajectories for Tool-Integrated Reasoning
arXiv:2601.23032v1 Announce Type: new Abstract: Tool-Integrated Reasoning (TIR) enables large language models (LLMs) to solve complex tasks by interacting with external tools, yet existing approaches depend on high-quality synthesized trajectories selected by scoring functions and sparse outcome-based rewards, providing limited and biased supervision for learning TIR. To address these challenges, in this paper, we propose AutoTraj, a two-stage framework that automatically learns TIR by repairing and rewarding tool-use trajectories. Specifically, in the supervised fine-tuning (SFT) stage, AutoTraj generates multiple candidate tool-use trajectories for each query and evaluates them along multiple dimensions. High-quality trajectories are directly retained, while low-quality ones are repaired using an LLM (i.e., LLM-as-Repairer). The resulting repaired and high-quality trajectories form a synthetic SFT dataset, while each repaired trajectory paired with its original low-quality counterpart constitutes a dataset for trajectory preference modeling. In the reinforcement learning (RL) stage, based on the preference dataset, we train a trajectory-level reward model to assess the quality of reasoning paths and combine it with outcome and format rewards, thereby explicitly guiding the optimization toward reliable TIR behaviors. Experiments on real-world benchmarks demonstrate the effectiveness of AutoTraj in TIR.
https://arxiv.org/abs/2601.23032
Academic Papers
svg
69db85e52fd46429fcab76625e30884f73f49e87eb666de3087268ba9e180026
2026-02-02T00:00:00-05:00
MOSAIC: Modular Scalable Autonomy for Intelligent Coordination of Heterogeneous Robotic Teams
arXiv:2601.23038v1 Announce Type: new Abstract: Mobile robots have become indispensable for exploring hostile environments, such as in space or disaster relief scenarios, but often remain limited to teleoperation by a human operator. This restricts the deployment scale and requires near-continuous low-latency communication between the operator and the robot. We present MOSAIC: a scalable autonomy framework for multi-robot scientific exploration using a unified mission abstraction based on Points of Interest (POIs) and multiple layers of autonomy, enabling supervision by a single operator. The framework dynamically allocates exploration and measurement tasks based on each robot's capabilities, leveraging team-level redundancy and specialization to enable continuous operation. We validated the framework in a space-analog field experiment emulating a lunar prospecting scenario, involving a heterogeneous team of five robots and a single operator. Despite the complete failure of one robot during the mission, the team completed 82.3% of assigned tasks at an Autonomy Ratio of 86%, while the operator workload remained at only 78.2%. These results demonstrate that the proposed framework enables robust, scalable multi-robot scientific exploration with limited operator intervention. We further derive practical lessons learned in robot interoperability, networking architecture, team composition, and operator workload management to inform future multi-robot exploration missions.
https://arxiv.org/abs/2601.23038
Academic Papers
svg
d765130c35e8f05a9d95b6a8d328f9a8cff4f26299943b2d2e5028b9b80ef521
2026-02-02T00:00:00-05:00
Avoiding Premature Collapse: Adaptive Annealing for Entropy-Regularized Structural Inference
arXiv:2601.23039v1 Announce Type: new Abstract: Differentiable matching layers, often implemented via entropy-regularized Optimal Transport, serve as a critical approximate inference mechanism in structural prediction. However, recovering discrete permutations via annealing $\epsilon \to 0$ is notoriously unstable. We identify a fundamental mechanism for this failure: \textbf{Premature Mode Collapse}. By analyzing the non-normal dynamics of the Sinkhorn fixed-point map, we reveal a theoretical \textbf{thermodynamic speed limit}. Under standard exponential cooling, the shift in the target posterior ($O(1)$) outpaces the contraction rate of the inference operator, which degrades as $O(1/\epsilon)$. This mismatch inevitably forces the inference trajectory into spurious local basins. To address this, we propose \textbf{Efficient PH-ASC}, an adaptive scheduling algorithm that monitors the stability of the inference process. By enforcing a linear stability law, we decouple expensive spectral diagnostics from the training loop, reducing overhead from $O(N^3)$ to amortized $O(1)$. Our implementation and interactive demo are available at https://github.com/xxx0438/torch-sinkhorn-asc and https://huggingface.co/spaces/leon0923/torch-sinkhorn-asc-demo.
https://arxiv.org/abs/2601.23039
Academic Papers
svg
da7233765240e84fad283ba621392ba70e835c803e6340b761df617e323f7911
2026-02-02T00:00:00-05:00
One-shot Optimized Steering Vector for Hallucination Mitigation for VLMs
arXiv:2601.23041v1 Announce Type: new Abstract: Vision Language Models (VLMs) achieve strong performance on multimodal tasks but still suffer from hallucination and safety-related failures that persist even at scale. Steering offers a lightweight technique to improve model performance. However, existing steering approaches, whether input-dependent or input-independent, struggle to achieve a meaningful trade-off between efficiency and effectiveness. In this work, we observe that steering vectors can generalize across inputs when tasks share aligned semantic intent. Based on this insight, we propose \textbf{OSGA} (\textbf{O}ne-shot \textbf{S}teering with \textbf{G}enerative \textbf{A}nchor), an input-independent framework that improves model performance with a single optimization instance. OSGA first selects an informative sample via a variance-based data selection strategy and learns a single steering vector with a contrastive objective with generative anchor regularization. The resulting vector can be universally applied at a certain layer during inference time without modifying model parameters. Experiments across multiple benchmarks show that a single OSGA-optimized steering vector consistently improves hallucination mitigation and safety enhancement with negligible overhead, highlighting one-shot steering as a practical and scalable solution for reliable VLMs.
https://arxiv.org/abs/2601.23041
Academic Papers
svg
5849a56ab769be5d532a6ed24858b6639330afda7a3584e28a47a55dfa33219d
2026-02-02T00:00:00-05:00
The Hot Mess of AI: How Does Misalignment Scale With Model Intelligence and Task Complexity?
arXiv:2601.23045v1 Announce Type: new Abstract: As AI becomes more capable, we entrust it with more general and consequential tasks. The risks from failure grow more severe with increasing task scope. It is therefore important to understand how extremely capable AI models will fail: Will they fail by systematically pursuing goals we do not intend? Or will they fail by being a hot mess, and taking nonsensical actions that do not further any goal? We operationalize this question using a bias-variance decomposition of the errors made by AI models: An AI's \emph{incoherence} on a task is measured over test-time randomness as the fraction of its error that stems from variance rather than bias in task outcome. Across all tasks and frontier models we measure, the longer models spend reasoning and taking actions, \emph{the more incoherent} their failures become. Incoherence changes with model scale in a way that is experiment dependent. However, in several settings, larger, more capable models are more incoherent than smaller models. Consequently, scale alone seems unlikely to eliminate incoherence. Instead, as more capable AIs pursue harder tasks, requiring more sequential action and thought, our results predict failures to be accompanied by more incoherent behavior. This suggests a future where AIs sometimes cause industrial accidents (due to unpredictable misbehavior), but are less likely to exhibit consistent pursuit of a misaligned goal. This increases the relative importance of alignment research targeting reward hacking or goal misspecification.
https://arxiv.org/abs/2601.23045
Academic Papers
svg
c2a2328c801ac453607dc2ba5b5d864e567e523f2747aa7a0c39ad22bce60c33
2026-02-02T00:00:00-05:00
From Abstract to Contextual: What LLMs Still Cannot Do in Mathematics
arXiv:2601.23048v1 Announce Type: new Abstract: Large language models now solve many benchmark math problems at near-expert levels, yet this progress has not fully translated into reliable performance in real-world applications. We study this gap through contextual mathematical reasoning, where the mathematical core must be formulated from descriptive scenarios. We introduce ContextMATH, a benchmark that repurposes AIME and MATH-500 problems into two contextual settings: Scenario Grounding (SG), which embeds abstract problems into realistic narratives without increasing reasoning complexity, and Complexity Scaling (CS), which transforms explicit conditions into sub-problems to capture how constraints often appear in practice. Evaluating 61 proprietary and open-source models, we observe sharp drops: on average, open-source models decline by 13 and 34 points on SG and CS, while proprietary models drop by 13 and 20. Error analysis shows that errors are dominated by incorrect problem formulation, with formulation accuracy declining as original problem difficulty increases. Correct formulation emerges as a prerequisite for success, and its sufficiency improves with model scale, indicating that larger models advance in both understanding and reasoning. Nevertheless, formulation and reasoning remain two complementary bottlenecks that limit contextual mathematical problem solving. Finally, we find that fine-tuning with scenario data improves performance, whereas formulation-only training is ineffective. However, performance gaps are only partially alleviated, highlighting contextual mathematical reasoning as a central unsolved challenge for LLMs.
https://arxiv.org/abs/2601.23048
Academic Papers
svg