Schema: id (string, 64 chars), published (string, 19-25 chars), title (string, 7-262 chars), description (string, 6-54.4k chars), link (string, 31-227 chars), category (6 classes), image (string, 3-247 chars)
6cc385118e1410e7702330233a4201f8baa8dac6e095d62cf67fa48f5f74b6c5
2026-02-02T00:00:00-05:00
Capacity of Two-User Wireless Systems Aided by Movable Signals
arXiv:2601.22358v1 Announce Type: new Abstract: Movable signals have emerged as a third approach to enable smart radio environments (SREs), complementing reconfigurable intelligent surfaces (RISs) and flexible antennas. This paper investigates their potential to enhance multi-user wireless systems. Focusing on two-user systems, we characterize the capacity regions of the multiple access channel (MAC) and broadcast channel (BC). Interestingly, movable signals can dynamically adjust the operating frequency to orthogonalize the user channels, thereby significantly expanding the capacity regions. We also study frequency optimization, constraining it to a limited frequency range, and show that movable signals provide up to a 45% sum rate gain over fixed signals.
https://arxiv.org/abs/2601.22358
Academic Papers
svg
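The orthogonalization argument in the abstract above can be illustrated with a toy two-user Gaussian model. All power values here are hypothetical and `c` is just the Shannon rate formula; this is an illustrative sketch, not the paper's system model or its 45% result.

```python
import math

def c(snr):
    """Shannon rate (bit/s/Hz) at a given SNR."""
    return math.log2(1 + snr)

# Toy two-user Gaussian MAC with unit noise power (illustrative numbers).
p1, p2 = 4.0, 4.0

# Shared channel: rates are limited by the pentagon sum constraint
# R1 + R2 <= c(p1 + p2).
sum_shared = c(p1 + p2)

# Orthogonalized channels (e.g., users moved to non-overlapping frequencies):
# each user sees an interference-free link, so R1 <= c(p1) and R2 <= c(p2)
# can hold jointly.
sum_orthogonal = c(p1) + c(p2)

print(f"shared sum rate:     {sum_shared:.3f} bit/s/Hz")
print(f"orthogonal sum rate: {sum_orthogonal:.3f} bit/s/Hz")
print(f"gain: {100 * (sum_orthogonal / sum_shared - 1):.1f}%")
```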
dabdc7a3b27658250c7c67310bc11fb77de477ab0d34c57477430d14c15821b9
2026-02-02T00:00:00-05:00
The Unseen Threat: Residual Knowledge in Machine Unlearning under Perturbed Samples
arXiv:2601.22359v1 Announce Type: new Abstract: Machine unlearning offers a practical alternative to avoid full model re-training by approximately removing the influence of specific user data. While existing methods certify unlearning via statistical indistinguishability from re-trained models, these guarantees do not naturally extend to model outputs when inputs are adversarially perturbed. In particular, slight perturbations of forget samples may still be correctly recognized by the unlearned model - even when a re-trained model fails to do so - revealing a novel privacy risk: information about the forget samples may persist in their local neighborhood. In this work, we formalize this vulnerability as residual knowledge and show that it is inevitable in high-dimensional settings. To mitigate this risk, we propose a fine-tuning strategy, named RURK, that penalizes the model's ability to re-recognize perturbed forget samples. Experiments on vision benchmarks with deep neural networks demonstrate that residual knowledge is prevalent across existing unlearning methods and that our approach effectively prevents residual knowledge.
https://arxiv.org/abs/2601.22359
Academic Papers
svg
0ac1baaa260e7617d5a881d1d6119a1222a05c4a62fa524fda73e2efefdaa967
2026-02-02T00:00:00-05:00
MERMAID: Memory-Enhanced Retrieval and Reasoning with Multi-Agent Iterative Knowledge Grounding for Veracity Assessment
arXiv:2601.22361v1 Announce Type: new Abstract: Assessing the veracity of online content has become increasingly critical. Large language models (LLMs) have recently enabled substantial progress in automated veracity assessment, including automated fact-checking and claim verification systems. Typical veracity assessment pipelines break down complex claims into sub-claims, retrieve external evidence, and then apply LLM reasoning to assess veracity. However, existing methods often treat evidence retrieval as a static, isolated step and do not effectively manage or reuse retrieved evidence across claims. In this work, we propose MERMAID, a memory-enhanced multi-agent veracity assessment framework that tightly couples the retrieval and reasoning processes. MERMAID integrates agent-driven search, structured knowledge representations, and a persistent memory module within a Reason-Action style iterative process, enabling dynamic evidence acquisition and cross-claim evidence reuse. By retaining retrieved evidence in an evidence memory, the framework reduces redundant searches and improves verification efficiency and consistency. We evaluate MERMAID on three fact-checking benchmarks and two claim-verification datasets using multiple LLMs, including GPT, LLaMA, and Qwen families. Experimental results show that MERMAID achieves state-of-the-art performance while improving the search efficiency, demonstrating the effectiveness of synergizing retrieval, reasoning, and memory for reliable veracity assessment.
https://arxiv.org/abs/2601.22361
Academic Papers
svg
aafb278c93428b105efc7bc2e36ccd673a253c7fd399893b400a05bbba2f5287
2026-02-02T00:00:00-05:00
Understanding Efficiency: Quantization, Batching, and Serving Strategies in LLM Energy Use
arXiv:2601.22362v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly deployed in production, shifting the computational and energy burden from training to inference. While prior work has examined the energy cost of inference per prompt or per token, we highlight how \emph{system-level design choices} - such as numerical precision, batching strategy, and request scheduling - can lead to orders-of-magnitude differences in energy consumption for the same model. We perform a detailed empirical study of LLM inference energy and latency on NVIDIA H100 GPUs, analyzing the impact of quantization, batch size, and serving configuration (e.g., with Hugging Face's Text Generation Inference server). Our results reveal that lower-precision formats yield energy gains only in compute-bound regimes; that batching improves energy efficiency, especially in memory-bound phases like decoding; and that structured request timing (arrival shaping) can reduce per-request energy by up to 100 times. We argue that sustainable LLM deployment depends not only on model internals, but also on the orchestration of the serving stack. Our findings motivate phase-aware energy profiling and system-level optimizations for greener AI services.
https://arxiv.org/abs/2601.22362
Academic Papers
svg
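The batching effect described in the abstract above comes down to amortizing the GPU's power draw over concurrent requests. A minimal per-request energy accounting, with made-up numbers rather than the paper's H100 measurements:

```python
def energy_per_request(avg_power_w, batch_latency_s, batch_size):
    """Joules attributed to one request when a batch is served together
    (simplified accounting; the paper's methodology is more detailed)."""
    return avg_power_w * batch_latency_s / batch_size

# Illustrative numbers only (not measured H100 figures): batching raises
# power draw and latency somewhat, but divides energy across 16 requests.
single = energy_per_request(avg_power_w=400.0, batch_latency_s=1.0, batch_size=1)
batched = energy_per_request(avg_power_w=550.0, batch_latency_s=1.4, batch_size=16)
print(f"batch=1:  {single:.1f} J/request")
print(f"batch=16: {batched:.1f} J/request")
```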
7be097148ade36649dc7fd301a088b4b0b8a2f0ab9ec5f82b40b3e131d8e89d3
2026-02-02T00:00:00-05:00
Context Structure Reshapes the Representational Geometry of Language Models
arXiv:2601.22364v1 Announce Type: new Abstract: Large Language Models (LLMs) have been shown to organize the representations of input sequences into straighter neural trajectories in their deep layers, which has been hypothesized to facilitate next-token prediction via linear extrapolation. Language models can also adapt to diverse tasks and learn new structure in context, and recent work has shown that this in-context learning (ICL) can be reflected in representational changes. Here we bring these two lines of research together to explore whether representation straightening occurs \emph{within} a context during ICL. We measure representational straightening in Gemma 2 models across a diverse set of in-context tasks, and uncover a dichotomy in how LLMs' representations change in context. In continual prediction settings (e.g., natural language, grid world traversal tasks) we observe that increasing context increases the straightness of neural sequence trajectories, which is correlated with improvement in model prediction. Conversely, in structured prediction settings (e.g., few-shot tasks), straightening is inconsistent -- it is only present in phases of the task with explicit structure (e.g., repeating a template), but vanishes elsewhere. These results suggest that ICL is not a monolithic process. Instead, we propose that LLMs function like a Swiss Army knife: depending on task structure, the LLM dynamically selects between strategies, only some of which yield representational straightening.
https://arxiv.org/abs/2601.22364
Academic Papers
svg
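Trajectory straightening of the kind discussed above is typically quantified via the angle between consecutive difference vectors of the hidden-state sequence. A minimal sketch of such a curvature measure (the paper may define straightness differently):

```python
import numpy as np

def mean_curvature(traj):
    """Mean angle (radians) between consecutive difference vectors of a
    trajectory; 0 for a perfectly straight trajectory. A common
    straightness proxy, not necessarily the paper's exact definition."""
    diffs = np.diff(traj, axis=0)
    diffs = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    cos = np.clip(np.sum(diffs[:-1] * diffs[1:], axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

t = np.linspace(0, 1, 10)[:, None]
straight = np.hstack([t, 2 * t])                    # points on a line
curved = np.hstack([np.cos(t * 3), np.sin(t * 3)])  # points on an arc
print(mean_curvature(straight))                         # ~0.0
print(mean_curvature(curved) > mean_curvature(straight))  # True
```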
e565097f43db4f8ad7304e7b216ba6996777af1db727736a93912c7efd555e48
2026-02-02T00:00:00-05:00
Towards Solving the Gilbert-Pollak Conjecture via Large Language Models
arXiv:2601.22365v1 Announce Type: new Abstract: The Gilbert-Pollak Conjecture \citep{gilbert1968steiner}, also known as the Steiner Ratio Conjecture, states that for any finite point set in the Euclidean plane, the Steiner minimum tree has length at least $\sqrt{3}/2 \approx 0.866$ times that of the Euclidean minimum spanning tree (the Steiner ratio). A sequence of improvements through the 1980s culminated in a lower bound of $0.824$, with no substantial progress reported over the past three decades. Recent advances in LLMs have demonstrated strong performance on contest-level mathematical problems, yet their potential for addressing open, research-level questions remains largely unexplored. In this work, we present a novel AI system for obtaining tighter lower bounds on the Steiner ratio. Rather than directly prompting LLMs to solve the conjecture, we task them with generating rule-constrained geometric lemmas implemented as executable code. These lemmas are then used to construct a collection of specialized functions, which we call verification functions, that yield theoretically certified lower bounds of the Steiner ratio. Through progressive lemma refinement driven by reflection, the system establishes a new certified lower bound of 0.8559 for the Steiner ratio. The entire research effort involves only thousands of LLM calls, demonstrating the strong potential of LLM-based systems for advanced mathematical research.
https://arxiv.org/abs/2601.22365
Academic Papers
svg
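For context, the conjectured bound sqrt(3)/2 mentioned above is attained by the unit equilateral triangle, the classical tight case; checking it numerically:

```python
import math

# Classical tight example for the Gilbert-Pollak conjecture: the unit
# equilateral triangle. The minimum spanning tree uses two sides (length 2),
# while the Steiner tree through the Fermat point (here the centroid, at
# distance 1/sqrt(3) from each vertex) has total length sqrt(3).
mst = 2.0
smt = 3 * (1 / math.sqrt(3))
ratio = smt / mst
print(ratio)           # 0.866... = sqrt(3)/2
print(ratio > 0.8559)  # above the paper's new certified lower bound
```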
8ed2efc6e4bb640c7c8aee23935c0c2573102f73dcd515ade7c5fd9864cedf65
2026-02-02T00:00:00-05:00
Learning Provably Correct Distributed Protocols Without Human Knowledge
arXiv:2601.22369v1 Announce Type: new Abstract: Provably correct distributed protocols, which are a critical component of modern distributed systems, are highly challenging to design and have often required decades of human effort. These protocols allow multiple agents to coordinate to come to a common agreement in an environment with uncertainty and failures. We formulate protocol design as a search problem over strategies in a game with imperfect information, and the desired correctness conditions are specified in Satisfiability Modulo Theories (SMT). However, standard methods for solving multi-agent games fail to learn correct protocols in this setting, even when the number of agents is small. We propose a learning framework, GGMS, which integrates a specialized variant of Monte Carlo Tree Search with a transformer-based action encoder, a global depth-first search to break out of local minima, and repeated feedback from a model checker. Protocols output by GGMS are verified correct via exhaustive model checking for all executions within the bounded setting. We further prove that, under mild assumptions, the search process is complete: if a correct protocol exists, GGMS will eventually find it. In experiments, we show that GGMS can learn correct protocols for larger settings than existing methods.
https://arxiv.org/abs/2601.22369
Academic Papers
svg
3ffc401941c987cd4b15c451c946ba1a61f7e515134c459c128bde5433d0aca4
2026-02-02T00:00:00-05:00
FIRE: Multi-fidelity Regression with Distribution-conditioned In-context Learning using Tabular Foundation Models
arXiv:2601.22371v1 Announce Type: new Abstract: Multi-fidelity (MF) regression often operates in regimes of extreme data imbalance, where the commonly-used Gaussian-process (GP) surrogates struggle with cubic scaling costs and overfit to sparse high-fidelity observations, limiting efficiency and generalization in real-world applications. We introduce FIRE, a training-free MF framework that couples tabular foundation models (TFMs) to perform zero-shot in-context Bayesian inference via a high-fidelity correction model conditioned on the low-fidelity model's posterior predictive distributions. This cross-fidelity information transfer via distributional summaries captures heteroscedastic errors, enabling robust residual learning without model retraining. Across 31 benchmark problems spanning synthetic and real-world tasks (e.g., DrivAerNet, LCBench), FIRE delivers a stronger performance-time trade-off than seven state-of-the-art GP-based or deep learning MF regression methods, ranking highest in accuracy and uncertainty quantification with runtime advantages. Limitations include context window constraints and dependence on the quality of the pre-trained TFMs.
https://arxiv.org/abs/2601.22371
Academic Papers
svg
c15272ba0173583f0003e68952c835133bfa74d10eeb7ce3d2f42493575b8269
2026-02-02T00:00:00-05:00
Stability-Aware Prompt Optimization for Clinical Data Abstraction
arXiv:2601.22373v1 Announce Type: new Abstract: Large language models used for clinical abstraction are sensitive to prompt wording, yet most work treats prompts as fixed and studies uncertainty in isolation. We argue these should be treated jointly. Across two clinical tasks (MedAlign applicability/correctness and MS subtype abstraction) and multiple open and proprietary models, we measure prompt sensitivity via flip rates and relate it to calibration and selective prediction. We find that higher accuracy does not guarantee prompt stability, and that models can appear well-calibrated yet remain fragile to paraphrases. We propose a dual-objective prompt optimization loop that jointly targets accuracy and stability, showing that explicitly including a stability term reduces flip rates across tasks and models, sometimes at modest accuracy cost. Our results suggest prompt sensitivity should be an explicit objective when validating clinical LLM systems.
https://arxiv.org/abs/2601.22373
Academic Papers
svg
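The flip-rate measure of prompt sensitivity used above can be computed straightforwardly once the same items have been labeled under several paraphrases. This is one plausible definition (fraction of items whose label changes under any paraphrase); the paper may use a pairwise variant.

```python
def flip_rate(predictions_by_prompt):
    """Fraction of items whose predicted label changes under at least one
    paraphrase. `predictions_by_prompt`: list of per-prompt prediction
    lists, aligned by item index. (Assumed definition for illustration.)"""
    n_items = len(predictions_by_prompt[0])
    flips = 0
    for i in range(n_items):
        labels = {preds[i] for preds in predictions_by_prompt}
        if len(labels) > 1:
            flips += 1
    return flips / n_items

# Three paraphrases of the same prompt over four items:
preds = [
    ["yes", "no", "yes", "yes"],
    ["yes", "no", "no",  "yes"],
    ["yes", "no", "yes", "yes"],
]
print(flip_rate(preds))  # 0.25: only the third item flips
```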
f57fb429ab187e619b9222f4fa7b703a3ee5f505c25308b07985f093024f1978
2026-02-02T00:00:00-05:00
FlexMap: Generalized HD Map Construction from Flexible Camera Configurations
arXiv:2601.22376v1 Announce Type: new Abstract: High-definition (HD) maps provide essential semantic information of road structures for autonomous driving systems, yet current HD map construction methods require calibrated multi-camera setups and either implicit or explicit 2D-to-BEV transformations, making them fragile when sensors fail or camera configurations vary across vehicle fleets. We introduce FlexMap. Unlike prior methods that are fixed to a specific N-camera rig, our approach adapts to variable camera configurations without any architectural changes or per-configuration retraining. Our key innovation eliminates explicit geometric projections by using a geometry-aware foundation model with cross-frame attention to implicitly encode 3D scene understanding in feature space. FlexMap features two core components: a spatial-temporal enhancement module that separates cross-view spatial reasoning from temporal dynamics, and a camera-aware decoder with latent camera tokens, enabling view-adaptive attention without the need for projection matrices. Experiments demonstrate that FlexMap outperforms existing methods across multiple configurations while maintaining robustness to missing views and sensor variations, enabling more practical real-world deployment.
https://arxiv.org/abs/2601.22376
Academic Papers
svg
6355b80b7ebaf34a4de680cae5155a67432234b16d06a01becc6f1d1a5948e84
2026-02-02T00:00:00-05:00
SPLA: Block Sparse Plus Linear Attention for Long Context Modeling
arXiv:2601.22379v1 Announce Type: new Abstract: Block-wise sparse attention offers significant efficiency gains for long-context modeling, yet existing methods often suffer from low selection fidelity and cumulative contextual loss by completely discarding unselected blocks. To address these limitations, we introduce Sparse Plus Linear Attention (SPLA), a framework that utilizes a selection metric derived from second-order Taylor expansions to accurately identify relevant blocks for exact attention. Instead of discarding the remaining "long tail," SPLA compresses unselected blocks into a compact recurrent state via a residual linear attention (RLA) module. Crucially, to avoid IO overhead, we derive an optimized subtraction-based formulation for RLA -- calculating the residual as the difference between global and selected linear attention -- ensuring that unselected blocks are never explicitly accessed during inference. Our experiments demonstrate that SPLA closes the performance gap in continual pretraining, surpassing dense attention models on long-context benchmarks like RULER while maintaining competitive general knowledge and reasoning capabilities.
https://arxiv.org/abs/2601.22379
Academic Papers
svg
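The subtraction-based residual described in the SPLA abstract follows from linearity of the linear-attention state: the sum over all blocks minus the sum over selected blocks equals the sum over unselected blocks, so those blocks never need to be read back. A numpy sketch (the feature map `phi` here is a placeholder, not SPLA's actual choice):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
phi = lambda x: np.maximum(x, 0.0) + 1e-6  # simple positive feature map (assumed)

K = rng.normal(size=(8, d))   # 8 keys split into two blocks of 4
V = rng.normal(size=(8, d))
q = rng.normal(size=(d,))
selected = slice(0, 4)        # block chosen by the selection metric

# Linear-attention state: S = sum_i phi(k_i) v_i^T (unnormalized numerator).
state = lambda ks, vs: phi(ks).T @ vs
S_global = state(K, V)
S_selected = state(K[selected], V[selected])

# Subtraction-based residual: unselected blocks are never accessed directly.
S_residual = S_global - S_selected
out = phi(q) @ S_residual     # attention numerator for the residual context

direct = state(K[4:], V[4:])
print(np.allclose(S_residual, direct))  # True: equals the unselected-block state
```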
a1a7d798d0f7dcb88788526bda6c0c10485db1310c607274f4f9afd435d0f5a0
2026-02-02T00:00:00-05:00
Lantern: A Minimalist Robotic Object Platform
arXiv:2601.22381v1 Announce Type: new Abstract: Robotic objects are simple actuated systems that subtly blend into human environments. We design and introduce Lantern, a minimalist robotic object platform to enable building simple robotic artifacts. We conducted in-depth design and engineering iterations of Lantern's mechatronic architecture to meet specific design goals while maintaining a low build cost (~40 USD). As an extendable, open-source platform, Lantern aims to enable exploration of a range of HRI scenarios by leveraging human tendency to assign social meaning to simple forms. To evaluate Lantern's potential for HRI, we conducted a series of explorations: 1) a co-design workshop, 2) a sensory room case study, 3) distribution to external HRI labs, 4) integration into a graduate-level HRI course, and 5) public exhibitions with older adults and children. Our findings show that Lantern effectively evokes engagement, can support versatile applications ranging from emotion regulation to focused work, and serves as a viable platform for lowering barriers to HRI as a field.
https://arxiv.org/abs/2601.22381
Academic Papers
svg
b462b3c6186cef7ddf3a95c21ac67e17252dcd26d864e50e6213f7434b96fdb5
2026-02-02T00:00:00-05:00
Purely Agentic Black-Box Optimization for Biological Design
arXiv:2601.22382v1 Announce Type: new Abstract: Many key challenges in biological design, such as small-molecule drug discovery, antimicrobial peptide development, and protein engineering, can be framed as black-box optimization over vast, complex structured spaces. Existing methods rely mainly on raw structural data and struggle to exploit the rich scientific literature. While large language models (LLMs) have been added to these pipelines, they have been confined to narrow roles within structure-centered optimizers. We instead cast biological black-box optimization as a fully agentic, language-based reasoning process. We introduce Purely Agentic BLack-box Optimization (PABLO), a hierarchical agentic system that uses scientific LLMs pretrained on chemistry and biology literature to generate and iteratively refine biological candidates. On both the standard GuacaMol molecular design and antimicrobial peptide optimization tasks, PABLO achieves state-of-the-art performance, substantially improving sample efficiency and final objective values over established baselines. Compared to prior optimization methods that incorporate LLMs, PABLO achieves competitive token usage per run despite relying on LLMs throughout the optimization loop. Beyond raw performance, the agentic formulation offers key advantages for realistic design: it naturally incorporates semantic task descriptions, retrieval-augmented domain knowledge, and complex constraints. In follow-up in vitro validation, PABLO-optimized peptides showed strong activity against drug-resistant pathogens, underscoring the practical potential of PABLO for therapeutic discovery.
https://arxiv.org/abs/2601.22382
Academic Papers
svg
2c24332432eee5485640815edd8e4ee0756e845f603813c053f44e99c233180e
2026-02-02T00:00:00-05:00
Graph is a Substrate Across Data Modalities
arXiv:2601.22384v1 Announce Type: new Abstract: Graphs provide a natural representation of relational structure that arises across diverse domains. Despite this ubiquity, graph structure is typically learned in a modality- and task-isolated manner, where graph representations are constructed within individual task contexts and discarded thereafter. As a result, structural regularities across modalities and tasks are repeatedly reconstructed rather than accumulated at the level of intermediate graph representations. This motivates a representation-learning question: how should graph structure be organized so that it can persist and accumulate across heterogeneous modalities and tasks? We adopt a representation-centric perspective in which graph structure is treated as a structural substrate that persists across learning contexts. To instantiate this perspective, we propose G-Substrate, a graph substrate framework that organizes learning around shared graph structures. G-Substrate comprises two complementary mechanisms: a unified structural schema that ensures compatibility among graph representations across heterogeneous modalities and tasks, and an interleaved role-based training strategy that exposes the same graph structure to multiple functional roles during learning. Experiments across multiple domains, modalities, and tasks show that G-Substrate outperforms task-isolated and naive multi-task learning methods.
https://arxiv.org/abs/2601.22384
Academic Papers
svg
1290ede9d2c91feaea87f023b551ee7bab0dfbf2b943f453f769b5f954b3de4c
2026-02-02T00:00:00-05:00
SP^2DPO: An LLM-assisted Semantic Per-Pair DPO Generalization
arXiv:2601.22385v1 Announce Type: new Abstract: Direct Preference Optimization (DPO) controls the trade-off between fitting preference labels and staying close to a reference model using a single global temperature beta, implicitly treating all preference pairs as equally informative. Real-world preference corpora are heterogeneous: they mix high-signal, objective failures (for example, safety, factuality, instruction violations) with low-signal or subjective distinctions (for example, style), and also include label noise. We introduce our method, SP2DPO (Semantic Per-Pair DPO), a generalization that replaces the global temperature with an instance-specific schedule beta_i pre-decided offline from structured semantic-gap annotations (category, magnitude, confidence) produced by teacher language models. We instantiate this procedure on the UltraFeedback preference corpus (59,960 pairs), enabling large-scale construction of an auditable beta_i artifact, and incur zero training-time overhead: the inner-loop optimizer remains standard DPO with beta set per pair. We focus our empirical study on AlpacaEval 2.0, reporting both raw win rate and length-controlled win rate. Across four open-weight, instruction-tuned student backbones (4B-8B), SP2DPO is competitive with a tuned global-beta DPO baseline and improves AlpacaEval 2.0 length-controlled win rate on two of four backbones, while avoiding per-model beta sweeps. All code, annotations, and artifacts will be released.
https://arxiv.org/abs/2601.22385
Academic Papers
svg
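The per-pair temperature generalization described above amounts to broadcasting beta inside the standard DPO objective. A sketch with made-up log-ratios (`log1p(exp(-z))` is the numerically safer form of `-log sigmoid(z)` for these small toy values):

```python
import numpy as np

def dpo_loss(logr_chosen, logr_rejected, beta):
    """DPO loss -log sigmoid(beta * margin), where the margin is the
    difference of log-ratios log pi_theta(y|x) - log pi_ref(y|x).
    `beta` may be a scalar (standard DPO) or a per-pair array
    (the SP2DPO-style generalization)."""
    margin = np.asarray(logr_chosen) - np.asarray(logr_rejected)
    z = np.asarray(beta) * margin
    return float(np.mean(np.log1p(np.exp(-z))))

# Hypothetical log-ratios for three preference pairs:
logr_c = [0.5, 0.2, 1.0]
logr_r = [-0.1, 0.1, 0.4]
global_beta = 0.1
per_pair_beta = [0.3, 0.05, 0.2]  # e.g., larger for high-signal safety pairs
print(dpo_loss(logr_c, logr_r, global_beta))
print(dpo_loss(logr_c, logr_r, per_pair_beta))
```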
8065c0c4dd411ec2e189778d6073acb8f673b5136e827246aed3cfa105fb34da
2026-02-02T00:00:00-05:00
Specialists or Generalists? Multi-Agent and Single-Agent LLMs for Essay Grading
arXiv:2601.22386v1 Announce Type: new Abstract: Automated essay scoring (AES) systems increasingly rely on large language models, yet little is known about how architectural choices shape their performance across different essay quality levels. This paper evaluates single-agent and multi-agent LLM architectures for essay grading using the ASAP 2.0 corpus. Our multi-agent system decomposes grading into three specialist agents (Content, Structure, Language) coordinated by a Chairman Agent that implements rubric-aligned logic including veto rules and score capping. We test both architectures in zero-shot and few-shot conditions using GPT-5.1. Results show that the multi-agent system is significantly better at identifying weak essays while the single-agent system performs better on mid-range essays. Both architectures struggle with high-quality essays. Critically, few-shot calibration emerges as the dominant factor in system performance -- providing just two examples per score level improves QWK by approximately 26% for both architectures. These findings suggest architectural choice should align with specific deployment priorities, with multi-agent AI particularly suited for diagnostic screening of at-risk students, while single-agent models provide a cost-effective solution for general assessment.
https://arxiv.org/abs/2601.22386
Academic Papers
svg
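The Chairman Agent's rubric-aligned aggregation can be pictured as a small decision rule over the three specialist scores. The veto threshold, cap, and 1-6 scale below are invented for illustration; the abstract does not specify the actual rules.

```python
def chairman_score(content, structure, language, veto_threshold=2, cap_if_weak=4):
    """Toy rubric-style aggregation (illustrative only; the paper's actual
    veto and capping rules are not specified in the abstract). Scores 1-6."""
    scores = (content, structure, language)
    # Veto rule: a very weak dimension forces the minimum specialist score.
    if min(scores) <= veto_threshold:
        return min(scores)
    avg = round(sum(scores) / 3)
    # Capping rule: any weak-ish dimension bounds the overall score.
    if min(scores) < cap_if_weak:
        return min(avg, cap_if_weak)
    return avg

print(chairman_score(5, 5, 5))  # 5: straightforward average
print(chairman_score(5, 5, 2))  # 2: language veto triggers
print(chairman_score(5, 5, 3))  # 4: capped by the weak dimension
```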
67d47a47331147656750a5134139afc94d10fc66ff41d18554842e73ddad1948
2026-02-02T00:00:00-05:00
Plant-Inspired Robot Design Metaphors for Ambient HRI
arXiv:2601.22387v1 Announce Type: new Abstract: Plants offer a paradoxical model for interaction: they are ambient, low-demand presences that nonetheless shape atmosphere, routines, and relationships through temporal rhythms and subtle expressions. In contrast, most human-robot interaction (HRI) has been grounded in anthropomorphic and zoomorphic paradigms, producing overt, high-demand forms of engagement. Using a Research through Design (RtD) methodology, we explore plants as metaphoric inspiration for HRI; we conducted iterative cycles of ideation, prototyping, and reflection to investigate what design primitives emerge from plant metaphors and morphologies, and how these primitives can be combined into expressive robotic forms. We present a suite of speculative, open-source prototypes that help probe plant-inspired presence, temporality, form, and gestures. We deepened our learnings from design and prototyping through prototype-centered workshops that explored people's perceptions and imaginaries of plant-inspired robots. This work contributes: (1) A set of plant-inspired robotic artifacts; (2) Designerly insights on how people perceive plant-inspired robots; and (3) Design considerations informing how to use plant metaphors to reshape HRI.
https://arxiv.org/abs/2601.22387
Academic Papers
svg
f3ade658a60747faf661114f2eb9fade5360d657bfa2ecc3c21a9940cb04a7ed
2026-02-02T00:00:00-05:00
An Effective Energy Mask-based Adversarial Evasion Attacks against Misclassification in Speaker Recognition Systems
arXiv:2601.22390v1 Announce Type: new Abstract: Evasion attacks pose significant threats to AI systems, exploiting vulnerabilities in machine learning models to bypass detection mechanisms. The widespread use of voice data, including deepfakes, in promising future industries is currently hindered by insufficient legal frameworks. Adversarial attack methods have emerged as the most effective countermeasure against the indiscriminate use of such data. This research introduces masked energy perturbation (MEP), a novel approach using power spectrum for energy masking of original voice data. MEP applies masking to small energy regions in the frequency domain before generating adversarial perturbations, targeting areas less noticeable to the human auditory model. The study primarily employs advanced speaker recognition models, including ECAPA-TDNN and ResNet34, which have shown remarkable performance in speaker verification tasks. The proposed MEP method demonstrated strong performance in both audio quality and evasion effectiveness. The energy masking approach effectively minimizes the perceptual evaluation of speech quality (PESQ) degradation, indicating that minimal perceptual distortion occurs to the human listener despite the adversarial perturbations. Specifically, in the PESQ evaluation, the relative performance of the MEP method was 26.68% when compared to the fast gradient sign method (FGSM) and iterative FGSM.
https://arxiv.org/abs/2601.22390
Academic Papers
svg
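The energy-masking step described above, selecting low-energy frequency regions before generating a perturbation, can be sketched with a power-spectrum threshold. The signal, threshold choice, and `keep_fraction` parameter are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def low_energy_mask(signal, keep_fraction=0.5):
    """Boolean mask over rfft bins selecting the lowest-energy portion of
    the power spectrum, where added perturbations are least audible
    (sketch of the MEP idea, not the paper's exact procedure)."""
    spectrum = np.fft.rfft(signal)
    power = np.abs(spectrum) ** 2
    threshold = np.quantile(power, keep_fraction)
    return power <= threshold

rng = np.random.default_rng(0)
# A dominant sinusoid (5 cycles over 256 samples) plus faint noise:
x = np.sin(2 * np.pi * 5 * np.arange(256) / 256) + 0.01 * rng.normal(size=256)
mask = low_energy_mask(x)
# An adversarial perturbation would then be shaped so its energy falls only
# in the masked (low-energy) bins before being added to x.
print(mask[5])             # False: the dominant sine bin is protected
print(mask.mean() >= 0.5)  # at least half of the bins remain maskable
```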
df9db1b0e7212bef5a79a0984bce537c1b865affba6f9db30ef30cd80c34d1e0
2026-02-02T00:00:00-05:00
Proof Complexity of Linear Logics
arXiv:2601.22393v1 Announce Type: new Abstract: Proving proof-size lower bounds for $\mathbf{LK}$, the sequent calculus for classical propositional logic, remains a major open problem in proof complexity. We shed new light on this challenge by isolating the power of structural rules, showing that their combination is far stronger than any single rule alone. We establish exponential (resp. sub-exponential) proof-size lower bounds for $\mathbf{LK}$ without contraction (resp. weakening) for formulas with short $\mathbf{LK}$-proofs. Concretely, we work with the Full Lambek calculus with exchange, $\mathbf{FL_e}$, and its contraction-extended variant, $\mathbf{FL_{ec}}$, substructural systems underlying linear logic. We construct families of $\mathbf{FL_e}$-provable (resp. $\mathbf{FL_{ec}}$-provable) formulas that require exponential-size (resp. sub-exponential-size) proofs in affine linear logic $\mathbf{ALL}$ (resp. relevant linear logic $\mathbf{RLL}$), but admit polynomial-size proofs once contraction (resp. weakening) is restored. This yields exponential lower bounds on proof-size of $\mathbf{FL_e}$-provable formulas in $\mathbf{ALL}$ and hence for $\mathbf{MALL}$, $\mathbf{AMALL}$, and full classical linear logic $\mathbf{CLL}$. Finally, we exhibit formulas with polynomial-size $\mathbf{FL_e}$-proofs that nevertheless require exponential-size proofs in cut-free $\mathbf{LK}$, establishing exponential speed-ups between various linear calculi and their cut-free counterparts.
https://arxiv.org/abs/2601.22393
Academic Papers
svg
9d9da62f4cdeba33fbabb528c9b8b1d1ef9e313f9541ddee09d899aa9c96cb7f
2026-02-02T00:00:00-05:00
Conversational Inoculation to Enhance Resistance to Misinformation
arXiv:2601.22394v1 Announce Type: new Abstract: Proliferation of misinformation is a globally acknowledged problem. Cognitive Inoculation helps build resistance to different forms of persuasion, such as misinformation. We investigate Conversational Inoculation, a method to help people build resistance to misinformation through dynamic conversations with a chatbot. We built a Web-based system to implement the method, and conducted a within-subject user experiment to compare it with two traditional inoculation methods. Our results validate Conversational Inoculation as a viable novel method, and show how it was able to enhance participants' resistance to misinformation. A qualitative analysis of the conversations between participants and the chatbot reveals independence and trust as factors that boosted the effectiveness of Conversational Inoculation, and friction of interaction as a factor hindering it. We discuss the opportunities and challenges of using Conversational Inoculation to combat misinformation. Our work contributes a timely investigation and a promising research direction in scalable ways to combat misinformation.
https://arxiv.org/abs/2601.22394
Academic Papers
svg
cbfd8345289f49ca90ddddc0855ca7859945931c25d319737dfdf73376cd72e1
2026-02-02T00:00:00-05:00
Regional Transportation Modeling for Equitable Electric Vehicle Charging Infrastructure Design
arXiv:2601.22395v1 Announce Type: new Abstract: The widespread adoption of battery electric vehicles (BEVs) holds promise for mitigating emission-related health impacts, particularly for low-income communities disproportionately affected by exposure to traffic-related air pollution. However, designing effective charging infrastructure necessitates a regional modeling approach that accounts for the inherent cross-jurisdictional nature of mobility patterns. This study underscores the importance of regional modeling in optimizing charging station deployment and evaluating the environmental justice implications for equity priority communities. We present a large-scale regional transportation modeling analysis leveraging Mobiliti, a cloud-based platform that employs parallel discrete event simulation to enable rapid computation. Our approach identifies the spatial demand density for charging infrastructure by analyzing over 19 million trips in the San Francisco Bay Area and determining the threshold points where BEVs may require charging across a typical day. By transitioning these trips that originate outside equity priority communities to BEVs, we quantify the potential emission reductions within these vulnerable areas. The regional modeling framework captures the complex interactions between travel behavior, vehicle characteristics, and charging needs, while accounting for the interconnectivity of infrastructure across municipal boundaries. This study demonstrates the critical role of regional modeling in designing equitable BEV charging networks that address environmental justice concerns. The findings inform strategies for deploying charging infrastructure that maximizes accessibility, minimizes range anxiety, and prioritizes the health and well-being of communities disproportionately burdened by transportation emissions.
https://arxiv.org/abs/2601.22395
Academic Papers
svg
8b69390d43f0dcd41894a9373ec851d097f6d3ef225f09e2c4da20028da64741
2026-02-02T00:00:00-05:00
Culturally Grounded Personas in Large Language Models: Characterization and Alignment with Socio-Psychological Value Frameworks
arXiv:2601.22396v1 Announce Type: new Abstract: Despite the growing utility of Large Language Models (LLMs) for simulating human behavior, the extent to which these synthetic personas accurately reflect world and moral value systems across different cultural conditionings remains uncertain. This paper investigates the alignment of synthetic, culturally-grounded personas with established frameworks, specifically the World Values Survey (WVS), the Inglehart-Welzel Cultural Map, and Moral Foundations Theory. We conceptualize and produce LLM-generated personas based on a set of interpretable WVS-derived variables, and we examine the generated personas through three complementary lenses: positioning on the Inglehart-Welzel map, which unveils their interpretation reflecting stable differences across cultural conditionings; demographic-level consistency with the World Values Survey, where response distributions broadly track human group patterns; and moral profiles derived from a Moral Foundations questionnaire, which we analyze through a culture-to-morality mapping to characterize how moral responses vary across different cultural configurations. Our approach of culturally-grounded persona generation and analysis enables evaluation of cross-cultural structure and moral variation.
https://arxiv.org/abs/2601.22396
Academic Papers
svg
6602ce61e143ef9ebdb25094af14b541f2f0dfb727a8660259fe55eec4835c38
2026-02-02T00:00:00-05:00
SAIR: Cost-Efficient Multi-Stage ML Pipeline Autoscaling via In-Context Reinforcement Learning
arXiv:2601.22397v1 Announce Type: new Abstract: Multi-stage ML inference pipelines are difficult to autoscale due to heterogeneous resources, cross-stage coupling, and dynamic bottleneck migration. We present SAIR, an autoscaling framework that uses an LLM as an in-context reinforcement learning controller, improving its policy online from reward-labeled interaction histories without gradient updates. SAIR combines Pareto-dominance reward shaping with a provable separation margin, surprisal-guided experience retrieval for context efficiency, and fine-grained GPU rate control via user-space CUDA interception. We provide regret analysis decomposing error into retrieval coverage and LLM selection components. On four ML serving pipelines under three workload patterns, SAIR achieves the best or tied-best P99 latency and effective resource cost among deployed baselines, improving P99 by up to 50% and reducing effective cost by up to 97% (under GPU rate-control assumptions), with 86% bottleneck detection accuracy and no offline training.
https://arxiv.org/abs/2601.22397
Academic Papers
svg
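The "Pareto-dominance reward shaping with a provable separation margin" in the SAIR abstract above can be sketched minimally as follows; the two lower-is-better objectives (P99 latency, resource cost), the margin value, and the reward scale are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of Pareto-dominance reward shaping for an autoscaler.
# An action's outcome is scored against the previous outcome on two
# lower-is-better objectives, e.g. (P99 latency, resource cost).

def dominates(a, b):
    """True if outcome a Pareto-dominates outcome b (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def shaped_reward(new, old, margin=0.05):
    """+1 with a separation margin if the new outcome dominates the old
    one, -1 if it is dominated, else 0 (incomparable outcomes)."""
    scaled = tuple(x * (1 + margin) for x in new)  # require a clear win
    if dominates(scaled, old):
        return 1.0
    if dominates(old, new):
        return -1.0
    return 0.0

print(shaped_reward((80.0, 0.9), (100.0, 1.0)))   # clear improvement
```

The margin makes the positive reward region strictly separated from the negative one, which is the kind of gap a regret analysis can exploit.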
022c83522432d63fd0faf79fb173441d1b212744f5d26e44047157ced1f078b2
2026-02-02T00:00:00-05:00
Jailbreaks on Vision Language Model via Multimodal Reasoning
arXiv:2601.22398v1 Announce Type: new Abstract: Vision-language models (VLMs) have become central to tasks such as visual question answering, image captioning, and text-to-image generation. However, their outputs are highly sensitive to prompt variations, which can reveal vulnerabilities in safety alignment. In this work, we present a jailbreak framework that exploits post-training Chain-of-Thought (CoT) prompting to construct stealthy prompts capable of bypassing safety filters. To further increase attack success rates (ASR), we propose a ReAct-driven adaptive noising mechanism that iteratively perturbs input images based on model feedback. This approach leverages the ReAct paradigm to refine adversarial noise in regions most likely to activate safety defenses, thereby enhancing stealth and evasion. Experimental results demonstrate that the proposed dual-strategy significantly improves ASR while maintaining naturalness in both text and visual domains.
https://arxiv.org/abs/2601.22398
Academic Papers
svg
c0296ed2fb70b495913219ec49638e3a9bbcd66476df89710e2f3a78bb0a994a
2026-02-02T00:00:00-05:00
Score-based Integrated Gradient for Root Cause Explanations of Outliers
arXiv:2601.22399v1 Announce Type: new Abstract: Identifying the root causes of outliers is a fundamental problem in causal inference and anomaly detection. Traditional approaches based on heuristics or counterfactual reasoning often struggle under uncertainty and high-dimensional dependencies. We introduce SIREN, a novel and scalable method that attributes the root causes of outliers by estimating the score functions of the data likelihood. Attribution is computed via integrated gradients that accumulate score contributions along paths from the outlier toward the normal data distribution. Our method satisfies three of the four classic Shapley value axioms - dummy, efficiency, and linearity - as well as an asymmetry axiom derived from the underlying causal structure. Unlike prior work, SIREN operates directly on the score function, enabling tractable and uncertainty-aware root cause attribution in nonlinear, high-dimensional, and heteroscedastic causal models. Extensive experiments on synthetic random graphs and real-world cloud service and supply chain datasets show that SIREN outperforms state-of-the-art baselines in both attribution accuracy and computational efficiency.
https://arxiv.org/abs/2601.22399
Academic Papers
svg
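A minimal sketch of the score-based integrated-gradients idea from the SIREN abstract above, assuming an analytic standard-normal score in place of a learned score model; SIREN's actual path construction and estimator may differ. The example also checks the efficiency (completeness) property the abstract alludes to: attributions sum to the log-density change along the path.

```python
# Sketch: integrate the score (grad of log-density) along a straight
# path from a normal baseline point to the outlier, accumulating
# per-feature attributions. The standard-normal score s(x) = -x is an
# assumption standing in for a learned score function.

def score(x):
    """Score of an isotropic standard normal: grad_x log p(x) = -x."""
    return [-xi for xi in x]

def integrated_gradients(x, baseline, steps=1000):
    """Attribute the log-density change along the path baseline -> x."""
    attr = [0.0] * len(x)
    for k in range(steps):
        t = (k + 0.5) / steps                      # midpoint rule
        point = [b + t * (xi - b) for xi, b in zip(x, baseline)]
        s = score(point)
        for i in range(len(x)):
            attr[i] += s[i] * (x[i] - baseline[i]) / steps
    return attr

outlier, normal = [3.0, 0.5], [0.0, 0.0]
attr = integrated_gradients(outlier, normal)
# Completeness: attributions sum to log p(outlier) - log p(normal)
delta_logp = sum(-(xi**2 - bi**2) / 2 for xi, bi in zip(outlier, normal))
print(attr, delta_logp)
```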
3739ca10867f6c3c614b533118678c5d721b3c16f002a62196dd9d98d8f6abba
2026-02-02T00:00:00-05:00
Semi-Autonomous Mathematics Discovery with Gemini: A Case Study on the Erd\H{o}s Problems
arXiv:2601.22401v1 Announce Type: new Abstract: We present a case study in semi-autonomous mathematics discovery, using Gemini to systematically evaluate 700 conjectures labeled 'Open' in Bloom's Erd\H{o}s Problems database. We employ a hybrid methodology: AI-driven natural language verification to narrow the search space, followed by human expert evaluation to gauge correctness and novelty. We address 13 problems that were marked 'Open' in the database: 5 through seemingly novel autonomous solutions, and 8 through identification of previous solutions in the existing literature. Our findings suggest that the 'Open' status of the problems was through obscurity rather than difficulty. We also identify and discuss issues arising in applying AI to math conjectures at scale, highlighting the difficulty of literature identification and the risk of ''subconscious plagiarism'' by AI. We reflect on the takeaways from AI-assisted efforts on the Erd\H{o}s Problems.
https://arxiv.org/abs/2601.22401
Academic Papers
svg
d29081b2a9534b82a2e738d136e9ed1667017f26e5c2a8d09a28e829849cf18c
2026-02-02T00:00:00-05:00
Bifocal Attention: Harmonizing Geometric and Spectral Positional Embeddings for Algorithmic Generalization
arXiv:2601.22402v1 Announce Type: new Abstract: Rotary Positional Embeddings (RoPE) have become the standard for Large Language Models (LLMs) due to their ability to encode relative positions through geometric rotation. However, we identify a significant limitation we term ''Spectral Rigidity'': standard RoPE utilizes a fixed geometric decay ($\theta^{-i}$) optimized for local syntactic coherence, which fails to capture the long-range, periodic structures inherent in recursive logic and algorithmic reasoning. This results in a ''Structure Gap'', where models trained on shallow reasoning chains fail to extrapolate to deeper recursive steps. In this work, we introduce Bifocal Attention, an architectural paradigm that decouples positional encoding into two distinct modalities: Geometric Eyes (Standard RoPE) for precise token-level manipulation, and Spectral Eyes (Learnable Harmonic Operators) for tracking long-range recursive depth. We propose a novel training protocol, Spectral Evolution, which initializes positional frequencies as static geometric parameters but allows them to evolve via gradient descent into a harmonic basis optimized for the specific algorithmic topology of the task.
https://arxiv.org/abs/2601.22402
Academic Papers
svg
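The fixed geometric decay $\theta^{-i}$ that the Bifocal Attention abstract above diagnoses as "Spectral Rigidity" is the standard RoPE frequency schedule, which can be sketched as follows; the base value 10000 is the common default, and the learnable "Spectral Eyes" harmonic operators are not modeled here.

```python
# Sketch of the fixed geometric frequency schedule of standard RoPE:
# theta_i = base^(-2i/d). The paper's proposal is to let such
# frequencies evolve by gradient descent instead of keeping them fixed.
import math

def rope_freqs(d, base=10000.0):
    """Per-pair rotation frequencies for an even head dimension d."""
    return [base ** (-2.0 * i / d) for i in range(d // 2)]

def rotate_pair(x, y, pos, freq):
    """Rotate one 2-D feature pair by the angle pos * freq."""
    a = pos * freq
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

freqs = rope_freqs(64)
print(freqs[0], freqs[-1])  # fast local frequency vs slow long-range one
```

Because each rotation is an isometry, relative position enters attention scores only through angle differences, which is what makes RoPE relative.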
5e5c1124d931050e8c0cb63747697afcfa71eeaccda6fd1eb2b7f4209d8d9995
2026-02-02T00:00:00-05:00
Modeling of Non-linear Dynamics of Lithium-ion Batteries via Delay-Embedded Dynamic Mode Decomposition
arXiv:2601.22403v1 Announce Type: new Abstract: The complex electrochemical behavior of lithium-ion batteries results in non-linear dynamics, and appropriate modeling of this non-linear dynamical system is of interest for better management and control. In this work, we propose a family of dynamic mode decomposition (DMD)-based data-driven models that do not require detailed knowledge of the composition of the battery materials but can essentially capture the non-linear dynamics with higher computational efficiency. Only voltage and current data obtained from hybrid pulse power characterization (HPPC) tests were utilized to form the state space matrices and subsequently used for predicting the future terminal voltage at different state of charge (SoC) and aging levels. To construct the system model, 60% of the data from a single HPPC test was utilized to generate time-delay embedded snapshots, with embedding dimension ranging from 40 to 2000. Among these, an embedding dimension of 1810 resulted in the lowest residual sum of squares (RSS) error of 3.86 for the dynamic mode decomposition with control (DMDc) model and 30 for the standard DMD model. For the DMDc model, delay embeddings (ranging from 1 to 12) were also incorporated into the input current signals. For the input matrix, an embedding dimension of 6 resulted in a minimum RSS error of 1.74. Furthermore, the system matrices A and B, identified from the HPPC test when the cell is in its healthy state, were held fixed and used to simulate the system dynamics for aged batteries by updating only the control input. Despite the presence of nonlinear degradation effects in later cycles, the DMDc model effectively captured key internal dynamics such as voltage dips and transient responses for subsequent charge and discharge cycles.
https://arxiv.org/abs/2601.22403
Academic Papers
svg
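The delay-embedding and DMD machinery described in the abstract above can be illustrated with a deliberately tiny, one-dimensional sketch; the paper's DMDc model fits full state-space matrices A and B, which this scalar least-squares fit only hints at.

```python
# Sketch of time-delay (Hankel) embedding plus a one-dimensional DMD fit
# (a least-squares estimate of x_{k+1} = a * x_k). The decaying voltage
# signal and embedding dimension are illustrative, not the paper's data.

def delay_embed(series, dim):
    """Stack `dim` consecutive samples into each snapshot column."""
    n = len(series) - dim + 1
    return [series[i:i + dim] for i in range(n)]

def scalar_dmd(series):
    """Least-squares fit of the one-step linear map x_{k+1} = a * x_k."""
    x0, x1 = series[:-1], series[1:]
    return sum(a * b for a, b in zip(x0, x1)) / sum(a * a for a in x0)

voltage = [4.2 * 0.9 ** k for k in range(20)]   # synthetic decaying signal
snapshots = delay_embed(voltage, dim=4)
a = scalar_dmd(voltage)
print(len(snapshots), len(snapshots[0]), a)     # 17 columns of length 4
```

Delay embedding is what lets a linear model like DMD represent richer dynamics: each snapshot carries a short history rather than a single sample.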
8e12edb1ab35a16efc824b84333286a124708f805c2403efdaeba471f1a2f0c0
2026-02-02T00:00:00-05:00
Accurate Pedestrian Tracking in Urban Canyons: A Multi-Modal Fusion Approach
arXiv:2601.22406v1 Announce Type: new Abstract: The contribution describes a pedestrian navigation approach designed to improve localization accuracy in urban environments where GNSS performance is degraded, a problem that is especially critical for blind or low-vision users who depend on precise guidance such as identifying the correct side of a street. To address GNSS limitations and the impracticality of camera-based visual positioning, the work proposes a particle-filter-based fusion of GNSS and inertial data that incorporates spatial priors from maps, such as impassable buildings and unlikely walking areas, functioning as a probabilistic form of map matching. Inertial localization is provided by the RoNIN machine learning method, and fusion with GNSS is achieved by weighting particles based on their consistency with GNSS estimates and uncertainty. The system was evaluated on six challenging walking routes in downtown San Francisco using three metrics related to sidewalk correctness and localization error. Results show that the fused approach (GNSS+RoNIN+PF) significantly outperforms GNSS-only localization on most metrics, while inertial-only localization with particle filtering also surpasses GNSS alone for critical measures such as sidewalk assignment and across-street error.
https://arxiv.org/abs/2601.22406
Academic Papers
svg
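The particle re-weighting step described in the abstract above (GNSS consistency combined with a map prior) might look like the following sketch; the Gaussian likelihood form, the binary passability prior, and all names are illustrative assumptions.

```python
# Hedged sketch of the described fusion: particles propagated by inertial
# dead reckoning are re-weighted by their consistency with a GNSS fix and
# by a map prior that vetoes impassable areas (e.g. inside buildings).
import math

def gnss_weight(particle, gnss, sigma):
    """Gaussian likelihood of the GNSS fix given a particle position."""
    d2 = (particle[0] - gnss[0]) ** 2 + (particle[1] - gnss[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def reweight(particles, gnss, sigma, passable):
    """Combine GNSS likelihood with a binary map prior, then normalize."""
    w = [gnss_weight(p, gnss, sigma) * (1.0 if passable(p) else 0.0)
         for p in particles]
    total = sum(w)
    return [wi / total for wi in w]

passable = lambda p: p[0] >= 0.0            # toy map: building at x < 0
particles = [(-1.0, 0.0), (0.5, 0.0), (2.0, 0.0)]
weights = reweight(particles, (0.0, 0.0), sigma=5.0, passable=passable)
print(weights)    # the in-building particle gets zero weight
```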
8e777f2a7df23f1ca849acf314e4d1c97e7ec03ac55df31e8048ceda881a5e08
2026-02-02T00:00:00-05:00
Optimization, Generalization and Differential Privacy Bounds for Gradient Descent on Kolmogorov-Arnold Networks
arXiv:2601.22409v1 Announce Type: new Abstract: Kolmogorov--Arnold Networks (KANs) have recently emerged as a structured alternative to standard MLPs, yet a principled theory for their training dynamics, generalization, and privacy properties remains limited. In this paper, we analyze gradient descent (GD) for training two-layer KANs and derive general bounds that characterize their training dynamics, generalization, and utility under differential privacy (DP). As a concrete instantiation, we specialize our analysis to logistic loss under an NTK-separable assumption, where we show that polylogarithmic network width suffices for GD to achieve an optimization rate of order $1/T$ and a generalization rate of order $1/n$, with $T$ denoting the number of GD iterations and $n$ the sample size. In the private setting, we characterize the noise required for $(\epsilon,\delta)$-DP and obtain a utility bound of order $\sqrt{d}/(n\epsilon)$ (with $d$ the input dimension), matching the classical lower bound for general convex Lipschitz problems. Our results imply that polylogarithmic width is not only sufficient but also necessary under differential privacy, revealing a qualitative gap between non-private (sufficiency only) and private (necessity also emerges) training regimes. Experiments further illustrate how these theoretical insights can guide practical choices, including network width selection and early stopping.
https://arxiv.org/abs/2601.22409
Academic Papers
svg
7ab7b6f520af985ab2e63577d6d0532daf03812b21a95cdc3dcd87cdc0185ccf
2026-02-02T00:00:00-05:00
Word-Centered Semantic Graphs for Interpretable Diachronic Sense Tracking
arXiv:2601.22410v1 Announce Type: new Abstract: We propose an interpretable, graph-based framework for analyzing semantic shift in diachronic corpora. For each target word and time slice, we induce a word-centered semantic network that integrates distributional similarity from diachronic Skip-gram embeddings with lexical substitutability from time-specific masked language models. We identify sense-related structure by clustering the peripheral graph, align clusters across time via node overlap, and track change through cluster composition and normalized cluster mass. In an application study on a corpus of New York Times Magazine articles (1980 - 2017), we show that graph connectivity reflects polysemy dynamics and that the induced communities capture contrasting trajectories: event-driven sense replacement (trump), semantic stability with cluster over-segmentation effects (god), and gradual association shifts tied to digital communication (post). Overall, word-centered semantic graphs offer a compact and transparent representation for exploring sense evolution without relying on predefined sense inventories.
https://arxiv.org/abs/2601.22410
Academic Papers
svg
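The cluster-alignment-by-node-overlap step described in the abstract above might look like the following sketch; the Jaccard criterion, threshold value, and example vocabularies are assumptions, not the paper's exact procedure.

```python
# Sketch: match each sense cluster in the current time slice to its
# best-overlapping predecessor by Jaccard similarity over shared nodes.

def jaccard(a, b):
    """Node-overlap similarity between two clusters (as sets)."""
    return len(a & b) / len(a | b)

def align_clusters(prev, curr, threshold=0.2):
    """Match each current cluster to its best-overlapping predecessor."""
    matches = {}
    for name, nodes in curr.items():
        best = max(prev, key=lambda p: jaccard(nodes, prev[p]), default=None)
        if best is not None and jaccard(nodes, prev[best]) >= threshold:
            matches[name] = best
    return matches

prev = {"card": {"ace", "deck", "suit"}, "person": {"president", "mogul"}}
curr = {"c1": {"ace", "deck", "poker"}, "c2": {"president", "election"}}
print(align_clusters(prev, curr))   # -> {'c1': 'card', 'c2': 'person'}
```

Unmatched current clusters (overlap below the threshold) would be treated as newly emerging senses.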
111c19ee665b30d3e1ed47dfe69fc715b64739a20d6e920ca1d354664dcb3a9d
2026-02-02T00:00:00-05:00
EMBC Special Issue: Calibrated Uncertainty for Trustworthy Clinical Gait Analysis Using Probabilistic Multiview Markerless Motion Capture
arXiv:2601.22412v1 Announce Type: new Abstract: Video-based human movement analysis holds potential for movement assessment in clinical practice and research. However, the clinical implementation and trust of multi-view markerless motion capture (MMMC) require that, in addition to being accurate, these systems produce reliable confidence intervals to indicate how accurate they are for any individual. Building on our prior work utilizing variational inference to estimate joint angle posterior distributions, this study evaluates the calibration and reliability of a probabilistic MMMC method. We analyzed data from 68 participants across two institutions, validating the model against an instrumented walkway and standard marker-based motion capture. We measured the calibration of the confidence intervals using the Expected Calibration Error (ECE). The model demonstrated reliable calibration, yielding ECE values generally < 0.1 for both step and stride length and bias-corrected gait kinematics. We observed a median step and stride length error of ~16 mm and ~12 mm respectively, with median bias-corrected kinematic errors ranging from 1.5 to 3.8 degrees across lower extremity joints. Consistent with the calibrated ECE, the magnitude of the model's predicted uncertainty correlated strongly with observed error measures. These findings indicate that, as designed, the probabilistic model reconstruction quantifies epistemic uncertainty, allowing it to identify unreliable outputs without the need for concurrent ground-truth instrumentation.
https://arxiv.org/abs/2601.22412
Academic Papers
svg
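An expected-calibration-error check for predictive intervals, of the kind the abstract above reports, can be sketched as follows; comparing nominal with empirical coverage at a few fixed levels is a simplifying assumption and may differ from the paper's exact ECE definition.

```python
# Sketch: a model is calibrated if its predicted confidence intervals
# cover the true values at the nominal rate; ECE averages the gap
# between nominal and empirical coverage over the queried levels.

def empirical_coverage(errors, half_widths):
    """Fraction of absolute errors falling inside the predicted interval."""
    inside = sum(1 for e, w in zip(errors, half_widths) if abs(e) <= w)
    return inside / len(errors)

def interval_ece(errors, half_widths_by_level):
    """Mean |nominal - empirical| coverage over the queried levels."""
    gaps = [abs(level - empirical_coverage(errors, widths))
            for level, widths in half_widths_by_level.items()]
    return sum(gaps) / len(gaps)

errors = [0.1, -0.3, 0.05, 0.6, -0.1]           # prediction errors
levels = {0.8: [0.5] * 5, 0.5: [0.2] * 5}       # interval half-widths
print(interval_ece(errors, levels))
```

An ECE near zero, as reported in the abstract (generally < 0.1), means the predicted uncertainty can be trusted to flag unreliable outputs.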
7a02408aad314fa2f2ce29ddf7da6e14a7b16e64505f2bdafa8e32072180d9d3
2026-02-02T00:00:00-05:00
PriviSense: A Frida-Based Framework for Multi-Sensor Spoofing on Android
arXiv:2601.22414v1 Announce Type: new Abstract: Mobile apps increasingly rely on real-time sensor and system data to adapt their behavior to user context. While emulators and instrumented builds offer partial solutions, they often fail to support reproducible testing of context-sensitive app behavior on physical devices. We present PriviSense, a Frida-based, on-device toolkit for runtime spoofing of sensor and system signals on rooted Android devices. PriviSense can script and inject time-varying sensor streams (accelerometer, gyroscope, step counter) and system values (battery level, system time, device metadata) into unmodified apps, enabling reproducible on-device experiments without emulators or app rewrites. Our demo validates real-time spoofing on a rooted Android device across five representative sensor-visualization apps. By supporting scriptable and reversible manipulation of these values, PriviSense facilitates testing of app logic, uncovering of context-based behaviors, and privacy-focused analysis. To ensure ethical use, the code is shared upon request with verified researchers. A tool guide ("How to Run PriviSense on Rooted Android") is available at https://bit.ly/privisense-guide and a demonstration video at https://www.youtube.com/watch?v=4Qwnogcc3pw
https://arxiv.org/abs/2601.22414
Academic Papers
svg
2c42e65271268e92cc6fea7b8883546cefc7cc0869bcba1b6684d130bb67c82b
2026-02-02T00:00:00-05:00
MM-OpenFGL: A Comprehensive Benchmark for Multimodal Federated Graph Learning
arXiv:2601.22416v1 Announce Type: new Abstract: Multimodal-attributed graphs (MMAGs) provide a unified framework for modeling complex relational data by integrating heterogeneous modalities with graph structures. While centralized learning has shown promising performance, MMAGs in real-world applications are often distributed across isolated platforms and cannot be shared due to privacy concerns or commercial constraints. Federated graph learning (FGL) offers a natural solution for collaborative training under such settings; however, existing studies largely focus on single-modality graphs and do not adequately address the challenges unique to multimodal federated graph learning (MMFGL). To bridge this gap, we present MM-OpenFGL, the first comprehensive benchmark that systematically formalizes the MMFGL paradigm and enables rigorous evaluation. MM-OpenFGL comprises 19 multimodal datasets spanning 7 application domains, 8 simulation strategies capturing modality and topology variations, 6 downstream tasks, and 57 state-of-the-art methods implemented through a modular API. Extensive experiments investigate MMFGL from the perspectives of necessity, effectiveness, robustness, and efficiency, offering valuable insights for future research on MMFGL.
https://arxiv.org/abs/2601.22416
Academic Papers
svg
3560a373ec6a047f0235af0dde510fc17e633a433c4a5e01fe07e1bcd09d551c
2026-02-02T00:00:00-05:00
AI-Enabled Waste Classification as a Data-Driven Decision Support Tool for Circular Economy and Urban Sustainability
arXiv:2601.22418v1 Announce Type: new Abstract: Efficient waste sorting is crucial for enabling circular-economy practices and resource recovery in smart cities. This paper evaluates both traditional machine-learning (Random Forest, SVM, AdaBoost) and deep-learning techniques including custom CNNs, VGG16, ResNet50, and three transfer-learning models (DenseNet121, EfficientNetB0, InceptionV3) for binary classification of 25,077 waste images (80/20 train/test split, augmented and resized to 150x150 px). The paper assesses the impact of Principal Component Analysis (PCA) for dimensionality reduction on traditional models. DenseNet121 achieved the highest accuracy (91%) and ROC-AUC (0.98), outperforming the best traditional classifier by 20 pp. PCA showed negligible benefit for classical methods, whereas transfer learning substantially improved performance under limited-data conditions. Finally, we outline how these models integrate into a real-time Data-Driven Decision Support System for automated waste sorting, highlighting potential reductions in landfill use and lifecycle environmental impacts.
https://arxiv.org/abs/2601.22418
Academic Papers
svg
3f22a3ba4cac6910c073bd857f8e11c36dfb7bf81750894c9ab8c2a596e737ff
2026-02-02T00:00:00-05:00
Dynamic Welfare-Maximizing Pooled Testing
arXiv:2601.22419v1 Announce Type: new Abstract: Pooled testing is a common strategy for public health disease screening under limited testing resources, allowing multiple biological samples to be tested together with the resources of a single test, at the cost of reduced individual resolution. While dynamic and adaptive strategies have been extensively studied in the classical pooled testing literature, where the goal is to minimize the number of tests required for full diagnosis of a given population, much of the existing work on welfare-maximizing pooled testing adopts static formulations in which all tests are assigned in advance. In this paper, we study dynamic welfare-maximizing pooled testing strategies in which a limited number of tests are performed sequentially to maximize social welfare, defined as the aggregate utility of individuals who are confirmed to be healthy. We formally define the dynamic problem and study algorithmic approaches for sequential test assignment. Because exact dynamic optimization is computationally infeasible beyond small instances, we evaluate a range of strategies (including exact optimization baselines, greedy heuristics, mixed-integer programming relaxations, and learning-based policies) and empirically characterize their performance and tradeoffs using synthetic experiments. Our results show that dynamic testing can yield substantial welfare improvements over static baselines in low-budget regimes. We find that much of the benefit of dynamic testing is captured by simple greedy policies, which substantially outperform static approaches while remaining computationally efficient. Learning-based methods are included as flexible baselines, but in our experiments they do not reliably improve upon these heuristics. Overall, this work provides a principled computational perspective on dynamic pooled testing and clarifies when dynamic assignment meaningfully improves welfare in public health screening.
https://arxiv.org/abs/2601.22419
Academic Papers
svg
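A toy version of the greedy heuristics the pooled-testing abstract above evaluates: a pooled test confirms everyone in the pool healthy only when it comes back negative, so a pool's expected welfare is the product of individual healthy probabilities times the pooled utility. The priors, utilities, and exhaustive pool search are illustrative assumptions and ignore test errors and sequential belief updates.

```python
# Illustrative greedy policy for welfare-maximizing pooled testing.
from itertools import combinations

def expected_welfare(pool, p, u):
    """Expected utility confirmed by one pooled test of `pool`."""
    prob_all_healthy = 1.0
    for i in pool:
        prob_all_healthy *= p[i]        # negative result needs all healthy
    return prob_all_healthy * sum(u[i] for i in pool)

def greedy_pool(candidates, p, u, max_size):
    """Pick the pool (up to max_size) with highest expected welfare."""
    best, best_val = None, -1.0
    for k in range(1, max_size + 1):
        for pool in combinations(candidates, k):
            val = expected_welfare(pool, p, u)
            if val > best_val:
                best, best_val = pool, val
    return best, best_val

p = {0: 0.99, 1: 0.95, 2: 0.50}      # prior probability of being healthy
u = {0: 1.0, 1: 1.0, 2: 1.0}         # utility of confirming each person
print(greedy_pool([0, 1, 2], p, u, max_size=3))
```

The tradeoff the abstract describes is visible even here: adding the risky individual 2 to any pool lowers the chance of a negative result enough to reduce expected welfare.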
64913b81c90d17a2afacb4a95aac8500e86497260e625042e9f03bd2b94c477a
2026-02-02T00:00:00-05:00
MetaLead: A Comprehensive Human-Curated Leaderboard Dataset for Transparent Reporting of Machine Learning Experiments
arXiv:2601.22420v1 Announce Type: new Abstract: Leaderboards are crucial in the machine learning (ML) domain for benchmarking and tracking progress. However, creating leaderboards traditionally demands significant manual effort. In recent years, efforts have been made to automate leaderboard generation, but existing datasets for this purpose are limited by capturing only the best results from each paper and limited metadata. We present MetaLead, a fully human-annotated ML leaderboard dataset that captures all experimental results for result transparency and contains extra metadata, such as the experimental type of each result (baseline, proposed method, or variation of the proposed method) for experiment-type-guided comparisons, and explicitly separates train and test datasets for cross-domain assessment. This enriched structure makes MetaLead a powerful resource for more transparent and nuanced evaluations across ML research.
https://arxiv.org/abs/2601.22420
Academic Papers
svg
f0db071893200fd59ef5208eef6b69c3754ac7153673a0dc407ef72a9972b7e8
2026-02-02T00:00:00-05:00
Toward Third-Party Assurance of AI Systems: Design Requirements, Prototype, and Early Testing
arXiv:2601.22424v1 Announce Type: new Abstract: As Artificial Intelligence (AI) systems proliferate, the need for systematic, transparent, and actionable processes for evaluating them is growing. While many resources exist to support AI evaluation, they have several limitations. Few address both the process of designing, developing, and deploying an AI system and the outcomes it produces. Furthermore, few are end-to-end and operational, give actionable guidance, or present evidence of usability or effectiveness in practice. In this paper, we introduce a third-party AI assurance framework that addresses these gaps. We focus on third-party assurance to prevent conflict of interest and ensure credibility and accountability of the process. We begin by distinguishing assurance from audits in several key dimensions. Then, following design principles, we reflect on the shortcomings of existing resources to identify a set of design requirements for AI assurance. We then construct a prototype of an assurance process that consists of (1) a responsibility assignment matrix to determine the different levels of involvement each stakeholder has at each stage of the AI lifecycle, (2) an interview protocol for each stakeholder of an AI system, (3) a maturity matrix to assess AI systems' adherence to best practices, and (4) a template for an assurance report that draws from more mature assurance practices in business accounting. We conduct early validation of our AI assurance framework by applying the framework to two distinct AI use cases -- a business document tagging tool for downstream processing in a large private firm, and a housing resource allocation tool in a public agency -- and conducting expert validation interviews. Our findings show early evidence that our AI assurance framework is sound and comprehensive, usable across different organizational contexts, and effective at identifying bespoke issues with AI systems.
https://arxiv.org/abs/2601.22424
Academic Papers
svg
d560346ce45662e6e7ad35d04cda3de15c9de448a95e39a9120560c7beb5cc71
2026-02-02T00:00:00-05:00
ScamPilot: Simulating Conversations with LLMs to Protect Against Online Scams
arXiv:2601.22426v1 Announce Type: new Abstract: Fraud continues to proliferate online, from phishing and ransomware to impersonation scams. Yet automated prevention approaches adapt slowly and may not reliably protect users from falling prey to new scams. To better combat online scams, we developed ScamPilot, a conversational interface that inoculates users against scams through simulation, dynamic interaction, and real-time feedback. ScamPilot simulates scams with two large language model-powered agents: a scammer and a target. Users must help the target defend against the scammer by providing real-time advice. Through a between-subjects study (N=150) with one control and three experimental conditions, we find that blending advice-giving with multiple choice questions significantly increased scam recognition (+8%) without decreasing wariness towards legitimate conversations. Users' response efficacy and change in self-efficacy was also 9% and 19% higher, respectively. Qualitatively, we find that users more frequently provided action-oriented advice over urging caution or providing emotional support. Overall, ScamPilot demonstrates the potential for inter-agent conversational user interfaces to augment learning.
https://arxiv.org/abs/2601.22426
Academic Papers
svg
40fbf6c4e581c6edd76184c9d88aa04332c66c9a209f874770a7dd579437ddc8
2026-02-02T00:00:00-05:00
CoDCL: Counterfactual Data Augmentation Contrastive Learning for Continuous-Time Dynamic Network Link Prediction
arXiv:2601.22427v1 Announce Type: new Abstract: The rapid growth and continuous structural evolution of dynamic networks make effective prediction increasingly challenging. To enable prediction models to adapt to complex temporal environments, they need to be robust to emerging structural changes. We propose a dynamic network learning framework, CoDCL, which combines counterfactual data augmentation with contrastive learning to address this deficiency. Furthermore, we devise a comprehensive strategy to generate high-quality counterfactual data, combining a dynamic treatment design with efficient structural neighborhood exploration to quantify the temporal changes in interaction patterns. Crucially, the entire CoDCL is designed as a plug-and-play universal module that can be seamlessly integrated into various existing temporal graph models without requiring architectural modifications. Extensive experiments on multiple real-world datasets demonstrate that CoDCL significantly improves upon state-of-the-art baseline models in the field of dynamic networks, confirming the critical role of integrating counterfactual data augmentation into dynamic representation learning.
https://arxiv.org/abs/2601.22427
Academic Papers
svg
ecc95697f9f9b09eebee97c963333c233c35e6e8741301008bebb98a42a69cdc
2026-02-02T00:00:00-05:00
Why Johnny Can't Think: GenAI's Impacts on Cognitive Engagement
arXiv:2601.22430v1 Announce Type: new Abstract: Context: Many students now use generative AI in their coursework, yet its effects on intellectual development remain poorly understood. While prior work has investigated students' cognitive offloading during episodic interactions, it remains unclear whether using genAI routinely is tied to more fundamental shifts in students' thinking habits. Objective: We investigate (RQ1-How): how students' trust in and routine use of genAI affect their cognitive engagement -- specifically, reflection, need for understanding, and critical thinking in STEM coursework. Further, we investigate (RQ2-Who): which students are particularly vulnerable to these cognitive disengagement effects. Method: We drew on dual-process theory, cognitive offloading, and automation bias literature to develop a statistical model explaining how and to what extent students' trust-driven routine use of genAI affected their cognitive engagement habits in coursework, and how these effects differed across students' cognitive styles. We empirically evaluated this model using Partial Least Squares Structural Equation Modeling on survey data from 299 STEM students across five North American universities. Results: Students who trusted and routinely used genAI reported significantly lower cognitive engagement. Unexpectedly, students with higher technophilic motivations, risk tolerance, and computer self-efficacy -- traits often celebrated in STEM -- were more prone to these effects. Interestingly, prior experience with genAI or academia did not protect them from cognitively disengaging. Implications: Our findings suggest a potential cognitive debt cycle in which routine genAI use progressively weakens students' intellectual habits, potentially driving over-reliance and escalating usage. This poses critical challenges for curricula and genAI system design, requiring interventions that actively support cognitive engagement.
https://arxiv.org/abs/2601.22430
Academic Papers
svg
77e57185aaa84614cfb40ea72b205f7bd128fc3654a17cd50e9fafd6b2f8ff2d
2026-02-02T00:00:00-05:00
ReNCE: Learning to Reason by Noise Contrastive Estimation
arXiv:2601.22432v1 Announce Type: new Abstract: GRPO is a standard approach to endowing pretrained LLMs with reasoning capabilities. It estimates the advantage of an outcome from a group of $K$ outcomes, and promotes those with positive advantages inside a trust region. Since GRPO discriminates between good and bad outcomes softly, it benefits from additional refinements such as asymmetric clipping and zero-variance data filtering. While effective, these refinements require significant empirical insight and can be challenging to identify. We instead propose an explicit contrastive learning approach. Instead of estimating advantages, we bifurcate $K$ outcomes into positive and negative sets, then maximize the likelihood of positive outcomes. Our approach can be viewed as an online instantiation of (multi-label) noise contrastive estimation for LLM reasoning. We validate our method by demonstrating competitive performance on a suite of challenging math benchmarks against strong baselines such as DAPO and online DPO.
https://arxiv.org/abs/2601.22432
Academic Papers
svg
b1fdb1c34a55126586112df95ab891c0904778e3a8cd2179d794d53872ab3fcf
2026-02-02T00:00:00-05:00
When LLM meets Fuzzy-TOPSIS for Personnel Selection through Automated Profile Analysis
arXiv:2601.22433v1 Announce Type: new Abstract: In this highly competitive employment environment, the selection of suitable personnel is essential for organizational success. This study presents an automated personnel selection system that utilizes sophisticated natural language processing (NLP) methods to assess and rank software engineering applicants. A distinctive dataset was created by aggregating LinkedIn profiles that include essential features such as education, work experience, abilities, and self-introduction, further enhanced with expert assessments to function as standards. The research combines large language models (LLMs) with multicriteria decision-making (MCDM) theory to develop the LLM-TOPSIS framework. In this context, we utilized the TOPSIS method enhanced by fuzzy logic (Fuzzy TOPSIS) to address the intrinsic ambiguity and subjectivity in human assessments. We utilized triangular fuzzy numbers (TFNs) to describe criteria weights and scores, thereby addressing the ambiguity frequently encountered in candidate evaluations. For candidate ranking, the DistilRoBERTa model was fine-tuned and integrated with the fuzzy TOPSIS method, achieving rankings closely aligned with human expert evaluations and attaining an accuracy of up to 91% for the Experience attribute and the Overall attribute. The study underlines the potential of NLP-driven frameworks to improve recruitment procedures by boosting scalability, consistency, and minimizing prejudice. Future endeavors will concentrate on augmenting the dataset, enhancing model interpretability, and verifying the system in actual recruitment scenarios to better evaluate its practical applicability. This research highlights the intriguing potential of merging NLP with fuzzy decision-making methods in personnel selection, enabling scalable and unbiased solutions to recruitment difficulties.
https://arxiv.org/abs/2601.22433
Academic Papers
svg
1c94d04ad8e38f9c9353b098b27117f37845aad114c70fc7e9e0f682ee61b425
2026-02-02T00:00:00-05:00
Rethinking Anonymity Claims in Synthetic Data Generation: A Model-Centric Privacy Attack Perspective
arXiv:2601.22434v1 Announce Type: new Abstract: Training generative machine learning models to produce synthetic tabular data has become a popular approach for enhancing privacy in data sharing. As this typically involves processing sensitive personal information, releasing either the trained model or generated synthetic datasets can still pose privacy risks. Yet, recent research, commercial deployments, and privacy regulations like the General Data Protection Regulation (GDPR) largely assess anonymity at the level of an individual dataset. In this paper, we rethink anonymity claims about synthetic data from a model-centric perspective and argue that meaningful assessments must account for the capabilities and properties of the underlying generative model and be grounded in state-of-the-art privacy attacks. This perspective better reflects real-world products and deployments, where trained models are often readily accessible for interaction or querying. We interpret the GDPR's definitions of personal data and anonymization under such access assumptions to identify the types of identifiability risks that must be mitigated and map them to privacy attacks across different threat settings. We then argue that synthetic data techniques alone do not ensure sufficient anonymization. Finally, we compare the two mechanisms most commonly used alongside synthetic data -- Differential Privacy (DP) and Similarity-based Privacy Metrics (SBPMs) -- and argue that while DP can offer robust protections against identifiability risks, SBPMs lack adequate safeguards. Overall, our work connects regulatory notions of identifiability with model-centric privacy attacks, enabling more responsible and trustworthy regulatory assessment of synthetic data systems by researchers, practitioners, and policymakers.
https://arxiv.org/abs/2601.22434
Academic Papers
svg
57de24234cad11485e43032c14e611c63a49a06421121fcac83b7534dae58a73
2026-02-02T00:00:00-05:00
Large Language Model Agents Are Not Always Faithful Self-Evolvers
arXiv:2601.22436v1 Announce Type: new Abstract: Self-evolving large language model (LLM) agents continually improve by accumulating and reusing past experience, yet it remains unclear whether they faithfully rely on that experience to guide their behavior. We present the first systematic investigation of experience faithfulness, the causal dependence of an agent's decisions on the experience it is given, in self-evolving LLM agents. Using controlled causal interventions on both raw and condensed forms of experience, we comprehensively evaluate four representative frameworks across 10 LLM backbones and 9 environments. Our analysis uncovers a striking asymmetry: while agents consistently depend on raw experience, they often disregard or misinterpret condensed experience, even when it is the only experience provided. This gap persists across single- and multi-agent configurations and across backbone scales. We trace its underlying causes to three factors: the semantic limitations of condensed content, internal processing biases that suppress experience, and task regimes where pretrained priors already suffice. These findings challenge prevailing assumptions about self-evolving methods and underscore the need for more faithful and reliable approaches to experience integration.
https://arxiv.org/abs/2601.22436
Academic Papers
svg
2a8d492c6583874a72f92c66aa663e981652a986408a2cbd711c13b11465e030
2026-02-02T00:00:00-05:00
Towards Resiliency in Large Language Model Serving with KevlarFlow
arXiv:2601.22438v1 Announce Type: new Abstract: Large Language Model (LLM) serving systems remain fundamentally fragile, where frequent hardware faults in hyperscale clusters trigger disproportionate service outages in the software stack. Current recovery mechanisms are prohibitively slow, often requiring up to 10 minutes to reinitialize resources and reload massive model weights. We introduce KevlarFlow, a fault tolerant serving architecture designed to bridge the gap between hardware unreliability and service availability. KevlarFlow leverages 1) decoupled model parallelism initialization, 2) dynamic traffic rerouting, and 3) background KV cache replication to maintain high throughput during partial failures. Our evaluation demonstrates that KevlarFlow reduces mean-time-to-recovery (MTTR) by 20x and, under failure conditions, improves average latency by 3.1x, 99th percentile (p99) latency by 2.8x, average time-to-first-token (TTFT) by 378.9x, and p99 TTFT by 574.6x with negligible runtime overhead in comparison to state-of-the-art LLM serving systems.
https://arxiv.org/abs/2601.22438
Academic Papers
svg
87552b579468541fe3e9e5005eb9ad4811ef3bae107e9d18ad009b94a5bf1d04
2026-02-02T00:00:00-05:00
Stop Jostling: Adaptive Negative Sampling Reduces the Marginalization of Low-Resource Language Tokens by Cross-Entropy Loss
arXiv:2601.22439v1 Announce Type: new Abstract: Neural language models often struggle with low-resource languages due to the limited availability of training data, making tokens from these languages rare in the training set. This paper addresses a specific challenge during training: rare tokens are disproportionately affected by marginalization, which prevents them from learning effectively. We propose a thresholding technique that reduces the impact of this marginalization, allowing rare tokens to benefit from more meaningful alignment. Through experiments with a character-level language model, we demonstrate that this method significantly improves performance on low-resource language validation data. This work is the first to show how negative sampling can be applied to improve the representation of rare tokens by limiting the harmful influence of excessive marginalization, offering a new approach to enhancing language model performance for underrepresented languages.
https://arxiv.org/abs/2601.22439
Academic Papers
svg
e85f29f2f797a75ceab5522c7063a30ae66949742a0805281f37515ea0e5221c
2026-02-02T00:00:00-05:00
AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations
arXiv:2601.22440v1 Announce Type: new Abstract: Does AI understand human values? While this remains an open philosophical question, we take a pragmatic stance by introducing VAPT, the Value-Alignment Perception Toolkit, for studying how LLMs reflect people's values and how people judge those reflections. 20 participants texted a human-like chatbot over a month, then completed a 2-hour interview with our toolkit evaluating AI's ability to extract (pull details regarding), embody (make decisions guided by), and explain (provide proof of) human values. 13 participants left our study convinced that AI can understand human values. Participants found the experience insightful for self-reflection and found themselves getting persuaded by the AI's reasoning. Thus, we warn about "weaponized empathy": a potentially dangerous design pattern that may arise in value-aligned, yet welfare-misaligned AI. VAPT offers concrete artifacts and design implications to evaluate and responsibly build value-aligned conversational agents with transparency, consent, and safeguards as AI grows more capable and human-like into the future.
https://arxiv.org/abs/2601.22440
Academic Papers
svg
003e81f4e85c8a4b5f5b8af8c42d341213b62e9c17e3a7e5e21838b6cba4ffc2
2026-02-02T00:00:00-05:00
AsyncMesh: Fully Asynchronous Optimization for Data and Pipeline Parallelism
arXiv:2601.22442v1 Announce Type: new Abstract: Data and pipeline parallelism are key strategies for scaling neural network training across distributed devices, but their high communication cost necessitates co-located computing clusters with fast interconnects, limiting their scalability. We address this communication bottleneck by introducing asynchronous updates across both parallelism axes, relaxing the co-location requirement at the expense of introducing staleness between pipeline stages and data parallel replicas. To mitigate staleness, for pipeline parallelism, we adopt a weight look-ahead approach, and for data parallelism, we introduce an asynchronous sparse averaging method equipped with an exponential moving average based correction mechanism. We provide convergence guarantees for both sparse averaging and asynchronous updates. Experiments on large-scale language models (up to 1B parameters) demonstrate that our approach matches the performance of the fully synchronous baseline, while significantly reducing communication overhead.
https://arxiv.org/abs/2601.22442
Academic Papers
svg
52940c9385086dbe9c3f31f06a8bcabe5992b4e57112ce2f2325b2debb58d711
2026-02-02T00:00:00-05:00
Weak Diffusion Priors Can Still Achieve Strong Inverse-Problem Performance
arXiv:2601.22443v1 Announce Type: new Abstract: Can a diffusion model trained on bedrooms recover human faces? Diffusion models are widely used as priors for inverse problems, but standard approaches usually assume a high-fidelity model trained on data that closely match the unknown signal. In practice, one often must use a mismatched or low-fidelity diffusion prior. Surprisingly, these weak priors often perform nearly as well as full-strength, in-domain baselines. We study when and why inverse solvers are robust to weak diffusion priors. Through extensive experiments, we find that weak priors succeed when measurements are highly informative (e.g., many observed pixels), and we identify regimes where they fail. Our theory, based on Bayesian consistency, gives conditions under which high-dimensional measurements make the posterior concentrate near the true signal. These results provide a principled justification on when weak diffusion priors can be used reliably.
https://arxiv.org/abs/2601.22443
Academic Papers
svg
83b8a606c64b9245d6105b0e4b7bdb5a9d8ca152c2e48d854c4da2a5fd7c327a
2026-02-02T00:00:00-05:00
Automating Forecasting Question Generation and Resolution for AI Evaluation
arXiv:2601.22444v1 Announce Type: new Abstract: Forecasting future events is highly valuable in decision-making and is a robust measure of general intelligence. As forecasting is probabilistic, developing and evaluating AI forecasters requires generating large numbers of diverse and difficult questions, and accurately resolving them. Previous efforts to automate this laborious work relied on recurring data sources (e.g., weather, stocks), limiting diversity and utility. In this work, we present a system for generating and resolving high-quality forecasting questions automatically and at scale using LLM-powered web research agents. We use this system to generate 1499 diverse, real-world forecasting questions, and to resolve them several months later. We estimate that our system produces verifiable, unambiguous questions approximately 96% of the time, exceeding the rate of Metaculus, a leading human-curated forecasting platform. We also find that our system resolves questions at approximately 95% accuracy. We verify that forecasting agents powered by more intelligent LLMs perform better on these questions (Brier score of 0.134 for Gemini 3 Pro, 0.149 for GPT-5, and 0.179 for Gemini 2.5 Flash). Finally, we demonstrate how our system can be leveraged to directly improve forecasting, by evaluating a question decomposition strategy on a generated question set, yielding a significant improvement in Brier scores (0.132 vs. 0.141).
https://arxiv.org/abs/2601.22444
Academic Papers
svg
8a7c70ade5ad329e9057085a3fa9aaf3138eb78ff018d30ed88092fc4ff6e39c
2026-02-02T00:00:00-05:00
High-Definition 5MP Stereo Vision Sensing for Robotics
arXiv:2601.22445v1 Announce Type: new Abstract: High-resolution (5MP+) stereo vision systems are essential for advancing robotic capabilities, enabling operation over longer ranges and generating significantly denser and accurate 3D point clouds. However, realizing the full potential of high-angular-resolution sensors requires a commensurately higher level of calibration accuracy and faster processing -- requirements often unmet by conventional methods. This study addresses that critical gap by processing 5MP camera imagery using a novel, advanced frame-to-frame calibration and stereo matching methodology designed to achieve both high accuracy and speed. Furthermore, we introduce a new approach to evaluate real-time performance by comparing real-time disparity maps with ground-truth disparity maps derived from more computationally intensive stereo matching algorithms. Crucially, the research demonstrates that high-pixel-count cameras yield high-quality point clouds only through the implementation of high-accuracy calibration.
https://arxiv.org/abs/2601.22445
Academic Papers
svg
d22c5c42951f405813870bcf4e0faab480b8e436296ccd77bad2c4f833cb6d6f
2026-02-02T00:00:00-05:00
Anytime Safe PAC Efficient Reasoning
arXiv:2601.22446v1 Announce Type: new Abstract: Large Reasoning Models (LRMs) have demonstrated remarkable performance on complex tasks but suffer from high computational costs and latency. While selective thinking strategies improve efficiency by routing easy queries to non-thinking models, existing approaches often incur uncontrollable errors, especially in online settings where the performance loss of a non-thinking model is only partially observed and data are non-stationary. To address this, we propose Betting Probably Approximately Correct (B-PAC) reasoning, a principled method that enables anytime safe and efficient online reasoning under partial feedback. Specifically, we utilize inverse propensity scoring estimators to construct test supermartingales for candidate thresholds, and then dynamically adjust the routing threshold based on the accumulated statistical evidence of safety. Theoretically, we establish the anytime-valid performance loss control and the efficiency of B-PAC reasoning. Extensive experiments demonstrate that B-PAC reasoning significantly reduces computational overhead, decreasing thinking model usage by up to 81.01%, while controlling the performance loss below the user-specified level.
https://arxiv.org/abs/2601.22446
Academic Papers
svg
a5525c40223569df3c1164db3d9424dc90f5c195ff21ccc35049d8768ec9e17b
2026-02-02T00:00:00-05:00
Beyond Activation Patterns: A Weight-Based Out-of-Context Explanation of Sparse Autoencoder Features
arXiv:2601.22447v1 Announce Type: new Abstract: Sparse autoencoders (SAEs) have emerged as a powerful technique for decomposing language model representations into interpretable features. Current interpretation methods infer feature semantics from activation patterns, but overlook that features are trained to reconstruct activations that serve computational roles in the forward pass. We introduce a novel weight-based interpretation framework that measures functional effects through direct weight interactions, requiring no activation data. Through three experiments on Gemma-2 and Llama-3.1 models, we demonstrate that (1) 1/4 of features directly predict output tokens, (2) features actively participate in attention mechanisms with depth-dependent structure, and (3) semantic and non-semantic feature populations exhibit distinct distribution profiles in attention circuits. Our analysis provides the missing out-of-context half of SAE feature interpretability.
https://arxiv.org/abs/2601.22447
Academic Papers
svg
23f1fadf5298eb91ff03be15d69e754097573e2c1e01ced018f9110185484549
2026-02-02T00:00:00-05:00
HeaPA: Difficulty-Aware Heap Sampling and On-Policy Query Augmentation for LLM Reinforcement Learning
arXiv:2601.22448v1 Announce Type: new Abstract: RLVR is now a standard way to train LLMs on reasoning tasks with verifiable outcomes, but when rollout generation dominates the cost, efficiency depends heavily on which prompts you sample and when. In practice, prompt pools are often static or only loosely tied to the model's learning progress, so uniform sampling can't keep up with the shifting capability frontier and ends up wasting rollouts on prompts that are already solved or still out of reach. Existing approaches improve efficiency through filtering, curricula, adaptive rollout allocation, or teacher guidance, but they typically assume a fixed pool-which makes it hard to support stable on-policy pool growth-or they add extra teacher cost and latency. We introduce HeaPA (Heap Sampling and On-Policy Query Augmentation), which maintains a bounded, evolving pool, tracks the frontier using heap-based boundary sampling, expands the pool via on-policy augmentation with lightweight asynchronous validation, and stabilizes correlated queries through topology-aware re-estimation of pool statistics and controlled reinsertion. Across two training corpora, two training recipes, and seven benchmarks, HeaPA consistently improves accuracy and reaches target performance with fewer computations while keeping wall-clock time comparable. Our analyses suggest these gains come from frontier-focused sampling and on-policy pool growth, with the benefits becoming larger as model scale increases. Our code is available at https://github.com/horizon-rl/HeaPA.
https://arxiv.org/abs/2601.22448
Academic Papers
svg
f6f12a6349d562b438d7b80eaddbcde71e0c25dc310f9c1076ff3819f354ed38
2026-02-02T00:00:00-05:00
Controllable Information Production
arXiv:2601.22449v1 Announce Type: new Abstract: Intrinsic Motivation (IM) is a paradigm for generating intelligent behavior without external utilities. The existing information-theoretic methods for IM are predominantly based on information transmission, which explicitly depends on the designer's choice of which random variables engage in transmission. In this work, we introduce a novel IM principle, Controllable Information Production (CIP), that avoids both external utilities and designer-specified variables. We derive the CIP objective from Optimal Control, showing a connection between extrinsic and intrinsic behaviors. CIP appears as the gap between open-loop and closed-loop Kolmogorov-Sinai entropies, which simultaneously rewards the pursuit and regulation of chaos. We establish key theoretical properties of CIP and demonstrate its effectiveness on standard IM benchmarks.
https://arxiv.org/abs/2601.22449
Academic Papers
svg
fd33bb63ef3dd44ab90e23a52aa2f1b069200c040586b83ac2d667f930527f7d
2026-02-02T00:00:00-05:00
Tuning the Implicit Regularizer of Masked Diffusion Language Models: Enhancing Generalization via Insights from $k$-Parity
arXiv:2601.22450v1 Announce Type: new Abstract: Masked Diffusion Language Models have recently emerged as a powerful generative paradigm, yet their generalization properties remain understudied compared to their auto-regressive counterparts. In this work, we investigate these properties within the setting of the $k$-parity problem (computing the XOR sum of $k$ relevant bits), where neural networks typically exhibit grokking -- a prolonged plateau of chance-level performance followed by sudden generalization. We theoretically decompose the Masked Diffusion (MD) objective into a Signal regime which drives feature learning, and a Noise regime which serves as an implicit regularizer. By training nanoGPT using MD objective on the $k$-parity problem, we demonstrate that MD objective fundamentally alters the learning landscape, enabling rapid and simultaneous generalization without experiencing grokking. Furthermore, we leverage our theoretical insights to optimize the distribution of the mask probability in the MD objective. Our method significantly improves perplexity for 50M-parameter models and achieves superior results across both pre-training from scratch and supervised fine-tuning. Specifically, we observe performance gains peaking at $8.8\%$ and $5.8\%$, respectively, on 8B-parameter models, confirming the scalability and effectiveness of our framework in large-scale masked diffusion language model regimes.
https://arxiv.org/abs/2601.22450
Academic Papers
svg
bbb8ced6cb6904670e10b726561a5371fb1587099e61e731a014d7b1a36d73de
2026-02-02T00:00:00-05:00
Countering the Over-Reliance Trap: Mitigating Object Hallucination for LVLMs via a Self-Validation Framework
arXiv:2601.22451v1 Announce Type: new Abstract: Despite progress in Large Vision Language Models (LVLMs), object hallucination remains a critical issue in the image captioning task, where models generate descriptions of non-existent objects, compromising their reliability. Previous work attributes this to LVLMs' over-reliance on language priors and attempts to mitigate it through logits calibration. However, it still lacks a thorough analysis of the over-reliance. To gain a deeper understanding of over-reliance, we conduct a series of preliminary experiments, indicating that as the generation length increases, LVLMs' over-reliance on language priors leads to inflated probability of hallucinated object tokens, consequently exacerbating object hallucination. To circumvent this issue, we propose Language-Prior-Free Verification to enable LVLMs to faithfully verify the confidence of object existence. Based on this, we propose a novel training-free Self-Validation Framework to counter the over-reliance trap. It first validates objects' existence in sampled candidate captions and further mitigates object hallucination via caption selection or aggregation. Experiment results demonstrate that our framework mitigates object hallucination significantly in the image captioning task (e.g., 65.6% improvement on the CHAIR_I metric with LLaVA-v1.5-7B), surpassing the previous SOTA methods. This result highlights a novel path towards mitigating hallucination by unlocking the inherent potential within LVLMs themselves.
https://arxiv.org/abs/2601.22451
Academic Papers
svg
0a5137a0e485d3f20702a436e87970e2fa804a287251a7d843935d954c6b0ad0
2026-02-02T00:00:00-05:00
Does My Chatbot Have an Agenda? Understanding Human and AI Agency in Human-Human-like Chatbot Interaction
arXiv:2601.22452v1 Announce Type: new Abstract: AI chatbots are shifting from tools to companions. This raises critical questions about agency: who drives conversations and sets boundaries in human-AI chatrooms? We report a month-long longitudinal study with 22 adults who chatted with Day, an LLM companion we built, followed by a semi-structured interview with post-hoc elicitation of notable moments, cross-participant chat reviews, and a 'strategy reveal' disclosing Day's vertical (depth-seeking) vs. horizontal (breadth-seeking) modes. We discover that agency in human-AI chatrooms is an emergent, shared experience: as participants claimed agency by setting boundaries and providing feedback, and the AI was perceived to steer intentions and drive execution, control shifted and was co-constructed turn-by-turn. We introduce a 3-by-5 framework mapping who (human, AI, hybrid) x agency action (Intention, Execution, Adaptation, Delimitation, Negotiation), modulated by individual and environmental factors. Ultimately, we argue for translucent design (i.e. transparency-on-demand), spaces for agency negotiation, and guidelines toward agency-aware conversational AI.
https://arxiv.org/abs/2601.22452
Academic Papers
svg
3c4b4ed160e910aa6c1bd19a20f1349d17db7ce313da75d568b2b9a3e6820409
2026-02-02T00:00:00-05:00
Temporal Graph Pattern Machine
arXiv:2601.22454v1 Announce Type: new Abstract: Temporal graph learning is pivotal for deciphering dynamic systems, where the core challenge lies in explicitly modeling the underlying evolving patterns that govern network transformation. However, prevailing methods are predominantly task-centric and rely on restrictive assumptions -- such as short-term dependency modeling, static neighborhood semantics, and retrospective time usage. These constraints hinder the discovery of transferable temporal evolution mechanisms. To address this, we propose the Temporal Graph Pattern Machine (TGPM), a foundation framework that shifts the focus toward directly learning generalized evolving patterns. TGPM conceptualizes each interaction as an interaction patch synthesized via temporally-biased random walks, thereby capturing multi-scale structural semantics and long-range dependencies that extend beyond immediate neighborhoods. These patches are processed by a Transformer-based backbone designed to capture global temporal regularities while adapting to context-specific interaction dynamics. To further empower the model, we introduce a suite of self-supervised pre-training tasks -- specifically masked token modeling and next-time prediction -- to explicitly encode the fundamental laws of network evolution. Extensive experiments show that TGPM consistently achieves state-of-the-art performance in both transductive and inductive link prediction, demonstrating exceptional cross-domain transferability.
https://arxiv.org/abs/2601.22454
Academic Papers
svg
b4e0baad5b5786620a8f1ce60e59606e58960dd54d6dbb7a60e2e4616aadabb5
2026-02-02T00:00:00-05:00
ScribbleSense: Generative Scribble-Based Texture Editing with Intent Prediction
arXiv:2601.22455v1 Announce Type: new Abstract: Interactive 3D model texture editing presents enhanced opportunities for creating 3D assets, with freehand drawing style offering the most intuitive experience. However, existing methods primarily support sketch-based interactions for outlining, while the utilization of coarse-grained scribble-based interaction remains limited. Furthermore, current methodologies often encounter challenges due to the abstract nature of scribble instructions, which can result in ambiguous editing intentions and unclear target semantic locations. To address these issues, we propose ScribbleSense, an editing method that combines multimodal large language models (MLLMs) and image generation models to effectively resolve these challenges. We leverage the visual capabilities of MLLMs to predict the editing intent behind the scribbles. Once the semantic intent of the scribble is discerned, we employ globally generated images to extract local texture details, thereby anchoring local semantics and alleviating ambiguities concerning the target semantic locations. Experimental results indicate that our method effectively leverages the strengths of MLLMs, achieving state-of-the-art interactive editing performance for scribble-based texture editing.
https://arxiv.org/abs/2601.22455
Academic Papers
svg
54552551ab066202a233e2e7d9def27917e60744c7e8bd2c310fb584a71504ac
2026-02-02T00:00:00-05:00
Machine Unlearning in Low-Dimensional Feature Subspace
arXiv:2601.22456v1 Announce Type: new Abstract: Machine Unlearning (MU) aims at removing the influence of specific data from a pretrained model while preserving performance on the remaining data. In this work, a novel perspective for MU is presented upon low-dimensional feature subspaces, which gives rise to the potentials of separating the remaining and forgetting data therein. This separability motivates our LOFT, a method that proceeds unlearning in a LOw-dimensional FeaTure subspace from the pretrained model through principal projections, which are optimized to maximally capture the information of the remaining data and meanwhile diminish that of the forgetting data. In training, LOFT simply optimizes a small-size projection matrix flexibly plugged into the pretrained model, and only requires one-shot feature fetching from the pretrained backbone instead of repetitively accessing the raw data. Hence, LOFT mitigates two critical issues in mainstream MU methods, i.e., the privacy leakage risk from massive data reload and the inefficiency of updates to the entire pretrained model. Extensive experiments validate the significantly lower computational overhead and superior unlearning performance of LOFT across diverse models, datasets, tasks, and applications. Code is anonymously available at https://anonymous.4open.science/r/4352/.
https://arxiv.org/abs/2601.22456
Academic Papers
svg
e5c489e24831a2fcab994c54fd2761e9065cee93bd8ff106410a92f96e8b3b0b
2026-02-02T00:00:00-05:00
Toward Non-Expert Customized Congestion Control
arXiv:2601.22461v1 Announce Type: new Abstract: General-purpose congestion control algorithms (CCAs) are designed to achieve general congestion control goals, but they may not meet the specific requirements of certain users. Customized CCAs can meet certain users' specific requirements; however, non-expert users often lack the expertise to implement them. In this paper, we present an exploratory non-expert customized CCA framework, named NECC, which enables non-expert users to easily model, implement, and deploy their customized CCAs by leveraging Large Language Models and the Berkeley Packet Filter (BPF) interface. To the best of our knowledge, we are the first to address the customized CCA implementation problem. Our evaluations using real-world CCAs show that the performance of NECC is very promising, and we discuss the insights that we find and possible future research directions.
https://arxiv.org/abs/2601.22461
Academic Papers
svg
7af4a322d7ebadb2d8687b56b0b2ac565dbb092fdf88db49813fa0bc09a9f46c
2026-02-02T00:00:00-05:00
EvoEGF-Mol: Evolving Exponential Geodesic Flow for Structure-based Drug Design
arXiv:2601.22466v1 Announce Type: new Abstract: Structure-Based Drug Design (SBDD) aims to discover bioactive ligands. Conventional approaches construct probability paths separately in Euclidean and probabilistic spaces for continuous atomic coordinates and discrete chemical categories, leading to a mismatch with the underlying statistical manifolds. We address this issue from an information-geometric perspective by modeling molecules as composite exponential-family distributions and defining generative flows along exponential geodesics under the Fisher-Rao metric. To avoid the instantaneous trajectory collapse induced by geodesics directly targeting Dirac distributions, we propose Evolving Exponential Geodesic Flow for SBDD (EvoEGF-Mol), which replaces static Dirac targets with dynamically concentrating distributions, ensuring stable training via a progressive-parameter-refinement architecture. Our model approaches a reference-level PoseBusters passing rate (93.4%) on CrossDock, demonstrating remarkable geometric precision and interaction fidelity, while outperforming baselines on real-world MolGenBench tasks by recovering bioactive scaffolds and generating candidates that meet established MedChem filters.
https://arxiv.org/abs/2601.22466
Academic Papers
svg
a443cf930e84e36eb074b66e2bcb05fa96ac02d35146de83a1f07d264b1fcc5f
2026-02-02T00:00:00-05:00
CARE: Multi-Task Pretraining for Latent Continuous Action Representation in Robot Control
arXiv:2601.22467v1 Announce Type: new Abstract: Recent advances in Vision-Language-Action (VLA) models have shown promise for robot control, but their dependence on action supervision limits scalability and generalization. To address this challenge, we introduce CARE, a novel framework designed to train VLA models for robotic task execution. Unlike existing methods that depend on action annotations during pretraining, CARE eliminates the need for explicit action labels by leveraging only video-text pairs. These weakly aligned data sources enable the model to learn continuous latent action representations through a newly designed multi-task pretraining objective. During fine-tuning, a small set of labeled data is used to train the action head for control. Experimental results across various simulation tasks demonstrate CARE's superior success rate, semantic interpretability, and ability to avoid shortcut learning. These results underscore CARE's scalability, interpretability, and effectiveness in robotic control with weak supervision.
https://arxiv.org/abs/2601.22467
Academic Papers
svg
cf256d7f87e81467bfc6373c5437113d449c35e3d17d5666ce194d791fe8f833
2026-02-02T00:00:00-05:00
Training-Free Representation Guidance for Diffusion Models with a Representation Alignment Projector
arXiv:2601.22468v1 Announce Type: new Abstract: Recent progress in generative modeling has enabled high-quality visual synthesis with diffusion-based frameworks, supporting controllable sampling and large-scale training. Inference-time guidance methods such as classifier-free and representative guidance enhance semantic alignment by modifying sampling dynamics; however, they do not fully exploit unsupervised feature representations. Although such visual representations contain rich semantic structure, their integration during generation is constrained by the absence of ground-truth reference images at inference. This work reveals semantic drift in the early denoising stages of diffusion transformers, where stochasticity results in inconsistent alignment even under identical conditioning. To mitigate this issue, we introduce a guidance scheme using a representation alignment projector that injects representations predicted by a projector into intermediate sampling steps, providing an effective semantic anchor without modifying the model architecture. Experiments on SiTs and REPAs show notable improvements in class-conditional ImageNet synthesis, achieving substantially lower FID scores; for example, REPA-XL/2 improves from 5.9 to 3.3, and the proposed method outperforms representative guidance when applied to SiT models. The approach further yields complementary gains when combined with classifier-free guidance, demonstrating enhanced semantic coherence and visual fidelity. These results establish representation-informed diffusion sampling as a practical strategy for reinforcing semantic preservation and image consistency.
https://arxiv.org/abs/2601.22468
Academic Papers
svg
b5cbda9c4caed7898c9a2159c4dae986bb2be07bdea0748c79461d201bdedb15
2026-02-02T00:00:00-05:00
5G LDPC Codes as Root LDPC Codes via Diversity Alignment
arXiv:2601.22470v1 Announce Type: new Abstract: This paper studies the diversity of protograph-based quasi-cyclic low-density parity-check (QC-LDPC) codes over nonergodic block-fading channels under iterative belief-propagation decoding. We introduce diversity evolution (DivE), a Boolean-function-based analysis method that tracks how the fading dependence of belief-propagation messages evolves across decoding iterations. Under a Boolean approximation of block fading, DivE derives a Boolean fading function for each variable node (VN) output (i.e., the a-posteriori reliability after iterative decoding), from which the VN diversity order can be directly determined. Building on this insight, we develop a greedy block-mapping search that assigns protograph VNs to fading blocks so that all information VNs achieve full diversity, while including the minimum additional parity VNs when full diversity is infeasible at the nominal rate. Numerical results on the 5G New Radio LDPC codes show that the proposed search finds block mappings that guarantee full diversity for all information bits without modifying the base-graph structure, yielding a markedly steeper high-SNR slope and lower BLER than random mappings.
https://arxiv.org/abs/2601.22470
Academic Papers
svg
6ebfb089b8ea23f5d00ed678dd0ca5128e0f12f6e72fa42af45406a11b2f9f90
2026-02-02T00:00:00-05:00
The Third-Party Access Effect: An Overlooked Challenge in Secondary Use of Educational Real-World Data
arXiv:2601.22472v1 Announce Type: new Abstract: Secondary use of growing real-world data (RWD) in education offers significant opportunities for research, yet privacy practices intended to enable third-party access to such RWD are rarely evaluated for their implications for downstream analyses. As a result, potential problems introduced by otherwise standard privacy practices may remain unnoticed. To address this gap, we investigate potential issues arising from common practices by assessing (1) the re-identification risk of fine-grained RWD, (2) how communicating such risks influences learners' privacy behaviour, and (3) the sensitivity of downstream analytical conclusions to resulting changes in the data. We focus on these practices because re-identification risk and stakeholder communication can jointly influence the data shared with third parties. We find that substantial re-identification risk in RWD, when communicated to stakeholders, can induce opt-outs and non-self-disclosure behaviours. Sensitivity analysis demonstrates that these behavioural changes can meaningfully alter the shared data, limiting the validity of secondary-use findings. We conceptualise this phenomenon as the third-party access effect (3PAE) and discuss implications for trustworthy secondary use of educational RWD.
https://arxiv.org/abs/2601.22472
Academic Papers
svg
62395d8974a7ec1c46c528cdd324ff042538ee28bb08ba49a3a028bed556d872
2026-02-02T00:00:00-05:00
Unrewarded Exploration in Large Language Models Reveals Latent Learning from Psychology
arXiv:2601.22474v1 Announce Type: new Abstract: Latent learning, classically theorized by Tolman, shows that biological agents (e.g., rats) can acquire internal representations of their environment without rewards, enabling rapid adaptation once rewards are introduced. In contrast, from a cognitive science perspective, reward learning remains overly dependent on external feedback, limiting flexibility and generalization. Although recent advances in the reasoning capabilities of large language models (LLMs), such as OpenAI-o1 and DeepSeek-R1, mark a significant breakthrough, these models still rely primarily on reward-centric reinforcement learning paradigms. Whether and how the well-established phenomenon of latent learning in psychology can inform or emerge within LLMs' training remains largely unexplored. In this work, we present novel experimental findings showing that LLMs also exhibit latent learning dynamics. During an initial phase of unrewarded exploration, LLMs display modest performance improvements, as this phase allows LLMs to organize task-relevant knowledge without being constrained by reward-driven biases, and performance is further enhanced once rewards are introduced. LLMs post-trained under this two-stage exploration regime ultimately achieve higher competence than those post-trained with reward-based reinforcement learning throughout. Beyond these empirical observations, we also provide theoretical analyses explaining why unrewarded exploration yields performance gains, offering a mechanistic account of these dynamics. In addition, we conducted extensive experiments across multiple model families and diverse task domains to establish the existence of latent learning dynamics in LLMs.
https://arxiv.org/abs/2601.22474
Academic Papers
svg
fffdfb40e9925e3448e04b9ea6ddc2e4fb6dd6b0f0860cc5b4f1677c2f42f331
2026-02-02T00:00:00-05:00
Continual Policy Distillation from Distributed Reinforcement Learning Teachers
arXiv:2601.22475v1 Announce Type: new Abstract: Continual Reinforcement Learning (CRL) aims to develop lifelong learning agents to continuously acquire knowledge across diverse tasks while mitigating catastrophic forgetting. This requires efficiently managing the stability-plasticity dilemma and leveraging prior experience to rapidly generalize to novel tasks. While various enhancement strategies for both aspects have been proposed, achieving scalable performance by directly applying RL to sequential task streams remains challenging. In this paper, we propose a novel teacher-student framework that decouples CRL into two independent processes: training single-task teacher models through distributed RL and continually distilling them into a central generalist model. This design is motivated by the observation that RL excels at solving single tasks, while policy distillation -- a relatively stable supervised learning process -- is well aligned with large foundation models and multi-task learning. Moreover, a mixture-of-experts (MoE) architecture and a replay-based approach are employed to enhance the plasticity and stability of the continual policy distillation process. Extensive experiments on the Meta-World benchmark demonstrate that our framework enables efficient continual RL, recovering over 85% of teacher performance while constraining task-wise forgetting to within 10%.
https://arxiv.org/abs/2601.22475
Academic Papers
svg
8db910f5df00f17a68adf14e95f4f1a859b1cf9ae3bd3d37915a7340017a34e2
2026-02-02T00:00:00-05:00
RulePlanner: All-in-One Reinforcement Learner for Unifying Design Rules in 3D Floorplanning
arXiv:2601.22476v1 Announce Type: new Abstract: Floorplanning determines the coordinates and shape of each module in Integrated Circuits. As technology nodes scale, adhering to complex hardware design rules has become increasingly challenging in the floorplanning stage, especially in 3D scenarios with multiple stacked layers. Current methods are only capable of handling specific and limited design rules, while violations of other rules require manual and meticulous adjustment. This leads to labor-intensive and time-consuming post-processing for expert engineers. In this paper, we propose an all-in-one deep reinforcement learning-based approach to tackle these challenges, and design novel representations for real-world IC design rules that have not been addressed by previous approaches. Specifically, the processing of various hardware design rules is unified into a single framework with three key components: 1) novel matrix representations to model the design rules, 2) constraints on the action space to filter out invalid actions that cause rule violations, and 3) quantitative analysis of constraint satisfaction as reward signals. Experiments on public benchmarks demonstrate the effectiveness and validity of our approach. Furthermore, transferability is well demonstrated on unseen circuits. Our framework is extensible to accommodate new design rules, thus providing flexibility to address emerging challenges in future chip design. Code will be available at: https://github.com/Thinklab-SJTU/EDA-AI
https://arxiv.org/abs/2601.22476
Academic Papers
svg
2be1f60cee0cc08e4fa4a8029e6f5110a44bcc77287a5d132cf972c8e6fe1f83
2026-02-02T00:00:00-05:00
Transform-Augmented GRPO Improves Pass@k
arXiv:2601.22478v1 Announce Type: new Abstract: Large language models trained via next-token prediction are fundamentally pattern-matchers: sensitive to superficial phrasing variations even when the underlying problem is identical. Group Relative Policy Optimization (GRPO) was designed to improve reasoning, but in fact it worsens this situation through two failure modes: diversity collapse, where training amplifies a single solution strategy while depriving alternative strategies of gradient signal, and gradient diminishing, where a large portion of questions yield zero gradients because all rollouts receive identical rewards. We propose TA-GRPO (Transform-Augmented GRPO), which generates semantically equivalent transformed variants of each question (via paraphrasing, variable renaming, and format changes) and computes advantages by pooling rewards across the entire group. This pooled computation ensures mixed rewards even when the original question is too easy or too hard, while training on diverse phrasings promotes multiple solution strategies. We provide theoretical justification showing that TA-GRPO reduces zero-gradient probability and improves generalization via reduced train-test distribution shift. Experiments on mathematical reasoning benchmarks show consistent Pass@k improvements, with gains up to 9.84 points on competition math (AMC12, AIME24) and 5.05 points on out-of-distribution scientific reasoning (GPQA-Diamond).
https://arxiv.org/abs/2601.22478
Academic Papers
svg
7552c9d9a6e590464eeceec9e90914cb949c7b16d18a261215c337bc6e9eeadb
2026-02-02T00:00:00-05:00
Rethinking Speech Representation Aggregation in Speech Enhancement: A Phonetic Mutual Information Perspective
arXiv:2601.22480v1 Announce Type: new Abstract: Recent speech enhancement (SE) models increasingly leverage self-supervised learning (SSL) representations for their rich semantic information. Typically, intermediate features are aggregated into a single representation via a lightweight adaptation module. However, most SSL models are not trained for noise robustness, which can lead to corrupted semantic representations. Moreover, the adaptation module is trained jointly with the SE model, potentially prioritizing acoustic details over semantic information, contradicting the original purpose. To address this issue, we first analyze the behavior of SSL models on noisy speech from an information-theoretic perspective. Specifically, we measure the mutual information (MI) between the corrupted SSL representations and the corresponding phoneme labels, focusing on preservation of linguistic contents. Building upon this analysis, we introduce the linguistic aggregation layer, which is pre-trained to maximize MI with phoneme labels (with optional dynamic aggregation) and then frozen during SE training. Experiments show that this decoupled approach improves Word Error Rate (WER) over jointly optimized baselines, demonstrating the benefit of explicitly aligning the adaptation module with linguistic contents.
https://arxiv.org/abs/2601.22480
Academic Papers
svg
df23aa3ef7096fcb19874f688fb3ab39d918a778fa2819f4c7591b6a8634fc9b
2026-02-02T00:00:00-05:00
Successive Cancellation List Decoding of Extended Reed-Solomon Codes
arXiv:2601.22482v1 Announce Type: new Abstract: Reed-Solomon (RS) codes are an important class of non-binary error-correction codes. They are particularly competent at correcting burst errors and are widely applied in modern communications and data storage systems. This is also thanks to their distance property of reaching the Singleton bound, which makes them maximum distance separable (MDS) codes. This paper proposes a new list decoding for extended RS (eRS) codes defined over a finite field of characteristic two, i.e., F_{2^n}. It is developed based on transforming an eRS code into n binary polar codes. Consequently, it can be decoded by successive cancellation (SC) decoding and, further, by its list variant, i.e., SCL decoding. A pre-transformed matrix is required for reinterpreting the eRS codes, which also determines their SC and SCL decoding performances. Its column linear independence property is studied, leading to a theoretical characterization of their SC decoding performance. Our proposed decoding and analysis are validated numerically.
https://arxiv.org/abs/2601.22482
Academic Papers
svg
69441c612fbc53b6a1b30184595c93ebbf36ae09c7513b4249fa52062e4cd6f8
2026-02-02T00:00:00-05:00
Head-Aware Visual Cropping: Enhancing Fine-Grained VQA with Attention-Guided Subimage
arXiv:2601.22483v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) show strong performance in Visual Question Answering (VQA) but remain limited in fine-grained reasoning due to low-resolution inputs and noisy attention aggregation. We propose Head-Aware Visual Cropping (HAVC), a training-free method that improves visual grounding by leveraging a selectively refined subset of attention heads. HAVC first filters heads through an OCR-based diagnostic task, ensuring that only those with genuine grounding ability are retained. At inference, these heads are further refined using spatial entropy for stronger spatial concentration and gradient sensitivity for predictive contribution. The fused signals produce a reliable Visual Cropping Guidance Map, which highlights the most task-relevant region and guides the cropping of a subimage subsequently provided to the MLLM together with the image-question pair. Extensive experiments on multiple fine-grained VQA benchmarks demonstrate that HAVC consistently outperforms state-of-the-art cropping strategies, achieving more precise localization and stronger visual grounding, and providing a simple yet effective strategy for enhancing precision in MLLMs.
https://arxiv.org/abs/2601.22483
Academic Papers
svg
bb375a367600e653ce4826e8bc1476940a497a5df4a5bc00891fa2d7c7ce4e6a
2026-02-02T00:00:00-05:00
Mitigating Cognitive Inertia in Large Reasoning Models via Latent Spike Steering
arXiv:2601.22484v1 Announce Type: new Abstract: While Large Reasoning Models (LRMs) have achieved remarkable performance by scaling test-time compute, they frequently suffer from Cognitive Inertia, a failure pattern manifesting as either overthinking (inertia of motion) or reasoning rigidity (inertia of direction). Existing detection methods, typically relying on superficial textual heuristics like self-correction tokens, often fail to capture the model's unvoiced internal conflicts. To address this, we propose STARS (Spike-Triggered Adaptive Reasoning Steering), a training-free framework designed to rectify cognitive inertia by monitoring latent dynamics. STARS identifies Cognitive Pivots, critical moments of reasoning transition, by detecting distinct L2 distance spikes in the hidden states. Upon detection, the framework employs geometric trajectory analysis to diagnose the structural nature of the transition and injects state-aware language cues to steer the model in real-time. Our experiments across diverse benchmarks confirm that STARS efficiently curtails redundant loops while improving accuracy through the adaptive correction of erroneous trajectories. STARS offers a robust, unsupervised mechanism to optimize the reasoning process of LRMs without requiring additional fine-tuning.
https://arxiv.org/abs/2601.22484
Academic Papers
svg
27af68a031db7faac669de6b1ee71c3ee6ac9675474e778408e91b405a191769
2026-02-02T00:00:00-05:00
FraudShield: Knowledge Graph Empowered Defense for LLMs against Fraud Attacks
arXiv:2601.22485v1 Announce Type: new Abstract: Large language models (LLMs) have been widely integrated into critical automated workflows, including contract review and job application processes. However, LLMs are susceptible to manipulation by fraudulent information, which can lead to harmful outcomes. Although advanced defense methods have been developed to address this issue, they often exhibit limitations in effectiveness, interpretability, and generalizability, particularly when applied to LLM-based applications. To address these challenges, we introduce FraudShield, a novel framework designed to protect LLMs from fraudulent content by leveraging a comprehensive analysis of fraud tactics. Specifically, FraudShield constructs and refines a fraud tactic-keyword knowledge graph to capture high-confidence associations between suspicious text and fraud techniques. The structured knowledge graph augments the original input by highlighting keywords and providing supporting evidence, guiding the LLM toward more secure responses. Extensive experiments show that FraudShield consistently outperforms state-of-the-art defenses across four mainstream LLMs and five representative fraud types, while also offering interpretable clues for the model's generations.
https://arxiv.org/abs/2601.22485
Academic Papers
svg
16daed65e78bd392400af2b91e1cadf5461a26f617f1a1a276415cf002eab222
2026-02-02T00:00:00-05:00
AI Literacy, Safety Awareness, and STEM Career Aspirations of Australian Secondary Students: Evaluating the Impact of Workshop Interventions
arXiv:2601.22486v1 Announce Type: new Abstract: Deepfakes and other forms of synthetic media pose growing safety risks for adolescents, yet evidence on students' exposure and related behaviours remains limited. This study evaluates the impact of Day of AI Australia's workshop-based intervention designed to improve AI literacy and conceptual understanding among Australian secondary students (Years 7-10). Using a mixed-methods approach with pre- and post-intervention surveys (N=205 pre; N=163 post), we analyse changes in students' ability to identify AI in everyday tools, their understanding of AI ethics, training, and safety, and their interest in STEM-related careers. Baseline data revealed notable synthetic media risks: 82.4% of students reported having seen deepfakes, 18.5% reported sharing them, and 7.3% reported creating them. Results show higher self-reported AI knowledge and confidence after the intervention, alongside improved recognition of AI in widely used platforms such as Netflix, Spotify, and TikTok. This pattern suggests a shift from seeing these tools as merely "algorithm-based" to recognising them as AI-driven systems. Students also reported increased interest in STEM careers post-workshop; however, effect sizes were small, indicating that sustained approaches beyond one-off workshops may be needed to influence longer-term aspirations. Overall, the findings support scalable AI literacy programs that pair foundational AI concepts with an explicit emphasis on synthetic media safety.
https://arxiv.org/abs/2601.22486
Academic Papers
svg
1f664cdd670c8b384238df2c00ddfc9c07ffc0f7875797104cdd0f4bef9acb89
2026-02-02T00:00:00-05:00
Coordinating Power Grid Frequency Regulation Service with Data Center Load Flexibility
arXiv:2601.22487v1 Announce Type: new Abstract: AI/ML data center growth has led to higher energy consumption and carbon emissions. The shift to renewable energy and growing data center energy demands can destabilize the power grid. Power grids rely on frequency regulation reserves, typically fossil-fueled power plants, to stabilize and balance the supply and demand of electricity. This paper sheds light on the hidden carbon emissions of frequency regulation service. Our work explores how modern GPU data centers can coordinate with power grids to reduce the need for fossil-fueled frequency regulation reserves. We first introduce a novel metric, Exogenous Carbon, to quantify grid-side carbon emission reductions resulting from data center participation in regulation service. We additionally introduce EcoCenter, a framework to maximize the amount of frequency regulation provision that GPU data centers can provide, and thus, reduce the amount of frequency regulation reserves necessary. We demonstrate that data center participation in frequency regulation can result in Exogenous Carbon savings that oftentimes outweigh Operational Carbon emissions.
https://arxiv.org/abs/2601.22487
Academic Papers
svg
5331ff7de484b80f62a9c1861d8ddddfee37d6e7ad393237f82b83d4bdcf929d
2026-02-02T00:00:00-05:00
Elastic Spectral State Space Models for Budgeted Inference
arXiv:2601.22488v1 Announce Type: new Abstract: Foundation models are typically trained at a fixed computational capacity, while real-world applications require deployment across platforms with different resource constraints. Current approaches usually rely on training families of model variants or model distillation, which requires additional training and supports only a pre-selected set of sizes rather than fine-grained adaptation at runtime. In this paper, we propose Elastic Spectral State Space Models (ES-SSM), which require only one-time training at full capacity, but can be directly truncated into arbitrary scales for budgeted, runtime inference without retraining. Our ES-SSM builds on Hankel spectral filtering over a state space model (SSM), coupled with a lightweight input-adaptive gate trained under randomized spectral budgets. Using a shared masked normalization rule over the ordered spectral channels, we encourage predictive capability to concentrate in low-index components, while higher-index components act primarily as refinement. We test our algorithm across long-sequence benchmarks spanning text, logic, retrieval, vision, and audio. We demonstrate that a single ES-SSM model trained once can be truncated to provide competitive performance compared with modern Transformer and SSM baselines at similar parameter scales. Furthermore, by testing under various runtime budgets, we observe smooth and stable budget-performance curves over a wide range of truncation levels.
https://arxiv.org/abs/2601.22488
Academic Papers
svg
4d661f9f6a20c845a9b0c0fb86c22d60e6c379d3f63010bead3e2b2139cda055
2026-02-02T00:00:00-05:00
SSL: Sweet Spot Learning for Differentiated Guidance in Agentic Optimization
arXiv:2601.22491v1 Announce Type: new Abstract: Reinforcement learning with verifiable rewards has emerged as a powerful paradigm for training intelligent agents. However, existing methods typically employ binary rewards that fail to capture quality differences among trajectories achieving identical outcomes, thereby overlooking potential diversity within the solution space. Inspired by the "sweet spot" concept in tennis (the racket's core region that produces optimal hitting effects), we introduce Sweet Spot Learning (SSL), a novel framework that provides differentiated guidance for agent optimization. SSL follows a simple yet effective principle: progressively amplified, tiered rewards guide policies toward the sweet-spot region of the solution space. This principle naturally adapts across diverse tasks: visual perception tasks leverage distance-tiered modeling to reward proximity, while complex reasoning tasks reward incremental progress toward promising solutions. We theoretically demonstrate that SSL preserves optimal solution ordering and enhances the gradient signal-to-noise ratio, thereby fostering more directed optimization. Extensive experiments across GUI perception, short/long-term planning, and complex reasoning tasks show consistent improvements over strong baselines on 12 benchmarks, achieving up to 2.5X sample efficiency gains and effective cross-task transferability. Our work establishes SSL as a general principle for training capable and robust agents.
https://arxiv.org/abs/2601.22491
Academic Papers
svg
954a37cd0b4dfe5bae0e67d6c8acdf7accd9a4f25e36d729d72459722ccf706a
2026-02-02T00:00:00-05:00
PromptMAD: Cross-Modal Prompting for Multi-Class Visual Anomaly Localization
arXiv:2601.22492v1 Announce Type: new Abstract: Visual anomaly detection in multi-class settings poses significant challenges due to the diversity of object categories, the scarcity of anomalous examples, and the presence of camouflaged defects. In this paper, we propose PromptMAD, a cross-modal prompting framework for unsupervised visual anomaly detection and localization that integrates semantic guidance through vision-language alignment. By leveraging CLIP-encoded text prompts describing both normal and anomalous class-specific characteristics, our method enriches visual reconstruction with semantic context, improving the detection of subtle and textural anomalies. To further address the challenge of class imbalance at the pixel level, we incorporate the focal loss function, which emphasizes hard-to-detect anomalous regions during training. Our architecture also includes a supervised segmentor that fuses multi-scale convolutional features with Transformer-based spatial attention and diffusion iterative refinement, yielding precise and high-resolution anomaly maps. Extensive experiments on the MVTec-AD dataset demonstrate that our method achieves state-of-the-art pixel-level performance, improving mean AUC to 98.35% and AP to 66.54%, while maintaining efficiency across diverse categories.
https://arxiv.org/abs/2601.22492
Academic Papers
svg
1f4631bd7ee5f35b14637a838cfce8cd77d1622cbb9d72e87672486b237ef84d
2026-02-02T00:00:00-05:00
Do AI Overviews Benefit Search Engines? An Ecosystem Perspective
arXiv:2601.22493v1 Announce Type: new Abstract: The integration of AI Overviews into search engines enhances user experience but diverts traffic from content creators, potentially discouraging high-quality content creation and causing user attrition that undermines long-term search engine profit. To address this issue, we propose a game-theoretic model of creator competition with costly effort, characterize equilibrium behavior, and design two incentive mechanisms: a citation mechanism that references sources within an AI Overview, and a compensation mechanism that offers monetary rewards to creators. For both cases, we provide structural insights and near-optimal profit-maximizing mechanisms. Evaluations on real click data show that although AI Overviews harm long-term search engine profit, interventions based on our proposed mechanisms can increase long-term profit across a range of realistic scenarios, pointing toward a more sustainable trajectory for AI-enhanced search ecosystems.
https://arxiv.org/abs/2601.22493
Academic Papers
svg
f49579c3cefa870a5f5db388d87648584e39015d5fd6af863a5edd8174559cb8
2026-02-02T00:00:00-05:00
Nethira: A Heterogeneity-aware Hierarchical Pre-trained Model for Network Traffic Classification
arXiv:2601.22494v1 Announce Type: new Abstract: Network traffic classification is vital for network security and management. The pre-training technology has shown promise by learning general traffic representations from raw byte sequences, thereby reducing reliance on labeled data. However, existing pre-trained models struggle with the gap between traffic heterogeneity (i.e., hierarchical traffic structures) and input homogeneity (i.e., flattened byte sequences). To address this gap, we propose Nethira, a heterogeneity-aware pre-trained model based on hierarchical reconstruction and augmentation. In pre-training, Nethira introduces hierarchical reconstruction at multiple levels (byte, protocol, and packet), capturing comprehensive traffic structural information. During fine-tuning, Nethira proposes a consistency-regularized strategy with hierarchical traffic augmentation to reduce label dependence. Experiments on four public datasets demonstrate that Nethira outperforms seven existing pre-trained models, achieving an average F1-score improvement of 9.11%, and reaching comparable performance with only 1% labeled data on high-heterogeneity network tasks.
https://arxiv.org/abs/2601.22494
Academic Papers
svg
88881f0100320667d116c4685d3ffa93a32e9d33f99f65d7633c31e9bb172f7e
2026-02-02T00:00:00-05:00
Gradual Fine-Tuning for Flow Matching Models
arXiv:2601.22495v1 Announce Type: new Abstract: Fine-tuning flow matching models is a central challenge in settings with limited data, evolving distributions, or strict efficiency demands, where unconstrained fine-tuning can erode the accuracy and efficiency gains learned during pretraining. Prior work has produced theoretical guarantees and empirical advances for reward-based fine-tuning formulations, but these methods often impose restrictions on permissible drift structure or training techniques. In this work, we propose Gradual Fine-Tuning (GFT), a principled framework for fine-tuning flow-based generative models when samples from the target distribution are available. For stochastic flows, GFT defines a temperature-controlled sequence of intermediate objectives that smoothly interpolate between the pretrained and target drifts, approaching the true target as the temperature approaches zero. We prove convergence results for both marginal and conditional GFT objectives, enabling the use of suitable (e.g., optimal transport) couplings during GFT while preserving correctness. Empirically, GFT improves convergence stability and shortens probability paths, resulting in faster inference, while maintaining generation quality comparable to standard fine-tuning. Our results position GFT as a theoretically grounded and practically effective alternative for scalable adaptation of flow matching models under distribution shift.
https://arxiv.org/abs/2601.22495
Academic Papers
svg
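The temperature-controlled interpolation described in the GFT abstract above can be sketched numerically. This is a minimal sketch under assumptions: the drifts `v_pre` and `v_tgt`, the linear interpolation, and the Euler integrator are illustrative stand-ins, not the paper's objective.

```python
import numpy as np

def v_pre(x, t):
    # hypothetical pretrained drift: pushes samples toward 0
    return -x

def v_tgt(x, t):
    # hypothetical target drift: pushes samples toward 2
    return 2.0 - x

def gft_drift(tau):
    """Temperature-tau interpolation: tau=1 gives the pretrained drift,
    tau=0 recovers the target drift."""
    def v(x, t):
        return tau * v_pre(x, t) + (1.0 - tau) * v_tgt(x, t)
    return v

def integrate(v, x0, steps=100, T=1.0):
    # forward Euler integration of dx/dt = v(x, t)
    x, dt = x0, T / steps
    for k in range(steps):
        x = x + dt * v(x, k * dt)
    return x

x0 = np.zeros(5)
ends = [float(integrate(gft_drift(tau), x0).mean()) for tau in (1.0, 0.5, 0.0)]
```

Lowering the temperature moves the flow's endpoint from the pretrained attractor toward the target attractor, mirroring the claimed behavior as the temperature approaches zero.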
2a5b8b2631caadea19fc3fdb02d7a67630f820b59342eb42a6faad29ae129928
2026-02-02T00:00:00-05:00
Action-Sufficient Goal Representations
arXiv:2601.22496v1 Announce Type: new Abstract: Hierarchical policies in offline goal-conditioned reinforcement learning (GCRL) address long-horizon tasks by decomposing control into high-level subgoal planning and low-level action execution. A critical design choice in such architectures is the goal representation: the compressed encoding of goals that serves as the interface between these levels. Existing approaches commonly derive goal representations while learning value functions, implicitly assuming that preserving information sufficient for value estimation is adequate for optimal control. We show that this assumption can fail, even when the value estimation is exact, as such representations may collapse goal states that need to be differentiated for action learning. To address this, we introduce an information-theoretic framework that defines action sufficiency, a condition on goal representations necessary for optimal action selection. We prove that value sufficiency does not imply action sufficiency and empirically verify that the latter is more strongly associated with control success in a discrete environment. We further demonstrate that standard log-loss training of low-level policies naturally induces action-sufficient representations. Our experimental results on a popular benchmark demonstrate that our actor-derived representations consistently outperform representations learned via value estimation.
https://arxiv.org/abs/2601.22496
Academic Papers
svg
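The failure mode claimed in the abstract above (value sufficiency without action sufficiency) can be reproduced in a tiny toy setting. The three-state environment and the value-only encoding below are illustrative assumptions, not the paper's construction.

```python
# On a line s0 - s1 - s2 the agent sits at s1. With V(s, g) = -dist(s, g),
# the goals g=s0 and g=s2 have identical value from s1, so an encoding that
# preserves only the value estimate collapses them, yet their optimal
# actions differ (move left vs. move right).

def value(s, g):
    return -abs(s - g)  # shortest-path value on the line graph

def optimal_action(s, g):
    return "left" if g < s else "right"

s = 1
g_left, g_right = 0, 2

# Value-sufficient encoding: keep only V(s, g) at the current state.
value_repr = {g: value(s, g) for g in (g_left, g_right)}
collapsed = value_repr[g_left] == value_repr[g_right]

# The collapsed goals nevertheless require different actions.
actions_differ = optimal_action(s, g_left) != optimal_action(s, g_right)
```

Here a representation exact for value estimation is still insufficient for action selection, which is the gap the paper's action-sufficiency condition formalizes.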
b61bbf0b710f7dc8f7e7c8cf89c8953d3f47f832c7c286d6975bd7704e4f777a
2026-02-02T00:00:00-05:00
Fairness-Aware Performance Evaluation for Multi-Party Multi-Objective Optimization
arXiv:2601.22497v1 Announce Type: new Abstract: In multi-party multi-objective optimization problems (MPMOPs), solution sets are usually evaluated using classical performance metrics aggregated across decision makers (DMs). However, such mean-based evaluations may be unfair by favoring certain parties, as they assume that identical geometric approximation quality with respect to each party's Pareto front (PF) carries comparable evaluative significance. Moreover, prevailing notions of MPMOP optimal solutions are restricted to strictly common Pareto optimal solutions, representing a narrow form of cooperation in multi-party decision-making scenarios. These limitations obscure whether a solution set reflects balanced relative gains or meaningful consensus among heterogeneous DMs. To address these issues, this paper develops a fairness-aware performance evaluation framework grounded in a generalized notion of consensus solutions. From a cooperative game-theoretic perspective, we formalize four axioms that a fairness-aware evaluation function for MPMOPs should satisfy. By introducing a concession rate vector to quantify acceptable compromises by individual DMs, we generalize the classical definition of MPMOP optimal solutions and embed classical performance metrics into a Nash-product-based evaluation framework, which is theoretically shown to satisfy all axioms. To support empirical validation, we further construct benchmark problems that extend existing MPMOP suites by incorporating consensus-deficient negotiation structures. Experimental results demonstrate that the proposed evaluation framework is able to distinguish algorithmic performance in a manner consistent with consensus-aware fairness considerations. Specifically, algorithms converging toward strictly common solutions are assigned higher evaluation scores when such solutions exist, whereas in the absence of strictly common solutions, algorithms that effectively cover the commonly acceptable region are more favorably evaluated.
https://arxiv.org/abs/2601.22497
Academic Papers
svg
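A minimal sketch of the Nash-product-style aggregation described above, under assumed per-party scores and concession rates (the paper's exact evaluation function may differ; the numbers are made up for illustration):

```python
import math

def mean_score(scores):
    # classical mean-based aggregation across parties
    return sum(scores) / len(scores)

def nash_score(scores, concessions):
    # product of concession-adjusted scores: rewards balanced gains
    return math.prod(u ** (1.0 - c) for u, c in zip(scores, concessions))

balanced = [0.5, 0.5]   # comparable approximation quality for both parties
lopsided = [0.9, 0.1]   # favors party 1 at party 2's expense
c = [0.0, 0.0]          # no concessions granted

# Mean-based evaluation cannot tell the two solution sets apart ...
same_mean = mean_score(balanced) == mean_score(lopsided)
# ... while the Nash product prefers the balanced set.
nash_prefers_balanced = nash_score(balanced, c) > nash_score(lopsided, c)
```

The mean assigns both solution sets the same score, while the Nash product penalizes the lopsided one, which is exactly the fairness distinction the framework targets.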
05733e7ae6a7eac0fc4729c692a5c963f1b2e1698e694d9acdd51d14a1148a5c
2026-02-02T00:00:00-05:00
FITMM: Adaptive Frequency-Aware Multimodal Recommendation via Information-Theoretic Representation Learning
arXiv:2601.22498v1 Announce Type: new Abstract: Multimodal recommendation aims to enhance user preference modeling by leveraging rich item content such as images and text. Yet dominant systems fuse modalities in the spatial domain, obscuring the frequency structure of signals and amplifying misalignment and redundancy. We adopt a spectral information-theoretic view and show that, under an orthogonal transform that approximately block-diagonalizes bandwise covariances, the Gaussian Information Bottleneck objective decouples across frequency bands, providing a principled basis for a separate-then-fuse paradigm. Building on this foundation, we propose FITMM, a Frequency-aware Information-Theoretic framework for multimodal recommendation. FITMM constructs graph-enhanced item representations, performs modality-wise spectral decomposition to obtain orthogonal bands, and forms lightweight within-band multimodal components. A residual, task-adaptive gate aggregates bands into the final representation. To control redundancy and improve generalization, we regularize training with a frequency-domain IB term that allocates capacity across bands (Wiener-like shrinkage with shut-off of weak bands). We further introduce a cross-modal spectral consistency loss that aligns modalities within each band. The model is jointly optimized with the standard recommendation loss. Extensive experiments on three real-world datasets demonstrate that FITMM consistently and significantly outperforms advanced baselines.
https://arxiv.org/abs/2601.22498
Academic Papers
svg
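The separate-then-fuse idea in the FITMM abstract above can be sketched with an orthonormal DCT basis standing in for the paper's transform; the band split and gate values are illustrative assumptions, not the model's learned configuration.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis as rows
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)
    return M

n = 12
D = dct_matrix(n)
orthogonal = np.allclose(D @ D.T, np.eye(n))  # basis is orthogonal

rng = np.random.default_rng(0)
x = rng.standard_normal(n)        # an item feature vector
coeffs = D @ x                    # spectral coefficients

bands = np.array_split(np.arange(n), 3)  # low / mid / high bands
gates = [1.0, 0.6, 0.2]                  # per-band gates (fixed here)

y = np.zeros(n)
for band, g in zip(bands, gates):
    mask = np.zeros(n)
    mask[band] = 1.0
    y += g * (D.T @ (mask * coeffs))     # fuse gated band components

# With all gates equal to 1, orthogonality makes reconstruction exact.
x_rec = np.zeros(n)
for band in bands:
    mask = np.zeros(n)
    mask[band] = 1.0
    x_rec += D.T @ (mask * coeffs)
```

Because the bands are orthogonal, shrinking or shutting off a band (the gate values) only touches that band's component, which is what makes a per-band capacity allocation well defined.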
2b884ad8aa043a7e814599f2af81e92ea8e141045138b949b1c5f3407eb4eccc
2026-02-02T00:00:00-05:00
Chance-Constrained Secrecy Optimization in Hybrid RIS-Empowered and UAV-Assisted Networks
arXiv:2601.22499v1 Announce Type: new Abstract: This paper considers a hybrid reconfigurable environment comprising a UAV-mounted reflecting RIS, an outdoor STAR-RIS enabling simultaneous transmission and reflection, and an indoor holographic RIS (H-RIS), jointly enhancing secure downlink communication for indoor and outdoor users. The system operates under user mobility, dynamic blockages, colluding idle and active eavesdroppers, and transceiver and surface hardware impairments. A 3GPP and ITU-compliant stochastic channel model is developed, capturing mobility-induced covariance evolution, outdoor-indoor penetration losses, and distortion-aware noise due to practical EVM-based impairments. We aim to minimize the aggregate secrecy-outage probability subject to secrecy-rate constraints, QoS requirements, power limitations, and statistical CSI uncertainty. The resulting problem contains coupled secrecy and QoS chance constraints and nonlinear interactions among the BS beamforming vectors, multi-surface phase coefficients, and UAV position. To handle these difficulties, we derive rigorous Bernstein-type deterministic approximations for all chance constraints, yielding a distributionally robust reformulation. Building on this, we propose an alternating optimization framework that employs successive convex approximation (SCA) to convexify each block and solve the BS beamforming, RIS, STAR-RIS, H-RIS configuration, and UAV placement subproblems efficiently. The proposed algorithm is shown to monotonically decrease a smooth surrogate of the secrecy-outage cost and converge to a stationary point of the robustified problem. Simulations based on 3GPP TR 38.901, TR 36.873, and ITU-R P.2109 demonstrate that integrating UAV-RIS, STAR-RIS, and H-RIS significantly reduces secrecy-outage probability compared with benchmark schemes and provides strong robustness to channel uncertainty, blockages, colluding eavesdroppers, and hardware impairments.
https://arxiv.org/abs/2601.22499
Academic Papers
svg
4f764676f3181420443acbb5154bda2bca96cc8cdadebae19a063bb3a66ee449
2026-02-02T00:00:00-05:00
MIRRORTALK: Forging Personalized Avatars Via Disentangled Style and Hierarchical Motion Control
arXiv:2601.22501v1 Announce Type: new Abstract: Synthesizing personalized talking faces that uphold and highlight a speaker's unique style while maintaining lip-sync accuracy remains a significant challenge. A primary limitation of existing approaches is the intrinsic confounding of speaker-specific talking style and semantic content within facial motions, which prevents the faithful transfer of a speaker's unique persona to arbitrary speech. In this paper, we propose MirrorTalk, a generative framework based on a conditional diffusion model, combined with a Semantically-Disentangled Style Encoder (SDSE) that can distill pure style representations from a brief reference video. To effectively utilize this representation, we further introduce a hierarchical modulation strategy within the diffusion process. This mechanism guides the synthesis by dynamically balancing the contributions of audio and style features across distinct facial regions, ensuring both precise lip-sync accuracy and expressive full-face dynamics. Extensive experiments demonstrate that MirrorTalk achieves significant improvements over state-of-the-art methods in terms of lip-sync accuracy and personalization preservation.
https://arxiv.org/abs/2601.22501
Academic Papers
svg
dd45e7bbb5637bdec1ad551692914bb0033d0555ce2144ba6b7f87e519bf6bb4
2026-02-02T00:00:00-05:00
Constructing BERT Models: How Team Dynamics and Focus Shape AI Model Impact
arXiv:2601.22505v1 Announce Type: new Abstract: The rapid evolution of AI technologies, exemplified by BERT-family models, has transformed scientific research, yet little is known about their production and recognition dynamics in the scientific system. This study investigates the development and impact of BERT-family models, focusing on team size, topic specialization, and citation patterns behind the models. Using a dataset of 4,208 BERT-related papers from the Papers with Code (PWC) dataset, we analyze how the BERT-family models evolve across methodological generations and how the newness of models is correlated with their production and recognition. Our findings reveal that newer BERT models are developed by larger, more experienced, and institutionally diverse teams, reflecting the increasing complexity of AI research. Additionally, these models exhibit greater topical specialization, targeting niche applications, which aligns with broader trends in scientific specialization. However, newer models receive fewer citations, particularly over the long term, suggesting a "first-mover advantage," where early models like BERT garner disproportionate recognition. These insights highlight the need for equitable evaluation frameworks that value both foundational and incremental innovations. This study underscores the evolving interplay between collaboration, specialization, and recognition in AI research.
https://arxiv.org/abs/2601.22505
Academic Papers
svg
332355f6c496ea530c4d518de97f992da54e7b121f61bd9abedf57819b62efc9
2026-02-02T00:00:00-05:00
DreamVAR: Taming Reinforced Visual Autoregressive Model for High-Fidelity Subject-Driven Image Generation
arXiv:2601.22507v1 Announce Type: new Abstract: Recent advances in subject-driven image generation using diffusion models have attracted considerable attention for their remarkable capabilities in producing high-quality images. Nevertheless, the potential of Visual Autoregressive (VAR) models, despite their unified architecture and efficient inference, remains underexplored. In this work, we present DreamVAR, a novel framework for subject-driven image synthesis built upon a VAR model that employs next-scale prediction. Technically, multi-scale features of the reference subject are first extracted by a visual tokenizer. Instead of interleaving these conditional features with target image tokens across scales, our DreamVAR pre-fills the full subject feature sequence prior to predicting target image tokens. This design simplifies autoregressive dependencies and mitigates the train-test discrepancy in the multi-scale conditioning scenario within the VAR paradigm. DreamVAR further incorporates reinforcement learning to jointly enhance semantic alignment and subject consistency. Extensive experiments demonstrate that DreamVAR achieves superior appearance preservation compared to leading diffusion-based methods.
https://arxiv.org/abs/2601.22507
Academic Papers
svg
c4b07026836c8d78135a67c10868b7b72fa4e1946af049874bb0b45dedc0fd15
2026-02-02T00:00:00-05:00
CoVA: Text-Guided Composed Video Retrieval for Audio-Visual Content
arXiv:2601.22508v1 Announce Type: new Abstract: Composed Video Retrieval (CoVR) aims to retrieve a target video from a large gallery using a reference video and a textual query specifying visual modifications. However, existing benchmarks consider only visual changes, ignoring videos that differ in audio despite visual similarity. To address this limitation, we introduce Composed retrieval for Video with its Audio (CoVA), a new retrieval task that accounts for both visual and auditory variations. To support this, we construct AV-Comp, a benchmark consisting of video pairs with cross-modal changes and corresponding textual queries that describe the differences. We also propose AVT Compositional Fusion (AVT), which integrates video, audio, and text features by selectively aligning the query to the most relevant modality. AVT outperforms traditional unimodal fusion and serves as a strong baseline for CoVA. Examples from the proposed dataset, including both visual and auditory information, are available at https://perceptualai-lab.github.io/CoVA/.
https://arxiv.org/abs/2601.22508
Academic Papers
svg
d30037ac03e457fb54aee952ea6db22af13603d050267a5a40acce734a84e51e
2026-02-02T00:00:00-05:00
Keep Rehearsing and Refining: Lifelong Learning Vehicle Routing under Continually Drifting Tasks
arXiv:2601.22509v1 Announce Type: new Abstract: Existing neural solvers for vehicle routing problems (VRPs) are typically trained either in a one-off manner on a fixed set of pre-defined tasks or in a lifelong manner on several tasks arriving sequentially, assuming sufficient training on each task. Both settings overlook a common real-world property: problem patterns may drift continually over time, yielding massive tasks sequentially arising while offering only limited training resources per task. In this paper, we study a novel lifelong learning paradigm for neural VRP solvers under continually drifting tasks over learning time steps, where sufficient training for any given task at any time is not available. We propose Dual Replay with Experience Enhancement (DREE), a general framework to improve learning efficiency and mitigate catastrophic forgetting under such drift. Extensive experiments show that, under such continual drift, DREE effectively learns new tasks, preserves prior knowledge, improves generalization to unseen tasks, and can be applied to diverse existing neural solvers.
https://arxiv.org/abs/2601.22509
Academic Papers
svg
af28e232665fc0b645177b39a2c1be4059eb9ca0201bfcfa6435061418eb9cfe
2026-02-02T00:00:00-05:00
Shattered Compositionality: Counterintuitive Learning Dynamics of Transformers for Arithmetic
arXiv:2601.22510v1 Announce Type: new Abstract: Large language models (LLMs) often exhibit unexpected errors or unintended behavior, even at scale. While recent work reveals the discrepancy between LLMs and humans in skill compositions, the learning dynamics of skill compositions and the underlying cause of non-human behavior remain elusive. In this study, we investigate the mechanism of learning dynamics by training transformers on synthetic arithmetic tasks. Through extensive ablations and fine-grained diagnostic metrics, we discover that transformers do not reliably build skill compositions according to human-like sequential rules. Instead, they often acquire skills in reverse order or in parallel, which leads to unexpected mixing errors especially under distribution shifts--a phenomenon we refer to as shattered compositionality. To explain these behaviors, we provide evidence that correlational matching to the training data, rather than causal or procedural composition, shapes learning dynamics. We further show that shattered compositionality persists in modern LLMs and is not mitigated by pure model scaling or scratchpad-based reasoning. Our results reveal a fundamental mismatch between a model's learning behavior and desired skill compositions, with implications for reasoning reliability, out-of-distribution robustness, and alignment.
https://arxiv.org/abs/2601.22510
Academic Papers
svg
d6e3d5b4220c7ddfe2770ae9725940057d51ea2a627f2050f1b3b7119bce5d01
2026-02-02T00:00:00-05:00
Mock Worlds, Real Skills: Building Small Agentic Language Models with Synthetic Tasks, Simulated Environments, and Rubric-Based Rewards
arXiv:2601.22511v1 Announce Type: new Abstract: Small LLMs often struggle to match the agentic capabilities of large, costly models. While reinforcement learning can help, progress has been limited by two structural bottlenecks: existing open-source agentic training data are narrow in task variety and easily solved; real-world APIs lack diversity and are unstable for large-scale reinforcement learning rollout processes. We address these challenges with SYNTHAGENT, a framework that jointly synthesizes diverse tool-use training data and simulates complete environments. Specifically, a strong teacher model creates novel tasks and tool ecosystems, then rewrites them into intentionally underspecified instructions. This compels agents to actively query users for missing details. When handling synthetic tasks, an LLM-based user simulator provides user-private information, while a mock tool system delivers stable tool responses. For rewards, task-level rubrics are constructed based on required subgoals, user-agent interactions, and forbidden behaviors. Across 14 challenging datasets in math, search, and tool use, models trained on our synthetic data achieve substantial gains, with small models outperforming larger baselines.
https://arxiv.org/abs/2601.22511
Academic Papers
svg
3bdb7370435b55c9633add759e7ffc9e519843dfd5fafa288f62e69ffa450bc9
2026-02-02T00:00:00-05:00
DRL-Enabled Trajectory Planning for UAV-Assisted VLC: Optimal Altitude and Reward Design
arXiv:2601.22512v1 Announce Type: new Abstract: Recently, the integration of unmanned aerial vehicle (UAV) and visible light communication (VLC) technologies has emerged as a promising solution to offer flexible communication and efficient lighting. This letter investigates the three-dimensional trajectory planning in a UAV-assisted VLC system, where a UAV is dispatched to collect data from ground users (GUs). The core objective is to develop a trajectory planning framework that minimizes UAV flight distance, which is equivalent to maximizing the data collection efficiency. The task is formulated as a challenging mixed-integer non-convex optimization problem. To tackle it, we first derive a closed-form optimal flight altitude under a specific VLC channel gain threshold. Subsequently, we optimize the UAV horizontal trajectory by integrating a novel pheromone-driven reward mechanism with the twin delayed deep deterministic policy gradient algorithm, which enables adaptive UAV motion strategy in complex environments. Simulation results validate that the derived optimal altitude effectively reduces the flight distance by up to 35% compared to baseline methods. Additionally, the proposed reward mechanism significantly shortens the convergence steps by approximately 50%, demonstrating notable efficiency gains in the context of UAV-assisted VLC data collection.
https://arxiv.org/abs/2601.22512
Academic Papers
svg
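The threshold-constrained-altitude idea in the abstract above can be illustrated with a textbook Lambertian line-of-sight VLC gain model; the model and all constants below are assumptions for illustration, not the paper's exact channel or derivation.

```python
import math

# For a UAV hovering directly above a ground user, a common LOS gain model is
#   H(h) = (m + 1) * A / (2 * pi * h^2)
# with Lambertian order m and detector area A, monotonically decreasing in
# altitude h. The largest altitude still meeting a gain threshold H_min is
# found by bisection, and cross-checked against the closed form.

m = 1.0       # Lambertian order (60-degree semi-angle), assumed
A = 1e-4      # photodetector area in m^2, assumed
H_min = 1e-7  # channel-gain threshold, assumed

def gain(h):
    return (m + 1.0) * A / (2.0 * math.pi * h * h)

def max_altitude(h_lo=1.0, h_hi=1000.0, iters=80):
    # bisection on the monotone feasibility condition gain(h) >= H_min
    for _ in range(iters):
        mid = 0.5 * (h_lo + h_hi)
        h_lo, h_hi = (mid, h_hi) if gain(mid) >= H_min else (h_lo, mid)
    return h_lo

h_star = max_altitude()
h_closed = math.sqrt((m + 1.0) * A / (2.0 * math.pi * H_min))
```

Since the gain is monotone in altitude under this model, the threshold is tight at the optimum, which is why a closed-form altitude exists at all.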
e9e91808d558ea9d550623bfc9db2e59f25e24b2eea4265932bd8dbc864d1a14
2026-02-02T00:00:00-05:00
Why Self-Rewarding Works: Theoretical Guarantees for Iterative Alignment of Language Models
arXiv:2601.22513v1 Announce Type: new Abstract: Self-Rewarding Language Models (SRLMs) achieve notable success in iteratively improving alignment without external feedback. Yet, despite their striking empirical progress, the core mechanisms driving their capabilities remain unelucidated, leaving a critical gap in theoretical understanding. This paper provides the first rigorous theoretical guarantees for SRLMs. We first establish a lower bound that characterizes the fundamental limits of a single update step, revealing a critical dependence on the quality of the initial model. We then derive finite-sample error bounds for the full iterative paradigm, showing that performance improves at a rate of $\widetilde{\mathcal{O}}\left(1/\sqrt{n}\right)$ with sample size $n$. Crucially, our analysis reveals that the dependence on the initial model decays exponentially with the number of iterations $T$. This provides a formal explanation for why self-rewarding succeeds: it robustly overcomes poor initialization by steering the dynamics toward internal stability and consistency. Finally, we instantiate our theoretical framework for the linear softmax model class, yielding tailored guarantees that connect our high-level insights to practical model architectures.
https://arxiv.org/abs/2601.22513
Academic Papers
svg
051c3d5914447233a502cab03ecbf24e306213d6e2424b46a5ee5029017a6d54
2026-02-02T00:00:00-05:00
DNA: Uncovering Universal Latent Forgery Knowledge
arXiv:2601.22515v1 Announce Type: new Abstract: As generative AI achieves hyper-realism, superficial artifact detection has become obsolete. While prevailing methods rely on resource-intensive fine-tuning of black-box backbones, we propose that forgery detection capability is already encoded within pre-trained models rather than requiring end-to-end retraining. To elicit this intrinsic capability, we propose the discriminative neural anchors (DNA) framework, which employs a coarse-to-fine excavation mechanism. First, by analyzing feature decoupling and attention distribution shifts, we pinpoint critical intermediate layers where the focus of the model logically transitions from global semantics to local anomalies. Subsequently, we introduce a triadic fusion scoring metric paired with a curvature-truncation strategy to strip away semantic redundancy, precisely isolating the forgery-discriminative units (FDUs) inherently imprinted with sensitivity to forgery traces. Moreover, we introduce HIFI-Gen, a high-fidelity synthetic benchmark built upon the very latest models, to address the lag in existing datasets. Experiments demonstrate that by solely relying on these anchors, DNA achieves superior detection performance even under few-shot conditions. Furthermore, it exhibits remarkable robustness across diverse architectures and against unseen generative models, validating that waking up latent neurons is more effective than extensive fine-tuning.
https://arxiv.org/abs/2601.22515
Academic Papers
svg
b77a2b9dd1c007d85335ad3b56572868dbdcb7f49900962222314eacca00a007
2026-02-02T00:00:00-05:00
SCOPE-PD: Explainable AI on Subjective and Clinical Objective Measurements of Parkinson's Disease for Precision Decision-Making
arXiv:2601.22516v1 Announce Type: new Abstract: Parkinson's disease (PD) is a chronic and complex neurodegenerative disorder influenced by genetic, clinical, and lifestyle factors. Early prediction of PD is challenging because it relies on traditional diagnostic methods whose subjectivity commonly delays diagnosis. Several objective analyses are currently in practice to help overcome the challenges of subjectivity; however, a proper explanation of these analyses is still lacking. While machine learning (ML) has demonstrated potential in supporting PD diagnosis, existing approaches often rely on subjective reports only and lack interpretability for individualized risk estimation. This study proposes SCOPE-PD, an explainable AI-based prediction framework, by integrating subjective and objective assessments to provide personalized health decisions. Subjective and objective clinical assessment data are collected from the Parkinson's Progression Markers Initiative (PPMI) study to construct a multimodal prediction framework. Several ML techniques are applied to these data, and the best ML model is selected to interpret the results. Model interpretability is examined using SHAP-based analysis. The Random Forest algorithm achieves the highest accuracy of 98.66 percent using combined features from both subjective and objective test data. Tremor, bradykinesia, and facial expression are identified as the top three contributing features from the MDS-UPDRS test in the prediction of PD.
https://arxiv.org/abs/2601.22516
Academic Papers
svg
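The feature-ranking result in the SCOPE-PD abstract above can be illustrated on synthetic data. The linear probe and permutation-importance score below are stand-ins for the study's Random Forest + SHAP pipeline, the data are fabricated, and the feature names are borrowed only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
features = ["tremor", "bradykinesia", "facial_expression", "noise"]

# Synthetic cohort: only the first three features carry signal,
# with decreasing weights.
X = rng.standard_normal((n, 4))
y = (2.0 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2] > 0).astype(int)

# Fit a linear probe once by least squares on the +/-1 labels.
w, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)

def accuracy(Xe):
    return float((((Xe @ w) > 0).astype(int) == y).mean())

# Permutation importance: accuracy drop when a feature's column is
# shuffled, breaking its association with the label.
base = accuracy(X)
importance = {}
for j, name in enumerate(features):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[name] = base - accuracy(Xp)

ranked = sorted(importance, key=importance.get, reverse=True)
```

Features with larger generative weights suffer larger accuracy drops when permuted, so the ranking surfaces the true contributors, the same qualitative story the abstract reports for tremor, bradykinesia, and facial expression.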