Dataset schema (per-record fields, with observed string lengths):
- id: string, 64 characters
- published: string, 19-25 characters
- title: string, 7-262 characters
- description: string, 6-54.4k characters
- link: string, 31-227 characters
- category: string, one of 6 classes
- image: string, 3-247 characters
761efd395266be54e52fecf4c84679309d5c4844fd5f6f4e4d3f69c5bd6cf47e
2026-01-21T00:00:00-05:00
Exploration on Highly Dynamic Graphs
arXiv:2601.13047v1 Announce Type: new Abstract: We study the exploration problem by mobile agents in two prominent models of dynamic graphs: $1$-Interval Connectivity and Connectivity Time. The $1$-Interval Connectivity model was introduced by Kuhn et al.~[STOC 2010], and the Connectivity Time model was proposed by Michail et al.~[JPDC 2014]. Recently, Saxena et al.~[TCS 2025] investigated the exploration problem under both models. In this work, we first strengthen the existing impossibility results for the $1$-Interval Connectivity model. We then show that, in Connectivity Time dynamic graphs, exploration is impossible with $\frac{(n-1)(n-2)}{2}$ mobile agents, even when the agents have full knowledge of all system parameters, global communication, full visibility, and infinite memory. This significantly improves the previously known bound of $n$. Moreover, we prove that to solve exploration with $\frac{(n-1)(n-2)}{2}+1$ agents, $1$-hop visibility is necessary. Finally, we present an exploration algorithm that uses $\frac{(n-1)(n-2)}{2}+1$ agents, assuming global communication, $1$-hop visibility, and $O(\log n)$ memory per agent.
https://arxiv.org/abs/2601.13047
Academic Papers
svg
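As a concrete companion to the bounds in the exploration abstract above, here is a minimal sketch (ours, not the paper's) that evaluates the $(n-1)(n-2)/2$ threshold for a few graph sizes:

```python
# Agent-count threshold from the Connectivity Time result above: exploration
# is shown impossible with (n-1)(n-2)/2 agents, while the paper's algorithm
# uses (n-1)(n-2)/2 + 1 agents with global communication, 1-hop visibility,
# and O(log n) memory per agent.

def impossibility_bound(n: int) -> int:
    """Agent count for which exploration is proved impossible on n nodes."""
    return (n - 1) * (n - 2) // 2

for n in (4, 8, 16):
    k = impossibility_bound(n)
    print(f"n={n:>2}: impossible with {k} agents, algorithm uses {k + 1}")
```

Note how quickly the threshold grows: already for n = 16 it is 105 agents, far above the previously known bound of n.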
58e414eae71148143fd12b0ccf3a97f2c3d13a0250450e0db6680ab215163141
2026-01-21T00:00:00-05:00
Analysis of Long Range Dependency Understanding in State Space Models
arXiv:2601.13048v1 Announce Type: new Abstract: Although state-space models (SSMs) have demonstrated strong performance on long-sequence benchmarks, most research has emphasized predictive accuracy rather than interpretability. In this work, we present the first systematic kernel interpretability study of the diagonalized state-space model (S4D) trained on a real-world task (vulnerability detection in source code). Through time- and frequency-domain analysis of the S4D kernel, we show that the long-range modeling capability of S4D varies significantly under different model architectures, affecting model performance. For instance, we show that, depending on the architecture, the S4D kernel can behave as a low-pass, band-pass, or high-pass filter. The insights from our analysis can guide future work in designing better S4D-based models.
https://arxiv.org/abs/2601.13048
Academic Papers
svg
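The filter-type claim in the abstract above can be made concrete with a generic frequency-domain check. The sketch below (our illustration, not the paper's analysis pipeline) classifies a 1-D convolution kernel as low-, band-, or high-pass by looking at where its FFT magnitude response concentrates:

```python
import numpy as np

def magnitude_response(kernel: np.ndarray, n_fft: int = 512) -> np.ndarray:
    """Magnitude of the kernel's frequency response, from DC to Nyquist."""
    return np.abs(np.fft.rfft(kernel, n=n_fft))

def crude_filter_type(kernel: np.ndarray) -> str:
    """Label the kernel by which frequency band holds most response mass."""
    mag = magnitude_response(kernel)
    thirds = np.array_split(mag, 3)  # low / mid / high frequency bands
    band = int(np.argmax([t.sum() for t in thirds]))
    return ("low-pass", "band-pass", "high-pass")[band]

# A decaying exponential (an S4D-like impulse response) acts as a low-pass
# filter; faster-oscillating kernels would land in the other two classes.
kernel = np.exp(-0.1 * np.arange(64))
print(crude_filter_type(kernel))  # -> low-pass
```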
2676c3b2d06bb85c6af6d8f33a273a6592ef091eee4b321a418577e5c3a77aac
2026-01-21T00:00:00-05:00
Profiling German Text Simplification with Interpretable Model-Fingerprints
arXiv:2601.13050v1 Announce Type: new Abstract: While Large Language Models (LLMs) produce highly nuanced text simplifications, developers currently lack tools for a holistic, efficient, and reproducible diagnosis of their behavior. This paper introduces the Simplification Profiler, a diagnostic toolkit that generates a multidimensional, interpretable fingerprint of simplified texts. Aggregating multiple simplifications from a model yields that model's fingerprint. This novel evaluation paradigm is particularly vital for languages such as German, where the data scarcity problem is magnified when creating flexible models for diverse target groups rather than a single, fixed simplification style. We propose that measuring a model's unique behavioral signature is more relevant in this context as an alternative to correlating metrics with human preferences. We operationalize this with a practical meta-evaluation of our fingerprints' descriptive power, which bypasses the need for large, human-rated datasets. This test measures whether a simple linear classifier can reliably identify various model configurations by their created simplifications, confirming that our metrics are sensitive to a model's specific characteristics. The Profiler can distinguish high-level behavioral variations between prompting strategies and fine-grained changes from prompt engineering, including few-shot examples. Our complete feature set achieves classification F1-scores of up to 71.9%, improving upon simple baselines by over 48 percentage points. The Simplification Profiler thus offers developers a granular, actionable analysis to build more effective and truly adaptive text simplification systems.
https://arxiv.org/abs/2601.13050
Academic Papers
svg
75912413f39d0e97ccef6374aefe621e145139d568efb808f322d1f9836b2e2d
2026-01-21T00:00:00-05:00
GridNet-HD: A High-Resolution Multi-Modal Dataset for LiDAR-Image Fusion on Power Line Infrastructure
arXiv:2601.13052v1 Announce Type: new Abstract: This paper presents GridNet-HD, a multi-modal dataset for 3D semantic segmentation of overhead electrical infrastructures, pairing high-density LiDAR with high-resolution oblique imagery. The dataset comprises 7,694 images and 2.5 billion points annotated into 11 classes, with predefined splits and mIoU metrics. Unimodal (LiDAR-only, image-only) and multi-modal fusion baselines are provided. On GridNet-HD, fusion models outperform the best unimodal baseline by +5.55 mIoU, highlighting the complementarity of geometry and appearance. As reviewed in Sec. 2, no public dataset jointly provides high-density LiDAR and high-resolution oblique imagery with 3D semantic labels for power-line assets. The dataset, baselines, and code are available at: https://huggingface.co/collections/heig-vd-geo/gridnet-hd
https://arxiv.org/abs/2601.13052
Academic Papers
svg
d2bcab840943aad5a8758096455e241bb81a2894ad71edb7ed7a34a76a204a1c
2026-01-21T00:00:00-05:00
TinyML-Enabled IoT for Sustainable Precision Irrigation
arXiv:2601.13054v1 Announce Type: new Abstract: Small-scale farming communities are disproportionately affected by water scarcity, erratic climate patterns, and a lack of access to advanced, affordable agricultural technologies. To address these challenges, this paper presents a novel, edge-first IoT framework that integrates Tiny Machine Learning (TinyML) for intelligent, offline-capable precision irrigation. The proposed four-layer architecture leverages low-cost hardware, an ESP32 microcontroller as an edge inference node, and a Raspberry Pi as a local edge server to enable autonomous decision-making without cloud dependency. The system utilizes capacitive soil moisture, temperature, humidity, pH, and ambient light sensors for environmental monitoring. A rigorous comparative analysis of ensemble models identified gradient boosting as superior, achieving an R^2 score of 0.9973 and a Mean Absolute Percentage Error (MAPE) of 0.99%, outperforming a random forest model (R^2 = 0.9916, MAPE = 1.81%). This optimized model was converted and deployed as a lightweight TinyML inference engine on the ESP32 and predicts irrigation needs with exceptional accuracy (MAPE < 1%). Local communication is facilitated by an MQTT-based LAN protocol, ensuring reliable operation in areas with limited or no internet connectivity. Experimental validation in a controlled environment demonstrated a significant reduction in water usage compared to traditional methods, while the system's low-power design and offline functionality confirm its viability for sustainable, scalable deployment in resource-constrained rural settings. This work provides a practical, cost-effective blueprint for bridging the technological divide in agriculture and enhancing water-use efficiency through on-device artificial intelligence.
https://arxiv.org/abs/2601.13054
Academic Papers
svg
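The model comparison in the irrigation abstract above rests on R^2 and MAPE. Below is a hedged, self-contained reproduction of that comparison on synthetic data; the features and scores are placeholders and will not match the paper's sensor data or its 0.9973 / 0.99% figures:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the sensor features (moisture, temperature, ...).
X, y = make_regression(n_samples=1000, n_features=5, noise=5.0, random_state=0)
y = y + 1000  # shift targets away from zero so MAPE stays well defined
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (GradientBoostingRegressor(random_state=0),
              RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{type(model).__name__}: "
          f"R2={r2_score(y_te, pred):.4f}, "
          f"MAPE={100 * mean_absolute_percentage_error(y_te, pred):.2f}%")
```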
aae88a8aa809057b8dc16ad1cecfe3ba699e67cb8be83efca632bb4f249fe656
2026-01-21T00:00:00-05:00
Convex Model Predictive Control for Safe Output Consensus of Nonlinear Multi-Agent Systems
arXiv:2601.13057v1 Announce Type: new Abstract: Nonlinear dynamics and safety constraints typically result in a nonlinear programming problem when applying model predictive control to achieve safe output consensus. To avoid the heavy computational burden of solving a nonlinear programming problem directly, this paper proposes a novel Convex Model Predictive Control (CMPC) approach based on a Sequential Quadratic Programming (SQP) scheme. The core of our method lies in transforming the nonlinear constraints into linear forms: we linearize the system dynamics and convexify the discrete-time high-order control barrier functions using a proposed tangent-line projection method. Consequently, the original problem is reduced to a quadratic program that can be iteratively solved within the SQP scheme at each time step of CMPC. Furthermore, we provide the formal guarantee of the convergence of the SQP scheme, and subsequently guarantee the recursive feasibility and stability of CMPC. Simulations on multi-agent systems with unicycle dynamics demonstrate a 35-52 times reduction in computation time compared with baseline methods, confirming the suitability of the proposed approach for real-time safe output consensus control.
https://arxiv.org/abs/2601.13057
Academic Papers
svg
8f7ba256f2d2a4e204445fa57446b8e1f2506ffed2c41ced662140bc01a314da
2026-01-21T00:00:00-05:00
Prototype Learning-Based Few-Shot Segmentation for Low-Light Crack on Concrete Structures
arXiv:2601.13059v1 Announce Type: new Abstract: Crack detection is critical for concrete infrastructure safety, but real-world cracks often appear in low-light environments like tunnels and bridge undersides, degrading computer vision segmentation accuracy. Pixel-level annotation of low-light crack images is extremely time-consuming, yet most deep learning methods require large, well-illuminated datasets. We propose a dual-branch prototype learning network integrating Retinex theory with few-shot learning for low-light crack segmentation. Retinex-based reflectance components guide illumination-invariant global representation learning, while metric learning reduces dependence on large annotated datasets. We introduce a cross-similarity prior mask generation module that computes high-dimensional similarities between query and support features to capture crack location and structure, and a multi-scale feature enhancement module that fuses multi-scale features with the prior mask to alleviate spatial inconsistency. Extensive experiments on multiple benchmarks demonstrate consistent state-of-the-art performance under low-light conditions. Code: https://github.com/YulunGuo/CrackFSS.
https://arxiv.org/abs/2601.13059
Academic Papers
svg
ba6a414d48e9748ffd0b1eeba2ca5f0059d6aaef1be67cc625ccf592907ab85b
2026-01-21T00:00:00-05:00
MagicGUI-RMS: A Multi-Agent Reward Model System for Self-Evolving GUI Agents via Automated Feedback Reflux
arXiv:2601.13060v1 Announce Type: new Abstract: Graphical user interface (GUI) agents are rapidly progressing toward autonomous interaction and reliable task execution across diverse applications. However, two central challenges remain unresolved: automating the evaluation of agent trajectories and generating high-quality training data at scale to enable continual improvement. Existing approaches often depend on manual annotation or static rule-based verification, which restricts scalability and limits adaptability in dynamic environments. We present MagicGUI-RMS, a multi-agent reward model system that delivers adaptive trajectory evaluation, corrective feedback, and self-evolving learning capabilities. MagicGUI-RMS integrates a Domain-Specific Reward Model (DS-RM) with a General-Purpose Reward Model (GP-RM), enabling fine-grained action assessment and robust generalization across heterogeneous GUI tasks. To support reward learning at scale, we design a structured data construction pipeline that automatically produces balanced and diverse reward datasets, effectively reducing annotation costs while maintaining sample fidelity. During execution, the reward model system identifies erroneous actions, proposes refined alternatives, and continuously enhances agent behavior through an automated data-reflux mechanism. Extensive experiments demonstrate that MagicGUI-RMS yields substantial gains in task accuracy and behavioral robustness. These results establish MagicGUI-RMS as a principled and effective foundation for building self-improving GUI agents driven by reward-based adaptation.
https://arxiv.org/abs/2601.13060
Academic Papers
svg
7590c8f59664b5fc9e1fcad3c294c1ea9975c4e68f2fb920d64f736c6c3cace6
2026-01-21T00:00:00-05:00
Two-timescale Optimization for Hybrid Mechanically and Electronically Tunable 6DMA Aided Communication
arXiv:2601.13064v1 Announce Type: new Abstract: This letter proposes a hybrid mechanically and electronically tunable six-dimensional movable antenna (6DMA) base station (BS) architecture for future wireless communication networks. Such a BS consists of multiple antenna arrays that are mechanically movable along a circular rail to adapt to the horizontal user hotspots, and each array is equipped with pattern reconfigurable antennas (PRAs) that are capable of electronically switching among a set of specified beam patterns to cater to the instantaneous user channels. The mechanical adjustment provides wide-angle coverage but suffers from slow response, while the electronic tuning enables rapid beam reconfiguration but with limited angular range. To effectively combine their complementary advantages, we propose to jointly design both mechanical and electronic configurations to maximize the average sum-rate of users via a two-timescale optimization approach, in which the array positions are optimized on the long timescale according to large-scale user distribution statistics, and the pattern selection vectors are optimized on the short timescale to enable fast beam alignment based on the instantaneous user locations. An alternating optimization algorithm based on the Monte Carlo sampling method is developed to solve the problem efficiently. Finally, simulation results show that our proposed design achieves significant performance gains over various benchmark schemes.
https://arxiv.org/abs/2601.13064
Academic Papers
svg
40c5a92b0f0ea063ffaa23769513f2741e335e33659ae001bd32d63362025900
2026-01-21T00:00:00-05:00
Stability of Information-Based Routing in Dynamic Transportation Networks
arXiv:2601.13066v1 Announce Type: new Abstract: Recent studies on transportation networks have shown that real-time route guidance can inadvertently induce congestion or oscillatory traffic patterns. Nevertheless, such technologies also offer a promising opportunity to manage traffic non-intrusively by shaping the information delivered to users, thereby mitigating congestion and enhancing network stability. A key step toward this goal is to identify information signals that ensure the existence of an equilibrium with desirable stability and convergence properties. This challenge is particularly relevant when traffic density and routing dynamics evolve concurrently, as increasingly occurs with digital signaling and real-time navigation technologies. To address this, we analyze a parallel-path transportation network with a single origin-destination pair, incorporating joint traffic density and logit-based routing dynamics that evolve at the same timescale. We characterize a class of density-dependent traffic information that guarantees a unique equilibrium in the free-flow regime, ensures its asymptotic stability, and keeps traffic densities within the free-flow region for all time. The theoretical results are complemented by a numerical case study demonstrating how the framework can inform the design of traffic information that reduces total travel time without compromising credibility.
https://arxiv.org/abs/2601.13066
Academic Papers
svg
8ab33baab0e999a873288915a406b84737dddb27484fbb7e53e260469e3fb8aa
2026-01-21T00:00:00-05:00
METIS: Mentoring Engine for Thoughtful Inquiry & Solutions
arXiv:2601.13075v1 Announce Type: new Abstract: Many students lack access to expert research mentorship. We ask whether an AI mentor can move undergraduates from an idea to a paper. We build METIS, a tool-augmented, stage-aware assistant with literature search, curated guidelines, methodology checks, and memory. We evaluate METIS against GPT-5 and Claude Sonnet 4.5 across six writing stages using LLM-as-a-judge pairwise preferences, student-persona rubrics, short multi-turn tutoring, and evidence/compliance checks. On 90 single-turn prompts, LLM judges preferred METIS to Claude Sonnet 4.5 in 71% of cases and to GPT-5 in 54%. Student scores (clarity/actionability/constraint-fit; 90 prompts x 3 judges) are higher across stages. In multi-turn sessions (five scenarios per agent), METIS yields slightly higher final quality than GPT-5. Gains concentrate in document-grounded stages (D-F), consistent with stage-aware routing and grounding; failure modes include premature tool routing, shallow grounding, and occasional stage misclassification.
https://arxiv.org/abs/2601.13075
Academic Papers
svg
ea0271b4bc0ff7f579524612b4f759d394da754726314ee8b7653905e66c5ac5
2026-01-21T00:00:00-05:00
What's it like to be a chat? On the co-simulation of artificial minds in human-AI conversations
arXiv:2601.13081v1 Announce Type: new Abstract: Large Language Models (LLMs) can simulate person-like things which at least appear to have stable behavioural and psychological dispositions. Call these things characters. Are characters minded and psychologically continuous entities with mental states like beliefs, desires and intentions? Illusionists about characters say No. On this view, characters are merely anthropomorphic projections in the mind of the user and so lack mental states. Jonathan Birch (2025) defends this view. He says that the distributed nature of LLM processing, in which several LLMs may be implicated in the simulation of a character in a single conversation, precludes the existence of a persistent minded entity that is identifiable with the character. Against illusionism, we argue for a realist position on which characters exist as minded and psychologically continuous entities. Our central point is that Birch's argument for illusionism rests on a category error: characters are not internal to the LLMs that simulate them, but rather are co-simulated by LLMs and users, emerging in a shared conversational workspace through a process of mutual theory of mind modelling. We argue that characters, and their minds, exist as 'real patterns' on grounds that attributing mental states to characters is essential for making efficient and accurate predictions about the conversational dynamics (cf. Dennett, 1991). Furthermore, because the character exists within the conversational workspace rather than within the LLM, psychological continuity is preserved even when the underlying computational substrate is distributed across multiple LLM instances.
https://arxiv.org/abs/2601.13081
Academic Papers
svg
ba14d29a9362e2a669496145bc80a00004994a76e3537f4a7df9b30cc6f5e6da
2026-01-21T00:00:00-05:00
Adversarial News and Lost Profits: Manipulating Headlines in LLM-Driven Algorithmic Trading
arXiv:2601.13082v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly adopted in the financial domain. Their exceptional capabilities to analyse textual data make them well-suited for inferring the sentiment of finance-related news. Such feedback can be leveraged by algorithmic trading systems (ATS) to guide buy/sell decisions. However, this practice bears the risk that a threat actor may craft "adversarial news" intended to mislead an LLM. In particular, the news headline may include "malicious" content that remains invisible to human readers but which is still ingested by the LLM. Although prior work has studied textual adversarial examples, their system-wide impact on LLM-supported ATS has not yet been quantified in terms of monetary risk. To address this threat, we consider an adversary with no direct access to an ATS but able to alter stock-related news headlines on a single day. We evaluate two human-imperceptible manipulations in a financial context: Unicode homoglyph substitutions that misroute models during stock-name recognition, and hidden-text clauses that alter the sentiment of the news headline. We implement a realistic ATS in Backtrader that fuses an LSTM-based price forecast with LLM-derived sentiment (FinBERT, FinGPT, FinLLaMA, and six general-purpose LLMs), and quantify monetary impact using portfolio metrics. Experiments on real-world data show that a one-day attack within a 14-month trading period can reliably mislead LLMs and reduce annual returns by up to 17.7 percentage points. To assess real-world feasibility, we analyze popular scraping libraries and trading platforms and survey 27 FinTech practitioners, confirming our hypotheses. We notified trading platform owners of this security issue.
https://arxiv.org/abs/2601.13082
Academic Papers
svg
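A minimal illustration of the first manipulation described above, Unicode homoglyph substitution. The character map is a hypothetical example built from well-known Latin/Cyrillic lookalikes, not the paper's actual attack payload:

```python
# Lowercase Latin letters replaced by visually identical Cyrillic code points.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "p": "\u0440"}

def perturb_headline(text: str) -> str:
    """Swap in homoglyphs; the result renders the same to a human reader."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "Apple shares drop after earnings report"
attacked = perturb_headline(original)
print(attacked)              # looks unchanged on screen
print(original == attacked)  # False: a tokenizer sees different code points
```

Because the underlying code points differ, a model may fail to recognize the stock name or misread the sentiment even though human readers notice nothing.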
dd9e594d69acd222f3c03f3efbc4e48b08e0aca4677fcbcb4a907762d27548c1
2026-01-21T00:00:00-05:00
No Traffic to Cry: Traffic-Oblivious Link Deactivation for Green Traffic Engineering
arXiv:2601.13087v1 Announce Type: new Abstract: As internet traffic grows, the underlying infrastructure consumes increasing amounts of energy. During off-peak hours, large parts of the networks remain underutilized, presenting significant potential for energy savings. Existing Green Traffic Engineering approaches attempt to leverage this potential by switching off those parts of the networks that are not required for the routing of specific traffic matrices. When traffic changes, the approaches need to adapt rapidly, which is hard to achieve given the complexity of the problem. We take a fundamentally different approach: instead of considering a specific traffic matrix, we rely on a traffic-oblivious routing scheme. We discuss the NP-hard problem of activating as few connections as possible while still guaranteeing that any down-scaled traffic matrix $\varrho\cdot T$ can be routed, where $\varrho \in (0,1)$ and $T$ is any traffic matrix routable in the original network. We present a $\max(\frac{1}{\varrho\cdot\lambda_{\text{min}}},2)$-approximation algorithm for this problem, with $\lambda_{\text{min}}$ denoting the minimum number of connections between any two connected routers. Additionally, we propose two post-processing heuristics to further improve solution quality. Our evaluation shows that we can quickly generate near-optimal solutions. By design, our method avoids the need for frequent reconfigurations and offers a promising direction to achieve practical energy savings in backbone networks.
https://arxiv.org/abs/2601.13087
Academic Papers
svg
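The approximation guarantee above is easy to evaluate for concrete parameters; a small helper (ours, for illustration):

```python
def approximation_factor(rho: float, lambda_min: int) -> float:
    """Worst-case factor max(1 / (rho * lambda_min), 2) from the abstract."""
    assert 0.0 < rho < 1.0 and lambda_min >= 1
    return max(1.0 / (rho * lambda_min), 2.0)

# Guaranteeing half the original traffic (rho = 0.5) on a network whose
# least-connected router pair shares 2 links yields a 2-approximation:
print(approximation_factor(0.5, 2))  # 2.0
print(approximation_factor(0.1, 1))  # 10.0
```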
7acbf70c36bc871272e61437ac114cf72e6aaf90bd52f22e2da8074fbbd579b0
2026-01-21T00:00:00-05:00
Exploiting Light To Enhance The Endurance and Navigation of Lighter-Than-Air Micro-Drones
arXiv:2601.13088v1 Announce Type: new Abstract: Micro-Unmanned Aerial Vehicles (UAVs) are rapidly expanding into tasks from inventory to environmental sensing, yet their short endurance and unreliable navigation in GPS-denied spaces limit deployment. Lighter-Than-Air (LTA) drones offer an energy-efficient alternative: they use a helium envelope to provide buoyancy, which enables near-zero power drain during hovering and much longer operation. LTAs are promising, but their design is complex, and they lack integrated solutions to enable sustained autonomous operation and navigation with simple, low-infrastructure setups. We propose a compact, self-sustaining LTA drone that uses light for both energy harvesting and navigation. Our contributions are threefold: (i) a high-fidelity simulation framework to analyze LTA aerodynamics and select a stable, efficient configuration; (ii) a framework to integrate solar cells on the envelope to provide net-positive energy; and (iii) a point-and-go navigation system with three light-seeking algorithms operating on a single light beacon. Our LTA analysis, together with the integrated solar panels, not only saves energy while flying, but also enables sustainable operation: providing 1 minute of flying time for every 4 minutes of energy harvesting, under illumination of 80 klux. We also demonstrate robust single-beacon navigation towards a light source that can be up to 7 m away, in indoor and outdoor environments, even with moderate winds. The resulting system indicates a plausible path toward persistent, autonomous operation for indoor and outdoor monitoring. More broadly, this work provides a practical pathway for translating the promise of LTA drones into a persistent, self-sustaining aerial system.
https://arxiv.org/abs/2601.13088
Academic Papers
svg
c1f331493b12dc9de14b2edd79b1e497ea7f9363341102050d941259a83800c1
2026-01-21T00:00:00-05:00
Patient-Conditioned Adaptive Offsets for Reliable Diagnosis across Subgroups
arXiv:2601.13094v1 Announce Type: new Abstract: AI models for medical diagnosis often exhibit uneven performance across patient populations due to heterogeneity in disease prevalence, imaging appearance, and clinical risk profiles. Existing algorithmic fairness approaches typically seek to reduce such disparities by suppressing sensitive attributes. However, in medical settings these attributes often carry essential diagnostic information, and removing them can degrade accuracy and reliability, particularly in high-stakes applications. In contrast, clinical decision making explicitly incorporates patient context when interpreting diagnostic evidence, suggesting a different design direction for subgroup-aware models. In this paper, we introduce HyperAdapt, a patient-conditioned adaptation framework that improves subgroup reliability while maintaining a shared diagnostic model. Clinically relevant attributes such as age and sex are encoded into a compact embedding and used to condition a hypernetwork-style module, which generates small residual modulation parameters for selected layers of a shared backbone. This design preserves the general medical knowledge learned by the backbone while enabling targeted adjustments that reflect patient-specific variability. To ensure efficiency and robustness, adaptations are constrained through low-rank and bottlenecked parameterizations, limiting both model complexity and computational overhead. Experiments across multiple public medical imaging benchmarks demonstrate that the proposed approach consistently improves subgroup-level performance without sacrificing overall accuracy. On the PAD-UFES-20 dataset, our method outperforms the strongest competing baseline by 4.1% in recall and 4.4% in F1 score, with larger gains observed for underrepresented patient populations.
https://arxiv.org/abs/2601.13094
Academic Papers
svg
fd732d1b215224fa5d3a22ff0ff827ad26b439731f19d028cc9bb8e3b1f8d592
2026-01-21T00:00:00-05:00
LLM-VLM Fusion Framework for Autonomous Maritime Port Inspection using a Heterogeneous UAV-USV System
arXiv:2601.13096v1 Announce Type: new Abstract: Maritime port inspection plays a critical role in ensuring safety, regulatory compliance, and operational efficiency in complex maritime environments. However, existing inspection methods often rely on manual operations and conventional computer vision techniques that lack scalability and contextual understanding. This study introduces a novel integrated engineering framework that utilizes the synergy between Large Language Models (LLMs) and Vision Language Models (VLMs) to enable autonomous maritime port inspection using cooperative aerial and surface robotic platforms. The proposed framework replaces traditional state-machine mission planners with LLM-driven symbolic planning and improved perception pipelines through VLM-based semantic inspection, enabling context-aware and adaptive monitoring. The LLM module translates natural language mission instructions into executable symbolic plans with dependency graphs that encode operational constraints and ensure safe UAV-USV coordination. Meanwhile, the VLM module performs real-time semantic inspection and compliance assessment, generating structured reports with contextual reasoning. The framework was validated using the extended MBZIRC Maritime Simulator with realistic port infrastructure and further assessed through real-world robotic inspection trials. The lightweight on-board design ensures suitability for resource-constrained maritime platforms, advancing the development of intelligent, autonomous inspection systems. Project resources (code and videos) can be found here: https://github.com/Muhayyuddin/llm-vlm-fusion-port-inspection
https://arxiv.org/abs/2601.13096
Academic Papers
svg
31c8764ea331170b1565882628edce50de0613b5becd60b350a1762fcfa1c64b
2026-01-21T00:00:00-05:00
RM-RF: Reward Model for Run-Free Unit Test Evaluation
arXiv:2601.13097v1 Announce Type: new Abstract: We present RM-RF, a lightweight reward model for run-free evaluation of automatically generated unit tests. Instead of repeatedly compiling and executing candidate tests, RM-RF predicts - from source and test code alone - three execution-derived signals: (1) whether the augmented test suite compiles and runs successfully, (2) whether the generated test cases increase code coverage, and (3) whether the generated test cases improve the mutation kill rate. To train and evaluate RM-RF we assemble a multilingual dataset (Java, Python, Go) of focal files, test files, and candidate test additions labeled by an execution-based pipeline, and we release an associated dataset and methodology for comparative evaluation. We tested multiple model families and tuning regimes (zero-shot, full fine-tuning, and PEFT via LoRA), achieving an average F1 of 0.69 across the three targets. Compared to conventional compile-and-run instruments, RM-RF provides substantially lower latency and infrastructure cost while delivering competitive predictive fidelity, enabling fast, scalable feedback for large-scale test generation and RL-based code optimization.
https://arxiv.org/abs/2601.13097
Academic Papers
svg
61ef22f84d9ef1bf7d4a363adbf201df6d1e0ce77c8d7c83391372e56957ae24
2026-01-21T00:00:00-05:00
Exploring the Impacts of Background Noise on Auditory Stimuli of Audio-Visual eHMIs for Hearing, Deaf, and Hard-of-Hearing People
arXiv:2601.13098v1 Announce Type: new Abstract: External Human-Machine Interfaces (eHMIs) have been proposed to enhance communication between automated vehicles (AVs) and pedestrians, with growing interest in multi-modal designs such as audio-visual eHMIs. Just as poor lighting can impair visual cues, a loud background noise may mask the auditory stimuli. However, its effects within these systems have not been examined, and little is known about how pedestrians -- particularly Deaf and Hard-of-Hearing (DHH) people -- perceive different types of auditory stimuli. We conducted a virtual reality study (Hearing N=25, DHH N=11) to examine the effects of background noise (quiet and loud) on auditory stimuli (baseline, bell, speech) within an audio-visual eHMI. Results revealed that: (1) Crossing experiences of DHH pedestrians significantly differ from Hearing pedestrians. (2) Loud background noise adversely affects pedestrians' crossing experiences. (3) Providing an additional auditory eHMI (bell/speech) improves crossing experiences. We outlined four practical implications for future eHMI design and research.
https://arxiv.org/abs/2601.13098
Academic Papers
svg
d1319ea8045fc834b19602465a85b8de219b16e134a19a08228eaac1304b6273
2026-01-21T00:00:00-05:00
Alexandria: A Multi-Domain Dialectal Arabic Machine Translation Dataset for Culturally Inclusive and Linguistically Diverse LLMs
arXiv:2601.13099v1 Announce Type: new Abstract: Arabic is a highly diglossic language where most daily communication occurs in regional dialects rather than Modern Standard Arabic. Despite this, machine translation (MT) systems often generalize poorly to dialectal input, limiting their utility for millions of speakers. We introduce \textbf{Alexandria}, a large-scale, community-driven, human-translated dataset designed to bridge this gap. Alexandria covers 13 Arab countries and 11 high-impact domains, including health, education, and agriculture. Unlike previous resources, Alexandria provides unprecedented granularity by associating contributions with city-of-origin metadata, capturing authentic local varieties beyond coarse regional labels. The dataset consists of multi-turn conversational scenarios annotated with speaker-addressee gender configurations, enabling the study of gender-conditioned variation in dialectal use. Comprising 107K total samples, Alexandria serves as both a training resource and a rigorous benchmark for evaluating MT and Large Language Models (LLMs). Our automatic and human evaluation of Arabic-aware LLMs benchmarks current capabilities in translating across diverse Arabic dialects and sub-dialects, while exposing significant persistent challenges.
https://arxiv.org/abs/2601.13099
Academic Papers
svg
329e549180159c3f708ca0481177c6eb619df4a759910a644091106b2b26be57
2026-01-21T00:00:00-05:00
Recursive Meta-Distillation: An Axiomatic Framework for Iterative Knowledge Refinement
arXiv:2601.13100v1 Announce Type: new Abstract: Recent work in probability-domain knowledge distillation has established axiomatic frameworks for temperature scaling, multi-teacher aggregation, and bias-variance trade-offs in single-stage settings. However, the mathematical behavior of recursive or multi-generation distillation remains poorly understood, with prior approaches relying primarily on empirical heuristics. In this work, we introduce an axiomatic and operator-theoretic framework for recursive meta-distillation, formalizing iterative knowledge distillation as a sequence of probability-distribution operators with explicit anchoring to base teachers. We define structural axioms for valid meta-teacher construction and prove the existence of non-trivial operator families satisfying these axioms without specifying particular algorithms or loss functions. Under mild realizability and convexity assumptions, we show that anchored recursive distillation induces contraction in KL divergence, yielding geometric convergence to base teacher distributions and a unique, globally attractive fixed point. The contribution is foundational rather than algorithmic: the framework characterizes when recursive distillation is mathematically well-posed and convergent rather than error-accumulating, independent of model architecture, optimization details, or specific operator instantiations. These results provide a theoretical basis for understanding stability, bias-variance behavior, and failure modes in iterative and multi-teacher distillation under capacity constraints.
https://arxiv.org/abs/2601.13100
Academic Papers
svg
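A toy numerical check of the contraction claim above, using the simplest anchored operator imaginable: convex mixing toward a fixed base-teacher distribution. The mixing weight alpha and the Dirichlet-sampled distributions are our assumptions, not the paper's operator family. By convexity of KL divergence in its first argument, each step shrinks KL(student, base) by at least a factor of (1 - alpha), giving the geometric convergence the abstract describes:

```python
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    """KL divergence between two strictly positive distributions."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(10))     # base-teacher distribution (the anchor)
student = rng.dirichlet(np.ones(10))  # initial student distribution
alpha = 0.5                           # anchoring weight (assumed)

for step in range(5):
    # One anchored "meta-distillation" step: mix the student with the anchor.
    student = alpha * base + (1 - alpha) * student
    print(step, kl(student, base))    # decays at least geometrically
```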
e0ba48ce9050c19444b23a5c7709477816d7525f9737150f90c93586c6966f29
2026-01-21T00:00:00-05:00
Leveraging LoRA Fine-Tuning and Knowledge Bases for Construction Identification
arXiv:2601.13105v1 Announce Type: new Abstract: This study investigates the automatic identification of the English ditransitive construction by integrating LoRA-based fine-tuning of a large language model with a Retrieval-Augmented Generation (RAG) framework. A binary classification task was conducted on annotated data from the British National Corpus. Results demonstrate that a LoRA-fine-tuned Qwen3-8B model significantly outperformed both a native Qwen3-MAX model and a theory-only RAG system. Detailed error analysis reveals that fine-tuning shifts the model's judgment from surface-form pattern matching towards a more semantically grounded understanding.
https://arxiv.org/abs/2601.13105
Academic Papers
svg
21e2a795061964fa1e0b79b26e9eb69e009efb4aae0460adb0ed9fc395fb21b1
2026-01-21T00:00:00-05:00
Stochastic Gradient Descent for Nonlinear Inverse Problems in Banach Spaces
arXiv:2601.13110v1 Announce Type: new Abstract: Stochastic gradient descent (SGD) and its variants are widely used and highly effective optimization methods in machine learning, especially for neural network training. By using a single datum or a small subset of the data, selected randomly at each iteration, SGD scales well to problem size and has been shown to be effective for solving large-scale inverse problems. In this work, we investigate SGD for solving nonlinear inverse problems in Banach spaces through the lens of iterative regularization. Under general assumptions, we prove almost sure convergence of the iterates to the minimum distance solution and show the regularizing property in expectation under an a priori stopping rule. Further, we establish convergence rates under the conditional stability assumptions for both exact and noisy data. Numerical experiments on Schlieren tomography and electrical impedance tomography are presented to show distinct features of the method.
https://arxiv.org/abs/2601.13110
Academic Papers
svg
8fd0c120a24db7f5915278ea9bb56cf7c31f285645aaae0dddbf6445d7350cb2
2026-01-21T00:00:00-05:00
CORE-T: COherent REtrieval of Tables for Text-to-SQL
arXiv:2601.13111v1 Announce Type: new Abstract: Realistic text-to-SQL workflows often require joining multiple tables. As a result, accurately retrieving the relevant set of tables becomes a key bottleneck for end-to-end performance. We study an open-book setting where queries must be answered over large, heterogeneous table collections pooled from many sources, without clean scoping signals such as database identifiers. Here, dense retrieval (DR) achieves high recall but returns many distractors, while join-aware alternatives often rely on extra assumptions and/or incur high inference overhead. We propose CORE-T, a scalable, training-free framework that enriches tables with LLM-generated purpose metadata and pre-computes a lightweight table-compatibility cache. At inference time, DR returns top-K candidates; a single LLM call selects a coherent, joinable subset, and a simple additive adjustment step restores strongly compatible tables. Across Bird, Spider, and MMQA, CORE-T improves table-selection F1 by up to 22.7 points while retrieving up to 42% fewer tables, improving multi-table execution accuracy by up to 5.0 points on Bird and 6.9 points on MMQA, and using 4-5x fewer tokens than LLM-intensive baselines.
https://arxiv.org/abs/2601.13111
Academic Papers
svg
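A hypothetical skeleton of the retrieve-then-select flow described above. The names dense_top_k, llm_select_joinable, and the compatibility cache are stand-ins we introduce for illustration; they are not CORE-T's actual API:

```python
from typing import Callable

def retrieve_tables(query: str,
                    dense_top_k: Callable[[str, int], list[str]],
                    llm_select_joinable: Callable[[str, list[str]], list[str]],
                    compat: dict[tuple[str, str], float],
                    k: int = 20,
                    tau: float = 0.8) -> list[str]:
    """Sketch: high-recall DR, one LLM selection call, additive adjustment."""
    candidates = dense_top_k(query, k)                 # high-recall DR stage
    selected = llm_select_joinable(query, candidates)  # single LLM call
    # Additive adjustment: restore candidates that are strongly compatible
    # (joinable) with an already-selected table, per the pre-computed cache.
    for table in candidates:
        if table not in selected and any(
                compat.get((table, s), 0.0) >= tau for s in selected):
            selected.append(table)
    return selected
```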
cb3462cf11cc42c5ad0091128c84892983569ca11915da62a3d477d89655c84b
2026-01-21T00:00:00-05:00
CODE: A Contradiction-Based Deliberation Extension Framework for Overthinking Attacks on Retrieval-Augmented Generation
arXiv:2601.13112v1 Announce Type: new Abstract: Introducing reasoning models into Retrieval-Augmented Generation (RAG) systems enhances task performance through step-by-step reasoning, logical consistency, and multi-step self-verification. However, recent studies have shown that reasoning models suffer from overthinking attacks, where models are tricked into generating an unnecessarily high number of reasoning tokens. In this paper, we reveal that such overthinking risk can be inherited by RAG systems equipped with reasoning models, by proposing an end-to-end attack framework named Contradiction-Based Deliberation Extension (CODE). Specifically, CODE develops a multi-agent architecture to construct poisoning samples that are injected into the knowledge base. These samples 1) are highly correlated with the user query, such that they can be retrieved as inputs to the reasoning model; and 2) contain contradictions between the logical and evidence layers that cause models to overthink, and are optimized to exhibit highly diverse styles. Moreover, the inference overhead of CODE is extremely difficult to detect, as no modification is needed on the user query, and the task accuracy remains unaffected. Extensive experiments on two datasets across five commercial reasoning models demonstrate that the proposed attack causes a 5.32x-24.72x increase in reasoning token consumption, without degrading task performance. Finally, we also discuss and evaluate potential countermeasures to mitigate overthinking risks.
https://arxiv.org/abs/2601.13112
Academic Papers
svg
773246150a8183a0675088e23aad99ad3eae0dfd55059079aece42358a3c27f0
2026-01-21T00:00:00-05:00
IntAgent: NWDAF-Based Intent LLM Agent Towards Advanced Next Generation Networks
arXiv:2601.13114v1 Announce Type: new Abstract: Intent-based networks (IBNs) are gaining prominence as an innovative technology that automates network operations through high-level request statements, defining what the network should achieve. In this work, we introduce IntAgent, an intelligent intent LLM agent that integrates NWDAF analytics and tools to fulfill the network operator's intents. Unlike previous approaches, we develop an intent tools engine directly within the NWDAF analytics engine, allowing our agent to utilize live network analytics to inform its reasoning and tool selection. We offer an enriched, 3GPP-compliant data source that enhances the dynamic, context-aware fulfillment of network operator goals, along with an MCP tools server for scheduling, monitoring, and analytics tools. We demonstrate the efficacy of our framework through two practical use cases: ML-based traffic prediction and scheduled policy enforcement, which validate IntAgent's ability to autonomously fulfill complex network intents.
https://arxiv.org/abs/2601.13114
Academic Papers
svg
fe543412fa761af9a5a4ab973653e7695910a4fbb77511582e3eb80cd910c183
2026-01-21T00:00:00-05:00
Agentic Conversational Search with Contextualized Reasoning via Reinforcement Learning
arXiv:2601.13115v1 Announce Type: new Abstract: Large Language Models (LLMs) have become a popular interface for human-AI interaction, supporting information seeking and task assistance through natural, multi-turn dialogue. Within multi-turn dialogues, the context-dependent user intent evolves across interactions, requiring contextual interpretation, query reformulation, and dynamic coordination between retrieval and generation. Existing studies usually follow static rewrite, retrieve, and generate pipelines, which optimize different procedures separately and overlook simultaneous optimization of mixed-initiative actions. Although recent developments in deep search agents demonstrate the effectiveness of jointly optimizing retrieval and generation via reasoning, these approaches focus on single-turn scenarios and might lack the ability to handle multi-turn interactions. We introduce a conversational agent that interleaves search and reasoning across turns, enabling exploratory and adaptive behaviors learned through reinforcement learning (RL) training with rewards tailored towards evolving user goals. The experimental results across four widely used conversational benchmarks demonstrate the effectiveness of our methods by surpassing several existing strong baselines.
https://arxiv.org/abs/2601.13115
Academic Papers
svg
b0f541939907fdb2649caf909642cff4563d12cbb807ff9d2d7a4a60ef0f6cfd
2026-01-21T00:00:00-05:00
xBound: Join Size Lower Bounds
arXiv:2601.13117v1 Announce Type: new Abstract: Cloud database vendors invest substantial resources into their query optimizers, and for good reason. Cardinality estimation, a cornerstone of the optimizer, is critical for the selection of efficient query plans, as well as downstream tasks such as resource allocation and query scheduling. Yet, as many practitioners and researchers have noted, it is also the optimizer's Achilles heel. Prior studies on a number of industrial-strength databases show substantial cardinality estimation errors on all tested systems, with a far greater tendency to underestimate than to overestimate. Unfortunately, cardinality underestimation is more problematic than overestimation, as it misleads the optimizer to choose plans designed for small data, leading to underprovisioned CPU and memory. While previous work on pessimistic cardinality estimation has proposed provable join size upper bounds, such methods can only correct overestimation, leaving the more harmful problem of underestimation unaddressed. To fill this critical gap, we introduce xBound, the very first framework for deriving provable join size lower bounds. xBound successfully reduces underestimation in real systems: On the JOBlight benchmark, it corrects 17.5% of subexpression underestimates in DuckDB and 8.7% in PostgreSQL, while on a Microsoft enterprise workload, it fixes 36.1% of Fabric Data Warehouse's underestimates, demonstrating a significant step towards solving this long-standing problem.
https://arxiv.org/abs/2601.13117
Academic Papers
svg
a240bca0b6fde1ed7a3dd36d803a72d08fa029ac083f26c5c0e3a09231300c40
2026-01-21T00:00:00-05:00
Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization
arXiv:2601.13118v1 Announce Type: new Abstract: Large Language Models (LLMs) are nowadays extensively used for various types of software engineering tasks, primarily code generation. Previous research has shown how suitable prompt engineering could help developers improve their code generation prompts. However, so far, no specific guidelines exist to drive developers towards writing suitable prompts for code generation. In this work, we derive and evaluate development-specific prompt optimization guidelines. First, we use an iterative, test-driven approach to automatically refine code generation prompts, and we analyze the outcome of this process to identify prompt improvement items that lead to test passes. We use such elements to elicit 10 guidelines for prompt improvement, related to better specifying I/O, pre- and post-conditions, providing examples, providing various types of details, or clarifying ambiguities. We conduct an assessment with 50 practitioners, who report their usage of the elicited prompt improvement patterns as well as the patterns' perceived usefulness, which does not always correspond to their actual usage prior to seeing our guidelines. Our results lead to implications not only for practitioners and educators, but also for those aiming to create better LLM-aided software development tools.
https://arxiv.org/abs/2601.13118
Academic Papers
svg
b99a25bd2f9ca9e544bc56ec25c3869955087c48d4952a7f9f89b8a1a34fb42d
2026-01-21T00:00:00-05:00
Responsible AI for General-Purpose Systems: Overview, Challenges, and A Path Forward
arXiv:2601.13122v1 Announce Type: new Abstract: Modern general-purpose AI systems, built using large language and vision models, are capable of performing a range of tasks like writing text articles, generating and debugging code, querying databases, and translating from one language to another, which has made them quite popular across industries. However, there are risks like hallucinations, toxicity, and stereotypes in their output that make them untrustworthy. We review various risks and vulnerabilities of modern general-purpose AI along eight widely accepted responsible AI (RAI) principles (fairness, privacy, explainability, robustness, safety, truthfulness, governance, and sustainability) and compare how these risks are absent, less severe, or more easily mitigated in traditional task-specific counterparts. We argue that this is due to the non-deterministically high Degree of Freedom in output (DoFo) of general-purpose AI (unlike the deterministically constant or low DoFo of traditional task-specific AI systems), and that there is a need to rethink our approach to RAI for general-purpose AI. Following this, we derive the C2V2 (Control, Consistency, Value, Veracity) desiderata to meet the RAI requirements for future general-purpose AI systems, and discuss how recent efforts in AI alignment, retrieval-augmented generation, reasoning enhancements, etc. fare along one or more of the desiderata. We believe that the goal of developing responsible general-purpose AI can be achieved by formally modeling application- or domain-dependent RAI requirements along the C2V2 dimensions, and taking a system design approach to suitably combine various techniques to meet the desiderata.
https://arxiv.org/abs/2601.13122
Academic Papers
svg
29ad0f0ff402b916c210ec0281c62bc74ead51a0f917a38d10e56f1f5ea4c645
2026-01-21T00:00:00-05:00
A Streamlined Attention-Based Network for Descriptor Extraction
arXiv:2601.13126v1 Announce Type: new Abstract: We introduce SANDesc, a Streamlined Attention-Based Network for Descriptor extraction that aims to improve on existing architectures for keypoint description. Our descriptor network learns to compute descriptors that improve matching without modifying the underlying keypoint detector. We employ a revised U-Net-like architecture enhanced with Convolutional Block Attention Modules and residual paths, enabling effective local representation while maintaining computational efficiency. We refer to the building blocks of our model as Residual U-Net Blocks with Attention. The model is trained using a modified triplet loss in combination with a curriculum learning-inspired hard negative mining strategy, which improves training stability. Extensive experiments on HPatches, MegaDepth-1500, and the Image Matching Challenge 2021 show that training SANDesc on top of existing keypoint detectors leads to improved results on multiple matching tasks compared to the original keypoint descriptors. At the same time, SANDesc has a model complexity of just 2.4 million parameters. As a further contribution, we introduce a new urban dataset featuring 4K images and pre-calibrated intrinsics, designed to evaluate feature extractors. On this benchmark, SANDesc achieves substantial performance gains over the existing descriptors while operating with limited computational resources.
https://arxiv.org/abs/2601.13126
Academic Papers
svg
9d8acabb3075b05ff3eb4da1bee9f92f6b36549b2edee4459bfa3cc5c09f510b
2026-01-21T00:00:00-05:00
PhaseMark: A Post-hoc, Optimization-Free Watermarking of AI-generated Images in the Latent Frequency Domain
arXiv:2601.13128v1 Announce Type: new Abstract: The proliferation of hyper-realistic images from Latent Diffusion Models (LDMs) demands robust watermarking, yet existing post-hoc methods are prohibitively slow due to iterative optimization or inversion processes. We introduce PhaseMark, a single-shot, optimization-free framework that directly modulates the phase in the VAE latent frequency domain. This approach makes PhaseMark thousands of times faster than optimization-based techniques while achieving state-of-the-art resilience against severe attacks, including regeneration, without degrading image quality. We analyze four modulation variants, revealing a clear performance-quality trade-off. PhaseMark demonstrates a new paradigm where efficient, resilient watermarking is achieved by exploiting intrinsic latent properties.
https://arxiv.org/abs/2601.13128
Academic Papers
svg
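To make "phase modulation in the latent frequency domain" concrete, here is a single-channel numpy sketch under our own assumptions; it is a generic illustration, not one of the paper's four modulation variants:

```python
import numpy as np

def embed_phase_watermark(latent: np.ndarray, key_phase: np.ndarray,
                          strength: float = 0.2) -> np.ndarray:
    """Shift the phase of the latent's 2-D spectrum by a keyed pattern."""
    spec = np.fft.fft2(latent)
    marked = np.abs(spec) * np.exp(1j * (np.angle(spec) + strength * key_phase))
    # An arbitrary phase shift breaks Hermitian symmetry, so we keep only
    # the real part and discard the small imaginary residual.
    return np.real(np.fft.ifft2(marked))

rng = np.random.default_rng(42)
z = rng.standard_normal((64, 64))          # stand-in for one VAE latent channel
key = rng.uniform(-np.pi, np.pi, z.shape)  # secret watermark key (assumed)
z_marked = embed_phase_watermark(z, key)
print(np.abs(z - z_marked).mean())         # perturbation stays small
```

Detection would correlate the extracted phase shift against the key; as with the embedding step, that is our extrapolation rather than PhaseMark's published detector.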
7790ddb68d8cc9c9eebb8883babdf9906d1aa5a6f0382b1d8e0183847cb3fd21
2026-01-21T00:00:00-05:00
GaussExplorer: 3D Gaussian Splatting for Embodied Exploration and Reasoning
arXiv:2601.13132v1 Announce Type: new Abstract: We present GaussExplorer, a framework for embodied exploration and reasoning built on 3D Gaussian Splatting (3DGS). While prior approaches to language-embedded 3DGS have made meaningful progress in aligning simple text queries with Gaussian embeddings, they are generally optimized for relatively simple queries and struggle to interpret more complex, compositional language queries. Alternative studies based on object-centric RGB-D structured memories provide spatial grounding but are constrained by pre-fixed viewpoints. To address these issues, GaussExplorer introduces Vision-Language Models (VLMs) on top of 3DGS to enable question-driven exploration and reasoning within 3D scenes. We first identify pre-captured images that are most correlated with the query question, and subsequently adjust them into novel viewpoints to more accurately capture visual information for better reasoning by VLMs. Experiments show that our method outperforms existing methods on several benchmarks, demonstrating the effectiveness of integrating VLM-based reasoning with 3DGS for embodied tasks.
https://arxiv.org/abs/2601.13132
Academic Papers
svg
6d16ce18c2ca01254ab610cc3746702948718218c3d43598b93278c540e8721d
2026-01-21T00:00:00-05:00
CLIP-Guided Adaptable Self-Supervised Learning for Human-Centric Visual Tasks
arXiv:2601.13133v1 Announce Type: new Abstract: Human-centric visual analysis plays a pivotal role in diverse applications, including surveillance, healthcare, and human-computer interaction. With the emergence of large-scale unlabeled human image datasets, there is an increasing need for a general unsupervised pre-training model capable of supporting diverse human-centric downstream tasks. To achieve this goal, we propose CLASP (CLIP-guided Adaptable Self-suPervised learning), a novel framework designed for unsupervised pre-training in human-centric visual tasks. CLASP leverages the powerful vision-language model CLIP to generate both low-level (e.g., body parts) and high-level (e.g., attributes) semantic pseudo-labels. These multi-level semantic cues are then integrated into the learned visual representations, enriching their expressiveness and generalizability. Recognizing that different downstream tasks demand varying levels of semantic granularity, CLASP incorporates a Prompt-Controlled Mixture-of-Experts (MoE) module. MoE dynamically adapts feature extraction based on task-specific prompts, mitigating potential feature conflicts and enhancing transferability. Furthermore, CLASP employs a multi-task pre-training strategy, where part- and attribute-level pseudo-labels derived from CLIP guide the representation learning process. Extensive experiments across multiple benchmarks demonstrate that CLASP consistently outperforms existing unsupervised pre-training methods, advancing the field of human-centric visual analysis.
https://arxiv.org/abs/2601.13133
Academic Papers
svg
a7aefa98b111ae8a1681edb40b6bc035e080058810928d58ae1d75fa87602eaa
2026-01-21T00:00:00-05:00
Earth Embeddings as Products: Taxonomy, Ecosystem, and Standardized Access
arXiv:2601.13134v1 Announce Type: new Abstract: Geospatial Foundation Models (GFMs) provide powerful representations, but high compute costs hinder their widespread use. Pre-computed embedding data products offer a practical "frozen" alternative, yet they currently exist in a fragmented ecosystem of incompatible formats and resolutions. This lack of standardization creates an engineering bottleneck that prevents meaningful model comparison and reproducibility. We formalize this landscape through a three-layer taxonomy: Data, Tools, and Value. We survey existing products to identify interoperability barriers. To bridge this gap, we extend TorchGeo with a unified API that standardizes the loading and querying of diverse embedding products. By treating embeddings as first-class geospatial datasets, we decouple downstream analysis from model-specific engineering, providing a roadmap for more transparent and accessible Earth observation workflows.
https://arxiv.org/abs/2601.13134
Academic Papers
svg
526e868411d297b767ff474e6de1a930612c5f959891b0e5915dc165aa227126
2026-01-21T00:00:00-05:00
Adversarial Alignment: Ensuring Value Consistency in Large Language Models for Sensitive Domains
arXiv:2601.13137v1 Announce Type: new Abstract: With the wide application of large language models (LLMs), the problems of bias and value inconsistency in sensitive domains have gradually emerged, especially in terms of race, society and politics. In this paper, we propose an adversarial alignment framework, which enhances the value consistency of the model in sensitive domains through continued pre-training, instruction fine-tuning and adversarial training. In adversarial training, we use the Attacker to generate controversial queries, the Actor to generate responses with value consistency, and the Critic to filter and ensure response quality. Furthermore, we train a Value-Consistent Large Language Model, VC-LLM, for sensitive domains, and construct a bilingual evaluation dataset in Chinese and English. The experimental results show that VC-LLM performs better than the existing mainstream models in both Chinese and English tests, verifying the effectiveness of the method. Warning: This paper contains examples of LLMs that are offensive or harmful in nature.
https://arxiv.org/abs/2601.13137
Academic Papers
svg
939265498c534950b7de4c8add8b6c12518521613d96ac0590b31e4fdba27c8c
2026-01-21T00:00:00-05:00
From Human to Machine Refactoring: Assessing GPT-4's Impact on Python Class Quality and Readability
arXiv:2601.13139v1 Announce Type: new Abstract: Refactoring is a software engineering practice that aims to improve code quality without altering program behavior. Although automated refactoring tools have been extensively studied, their practical applicability remains limited. Recent advances in Large Language Models (LLMs) have introduced new opportunities for automated code refactoring. The evaluation of such an LLM-driven approach, however, leaves unanswered questions about its effects on code quality. In this paper, we present a comprehensive empirical study on LLM-driven refactoring using GPT-4o, applied to 100 Python classes from the ClassEval benchmark. Unlike prior work, our study explores a wide range of class-level refactorings inspired by Fowler's catalog and evaluates their effects from three complementary perspectives: (i) behavioral correctness, verified through unit tests; (ii) code quality, assessed via Pylint, Flake8, and SonarCloud; and (iii) readability, measured using a state-of-the-art readability tool. Our findings show that GPT-4o generally produces behavior-preserving refactorings that reduce code smells and improve quality metrics, albeit at the cost of decreased readability. Our results provide new evidence on the capabilities and limitations of LLMs in automated software refactoring, highlighting directions for integrating LLMs into practical refactoring workflows.
https://arxiv.org/abs/2601.13139
Academic Papers
svg
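Two of the checks in the study above, behavioral correctness via unit tests and code quality via linters, are easy to reproduce in miniature. The sketch below is our own minimal setup, not the authors' pipeline: the file names are hypothetical, and pytest and flake8 are assumed to be installed on PATH.

```python
# Minimal sketch of a test-and-lint check for an LLM-refactored class, in the
# spirit of arXiv:2601.13139. File names are hypothetical placeholders.
import subprocess

def tests_pass(test_file: str) -> bool:
    """Behavior preservation: the refactored class must keep all tests green."""
    result = subprocess.run(["pytest", "-q", test_file], capture_output=True, text=True)
    return result.returncode == 0

def flake8_findings(source_file: str) -> int:
    """Rough quality proxy: Flake8 reports one finding per output line."""
    result = subprocess.run(["flake8", source_file], capture_output=True, text=True)
    return len([ln for ln in result.stdout.splitlines() if ln.strip()])

# Compare the original class against its LLM-refactored version.
for variant in ("bank_account_original.py", "bank_account_refactored.py"):
    print(variant, "findings:", flake8_findings(variant))
print("tests green after refactoring:", tests_pass("test_bank_account.py"))
```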
7815fe4c5b1bf5b0033ef94e8d732ebc8a18c7463617359c0eb3843a8c3f0f15
2026-01-21T00:00:00-05:00
TVWorld: Foundations for Remote-Control TV Agents
arXiv:2601.13142v1 Announce Type: new Abstract: Recent large vision-language models (LVLMs) have demonstrated strong potential for device control. However, existing research has primarily focused on point-and-click (PnC) interaction, while remote-control (RC) interaction commonly encountered in everyday TV usage remains largely underexplored. To fill this gap, we introduce \textbf{TVWorld}, an offline graph-based abstraction of real-world TV navigation that enables reproducible and deployment-free evaluation. On this basis, we derive two complementary benchmarks that comprehensively assess TV-use capabilities: \textbf{TVWorld-N} for topology-aware navigation and \textbf{TVWorld-G} for focus-aware grounding. These benchmarks expose a key limitation of existing agents: insufficient topology awareness for focus-based, long-horizon TV navigation. Motivated by this finding, we propose a \emph{Topology-Aware Training} framework that injects topology awareness into LVLMs. Using this framework, we develop \textbf{TVTheseus}, a foundation model specialized for TV navigation. TVTheseus achieves a success rate of $68.3\%$ on TVWorld-N, surpassing strong closed-source baselines such as Gemini 3 Flash and establishing state-of-the-art (SOTA) performance. Additional analyses further provide valuable insights into the development of effective TV-use agents.
https://arxiv.org/abs/2601.13142
Academic Papers
svg
e67d3b3088fdeeda56d433198ae8f4173cb1c8c5e487ecc239ca35f72f04f015
2026-01-21T00:00:00-05:00
FastAV: Efficient Token Pruning for Audio-Visual Large Language Model Inference
arXiv:2601.13143v1 Announce Type: new Abstract: In this work, we present FastAV, the first token pruning framework tailored for audio-visual large language models (AV-LLMs). While token pruning has been actively explored in standard large language models (LLMs) and large vision-language models (LVLMs), its application to AV-LLMs has received little attention, even though multimodal integration substantially increases their token demands. To address this gap, we introduce a pruning strategy that utilizes attention weights to identify tokens emphasized at different stages and estimates their importance. Building on this analysis, FastAV applies a two-stage pruning strategy: (1) global pruning in intermediate layers to remove broadly less influential tokens, and (2) fine pruning in later layers considering the impact on next token generation. Notably, our method does not rely on full attention maps, which makes it fully compatible with efficient attention mechanisms such as FlashAttention. Extensive experiments demonstrate that FastAV reduces FLOPs by more than 40% on two representative AV-LLMs, while preserving or even improving model performance. An illustrative sketch of attention-guided pruning follows this record.
https://arxiv.org/abs/2601.13143
Academic Papers
svg
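FastAV scores tokens by the attention they receive and prunes the rest. The sketch below is a heavily simplified, single-stage version of that idea: it materializes a full attention map for clarity, which the actual method explicitly avoids (for FlashAttention compatibility), and all shapes and the keep ratio are illustrative assumptions.

```python
# Simplified, framework-agnostic sketch of attention-guided token pruning in
# the spirit of FastAV (arXiv:2601.13143). Shapes and ratios are illustrative.
import torch

def prune_tokens(hidden: torch.Tensor, attn: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """
    hidden: (num_tokens, dim) multimodal token states at some intermediate layer.
    attn:   (num_heads, num_tokens, num_tokens) attention weights of that layer.
    Keeps the int(keep_ratio * num_tokens) tokens that receive the most attention.
    """
    # Importance of token j = attention it receives, averaged over heads and queries.
    importance = attn.mean(dim=0).mean(dim=0)               # (num_tokens,)
    k = max(1, int(keep_ratio * hidden.shape[0]))
    keep = torch.topk(importance, k).indices.sort().values  # preserve original order
    return hidden[keep]

tokens = torch.randn(1024, 768)                  # e.g., fused audio-visual tokens
attn = torch.softmax(torch.randn(12, 1024, 1024), dim=-1)
pruned = prune_tokens(tokens, attn, keep_ratio=0.6)  # -> (614, 768)
```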
c584d312e0c4db64c67a1f1eeeb8b9543e0a07a833a5e7db524c74ce5767adeb
2026-01-21T00:00:00-05:00
OPTIMUM-DERAM: Highly Consistent, Scalable, and Secure Multi-Object Memory using RLNC
arXiv:2601.13146v1 Announce Type: new Abstract: This paper introduces OPTIMUM-DERAM, a highly consistent, scalable, secure, and decentralized shared memory solution. Traditional distributed shared memory implementations offer multi-object support by multi-threading a single object memory instance over the same set of data hosts. While theoretically sound, the amount of resources required made such solutions prohibitively expensive in practical systems. OPTIMUM-DERAM proposes a decentralized, reconfigurable, atomic read/write shared memory (DeRAM) that: (i) achieves improved performance and storage scalability by leveraging Random Linear Network Codes (RLNC); (ii) scales in the number of supported atomic objects by introducing a new object placement and discovery approach based on a consistent hashing ring; (iii) scales in the number of participants by allowing dynamic joins and departures, leveraging a blockchain oracle to serve as a registry service; and (iv) is secure against malicious behavior by tolerating Byzantine failures. Experimental results over a globally distributed set of nodes help us realize the performance and scalability gains of OPTIMUM-DERAM over previous distributed shared memory solutions (i.e., the ABD algorithm [3]). An illustrative sketch of the consistent-hashing placement follows this record.
https://arxiv.org/abs/2601.13146
Academic Papers
svg
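The object placement and discovery mechanism above is based on a consistent hashing ring, a standard structure worth illustrating. The sketch below is generic, not OPTIMUM-DERAM's exact scheme; the virtual-node count, SHA-256 hashing, and the clockwise replica walk are our assumptions.

```python
# Generic consistent-hashing-ring sketch illustrating the object placement idea
# described in arXiv:2601.13146. Parameters are illustrative, not the paper's.
import bisect
import hashlib

def _h(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class HashRing:
    def __init__(self, hosts: list, vnodes: int = 64):
        # Each host is mapped to many virtual points on the ring for balance.
        self._ring = sorted((_h(f"{h}#{i}"), h) for h in hosts for i in range(vnodes))
        self._points = [p for p, _ in self._ring]

    def lookup(self, object_id: str, replicas: int = 3) -> list:
        """Walk clockwise from the object's hash, collecting distinct hosts."""
        idx = bisect.bisect(self._points, _h(object_id)) % len(self._ring)
        owners = []
        while len(owners) < replicas:
            host = self._ring[idx][1]
            if host not in owners:
                owners.append(host)
            idx = (idx + 1) % len(self._ring)
        return owners

ring = HashRing(["hostA", "hostB", "hostC", "hostD"])
print(ring.lookup("object-42"))  # e.g., ['hostC', 'hostA', 'hostD']
```

Adding or removing a host moves only the keys adjacent to its ring points, which is what lets the membership change dynamically without a global reshuffle.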
f0dd3c0d312ce61ab1c2bfe5f2a1a3898888aeef49926d14bf23de3abc8c89d0
2026-01-21T00:00:00-05:00
ICo3D: An Interactive Conversational 3D Virtual Human
arXiv:2601.13148v1 Announce Type: new Abstract: This work presents Interactive Conversational 3D Virtual Human (ICo3D), a method for generating an interactive, conversational, and photorealistic 3D human avatar. Based on multi-view captures of a subject, we create an animatable 3D face model and a dynamic 3D body model, both rendered by splatting Gaussian primitives. Once merged together, they represent a lifelike virtual human avatar suitable for real-time user interactions. We equip our avatar with an LLM for conversational ability. During conversation, the audio speech of the avatar is used as a driving signal to animate the face model, enabling precise synchronization. We describe improvements to our dynamic Gaussian models that enhance photorealism: SWinGS++ for body reconstruction and HeadGaS++ for face reconstruction, and we also provide a solution for merging the separate face and body models without artifacts. We further present a demo of the complete system, showcasing several use cases of real-time conversation with the 3D avatar. Our approach offers a fully integrated virtual avatar experience, supporting both spoken and written interactions in immersive environments. ICo3D is applicable to a wide range of fields, including gaming, virtual assistance, and personalized education, among others. Project page: https://ico3d.github.io/
https://arxiv.org/abs/2601.13148
Academic Papers
svg
a87b6e4cb6044f5cb172d87716053b79112b45f3aaee471a664e5e784f47afe2
2026-01-21T00:00:00-05:00
Probe and Skip: Self-Predictive Token Skipping for Efficient Long-Context LLM Inference
arXiv:2601.13155v1 Announce Type: new Abstract: Long-context inference enhances the reasoning capability of Large Language Models (LLMs) while incurring significant computational overhead. Token-oriented methods, such as pruning and skipping, have shown promise in reducing inference latency, but still suffer from inherently limited acceleration potential, outdated proxy signals, and redundancy interference, thus yielding suboptimal speed-accuracy trade-offs. To address these challenges, we propose SPTS (Self-Predictive Token Skipping), a training-free framework for efficient long-context LLM inference. Specifically, motivated by the idea of probing the influence of the targeted skipping layers, we design two component-specific strategies for selective token skipping: Partial Attention Probing (PAP) for multi-head attention, which selects informative tokens by performing partial forward attention computation, and Low-rank Transformation Probing (LTP) for the feed-forward network, which constructs a low-rank proxy network to predict token transformations. Furthermore, a Multi-Stage Delayed Pruning (MSDP) strategy reallocates the skipping budget and progressively prunes redundant tokens across layers. Extensive experiments demonstrate the effectiveness of our method, achieving up to 2.46$\times$ and 2.29$\times$ speedups for prefilling and end-to-end generation, respectively, while maintaining state-of-the-art model performance. The source code will be publicly available upon paper acceptance. A toy sketch of the low-rank proxy idea follows this record.
https://arxiv.org/abs/2601.13155
Academic Papers
svg
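Low-rank Transformation Probing predicts each token's FFN transformation with a cheap proxy so that low-impact tokens can be skipped. The toy sketch below conveys the flavor under strong simplifications of our own: we linearize the FFN (ignoring its nonlinearity) and build the proxy via truncated SVD; the rank and skipping budget are arbitrary choices, not the paper's.

```python
# Toy sketch of a low-rank proxy for FFN token skipping, loosely inspired by
# arXiv:2601.13155. The FFN is linearized here, which is a simplification.
import torch

def low_rank_proxy(W_in: torch.Tensor, W_out: torch.Tensor, rank: int):
    """Compress the FFN's (linearized) weight product into rank-r factors."""
    U, S, Vh = torch.linalg.svd(W_out @ W_in, full_matrices=False)
    return (U[:, :rank] * S[:rank]), Vh[:rank]   # shapes (d, r) and (r, d)

def tokens_to_process(x: torch.Tensor, A: torch.Tensor, B: torch.Tensor, budget: int):
    """Cheaply predict each token's transformation magnitude; keep the largest."""
    predicted = (x @ B.T) @ A.T                  # (n, d) rank-r estimate
    scores = predicted.norm(dim=-1)
    return torch.topk(scores, budget).indices

d, hidden, n = 512, 2048, 1024
W_in = torch.randn(hidden, d) / d ** 0.5
W_out = torch.randn(d, hidden) / hidden ** 0.5
A, B = low_rank_proxy(W_in, W_out, rank=16)
x = torch.randn(n, d)
keep = tokens_to_process(x, A, B, budget=n // 2)  # indices of tokens to run fully
```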
b372dd1e7d9e658aeb4f2162f74aee8e5c1e7717544eaee692db54582e5806b1
2026-01-21T00:00:00-05:00
Training instability in deep learning follows low-dimensional dynamical principles
arXiv:2601.13160v1 Announce Type: new Abstract: Deep learning systems achieve remarkable empirical performance, yet the stability of the training process itself remains poorly understood. Training unfolds as a high-dimensional dynamical system in which small perturbations to optimization, data, parameters, or learning signals can induce abrupt and irreversible collapse, undermining reproducibility and scalability. We propose a unified dynamical perspective that characterizes training stability as an intrinsic property of learning systems, organized along four interacting dimensions: optimization, environmental/data, parametric, and learning-signal stability. We operationalize this perspective through controlled perturbation auditing of training trajectories, probing how learning dynamics respond to structured disturbances without modifying learning algorithms. Across reinforcement learning and large language model training, we identify three recurring regularities: high final performance is frequently decoupled from training stability; controlled stochasticity consistently buffers learning dynamics across paradigms; and deviations in low-dimensional latent meta-states systematically precede observable performance collapse. Together, these findings establish training stability as a measurable and comparable dynamical property of learning systems, providing a descriptive foundation for studying learning dynamics beyond final performance outcomes.
https://arxiv.org/abs/2601.13160
Academic Papers
svg
ef2617ec9bc48c3f98ebf6fbc90d277959f44861f62a124270b503bfd26db2cf
2026-01-21T00:00:00-05:00
NeuroShield: A Neuro-Symbolic Framework for Adversarial Robustness
arXiv:2601.13162v1 Announce Type: new Abstract: Adversarial vulnerability and lack of interpretability are critical limitations of deep neural networks, especially in safety-sensitive settings such as autonomous driving. We introduce NeuroShield, a neuro-symbolic framework that integrates symbolic rule supervision into neural networks to enhance both adversarial robustness and explainability. Domain knowledge is encoded as logical constraints over appearance attributes such as shape and color, and enforced through semantic and symbolic logic losses applied during training. Using the GTSRB dataset, we evaluate robustness against FGSM and PGD attacks at a standard $\ell_\infty$ perturbation budget of $\varepsilon = 8/255$. Relative to clean training, standard adversarial training provides modest improvements in robustness ($\sim$10 percentage points). Conversely, our FGSM-Neuro-Symbolic and PGD-Neuro-Symbolic models achieve substantially larger gains, improving adversarial accuracy by 18.1% and 17.35% over their corresponding adversarial-training baselines, representing roughly a three-fold larger robustness gain than standard adversarial training provides when both are measured relative to the same clean-training baseline, without reducing clean-sample accuracy. Compared to transformer-based defenses such as LNL-MoEx, which require heavy architectures and extensive data augmentation, our PGD-Neuro-Symbolic variant attains comparable or superior robustness using a ResNet18 backbone trained for 10 epochs. These results show that symbolic reasoning offers an effective path to robust and interpretable AI. A toy sketch of a differentiable logic loss follows this record.
https://arxiv.org/abs/2601.13162
Academic Papers
svg
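The abstract above encodes domain knowledge as logical constraints over appearance attributes and enforces them through logic losses. Below is a toy example of one way such a constraint can be made differentiable; the rule ("stop sign implies red and octagonal"), the class index, and the product t-norm softening are our illustrative assumptions, not the paper's exact formulation.

```python
# Toy differentiable logic loss in the spirit of arXiv:2601.13162.
# Rule: "stop sign => red and octagonal", softened with a product t-norm.
import torch

def logic_loss(class_probs: torch.Tensor, attr_probs: torch.Tensor) -> torch.Tensor:
    """
    class_probs: (batch, n_classes) softmax over traffic-sign classes.
    attr_probs:  (batch, 2) sigmoid probabilities for [is_red, is_octagonal].
    Penalizes mass on 'stop sign' (class 0 here) when its attributes are absent.
    """
    p_stop = class_probs[:, 0]
    p_red, p_oct = attr_probs[:, 0], attr_probs[:, 1]
    # Violation of the implication: P(stop) * (1 - P(red and octagonal)).
    violation = p_stop * (1.0 - p_red * p_oct)
    return violation.mean()

class_probs = torch.softmax(torch.randn(8, 43), dim=-1)  # e.g., 43 GTSRB classes
attr_probs = torch.sigmoid(torch.randn(8, 2))            # [is_red, is_octagonal]
loss = logic_loss(class_probs, attr_probs)               # added to the usual task loss
```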
60f41acab5605389f850e81d23c2b159847463b9d1cd15e01ad2675703c1f6e5
2026-01-21T00:00:00-05:00
Optimistic Imprecise Shortest Watchtower in 1.5D and 2.5D
arXiv:2601.13165v1 Announce Type: new Abstract: A 1.5D imprecise terrain is an $x$-monotone polyline with fixed $x$-coordinates; the $y$-coordinate of each vertex is not fixed but is constrained to lie in a given vertical interval. A 2.5D imprecise terrain is a triangulation with fixed $x$- and $y$-coordinates, but the $z$-coordinate of each vertex is constrained to a given vertical interval. Given an imprecise terrain with $n$ intervals, the optimistic shortest watchtower problem asks for a terrain $T$ realized by a precise point in each vertical interval such that the height of the shortest vertical line segment whose lower endpoint lies on $T$ and whose upper endpoint sees the entire terrain is minimized. In this paper, we present a linear-time algorithm to solve the 1.5D optimistic shortest watchtower problem exactly. For the discrete version of the 2.5D case (where the watchtower must be placed on a vertex of $T$), we give an additive approximation scheme running in $O(\frac{{OPT}}{\varepsilon}n^3)$ time, achieving a solution within an additive error of $\varepsilon$ from the optimal solution value ${OPT}$.
https://arxiv.org/abs/2601.13165
Academic Papers
svg
c568f33aff7fea9b24edf2f02cafa21bb9a1d98a9626bac1f70bf38f4174988e
2026-01-21T00:00:00-05:00
From 100,000+ images to winning the first brain MRI foundation model challenges: Sharing lessons and models
arXiv:2601.13166v1 Announce Type: new Abstract: Developing Foundation Models for medical image analysis is essential to overcome the unique challenges of radiological tasks. The first challenges of this kind for 3D brain MRI, SSL3D and FOMO25, were held at MICCAI 2025. Our solution ranked first in tracks of both contests. It relies on a U-Net CNN architecture combined with strategies leveraging anatomical priors and neuroimaging domain knowledge. Notably, our models trained 1-2 orders of magnitude faster and were 10 times smaller than competing transformer-based approaches. Models are available here: https://github.com/jbanusco/BrainFM4Challenges.
https://arxiv.org/abs/2601.13166
Academic Papers
svg
1437f6432f80419c3b83b63db4079ea70512d5ddaba9afa61fa6ac2dfa1de13b
2026-01-21T00:00:00-05:00
QoS-Aware Energy Optimization via Cell Switching in Heterogeneous Networks
arXiv:2601.13174v1 Announce Type: new Abstract: The growing demand for mobile data services in dense urban areas has intensified the need for energy-efficient radio access networks (RANs) in future 6G systems. In this context, one promising strategy is cell switching (CS), which dynamically deactivates underutilized small base stations (SBSs) to reduce power consumption. However, while previous research explored CS primarily based on traffic load, ensuring user quality of service (QoS) under realistic channel conditions remains a challenge. In this paper, we propose a novel optimization-driven CS framework that jointly minimizes network power consumption and guarantees user QoS by enforcing a minimum received power threshold as part of offloading decisions. In contrast to prior load-based or learning-based approaches, our method explicitly integrates channel-aware information into the CS process, thus ensuring reliable service quality for offloaded users. Furthermore, the flexibility of the proposed framework enables operators to adapt system behavior between energy-saving and QoS-preserving modes by tuning a single design parameter. Simulation results demonstrate that the proposed approach achieves up to 30% power savings as compared to baseline methods while fully maintaining QoS under diverse network conditions. The scalability and robustness of the proposed method in realistic heterogeneous networks (HetNets) further highlight its potential as a practical solution for sustainable 6G deployments. An illustrative sketch of the QoS-constrained switching rule follows this record.
https://arxiv.org/abs/2601.13174
Academic Papers
svg
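The key mechanism above is an offloading decision gated by a minimum received power threshold: a small cell may sleep only if every one of its users can be served elsewhere above that threshold. A toy version of that feasibility check might look like the following; the power values, threshold, and data layout are illustrative.

```python
# Toy feasibility check for QoS-aware cell switching, in the spirit of
# arXiv:2601.13174. All numbers and names are illustrative.
def can_switch_off(sbs: str, users: list, rx_power_dbm: dict,
                   active_cells: list, threshold_dbm: float) -> bool:
    """True iff every user of `sbs` has some other active cell whose received
    power clears the QoS threshold, so the SBS can be deactivated safely."""
    others = [c for c in active_cells if c != sbs]
    return all(
        any(rx_power_dbm[(user, cell)] >= threshold_dbm for cell in others)
        for user in users
    )

rx = {("u1", "macro"): -92.0, ("u1", "sbs1"): -75.0,
      ("u2", "macro"): -105.0, ("u2", "sbs1"): -80.0}
# u2's macro link (-105 dBm) misses the -100 dBm threshold, so sbs1 must stay on.
print(can_switch_off("sbs1", ["u1", "u2"], rx, ["macro", "sbs1"], threshold_dbm=-100.0))
```

Raising or lowering the threshold is the single design parameter the abstract mentions: a stricter threshold preserves QoS at the cost of fewer sleeping cells.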
e5f0c136011460bc78e956049f832a46600f67baf33b77b51dbb326dc07fb3db
2026-01-21T00:00:00-05:00
Helical Tendon-Driven Continuum Robot with Programmable Follow-the-Leader Operation
arXiv:2601.13177v1 Announce Type: new Abstract: Spinal cord stimulation (SCS) is primarily utilized for pain management and has recently demonstrated efficacy in promoting functional recovery in patients with spinal cord injury. Effective stimulation of motor neurons ideally requires the placement of SCS leads in the ventral or lateral epidural space where the corticospinal and rubrospinal motor fibers are located. This poses significant challenges with the current standard of manual steering. In this study, we present a static modeling approach for the ExoNav, a steerable robotic tool designed to facilitate precise navigation to the ventral and lateral epidural space. A Cosserat rod framework is employed to establish the relationship between tendon actuation forces and the robot's overall shape. The effects of gravity, as an example of an external load, are investigated and implemented in the model and simulation. The experimental results indicate RMSE values of 1.76mm, 2.33mm, 2.18mm, and 1.33mm across four tested prototypes. Based on the helical shape of the ExoNav upon actuation, it is capable of performing follow-the-leader (FTL) motion by adding insertion and rotation DoFs to this robotic system, which is shown in simulation and experimentally. The proposed simulation can calculate optimum tendon tensions to follow the desired FTL paths while gravity-induced robot deformations are present. Three FTL experimental trials were conducted, and the end-effector position showed repeatable alignment with the desired path, with a maximum RMSE of 3.75mm. Ultimately, a phantom model demonstration was conducted in which the teleoperated robot successfully navigated to the lateral and ventral spinal cord targets. Additionally, the user was able to navigate to the dorsal root ganglia, illustrating ExoNav's potential in both motor function recovery and pain management.
https://arxiv.org/abs/2601.13177
Academic Papers
svg
e31588f599b650092f77c0f52f8efc4fc853b235eb6236b2dcfde28e99eeb3f5
2026-01-21T00:00:00-05:00
Medical Triage as Pairwise Ranking: A Benchmark for Urgency in Patient Portal Messages
arXiv:2601.13178v1 Announce Type: new Abstract: Medical triage is the task of allocating medical resources and prioritizing patients based on medical need. This paper introduces the first large-scale public dataset for studying medical triage in the context of asynchronous outpatient portal messages. Our novel task formulation views patient message triage as a pairwise inference problem, where we train LLMs to choose "which message is more medically urgent" in a head-to-head tournament-style re-sort of a physician's inbox. Our novel benchmark PMR-Bench contains 1,569 unique messages and 2,000+ high-quality test pairs for pairwise medical urgency assessment alongside a scalable training data generation pipeline. PMR-Bench includes samples that contain both unstructured patient-written messages and real electronic health record (EHR) data, emulating a real-world medical triage scenario. We develop a novel automated data annotation strategy to provide LLMs with in-domain guidance on this task. The resulting data is used to train two model classes, UrgentReward and UrgentSFT, leveraging Bradley-Terry and next token prediction objectives, respectively, to perform pairwise urgency classification. We find that UrgentSFT achieves top performance on PMR-Bench, with UrgentReward showing distinct advantages in low-resource settings. For example, UrgentSFT-8B and UrgentReward-8B provide a 15- and 16-point boost, respectively, on inbox sorting metrics over off-the-shelf 8B models. Paper resources can be found at https://tinyurl.com/Patient-Message-Triage A minimal sketch of the Bradley-Terry pairwise objective follows this record.
https://arxiv.org/abs/2601.13178
Academic Papers
svg
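UrgentReward is trained with a Bradley-Terry objective over urgency pairs. The sketch below shows that objective on a stand-in linear scorer over message embeddings; the real system fine-tunes an LLM, so everything here beyond the loss itself is an assumption for illustration.

```python
# Minimal Bradley-Terry pairwise ranking objective, as used in reward-style
# rankers like UrgentReward (arXiv:2601.13178). The scorer is a stand-in.
import torch

def bradley_terry_loss(score_winner: torch.Tensor, score_loser: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(s_w - s_l): push the more-urgent message's score above the other's."""
    return -torch.nn.functional.logsigmoid(score_winner - score_loser).mean()

dim = 128
scorer = torch.nn.Linear(dim, 1)
more_urgent = torch.randn(32, dim)   # embeddings of the more urgent messages
less_urgent = torch.randn(32, dim)   # embeddings of their paired counterparts
loss = bradley_terry_loss(scorer(more_urgent).squeeze(-1),
                          scorer(less_urgent).squeeze(-1))
loss.backward()  # gradients flow into the scorer as in standard reward modeling
```

At inference, the learned scalar scores induce a total order over the inbox, which is what the tournament-style re-sort exploits.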
1fcaae2dd9f90e41e175ad4a1321cb7956fcac74da584b57ce55d3fcfd2da422
2026-01-21T00:00:00-05:00
OpenExempt: A Diagnostic Benchmark for Legal Reasoning and a Framework for Creating Custom Benchmarks on Demand
arXiv:2601.13183v1 Announce Type: new Abstract: Reasoning benchmarks have played a crucial role in the progress of language models. Yet rigorous evaluation remains a significant challenge as static question-answer pairs provide only a snapshot of performance, compressing complex behavior into a single accuracy metric. This limitation is especially true in complex, rule-bound domains such as law, where existing benchmarks are costly to build and ill suited for isolating specific failure modes. To address this, we introduce OpenExempt, a framework and benchmark for diagnostic evaluation of legal reasoning. The OpenExempt Framework uses expert-crafted symbolic representations of U.S. Bankruptcy Code statutes to dynamically generate a large space of natural language reasoning tasks and their machine-computable solutions on demand. This gives users fine-grained control over task complexity and scope, allowing individual reasoning skills to be probed in isolation. Using this system, we construct the OpenExempt Benchmark, a diagnostic benchmark for legal reasoning with 9,765 samples across nine evaluation suites designed to carefully probe model capabilities. Experiments on 13 diverse language models reveal sharp performance cliffs that emerge only under longer reasoning paths and in the presence of obfuscating statements. We release the framework and benchmark publicly to support research aimed at understanding and improving the next generation of reasoning systems.
https://arxiv.org/abs/2601.13183
Academic Papers
svg
5b7f5a6562aea64f0342161726e9b84a14ebcda6820b668a8d70d0b0ce1da68c
2026-01-21T00:00:00-05:00
Prompt Injection Mitigation with Agentic AI, Nested Learning, and AI Sustainability via Semantic Caching
arXiv:2601.13186v1 Announce Type: new Abstract: Prompt injection remains a central obstacle to the safe deployment of large language models, particularly in multi-agent settings where intermediate outputs can propagate or amplify malicious instructions. Building on earlier work that introduced a four-metric Total Injection Vulnerability Score (TIVS), this paper extends the evaluation framework with semantic similarity-based caching and a fifth metric (Observability Score Ratio) to yield TIVS-O, investigating how defence effectiveness interacts with transparency in a HOPE-inspired Nested Learning architecture. The proposed system combines an agentic pipeline with Continuum Memory Systems that implement semantic similarity-based caching across 301 synthetically generated injection-focused prompts drawn from ten attack families, while a fourth agent performs comprehensive security analysis using five key performance indicators. In addition to traditional injection metrics, OSR quantifies the richness and clarity of security-relevant reasoning exposed by each agent, enabling an explicit analysis of trade-offs between strict mitigation and auditability. Experiments show that the system achieves secure responses with zero high-risk breaches, while semantic caching delivers substantial computational savings, achieving a 41.6% reduction in LLM calls and corresponding decreases in latency, energy consumption, and carbon emissions. Five TIVS-O configurations reveal optimal trade-offs between mitigation strictness and forensic transparency. These results indicate that observability-aware evaluation can reveal non-monotonic effects within multi-agent pipelines and that memory-augmented agents can jointly maximize security robustness, real-time performance, operational cost savings, and environmental sustainability without modifying underlying model weights, providing a production-ready pathway for secure and green LLM deployments.
https://arxiv.org/abs/2601.13186
Academic Papers
svg
6bf5e57613d372e1d9e7a82d4cae6ce07e091fddd296d4bf722914f11ce9299d
2026-01-21T00:00:00-05:00
Scientific production in the era of Large Language Models
arXiv:2601.13187v1 Announce Type: new Abstract: Large Language Models (LLMs) are rapidly reshaping scientific research. We analyze these changes in multiple, large-scale datasets with 2.1M preprints, 28K peer review reports, and 246M online accesses to scientific documents. We find: 1) scientists adopting LLMs to draft manuscripts demonstrate a large increase in paper production, ranging from 23.7-89.3% depending on scientific field and author background, 2) LLM use has reversed the relationship between writing complexity and paper quality, leading to an influx of manuscripts that are linguistically complex but substantively underwhelming, and 3) LLM adopters access and cite more diverse prior work, including books and younger, less-cited documents. These findings highlight a stunning shift in scientific production that will likely require a change in how journals, funding agencies, and tenure committees evaluate scientific works.
https://arxiv.org/abs/2601.13187
Academic Papers
svg
d87e1fd17ba33c2f7e45ca6de3f9922cb662ed677208e231c78634337b722e73
2026-01-21T00:00:00-05:00
Negotiating Relationships with ChatGPT: Perceptions, External Influences, and Strategies for AI Companionship
arXiv:2601.13188v1 Announce Type: new Abstract: Individuals are turning to increasingly anthropomorphic, general-purpose chatbots for AI companionship, rather than roleplay-specific platforms. However, not much is known about how individuals perceive and conduct their relationships with general-purpose chatbots. We analyzed semi-structured interviews (n=13), survey responses (n=43), and community discussions on Reddit (41k+ posts and comments) to triangulate the internal dynamics, external influences, and steering strategies that shape AI companion relationships. We learned that individuals conceptualize their companions based on an interplay of their beliefs about the companion's own agency and the autonomy permitted by the platform, how they pursue interactions with the companion, and the perceived initiatives that the companion takes. In combination with the external entities that affect relationship dynamics, particularly model updates that can derail companion behaviour and stability, individuals make use of different types of steering strategies to preserve their relationship, for example, by setting behavioural instructions or porting to other AI platforms. We discuss implications for accountability and transparency in AI systems, where emotional connection competes with broader product objectives and safety constraints.
https://arxiv.org/abs/2601.13188
Academic Papers
svg
251ad24b328cae97c943740abfa7044ef66698b32edc69a23911553ac2f93a04
2026-01-21T00:00:00-05:00
LAViG-FLOW: Latent Autoregressive Video Generation for Fluid Flow Simulations
arXiv:2601.13190v1 Announce Type: new Abstract: Modeling and forecasting subsurface multiphase fluid flow fields underpin applications ranging from geological CO2 sequestration (GCS) operations to geothermal production. This is essential for ensuring both operational performance and long-term safety. While high-fidelity multiphase simulators are widely used for this purpose, they become prohibitively expensive once many forward runs are required for inversion and uncertainty quantification. To tackle this challenge, we propose LAViG-FLOW, a latent autoregressive video generation diffusion framework that explicitly learns the coupled evolution of saturation and pressure fields. Each state variable is compressed by a dedicated 2D autoencoder, and a Video Diffusion Transformer (VDiT) models their coupled distribution across time. We first train the model on a given time horizon to learn their coupled relationship and then fine-tune it autoregressively so it can extrapolate beyond the observed time window. Evaluated on an open-source CO2 sequestration dataset, LAViG-FLOW generates saturation and pressure fields that stay consistent across time while running orders of magnitude faster than traditional numerical solvers.
https://arxiv.org/abs/2601.13190
Academic Papers
svg
37099b3b5e3190ba11c9ad8dc3c045c9aa1f063bf1d4da0cdb146d13067d7e7f
2026-01-21T00:00:00-05:00
Active Informative Planning for UAV-based Weed Mapping using Discrete Gaussian Process Representations
arXiv:2601.13196v1 Announce Type: new Abstract: Accurate agricultural weed mapping using unmanned aerial vehicles (UAVs) is crucial for precision farming. While traditional methods rely on rigid, pre-defined flight paths and intensive offline processing, informative path planning (IPP) offers a way to collect data adaptively where it is most needed. Gaussian process (GP) mapping provides a continuous model of weed distribution with built-in uncertainty. However, GPs must be discretised for practical use in autonomous planning. Many discretisation techniques exist, but the impact of discrete representation choice remains poorly understood. This paper investigates how different discrete GP representations influence both mapping quality and mission-level performance in UAV-based weed mapping. Considering a UAV equipped with a downward-facing camera, we implement a receding-horizon IPP strategy that selects sampling locations based on the map uncertainty, travel cost, and coverage penalties. We investigate multiple discretisation strategies for representing the GP posterior and use their induced map partitions to generate candidate viewpoints for planning. Experiments on real-world weed distributions show that representation choice significantly affects exploration behaviour and efficiency. Overall, our results demonstrate that discretisation is not only a representational detail but a key design choice that shapes planning dynamics, coverage efficiency, and computational load in online UAV weed mapping.
https://arxiv.org/abs/2601.13196
Academic Papers
svg
02ac85b63da692f434c2003a7af2ee7469e953d28d60820698aa7c0233bee4e7
2026-01-21T00:00:00-05:00
Diffusion-Driven Synthetic Tabular Data Generation for Enhanced DoS/DDoS Attack Classification
arXiv:2601.13197v1 Announce Type: new Abstract: Class imbalance refers to a situation where certain classes in a dataset have significantly fewer samples than others, leading to biased model performance. This paper addresses class imbalance in network intrusion detection using Tabular Denoising Diffusion Probabilistic Models (TabDDPM) for data augmentation. Our approach synthesizes high-fidelity minority-class samples from the CIC-IDS2017 dataset through iterative denoising processes. For the minority classes with fewer samples, synthetic samples are generated and merged with the original dataset. The augmented training data enables an ANN classifier to achieve near-perfect recall on previously underrepresented attack classes. These results establish diffusion models as an effective solution for tabular data imbalance in security domains, with potential applications in fraud detection and medical diagnostics. A schematic sketch of the augmentation-and-merge loop follows this record.
https://arxiv.org/abs/2601.13197
Academic Papers
svg
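The augmentation loop above (synthesize minority-class rows, merge, retrain) can be sketched independently of the diffusion model itself. Below, `sample_minority` is a stand-in for a trained TabDDPM sampler; its interface, and the jittered-resampling fake used to keep the example runnable, are our assumptions.

```python
# Schematic augmentation-and-merge loop in the spirit of arXiv:2601.13197.
# `sample_minority` stands in for a trained diffusion sampler (e.g., TabDDPM).
import numpy as np
import pandas as pd

def balance_with_synthetic(df: pd.DataFrame, label_col: str, sample_minority) -> pd.DataFrame:
    counts = df[label_col].value_counts()
    target = counts.max()                 # bring each class up to the majority size
    synthetic_frames = []
    for cls, n in counts.items():
        if n < target:
            rows = sample_minority(cls, target - n)  # DataFrame of synthetic rows
            rows[label_col] = cls
            synthetic_frames.append(rows)
    return pd.concat([df, *synthetic_frames], ignore_index=True)

# Stand-in generator: resamples real rows with jitter (a real TabDDPM would denoise).
def fake_sampler(df: pd.DataFrame, label_col: str):
    def sample(cls, n):
        base = df[df[label_col] == cls].drop(columns=[label_col])
        picks = base.sample(n, replace=True).reset_index(drop=True)
        return picks + np.random.normal(0, 0.01, picks.shape)
    return sample

df = pd.DataFrame({"f1": np.random.rand(1000), "f2": np.random.rand(1000),
                   "label": ["benign"] * 950 + ["dos"] * 50})
balanced = balance_with_synthetic(df, "label", fake_sampler(df, "label"))
print(balanced["label"].value_counts())  # benign 950, dos 950
```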
2d05d79e41fb09563c8b55def587a41620c5cc6546fb237c5e92e944f4cc11a4
2026-01-21T00:00:00-05:00
The Achilles' Heel of Angular Margins: A Chebyshev Polynomial Fix for Speaker Verification
arXiv:2601.13198v1 Announce Type: new Abstract: Angular margin losses, such as AAM-Softmax, have become the de facto standard in speaker and face verification. Their success hinges on directly manipulating the angle between features and class prototypes. However, this manipulation relies on the arccos function to recover the angle, introducing a significant yet overlooked source of training instability. The derivative of arccos explodes at its boundaries, causing gradient peaks during optimisation. Furthermore, the formulation fails to generate a sufficiently sharp gradient for hard-to-classify examples. We address these issues by proposing ChebyAAM, a loss that replaces the arccos operation with its Chebyshev polynomial approximation. This substitution eliminates gradient explosion and applies a stronger corrective signal to hard examples, leading to more effective optimisation. Experiments on three benchmarks (VoxCeleb, SITW, and CN-Celeb) demonstrate that our method resolves the instability and consistently improves performance. Our work suggests that approximating angular operations, rather than calculating them explicitly, offers a more robust path for designing future metric learning losses. Code is available at https://github.com/ExtraOrdinaryLab/vibe. A small numerical sketch of the Chebyshev substitution follows this record.
https://arxiv.org/abs/2601.13198
Academic Papers
svg
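The instability described above comes from the derivative of arccos, d/dx arccos(x) = -1/sqrt(1 - x^2), which blows up as x approaches plus or minus 1. Here is a small numerical sketch of the proposed substitution, with an illustrative degree and fitting domain rather than the paper's exact configuration.

```python
# Numerical sketch of replacing arccos with a Chebyshev approximation, in the
# spirit of ChebyAAM (arXiv:2601.13198). Degree and domain are illustrative.
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

x = np.linspace(-0.999, 0.999, 4001)
cheb_acos = Chebyshev.fit(x, np.arccos(x), deg=9)

test = np.array([-0.99, 0.0, 0.7, 0.99])
# Approximation error (largest near the endpoints, where arccos is steepest).
print(np.abs(cheb_acos(test) - np.arccos(test)).max())
# Polynomial derivative stays bounded where the true derivative explodes:
print(cheb_acos.deriv()(0.999), -1 / np.sqrt(1 - 0.999 ** 2))
```

The bounded derivative near cos(theta) = +-1 is exactly the property that removes the gradient peaks during margin-loss optimisation.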
38051e24d00d89b7deb8136766376dcb20f66e9a4f4ad5ca6b25358985577e98
2026-01-21T00:00:00-05:00
Emissions and cost tradeoffs of time-matched clean electricity procurement under inter-annual weather variability: case study of hydrogen production
arXiv:2601.13202v1 Announce Type: new Abstract: Time-matching requirements (TMRs) for clean electricity procurement are increasingly adopted in voluntary corporate sustainability initiatives and regulatory frameworks. While prior research has evaluated cost and emissions impacts of hourly vs. annual TMR, these studies typically rely on single-year weather scenarios that do not capture inter-annual variability in variable renewable energy (VRE) generation. We use a capacity expansion model to assess how inter-annual weather variability affects procurement-driven infrastructure investments, costs, and emissions for a grid-connected hydrogen producer under both annual and hourly time-matching strategies. Using a Texas case study, we compare deterministic (single weather scenario) and stochastic (nine weather scenarios) modeling approaches. Both procurement investments and cost and emissions outcomes are sensitive to weather scenario, with annual matching exhibiting greater sensitivity than hourly matching. Stochastic modeling finds higher cost premiums for hourly versus annual matching compared to deterministic modeling, though emissions trends remain directionally consistent. Demand flexibility through H2 storage is critical for lowering hourly matching cost premiums under weather-driven VRE variability. Partial hourly matching (e.g., 80-90% compliance) can modestly reduce costs while maintaining minimal emissions impacts. Finally, we examine how grid-level renewable portfolio standards (RPS) affect additionality and emissions. When stringent additionality is achieved via binding RPS constraints on non-H2 electricity demand, annual matching can produce emissions reductions comparable to hourly matching at lower cost.
https://arxiv.org/abs/2601.13202
Academic Papers
svg
16a6fbc612d5855a4e5ee2322e4431f814018fc1fc6107c14157067e8d36f965
2026-01-21T00:00:00-05:00
Real-Time Deadlines Reveal Temporal Awareness Failures in LLM Strategic Dialogues
arXiv:2601.13206v1 Announce Type: new Abstract: Large Language Models (LLMs) generate text token-by-token in discrete time, yet real-world communication, from therapy sessions to business negotiations, critically depends on continuous time constraints. Current LLM architectures and evaluation protocols rarely test for temporal awareness under real-time deadlines. We use simulated negotiations between paired agents under strict deadlines to investigate how LLMs adjust their behavior in time-sensitive settings. In a control condition, agents know only the global time limit. In a time-aware condition, they receive remaining-time updates at each turn. Deal closure rates are substantially higher (32% vs. 4% for GPT-5.1) and offer acceptances are sixfold higher in the time-aware condition than in the control, suggesting LLMs struggle to internally track elapsed time. However, the same LLMs achieve near-perfect deal closure rates ($\geq$95%) under turn-based limits, revealing the failure is in temporal tracking rather than strategic reasoning. These effects replicate across negotiation scenarios and models, illustrating a systematic lack of LLM time awareness that will constrain LLM deployment in many time-sensitive applications. A minimal sketch of the two prompt conditions follows this record.
https://arxiv.org/abs/2601.13206
Academic Papers
svg
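The two experimental conditions above differ only in whether the prompt carries a remaining-time update each turn. A minimal sketch of how such conditions might be constructed, with hypothetical prompt wording and without any actual LLM call:

```python
# Minimal sketch of control vs. time-aware prompt construction, in the spirit
# of arXiv:2601.13206. Prompt wording is a hypothetical stand-in.
import time

def build_turn_prompt(history: str, deadline_s: float, start: float, time_aware: bool) -> str:
    header = f"You have {deadline_s:.0f} seconds total to close a deal.\n"
    if time_aware:
        # The time-aware condition injects the remaining wall-clock time each turn.
        remaining = max(0.0, deadline_s - (time.monotonic() - start))
        header += f"[{remaining:.0f} seconds remaining]\n"
    return header + history + "\nYour next offer:"

start = time.monotonic()
print(build_turn_prompt("Buyer: I offer $40.", deadline_s=120, start=start, time_aware=True))
```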
071665f3d180574cb36813b033c658f43535b3a87d07e3c5a45e0c3844677456
2026-01-21T00:00:00-05:00
GTPred: Benchmarking MLLMs for Interpretable Geo-localization and Time-of-capture Prediction
arXiv:2601.13207v1 Announce Type: new Abstract: Geo-localization aims to infer the geographic location where an image was captured using observable visual evidence. Traditional methods achieve impressive results through large-scale training on massive image corpora. With the emergence of multi-modal large language models (MLLMs), recent studies have explored their applications in geo-localization, benefiting from improved accuracy and interpretability. However, existing benchmarks largely ignore the temporal information inherent in images, which can further constrain the location. To bridge this gap, we introduce GTPred, a novel benchmark for geo-temporal prediction. GTPred comprises 370 globally distributed images spanning over 120 years. We evaluate MLLM predictions by jointly considering year and hierarchical location sequence matching, and further assess intermediate reasoning chains using meticulously annotated ground-truth reasoning processes. Experiments on 8 proprietary and 7 open-source MLLMs show that, despite strong visual perception, current models remain limited in world knowledge and geo-temporal reasoning. Results also demonstrate that incorporating temporal information significantly enhances location inference performance.
https://arxiv.org/abs/2601.13207
Academic Papers
svg
e302a08bfb1b781d03d71da7c8e9b5728b634fbf979c45a337d5ac91d042c104
2026-01-21T00:00:00-05:00
Rethinking Skip Connections: Additive U-Net for Robust and Interpretable Denoising
arXiv:2601.13208v1 Announce Type: new Abstract: Skip connections are central to U-Net architectures for image denoising, but standard concatenation doubles channel dimensionality and obscures information flow, allowing uncontrolled noise transfer. We propose the Additive U-Net, which replaces concatenative skips with gated additive connections. Each skip pathway is scaled by a learnable non-negative scalar, offering explicit and interpretable control over encoder contributions while avoiding channel inflation. Evaluations on the Kodak-17 denoising benchmark show that Additive U-Net achieves competitive PSNR/SSIM at noise levels $\sigma$ = 15, 25, 50, with robustness across kernel schedules and depths. Notably, effective denoising is achieved even without explicit down/up-sampling or forced hierarchies, as the model naturally learns a progression from high-frequency to band-pass to low-frequency features. These results position additive skips as a lightweight and interpretable alternative to concatenation, enabling both efficient design and a clearer understanding of multi-scale information transfer in reconstruction networks. A minimal sketch of the gated additive skip follows this record.
https://arxiv.org/abs/2601.13208
Academic Papers
svg
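The gated additive skip above is simple to express. The sketch below is a minimal PyTorch version; using softplus to keep the learnable scalar non-negative is our assumption, since the abstract only specifies a learnable non-negative gate.

```python
# Minimal gated additive skip connection, in the spirit of the Additive U-Net
# (arXiv:2601.13208). The softplus parameterization is our assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveSkip(nn.Module):
    def __init__(self):
        super().__init__()
        self.raw_gate = nn.Parameter(torch.zeros(1))  # softplus(0) ~= 0.69 at init

    def forward(self, decoder_feat: torch.Tensor, encoder_feat: torch.Tensor) -> torch.Tensor:
        # Same shape in, same shape out: no channel inflation from concatenation.
        return decoder_feat + F.softplus(self.raw_gate) * encoder_feat

skip = AdditiveSkip()
dec = torch.randn(2, 64, 32, 32)
enc = torch.randn(2, 64, 32, 32)
out = skip(dec, enc)                       # (2, 64, 32, 32)
print(float(F.softplus(skip.raw_gate)))    # interpretable encoder contribution
```

After training, reading off the gate values gives the explicit, per-skip measure of encoder contribution the abstract describes.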
b67945766cac42fa45c32a961dd774e3dbe9c2a693b0957d827f6a0f433d49f1
2026-01-21T00:00:00-05:00
Conflict Detection in AI-RAN: Efficient Interaction Learning and Autonomous Graph Reconstruction
arXiv:2601.13213v1 Announce Type: new Abstract: Artificial Intelligence (AI)-native mobile networks represent a fundamental step toward 6G, where learning, inference, and decision making are embedded into the Radio Access Network (RAN) itself. In such networks, multiple AI agents optimize the network to achieve distinct and often competing objectives. As such, conflicts become inevitable and have the potential to degrade performance, cause instability, and disrupt service. Current approaches for conflict detection rely on conflict graphs created based on relationships between AI agents, parameters, and Key Performance Indicators (KPIs). Existing works often rely on complex and computationally expensive Graph Neural Networks (GNNs) and depend on manually chosen thresholds to create conflict graphs. In this work, we present the first systematic framework for conflict detection in AI-native mobile networks, propose a two-tower encoder architecture for learning interactions based on data from the RAN, and introduce a data-driven sparsity-based mechanism for autonomously reconstructing conflict graphs without manual fine-tuning.
https://arxiv.org/abs/2601.13213
Academic Papers
svg
3709f472589f472f89f292771ac9baa6c40b8c62254b387e7ec4753047f95ac2
2026-01-21T00:00:00-05:00
An AMP-Based Asymptotic Analysis For Nonlinear One-Bit Precoding
arXiv:2601.13214v1 Announce Type: new Abstract: This paper focuses on the asymptotic analysis of a class of nonlinear one-bit precoding schemes under Rayleigh fading channels. The considered scheme employs a convex-relaxation-then-quantization (CRQ) approach to the well-known minimum mean square error (MMSE) model, which includes the classical one-bit precoder SQUID as a special case. To analyze its asymptotic behavior, we develop a novel analytical framework based on approximate message passing (AMP). We show that the statistical properties of the considered scheme can be asymptotically characterized by a scalar "signal plus Gaussian noise" model. Based on this, we further derive a closed-form expression for the symbol error probability (SEP) in the large-system limit, which quantitatively characterizes the impact of both system and model parameters on SEP performance. Simulation results validate our analysis and also demonstrate that performance gains over SQUID can be achieved by appropriately tuning the parameters involved in the considered model.
https://arxiv.org/abs/2601.13214
Academic Papers
svg
f823ee24fd20e6abd0eb2e94706f8035f371225e3cacc7383adeb15fce31e5c0
2026-01-21T00:00:00-05:00
On the Reliability of Estimation Bounds in Low-SNR Bistatic ISAC
arXiv:2601.13216v1 Announce Type: new Abstract: This paper explores a bistatic Integrated Sensing and Communication (ISAC) framework, where a base station transmits a communication signal that serves both direct communication with a user and multi-target parameter estimation through reflections captured by a separate sensing receiver. We assume that the instantaneous knowledge of the transmit signal at the sensing receiver is not available, and the sensing receiver only has knowledge of the statistical properties of the received signal. Unlike prior research that focuses on power allocation or optimal beamforming design for ISAC, we emphasize the inadequacy of the Cramér-Rao Bound (CRB) and its variants in low Signal-to-Noise Ratio (SNR) regimes, particularly in passive sensing scenarios. Due to severe path loss and other impairments, the received sensing SNR is often significantly lower than that of direct Line-of-Sight communication, making CRB-based performance evaluation unreliable. To address this, we adopt the Ziv-Zakai Bound (ZZB) for Angle of Arrival estimation, which provides a more meaningful lower bound on estimation error. We derive analytical expressions for the ZZB and the achievable ergodic communication rate as functions of SNR. Through numerical simulations, we analyze the Pareto front between communication and sensing performance, demonstrating why the ZZB serves as a better metric in low-sensing-SNR ISAC settings where traditional CRB-based approaches fail.
https://arxiv.org/abs/2601.13216
Academic Papers
svg
afc01f438235ba88065fa97bbe8475ab3f39fb4976097434731c3d7c7b72e661
2026-01-21T00:00:00-05:00
Beyond Single-shot Writing: Deep Research Agents are Unreliable at Multi-turn Report Revision
arXiv:2601.13217v1 Announce Type: new Abstract: Existing benchmarks for Deep Research Agents (DRAs) treat report generation as a single-shot writing task, which fundamentally diverges from how human researchers iteratively draft and revise reports via self-reflection or peer feedback. Whether DRAs can reliably revise reports with user feedback remains unexplored. We introduce Mr Dre, an evaluation suite that establishes multi-turn report revision as a new evaluation axis for DRAs. Mr Dre consists of (1) a unified long-form report evaluation protocol spanning comprehensiveness, factuality, and presentation, and (2) a human-verified feedback simulation pipeline for multi-turn revision. Our analysis of five diverse DRAs reveals a critical limitation: while agents can address most user feedback, they also regress on 16-27% of previously covered content and citation quality. Over multiple revision turns, even the best-performing agents leave significant headroom, as they continue to disrupt content outside the feedback's scope and fail to preserve earlier edits. We further show that these issues are not easily resolvable through inference-time fixes such as prompt engineering and a dedicated sub-agent for report revision.
https://arxiv.org/abs/2601.13217
Academic Papers
svg
a34c00cdff18088026d8cd0bc1c857a2776829bd002c3a810425b3e5236e1997
2026-01-21T00:00:00-05:00
ObjectVisA-120: Object-based Visual Attention Prediction in Interactive Street-crossing Environments
arXiv:2601.13218v1 Announce Type: new Abstract: The object-based nature of human visual attention is well-known in cognitive science, but has only played a minor role in computational visual attention models so far. This is mainly due to a lack of suitable datasets and evaluation metrics for object-based attention. To address these limitations, we present ObjectVisA-120, a novel 120-participant dataset of spatial street-crossing navigation in virtual reality specifically geared to object-based attention evaluations. The uniqueness of the presented dataset lies in the ethical and safety-related challenges that make collecting comparable data in real-world environments highly difficult. ObjectVisA-120 not only features accurate gaze data and a complete state-space representation of objects in the virtual environment, but it also offers variable scenario complexities and rich annotations, including panoptic segmentation, depth information, and vehicle keypoints. We further propose object-based similarity (oSIM) as a novel metric to evaluate the performance of object-based visual attention models, a previously unexplored performance characteristic. Our evaluations show that explicitly optimising for object-based attention not only improves oSIM performance but also leads to an improved model performance on common metrics. In addition, we present SUMGraph, a Mamba U-Net-based model, which explicitly encodes critical scene objects (vehicles) in a graph representation, leading to further performance improvements over several state-of-the-art visual attention prediction methods. The dataset, code and models will be publicly released.
https://arxiv.org/abs/2601.13218
Academic Papers
svg
84b69912bc8859a01861b647a8fd607b4ebc5d02a1efd4f7590241116d5d4900
2026-01-21T00:00:00-05:00
The Energy-Throughput Trade-off in Lossless-Compressed Source Code Storage
arXiv:2601.13220v1 Announce Type: new Abstract: Retrieving data from large-scale source code archives is vital for AI training, neural-based software analysis, and information retrieval, to cite a few. This paper studies and experiments with the design of a compressed key-value store for the indexing of large-scale source code datasets, evaluating its trade-off among three primary computational resources: (compressed) space occupancy, time, and energy efficiency. Extensive experiments on a national high-performance computing infrastructure demonstrate that different compression configurations yield distinct trade-offs, with high compression ratios and order-of-magnitude gains in retrieval throughput and energy efficiency. We also study data parallelism and show that, while it significantly improves speed, scaling energy efficiency is more difficult, reflecting the known non-energy-proportionality of modern hardware and challenging the assumption of a direct time-energy correlation. This work streamlines automation in energy-aware configuration tuning and standardized green benchmarking deployable in CI/CD pipelines, thus empowering system architects with a spectrum of Pareto-optimal energy-compression-throughput trade-offs and actionable guidelines for building sustainable, efficient storage backends for massive open-source code archival.
https://arxiv.org/abs/2601.13220
Academic Papers
svg
35ec651cbaf55d08973de90bd96aaf908723eca0d3549f4533a42d54de6015ca
2026-01-21T00:00:00-05:00
Incorporating Q&A Nuggets into Retrieval-Augmented Generation
arXiv:2601.13222v1 Announce Type: new Abstract: RAGE systems integrate ideas from automatic evaluation (E) into Retrieval-augmented Generation (RAG). As one such example, we present Crucible, a Nugget-Augmented Generation System that preserves explicit citation provenance by constructing a bank of Q&A nuggets from retrieved documents and using them to guide extraction, selection, and report generation. Reasoning on nuggets avoids repeated information through clear and interpretable Q&A semantics (instead of opaque cluster abstractions), while maintaining citation provenance throughout the entire generation process. Evaluated on the TREC NeuCLIR 2024 collection, our Crucible system substantially outperforms Ginger, a recent nugget-based RAG system, in nugget recall, density, and citation grounding.
https://arxiv.org/abs/2601.13222
Academic Papers
svg
b5206d397a046b0d64ceb6f0647ad6c847953daa7427209653dc4283df50f453
2026-01-21T00:00:00-05:00
Functional Logic Program Transformations
arXiv:2601.13224v1 Announce Type: new Abstract: Many tools used to process programs, like compilers, analyzers, or verifiers, perform transformations on their intermediate program representation, like abstract syntax trees. Implementing such program transformations is a non-trivial task, since it is necessary to iterate over the complete syntax tree and apply various transformations at nodes in a tree. In this paper we show how the features of functional logic programming are useful to implement program transformations in a compact and comprehensible manner. For this purpose, we propose to write program transformations as partially defined and non-deterministic operations. Since the implementation of non-determinism usually causes some overhead compared to deterministically defined operations, we compare our approach to a deterministic transformation method. We evaluate these alternatives for the functional logic language Curry and its intermediate representation FlatCurry which is used in various analysis and verification tools and compilers.
https://arxiv.org/abs/2601.13224
Academic Papers
svg
328db5b8c526664e68d65c11e2d8c6b1217ed4e40da184c3bf3cc87a21f60da0
2026-01-21T00:00:00-05:00
Not all Blends are Equal: The BLEMORE Dataset of Blended Emotion Expressions with Relative Salience Annotations
arXiv:2601.13225v1 Announce Type: new Abstract: Humans often experience not just a single basic emotion at a time, but rather a blend of several emotions with varying salience. Despite the importance of such blended emotions, most video-based emotion recognition approaches are designed to recognize single emotions only. The few approaches that have attempted to recognize blended emotions typically cannot assess the relative salience of the emotions within a blend. This limitation largely stems from the lack of datasets containing a substantial number of blended emotion samples annotated with relative salience. To address this shortcoming, we introduce BLEMORE, a novel dataset for multimodal (video, audio) blended emotion recognition that includes information on the relative salience of each emotion within a blend. BLEMORE comprises over 3,000 clips from 58 actors, performing 6 basic emotions and 10 distinct blends, where each blend has 3 different salience configurations (50/50, 70/30, and 30/70). Using this dataset, we conduct extensive evaluations of state-of-the-art video classification approaches on two blended emotion prediction tasks: (1) predicting the presence of emotions in a given sample, and (2) predicting the relative salience of emotions in a blend. Our results show that unimodal classifiers achieve up to 29% presence accuracy and 13% salience accuracy on the validation set, while multimodal methods yield clear improvements, with ImageBind + WavLM reaching 35% presence accuracy and HiCMAE 18% salience accuracy. On the held-out test set, the best models achieve 33% presence accuracy (VideoMAEv2 + HuBERT) and 18% salience accuracy (HiCMAE). In sum, the BLEMORE dataset provides a valuable resource to advancing research on emotion recognition systems that account for the complexity and significance of blended emotion expressions.
https://arxiv.org/abs/2601.13225
Academic Papers
svg
529b2d7238c259745462f7e12e08b2f6114660a268c8aa6c4df469bbcc888526
2026-01-21T00:00:00-05:00
Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?
arXiv:2601.13227v1 Announce Type: new Abstract: RAG systems are increasingly evaluated and optimized using LLM judges, an approach that is rapidly becoming the dominant paradigm for system assessment. Nugget-based approaches in particular are now embedded not only in evaluation frameworks but also in the architectures of RAG systems themselves. While this integration can lead to genuine improvements, it also creates a risk of faulty measurements due to circularity. In this paper, we investigate this risk through comparative experiments with nugget-based RAG systems, including Ginger and Crucible, against strong baselines such as GPT-Researcher. By deliberately modifying Crucible to generate outputs optimized for an LLM judge, we show that near-perfect evaluation scores can be achieved when elements of the evaluation - such as prompt templates or gold nuggets - are leaked or can be predicted. Our results highlight the importance of blind evaluation settings and methodological diversity to guard against mistaking metric overfitting for genuine system progress.
https://arxiv.org/abs/2601.13227
Academic Papers
svg
8b969d382af0e78fb81e84c61dc763ade4e3e8e7987236636f7b70e0be3d8e9c
2026-01-21T00:00:00-05:00
Autoregressive Models Rival Diffusion Models at ANY-ORDER Generation
arXiv:2601.13228v1 Announce Type: new Abstract: Diffusion language models enable any-order generation and bidirectional conditioning, offering appealing flexibility for tasks such as infilling, rewriting, and self-correction. However, their formulation, which predicts one part of a sequence from another within a single-step dependency, limits modeling depth and often yields lower sample quality and stability than autoregressive (AR) models. To address this, we revisit autoregressive modeling as a foundation and reformulate diffusion-style training into a structured multi-group prediction process. We propose Any-order Any-subset Autoregressive modeling (A3), a generalized framework that extends the standard AR factorization to arbitrary token groups and generation orders. A3 preserves the probabilistic rigor and multi-layer dependency modeling of AR while inheriting diffusion models' flexibility for parallel and bidirectional generation. We implement A3 through a two-stream attention architecture and a progressive adaptation strategy that transitions pretrained AR models toward any-order prediction. Experiments on question answering, commonsense reasoning, and story infilling demonstrate that A3 outperforms diffusion-based models while maintaining flexible decoding. This work offers a unified approach for a flexible, efficient, and novel language modeling paradigm. A toy sketch of the grouped any-order factorization follows this record.
https://arxiv.org/abs/2601.13228
Academic Papers
svg
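A3 generalizes the AR factorization to arbitrary token groups and generation orders. The toy sketch below shows only the scheduling and group-filling logic; the group size and the stand-in predictor are illustrative assumptions, and the real model implements the conditioning with a two-stream attention architecture.

```python
# Toy sketch of an any-order, any-subset generation schedule, in the spirit of
# A3 (arXiv:2601.13228). The "model" here is a trivial stand-in.
import random

def a3_schedule(seq_len: int, group_size: int) -> list:
    """Sample an arbitrary generation order and split it into token groups."""
    order = random.sample(range(seq_len), seq_len)
    return [order[i:i + group_size] for i in range(0, seq_len, group_size)]

def generate(seq_len: int, group_size: int, predict_group) -> list:
    tokens = [None] * seq_len
    for group in a3_schedule(seq_len, group_size):
        # Each group is predicted jointly, conditioned on all filled positions.
        predictions = predict_group(tokens, group)
        for pos, tok in zip(group, predictions):
            tokens[pos] = tok
    return tokens

toy_model = lambda context, group: [f"tok{p}" for p in group]  # stand-in for an LLM
print(generate(seq_len=8, group_size=3, predict_group=toy_model))
```

Standard left-to-right AR decoding is the special case where the order is the identity and the group size is 1, which is what makes the formulation a strict generalization.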
4922308cea060fd7937caf0edc115a489f0d9de4544b06bbe60704d729071852
2026-01-21T00:00:00-05:00
Towards Matrix-Free Patch Smoothers for the Stokes Problem: Evaluating Local p-Multigrid Solvers
arXiv:2601.13230v1 Announce Type: new Abstract: Vertex-patch smoothers offer an effective strategy for achieving robust geometric multigrid convergence for the Stokes equations, particularly in the context of high-order finite elements. However, their practical efficiency is often limited by the computational cost of solving the local saddle-point problems, especially when explicit matrix factorizations are not feasible. We explore a fully iterative, matrix-free-compatible approach to the local patch solve using $p$-multigrid techniques. We evaluate different local solver configurations: Braess-Sarazin and block-triangular preconditioners. Our numerical experiments suggest that the Braess-Sarazin approach is particularly resilient. We find that a single iteration of the local solver yields global convergence rates comparable to those obtained with exact local solvers, even on distorted meshes and in the presence of large viscosity jumps.
https://arxiv.org/abs/2601.13230
Academic Papers
svg
495531de904b32aae96dfd08fedac289e5d9083692702e905094ccdff8c338e2
2026-01-21T00:00:00-05:00
MATTERIX: toward a digital twin for robotics-assisted chemistry laboratory automation
arXiv:2601.13232v1 Announce Type: new Abstract: Accelerated materials discovery is critical for addressing global challenges. However, developing new laboratory workflows relies heavily on real-world experimental trials, and this can hinder scalability because of the need for numerous physical make-and-test iterations. Here we present MATTERIX, a multiscale, graphics processing unit-accelerated robotic simulation framework designed to create high-fidelity digital twins of chemistry laboratories, thus accelerating workflow development. This multiscale digital twin simulates robotic physical manipulation, powder and liquid dynamics, device functionalities, heat transfer and basic chemical reaction kinetics. This is enabled by integrating realistic physics simulation and photorealistic rendering with a modular graphics processing unit-accelerated semantics engine, which models logical states and continuous behaviors to simulate chemistry workflows across different levels of abstraction. MATTERIX streamlines the creation of digital twin environments through open-source asset libraries and interfaces, while enabling flexible workflow design via hierarchical plan definition and a modular skill library that incorporates learning-based methods. Our approach demonstrates sim-to-real transfer in robotic chemistry setups, reducing reliance on costly real-world experiments and enabling the testing of hypothetical automated workflows in silico. The project website is available at https://accelerationconsortium.github.io/Matterix/ .
https://arxiv.org/abs/2601.13232
Academic Papers
svg
02cba6a34a4abc70206d6fffbff58c36322dc0eec5c043b95d237d724a0ab14c
2026-01-21T00:00:00-05:00
RAG: A Random-Forest-Based Generative Design Framework for Uncertainty-Aware Design of Metamaterials with Complex Functional Response Requirements
arXiv:2601.13233v1 Announce Type: new Abstract: Metamaterials design for advanced functionality often entails the inverse design of nonlinear and condition-dependent responses (e.g., stress-strain relation and dispersion relation), which are described by continuous functions. Most existing design methods focus on vector-valued responses (e.g., Young's modulus and bandgap width), while the inverse design of functional responses remains challenging due to their high-dimensionality, the complexity of accommodating design requirements in inverse-design frameworks, and non-existence or non-uniqueness of feasible solutions. Although generative design approaches have shown promise, they are often data-hungry, handle design requirements heuristically, and may generate infeasible designs without uncertainty quantification. To address these challenges, we introduce a RAndom-forest-based Generative approach (RAG). By leveraging the small-data compatibility of random forests, RAG enables data-efficient predictions of high-dimensional functional responses. During the inverse design, the framework estimates the likelihood through the ensemble, which quantifies the trustworthiness of generated designs while reflecting the relative difficulty across different requirements. The one-to-many mapping is addressed through single-shot design generation by sampling from the conditional likelihood. We demonstrate RAG on: 1) acoustic metamaterials with prescribed partial passbands/stopbands, and 2) mechanical metamaterials with targeted snap-through responses, using 500 and 1057 samples, respectively. Its data efficiency is benchmarked against neural networks on a public mechanical metamaterial dataset with nonlinear stress-strain relations. Our framework provides a lightweight, trustworthy pathway to inverse design involving functional responses, expensive simulations, and complex design requirements, beyond metamaterials.
https://arxiv.org/abs/2601.13233
Academic Papers
svg
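Editorial note: the uncertainty mechanism in arXiv:2601.13233 above rests on a generic property of random forests: per-tree predictions form an ensemble over the functional response, so their spread flags untrustworthy designs. A hedged sketch with a toy forward model follows; the two-parameter curve family, candidate sampling, and scoring rule are all illustrative stand-ins, not the paper's RAG pipeline.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 32)                        # functional response over t

def forward(x):                                  # toy "simulation"
    return x[:, :1] * np.sin(6 * t) + x[:, 1:2] * t**2

X = rng.uniform(-1, 1, size=(500, 2))            # deliberately small dataset
Y = forward(X) + 0.01 * rng.standard_normal((500, t.size))
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)

target = 0.7 * np.sin(6 * t) - 0.3 * t**2        # required functional response
cand = rng.uniform(-1, 1, size=(2000, 2))        # sampled candidate designs
per_tree = np.stack([est.predict(cand) for est in rf.estimators_])
mean, spread = per_tree.mean(0), per_tree.std(0).mean(-1)
mismatch = np.linalg.norm(mean - target, axis=-1)
best = np.argmin(mismatch + spread)              # penalize untrustworthy designs
print(cand[best], mismatch[best], spread[best])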
ff85b94057156a904d567bc2b1434f2e7c8613f580714f0a720081ee1d5f5deb
2026-01-21T00:00:00-05:00
ConvMambaNet: A Hybrid CNN-Mamba State Space Architecture for Accurate and Real-Time EEG Seizure Detection
arXiv:2601.13234v1 Announce Type: new Abstract: Epilepsy is a chronic neurological disorder marked by recurrent seizures that can severely impact quality of life. Electroencephalography (EEG) remains the primary tool for monitoring neural activity and detecting seizures, yet automated analysis remains challenging due to the temporal complexity of EEG signals. This study introduces ConvMambaNet, a hybrid deep learning model that integrates Convolutional Neural Networks (CNNs) with the Mamba Structured State Space Model (SSM) to enhance temporal feature extraction. By embedding the Mamba-SSM block within a CNN framework, the model effectively captures both spatial and long-range temporal dynamics. Evaluated on the CHB-MIT Scalp EEG dataset, ConvMambaNet achieved 99% accuracy and demonstrated robust performance under severe class imbalance. These results underscore the model's potential for precise and efficient seizure detection, offering a viable path toward real-time, automated epilepsy monitoring in clinical environments.
https://arxiv.org/abs/2601.13234
Academic Papers
svg
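Editorial note: a minimal numpy sketch of the composition described in arXiv:2601.13234 above: a 1-D convolution extracts local features from an EEG window, then a diagonal linear state-space recurrence x_k = a*x_{k-1} + B*u_k, y_k = c^T x_k accumulates long-range temporal context. Mamba's input-dependent (selective) parameters are omitted, and all dimensions and weights below are illustrative.

import numpy as np

rng = np.random.default_rng(0)
T, C, H = 256, 4, 8                    # time steps, EEG channels, state size
u = rng.standard_normal((T, C))        # one EEG window (stand-in data)

# CNN stage: depthwise 1-D convolution over time, one kernel per channel.
kern = rng.standard_normal((5, C)) / 5
feat = np.stack([np.convolve(u[:, c], kern[:, c], mode="same") for c in range(C)], 1)

# SSM stage: stable diagonal transition, sequential scan over time.
a = np.exp(-np.linspace(0.01, 1.0, H))         # decay rates in (0, 1)
B = rng.standard_normal((H, C)) / np.sqrt(C)
c = rng.standard_normal(H) / np.sqrt(H)
x, y = np.zeros(H), np.empty(T)
for k in range(T):
    x = a * x + B @ feat[k]
    y[k] = c @ x
print(y.shape, y[:4])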
db576f4a6111e0c1e33536ced12a83b70a15ee26a3156cf2283648dee720d03a
2026-01-21T00:00:00-05:00
RubRIX: Rubric-Driven Risk Mitigation in Caregiver-AI Interactions
arXiv:2601.13235v1 Announce Type: new Abstract: Caregivers seeking AI-mediated support express complex needs (information-seeking, emotional validation, and distress cues) that warrant careful evaluation of response safety and appropriateness. Existing AI evaluation frameworks, primarily focused on general risks (toxicity, hallucinations, policy violations, etc.), may not adequately capture the nuanced risks of LLM responses in caregiving contexts. We introduce RubRIX (Rubric-based Risk Index), a theory-driven, clinician-validated framework for evaluating risks in LLM caregiving responses. Grounded in the Elements of an Ethic of Care, RubRIX operationalizes five empirically-derived risk dimensions: Inattention, Bias & Stigma, Information Inaccuracy, Uncritical Affirmation, and Epistemic Arrogance. We evaluate six state-of-the-art LLMs on over 20,000 caregiver queries from Reddit and ALZConnected. Rubric-guided refinement consistently reduced risk components by 45-98% after one iteration across models. This work contributes a methodological approach for developing domain-sensitive, user-centered evaluation frameworks for high-burden contexts. Our findings highlight the importance of domain-sensitive, interactional risk evaluation for the responsible deployment of LLMs in caregiving support contexts. We release benchmark datasets to enable future research on contextual risk evaluation in AI-mediated support.
https://arxiv.org/abs/2601.13235
Academic Papers
svg
74b2a1cca560b2bad9dd5ea22acfa0d2375a87b4900495df903f467db86437dc
2026-01-21T00:00:00-05:00
A Semantic Decoupling-Based Two-Stage Rainy-Day Attack for Revealing Weather Robustness Deficiencies in Vision-Language Models
arXiv:2601.13238v1 Announce Type: new Abstract: Vision-Language Models (VLMs) are trained on image-text pairs collected under canonical visual conditions and achieve strong performance on multimodal tasks. However, their robustness to real-world weather conditions, and the stability of cross-modal semantic alignment under such structured perturbations, remain insufficiently studied. In this paper, we focus on rainy scenarios and introduce the first adversarial framework that exploits realistic weather to attack VLMs, using a two-stage, parameterized perturbation model based on semantic decoupling to analyze rain-induced shifts in decision-making. In Stage 1, we model the global effects of rainfall by applying a low-dimensional global modulation to condition the embedding space and gradually weaken the original semantic decision boundaries. In Stage 2, we introduce structured rain variations by explicitly modeling multi-scale raindrop appearance and rainfall-induced illumination changes, and optimize the resulting non-differentiable weather space to induce stable semantic shifts. Operating in a non-pixel parameter space, our framework generates perturbations that are both physically grounded and interpretable. Experiments across multiple tasks show that even physically plausible, highly constrained weather perturbations can induce substantial semantic misalignment in mainstream VLMs, posing potential safety and reliability risks in real-world deployment. Ablations further confirm that illumination modeling and multi-scale raindrop structures are key drivers of these semantic shifts.
https://arxiv.org/abs/2601.13238
Academic Papers
svg
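Editorial note: the two-stage, parameterized perturbation model in arXiv:2601.13238 above can be pictured with a toy renderer: a low-dimensional global modulation (Stage 1) followed by structured streak overlays (Stage 2). The parameter names and streak model below are illustrative stand-ins; the paper's optimization of such parameters against a VLM is not shown.

import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                     # stand-in grayscale image

def rain(img, dim=0.7, n_streaks=40, length=9, intensity=0.5):
    out = dim * img                            # Stage 1: global dimming
    for _ in range(n_streaks):                 # Stage 2: vertical rain streaks
        r = rng.integers(0, img.shape[0] - length)
        c = rng.integers(0, img.shape[1])
        out[r : r + length, c] = np.clip(out[r : r + length, c] + intensity, 0, 1)
    return out

perturbed = rain(img)
print(np.abs(perturbed - img).mean())          # magnitude of the perturbation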
a9ed3154609d9008e55027997f37829d5dbb019635920bd89ecd5fd48fff7b5b
2026-01-21T00:00:00-05:00
KOCO-BENCH: Can Large Language Models Leverage Domain Knowledge in Software Development?
arXiv:2601.13240v1 Announce Type: new Abstract: Large language models (LLMs) excel at general programming but struggle with domain-specific software development, necessitating domain specialization methods for LLMs to learn and utilize domain knowledge and data. However, existing domain-specific code benchmarks cannot evaluate the effectiveness of domain specialization methods, which focus on assessing what knowledge LLMs possess rather than how they acquire and apply new knowledge, lacking explicit knowledge corpora for developing domain specialization methods. To this end, we present KOCO-BENCH, a novel benchmark designed for evaluating domain specialization methods in real-world software development. KOCO-BENCH contains 6 emerging domains with 11 software frameworks and 25 projects, featuring curated knowledge corpora alongside multi-granularity evaluation tasks including domain code generation (from function-level to project-level with rigorous test suites) and domain knowledge understanding (via multiple-choice Q&A). Unlike previous benchmarks that only provide test sets for direct evaluation, KOCO-BENCH requires acquiring and applying diverse domain knowledge (APIs, rules, constraints, etc.) from knowledge corpora to solve evaluation tasks. Our evaluations reveal that KOCO-BENCH poses significant challenges to state-of-the-art LLMs. Even with domain specialization methods (e.g., SFT, RAG, kNN-LM) applied, improvements remain marginal. The best-performing coding agent, Claude Code, achieves only 34.2%, highlighting the urgent need for more effective domain specialization methods. We release KOCO-BENCH, evaluation code, and baselines to advance further research at https://github.com/jiangxxxue/KOCO-bench.
https://arxiv.org/abs/2601.13240
Academic Papers
svg
cefbad2fa1bba429341591dc36b5c35fd9a16bbbd2e6214a20cc0419bebe32f0
2026-01-21T00:00:00-05:00
A Comprehensive Evaluation of LLM Reasoning: From Single-Model to Multi-Agent Paradigms
arXiv:2601.13243v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly deployed as reasoning systems, where reasoning paradigms - such as Chain-of-Thought (CoT) and multi-agent systems (MAS) - play a critical role, yet their relative effectiveness and cost-accuracy trade-offs remain poorly understood. In this work, we conduct a comprehensive and unified evaluation of reasoning paradigms, spanning direct single-model generation, CoT-augmented single-model reasoning, and representative MAS workflows, characterizing their reasoning performance across a diverse suite of closed-form benchmarks. Beyond overall performance, we probe role-specific capability demands in MAS using targeted role isolation analyses, and analyze cost-accuracy trade-offs to identify which MAS workflows offer a favorable balance between cost and accuracy, and which incur prohibitive overhead for marginal gains. We further introduce MIMeBench, a new open-ended benchmark that targets two foundational yet underexplored semantic capabilities - semantic abstraction and contrastive discrimination - thereby providing an alternative evaluation axis beyond closed-form accuracy and enabling fine-grained assessment of semantic competence that is difficult to capture with existing benchmarks. Our results show that increased structural complexity does not consistently lead to improved reasoning performance, with its benefits being highly dependent on the properties and suitability of the reasoning paradigm itself. The code is released at https://gitcode.com/HIT1920/OpenLLMBench.
https://arxiv.org/abs/2601.13243
Academic Papers
svg
c9166adefe3ff5a2a562436186952d185d87c7ad2c0f31a60b5bb0822d6185e3
2026-01-21T00:00:00-05:00
Do Instruction-Tuned Models Always Perform Better Than Base Models? Evidence from Math and Domain-Shifted Benchmarks
arXiv:2601.13244v1 Announce Type: new Abstract: Instruction finetuning is standard practice for improving LLM performance, yet it remains unclear whether it enhances reasoning or merely induces surface-level pattern matching. We investigate this by evaluating base and instruction-tuned models on standard math benchmarks, structurally perturbed variants, and domain-shifted tasks. Our analysis highlights two key (often overlooked) limitations of instruction tuning. First, the performance advantage is unstable and depends heavily on evaluation settings. In zero-shot CoT settings on GSM8K, base models consistently outperform instruction-tuned variants, with drops as high as 32.67% (Llama3-70B). Instruction-tuned models only match or exceed this performance when provided with few-shot exemplars, suggesting a reliance on specific prompting patterns rather than intrinsic reasoning. Second, tuning gains are brittle under distribution shift. Our results show that base models surpass instruction-tuned variants on the domain-specific MedCalc benchmark. Additionally, instruction-tuned models show sharp declines on perturbed datasets, indicating sensitivity to prompt structure over robust reasoning.
https://arxiv.org/abs/2601.13244
Academic Papers
svg
c2ff29253e985948a88744ab3f5b0b89e667212ad9279f10339e591b863ded35
2026-01-21T00:00:00-05:00
The Cost of Failure: On The Complexity of Recampaigning under Fixed Districts
arXiv:2601.13246v1 Announce Type: new Abstract: Redistricting efforts have garnered contemporary attention in both quotidian and scholarly debates, particularly in the United States where efforts to redraw congressional districts to favor either of the two major parties in 12 states -- such as California, Texas, and Ohio -- have captured the public eye. The treatment of redistricting in computational social choice has essentially focused on the process of determining "appropriate" districts. In this work, we are interested in understanding the gamut of options left for the "losing" party, and so we consider the flip side of the problem: Given fixed/predetermined districts, can a given party still make their candidates win by strategically placing them in certain districts? We dub this as "recampaigning" to capture the intuition that a party would redirect their campaigning efforts from one district to another. We model recampaigning as a computational problem, consider natural variations of the model, and study those new models through the lens of (1) (polynomial-time many-one) interreducibilities, (2) separations/collapses (both unconditional and axiomatic-sufficient), and (3) both worst-case and parametrized complexity.
https://arxiv.org/abs/2601.13246
Academic Papers
svg
ef8130640cf71c7d130ed50ee7af9fb96562b54d0d2948ba703a913d12e71284
2026-01-21T00:00:00-05:00
Aligning Agentic World Models via Knowledgeable Experience Learning
arXiv:2601.13247v1 Announce Type: new Abstract: Current Large Language Models (LLMs) exhibit a critical modal disconnect: they possess vast semantic knowledge but lack the procedural grounding to respect the immutable laws of the physical world. Consequently, while these agents implicitly function as world models, their simulations often suffer from physical hallucinations: they generate plans that are logically sound but physically unexecutable. Existing alignment strategies predominantly rely on resource-intensive training or fine-tuning, which attempt to compress dynamic environmental rules into static model parameters. However, such parametric encapsulation is inherently rigid, struggling to adapt to the open-ended variability of physical dynamics without continuous, costly retraining. To bridge this gap, we introduce WorldMind, a framework that autonomously constructs a symbolic World Knowledge Repository by synthesizing environmental feedback. Specifically, it unifies Process Experience to enforce physical feasibility via prediction errors and Goal Experience to guide task optimality through successful trajectories. Experiments on EB-ALFRED and EB-Habitat demonstrate that WorldMind achieves superior performance compared to baselines with remarkable cross-model and cross-environment transferability.
https://arxiv.org/abs/2601.13247
Academic Papers
svg
1e350ecf3ae4129378f335139ebb1651d673c61e6a99346b5424a912502dfbe3
2026-01-21T00:00:00-05:00
Diffusion-based Inverse Model of a Distributed Tactile Sensor for Object Pose Estimation
arXiv:2601.13250v1 Announce Type: new Abstract: Tactile sensing provides a promising sensing modality for object pose estimation in manipulation settings where visual information is limited due to occlusion or environmental effects. However, efficiently leveraging tactile data for estimation remains a challenge due to partial observability, with single observations corresponding to multiple possible contact configurations. This limits conventional estimation approaches largely tailored to vision. We propose to address these challenges by learning an inverse tactile sensor model using denoising diffusion. The model is conditioned on tactile observations from a distributed tactile sensor and trained in simulation using a geometric sensor model based on signed distance fields. Contact constraints are enforced during inference through single-step projection using distance and gradient information from the signed distance field. For online pose estimation, we integrate the inverse model with a particle filter through a proposal scheme that combines generated hypotheses with particles from the prior belief. Our approach is validated in simulated and real-world planar pose estimation settings, without access to visual data or tight initial pose priors. We further evaluate robustness to unmodeled contact and sensor dynamics for pose tracking in a box-pushing scenario. Compared to local sampling baselines, the inverse sensor model improves sampling efficiency and estimation accuracy while preserving multimodal beliefs across objects with varying tactile discriminability.
https://arxiv.org/abs/2601.13250
Academic Papers
svg
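Editorial note: the "single-step projection using distance and gradient information" in arXiv:2601.13250 above is easy to state concretely: move a hypothesized contact point x to x - sdf(x) * grad sdf(x). A sketch follows, with a circle SDF standing in for the paper's object geometry.

import numpy as np

def sdf_circle(x, center=np.array([0.0, 0.0]), r=1.0):
    # Signed distance to a circle and its (unit) gradient.
    d = x - center
    dist = np.linalg.norm(d, axis=-1)
    return dist - r, d / np.maximum(dist, 1e-12)[..., None]

pts = np.random.default_rng(3).uniform(-2, 2, size=(5, 2))  # sampled hypotheses
phi, grad = sdf_circle(pts)
projected = pts - phi[..., None] * grad        # one Newton-like projection step
print(np.abs(sdf_circle(projected)[0]))        # ~0: points now lie on the surface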
f08c3f142cbbb560e32ffbc66ba57bd271140d1eaff905c615475e525abbf296
2026-01-21T00:00:00-05:00
Beyond Cosine Similarity: Taming Semantic Drift and Antonym Intrusion in a 15-Million Node Turkish Synonym Graph
arXiv:2601.13251v1 Announce Type: new Abstract: Neural embeddings have a notorious blind spot: they can't reliably tell synonyms apart from antonyms. Consequently, increasing similarity thresholds often fails to prevent opposites from being grouped together. We've built a large-scale semantic clustering system specifically designed to tackle this problem head-on. Our pipeline chews through 15 million lexical items, evaluates a massive 520 million potential relationships, and ultimately generates 2.9 million high-precision semantic clusters. The system makes three primary contributions. First, we introduce a labeled dataset of 843,000 concept pairs spanning synonymy, antonymy, and co-hyponymy, constructed via Gemini 2.5-Flash LLM augmentation and verified using human-curated dictionary resources. Second, we propose a specialized three-way semantic relation discriminator that achieves 90% macro-F1, enabling robust disambiguation beyond raw embedding similarity. Third, we introduce a novel soft-to-hard clustering algorithm that mitigates semantic drift, preventing erroneous transitive chains (e.g., hot -> spicy -> pain -> depression), while simultaneously resolving polysemy. Our approach employs a topology-aware two-stage expansion-pruning procedure with topological voting, ensuring that each term is assigned to exactly one semantically coherent cluster. The resulting resource enables high-precision semantic search and retrieval-augmented generation, particularly for morphologically rich and low-resource languages where existing synonym databases remain sparse.
https://arxiv.org/abs/2601.13251
Academic Papers
svg
40f944546a41ea4557f2111d965ac81ea46c73becfae7ae64a3bcf2f8f677d94
2026-01-21T00:00:00-05:00
Autonomous Navigation at the Nano-Scale: Algorithms, Architectures, and Constraints
arXiv:2601.13252v1 Announce Type: new Abstract: Autonomous navigation for nano-scale unmanned aerial vehicles (nano-UAVs) is governed by extreme Size, Weight, and Power (SWaP) constraints (weight under 50 g, with a sub-100 mW onboard processor), distinguishing it fundamentally from standard robotic paradigms. This review synthesizes the state-of-the-art in sensing, computing, and control architectures designed specifically for these sub-100 mW computational envelopes. We critically analyse the transition from classical geometry-based methods to emerging "Edge AI" paradigms, including quantized deep neural networks deployed on ultra-low-power System-on-Chips (SoCs) and neuromorphic event-based control. Beyond algorithms, we evaluate the hardware-software co-design requisite for autonomy, covering advancements in dense optical flow, optimized Simultaneous Localization and Mapping (SLAM), and learning-based flight control. While significant progress has been observed in visual navigation and relative pose estimation, our analysis reveals persistent gaps in long-term endurance, robust obstacle avoidance in dynamic environments, and the "Sim-to-Real" transfer of reinforcement learning policies. This survey provides a roadmap for bridging these gaps, advocating for hybrid architectures that fuse lightweight classical control with data-driven perception to enable fully autonomous, agile nano-UAVs in GPS-denied environments.
https://arxiv.org/abs/2601.13252
Academic Papers
svg
331f5f5012a77d16c240ebbafaa05480dfcd78178b4f65e2b78905bcbeef7951
2026-01-21T00:00:00-05:00
A Hybrid Protocol for Large-Scale Semantic Dataset Generation in Low-Resource Languages: The Turkish Semantic Relations Corpus
arXiv:2601.13253v1 Announce Type: new Abstract: We present a hybrid methodology for generating large-scale semantic relationship datasets in low-resource languages, demonstrated through a comprehensive Turkish semantic relations corpus. Our approach integrates three phases: (1) FastText embeddings with Agglomerative Clustering to identify semantic clusters, (2) Gemini 2.5-Flash for automated semantic relationship classification, and (3) integration with curated dictionary sources. The resulting dataset comprises 843,000 unique Turkish semantic pairs across three relationship types (synonyms, antonyms, co-hyponyms) representing a 10x scale increase over existing resources at minimal cost ($65). We validate the dataset through two downstream tasks: an embedding model achieving 90% top-1 retrieval accuracy and a classification model attaining 90% F1-macro. Our scalable protocol addresses critical data scarcity in Turkish NLP and demonstrates applicability to other low-resource languages. We publicly release the dataset and models.
https://arxiv.org/abs/2601.13253
Academic Papers
svg
2527ab1d8d1f4877b768e716577ba49bbc10782b96fd21b538feaf156eac20fc
2026-01-21T00:00:00-05:00
Deep Neural networks for solving high-dimensional parabolic partial differential equations
arXiv:2601.13256v1 Announce Type: new Abstract: The numerical solution of high-dimensional partial differential equations (PDEs) is severely constrained by the curse of dimensionality (CoD), rendering classical grid-based methods impractical beyond a few dimensions. In recent years, deep neural networks have emerged as a promising mesh-free alternative, enabling the approximation of PDE solutions in tens to thousands of dimensions. This review provides a tutorial-oriented introduction to neural-network-based methods for solving high-dimensional parabolic PDEs, emphasizing conceptual clarity and methodological connections. We organize the literature around three unifying paradigms: (i) PDE residual-based approaches, including physics-informed neural networks and their high-dimensional variants; (ii) stochastic methods derived from Feynman-Kac and backward stochastic differential equation formulations; and (iii) hybrid derivative-free random difference approaches designed to alleviate the computational cost of derivatives in high dimensions. For each paradigm, we outline the underlying mathematical formulation, algorithmic implementation, and practical strengths and limitations. Representative benchmark problems, including Hamilton-Jacobi-Bellman and Black-Scholes equations in up to 1000 dimensions, illustrate the scalability, effectiveness, and accuracy of the methods. The paper concludes with a discussion of open challenges and future directions for reliable and scalable solvers of high-dimensional PDEs.
https://arxiv.org/abs/2601.13256
Academic Papers
svg
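Editorial note: to illustrate paradigm (ii) from arXiv:2601.13256 above: for the d-dimensional heat equation u_t + (1/2)*Laplacian(u) = 0 with terminal data u(T, .) = g, the Feynman-Kac formula gives u(t, x) = E[g(x + W_{T-t})], which plain Monte Carlo estimates without any grid. Choosing g(y) = ||y||^2 makes the exact solution u(t, x) = ||x||^2 + d*(T - t) available for comparison; the batch sizes below are arbitrary.

import numpy as np

d, T = 1000, 1.0                       # dimension and horizon
x = np.zeros(d)                        # evaluation point at t = 0
rng = np.random.default_rng(0)

def g(y):                              # terminal condition g(y) = ||y||^2
    return np.sum(y * y, axis=-1)

total, n = 0.0, 0
for _ in range(100):                   # batched to keep memory modest
    W = rng.standard_normal((2000, d)) * np.sqrt(T)
    total += g(x + W).sum()
    n += 2000
print(total / n, g(x) + d * T)         # MC estimate vs exact u(0, x) = 1000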
35ba9444b6889dd47b2d554d8ae981dfe7d5d3ca8b0c2aa4b97a3d75ac37a938
2026-01-21T00:00:00-05:00
Stop Taking Tokenizers for Granted: They Are Core Design Decisions in Large Language Models
arXiv:2601.13260v1 Announce Type: new Abstract: Tokenization underlies every large language model, yet it remains an under-theorized and inconsistently designed component. Common subword approaches such as Byte Pair Encoding (BPE) offer scalability but often misalign with linguistic structure, amplify bias, and waste capacity across languages and domains. This paper reframes tokenization as a core modeling decision rather than a preprocessing step. We argue for a context-aware framework that integrates tokenizer and model co-design, guided by linguistic, domain, and deployment considerations. Standardized evaluation and transparent reporting are essential to make tokenization choices accountable and comparable. Treating tokenization as a core design problem, not a technical afterthought, can yield language technologies that are fairer, more efficient, and more adaptable.
https://arxiv.org/abs/2601.13260
Academic Papers
svg
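Editorial note: since arXiv:2601.13260 above centers on BPE-style subword tokenization as a design decision, here is the classic merge loop at toy scale (the Sennrich-style word-frequency variant; the corpus and merge count are illustrative).

from collections import Counter

corpus = {"low": 5, "lower": 2, "newest": 6, "widest": 3}
vocab = {tuple(w) + ("</w>",): f for w, f in corpus.items()}

def merge_once(vocab):
    # Count adjacent symbol pairs, weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    if not pairs:
        return vocab, None
    best = max(pairs, key=pairs.get)          # most frequent pair becomes a merge
    merged = {}
    for word, freq in vocab.items():
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                out.append(word[i] + word[i + 1]); i += 2
            else:
                out.append(word[i]); i += 1
        merged[tuple(out)] = freq
    return merged, best

for _ in range(8):
    vocab, rule = merge_once(vocab)
    print("merged:", rule)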
52076c048b1f1dfea64777eb550f9640a29034f690f42cd018ab25292534f794
2026-01-21T00:00:00-05:00
CURE-Med: Curriculum-Informed Reinforcement Learning for Multilingual Medical Reasoning
arXiv:2601.13262v1 Announce Type: new Abstract: While large language models (LLMs) have been shown to perform well on monolingual mathematical and commonsense reasoning, they remain unreliable for multilingual medical reasoning applications, hindering their deployment in multilingual healthcare settings. We address this by first introducing CUREMED-BENCH, a high-quality multilingual medical reasoning dataset with open-ended reasoning queries with a single verifiable answer, spanning thirteen languages, including underrepresented languages such as Amharic, Yoruba, and Swahili. Building on this dataset, we propose CURE-MED, a curriculum-informed reinforcement learning framework that integrates code-switching-aware supervised fine-tuning and Group Relative Policy Optimization to jointly improve logical correctness and language stability. Across thirteen languages, our approach consistently outperforms strong baselines and scales effectively, achieving 85.21% language consistency and 54.35% logical correctness at 7B parameters, and 94.96% language consistency and 70.04% logical correctness at 32B parameters. These results support reliable and equitable multilingual medical reasoning in LLMs. The code and dataset are available at https://cure-med.github.io/
https://arxiv.org/abs/2601.13262
Academic Papers
svg
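Editorial note: the Group Relative Policy Optimization component named in arXiv:2601.13262 above reduces, at its core, to a value-network-free advantage: each sampled answer's reward is normalized against the other answers drawn for the same query. A minimal sketch with illustrative rewards follows.

import numpy as np

def grpo_advantages(rewards, eps=1e-6):
    # Group-relative normalization: no learned critic, just the group's
    # own mean and standard deviation.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Group of 6 sampled answers for one query; reward 1 might encode
# "correct answer in the correct language", 0 otherwise.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0, 0.0, 1.0]))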
0c3aa054d08193a140fc74479eb0c6cc7aff6f03228bfe59f5cb4a7d2e2e2377
2026-01-21T00:00:00-05:00
Deep Learning for Semantic Segmentation of 3D Ultrasound Data
arXiv:2601.13263v1 Announce Type: new Abstract: Developing cost-efficient and reliable perception systems remains a central challenge for automated vehicles. LiDAR and camera-based systems dominate, yet they present trade-offs in cost, robustness and performance under adverse conditions. This work introduces a novel framework for learning-based 3D semantic segmentation using Calyo Pulse, a modular, solid-state 3D ultrasound sensor system for use in harsh and cluttered environments. A 3D U-Net architecture is introduced and trained on the spatial ultrasound data for volumetric segmentation. Results demonstrate robust segmentation performance from Calyo Pulse sensors, with potential for further improvement through larger datasets, refined ground truth, and weighted loss functions. Importantly, this study highlights 3D ultrasound sensing as a promising complementary modality for reliable autonomy.
https://arxiv.org/abs/2601.13263
Academic Papers
svg
96492f174bd4a073a990d94a73a0cda60a5f3ac39f238ef68f2f37f1ec3e6bc0
2026-01-21T00:00:00-05:00
Unlearning in LLMs: Methods, Evaluation, and Open Challenges
arXiv:2601.13264v1 Announce Type: new Abstract: Large language models (LLMs) have achieved remarkable success across natural language processing tasks, yet their widespread deployment raises pressing concerns around privacy, copyright, security, and bias. Machine unlearning has emerged as a promising paradigm for selectively removing knowledge or data from trained models without full retraining. In this survey, we provide a structured overview of unlearning methods for LLMs, categorizing existing approaches into data-centric, parameter-centric, architecture-centric, hybrid, and other strategies. We also review the evaluation ecosystem, including benchmarks, metrics, and datasets designed to measure forgetting effectiveness, knowledge retention, and robustness. Finally, we outline key challenges and open problems, such as scalable efficiency, formal guarantees, cross-language and multimodal unlearning, and robustness against adversarial relearning. By synthesizing current progress and highlighting open directions, this paper aims to serve as a roadmap for developing reliable and responsible unlearning techniques in large language models.
https://arxiv.org/abs/2601.13264
Academic Papers
svg
0ea1ef6320f79601552c9ed230c1094ff17c6bc701895a4acb8740fac67d73be
2026-01-21T00:00:00-05:00
The Query Complexity of Local Search in Rounds on General Graphs
arXiv:2601.13266v1 Announce Type: new Abstract: We analyze the query complexity of finding a local minimum in $t$ rounds on general graphs. More precisely, given a graph $G = (V,E)$ and oracle access to an unknown function $f : V \to \mathbb{R}$, the goal is to find a local minimum, i.e., a vertex $v$ such that $f(v) \leq f(u)$ for all $(u,v) \in E$, using at most $t$ rounds of interaction with the oracle. The query complexity is well understood on grids, but much less is known beyond them. This abstract problem captures many optimization tasks, such as finding a local minimum of a loss function during neural network training. For each graph with $n$ vertices, we prove a deterministic upper bound of $O(t n^{1/t} (s\Delta)^{1-1/t})$, where $s$ is the separation number and $\Delta$ is the maximum degree of the graph. We complement this result with a randomized lower bound of $\Omega(t n^{1/t}-t)$ that holds for any connected graph. We also find that parallel steepest descent with a warm start provides improved bounds for graphs with high separation number and bounded degree.
https://arxiv.org/abs/2601.13266
Academic Papers
svg
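Editorial note: the object studied in arXiv:2601.13266 above can be made concrete with query-counted steepest descent, the fully sequential baseline. The path graph and f below are illustrative; the paper's algorithms batch queries into t rounds, which this one-query-at-a-time loop does not capture.

def steepest_descent(adj, f, start):
    # Walk downhill, paying one oracle query per newly evaluated vertex.
    queries, v = {start: f(start)}, start
    while True:
        for u in adj[v]:
            if u not in queries:
                queries[u] = f(u)
        best = min(adj[v], key=queries.get)
        if queries[best] >= queries[v]:
            return v, len(queries)              # v is a local minimum
        v = best

n = 100
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
vertex, cost = steepest_descent(adj, lambda x: (x - 37) ** 2, 0)
print(vertex, cost)                             # finds v = 37 after 39 queries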
ec7396c220e47d51cc7966ed8959efe1e3c5c946819c0814fed8fc54ba363915
2026-01-21T00:00:00-05:00
Improving the Safety and Trustworthiness of Medical AI via Multi-Agent Evaluation Loops
arXiv:2601.13268v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly applied in healthcare, yet ensuring their ethical integrity and safety compliance remains a major barrier to clinical deployment. This work introduces a multi-agent refinement framework designed to enhance the safety and reliability of medical LLMs through structured, iterative alignment. Our system combines two generative models - DeepSeek R1 and Med-PaLM - with two evaluation agents, LLaMA 3.1 and Phi-4, which assess responses using the American Medical Association's (AMA) Principles of Medical Ethics and a five-tier Safety Risk Assessment (SRA-5) protocol. We evaluate performance across 900 clinically diverse queries spanning nine ethical domains, measuring convergence efficiency, ethical violation reduction, and domain-specific risk behavior. Results demonstrate that DeepSeek R1 achieves faster convergence (mean 2.34 vs. 2.67 iterations), while Med-PaLM shows superior handling of privacy-sensitive scenarios. The iterative multi-agent loop achieved an 89% reduction in ethical violations and a 92% risk downgrade rate, underscoring the effectiveness of our approach. This study presents a scalable, regulator-aligned, and cost-efficient paradigm for governing medical AI safety.
https://arxiv.org/abs/2601.13268
Academic Papers
svg
de7badcf0fbacd89aa749c6c9d70e33be5b9879b8fb93882defb2127202d5792
2026-01-21T00:00:00-05:00
Probabilistic Linear Logic Programming with an application to Bayesian Networks computations
arXiv:2601.13270v1 Announce Type: new Abstract: Bayesian networks are a canonical formalism for representing probabilistic dependencies, yet their integration within logic programming frameworks remains a nontrivial challenge, mainly due to the complex structure of these networks. In this paper, we propose probLO (probabilistic Linear Objects), an extension of Andreoli and Pareschi's LO language, which embeds Bayesian network representation and computation within the framework of multiplicative-additive linear logic programming. The key novelty is the use of multi-head Prolog-like methods to reconstruct network structures, which are not necessarily trees, and the operation of slicing, standard in the literature of linear logic, enabling internal numerical probability computations without relying on external semantic interpretation.
https://arxiv.org/abs/2601.13270
Academic Papers
svg
3aede8093ed5342b1c1b3f650c23b07be5f61c443d38250293554d5b5a3ed55c
2026-01-21T00:00:00-05:00
Function Recovery Attacks in Gate-Hiding Garbled Circuits using SAT Solving
arXiv:2601.13271v1 Announce Type: new Abstract: Semi-Private Function Evaluation enables joint computation while protecting both input data and function logic. A practical instantiation is gate-hiding garbled circuits, which conceal gate functionalities while revealing the circuit topology. Existing security definitions intentionally exclude leakage through circuit topology, leaving the concrete impact of such leakage on function privacy insufficiently understood. We analyze the empirical security of gate hiding under two adversarial models that capture realistic computational capabilities. We present a SAT-based function-recovery attack that reconstructs hidden gate operations from a circuit's public topology. To enable recovery on larger and more complex circuits, we develop an incremental SAT-solving framework combined with a set of composable, topology-preserving simplification theorems. These techniques jointly reduce the SAT instance size and progressively constrain the search space across repeated solving iterations. We evaluate our attack on ISCAS benchmarks, representative secure computation circuits, and fault-tolerant sensor fusion circuits under a fixed 24-hour recovery budget. Compared to baseline approaches, our optimized attack achieves up to a 159-fold speedup in recovery time without increasing the number of oracle queries. Our results demonstrate that topology leakage alone can enable effective function recovery in practice.
https://arxiv.org/abs/2601.13271
Academic Papers
svg
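Editorial note: the function-recovery problem in arXiv:2601.13271 above has a simple toy form: given the public wiring and observed input/output behavior, search for gate functionalities consistent with the observations. The paper encodes this search as an (incremental) SAT instance; the exhaustive enumeration below is a stand-in that only works at toy scale, and the three-gate topology, gate library, and hidden assignment are all illustrative.

from itertools import product

GATES = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
         "XOR": lambda a, b: a ^ b, "NAND": lambda a, b: 1 - (a & b)}

def run(assign, x):
    # Fixed, public three-gate topology: (x0, x1) -> g1, (x2, x3) -> g2,
    # (g1, g2) -> output. Only the gate functions are hidden.
    g1 = assign[0](x[0], x[1])
    g2 = assign[1](x[2], x[3])
    return assign[2](g1, g2)

hidden = (GATES["XOR"], GATES["AND"], GATES["OR"])       # secret functionality
obs = [(x, run(hidden, x)) for x in product((0, 1), repeat=4)]

# Brute-force stand-in for the SAT search: keep every gate assignment
# consistent with all observations (ambiguity, if any, is visible in the list).
consistent = [names for names in product(GATES, repeat=3)
              if all(run(tuple(GATES[n] for n in names), x) == y for x, y in obs)]
print(consistent)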
2a16f4569cfb2b5fddf4dfda2a183c282479b21bfd1b1d549748f17b2030bb9c
2026-01-21T00:00:00-05:00
Multi-level Monte Carlo Dropout for Efficient Uncertainty Quantification
arXiv:2601.13272v1 Announce Type: new Abstract: We develop a multilevel Monte Carlo (MLMC) framework for uncertainty quantification with Monte Carlo dropout. Treating dropout masks as a source of epistemic randomness, we define a fidelity hierarchy by the number of stochastic forward passes used to estimate predictive moments. We construct coupled coarse-fine estimators by reusing dropout masks across fidelities, yielding telescoping MLMC estimators for both predictive means and predictive variances that remain unbiased for the corresponding dropout-induced quantities while reducing sampling variance at fixed evaluation budget. We derive explicit bias, variance and effective cost expressions, together with sample-allocation rules across levels. Numerical experiments on forward and inverse PINNs-Uzawa benchmarks confirm the predicted variance rates and demonstrate efficiency gains over single-level MC-dropout at matched cost.
https://arxiv.org/abs/2601.13272
Academic Papers
svg
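Editorial note: the coupling device in arXiv:2601.13272 above is worth a sketch: take the number of dropout passes as the fidelity level (M_l = base * 2^l), estimate the dropout predictive variance at each level, and let each coarse estimator reuse the first half of the fine level's masks so the corrections are small and telescope to the finest estimator. The tiny random-feature "network", keep probability, and sample allocation below are assumptions.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal(64)                   # fixed "network" weights

def forward(masks):
    # One scalar output per dropout mask, with inverted-dropout rescaling.
    keep = masks.mean(axis=1).clip(1e-9)
    return (masks * W).sum(axis=1) / keep

def var_estimate(masks):
    return forward(masks).var()               # predictive variance, M passes

L, base, p = 4, 8, 0.9
total = 0.0
for l in range(L + 1):
    reps = []
    for _ in range(400 // 2 ** l):            # fewer replications on costly levels
        fine = rng.random((base * 2 ** l, W.size)) < p
        corr = var_estimate(fine)
        if l > 0:                             # coupled coarse level: first half
            corr -= var_estimate(fine[: base * 2 ** (l - 1)])
        reps.append(corr)
    total += np.mean(reps)
print("MLMC estimate of dropout predictive variance:", total)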
3cac187cc57dda0895623704a1c8301ec364229b9be9a67325c998fbc0284f1c
2026-01-21T00:00:00-05:00
Safe Navigation in Cluttered Environments Via Spline-Based Harmonic Potential Fields
arXiv:2601.13273v1 Announce Type: new Abstract: We provide a complete motion-planning mechanism that ensures target tracking and obstacle avoidance in a cluttered environment. For a given polyhedral decomposition of the feasible space, we adopt a novel procedure that constrains the agent to move only through a prescribed sequence of cells via a suitable control policy. For each cell, we construct a harmonic potential surface induced by a Dirichlet boundary condition given as a cardinal B-spline curve. A detailed analysis of the curve behavior (periodicity, support) and of the associated control point selection allows us to explicitly compute these harmonic potential surfaces, from which we subsequently derive the corresponding control policy. We illustrate that the resulting construction funnels the agent safely along the chain of cells from the starting point to the target.
https://arxiv.org/abs/2601.13273
Academic Papers
svg
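Editorial note: the mechanism behind arXiv:2601.13273 above, stripped of the B-spline boundary construction, is a harmonic potential: solve Laplace's equation on a cell with Dirichlet data (high on walls, low on the exit segment) and steer along the negative gradient, which by the minimum principle encounters no interior local minima. The square cell, grid resolution, and boundary values below are illustrative.

import numpy as np

n = 64
phi = np.ones((n, n))                       # walls held at potential 1
phi[0, n // 3 : 2 * n // 3] = 0.0           # exit segment held at potential 0
interior = np.zeros((n, n), dtype=bool)
interior[1:-1, 1:-1] = True

for _ in range(5000):                       # Jacobi iteration for Laplace's equation
    avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(interior, avg, phi)

gy, gx = np.gradient(phi)
pos = np.array([n - 4.0, n / 2])            # agent start, deep inside the cell
for _ in range(300):                        # follow -grad(phi) toward the exit
    i, j = int(pos[0]), int(pos[1])
    step = -np.array([gy[i, j], gx[i, j]])
    pos = np.clip(pos + step / (np.linalg.norm(step) + 1e-12), 1, n - 2)
print("final position:", pos)               # ends near the zero-potential exit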
45f5334db51f31442cf284b0cac853fda43fbb14ad5e4d0728d261c7443ecd49
2026-01-21T00:00:00-05:00
Balancing Classification and Calibration Performance in Decision-Making LLMs via Calibration Aware Reinforcement Learning
arXiv:2601.13284v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in decision-making tasks, where not only accuracy but also reliable confidence estimates are essential. Well-calibrated confidence enables downstream systems to decide when to trust a model and when to defer to fallback mechanisms. In this work, we conduct a systematic study of calibration in two widely used fine-tuning paradigms: supervised fine-tuning (SFT) and reinforcement learning with verifiable rewards (RLVR). We show that while RLVR improves task performance, it produces extremely overconfident models, whereas SFT yields substantially better calibration, even under distribution shift, though with smaller performance gains. Through targeted experiments, we diagnose RLVR's failure, showing that decision tokens act as extraction steps of the decision in reasoning traces and do not carry confidence information, which prevents reinforcement learning from surfacing calibrated alternatives. Based on this insight, we propose a calibration-aware reinforcement learning formulation that directly adjusts decision-token probabilities. Our method preserves RLVR's accuracy level while mitigating overconfidence, reducing ECE scores by up to 9 points.
https://arxiv.org/abs/2601.13284
Academic Papers
svg
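Editorial note: since arXiv:2601.13284 above reports calibration via ECE, here is the standard equal-width-bin estimator it refers to; the bin count and toy inputs are illustrative.

import numpy as np

def ece(conf, correct, n_bins=10):
    # Expected calibration error: bin predictions by stated confidence and
    # average |accuracy - confidence| weighted by bin mass.
    conf, correct = np.asarray(conf), np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            total += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return total

# Overconfident decisions: high stated confidence, mediocre accuracy.
print(ece([0.95, 0.9, 0.99, 0.85, 0.92], [1, 0, 1, 0, 1]))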
17f9b0482e227006bbdbaf7d51139331d3f1502f5dda27f8f18b493ba24c0b43
2026-01-21T00:00:00-05:00
Tight Asymptotic Bounds for Fair Division With Externalities
arXiv:2601.13287v1 Announce Type: new Abstract: We study the problem of allocating a set of indivisible items among agents whose preferences include externalities. Unlike the standard fair division model, agents may derive positive or negative utility not only from items allocated directly to them, but also from items allocated to other agents. Since exact envy-freeness cannot be guaranteed, prior work has focused on its relaxations. However, two central questions remained open: does there always exist an allocation that is envy-free up to one item (EF1), and if not, what is the optimal relaxation EF-$k$ that can always be attained? We settle both questions by deriving tight asymptotic bounds on the number of items sufficient to eliminate envy. We show that for any instance with $n$ agents, an allocation that is envy-free up to $O(\sqrt{n})$ items always exists and can be found in polynomial time, and we prove a matching $\Omega(\sqrt{n})$ lower bound showing that this result is tight even for binary valuations, which rules out the existence of EF1 allocations when agents have externalities.
https://arxiv.org/abs/2601.13287
Academic Papers
svg
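Editorial note: to pin down the kind of guarantee discussed in arXiv:2601.13287 above, here is a brute-force EF-k check in the externalities model, where agent i's utility depends on every agent's bundle. Envy is measured against swapping bundles, and removal sets of at most k items are tried per agent pair; this swap-based notion is one standard choice and the paper's exact definitions may differ. The instance is illustrative.

from itertools import combinations

def utility(v, i, bundles):
    # Agent i values every item, wherever it is allocated.
    return sum(v[i][j][o] for j, B in enumerate(bundles) for o in B)

def swapped(bundles, i, j):
    b = [set(B) for B in bundles]
    b[i], b[j] = b[j], b[i]
    return b

def is_ef_k(v, bundles, k):
    items = [o for B in bundles for o in B]
    n = len(bundles)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Envy from i toward j must vanish after removing <= k items.
            if not any(
                utility(v, i, kept) >= utility(v, i, swapped(kept, i, j))
                for r in range(k + 1)
                for drop in combinations(items, r)
                for kept in [[{o for o in B if o not in drop} for B in bundles]]
            ):
                return False
    return True

# 2 agents, 3 items; v[i][j][o]: agent i's value when item o goes to agent j.
v = [[{0: 4, 1: 1, 2: 0}, {0: -2, 1: 0, 2: -1}],   # agent 0 dislikes 0, 2 going to 1
     [{0: 0, 1: -3, 2: 0}, {0: 2, 1: 2, 2: 5}]]
print(is_ef_k(v, [{1}, {0, 2}], 1))                # True: EF-1 for this instance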