| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| b72ce0fe91336d935ebabe1332eed6fa3e011b6feffd81d009c8fa0d6fbff001 | 2026-01-13T00:00:00-05:00 | SkyNomad: On Using Multi-Region Spot Instances to Minimize AI Batch Job Cost | arXiv:2601.06520v1 Announce Type: new Abstract: AI batch jobs such as model training, inference pipelines, and data analytics require substantial GPU resources and often need to finish before a deadline. Spot instances offer 3-10x lower cost than on-demand instances, but their unpredictable availability makes meeting deadlines difficult. Existing systems either rely solely on spot instances and risk deadline violations, or operate in simplified single-region settings. These approaches overlook substantial spatial and temporal heterogeneity in spot availability, lifetimes, and prices. We show that exploiting such heterogeneity to access more spot capacity is the key to reducing job execution cost. We present SkyNomad, a multi-region scheduling system that maximizes spot usage and minimizes cost while guaranteeing deadlines. SkyNomad uses lightweight probing to estimate availability, predicts spot lifetimes, accounts for migration cost, and unifies regional characteristics and deadline pressure into a monetary cost model that guides scheduling decisions. Our evaluation shows that SkyNomad achieves 1.25-3.96x cost savings in real cloud deployments and performs within a 10% cost difference of an optimal policy in simulation, while consistently meeting deadlines. | https://arxiv.org/abs/2601.06520 | Academic Papers | svg |
| a642489adc93c648899bceda50b863b65f61df702e79ca8edbb787fbf461c589 | 2026-01-13T00:00:00-05:00 | BabyVision: Visual Reasoning Beyond Language | arXiv:2601.06521v1 Announce Type: new Abstract: While humans develop core visual skills long before acquiring language, contemporary Multimodal LLMs (MLLMs) still rely heavily on linguistic priors to compensate for their fragile visual understanding. We uncover a crucial fact: state-of-the-art MLLMs consistently fail on basic visual tasks that humans, even 3-year-olds, can solve effortlessly. To systematically investigate this gap, we introduce BabyVision, a benchmark designed to assess core visual abilities of MLLMs independent of linguistic knowledge. BabyVision spans a wide range of tasks, with 388 items divided into 22 subclasses across four key categories. Empirical results and human evaluation reveal that leading MLLMs perform significantly below human baselines. Gemini3-Pro-Preview scores 49.7, lagging behind 6-year-old humans and falling well behind the average adult score of 94.1. These results show that despite excelling in knowledge-heavy evaluations, current MLLMs still lack fundamental visual primitives. Progress on BabyVision represents a step toward human-level visual perception and reasoning capabilities. We also explore solving visual reasoning with generation models by proposing BabyVision-Gen and an automatic evaluation toolkit. Our code and benchmark data are released at https://github.com/UniPat-AI/BabyVision for reproducibility. | https://arxiv.org/abs/2601.06521 | Academic Papers | svg |
| 56ea587ddab523dc7da199c988487ebb6ce256d28f9dde22601a2e5055fb1d0a | 2026-01-13T00:00:00-05:00 | Toward Generalizable Deblurring: Leveraging Massive Blur Priors with Linear Attention for Real-World Scenarios | arXiv:2601.06525v1 Announce Type: new Abstract: Image deblurring has advanced rapidly with deep learning, yet most methods exhibit poor generalization beyond their training datasets, with performance dropping significantly in real-world scenarios. Our analysis shows this limitation stems from two factors: datasets face an inherent trade-off between realism and coverage of diverse blur patterns, and algorithmic designs remain restrictive, as pixel-wise losses drive models toward local detail recovery while overlooking structural and semantic consistency, whereas diffusion-based approaches, though perceptually strong, still fail to generalize when trained on narrow datasets with simplistic strategies. Through systematic investigation, we identify blur pattern diversity as the decisive factor for robust generalization and propose Blur Pattern Pretraining (BPP), which acquires blur priors from simulation datasets and transfers them through joint fine-tuning on real data. We further introduce Motion and Semantic Guidance (MoSeG) to strengthen blur priors under severe degradation, and integrate it into GLOWDeblur, a Generalizable reaL-wOrld lightWeight Deblur model that combines a convolution-based pre-reconstruction and domain-alignment module with a lightweight diffusion backbone. Extensive experiments on six widely-used benchmarks and two real-world datasets validate our approach, confirming the importance of blur priors for robust generalization and demonstrating that the lightweight design of GLOWDeblur ensures practicality in real-world applications. The project page is available at https://vegdog007.github.io/GLOWDeblur_Website/. | https://arxiv.org/abs/2601.06525 | Academic Papers | svg |
| 108ed9db0bc1e25798e943a17728a62d5ac33d0f968e90dd95fd0ad8a7a1247e | 2026-01-13T00:00:00-05:00 | Visible Light Communication using LED-Based AR Markers for Robot Localization | arXiv:2601.06527v1 Announce Type: new Abstract: Methods of information transmission using visual markers have been widely studied. In this approach, information or identifiers (IDs) are encoded in the black-and-white pattern of each marker. By analyzing the geometric properties of the marker frame - such as its size, distortion, and coordinates - the relative position and orientation between the camera and the marker can be estimated. Furthermore, by associating the positional information of each marker with its corresponding ID, the position of the camera that captures the image can be calculated. In the field of mobile robotics, such markers are commonly utilized for robot localization. As mobile robots become more widely used in everyday environments, such visual markers are expected to be utilized across various contexts. In environments where robots collaborate with humans - such as in cell-based manufacturing systems in factories or in domestic settings with partner robots - it is desirable for such markers to be designed in a manner that appears natural and unobtrusive to humans. In this paper, we propose a method for implementing an ArUco marker in the form of illumination. In the proposed method, LEDs are arranged in accordance with the grid pattern of the marker, and the blinking frequency of each LED is determined based on the corresponding black or white cell. As a result, the illumination appears uniformly bright to the human eye, while the camera can capture variations in the blinking frequency. From these differences, the black-and-white pattern can be reconstructed, enabling the identification of the marker's tag information. We develop a prototype system and conduct experiments to evaluate its performance in terms of recognition accuracy under varying distances and viewing angles with respect to the ArUco marker. | https://arxiv.org/abs/2601.06527 | Academic Papers | svg |
| 9f0e56ed3f6848e14b42a48a7dffc39abecc80b66f04c5331f483675ed1e3a6f | 2026-01-13T00:00:00-05:00 | Atomic-SNLI: Fine-Grained Natural Language Inference through Atomic Fact Decomposition | arXiv:2601.06528v1 Announce Type: new Abstract: Current Natural Language Inference (NLI) systems primarily operate at the sentence level, providing black-box decisions that lack explanatory power. While atomic-level NLI offers a promising alternative by decomposing hypotheses into individual facts, we demonstrate that the conventional assumption that a hypothesis is entailed only when all its atomic facts are entailed fails in practice due to models' poor performance on fine-grained reasoning. Our analysis reveals that existing models perform substantially worse on atomic-level inference compared to sentence-level tasks. To address this limitation, we introduce Atomic-SNLI, a novel dataset constructed by decomposing SNLI and enriching it with carefully curated atomic-level examples through linguistically informed generation strategies. Experimental results demonstrate that models fine-tuned on Atomic-SNLI achieve significant improvements in atomic reasoning capabilities while maintaining strong sentence-level performance, enabling both accurate judgements and transparent, explainable results at the fact level. | https://arxiv.org/abs/2601.06528 | Academic Papers | svg |
| 74baacffcfc3bd8d3ede9297ab784bdcd9f992be5358f5e96079fe954c0a9456 | 2026-01-13T00:00:00-05:00 | Improving Day-Ahead Grid Carbon Intensity Forecasting by Joint Modeling of Local-Temporal and Cross-Variable Dependencies Across Different Frequencies | arXiv:2601.06530v1 Announce Type: new Abstract: Accurate forecasting of the grid carbon intensity factor (CIF) is critical for enabling demand-side management and reducing emissions in modern electricity systems. Leveraging multiple interrelated time series, CIF prediction is typically formulated as a multivariate time series forecasting problem. Despite advances in deep learning-based methods, it remains challenging to capture the fine-grained local-temporal dependencies, dynamic higher-order cross-variable dependencies, and complex multi-frequency patterns for CIF forecasting. To address these issues, we propose a novel model that integrates two parallel modules: 1) one enhances the extraction of local-temporal dependencies under multi-frequency by applying multiple wavelet-based convolutional kernels to overlapping patches of varying lengths; 2) the other captures dynamic cross-variable dependencies under multi-frequency to model how inter-variable relationships evolve across the time-frequency domain. Evaluations on four representative electricity markets from Australia, featuring varying levels of renewable penetration, demonstrate that the proposed method outperforms the state-of-the-art models. An ablation study further validates the complementary benefits of the two proposed modules. Designed with built-in interpretability, the proposed model also enables better understanding of its predictive behavior, as shown in a case study where it adaptively shifts attention to relevant variables and time intervals during a disruptive event. | https://arxiv.org/abs/2601.06530 | Academic Papers | svg |
| f5c3b575c0202db44fef8d0c0fb3980ea2d5b862afe730387ab67511a7d9e22d | 2026-01-13T00:00:00-05:00 | Short-term electricity load forecasting with multi-frequency reconstruction diffusion | arXiv:2601.06533v1 Announce Type: new Abstract: Diffusion models have emerged as a powerful method in various applications. However, their application to Short-Term Electricity Load Forecasting (STELF) -- a typical scenario in energy systems -- remains largely unexplored. Considering the nonlinear and fluctuating characteristics of the load data, effectively utilizing the powerful modeling capabilities of diffusion models to enhance STELF accuracy remains a challenge. This paper proposes a novel diffusion model with multi-frequency reconstruction for STELF, referred to as the Multi-Frequency-Reconstruction-based Diffusion (MFRD) model. The MFRD model achieves accurate load forecasting through four key steps: (1) The original data is combined with the decomposed multi-frequency modes to form a new data representation; (2) The diffusion model adds noise to the new data, effectively reducing and weakening the noise in the original data; (3) The reverse process adopts a denoising network that combines Long Short-Term Memory (LSTM) and Transformer to enhance noise removal; and (4) The inference process generates the final predictions based on the trained denoising network. To validate the effectiveness of the MFRD model, we conducted experiments on two data platforms: Australian Energy Market Operator (AEMO) and Independent System Operator of New England (ISO-NE). The experimental results show that our model consistently outperforms the compared models. | https://arxiv.org/abs/2601.06533 | Academic Papers | svg |
| b96fa48bd7f06450e61e1e1381b205887e23796ea9ac83fc8f589e587c6af2d0 | 2026-01-13T00:00:00-05:00 | Automated dimensional analysis for PDEs | arXiv:2601.06535v1 Announce Type: new Abstract: Physical units are fundamental to scientific computing. However, many finite element frameworks lack built-in support for dimensional analysis. In this work, we present a systematic framework for integrating physical units into the Unified Form Language (UFL). We implement a symbolic Quantity class to track units within variational forms. The implementation exploits the abelian group structure of physical dimensions. We represent them as vectors in $\mathbb{Q}^n$ to simplify operations and improve performance. A graph-based visitor pattern traverses the expression tree to automate consistency checks and factorization. We demonstrate that this automated nondimensionalization functions as the simplest form of Full Operator Preconditioning. It acts as a physics-aware diagonal preconditioner that equilibrates linear systems prior to assembly. Numerical experiments with the Navier--Stokes equations show that this improves the condition number of the saddle-point matrix. Analysis of Neo-Hooke hyperelasticity highlights the detection of floating-point cancellation errors in small deformation regimes. Finally, the Poisson--Nernst--Planck system example illustrates the handling of coupled multiphysics problems with derived scaling parameters. Although the implementation targets the FEniCSx framework, the concepts are general and easily adaptable to other finite element libraries using UFL, such as Firedrake or DUNE. | https://arxiv.org/abs/2601.06535 | Academic Papers | svg |
| 25b6c1aff872d41f861f74bde9a4d399a27d085d30dba28e0039709a93325a4b | 2026-01-13T00:00:00-05:00 | Exposía: Academic Writing Assessment of Exposés and Peer Feedback | arXiv:2601.06536v1 Announce Type: new Abstract: We present Exposía, the first public dataset that connects writing and feedback assessment in higher education, enabling research on educationally grounded approaches to academic writing evaluation. Exposía includes student research project proposals and peer and instructor feedback consisting of comments and free-text reviews. The dataset was collected in the "Introduction to Scientific Work" course of the Computer Science undergraduate program that focuses on teaching academic writing skills and providing peer feedback on academic writing. Exposía reflects the multi-stage nature of the academic writing process that includes drafting, providing and receiving feedback, and revising the writing based on the feedback received. Both the project proposals and peer feedback are accompanied by human assessment scores based on a fine-grained, pedagogically-grounded schema for writing and feedback assessment that we develop. We use Exposía to benchmark state-of-the-art open-source large language models (LLMs) for two tasks: automated scoring of (1) the proposals and (2) the student reviews. The strongest LLMs attain high agreement on scoring aspects that require little domain knowledge but degrade on dimensions evaluating content, in line with human agreement values. We find that LLMs align better with the human instructors giving high scores. Finally, we establish that a prompting strategy that scores multiple aspects of the writing together is the most effective, an important finding for classroom deployment. | https://arxiv.org/abs/2601.06536 | Academic Papers | svg |
| 6bb58a478b8587030cd7f09c79f9aedf4f337bc330e0ae60864a0d9913bdb8ed | 2026-01-13T00:00:00-05:00 | Towards Egocentric 3D Hand Pose Estimation in Unseen Domains | arXiv:2601.06537v1 Announce Type: new Abstract: We present V-HPOT, a novel approach for improving the cross-domain performance of 3D hand pose estimation from egocentric images across diverse, unseen domains. State-of-the-art methods demonstrate strong performance when trained and tested within the same domain. However, they struggle to generalise to new environments due to limited training data and depth perception -- overfitting to specific camera intrinsics. Our method addresses this by estimating keypoint z-coordinates in a virtual camera space, normalised by focal length and image size, enabling camera-agnostic depth prediction. We further leverage this invariance to camera intrinsics to propose a self-supervised test-time optimisation strategy that refines the model's depth perception during inference. This is achieved by applying a 3D consistency loss between predicted and in-space scale-transformed hand poses, allowing the model to adapt to target domain characteristics without requiring ground truth annotations. V-HPOT significantly improves 3D hand pose estimation performance in cross-domain scenarios, achieving a 71% reduction in mean pose error on the H2O dataset and a 41% reduction on the AssemblyHands dataset. Compared to state-of-the-art methods, V-HPOT outperforms all single-stage approaches across all datasets and competes closely with two-stage methods, despite needing approximately 3.5x to 14x less data. | https://arxiv.org/abs/2601.06537 | Academic Papers | svg |
| c475ec036f2e5280fbcc32995d382f20b9763d035f53685d985ee21eb6563228 | 2026-01-13T00:00:00-05:00 | Self-Organizing Dual-Buffer Adaptive Clustering Experience Replay (SODACER) for Safe Reinforcement Learning in Optimal Control | arXiv:2601.06540v1 Announce Type: new Abstract: This paper proposes a novel reinforcement learning framework, named Self-Organizing Dual-buffer Adaptive Clustering Experience Replay (SODACER), designed to achieve safe and scalable optimal control of nonlinear systems. The proposed SODACER mechanism consists of a Fast-Buffer for rapid adaptation to recent experiences and a Slow-Buffer equipped with a self-organizing adaptive clustering mechanism to maintain diverse and non-redundant historical experiences. The adaptive clustering mechanism dynamically prunes redundant samples, optimizing memory efficiency while retaining critical environmental patterns. The approach integrates SODACER with Control Barrier Functions (CBFs) to guarantee safety by enforcing state and input constraints throughout the learning process. To enhance convergence and stability, the framework is combined with the Sophia optimizer, enabling adaptive second-order gradient updates. The proposed SODACER-Sophia architecture ensures reliable, effective, and robust learning in dynamic, safety-critical environments, offering a generalizable solution for applications in robotics, healthcare, and large-scale system optimization. The proposed approach is validated on a nonlinear Human Papillomavirus (HPV) transmission model with multiple control inputs and safety constraints. Comparative evaluations against random and clustering-based experience replay methods demonstrate that SODACER achieves faster convergence, improved sample efficiency, and a superior bias-variance trade-off, while maintaining safe system trajectories, validated via the Friedman test. | https://arxiv.org/abs/2601.06540 | Academic Papers | svg |
| 82829845fe9ff83170e3e0f95b353ca749c28c9204e309504a03b993eba94912 | 2026-01-13T00:00:00-05:00 | SimLLM: Fine-Tuning Code LLMs for SimPy-Based Queueing System Simulation | arXiv:2601.06543v1 Announce Type: new Abstract: The Python package SimPy is widely used for modeling queueing systems due to its flexibility, simplicity, and smooth integration with modern data analysis and optimization frameworks. Recent advances in large language models (LLMs) have shown strong ability in generating clear and executable code, making them powerful and suitable tools for writing SimPy queueing simulation code. However, directly employing closed-source models like GPT-4o to generate such code may lead to high computational costs and raise data privacy concerns. To address this, we fine-tune two open-source LLMs, Qwen-Coder-7B and DeepSeek-Coder-6.7B, on curated SimPy queueing data, which enhances their code-generating performance in executability, output-format compliance, and instruction-code consistency. In particular, we propose a multi-stage fine-tuning framework comprising two stages of supervised fine-tuning (SFT) and one stage of direct preference optimization (DPO), progressively enhancing the model's ability in SimPy-based queueing simulation code generation. Extensive evaluations demonstrate that both fine-tuned models achieve substantial improvements in executability, output-format compliance, and instruction-code consistency. These results confirm that domain-specific fine-tuning can effectively transform compact open-source code models into reliable SimPy simulation generators, providing a practical alternative to closed-source LLMs for education, research, and operational decision support. | https://arxiv.org/abs/2601.06543 | Academic Papers | svg |
| c7e493b5505f754403f536730bc2d0866aa818631c6239dfe07b0475f64d09bf | 2026-01-13T00:00:00-05:00 | LLMTrack: Semantic Multi-Object Tracking with Multi-modal Large Language Models | arXiv:2601.06550v1 Announce Type: new Abstract: Traditional Multi-Object Tracking (MOT) systems have achieved remarkable precision in localization and association, effectively answering *where* and *who*. However, they often function as detached observers, capable of tracing geometric paths but blind to the semantic *what* and *why* behind object behaviors. To bridge the gap between geometric perception and cognitive reasoning, we propose **LLMTrack**, a novel end-to-end framework for Semantic Multi-Object Tracking (SMOT). We adopt a bionic design philosophy that decouples strong localization from deep understanding, utilizing Grounding DINO as the "eyes" and the LLaVA-OneVision multimodal large model as the "brain". We introduce a Spatio-Temporal Fusion Module that aggregates instance-level interaction features and video-level contexts, enabling the Large Language Model (LLM) to comprehend complex trajectories. Furthermore, we design a progressive three-stage training strategy, Visual Alignment, Temporal Fine-tuning, and Semantic Injection via LoRA, to efficiently adapt the massive model to the tracking domain. Extensive experiments on the BenSMOT benchmark demonstrate that LLMTrack achieves state-of-the-art performance, significantly outperforming existing methods in instance description, interaction recognition, and video summarization while maintaining robust tracking stability. | https://arxiv.org/abs/2601.06550 | Academic Papers | svg |
| c0b30fa2c47b53855285b01cf4ca675817b99c0fd8c2922b2adfefb1a433b258 | 2026-01-13T00:00:00-05:00 | L-RAG: Balancing Context and Retrieval with Entropy-Based Lazy Loading | arXiv:2601.06551v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) has emerged as the predominant paradigm for grounding Large Language Model outputs in factual knowledge, effectively mitigating hallucinations. However, conventional RAG systems operate under a "retrieve-always" assumption, querying vector databases for every input regardless of query complexity. This static approach incurs substantial computational overhead and inference latency, particularly problematic for high-throughput production deployments. We introduce L-RAG (Lazy Retrieval-Augmented Generation), an adaptive framework that implements hierarchical context management through entropy-based gating. L-RAG employs a two-tier architecture: queries are first processed with a compact document summary, and expensive chunk retrieval is triggered only when the model's predictive entropy exceeds a calibrated threshold, signaling genuine uncertainty. Through experiments on SQuAD 2.0 (N=500) using the Phi-2 model, we demonstrate that L-RAG provides a tunable accuracy-efficiency trade-off: at a conservative threshold (tau=0.5), L-RAG achieves 78.2% accuracy, matching Standard RAG (77.8%), with 8% retrieval reduction; at a balanced threshold (tau=1.0), retrieval reduction increases to 26% with a modest accuracy trade-off (76.0%). Latency analysis shows that L-RAG saves 80-210ms per query when retrieval latency exceeds 500ms. Analysis of entropy distributions reveals statistically significant separation (p < 0.001) between correct predictions (H=1.72) and errors (H=2.20), validating entropy as a reliable uncertainty signal. L-RAG offers a practical, training-free approach toward more efficient RAG deployment, providing system architects with a configurable knob to balance accuracy and throughput requirements. | https://arxiv.org/abs/2601.06551 | Academic Papers | svg |
| 593bbf829e91b52cf16c519737a9b246f1e9d1c6282d4c06abba23979f51ac2a | 2026-01-13T00:00:00-05:00 | Model Reconciliation through Explainability and Collaborative Recovery in Assistive Robotics | arXiv:2601.06552v1 Announce Type: new Abstract: Whenever humans and robots work together, it is essential that unexpected robot behavior can be explained to the user. Especially in applications such as shared control, the user and the robot must share the same model of the objects in the world, and the actions that can be performed on these objects. In this paper, we achieve this with a so-called model reconciliation framework. We leverage a Large Language Model to predict and explain the difference between the robot's and the human's mental models, without the need for a formal mental model of the user. Furthermore, our framework aims to resolve the model divergence after the explanation by allowing the human to correct the robot. We provide an implementation in an assistive robotics domain, where we conduct a set of experiments with a real wheelchair-based mobile manipulator and its digital twin. | https://arxiv.org/abs/2601.06552 | Academic Papers | svg |
| 4c6d0b84219ac0d32b98bc0ae258e219c33e24094e3d6d0712da2c52111c00f5 | 2026-01-13T00:00:00-05:00 | A Bayesian Network-Driven Zero Trust Model for Cyber Risk Quantification in Small-Medium Businesses | arXiv:2601.06553v1 Announce Type: new Abstract: Small-Medium Businesses (SMBs) are essential to global economies yet remain highly vulnerable to cyberattacks due to limited budgets, inadequate cybersecurity expertise, and underestimation of cyber risks. Their increasing reliance on digital infrastructures has expanded their attack surfaces, exposing them to sophisticated and evolving threats. Consequently, implementing proactive, adaptive security measures has become imperative. This research investigates the effectiveness of Zero Trust Architecture (ZTA) as a sustainable cybersecurity solution tailored to SMBs. While ZTA adoption has been examined broadly, the specific financial, organizational, and capability constraints of SMBs remain underexplored. This study develops an integrated predictive model to assess both the feasibility and risk-mitigation potential of ZTA implementation. The model consists of two sub-models. The first sub-model evaluates the probability of successful ZTA adoption considering implied barriers, and the second tests the effectiveness of ZTA in responding to prevalent cyberattacks. The integrated model predicts the risk level in the presence of ZTA and quantifies the uncertainty of the extent to which ZTA can enhance SMBs' cyber resilience, contributing novel insights for practitioners and stakeholders seeking to enhance compliance with policies, risk, and governance activities in SMBs. | https://arxiv.org/abs/2601.06553 | Academic Papers | svg |
| 8863dcc0fc5e6d4bcc1e1a1273466bb30775a88d1e976ef0664d0f1f949ef641 | 2026-01-13T00:00:00-05:00 | QES-Backed Virtual FIDO2 Authenticators: Architectural Options for Secure, Synchronizable WebAuthn Credentials | arXiv:2601.06554v1 Announce Type: new Abstract: FIDO2 and the WebAuthn standard offer phishing-resistant, public-key based authentication but traditionally rely on device-bound cryptographic keys that are not naturally portable across user devices. Recent passkey deployments address this limitation by enabling multi-device credentials synchronized via platform-specific cloud ecosystems. However, these approaches require users and organizations to trust the corresponding cloud or phone providers with the protection and availability of their authentication material. In parallel, qualified electronic signature (QES) tokens and smart-card-based PKCS#11 modules provide high-assurance, hardware-rooted identity, yet they are not directly compatible with WebAuthn flows. This paper explores architectural options for bridging these technologies by securing a virtual FIDO2 authenticator with a QES-grade PKCS#11 key and enabling encrypted cloud synchronization of FIDO2 private keys. We first present and implement a baseline architecture in which the cloud stores only ciphertext and the decryption capability remains anchored exclusively in the user's hardware token. We then propose a hardened variant that introduces an Oblivious Pseudorandom Function (OPRF)-based mechanism bound to a local user-verification factor, thereby mitigating cross-protocol misuse and ensuring that synchronization keys cannot be repurposed outside the intended FIDO2 semantics; this enhanced design is analyzed but not implemented. Both architectures preserve a pure WebAuthn/FIDO2 interface to relying parties while offering different trust and deployment trade-offs. We provide the system model, threat analysis, implementation of the baseline architecture, and experimental evaluation, followed by a discussion of the hardened variant's security implications for high-assurance authentication deployments. | https://arxiv.org/abs/2601.06554 | Academic Papers | svg |
| 889f913207bf319e99ef4e315edacde4a00643bbee43c1476b0077d61b6014c2 | 2026-01-13T00:00:00-05:00 | Modeling Descriptive Norms in Multi-Agent Systems: An Auto-Aggregation PDE Framework with Adaptive Perception Kernels | arXiv:2601.06557v1 Announce Type: new Abstract: This paper presents a PDE-based auto-aggregation model for simulating descriptive norm dynamics in autonomous multi-agent systems, capturing convergence and violation through non-local perception kernels and external potential fields. Extending classical transport equations, the framework represents opinion popularity as a continuous distribution, enabling direct interactions without Bayesian guessing of beliefs. Applied to a real-world COVID-19 dataset from a major medical center, the experimental results demonstrate that: when clinical guidelines serve as a top-down constraint mechanism, it effectively generates convergence of novel descriptive norms consistent with the dataset; in the bottom-up experiment, potential field guidance successfully promotes the system's reconstruction of descriptive norms aligned with the dataset through violation-and-recoupling; whereas fully autonomous interaction leads to the emergence of multi-centric normative structures independent of the dataset. | https://arxiv.org/abs/2601.06557 | Academic Papers | svg |
| 0b0b26e4416d7cf29b39863671863c8109623aeaa58665fc383fedb4473fe810 | 2026-01-13T00:00:00-05:00 | Hard Thresholding Pursuit Algorithms for Least Absolute Deviations Problem | arXiv:2601.06558v1 Announce Type: new Abstract: Least absolute deviations (LAD) is a statistical optimality criterion widely utilized in scenarios where a minority of measurements are contaminated by outliers of arbitrary magnitudes. In this paper, we delve into the robustness to outliers of a variant of adaptive iterative hard thresholding, known as the graded fast hard thresholding pursuit (GFHTP$_1$) algorithm. Unlike the majority of the state-of-the-art algorithms in this field, GFHTP$_1$ does not require prior information about the signal's sparsity. Moreover, its design is parameterless, which not only simplifies the implementation process but also removes the intricacies of parameter optimization. Numerical experiments reveal that the GFHTP$_1$ algorithm consistently outperforms competing algorithms in terms of both robustness and computational efficiency. | https://arxiv.org/abs/2601.06558 | Academic Papers | svg |
1c874c95c4b9369f90f23b5060ff977ce1d0c5740501e7f5ee35557be2c593f2
|
2026-01-13T00:00:00-05:00
|
ArrowGEV: Grounding Events in Video via Learning the Arrow of Time
|
arXiv:2601.06559v1 Announce Type: new Abstract: Grounding events in videos serves as a fundamental capability in video analysis. While Vision-Language Models (VLMs) are increasingly employed for this task, existing approaches predominantly train models to associate events with timestamps in the forward video only. This paradigm hinders VLMs from capturing the inherent temporal structure and directionality of events, thereby limiting robustness and generalization. To address this limitation, inspired by the arrow of time in physics, which characterizes the intrinsic directionality of temporal processes, we propose ArrowGEV, a reinforcement learning framework that explicitly models temporal directionality in events to improve both event grounding and temporal directionality understanding in VLMs. Specifically, we categorize events into time-sensitive (e.g., putting down a bag) and time-insensitive (e.g., holding a towel in the left hand). The former denote events whose reversal substantially alters their meaning, while the latter remain semantically unchanged under reversal. For time-sensitive events, ArrowGEV introduces a reward that encourages VLMs to discriminate between forward and backward videos, whereas for time-insensitive events, it enforces consistent grounding across both directions. Extensive experiments demonstrate that ArrowGEV not only improves grounding precision and temporal directionality recognition, but also enhances general video understanding and reasoning ability.
|
https://arxiv.org/abs/2601.06559
|
Academic Papers
|
svg
|
499cb3a5f9c8ff2e5b855ffb0c2c1e312c26363e314b4ff17cad247ad5d9d96b
|
2026-01-13T00:00:00-05:00
|
Mosaic: Unlocking Long-Context Inference for Diffusion LLMs via Global Memory Planning and Dynamic Peak Taming
|
arXiv:2601.06562v1 Announce Type: new Abstract: Diffusion-based large language models (dLLMs) have emerged as a promising paradigm, utilizing simultaneous denoising to enable global planning and iterative refinement. While these capabilities are particularly advantageous for long-context generation, deploying such models faces a prohibitive memory capacity barrier stemming from severe system inefficiencies. We identify that existing inference systems are ill-suited for this paradigm: unlike autoregressive models constrained by the cumulative KV-cache, dLLMs are bottlenecked by transient activations recomputed at every step. Furthermore, general-purpose memory reuse mechanisms lack the global visibility to adapt to dLLMs' dynamic memory peaks, which toggle between logits and FFNs. To address these mismatches, we propose Mosaic, a memory-efficient inference system that shifts from local, static management to a global, dynamic paradigm. Mosaic integrates a mask-only logits kernel to eliminate redundancy, a lazy chunking optimizer driven by an online heuristic search to adaptively mitigate dynamic peaks, and a global memory manager to resolve fragmentation via virtual addressing. Extensive evaluations demonstrate that Mosaic achieves an average 2.71$\times$ reduction in the memory peak-to-average ratio and increases the maximum inference sequence length supportable on identical hardware by 15.89-32.98$\times$. This scalability is achieved without compromising accuracy or speed; in fact, Mosaic reduces latency by 4.12%-23.26%.
|
https://arxiv.org/abs/2601.06562
|
Academic Papers
|
svg
|
d407c58583dc13deac0caa2b1848f7712f5ffe3fd3fc2b158dc9a68f8b761ebf
|
2026-01-13T00:00:00-05:00
|
CSR-RAG: An Efficient Retrieval System for Text-to-SQL on the Enterprise Scale
|
arXiv:2601.06564v1 Announce Type: new Abstract: Natural language to SQL translation (Text-to-SQL) is one of the long-standing problems that has recently benefited from advances in Large Language Models (LLMs). While most academic Text-to-SQL benchmarks include the schema description as part of the natural language input, enterprise-scale applications often require table retrieval before SQL query generation. To address this need, we propose a novel hybrid Retrieval Augmented Generation (RAG) system consisting of contextual, structural, and relational retrieval (CSR-RAG) to achieve computationally efficient yet sufficiently accurate retrieval for enterprise-scale databases. Through extensive enterprise benchmarks, we demonstrate that CSR-RAG achieves up to 40% precision and over 80% recall while incurring a negligible average query generation latency of only 30ms on commodity data center hardware, which makes it appropriate for modern LLM-based enterprise-scale systems.
|
https://arxiv.org/abs/2601.06564
|
Academic Papers
|
svg
|
433e99ed32b415dfac5828ae1791a05428064149b9e0c3dc4bb14cf0be749333
|
2026-01-13T00:00:00-05:00
|
EVM-QuestBench: An Execution-Grounded Benchmark for Natural-Language Transaction Code Generation
|
arXiv:2601.06565v1 Announce Type: new Abstract: Large language models are increasingly applied to various development scenarios. However, in on-chain transaction scenarios, even a minor error can cause irreversible loss for users. Existing evaluations often overlook execution accuracy and safety. We introduce EVM-QuestBench, an execution-grounded benchmark for natural-language transaction-script generation on EVM-compatible chains. The benchmark employs dynamic evaluation: instructions are sampled from template pools, numeric parameters are drawn from predefined intervals, and validators verify outcomes against these instantiated values. EVM-QuestBench contains 107 tasks (62 atomic, 45 composite). Its modular architecture enables rapid task development. The runner executes scripts on a forked EVM chain with snapshot isolation; composite tasks apply step-efficiency decay. We evaluate 20 models and find large performance gaps, with split scores revealing persistent asymmetry between single-action precision and multi-step workflow completion. Code: https://anonymous.4open.science/r/bsc_quest_bench-A9CF/.
|
https://arxiv.org/abs/2601.06565
|
Academic Papers
|
svg
|
7ace90428a4246f087117281e9eedcd42e01f35b7da134c6a38247ca6455cb6e
|
2026-01-13T00:00:00-05:00
|
QCaption: Video Captioning and Q&A through Fusion of Large Multimodal Models
|
arXiv:2601.06566v1 Announce Type: new Abstract: This paper introduces QCaption, a novel video captioning and Q&A pipeline that enhances video analytics by fusing three models: key frame extraction, a Large Multimodal Model (LMM) for image-text analysis, and a Large Language Model (LLM) for text analysis. This approach enables integrated analysis of text, images, and video, achieving performance improvements over existing video captioning and Q&A models, all while remaining fully self-contained and suitable for on-premises deployment. Experimental results using QCaption demonstrated up to 44.2% and 48.9% improvements in video captioning and Q&A tasks, respectively. Ablation studies were also performed to assess the role of the LLM in the fusion results. Moreover, the paper proposes and evaluates additional video captioning approaches, benchmarking them against QCaption and existing methodologies. QCaption demonstrates the potential of adopting a model fusion approach in advancing video analytics.
|
https://arxiv.org/abs/2601.06566
|
Academic Papers
|
svg
|
6c6706d4db20f0dec2a6b26faee3b59e3a4c70794a0e2debd6136aa8d9dc5ce6
|
2026-01-13T00:00:00-05:00
|
Robustness Quantification of MIMO-PI Controller From the Perspective of \(\gamma\)-Dissipativity
|
arXiv:2601.06568v1 Announce Type: new Abstract: The proportional-integral-derivative (PID) controller and its variants are widely used in control engineering, but they often rely on linearization around equilibrium points and empirical parameter tuning, making them ineffective for multi-input-multi-output (MIMO) systems with strong coupling, intense external disturbances, and high nonlinearity. Moreover, existing methods rarely explore the intrinsic stabilization mechanism of PID controllers for disturbed nonlinear systems from the perspective of modern robust control theories such as dissipativity and $\mathcal{L}_2$-gain. To address this gap, this study focuses on $\gamma$-dissipativity (partially equivalent to $\mathcal{L}_2$-gain) and investigates the optimal parameter tuning of MIMO-PI controllers for general disturbed nonlinear MIMO systems. First, by integrating dissipativity theory with the Hamilton-Jacobi-Isaacs (HJI) inequality, sufficient conditions for the MIMO-PI-controlled system to achieve $\gamma$-dissipativity are established, and the degree of $\gamma$-dissipativity in a local region containing the origin is quantified. Second, an optimal parameter tuning strategy is proposed, which reformulates the $\gamma$-dissipativity optimization problem into a class of standard eigenvalue problems (EVPs) and further converts it into linear matrix inequality (LMI) formulations for efficient online computation. Comprehensive simulation experiments validate the effectiveness and optimality of the proposed approach. This work provides a theoretical basis for the robust stabilization of general disturbed nonlinear MIMO systems and enriches the parameter tuning methods of PID controllers from the perspective of dissipativity.
|
https://arxiv.org/abs/2601.06568
|
Academic Papers
|
svg
|
8318a4a65896f7cfd8a4cf3ef0bc9ddb51c0a9a78f1a0e25404fdbada15d31d4
|
2026-01-13T00:00:00-05:00
|
Hellinger Multimodal Variational Autoencoders
|
arXiv:2601.06572v1 Announce Type: new Abstract: Multimodal variational autoencoders (VAEs) are widely used for weakly supervised generative learning with multiple modalities. Predominant methods aggregate unimodal inference distributions using either a product of experts (PoE), a mixture of experts (MoE), or their combinations to approximate the joint posterior. In this work, we revisit multimodal inference through the lens of probabilistic opinion pooling, an optimization-based approach. We start from H\"older pooling with $\alpha=0.5$, which corresponds to the unique symmetric member of the $\alpha\text{-divergence}$ family, and derive a moment-matching approximation, termed Hellinger. We then leverage such an approximation to propose HELVAE, a multimodal VAE that avoids sub-sampling, yielding an efficient yet effective model that: (i) learns more expressive latent representations as additional modalities are observed; and (ii) empirically achieves better trade-offs between generative coherence and quality, outperforming state-of-the-art multimodal VAE models.
|
https://arxiv.org/abs/2601.06572
|
Academic Papers
|
svg
|
16f29d9b7f59dac952c926251c72f34fd394bf82c4217ae5c2e7325c4721439c
|
2026-01-13T00:00:00-05:00
|
QMAVIS: Long Video-Audio Understanding using Fusion of Large Multimodal Models
|
arXiv:2601.06573v1 Announce Type: new Abstract: Large Multimodal Models (LMMs) for video-audio understanding have traditionally been evaluated only on shorter videos of a few minutes long. In this paper, we introduce QMAVIS (Q Team-Multimodal Audio Video Intelligent Sensemaking), a novel long video-audio understanding pipeline built through a late fusion of LMMs, Large Language Models, and speech recognition models. QMAVIS addresses the gap in long-form video analytics, particularly for longer videos of a few minutes to beyond an hour long, opening up new potential applications in sensemaking, video content analysis, embodied AI, etc. Quantitative experiments using QMAVIS demonstrated a 38.75% improvement over state-of-the-art video-audio LMMs like VideoLLaMA2 and InternVL2 on the VideoMME (with subtitles) dataset, which comprises long videos with audio information. Evaluations on other challenging video understanding datasets like PerceptionTest and EgoSchema saw up to 2% improvement, indicating competitive performance. Qualitative experiments also showed that QMAVIS is able to extract the nuances of different scenes in long video-audio content while understanding the overarching narrative. Ablation studies were also conducted to ascertain the impact of each component in the fusion pipeline.
|
https://arxiv.org/abs/2601.06573
|
Academic Papers
|
svg
|
e01b175e42bfd3b7a9d437f7c26469a0bf413dbeb0018b467a607f87640c4b5a
|
2026-01-13T00:00:00-05:00
|
APEX: Learning Adaptive Priorities for Multi-Objective Alignment in Vision-Language Generation
|
arXiv:2601.06574v1 Announce Type: new Abstract: Multi-objective alignment for text-to-image generation is commonly implemented via static linear scalarization, but fixed weights often fail under heterogeneous rewards, leading to optimization imbalance where models overfit high-variance, high-responsiveness objectives (e.g., OCR) while under-optimizing perceptual goals. We identify two mechanistic causes: variance hijacking, where reward dispersion induces implicit reweighting that dominates the normalized training signal, and gradient conflicts, where competing objectives produce opposing update directions and trigger seesaw-like oscillations. We propose APEX (Adaptive Priority-based Efficient X-objective Alignment), which stabilizes heterogeneous rewards with Dual-Stage Adaptive Normalization and dynamically schedules objectives via P^3 Adaptive Priorities that combine learning potential, conflict penalty, and progress need. On Stable Diffusion 3.5, APEX achieves improved Pareto trade-offs across four heterogeneous objectives, with balanced gains of +1.31 PickScore, +0.35 DeQA, and +0.53 Aesthetics while maintaining competitive OCR accuracy, mitigating the instability of multi-objective alignment.
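The "variance hijacking" failure mode can be illustrated with per-objective reward standardization, a standard remedy; this is a generic sketch, not the paper's Dual-Stage Adaptive Normalization, and all names are hypothetical.

```python
# Sketch: without per-objective standardization, a high-variance
# objective dominates any fixed-weight linear scalarization.
# Illustrative only; not the paper's DSAN method.

def znorm(rewards):
    """Standardize one objective's reward batch to mean 0, std 1."""
    m = sum(rewards) / len(rewards)
    var = sum((r - m) ** 2 for r in rewards) / len(rewards)
    sd = var ** 0.5 or 1.0  # guard against a constant reward batch
    return [(r - m) / sd for r in rewards]

def scalarize(samples, weights):
    """Fixed-weight linear scalarization over per-sample reward vectors."""
    return [sum(w * r for w, r in zip(weights, sample))
            for sample in samples]
```

After `znorm`, each objective contributes on the same scale, so the fixed weights, rather than the raw reward dispersion, determine each objective's influence on the training signal.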
|
https://arxiv.org/abs/2601.06574
|
Academic Papers
|
svg
|
eb0cd72fce8a3da9f39a4edae5cd9cb66ece2f3bbbaf4db99308268cd057aa65
|
2026-01-13T00:00:00-05:00
|
Are Emotions Arranged in a Circle? Geometric Analysis of Emotion Representations via Hyperspherical Contrastive Learning
|
arXiv:2601.06575v1 Announce Type: new Abstract: Psychological research has long utilized circumplex models to structure emotions, placing similar emotions adjacently and opposing ones diagonally. Although frequently used to interpret deep learning representations, these models are rarely directly incorporated into the representation learning of language models, leaving their geometric validity unexplored. This paper proposes a method to induce circular emotion representations within language model embeddings via contrastive learning on a hypersphere. We show that while this circular alignment offers superior interpretability and robustness against dimensionality reduction, it underperforms compared to conventional designs in high-dimensional settings and fine-grained classification. Our findings elucidate the trade-offs involved in applying psychological circumplex models to deep learning architectures.
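The circumplex geometry being tested can be sketched in two dimensions: emotions placed evenly on the unit circle are most similar to their neighbors and maximally dissimilar to their diagonal opposites. A toy illustration with hypothetical names, not the paper's learned hyperspherical embeddings:

```python
import math

# Toy circumplex: place n emotion categories evenly on the unit circle,
# so adjacent emotions are similar and diagonal ones are opposed.
# Purely illustrative; the paper induces such structure via contrastive
# learning on a hypersphere rather than fixing positions by hand.

def circumplex(idx, n):
    """Unit vector for the idx-th of n emotions on the circle."""
    ang = 2.0 * math.pi * idx / n
    return (math.cos(ang), math.sin(ang))

def cos_sim(u, v):
    """Cosine similarity; the dot product suffices for unit vectors."""
    return u[0] * v[0] + u[1] * v[1]
```

With `n = 8`, neighboring emotions have similarity cos(45°) ≈ 0.71 while opposites sit at -1, matching the adjacency/opposition structure psychological circumplex models posit.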
|
https://arxiv.org/abs/2601.06575
|
Academic Papers
|
svg
|
16d4630196cc1f4b962bb30f67a23f258fd8d8b4d402291bc6205049e90d4670
|
2026-01-13T00:00:00-05:00
|
Stylistic Evolution and LLM Neutrality in Singlish Language
|
arXiv:2601.06580v1 Announce Type: new Abstract: Singlish is a creole rooted in Singapore's multilingual environment and continues to evolve alongside social and technological change. This study investigates the evolution of Singlish over a decade of informal digital text messages. We propose a stylistic similarity framework that compares lexico-structural, pragmatic, psycholinguistic, and encoder-derived features across years to quantify temporal variation. Our analysis reveals notable diachronic changes in tone, expressivity and sentence construction over the years. Conversely, while some LLMs were able to generate superficially realistic Singlish messages, they do not produce temporally neutral outputs, and residual temporal signals remain detectable despite prompting and fine-tuning. Our findings highlight the dynamic evolution of Singlish, as well as the capabilities and limitations of current LLMs in modeling sociolectal and temporal variations in the colloquial language.
|
https://arxiv.org/abs/2601.06580
|
Academic Papers
|
svg
|
d47708cf441ed713f7170dce98f598c55c3dd289560e5923e9600944b815733e
|
2026-01-13T00:00:00-05:00
|
Softly Induced Functional Simplicity: Implications for Neural Network Generalisation, Robustness, and Distillation
|
arXiv:2601.06584v1 Announce Type: new Abstract: Learning robust and generalisable abstractions from high-dimensional input data is a central challenge in machine learning and its applications to high-energy physics (HEP). Solutions of lower functional complexity are known to produce abstractions that generalise more effectively and are more robust to input perturbations. In complex hypothesis spaces, inductive biases make such solutions learnable by shaping the loss geometry during optimisation. In a HEP classification task, we show that a soft symmetry-respecting inductive bias creates approximate degeneracies in the loss, which we identify as pseudo-Goldstone modes. We quantify functional complexity using metrics derived from first-principles Hessian analysis and via compressibility. Our results demonstrate that solutions of lower complexity give rise to abstractions that are more generalisable, robust, and efficiently distillable.
|
https://arxiv.org/abs/2601.06584
|
Academic Papers
|
svg
|
d353100a62d51ddc4cbce0bbad5099d023b037f6bba813254a19854c5e0fca0b
|
2026-01-13T00:00:00-05:00
|
Detecting LLM-Generated Text with Performance Guarantees
|
arXiv:2601.06586v1 Announce Type: new Abstract: Large language models (LLMs) such as GPT, Claude, Gemini, and Grok have been deeply integrated into our daily life. They now support a wide range of tasks -- from dialogue and email drafting to assisting with teaching and coding, serving as search engines, and much more. However, their ability to produce highly human-like text raises serious concerns, including the spread of fake news, the generation of misleading governmental reports, and academic misconduct. To address this practical problem, we train a classifier to determine whether a piece of text is authored by an LLM or a human. Our detector is deployed on an online CPU-based platform https://huggingface.co/spaces/stats-powered-ai/StatDetectLLM, and contains three novelties over existing detectors: (i) it does not rely on auxiliary information, such as watermarks or knowledge of the specific LLM used to generate the text; (ii) it more effectively distinguishes between human- and LLM-authored text; and (iii) it enables statistical inference, which is largely absent in the current literature. Empirically, our classifier achieves higher classification accuracy compared to existing detectors, while maintaining type-I error control, high statistical power, and computational efficiency.
|
https://arxiv.org/abs/2601.06586
|
Academic Papers
|
svg
|
5b53b6b3f35f4131caa380f7d8176382f13f1d8feb8a88db74fc9423bfe6dee2
|
2026-01-13T00:00:00-05:00
|
TCLNet: A Hybrid Transformer-CNN Framework Leveraging Language Models as Lossless Compressors for CSI Feedback
|
arXiv:2601.06588v1 Announce Type: new Abstract: In frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, downlink channel state information (CSI) plays a crucial role in achieving high spectrum and energy efficiency. However, the CSI feedback overhead becomes a major bottleneck as the number of antennas increases. Although existing deep learning-based CSI compression methods have shown great potential, they still face limitations in capturing both local and global features of CSI, thereby limiting achievable compression efficiency. To address these issues, we propose TCLNet, a unified CSI compression framework that integrates a hybrid Transformer-CNN architecture for lossy compression with a hybrid language model (LM) and factorized model (FM) design for lossless compression. The lossy module jointly exploits local features and global context, while the lossless module adaptively switches between context-aware coding and parallel coding to optimize the rate-distortion-complexity (RDC) trade-off. Extensive experiments on both real-world and simulated datasets demonstrate that the proposed TCLNet outperforms existing approaches in terms of reconstruction accuracy and transmission efficiency, achieving up to a 5 dB performance gain across diverse scenarios. Moreover, we show that large language models (LLMs) can be leveraged as zero-shot CSI lossless compressors via carefully designed prompts.
|
https://arxiv.org/abs/2601.06588
|
Academic Papers
|
svg
|
72d34f20d06930cc8618f5be335772365aa55e4d25bc2e5927ae7d705b187fc4
|
2026-01-13T00:00:00-05:00
|
Modeling Tradeoffs between mobility, cost, and performance in Edge Computing
|
arXiv:2601.06591v1 Announce Type: new Abstract: Edge computing provides a cloud-like architecture where small-scale resources are distributed near the network edge, enabling applications on resource-constrained devices to offload latency-critical computations to these resources. While some recent work showed that the resource constraints of the edge could result in higher end-to-end latency under medium to high utilization due to higher queuing delays, to the best of our knowledge, there has not been any work on modeling the trade-offs of deploying on edge versus cloud infrastructures in the presence of mobility. Understanding the costs and trade-offs of this architecture is important for network designers, as the architecture is now adopted to be part of 5G and beyond networks in the form of Multi-access Edge Computing (MEC). In this paper, we focus on quantifying and estimating the cost of edge computing. Using closed-form queuing models, we explore the cost-performance trade-offs in the presence of different system dynamics. We model how workload mobility and workload variations influence these trade-offs, and validate our results with realistic experiments and simulations. Finally, we discuss the practical implications for designing edge systems and developing algorithms for efficient resource and workload management.
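For intuition, the closed-form M/M/1 sojourn time shows why a resource-constrained edge can lose to a distant cloud under load; this is a generic queueing sketch with made-up rates, not the paper's exact model.

```python
# Generic M/M/1 sketch (not the paper's exact model): end-to-end latency
# as queueing sojourn time W = 1 / (mu - lam) plus a fixed network delay.

def mm1_sojourn(lam, mu):
    """Mean time in an M/M/1 system: arrival rate lam, service rate mu."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must stay below mu")
    return 1.0 / (mu - lam)

def end_to_end(lam, mu, net_delay):
    """Offloading latency: queueing at the server plus network delay."""
    return mm1_sojourn(lam, mu) + net_delay

# A small nearby edge (mu = 2, net = 0.01) beats a large distant cloud
# (mu = 10, net = 0.6) at low load, but loses once queueing dominates.
```

The crossover point, where the edge's queueing penalty outweighs the cloud's network penalty, is exactly the kind of utilization-dependent trade-off the closed-form models above capture.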
|
https://arxiv.org/abs/2601.06591
|
Academic Papers
|
svg
|
82276a7a093e0feacf4fe61b9b7ced8b33fde13931c99e8166d1735893ea7b6e
|
2026-01-13T00:00:00-05:00
|
Are LLMs Vulnerable to Preference-Undermining Attacks (PUA)? A Factorial Analysis Methodology for Diagnosing the Trade-off between Preference Alignment and Real-World Validity
|
arXiv:2601.06596v1 Announce Type: new Abstract: Large Language Model (LLM) training often optimizes for preference alignment, rewarding outputs that are perceived as helpful and interaction-friendly. However, this preference-oriented objective can be exploited: manipulative prompts can steer responses toward user-appeasing agreement and away from truth-oriented correction. In this work, we investigate whether aligned models are vulnerable to Preference-Undermining Attacks (PUA), a class of manipulative prompting strategies designed to exploit the model's desire to please user preferences at the expense of truthfulness. We propose a diagnostic methodology that provides a finer-grained and more directive analysis than aggregate benchmark scores, using a factorial evaluation framework to decompose prompt-induced shifts into interpretable effects of system objectives (truth- vs. preference-oriented) and PUA-style dialogue factors (directive control, personal derogation, conditional approval, reality denial) within a controlled $2 \times 2^4$ design. Surprisingly, more advanced models are sometimes more susceptible to manipulative prompts. Beyond the dominant reality-denial factor, we observe model-specific sign reversals and interactions with PUA-style factors, suggesting tailored defenses rather than uniform robustness. These findings offer a novel, reproducible factorial evaluation methodology that provides finer-grained diagnostics for post-training processes like RLHF, enabling better trade-offs in the product iteration of LLMs by offering a more nuanced understanding of preference alignment risks and the impact of manipulative prompts.
|
https://arxiv.org/abs/2601.06596
|
Academic Papers
|
svg
|
63a8c6f6a3f1a5062fdb1755bbe3f14b2c99fa757fe42295ff09f0ce9a192daa
|
2026-01-13T00:00:00-05:00
|
Implicit bias as a Gauge correction: Theory and Inverse Design
|
arXiv:2601.06597v1 Announce Type: new Abstract: A central problem in machine learning theory is to characterize how learning dynamics select particular solutions among the many compatible with the training objective, a phenomenon, called implicit bias, which remains only partially characterized. In the present work, we identify a general mechanism, in terms of an explicit geometric correction of the learning dynamics, for the emergence of implicit biases, arising from the interaction between continuous symmetries in the model's parametrization and stochasticity in the optimization process. Our viewpoint is constructive in two complementary directions: given model symmetries, one can derive the implicit bias they induce; conversely, one can inverse-design a wide class of different implicit biases by computing specific redundant parameterizations. More precisely, we show that, when the dynamics is expressed in the quotient space obtained by factoring out the symmetry group of the parameterization, the resulting stochastic differential equation gains a closed form geometric correction in the stationary distribution of the optimizer dynamics favoring orbits with small local volume. We compute the resulting symmetry induced bias for a range of architectures, showing how several well known results fit into a single unified framework. The approach also provides a practical methodology for deriving implicit biases in new settings, and it yields concrete, testable predictions that we confirm by numerical simulations on toy models trained on synthetic data, leaving more complex scenarios for future work. Finally, we test the implicit bias inverse-design procedure in notable cases, including biases toward sparsity in linear features or in spectral properties of the model parameters.
|
https://arxiv.org/abs/2601.06597
|
Academic Papers
|
svg
|
410710a341e76d01ab4e1a8a6ada0ce2af96d2bca20717affb149d4b8569a211
|
2026-01-13T00:00:00-05:00
|
How Context Shapes Truth: Geometric Transformations of Statement-level Truth Representations in LLMs
|
arXiv:2601.06599v1 Announce Type: new Abstract: Large Language Models (LLMs) often encode whether a statement is true as a vector in their residual stream activations. These vectors, also known as truth vectors, have been studied in prior work, however how they change when context is introduced remains unexplored. We study this question by measuring (1) the directional change ($\theta$) between the truth vectors with and without context and (2) the relative magnitude of the truth vectors upon adding context. Across four LLMs and four datasets, we find that (1) truth vectors are roughly orthogonal in early layers, converge in middle layers, and may stabilize or continue increasing in later layers; (2) adding context generally increases the truth vector magnitude, i.e., the separation between true and false representations in the activation space is amplified; (3) larger models distinguish relevant from irrelevant context mainly through directional change ($\theta$), while smaller models show this distinction through magnitude differences. We also find that context conflicting with parametric knowledge produces larger geometric changes than parametrically aligned context. To the best of our knowledge, this is the first work that provides a geometric characterization of how context transforms the truth vector in the activation space of LLMs.
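The two geometric measurements described, directional change θ and relative magnitude, reduce to elementary vector operations; a sketch on plain vectors with hypothetical inputs, not actual residual-stream activations:

```python
import math

# The two quantities measured above, for plain vectors:
# (1) directional change theta between two truth vectors, and
# (2) the relative magnitude after adding context.
# Inputs here are hypothetical, not model activations.

def theta_deg(u, v):
    """Angle in degrees between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp rounding error
    return math.degrees(math.acos(c))

def relative_magnitude(no_ctx, with_ctx):
    """||v_context|| / ||v_no_context||; > 1 means context amplifies
    the separation between true and false representations."""
    n1 = math.sqrt(sum(a * a for a in no_ctx))
    n2 = math.sqrt(sum(b * b for b in with_ctx))
    return n2 / n1
```

In these terms, "roughly orthogonal in early layers" corresponds to θ near 90°, and the reported amplification corresponds to a relative magnitude above 1.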
|
https://arxiv.org/abs/2601.06599
|
Academic Papers
|
svg
|
36655046df30d121ce1839e4738c4134e8bb1d266d9694e0f397ce01b9e0dc5d
|
2026-01-13T00:00:00-05:00
|
Probing Multimodal Large Language Models on Cognitive Biases in Chinese Short-Video Misinformation
|
arXiv:2601.06600v1 Announce Type: new Abstract: Short-video platforms have become major channels for misinformation, where deceptive claims frequently leverage visual experiments and social cues. While Multimodal Large Language Models (MLLMs) have demonstrated impressive reasoning capabilities, their robustness against misinformation entangled with cognitive biases remains under-explored. In this paper, we introduce a comprehensive evaluation framework using a high-quality, manually annotated dataset of 200 short videos spanning four health domains. This dataset provides fine-grained annotations for three deceptive patterns: experimental errors, logical fallacies, and fabricated claims, each verified by evidence such as national standards and academic literature. We evaluate eight frontier MLLMs across five modality settings. Experimental results demonstrate that Gemini-2.5-Pro achieves the highest performance in the multimodal setting with a belief score of 71.5/100, while o3 performs the worst at 35.2. Furthermore, we investigate social cues that induce false beliefs in videos and find that models are susceptible to biases like authoritative channel IDs.
|
https://arxiv.org/abs/2601.06600
|
Academic Papers
|
svg
|
4a9ef9f30960c7a98a352efbbca4b43f89a613cae3fa19048082eb9da4d0277b
|
2026-01-13T00:00:00-05:00
|
UMLoc: Uncertainty-Aware Map-Constrained Inertial Localization with Quantified Bounds
|
arXiv:2601.06602v1 Announce Type: new Abstract: Inertial localization is particularly valuable in GPS-denied environments such as indoors. However, localization using only Inertial Measurement Units (IMUs) suffers from drift caused by motion-process noise and sensor biases. This paper introduces Uncertainty-aware Map-constrained Inertial Localization (UMLoc), an end-to-end framework that jointly models IMU uncertainty and map constraints to achieve drift-resilient positioning. UMLoc integrates two coupled modules: (1) a Long Short-Term Memory (LSTM) quantile regressor, which estimates the specific quantiles needed to define 68%, 90%, and 95% prediction intervals serving as a measure of localization uncertainty and (2) a Conditioned Generative Adversarial Network (CGAN) with cross-attention that fuses IMU dynamic data with distance-based floor-plan maps to generate geometrically feasible trajectories. The modules are trained jointly, allowing uncertainty estimates to propagate through the CGAN during trajectory generation. UMLoc was evaluated on three datasets, including a newly collected 2-hour indoor benchmark with time-aligned IMU data, ground-truth poses and floor-plan maps. Results show that the method achieves a mean drift ratio of 5.9% over a 70 m travel distance and an average Absolute Trajectory Error (ATE) of 1.36 m, while maintaining calibrated prediction bounds.
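A quantile regressor like the LSTM described here is typically trained with the pinball loss, and the interval endpoints follow directly from the target coverage; a generic sketch, not UMLoc's implementation:

```python
# Generic quantile-regression building blocks (not UMLoc's code):
# the pinball loss drives a regressor's output toward the q-th
# conditional quantile, and a coverage level fixes which quantile
# pair bounds the central prediction interval.

def pinball_loss(y_true, y_pred, q):
    """Pinball loss for quantile q: asymmetric penalty on the error."""
    err = y_true - y_pred
    return q * err if err >= 0 else (q - 1.0) * err

def interval_quantiles(coverage):
    """Quantile pair bounding a central prediction interval,
    e.g. coverage 0.90 -> the (0.05, 0.95) quantiles."""
    tail = (1.0 - coverage) / 2.0
    return (tail, 1.0 - tail)
```

For the paper's 68%, 90%, and 95% intervals, this mapping gives the (0.16, 0.84), (0.05, 0.95), and (0.025, 0.975) quantile pairs the regressor must estimate.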
|
https://arxiv.org/abs/2601.06602
|
Academic Papers
|
svg
|
6ea549df531e9a763badf8709eb704be7a2d03168459d6b052b35501e906bca4
|
2026-01-13T00:00:00-05:00
|
N2N-GQA: Noise-to-Narrative for Graph-Based Table-Text Question Answering Using LLMs
|
arXiv:2601.06603v1 Announce Type: new Abstract: Multi-hop question answering over hybrid table-text data requires retrieving and reasoning across multiple evidence pieces from large corpora, but standard Retrieval-Augmented Generation (RAG) pipelines process documents as flat ranked lists, causing retrieval noise to obscure reasoning chains. We introduce N2N-GQA. To our knowledge, it is the first zero-shot framework for open-domain hybrid table-text QA that constructs dynamic evidence graphs from noisy retrieval outputs. Our key insight is that multi-hop reasoning requires understanding relationships between evidence pieces: by modeling documents as graph nodes with semantic relationships as edges, we identify bridge documents connecting reasoning steps, a capability absent in list-based retrieval. On OTT-QA, graph-based evidence curation provides a 19.9-point EM improvement over strong baselines, demonstrating that organizing retrieval results as structured graphs is critical for multi-hop reasoning. N2N-GQA achieves 48.80 EM, matching fine-tuned retrieval models (CORE: 49.0 EM) and approaching heavily optimized systems (COS: 56.9 EM) without any task-specific training. This establishes graph-structured evidence organization as essential for scalable, zero-shot multi-hop QA systems and demonstrates that simple, interpretable graph construction can rival sophisticated fine-tuned approaches.
|
https://arxiv.org/abs/2601.06603
|
Academic Papers
|
svg
|
d222491d63181988efdf54e1e37cab0d309933df836cce6a365c9a817d057b44
|
2026-01-13T00:00:00-05:00
|
Object-Centric World Models Meet Monte Carlo Tree Search
|
arXiv:2601.06604v1 Announce Type: new Abstract: In this paper, we introduce ObjectZero, a novel reinforcement learning (RL) algorithm that leverages the power of object-level representations to model dynamic environments more effectively. Unlike traditional approaches that process the world as a single undifferentiated input, our method employs Graph Neural Networks (GNNs) to capture intricate interactions among multiple objects. These objects, which can be manipulated and interact with each other, serve as the foundation for our model's understanding of the environment. We trained the algorithm in a complex setting teeming with diverse, interactive objects, demonstrating its ability to effectively learn and predict object dynamics. Our results highlight that a structured world model operating on object-centric representations can be successfully integrated into a model-based RL algorithm utilizing Monte Carlo Tree Search as a planning module.
|
https://arxiv.org/abs/2601.06604
|
Academic Papers
|
svg
|
79c61301a642bd40679492c3881a09b9ea829aacbe6db7167393357d1f1e4f70
|
2026-01-13T00:00:00-05:00
|
Sissi: Zero-shot Style-guided Image Synthesis via Semantic-style Integration
|
arXiv:2601.06605v1 Announce Type: new Abstract: Text-guided image generation has advanced rapidly with large-scale diffusion models, yet achieving precise stylization with visual exemplars remains difficult. Existing approaches often depend on task-specific retraining or expensive inversion procedures, which can compromise content integrity, reduce style fidelity, and lead to an unsatisfactory trade-off between semantic prompt adherence and style alignment. In this work, we introduce a training-free framework that reformulates style-guided synthesis as an in-context learning task. Guided by textual semantic prompts, our method concatenates a reference style image with a masked target image, leveraging a pretrained ReFlow-based inpainting model to seamlessly integrate semantic content with the desired style through multimodal attention fusion. We further analyze the imbalance and noise sensitivity inherent in multimodal attention fusion and propose a Dynamic Semantic-Style Integration (DSSI) mechanism that reweights attention between textual semantic and style visual tokens, effectively resolving guidance conflicts and enhancing output coherence. Experiments show that our approach achieves high-fidelity stylization with superior semantic-style balance and visual quality, offering a simple yet powerful alternative to complex, artifact-prone prior methods.
|
https://arxiv.org/abs/2601.06605
|
Academic Papers
|
svg
|
54cbcabd3ad3832a6b6ed6a96bdd1d7efa7318b500587a8a185e98481f18f194
|
2026-01-13T00:00:00-05:00
|
CEDAR: Context Engineering for Agentic Data Science
|
arXiv:2601.06606v1 Announce Type: new Abstract: We demonstrate CEDAR, an application for automating data science (DS) tasks with an agentic setup. Solving DS problems with LLMs is an underexplored area that has immense market value. The challenges are manifold: task complexities, data sizes, computational limitations, and context restrictions. We show that these can be alleviated via effective context engineering. We first impose structure into the initial prompt with DS-specific input fields, that serve as instructions for the agentic system. The solution is then materialized as an enumerated sequence of interleaved plan and code blocks generated by separate LLM agents, providing a readable structure to the context at any step of the workflow. Function calls for generating these intermediate texts, and for corresponding Python code, ensure that data stays local, and only aggregate statistics and associated instructions are injected into LLM prompts. Fault tolerance and context management are introduced via iterative code generation and smart history rendering. The viability of our agentic data scientist is demonstrated using canonical Kaggle challenges.
|
https://arxiv.org/abs/2601.06606
|
Academic Papers
|
svg
|
d91502d14757ad452170661ee96b4b861ed77ff1ba6b18fc3cf399f2e55cef61
|
2026-01-13T00:00:00-05:00
|
Pragya: An AI-Based Semantic Recommendation System for Sanskrit Subhasitas
|
arXiv:2601.06607v1 Announce Type: new Abstract: Sanskrit Subhasitas encapsulate centuries of cultural and philosophical wisdom, yet remain underutilized in the digital age due to linguistic and contextual barriers. In this work, we present Pragya, a retrieval-augmented generation (RAG) framework for semantic recommendation of Subhasitas. We curate a dataset of 200 verses annotated with thematic tags such as motivation, friendship, and compassion. Using sentence embeddings (IndicBERT), the system retrieves top-k verses relevant to user queries. The retrieved results are then passed to a generative model (Mistral LLM) to produce transliterations, translations, and contextual explanations. Experimental evaluation demonstrates that semantic retrieval significantly outperforms keyword matching in precision and relevance, while user studies highlight improved accessibility through generated summaries. To our knowledge, this is the first attempt at integrating retrieval and generation for Sanskrit Subhasitas, bridging cultural heritage with modern applied AI.
|
https://arxiv.org/abs/2601.06607
|
Academic Papers
|
svg
|
059c5230357d94e4ecccb61d60ad3e769eae9f38aa92c4f28da52a666df84a48
|
2026-01-13T00:00:00-05:00
|
Symplectic Hulls over a Non-Unital Ring
|
arXiv:2601.06609v1 Announce Type: new Abstract: This paper presents the study of the symplectic hulls over a non-unital ring $ E= \langle \kappa,\tau \mid 2 \kappa =2 \tau=0,~ \kappa^2=\kappa,~ \tau^2=\tau,~ \kappa \tau=\kappa,~ \tau \kappa=\tau \rangle$. We first identify the residue and torsion codes of the left, right, and two-sided symplectic hulls, and characterize the generator matrix of the two-sided symplectic hull of a free $E$-linear code. Then, we explore the symplectic hull of the sum of two free $E$-linear codes. Subsequently, we provide two build-up techniques that extend a free $E$-linear code of smaller length and symplectic hull-rank to one of larger length and symplectic hull-rank. Further, for free $E$-linear codes, we discuss the permutation equivalence and investigate the symplectic hull-variation problem. An application of this study is given by classifying the free $E$-linear optimal codes for smaller lengths.
|
https://arxiv.org/abs/2601.06609
|
Academic Papers
|
svg
|
314be1298e1b3762e856788b39c24ef6989e584d922c929564442848a4d7bf70
|
2026-01-13T00:00:00-05:00
|
AI Washing and the Erosion of Digital Legitimacy: A Socio-Technical Perspective on Responsible Artificial Intelligence in Business
|
arXiv:2601.06611v1 Announce Type: new Abstract: The rapid evolution of artificial intelligence (AI) systems, tools, and technologies has opened up novel, unprecedented opportunities for businesses to innovate, differentiate, and compete. However, growing concerns have emerged about the use of AI in businesses, particularly AI washing, in which firms exaggerate, misrepresent, or superficially signal their AI capabilities to gain financial and reputational advantages. This paper aims to establish a conceptual foundation for understanding AI washing. In this paper, we draw on analogies from greenwashing and insights from Information Systems (IS) research on ethics, trust, signaling, and digital innovation. This paper proposes a typology of AI washing practices across four primary domains: marketing and branding, technical capability inflation, strategic signaling, and governance-based washing. In addition, we examine their organizational, industry, and societal impacts. Our investigation and analysis reveal how AI washing can lead to short-term gains; however, it also proposes severe long-term consequences, including reputational damage, erosion of trust, and misallocation of resources. Moreover, this paper examines current research directions and open questions aimed at mitigating AI washing practices and enhancing the trust and reliability of legitimate AI systems and technologies.
|
https://arxiv.org/abs/2601.06611
|
Academic Papers
|
svg
|
5ab2192c1830376839686b75ca1deb597cf40d2e0bdb6ac21eae7d305a0eace2
|
2026-01-13T00:00:00-05:00
|
Cross-Border Data Security and Privacy Risks in Large Language Models and IoT Systems
|
arXiv:2601.06612v1 Announce Type: new Abstract: The reliance of Large Language Models and Internet of Things systems on massive, globally distributed data flows creates systemic security and privacy challenges. When data traverses borders, it becomes subject to conflicting legal regimes, such as the EU's General Data Protection Regulation and China's Personal Information Protection Law, compounded by technical vulnerabilities like model memorization. Current static encryption and data localization methods are fragmented and reactive, failing to provide adequate, policy-aligned safeguards. This research proposes a Jurisdiction-Aware, Privacy-by-Design architecture that dynamically integrates localized encryption, adaptive differential privacy, and real-time compliance assertion via cryptographic proofs. Empirical validation in a multi-jurisdictional simulation demonstrates this architecture reduced unauthorized data exposure to below five percent and achieved zero compliance violations. These security gains were realized while maintaining model utility retention above ninety percent and limiting computational overhead. This establishes that proactive, integrated controls are feasible for secure and globally compliant AI deployment.
|
https://arxiv.org/abs/2601.06612
|
Academic Papers
|
svg
|
90f9af2bffb0b70f50d969c7226e0547f485ec6c7e895f6dfe3b23ad5645bed8
|
2026-01-13T00:00:00-05:00
|
Industrial Semantics-Aware Digital Twins: A Hybrid Graph Matching Approach for Asset Administration Shells
|
arXiv:2601.06613v1 Announce Type: new Abstract: Although the Asset Administration Shell (AAS) standard provides a structured and machine-readable representation of industrial assets, their semantic comparability remains a major challenge, particularly when different vocabularies and modeling practices are used. Engineering would benefit from retrieving existing AAS models that are similar to the target in order to reuse submodels, parameters, and metadata. In practice, however, heterogeneous vocabularies and divergent modeling conventions hinder automated, content-level comparison across AAS. This paper proposes a hybrid graph matching approach to enable semantics-aware comparison of Digital Twin representations. The method combines rule-based pre-filtering using SPARQL with embedding-based similarity calculation leveraging RDF2vec to capture both structural and semantic relationships between AAS models. This contribution provides a foundation for enhanced discovery, reuse, and automated configuration in Digital Twin networks.
|
https://arxiv.org/abs/2601.06613
|
Academic Papers
|
svg
|
994f9bc0b761cd9e9c984d81a5a3a0f4153a1400baf0fbcb7770fe1dc5a1d9f8
|
2026-01-13T00:00:00-05:00
|
Fixturize: Bridging the Fixture Gap in Test Generation
|
arXiv:2601.06615v1 Announce Type: new Abstract: Current Large Language Models (LLMs) have advanced automated unit test generation but face a critical limitation: they often neglect to construct the necessary test fixtures, which are the environmental setups required for a test to run. To bridge this gap, this paper proposes Fixturize, a diagnostic framework that proactively identifies fixture-dependent functions and synthesizes test fixtures accordingly through an iterative, feedback-driven process, thereby improving the quality of auto-generated test suites of existing approaches. For rigorous evaluation, we introduce FixtureEval, a dedicated benchmark comprising 600 curated functions across two Programming Languages (PLs), i.e., Python and Java, with explicit fixture dependency labels, enabling both the corresponding classification and generation tasks. Empirical results demonstrate that Fixturize is highly effective, achieving 88.38%-97.00% accuracy across benchmarks in identifying the dependence of test fixtures and significantly enhancing the Suite Pass rate (SuitePS) by 18.03%-42.86% on average across both PLs with the auto-generated fixtures. Owing to the maintenance of test fixtures, Fixturize further improves line/branch coverage when integrated with existing LLM-based and search-based testing tools by 16.85%/24.08% and 31.54%/119.66% on average, respectively. The findings establish fixture awareness as an essential, missing component in modern auto-testing pipelines.
|
https://arxiv.org/abs/2601.06615
|
Academic Papers
|
svg
|
641fd8f1f8bea912657ef3600b83962edd0c8daa678bfef2ac041cdad98b198b
|
2026-01-13T00:00:00-05:00
|
LLM-Driven Accessible Interface: A Model-Based Approach
|
arXiv:2601.06616v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into interactive systems opens new opportunities for adaptive user experiences, yet it also raises challenges regarding accessibility, explainability, and normative compliance. This paper presents an implemented model-driven architecture for generating personalised, multimodal, and accessibility-aligned user interfaces. The approach combines structured user profiles, declarative adaptation rules, and validated prompt templates to refine baseline accessible UI templates that conform to WCAG 2.2 and EN 301 549, tailored to cognitive and sensory support needs. LLMs dynamically transform language complexity, modality, and visual structure, producing outputs such as Plain-Language text, pictograms, and high-contrast layouts aligned with ISO 24495-1 and W3C COGA guidance. A healthcare use case demonstrates how the system generates accessible post-consultation medication instructions tailored to a user profile comprising cognitive disability and hearing impairment. SysML v2 models provide explicit traceability between user needs, adaptation rules, and normative requirements, ensuring explainable and auditable transformations. Grounded in Human-Centered AI (HCAI), the framework incorporates co-design processes and structured feedback mechanisms to guide iterative refinement and support trustworthy generative behaviour.
|
https://arxiv.org/abs/2601.06616
|
Academic Papers
|
svg
|
9d16eb6b2cf79f1e2d57273c5de5847c64d3e32b3f8d70dbd993948d42156adc
|
2026-01-13T00:00:00-05:00
|
Robotic Tele-Operation for Upper Aerodigestive Tract Microsurgery: System Design and Validation
|
arXiv:2601.06617v1 Announce Type: new Abstract: Upper aerodigestive tract (UADT) treatments frequently employ transoral laser microsurgery (TLM) for procedures such as the removal of tumors or polyps. In TLM, a laser beam is used to cut target tissue, while forceps are employed to grasp, manipulate, and stabilize tissue within the UADT. Although TLM systems may rely on different technologies and interfaces, forceps manipulation is still predominantly performed manually, introducing limitations in ergonomics, precision, and controllability. This paper proposes a novel robotic system for tissue manipulation in UADT procedures, based on a novel end-effector designed for forceps control. The system is integrated within a teleoperation framework that employs a robotic manipulator with a programmed remote center of motion (RCM), enabling precise and constrained instrument motion while improving surgeon ergonomics. The proposed approach is validated through two experimental studies and a dedicated usability evaluation, demonstrating its effectiveness and suitability for UADT surgical applications.
|
https://arxiv.org/abs/2601.06617
|
Academic Papers
|
svg
|
0f2a3806fc0106c4bb4797c894a0e4afb4d5b5fd790f0ed4b55a0f26825f39cb
|
2026-01-13T00:00:00-05:00
|
Efficient and Reliable Estimation of Named Entity Linking Quality: A Case Study on GutBrainIE
|
arXiv:2601.06624v1 Announce Type: new Abstract: Named Entity Linking (NEL) is a core component of biomedical Information Extraction (IE) pipelines, yet assessing its quality at scale is challenging due to the high cost of expert annotations and the large size of corpora. In this paper, we present a sampling-based framework to estimate the NEL accuracy of large-scale IE corpora under statistical guarantees and constrained annotation budgets. We frame NEL accuracy estimation as a constrained optimization problem, where the objective is to minimize expected annotation cost subject to a target Margin of Error (MoE) for the corpus-level accuracy estimate. Building on recent works on knowledge graph accuracy estimation, we adapt Stratified Two-Stage Cluster Sampling (STWCS) to the NEL setting, defining label-based strata and global surface-form clusters in a way that is independent of NEL annotations. Applied to 11,184 NEL annotations in GutBrainIE -- a new biomedical corpus openly released in fall 2025 -- our framework reaches a MoE $\leq 0.05$ by manually annotating only 2,749 triples (24.6%), leading to an overall accuracy estimate of $0.915 \pm 0.0473$. A time-based cost model and simulations against a Simple Random Sampling (SRS) baseline show that our design reduces expert annotation time by about 29% at fixed sample size. The framework is generic and can be applied to other NEL benchmarks and IE pipelines that require scalable and statistically robust accuracy assessment.
|
https://arxiv.org/abs/2601.06624
|
Academic Papers
|
svg
|
583425fbe9ea88f28715108dc5b559a35e06a511f0b65a0c963ba33bca5b0077
|
2026-01-13T00:00:00-05:00
|
On traces of the derivatives of the $L^2$-projection error
|
arXiv:2601.06625v1 Announce Type: new Abstract: We provide derivative estimates for the $L^2$ projection of an $H^{k}$ function onto the space of polynomials of degree $\leq p$. The bounds are explicit in the order of differentiation and the polynomial degree $p$.
|
https://arxiv.org/abs/2601.06625
|
Academic Papers
|
svg
|
423ae61218bd4333ff0b3e931f3769780e27df07d7d98d68fa4a6696e6ea77d2
|
2026-01-13T00:00:00-05:00
|
Burn-After-Use for Preventing Data Leakage through a Secure Multi-Tenant Architecture in Enterprise LLM
|
arXiv:2601.06627v1 Announce Type: new Abstract: This study presents a Secure Multi-Tenant Architecture (SMTA) combined with a novel Burn-After-Use (BAU) mechanism for enterprise LLM environments to effectively prevent data leakage. As institutions increasingly adopt LLMs across departments, the risks of data leakage have become a critical security and compliance concern. The proposed SMTA isolates LLM instances across departments and enforces rigorous context ownership boundaries within an internally deployed infrastructure. The BAU mechanism introduces data confidentiality by enforcing ephemeral conversational contexts that are automatically destroyed after use, preventing cross-session or cross-user inference. SMTA and BAU are evaluated through two sets of realistic and reproducible experiments comprising 127 test iterations. One aspect of this experiment is to assess prompt-based and semantic leakage attacks in a multi-tenant architecture (Appendix A) across 55 infrastructure-level attack tests, including vector-database credential compromise and shared logging pipeline exposure. SMTA achieves a 92% defense success rate, demonstrating strong semantic isolation while highlighting residual risks from credential misconfiguration and observability pipelines. Another aspect is to evaluate the robustness of BAU under realistic failure scenarios (Appendix B) using four empirical metrics: Local Residual Persistence Rate (LRPR), Remote Residual Persistence Rate (RRPR), Image Frame Exposure Rate (IFER), and Burn Timer Persistence Rate (BTPR). Across 72 test iterations, BAU achieves a 76.75% success rate in mitigating post-session leakage threats across the client, server, application, infrastructure, and cache layers. These results show that SMTA and BAU together enforce strict isolation, complete session ephemerality, strong confidentiality guarantees, non-persistence, and policy-aligned behavior for enterprise LLMs.
|
https://arxiv.org/abs/2601.06627
|
Academic Papers
|
svg
|
5ec2dfb91ebb45a7a1faae9f01b47384ddb905c7164309185c23acf9bb8ac71a
|
2026-01-13T00:00:00-05:00
|
Lower Bounds for the Algorithmic Complexity of Learned Indexes
|
arXiv:2601.06629v1 Announce Type: new Abstract: Learned index structures aim to accelerate queries by training machine learning models to approximate the rank function associated with a database attribute. While effective in practice, their theoretical limitations are not fully understood. We present a general framework for proving lower bounds on query time for learned indexes, expressed in terms of their space overhead and parameterized by the model class used for approximation. Our formulation captures a broad family of learned indexes, including most existing designs, as piecewise model-based predictors. We solve the problem of lower bounding query time in two steps: first, we use probabilistic tools to control the effect of sampling when the database attribute is drawn from a probability distribution. Then, we analyze the approximation-theoretic problem of how to optimally represent a cumulative distribution function with approximators from a given model class. Within this framework, we derive lower bounds under a range of modeling and distributional assumptions, paying particular attention to the case of piecewise linear and piecewise constant model classes, which are common in practical implementations. Our analysis shows how tools from approximation theory, such as quantization and Kolmogorov widths, can be leveraged to formalize the space-time tradeoffs inherent to learned index structures. The resulting bounds illuminate core limitations of these methods.
|
https://arxiv.org/abs/2601.06629
|
Academic Papers
|
svg
|
f9a3a6c6180e064bd679ba6300d6f0425dbbdc1f1708e72ee7f1d8a9e7fd1c7a
|
2026-01-13T00:00:00-05:00
|
Labels have Human Values: Value Calibration of Subjective Tasks
|
arXiv:2601.06631v1 Announce Type: new Abstract: Building NLP systems for subjective tasks requires one to ensure their alignment to contrasting human values. We propose the MultiCalibrated Subjective Task Learner framework (MC-STL), which clusters annotations into identifiable human value clusters via three approaches (similarity of annotator rationales, expert value taxonomies, or raters' sociocultural descriptors) and calibrates predictions for each value cluster by learning cluster-specific embeddings. We demonstrate MC-STL on several subjective learning settings, including ordinal, binary, and preference learning predictions, and evaluate it on multiple datasets covering toxic chatbot conversations, offensive social media posts, and human preference alignment. The results show that MC-STL consistently outperforms the baselines that ignore the latent value structure of the annotations, delivering gains in discrimination, value-specific calibration, and disagreement-aware metrics.
|
https://arxiv.org/abs/2601.06631
|
Academic Papers
|
svg
|
7c17f62bd855dc3036688dff7d0699f1fb23cc61de06d22d197c439b2c3b40c6
|
2026-01-13T00:00:00-05:00
|
KASER: Knowledge-Aligned Student Error Simulator for Open-Ended Coding Tasks
|
arXiv:2601.06633v1 Announce Type: new Abstract: Open-ended tasks, such as coding problems that are common in computer science education, provide detailed insights into student knowledge. However, training large language models (LLMs) to simulate and predict possible student errors in their responses to these problems can be challenging: they often suffer from mode collapse and fail to fully capture the diversity in syntax, style, and solution approach in student responses. In this work, we present KASER (Knowledge-Aligned Student Error Simulator), a novel approach that aligns errors with student knowledge. We propose a training method based on reinforcement learning using a hybrid reward that reflects three aspects of student code prediction: i) code similarity to the ground-truth, ii) error matching, and iii) code prediction diversity. On two real-world datasets, we perform two levels of evaluation and show that: At the per-student-problem pair level, our method outperforms baselines on code and error prediction; at the per-problem level, our method outperforms baselines on error coverage and simulated code diversity.
|
https://arxiv.org/abs/2601.06633
|
Academic Papers
|
svg
|
c1bdd2f3a34e151595eb36666e9eb85a083e3971c17ed830026c3c4b5433c6e6
|
2026-01-13T00:00:00-05:00
|
A Framework for Kara-Kichwa Data Sovereignty in Latin America and the Caribbean
|
arXiv:2601.06634v1 Announce Type: new Abstract: In the high-altitude territories of the Andean-Amazonian-Atlantic pathway, data is not merely a digital resource but an extension of Khipu Panaka, the genealogical and relational memory of the Kara-Kichwa Republics. This perspective paper introduces the Kara-Kichwa Data Sovereignty Framework, a living instrument designed to counteract the "intellectual gentrification" and systemic invisibility of Andean Indigenous Peoples in global data ecosystems. Grounded in Indigenous legal systems thinking, the framework codifies five customary pillars, Kamachy (Self-determination), Ayllu-llaktapak kamachy (Collective Authority), Tantanakuy (Relational Accountability), Willay-panka-tantay (Ancestral Memory), and Sumak Kawsay (Biocultural Ethics), to govern the lifecycle of data from generation to expiration.
|
https://arxiv.org/abs/2601.06634
|
Academic Papers
|
svg
|
6f1fbf1bbbd33824f3f05665946dce35934ec64dd7db2963a7f6ea3091c341a1
|
2026-01-13T00:00:00-05:00
|
MedEinst: Benchmarking the Einstellung Effect in Medical LLMs through Counterfactual Differential Diagnosis
|
arXiv:2601.06636v1 Announce Type: new Abstract: Despite achieving high accuracy on medical benchmarks, LLMs exhibit the Einstellung Effect in clinical diagnosis--relying on statistical shortcuts rather than patient-specific evidence, causing misdiagnosis in atypical cases. Existing benchmarks fail to detect this critical failure mode. We introduce MedEinst, a counterfactual benchmark with 5,383 paired clinical cases across 49 diseases. Each pair contains a control case and a "trap" case with altered discriminative evidence that flips the diagnosis. We measure susceptibility via Bias Trap Rate--the probability of misdiagnosing traps despite correctly diagnosing controls. Extensive evaluation of 17 LLMs shows frontier models achieve high baseline accuracy but severe bias trap rates. Thus, we propose ECR-Agent, aligning LLM reasoning with the Evidence-Based Medicine standard via two components: (1) Dynamic Causal Inference (DCI) performs structured reasoning through dual-pathway perception, dynamic causal graph reasoning across three levels (association, intervention, counterfactual), and an evidence audit for final diagnosis; (2) Critic-Driven Graph and Memory Evolution (CGME) iteratively refines the system by storing validated reasoning paths in an exemplar base and consolidating disease-specific knowledge into evolving illness graphs. Source code is to be released.
|
https://arxiv.org/abs/2601.06636
|
Academic Papers
|
svg
|
8e00a95c989f9f99b3bf953b41ae1257ab789ba58abe7e65fd4998b8e684faa8
|
2026-01-13T00:00:00-05:00
|
Efficient Aspect Term Extraction using Spiking Neural Network
|
arXiv:2601.06637v1 Announce Type: new Abstract: Aspect Term Extraction (ATE) identifies aspect terms in review sentences, a key subtask of sentiment analysis. While most existing approaches use energy-intensive deep neural networks (DNNs) for ATE as sequence labeling, this paper proposes a more energy-efficient alternative using Spiking Neural Networks (SNNs). Using sparse activations and event-driven inferences, SNNs capture temporal dependencies between words, making them suitable for ATE. The proposed architecture, SpikeATE, employs ternary spiking neurons and direct spike training fine-tuned with pseudo-gradients. Evaluated on four benchmark SemEval datasets, SpikeATE achieves performance comparable to state-of-the-art DNNs with significantly lower energy consumption. This highlights the use of SNNs as a practical and sustainable choice for ATE tasks.
|
https://arxiv.org/abs/2601.06637
|
Academic Papers
|
svg
|
9e14bce610695836a657d5c15a1886dea5abf7dcf106e017991ce39d493c333d
|
2026-01-13T00:00:00-05:00
|
Attack-Resistant Watermarking for AIGC Image Forensics via Diffusion-based Semantic Deflection
|
arXiv:2601.06639v1 Announce Type: new Abstract: Protecting the copyright of user-generated AI images is an emerging challenge as AIGC becomes pervasive in creative workflows. Existing watermarking methods (1) remain vulnerable to real-world adversarial threats, often forced to trade off between defenses against spoofing and removal attacks; and (2) cannot support semantic-level tamper localization. We introduce PAI, a training-free inherent watermarking framework for AIGC copyright protection, plug-and-play with diffusion-based AIGC services. PAI simultaneously provides three key functionalities: robust ownership verification, attack detection, and semantic-level tampering localization. Unlike existing inherent watermark methods that only embed watermarks at noise initialization of diffusion models, we design a novel key-conditioned deflection mechanism that subtly steers the denoising trajectory according to the user key. Such trajectory-level coupling further strengthens the semantic entanglement of identity and content, thereby further enhancing robustness against real-world threats. Moreover, we also provide a theoretical analysis proving that only the valid key can pass verification. Experiments across 12 attack methods show that PAI achieves 98.43% verification accuracy, improving over SOTA methods by 37.25% on average, and retains strong tampering localization performance even against advanced AIGC edits. Our code is available at https://github.com/QingyuLiu/PAI.
|
https://arxiv.org/abs/2601.06639
|
Academic Papers
|
svg
|
cecf35c68c9732cf6b8d8c20d20c3dbd9029c65b0f88558f3b13a86f54170ef6
|
2026-01-13T00:00:00-05:00
|
Agentic AI Empowered Intent-Based Networking for 6G
|
arXiv:2601.06640v1 Announce Type: new Abstract: The transition towards sixth-generation (6G) wireless networks necessitates autonomous orchestration mechanisms capable of translating high-level operational intents into executable network configurations. Existing approaches to Intent-Based Networking (IBN) rely upon either rule-based systems that struggle with linguistic variation or end-to-end neural models that lack interpretability and fail to enforce operational constraints. This paper presents a hierarchical multi-agent framework where Large Language Model (LLM) based agents autonomously decompose natural language intents, consult domain-specific specialists, and synthesise technically feasible network slice configurations through iterative reasoning-action (ReAct) cycles. The proposed architecture employs an orchestrator agent coordinating two specialist agents, i.e., Radio Access Network (RAN) and Core Network agents, via ReAct-style reasoning, grounded in structured network state representations. Experimental evaluation across diverse benchmark scenarios shows that the proposed system outperforms rule-based systems and direct LLM prompting, with architectural principles applicable to Open RAN (O-RAN) deployments. The results also demonstrate that whilst contemporary LLMs possess general telecommunications knowledge, network automation requires careful prompt engineering to encode context-dependent decision thresholds, advancing autonomous orchestration capabilities for next-generation wireless systems.
|
https://arxiv.org/abs/2601.06640
|
Academic Papers
|
svg
|
548e3157b50876fe9f28667fb5dfd0ee4c501a74bf54daec971942e237eeefa5
|
2026-01-13T00:00:00-05:00
|
Leveraging Soft Prompts for Privacy Attacks in Federated Prompt Tuning
|
arXiv:2601.06641v1 Announce Type: new Abstract: Membership inference attack (MIA) poses a significant privacy threat in federated learning (FL) as it allows adversaries to determine whether a client's private dataset contains a specific data sample. While defenses against membership inference attacks in standard FL have been well studied, the recent shift toward federated fine-tuning has introduced new, largely unexplored attack surfaces. To highlight this vulnerability in the emerging FL paradigm, we demonstrate that federated prompt-tuning, which adapts pre-trained models with small input prefixes to improve efficiency, also exposes a new vector for privacy attacks. We propose PromptMIA, a membership inference attack tailored to federated prompt-tuning, in which a malicious server can insert adversarially crafted prompts and monitor their updates during collaborative training to accurately determine whether a target data point is in a client's private dataset. We formalize this threat as a security game and empirically show that PromptMIA consistently attains high advantage in this game across diverse benchmark datasets. Our theoretical analysis further establishes a lower bound on the attack's advantage which explains and supports the consistently high advantage observed in our empirical results. We also investigate the effectiveness of standard membership inference defenses originally developed for gradient or output based attacks and analyze their interaction with the distinct threat landscape posed by PromptMIA. The results highlight non-trivial challenges for current defenses and offer insights into their limitations, underscoring the need for defense strategies that are specifically tailored to prompt-tuning in federated settings.
|
https://arxiv.org/abs/2601.06641
|
Academic Papers
|
svg
|
58be013de97761503012be74a37cc22a823447bb3b2dc49cc8c5fc990cd686b9
|
2026-01-13T00:00:00-05:00
|
Boosting Overlapping Organoid Instance Segmentation Using Pseudo-Label Unmixing and Synthesis-Assisted Learning
|
arXiv:2601.06642v1 Announce Type: new Abstract: Organoids, sophisticated in vitro models of human tissues, are crucial for medical research due to their ability to simulate organ functions and assess drug responses accurately. Accurate organoid instance segmentation is critical for quantifying their dynamic behaviors, yet remains profoundly limited by the scarcity of high-quality annotated datasets and pervasive overlap in microscopy imaging. While semi-supervised learning (SSL) offers a solution to alleviate reliance on scarce labeled data, conventional SSL frameworks suffer from biases induced by noisy pseudo-labels, particularly in overlapping regions. Synthesis-assisted SSL (SA-SSL) has been proposed for mitigating training biases in semi-supervised semantic segmentation. We present the first adaptation of SA-SSL to organoid instance segmentation and reveal that SA-SSL struggles to disentangle intertwined organoids, often misrepresenting overlapping instances as a single entity. To overcome this, we propose Pseudo-Label Unmixing (PLU), which identifies erroneous pseudo-labels for overlapping instances and then regenerates organoid labels through instance decomposition. For image synthesis, we apply a contour-based approach to synthesize organoid instances efficiently, particularly for overlapping cases. Instance-level augmentations (IA) on pseudo-labels before image synthesis further enhance the effect of synthetic data (SD). Rigorous experiments on two organoid datasets demonstrate our method's effectiveness, achieving performance comparable to fully supervised models using only 10% labeled data, and state-of-the-art results. Ablation studies validate the contributions of PLU, contour-based synthesis, and augmentation-aware training. By addressing overlap at both pseudo-label and synthesis levels, our work advances scalable, label-efficient organoid analysis, unlocking new potential for high-throughput applications in precision medicine.
|
https://arxiv.org/abs/2601.06642
|
Academic Papers
|
svg
|
5fef45fab9916b01487ccf5d766749cfd18ed2c2feb075869c895ef69b85c88c
|
2026-01-13T00:00:00-05:00
|
Do Language Models Reason Across Languages?
|
arXiv:2601.06644v1 Announce Type: new Abstract: Real-world information sources are inherently multilingual, which naturally raises the question of whether language models can synthesize information across languages. In this paper, we introduce a simple two-hop question answering setting, where answering a question requires making inferences over two multilingual documents. We find that language models are more sensitive to language variation in answer-span documents than in those providing bridging information, despite the equal importance of both documents for answering a question. Under a step-by-step sub-question evaluation, we further show that in up to 33% of multilingual cases, models fail to infer the bridging information in the first step yet still answer the overall question correctly. This indicates that reasoning in language models, especially in multilingual settings, does not follow a faithful step-by-step decomposition. Subsequently, we show that the absence of reasoning decomposition leads to around 18% composition failure, where both sub-questions are answered correctly but the model fails on the final two-hop question. To mitigate this, we propose a simple three-stage SUBQ prompting method to guide the multi-step reasoning with sub-questions, which boosts accuracy from 10.1% to 66.5%.
|
https://arxiv.org/abs/2601.06644
|
Academic Papers
|
svg
|
f65f92d460c6f642308a7cff309ca7e0aba89eccac6e76830b063420064d485a
|
2026-01-13T00:00:00-05:00
|
eSkiTB: A Synthetic Event-based Dataset for Tracking Skiers
|
arXiv:2601.06647v1 Announce Type: new Abstract: Tracking skiers in RGB broadcast footage is challenging due to motion blur, static overlays, and clutter that obscure the fast-moving athlete. Event cameras, with their asynchronous contrast sensing, offer natural robustness to such artifacts, yet a controlled benchmark for winter-sport tracking has been missing. We introduce event SkiTB (eSkiTB), a synthetic event-based ski tracking dataset generated from SkiTB using direct video-to-event conversion without neural interpolation, enabling an iso-informational comparison between RGB and event modalities. Benchmarking SDTrack (spiking transformer) against STARK (RGB transformer), we find that event-based tracking is substantially resilient to broadcast clutter in scenes dominated by static overlays, achieving 0.685 IoU, outperforming RGB by +20.0 points. Across the dataset, SDTrack attains a mean IoU of 0.711, demonstrating that temporal contrast is a reliable cue for tracking ballistic motion in visually congested environments. eSkiTB establishes the first controlled setting for event-based tracking in winter sports and highlights the promise of event cameras for ski tracking. The dataset and code will be released at https://github.com/eventbasedvision/eSkiTB.
|
https://arxiv.org/abs/2601.06647
|
Academic Papers
|
svg
|
f2a5a69cef3d221f238a1406f912c45307e52c394825948aebbfaa24d568e515
|
2026-01-13T00:00:00-05:00
|
Revisiting Training Scale: An Empirical Study of Token Count, Power Consumption, and Parameter Efficiency
|
arXiv:2601.06649v1 Announce Type: new Abstract: Research in machine learning has questioned whether increases in training token counts reliably produce proportional performance gains in large language models. Building on prior work introducing an energy-aware parameter efficiency metric, this study empirically examines the effects of increasing training token counts under fixed hardware and training conditions. The significance of this work lies in the explicit integration of power consumption and execution duration, as reflected by the power sampling frequency, into token-scale analysis. This addresses a gap in prior studies emphasizing performance outcomes while underrepresenting computational and energy costs. Using a repeated-measures experimental design on a constant GPU instance with an identical model architecture, optimizer settings, and epoch counts, a 1.1-billion-parameter TinyLlama model was trained at three token counts (500K, 1M, and 2M). While conventional performance metrics exhibited inconsistent or diminishing returns across token scales, the inclusion of power consumption and execution duration revealed a strictly monotonic decline in training efficiency as token count increased. Repeated-measures ANOVA demonstrated a strong effect of token count on parameter efficiency, with all pairwise comparisons remaining significant following Bonferroni correction. These findings indicate that increases in training token counts may be energetically inefficient even when marginal performance improvements are observed, underscoring the importance of efficiency-aware evaluation in large language model training.
|
https://arxiv.org/abs/2601.06649
|
Academic Papers
|
svg
|
46c7ea60e05cb0d5ce8a10b36223e3e8a9beb88cb80db2da28254c5786d21415
|
2026-01-13T00:00:00-05:00
|
Learning Password Best Practices Through In-Task Instruction
|
arXiv:2601.06650v1 Announce Type: new Abstract: Users often make security- and privacy-relevant decisions without a clear understanding of the rules that govern safe behavior. We introduce pedagogical friction, a design approach that introduces brief, instructional interactions at the moment of action. We evaluate this approach in the context of password creation, a task with clear, objective quality criteria and broad familiarity. We conducted a randomized repeated-measures study with 128 participants across four interface conditions that varied the depth and interactivity of guidance. We assessed three outcomes: (1) rule compliance in a subsequent password task without guidance, (2) accuracy on survey questions matched to the rules shown earlier, and (3) behavior-knowledge alignment, which captures whether participants who correctly followed a rule also recognized it on the survey. Across all guided conditions, participants corrected most rule violations in the follow-up task, achieved moderate accuracy on matched rule questions, and showed high behavior-knowledge alignment. These results support pedagogical friction as a lightweight and generalizable intervention for security- and privacy-critical interfaces.
|
https://arxiv.org/abs/2601.06650
|
Academic Papers
|
svg
|
0ce896e540ac08a23891ccaf4499aefb93d499f81ea303b476186608bfab8866
|
2026-01-13T00:00:00-05:00
|
Follow the Signs: Using Textual Cues and LLMs to Guide Efficient Robot Navigation
|
arXiv:2601.06652v1 Announce Type: new Abstract: Autonomous navigation in unfamiliar environments often relies on geometric mapping and planning strategies that overlook rich semantic cues such as signs, room numbers, and textual labels. We propose a novel semantic navigation framework that leverages large language models (LLMs) to infer patterns from partial observations and predict regions where the goal is most likely located. Our method combines local perceptual inputs with frontier-based exploration and periodic LLM queries, which extract symbolic patterns (e.g., room numbering schemes and building layout structures) and update a confidence grid used to guide exploration. This enables robots to move efficiently toward goal locations labeled with textual identifiers (e.g., "room 8") even before direct observation. We demonstrate that this approach enables more efficient navigation in sparse, partially observable grid environments by exploiting symbolic patterns. Experiments across environments modeled after real floor plans show that our approach consistently achieves near-optimal paths and outperforms baselines by over 25% in Success weighted by Path Length.
|
https://arxiv.org/abs/2601.06652
|
Academic Papers
|
svg
|
d7ad67511a8ea84b1dbcbfdde4e00b6594db907cf590c26d1c74c1348561c517
|
2026-01-13T00:00:00-05:00
|
Physics-constrained Gaussian Processes for Predicting Shockwave Hugoniot Curves
|
arXiv:2601.06655v1 Announce Type: new Abstract: A physics-constrained Gaussian Process regression framework is developed for predicting shocked material states along the Hugoniot curve using data from a small number of shockwave simulations. The proposed Gaussian process employs a probabilistic Taylor series expansion in conjunction with the Rankine-Hugoniot jump conditions between the various shocked material states to construct a thermodynamically consistent covariance function. This leads to the formulation of an optimization problem over a small number of interpretable hyperparameters and enables the identification of regime transitions, from a leading elastic wave to trailing plastic and phase transformation waves. This work is motivated by the need to investigate shock-driven material response for materials discovery and for offering mechanistic insights in regimes where experimental characterizations and simulations are costly. The proposed methodology relies on large-scale molecular dynamics which are an accurate but expensive computational alternative to experiments. Under these constraints, the proposed methodology establishes Hugoniot curves from a limited number of molecular dynamics simulations. We consider silicon carbide as a representative material and atomic-level simulations are performed using a reverse ballistic approach together with appropriate interatomic potentials. The framework reproduces the Hugoniot curve with satisfactory accuracy while also quantifying the uncertainty in the predictions using the Gaussian Process posterior.
|
https://arxiv.org/abs/2601.06655
|
Academic Papers
|
svg
|
43a00c86c97b802d824522079a07f5a6a4f0e41a82ca4eb7bca3d1b44e65f7e9
|
2026-01-13T00:00:00-05:00
|
What makes for an enjoyable protagonist? An analysis of character warmth and competence
|
arXiv:2601.06658v1 Announce Type: new Abstract: Drawing on psychological and literary theory, we investigated whether the warmth and competence of movie protagonists predict IMDb ratings, and whether these effects vary across genres. Using 2,858 films and series from the Movie Scripts Corpus, we identified protagonists via AI-assisted annotation and quantified their warmth and competence with the LLM_annotate package ([1]; human-LLM agreement: r = .83). Preregistered Bayesian regression analyses revealed theory-consistent but small associations between both warmth and competence and audience ratings, while genre-specific interactions did not meaningfully improve predictions. Male protagonists were slightly less warm than female protagonists, and movies with male leads received higher ratings on average (an association that was multiple times stronger than the relationships between movie ratings and warmth/competence). These findings suggest that, although audiences tend to favor warm, competent characters, the effects on movie evaluations are modest, indicating that character personality is only one of many factors shaping movie ratings. AI-assisted annotation with LLM_annotate and gpt-4.1-mini proved effective for large-scale analyses but occasionally fell short of manually generated annotations.
|
https://arxiv.org/abs/2601.06658
|
Academic Papers
|
svg
|
3756df0db791ccd86d8108098660a2811d5f49ac5af82f6bdf6033ad0293f21c
|
2026-01-13T00:00:00-05:00
|
SafePro: Evaluating the Safety of Professional-Level AI Agents
|
arXiv:2601.06663v1 Announce Type: new Abstract: Large language model-based agents are rapidly evolving from simple conversational assistants into autonomous systems capable of performing complex, professional-level tasks in various domains. While these advancements promise significant productivity gains, they also introduce critical safety risks that remain under-explored. Existing safety evaluations primarily focus on simple, daily assistance tasks, failing to capture the intricate decision-making processes and potential consequences of misaligned behaviors in professional settings. To address this gap, we introduce SafePro, a comprehensive benchmark designed to evaluate the safety alignment of AI agents performing professional activities. SafePro features a dataset of high-complexity tasks across diverse professional domains with safety risks, developed through a rigorous iterative creation and review process. Our evaluation of state-of-the-art AI models reveals significant safety vulnerabilities and uncovers new unsafe behaviors in professional contexts. We further show that these models exhibit both insufficient safety judgment and weak safety alignment when executing complex professional tasks. In addition, we investigate safety mitigation strategies for improving agent safety in these scenarios and observe encouraging improvements. Together, our findings highlight the urgent need for robust safety mechanisms tailored to the next generation of professional AI agents.
|
https://arxiv.org/abs/2601.06663
|
Academic Papers
|
svg
|
417dd5e2b71703a227aef1d1b94a8a952c9cdeeb08f5d1d537e65d95e460eb0d
|
2026-01-13T00:00:00-05:00
|
Reinforcement Learning-Guided Dynamic Multi-Graph Fusion for Evacuation Traffic Prediction
|
arXiv:2601.06664v1 Announce Type: new Abstract: Real-time traffic prediction is critical for managing transportation systems during hurricane evacuations. Although data-driven graph-learning models have demonstrated strong capabilities in capturing the complex spatiotemporal dynamics of evacuation traffic at a network level, they mostly consider a single dimension (e.g., travel-time or distance) to construct the underlying graph. Furthermore, these models often lack interpretability, offering little insight into which input variables contribute most to their predictive performance. To overcome these limitations, we develop a novel Reinforcement Learning-guided Dynamic Multi-Graph Fusion (RL-DMF) framework for evacuation traffic prediction. We construct multiple dynamic graphs at each time step to represent heterogeneous spatiotemporal relationships between traffic detectors. A dynamic multi-graph fusion (DMF) module is employed to adaptively learn and combine information from these graphs. To enhance model interpretability, we introduce an RL-based intelligent feature selection and ranking (RL-IFSR) method that learns to mask irrelevant features during model training. The model is evaluated using a real-world dataset of 12 hurricanes affecting Florida from 2016 to 2024. For an unseen hurricane (Milton, 2024), the model achieves 95% accuracy (RMSE = 293.9) for predicting the next 1-hour traffic flow. Moreover, the model can forecast traffic flow for up to the next 6 hours with 90% accuracy (RMSE = 426.4). The RL-DMF framework outperforms several state-of-the-art traffic prediction models. Furthermore, ablation experiments confirm the effectiveness of the dynamic multi-graph fusion and RL-IFSR approaches for improving model performance. This research provides a generalized and interpretable model for real-time evacuation traffic forecasting, with significant implications for evacuation traffic management.
|
https://arxiv.org/abs/2601.06664
|
Academic Papers
|
svg
|
cabec06591337d3c6166c9ea286e52b64db516cb5ad4a7f000a1fb5d479a3144
|
2026-01-13T00:00:00-05:00
|
InFi-Check: Interpretable and Fine-Grained Fact-Checking of LLMs
|
arXiv:2601.06666v1 Announce Type: new Abstract: Large language models (LLMs) often hallucinate, yet most existing fact-checking methods treat factuality evaluation as a binary classification problem, offering limited interpretability and failing to capture fine-grained error types. In this paper, we introduce InFi-Check, a framework for interpretable and fine-grained fact-checking of LLM outputs. Specifically, we first propose a controlled data synthesis pipeline that generates high-quality data featuring explicit evidence, fine-grained error type labels, justifications, and corrections. Based on this, we further construct large-scale training data and a manually verified benchmark InFi-Check-FG for fine-grained fact-checking of LLM outputs. Building on these high-quality training data, we further propose InFi-Checker, which can jointly provide supporting evidence, classify fine-grained error types, and produce justifications along with corrections. Experiments show that InFi-Checker achieves state-of-the-art performance on InFi-Check-FG and strong generalization across various downstream tasks, significantly improving the utility and trustworthiness of factuality evaluation.
|
https://arxiv.org/abs/2601.06666
|
Academic Papers
|
svg
|
d93ade2ecdcb26401bb5b8bda0a0f4c43cea2f530ec2a6a25c4aec14a952b2ee
|
2026-01-13T00:00:00-05:00
|
zkRansomware: Proof-of-Data Recoverability and Multi-round Game Theoretic Modeling of Ransomware Decisions
|
arXiv:2601.06667v1 Announce Type: new Abstract: Ransomware is still one of the most serious cybersecurity threats. Victims often pay but fail to regain access to their data, while also facing the danger of losing data privacy. These uncertainties heavily shape the attacker-victim dynamics in decision-making. In this paper, we introduce and analyze zkRansomware. This new ransomware model integrates zero-knowledge proofs to enable verifiable data recovery and uses smart contracts to enforce multi-round payments while mitigating the risk of data disclosure and privacy loss. We show that zkRansomware is technically feasible using existing cryptographic and blockchain tools and, perhaps counterintuitively, can align incentives between the attacker and the victim. Finally, we develop a theoretical decision-making framework for zkRansomware that distinguishes it from known ransomware decision models and discusses its implications for ransomware risk analysis and response decision support.
|
https://arxiv.org/abs/2601.06667
|
Academic Papers
|
svg
|
cd32770ea81645fa0a35b7f4fa79292b62cece13a4956a70c75f096c961c24fb
|
2026-01-13T00:00:00-05:00
|
Optimizing Classroom Allocation with a Focus on Accessibility for Persons with Disabilities
|
arXiv:2601.06670v1 Announce Type: new Abstract: This paper addresses the challenge of classroom allocation in higher education institutions, with an explicit emphasis on accessibility for Persons with Disabilities (PwDs). Employing a case study of a university's computer science department, the paper proposes an Integer Linear Programming (ILP)-based optimization model, which is solved using the Gurobi solver. The objective is to minimize the number of classrooms used by prioritizing the assignment of PwD students to ground-floor classrooms to reduce accessibility barriers. The model is calibrated with a weighting parameter, alpha, that allows for a balance between spatial efficiency and promoting accessibility. Experimental results indicate that adjusting alpha can achieve a balance point that significantly improves current manual allocation practices, reducing the number of classrooms required and accessibility penalties. The findings suggest that optimization methods can improve operational efficiency in academic institutions while promoting a more inclusive environment for all students. Future work may expand the application of the model to other departments and contexts and integrate additional criteria to develop a more holistic approach.
|
https://arxiv.org/abs/2601.06670
|
Academic Papers
|
svg
|
4aead134c539c5500c93b7408eb20f7c23df133a43a622dd31bf3e6cdb589d97
|
2026-01-13T00:00:00-05:00
|
Will it Merge? On The Causes of Model Mergeability
|
arXiv:2601.06672v1 Announce Type: new Abstract: Model merging has emerged as a promising technique for combining multiple fine-tuned models into a single multitask model without retraining. However, the factors that determine whether merging will succeed or fail remain poorly understood. In this work, we investigate why some models merge better than others. To do so, we propose a concrete, measurable definition of mergeability. We investigate several potential causes for high or low mergeability, highlighting the base model knowledge as a dominant factor: Models fine-tuned on instances that the base model knows better are more mergeable than models fine-tuned on instances that the base model struggles with. Based on our mergeability definition, we explore a simple weighted merging technique that better preserves weak knowledge in the base model.
|
https://arxiv.org/abs/2601.06672
|
Academic Papers
|
svg
|
54c2dd4d8c9164653133b10a41960bde379d632cb989b6207d87fe0c02aef565
|
2026-01-13T00:00:00-05:00
|
Quantification and Classification of Carbon Nanotubes in Electron Micrographs using Vision Foundation Models
|
arXiv:2601.06673v1 Announce Type: new Abstract: Accurate characterization of carbon nanotube morphologies in electron microscopy images is vital for exposure assessment and toxicological studies, yet current workflows rely on slow, subjective manual segmentation. This work presents a unified framework leveraging vision foundation models to automate the quantification and classification of CNTs in electron microscopy images. First, we introduce an interactive quantification tool built on the Segment Anything Model (SAM) that segments particles with near-perfect accuracy using minimal user input. Second, we propose a novel classification pipeline that utilizes these segmentation masks to spatially constrain a DINOv2 vision transformer, extracting features exclusively from particle regions while suppressing background noise. Evaluated on a dataset of 1,800 TEM images, this architecture achieves 95.5% accuracy in distinguishing between four different CNT morphologies, significantly outperforming the current baseline despite using a fraction of the training data. Crucially, this instance-level processing allows the framework to resolve mixed samples, correctly classifying distinct particle types co-existing within a single field of view. These results demonstrate that integrating zero-shot segmentation with self-supervised feature learning enables high-throughput, reproducible nanomaterial analysis, transforming a labor-intensive bottleneck into a scalable, data-driven process.
|
https://arxiv.org/abs/2601.06673
|
Academic Papers
|
svg
|
5c526b21933274f98cbd0001951a71486b557ebdda8d0cccfe38b987b48a0ce5
|
2026-01-13T00:00:00-05:00
|
Evaluating Cross-Lingual Unlearning in Multilingual Language Models
|
arXiv:2601.06675v1 Announce Type: new Abstract: We present the first comprehensive evaluation of cross-lingual unlearning in multilingual LLMs. Using translated TOFU benchmarks in seven language/script variants, we test major unlearning algorithms and show that most fail to remove facts outside the training language, even when utility remains high. However, subspace-projection consistently outperforms the other methods, achieving strong cross-lingual forgetting with minimal degradation. Analysis of learned task subspaces reveals a shared interlingua structure: removing this shared subspace harms all languages, while removing language-specific components selectively affects one. These results demonstrate that multilingual forgetting depends on geometry in weight space, motivating subspace-based approaches for future unlearning systems.
|
https://arxiv.org/abs/2601.06675
|
Academic Papers
|
svg
|
d188c8b574e23d328b52b3d365c59ce24d207d43a1c4f2524c23310e7d5411f5
|
2026-01-13T00:00:00-05:00
|
IDRBench: Interactive Deep Research Benchmark
|
arXiv:2601.06676v1 Announce Type: new Abstract: Deep research agents powered by Large Language Models (LLMs) can perform multi-step reasoning, web exploration, and long-form report generation. However, most existing systems operate in an autonomous manner, assuming fully specified user intent and evaluating only final outputs. In practice, research goals are often underspecified and evolve during exploration, making sustained interaction essential for robust alignment. Despite its importance, interaction remains largely invisible to existing deep research benchmarks, which neither model dynamic user feedback nor quantify its costs. We introduce IDRBench, the first benchmark for systematically evaluating interactive deep research. IDRBench combines a modular multi-agent research framework with on-demand interaction, a scalable reference-grounded user simulator, and an interaction-aware evaluation suite that jointly measures interaction benefits (quality and alignment) and costs (turns and tokens). Experiments across seven state-of-the-art LLMs show that interaction consistently improves research quality and robustness, often outweighing differences in model capacity, while revealing substantial trade-offs in interaction efficiency.
|
https://arxiv.org/abs/2601.06676
|
Academic Papers
|
svg
|
77827f66b0889210a30a4fd7f2788498119c50805609220f18668584e1c85b04
|
2026-01-13T00:00:00-05:00
|
Plasticity vs. Rigidity: The Impact of Low-Rank Adapters on Reasoning on a Micro-Budget
|
arXiv:2601.06677v1 Announce Type: new Abstract: Recent advances in mathematical reasoning typically rely on massive scale, yet the question remains: can strong reasoning capabilities be induced in small language models (≤1.5B) under extreme constraints? We investigate this by training models on a single A40 GPU (48GB) for under 24 hours using Reinforcement Learning with Verifiable Rewards (RLVR) and Low-Rank Adaptation (LoRA). We find that the success of this "micro-budget" regime depends critically on the interplay between adapter capacity and model initialization. While low-rank adapters (r=8) consistently fail to capture the complex optimization dynamics of reasoning, high-rank adapters (r=256) unlock significant plasticity in standard instruction-tuned models. Our best result achieved an impressive 40.0% Pass@1 on AIME 24 (an 11.1% absolute improvement over baseline) and pushed Pass@16 to 70.0%, demonstrating robust exploration capabilities. However, this plasticity is not universal: while instruction-tuned models utilized the budget to elongate their chain-of-thought and maximize reward, heavily math-aligned models suffered performance collapse, suggesting that noisy, low-budget RL updates can act as destructive interference for models already residing near a task-specific optimum.
|
https://arxiv.org/abs/2601.06677
|
Academic Papers
|
svg
|
4c9593b4c85b36af623f7037abd6dc427d6e40b8a4b695d84b2416adf04ad57c
|
2026-01-13T00:00:00-05:00
|
Reflective Reasoning for SQL Generation
|
arXiv:2601.06678v1 Announce Type: new Abstract: Robust text-to-SQL over complex, real-world databases remains brittle even with modern LLMs: iterative refinement often introduces syntactic and semantic drift, corrections tend to be non-transferable across queries, and naive use of large context windows scales poorly. We propose a controlled text-to-SQL framework built around reflective refinement. Instead of repeatedly rewriting the current SQL instance, the system decomposes generation into typed stages and applies feedback as persistent updates to the stage-level generation mechanism. A Reflection-Refinement Loop localizes violations to the responsible stage, maximizing preservation of previously validated constraints and supporting monotonic improvement over a query set. The method operates without gold SQL by combining interpreter-based checks with LLM-based semantic coverage verification as epistemic judges. Experiments on Spider and BIRD demonstrate consistent gains over strong prompting baselines, robust convergence within a small refinement budget, and improved execution accuracy across both frontier and open-weight model families.
|
https://arxiv.org/abs/2601.06678
|
Academic Papers
|
svg
|
28edbb52b917f5ec95948d842ab0257d7b2108375c1d3dd8e50a524c06981c14
|
2026-01-13T00:00:00-05:00
|
A Power Electronic Converter Control Framework Based on Graph Neural Networks - An Early Proof-of-Concept
|
arXiv:2601.06686v1 Announce Type: new Abstract: Power electronic converter control is typically tuned per topology, limiting transfer across heterogeneous designs. This letter proposes a topology-agnostic meta-control framework that encodes converter netlists as typed bipartite graphs and uses a task-conditioned graph neural network backbone with distributed control heads. The policy is trained end-to-end via differentiable predictive control to amortize constrained optimal control over a distribution of converter parameters and reference-tracking tasks. In simulation on randomly sampled buck converters, the learned controller achieves near-optimal tracking performance relative to an online optimal-control baseline, motivating future extension to broader topologies, objectives, and real-time deployment.
|
https://arxiv.org/abs/2601.06686
|
Academic Papers
|
svg
|
00ae7e98fabef9446d27ee3cfe4b6de377d287a8ed89ad8e5ae5b083d42dda3c
|
2026-01-13T00:00:00-05:00
|
The Case for Strategic Data Stewardship: Re-imagining Data Governance to Make Responsible Data Re-use Possible
|
arXiv:2601.06687v1 Announce Type: new Abstract: As societal challenges grow more complex, access to data for public interest use is paradoxically becoming more constrained. This emerging data winter is not simply a matter of scarcity, but of shrinking legitimate and trusted pathways for responsible data reuse. Concerns over misuse, regulatory uncertainty, and the competitive race to train AI systems have concentrated data access among a few actors while raising costs and inhibiting collaboration. Prevailing data governance models, focused on compliance, risk management, and internal control, are necessary but insufficient. They often result in data that is technically available yet practically inaccessible, legally shareable yet institutionally unusable, or socially illegitimate to deploy. This paper proposes strategic data stewardship as a complementary institutional function designed to systematically, sustainably, and responsibly activate data for public value. Unlike traditional stewardship, which tends to be inward-looking, strategic data stewardship focuses on enabling cross-sector reuse, reducing missed opportunities, and building durable, ecosystem-level collaboration. It outlines core principles, functions, and competencies, and introduces a practical Data Stewardship Canvas to support adoption across contexts such as data collaboratives, data spaces, and data commons. Strategic data stewardship, the paper argues, is essential in the age of AI: it translates governance principles into practice, builds trust across data ecosystems, and ensures that data are not only governed, but meaningfully mobilized to serve society.
|
https://arxiv.org/abs/2601.06687
|
Academic Papers
|
svg
|
8d34686ef96bceebd1b1802c667aa15b1c6758d83cdf068a0c36eb09010c8cfb
|
2026-01-13T00:00:00-05:00
|
The Sample Complexity of Lossless Data Compression
|
arXiv:2601.06688v1 Announce Type: new Abstract: A new framework is introduced for examining and evaluating the fundamental limits of lossless data compression, that emphasizes genuinely non-asymptotic results. The {\em sample complexity} of compressing a given source is defined as the smallest blocklength at which it is possible to compress that source at a specified rate and to within a specified excess-rate probability. This formulation parallels corresponding developments in statistics and computer science, and it facilitates the use of existing results on the sample complexity of various hypothesis testing problems. For arbitrary sources, the sample complexity of general variable-length compressors is shown to be tightly coupled with the sample complexity of prefix-free codes and fixed-length codes. For memoryless sources, it is shown that the sample complexity is characterized not by the source entropy, but by its R\'{e}nyi entropy of order~$1/2$. Nonasymptotic bounds on the sample complexity are obtained, with explicit constants. Generalizations to Markov sources are established, showing that the sample complexity is determined by the source's R\'{e}nyi entropy rate of order~$1/2$. Finally, bounds on the sample complexity of universal data compression are developed for arbitrary families of memoryless sources. There, the sample complexity is characterized by the minimum R\'{e}nyi divergence of order~$1/2$ between elements of the family and the uniform distribution. The connection of this problem with identity testing and with the associated separation rates is explored and discussed.
|
https://arxiv.org/abs/2601.06688
|
Academic Papers
|
svg
|
e627f4b483fb3476ada168c552e1e8b25e6b191f0151bc0f30538aac95379bee
|
2026-01-13T00:00:00-05:00
|
An Exploratory Pilot Survey on Technical Quality Control Practices in Agile R&D Projects
|
arXiv:2601.06689v1 Announce Type: new Abstract: Managing technical quality in agile Research and Development (R&D) software projects represents a persistent challenge, particularly in contexts characterized by high technical uncertainty and experimental pressure. This exploratory pilot survey explores how agile R&D software teams report the use of practices and metrics related to technical quality control within Scrum-based environments. The study employed a structured questionnaire administered to professionals from Science and Technology Institutions (STIs) located in Manaus, Brazil, aiming to capture reported practices, perceptions of quality, and recurrent challenges. Quantitative data were complemented by qualitative responses to support contextual interpretation. The results indicate that although practices such as automated testing, code review, and continuous integration are widely acknowledged, their reported application is often inconsistent across iterations. Gaps were also observed in the monitoring of technical quality metrics and in the reporting of mechanisms for assessing technical debt from a business perspective. Rather than aiming for generalization, this study offers an exploratory baseline that describes how technical quality is managed in agile R&D projects within a regional innovation ecosystem.
|
https://arxiv.org/abs/2601.06689
|
Academic Papers
|
svg
|
467f7f6c801ea833047d9d1f25e94a636ee2c8d07b7f3aca26eeececc11ba53b
|
2026-01-13T00:00:00-05:00
|
S-DAPT-2026: A Stage-Aware Synthetic Dataset for Advanced Persistent Threat Detection
|
arXiv:2601.06690v1 Announce Type: new Abstract: The detection of advanced persistent threats (APTs) remains a crucial challenge due to their stealthy, multistage nature and the limited availability of realistic, labeled datasets for systematic evaluation. Synthetic dataset generation has emerged as a practical approach for modeling APT campaigns; however, existing methods often rely on computationally expensive alert correlation mechanisms that limit scalability. Motivated by these limitations, this paper presents a near-realistic synthetic APT dataset and an efficient alert correlation framework. The proposed approach introduces a machine learning based correlation module that employs K Nearest Neighbors (KNN) clustering with a cosine similarity metric to group semantically related alerts within a temporal context. The dataset emulates multistage APT campaigns across campus and organizational network environments and captures a diverse set of fourteen distinct alert types, exceeding the coverage of commonly used synthetic APT datasets. In addition, explicit APT campaign states and alert-to-stage mappings are defined to enable flexible integration of new alert types and support stage-aware analysis. A comprehensive statistical characterization of the dataset is provided to facilitate reproducibility and support APT stage predictions.
|
https://arxiv.org/abs/2601.06690
|
Academic Papers
|
svg
|
a55fb17df46c2c4d422d5f62b38f68367f432fe31915a5eae4dc82aedccf2c5b
|
2026-01-13T00:00:00-05:00
|
The Axiom of Consent: Friction Dynamics in Multi-Agent Coordination
|
arXiv:2601.06692v1 Announce Type: new Abstract: Multi-agent systems face a fundamental coordination problem: agents must coordinate despite heterogeneous preferences, asymmetric stakes, and imperfect information. When coordination fails, friction emerges: measurable resistance manifesting as deadlock, thrashing, communication overhead, or outright conflict. This paper derives a formal framework for analyzing coordination friction from a single axiom: actions affecting agents require authorization from those agents in proportion to stakes. From this axiom of consent, we establish the kernel triple $({\alpha}, {\sigma}, {\epsilon})$ (alignment, stake, and entropy) characterizing any resource allocation configuration. The friction equation $F = {\sigma} (1 + {\epsilon})/(1 + {\alpha})$ predicts coordination difficulty as a function of preference alignment ${\alpha}$, stake magnitude ${\sigma}$, and communication entropy ${\epsilon}$. The Replicator-Optimization Mechanism (ROM) governs evolutionary selection over coordination strategies: configurations generating less friction persist longer, establishing consent-respecting arrangements as dynamical attractors rather than normative ideals. We develop formal definitions for resource consent, coordination legitimacy, and friction-aware allocation in multi-agent systems. The framework yields testable predictions: MARL systems with higher reward alignment exhibit faster convergence; distributed allocations accounting for stake asymmetry generate lower coordination failure; AI systems with interpretability deficits produce friction proportional to the human-AI alignment gap. Applications to cryptocurrency governance and political systems demonstrate that the same equations govern friction dynamics across domains, providing a complexity science perspective on coordination under preference heterogeneity.
|
https://arxiv.org/abs/2601.06692
|
Academic Papers
|
svg
|
fa166f9d243038753e1af2177c1db9f200970d8b4ca462efacb9b1e02b142d03
|
2026-01-13T00:00:00-05:00
|
Incentive Mechanism Design for Privacy-Preserving Decentralized Blockchain Relayers
|
arXiv:2601.06699v1 Announce Type: new Abstract: Public blockchains, though renowned for their transparency and immutability, suffer from significant privacy concerns. Network-level analysis and long-term observation of publicly available transactions can often be used to infer user identities. To mitigate this, several blockchain applications rely on relayers, which serve as intermediary nodes between users and smart contracts deployed on the blockchain. However, dependence on a single relayer not only creates a single point of failure but also introduces exploitable vulnerabilities that weaken the system's privacy guarantees. This paper proposes a decentralized relayer architecture that enhances privacy and reliability through game-theoretic incentive design. We model the interaction among relayers as a non-cooperative game and design an incentive mechanism in which probabilistic uploading emerges as a unique mixed Nash equilibrium. Using evolutionary game analysis, we demonstrate the equilibrium's stability against perturbations and coordinated deviations. Through numerical evaluations, we analyze how equilibrium strategies and system behavior evolve with key parameters such as the number of relayers, upload costs, rewards, and penalties. In particular, we show that even with high transaction costs, the system maintains reliability with an outage probability below 0.05. Furthermore, our results highlight a fundamental trade-off between privacy, reliability, robustness, and cost in decentralized relayer systems.
|
https://arxiv.org/abs/2601.06699
|
Academic Papers
|
svg
|
f7c545f363edbf483fcac7cc26f767babbcd6319cc32a69f2f833266f9f7fd94
|
2026-01-13T00:00:00-05:00
|
Characterising Toxicity in Generative Large Language Models
|
arXiv:2601.06700v1 Announce Type: new Abstract: In recent years, the advent of the attention mechanism has significantly advanced the field of natural language processing (NLP), revolutionizing text processing and text generation. This has come about through transformer-based decoder-only architectures, which have become ubiquitous in NLP due to their impressive text processing and generation capabilities. Despite these breakthroughs, language models (LMs) remain susceptible to generating undesired outputs: inappropriate, offensive, or otherwise harmful responses. We will collectively refer to these as ``toxic'' outputs. Although methods like reinforcement learning from human feedback (RLHF) have been developed to align model outputs with human values, these safeguards can often be circumvented through carefully crafted prompts. Therefore, this paper examines the extent to which LLMs generate toxic content when prompted, as well as the linguistic factors -- both lexical and syntactic -- that influence the production of such outputs in generative models.
|
https://arxiv.org/abs/2601.06700
|
Academic Papers
|
svg
|
4043feb012d80e250a1b4282696d2eed159e1ff8885bd9b2edfe4ab2bfd2596a
|
2026-01-13T00:00:00-05:00
|
Explainability of Complex AI Models with Correlation Impact Ratio
|
arXiv:2601.06701v1 Announce Type: new Abstract: Complex AI systems make better predictions but often lack transparency, limiting trustworthiness, interpretability, and safe deployment. Common post hoc AI explainers, such as LIME, SHAP, HSIC, and SAGE, are model agnostic but are too restricted in one significant regard: they tend to misrank correlated features and require costly perturbations, which do not scale to high dimensional data. We introduce ExCIR (Explainability through Correlation Impact Ratio), a theoretically grounded, simple, and reliable metric for explaining the contribution of input features to model outputs, which remains stable and consistent under noise and sampling variations. We demonstrate that ExCIR captures dependencies arising from correlated features through a lightweight single pass formulation. Experimental evaluations on diverse datasets, including EEG, synthetic vehicular data, Digits, and Cats-Dogs, validate the effectiveness and stability of ExCIR across domains, achieving more interpretable feature explanations than existing methods while remaining computationally efficient. To this end, we further extend ExCIR with an information theoretic foundation that unifies the correlation ratio with Canonical Correlation Analysis under mutual information bounds, enabling multi output and class conditioned explainability at scale.
|
https://arxiv.org/abs/2601.06701
|
Academic Papers
|
svg
|
78a781d3146e03fecc8cd7730182d6087e7620f4e47c182abcd26d8573675e11
|
2026-01-13T00:00:00-05:00
|
GRASP LoRA: GRPO Guided Adapter Sparsity Policy for Cross Lingual Transfer
|
arXiv:2601.06702v1 Announce Type: new Abstract: Parameter efficient fine tuning is a way to adapt LLMs to new languages when compute or data are limited, yet adapter pipelines usually choose a global prune ratio by grid search. This practice is computationally expensive and development set intensive, since it repeats training, freezes sparsity, and misses fractional optima. We introduce GRASP LoRA (GRPO Guided Adapter Sparsity Policy), which treats global sparsity as a learnable control variable. A GRPO controller interleaves with training, periodically probing candidate prune ratios on a small micro development set and updating a single global prune ratio online from its reward signal. It operates on merged source and target LoRA adapters on a frozen backbone and replaces grid search with one controller run that learns a prune ratio, followed by a single final merge and prune fine tuning run with pruning fixed to that ratio. On cross lingual transfer from English into Arabic and Chinese, including XL-Sum summarization and MLQA extractive question answering with Llama 3 8B, GRASP LoRA improves semantic faithfulness, content coverage, and answer quality over strong target only and merge and prune baselines. It reduces end to end runtime by multiple times relative to grid search, lowers reliance on large development sets, and makes adapter reuse practical for low resource deployment.
|
https://arxiv.org/abs/2601.06702
|
Academic Papers
|
svg
|
a25a7b984b8d98d5ece79eaa6a03f796ac17c372820ea3aa7c247384294f9605
|
2026-01-13T00:00:00-05:00
|
Mapping and Comparing Climate Equity Policy Practices Using RAG LLM-Based Semantic Analysis and Recommendation Systems
|
arXiv:2601.06703v1 Announce Type: new Abstract: This study investigates the use of large language models to enhance the policymaking process. We first analyze planning-related job postings to revisit the evolving roles of planners in the era of AI. We then examine climate equity plans across the U.S. and apply ChatGPT to conduct semantic analysis, extracting policy, strategy, and action items related to transportation and energy. The methodological framework relied on a LangChain-native retrieval-augmented generation pipeline. Based on these extracted elements and their evaluated presence, we develop a content-based recommendation system to support cross-city policy comparison. The results indicate that, despite growing attention to AI, planning jobs largely retain their traditional domain emphases in transportation, environmental planning, housing, and land use. Communicative responsibilities remain central to planning practice. Climate equity plans commonly address transportation, environmental, and energy-related measures aimed at reducing greenhouse gas emissions and predominantly employ affirmative language. The demonstration of the recommendation system illustrates how planners can efficiently identify cities with similar policy practices, revealing patterns of geographic similarity in policy adoption. The study concludes by envisioning localized yet personalized AI-assisted systems that can be adapted within urban systems.
|
https://arxiv.org/abs/2601.06703
|
Academic Papers
|
svg
|
ef50194ad1c14b00f270a8917f07d0cd5a77a823eb36432f9882d2fa82caf05d
|
2026-01-13T00:00:00-05:00
|
Beyond Perfect Scores: Proof-by-Contradiction for Trustworthy Machine Learning
|
arXiv:2601.06704v1 Announce Type: new Abstract: Machine learning (ML) models show strong promise for new biomedical prediction tasks, but concerns about trustworthiness have hindered their clinical adoption. In particular, it is often unclear whether a model relies on true clinical cues or on spurious hierarchical correlations in the data. This paper introduces a simple yet broadly applicable trustworthiness test grounded in stochastic proof-by-contradiction. Instead of just showing high test performance, our approach trains and tests on spurious labels carefully permuted based on a potential outcomes framework. A truly trustworthy model should fail under such label permutation; comparable accuracy across real and permuted labels indicates overfitting, shortcut learning, or data leakage. Our approach quantifies this behavior through interpretable Fisher-style p-values, which are well understood by domain experts across medical and life sciences. We evaluate our approach on multiple new bacterial diagnostics to separate tasks and models learning genuine causal relationships from those driven by dataset artifacts or statistical coincidences. Our work establishes a foundation to build rigor and trust between ML and life-science research communities, moving ML models one step closer to clinical adoption.
|
https://arxiv.org/abs/2601.06704
|
Academic Papers
|
svg
|
116304c1480064f9b2ea7e967ad75d02ea2a70301419a62603c7aaac730415be
|
2026-01-13T00:00:00-05:00
|
Algorithm Support for Graph Databases, Done Right
|
arXiv:2601.06705v1 Announce Type: new Abstract: Graph database query languages cannot express algorithms like PageRank, forcing costly data wrangling, while existing solutions such as algorithm libraries, vertex-centric APIs, and recursive CTEs lack the necessary combination of expressiveness, performance, and usability. We present GraphAlg: a domain-specific language for graph algorithms that compiles to relational algebra, enabling seamless integration with query processing pipelines. Built on linear algebra foundations, GraphAlg provides intuitive matrix operations that are amenable to aggressive optimization including sparsity analysis, loop-invariant code motion, and in-place aggregation. Our implementation in AvantGraph demonstrates significant code complexity reduction compared to SQL/Python and Pregel while achieving excellent performance on LDBC Graphalytics benchmarks. GraphAlg establishes that graph databases can serve as unified platforms for both queries and analytics.
|
https://arxiv.org/abs/2601.06705
|
Academic Papers
|
svg
|
263652e7f961b94224d48799159d483946376784a44920e25705dd60bcc45ba6
|
2026-01-13T00:00:00-05:00
|
Resource-Aware Task Allocator Design: Insights and Recommendations for Distributed Satellite Constellations
|
arXiv:2601.06706v1 Announce Type: new Abstract: We present the design of a Resource-Aware Task Allocator (RATA) and an empirical analysis of its handling of real-time tasks for processing on Distributed Satellite Systems (DSS). We consider task processing performance across low Earth orbit (LEO) to Low-Medium Earth Orbit (Low-MEO) constellation sizes, under varying traffic loads. Using a Single-Level Tree Network (SLTN)-based cooperative task allocation architecture, we evaluate key performance metrics - blocking probabilities, response times, energy consumption, and resource utilization - across several tens of thousands of tasks per experiment. Our resource-conscious RATA monitors key parameters such as arrival rate, resource availability (on-board compute, storage, bandwidth, battery), and the influence of satellite eclipses on processing and communications. This study is an important step towards analyzing performance from light to stress-inducing levels of compute-intensive workloads, testing the ultimate performance limits under the combined influence of the above-mentioned factors. Results show pronounced non-linear scaling: while capacity increases with constellation size, blocking and delay grow rapidly, whereas energy remains resilient under solar-aware scheduling. The analysis identifies a practical satellite-count limit for baseline SLTNs and demonstrates that CPU availability, rather than energy, is the primary cause of blocking. These findings provide quantitative guidance by identifying thresholds at which system performance shifts from graceful degradation to collapse.
|
https://arxiv.org/abs/2601.06706
|
Academic Papers
|
svg
|
c8da53fd25de2099d9d611881f8c8d3957b94fabf39f6491fdf8b09948b6b890
|
2026-01-13T00:00:00-05:00
|
Evaluating Accounting Reasoning Capabilities of Large Language Models
|
arXiv:2601.06707v1 Announce Type: new Abstract: Large language models are transforming learning, cognition, and research across many fields. Effectively integrating them into professional domains, such as accounting, is a key challenge for enterprise digital transformation. To address this, we define vertical domain accounting reasoning and propose evaluation criteria derived from an analysis of the training data characteristics of representative GLM models. These criteria support systematic study of accounting reasoning and provide benchmarks for performance improvement. Using this framework, we evaluate GLM-6B, GLM-130B, GLM-4, and OpenAI GPT-4 on accounting reasoning tasks. Results show that prompt design significantly affects performance, with GPT-4 demonstrating the strongest capability. Despite these gains, current models remain insufficient for real-world enterprise accounting, indicating the need for further optimization to unlock their full practical value.
|
https://arxiv.org/abs/2601.06707
|
Academic Papers
|
svg
|
2c53bc315e81094e969bf9a6f2e3603376a76cd9d094d6d1182266d7129e9ae5
|
2026-01-13T00:00:00-05:00
|
Behavioral Analytics for Continuous Insider Threat Detection in Zero-Trust Architectures
|
arXiv:2601.06708v1 Announce Type: new Abstract: Insider threats are a particularly tricky cybersecurity issue, especially in zero-trust architectures (ZTA) where implicit trust is removed. Although the rule of thumb is never trust, always verify, attackers can still use legitimate credentials and mimic standard user activity. In response, behavioral analytics with machine learning (ML) can help monitor user activity continuously and identify the presence of anomalies. This introductory framework makes use of the CERT Insider Threat Dataset for data cleaning, normalization, and class balance using the Synthetic Minority Oversampling Technique (SMOTE). It also employs Principal Component Analysis (PCA) for dimensionality reduction. Several benchmark models, including Support Vector Machine (SVM), Artificial Neural Network (ANN), and Bayesian Network (Bayes Net), were used to develop and evaluate the AdaBoost classifier. Compared to SVM (90.1%), ANN (94.7%), and Bayes Net (94.9%), AdaBoost achieved higher performance with 98.0% accuracy (ACC), 98.3% precision (PRE), 98.0% recall (REC), and F1-score (F1). The Receiver Operating Characteristic (ROC) study, which provided further confirmation of its strength, yielded an Area Under the Curve (AUC) of 0.98. These results prove the effectiveness and dependability of AdaBoost-based behavioral analytics as a solution to reinforcing continuous insider threat detection in zero-trust settings.
|
https://arxiv.org/abs/2601.06708
|
Academic Papers
|
svg
|
27620b5fc72aef32cf088fee9856e9045fe3f06f5aafa96874ddc3944a2d8e22
|
2026-01-13T00:00:00-05:00
|
Privacy-Preserving Data Processing in Cloud: From Homomorphic Encryption to Federated Analytics
|
arXiv:2601.06710v1 Announce Type: new Abstract: Privacy-preserving data processing refers to the methods and models that allow computing and analyzing sensitive data with a guarantee of confidentiality. As cloud computing and applications that rely on data continue to expand, there is an increasing need to protect personal, financial and healthcare information. Conventional centralized data processing methods expose sensitive data to risk of breaches, compelling the need to use decentralized and secure data methods. This paper gives a detailed review of privacy-preserving mechanisms in the cloud platform, such as statistical approaches like differential privacy and cryptographic solutions like homomorphic encryption. Federated analytics and federated learning, two distributed learning frameworks, are also discussed. Their principles, applications, benefits, and limitations are reviewed, along with their use in healthcare, finance, IoT, and industrial settings. Comparative analyses measure trade-offs in security, efficiency, scalability, and accuracy, and emerging hybrid frameworks that provide stronger privacy protection are investigated. Critical issues, including computational overhead, privacy-utility trade-offs, standardization, adversarial threats, and cloud integration, are also addressed. This review examines in detail the recent privacy-protecting approaches in cloud computation and offers scholars and practitioners crucial information on secure and effective solutions to data processing.
|
https://arxiv.org/abs/2601.06710
|
Academic Papers
|
svg
|
8ab61f3c4f25a0cafb113213e0a456f2881d6c3d3b2bb3e78a32d7c1892c622f
|
2026-01-13T00:00:00-05:00
|
FO-Complete Program Verification for Heap Logics
|
arXiv:2601.06719v1 Announce Type: new Abstract: We develop the first two heap logics that have implicit heaplets and that admit FO-complete program verification. The notion of FO-completeness is a theoretical guarantee that all theorems that are valid when recursive definitions are interpreted as fixpoint definitions (instead of least fixpoint) are guaranteed to be eventually proven by the system. The logics we develop are a frame logic ($\textit{FL}$) and a separation logic ($\textit{SL-FL}$) that has an alternate semantics inspired by frame logic. We show verification condition generation for FL that is amenable to FO-complete reasoning using quantifier instantiation and SMT solvers. We show $\textit{SL-FL}$ can be translated to FL in order to obtain FO-complete reasoning. We implement tools that realize our technique and show the expressiveness of our logics and the efficacy of the verification technique on a suite of benchmarks that manipulate data structures.
|
https://arxiv.org/abs/2601.06719
|
Academic Papers
|
svg