| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| ba2c61a649df442b511c5323307202bfc8919959d98fa177690e3ea4d95acc54 | 2026-01-21T00:00:00-05:00 | PDFInspect: A Unified Feature Extraction Framework for Malicious Document Detection | arXiv:2601.12866v1 Announce Type: new Abstract: The increasing prevalence of malicious Portable Document Format (PDF) files necessitates robust and comprehensive feature extraction techniques for effective detection and analysis. This work presents a unified framework that integrates graph-based, structural, and metadata-driven analysis to generate a rich feature representation for each PDF document. The system extracts text from PDF pages and constructs undirected graphs based on pairwise word relationships, enabling the computation of graph-theoretic features such as node count, edge density, and clustering coefficient. Simultaneously, the framework parses embedded metadata to quantify character distributions, entropy patterns, and inconsistencies across fields such as author, title, and producer. Temporal features are derived from creation and modification timestamps to capture behavioral signatures, while structural elements, including object streams, fonts, and embedded images, are quantified to reflect document complexity. Boolean flags for potentially malicious PDF constructs (e.g., JavaScript, launch actions) are also extracted. Together, these features form a high-dimensional vector representation (170 dimensions) that is well-suited for downstream tasks such as malware classification, anomaly detection, and forensic analysis. The proposed approach is scalable, extensible, and designed to support real-world PDF threat intelligence workflows. | https://arxiv.org/abs/2601.12866 | Academic Papers | svg |
| 3395f9d087105f416c6150a8a9a31463323cfc8973e8fe8af9481efab89be79b | 2026-01-21T00:00:00-05:00 | Race, Ethnicity and Their Implication on Bias in Large Language Models | arXiv:2601.12868v1 Announce Type: new Abstract: Large language models (LLMs) increasingly operate in high-stakes settings including healthcare and medicine, where demographic attributes such as race and ethnicity may be explicitly stated or implicitly inferred from text. However, existing studies primarily document outcome-level disparities, offering limited insight into internal mechanisms underlying these effects. We present a mechanistic study of how race and ethnicity are represented and operationalized within LLMs. Using two publicly available datasets spanning toxicity-related generation and clinical narrative understanding tasks, we analyze three open-source models with a reproducible interpretability pipeline combining probing, neuron-level attribution, and targeted intervention. We find that demographic information is distributed across internal units with substantial cross-model variation. Although some units encode sensitive or stereotype-related associations from pretraining, identical demographic cues can induce qualitatively different behaviors. Interventions suppressing such neurons reduce bias but leave substantial residual effects, suggesting behavioral rather than representational change and motivating more systematic mitigation. | https://arxiv.org/abs/2601.12868 | Academic Papers | svg |
| 0c568a2c6a80a6e29c0df649946ff6aa8a8eb5be9bd578473a73eebc9cc58664 | 2026-01-21T00:00:00-05:00 | Text2Structure3D: Graph-Based Generative Modeling of Equilibrium Structures with Diffusion Transformers | arXiv:2601.12870v1 Announce Type: new Abstract: This paper presents Text2Structure3D, a graph-based Machine Learning (ML) model that generates equilibrium structures from natural language prompts. Text2Structure3D is designed to support new intuitive ways of design exploration and iteration in the conceptual structural design process. The approach combines latent diffusion with a Variational Graph Auto-Encoder (VGAE) and graph transformers to generate structural graphs that are close to an equilibrium state. Text2Structure3D integrates a residual force optimization post-processing step that ensures generated structures fully satisfy static equilibrium. The model was trained and validated using a cross-typological dataset of funicular form-found and statically determinate bridge structures, paired with text descriptions that capture the formal and structural features of each bridge. Results demonstrate that Text2Structure3D generates equilibrium structures with strong adherence to text-based specifications and greatly improves generalization capabilities compared to parametric model-based approaches. Text2Structure3D represents an early step toward a general-purpose foundation model for structural design, enabling the integration of generative AI into conceptual design workflows. | https://arxiv.org/abs/2601.12870 | Academic Papers | svg |
| 2ea3f78ab90a022735c0178dd3e9d63611466278e915289d4fae57232b5baba3 | 2026-01-21T00:00:00-05:00 | Measuring Love Toward AI: Development and Validation of the Love Attitudes Scale toward Artificial Intelligence (LAS-AI) | arXiv:2601.12871v1 Announce Type: new Abstract: Artificial intelligences (AIs) are increasingly capable of emotionally engaging with humans to the point of forming intimate relationships. Yet, current studies on romantic love toward AI lack statistically validated measurement instruments, hindering empirical research. To address this gap, we reinterpreted Lee's love styles theory in the AI context and developed the Love Attitudes Scale toward AI (LAS-AI). The resulting 24-item, six-factor scale was validated across four phases using three independent samples (N = 899), demonstrating strong psychometric properties. The findings further revealed that people primarily seek practical, passionate, and companionship-based relationships with AI (i.e., Pragma, Eros, and Storge), showing little interest in a playful or noncommittal approach (i.e., Ludus). We also provided an initial exploration of the similarities and differences between romantic love with humans and AI. The LAS-AI offers a robust tool for future research on human-AI romantic relationships, with wide-ranging implications. | https://arxiv.org/abs/2601.12871 | Academic Papers | svg |
| 5876664386383692543a0c22222172be2117bd7e72d294ffbfac112a2565a972 | 2026-01-21T00:00:00-05:00 | Quantum Interactive Oracle Proofs | arXiv:2601.12874v1 Announce Type: new Abstract: We initiate the study of quantum Interactive Oracle Proofs (qIOPs), a generalization of both quantum Probabilistically Checkable Proofs and quantum Interactive Proofs, as well as a quantum analogue of classical Interactive Oracle Proofs. In the model of quantum Interactive Oracle Proofs, we allow multiple rounds of quantum interaction between the quantum prover and the quantum verifier, but the verifier has limited access to quantum resources. This includes both queries to the prover's messages and the complexity of the quantum circuits applied by the verifier. The question of whether QMA admits a quantum interactive oracle proof system is a relaxation of the quantum PCP Conjecture. We show the following two main constructions of qIOPs, both of which are unconditional: - We construct a qIOP for QMA in which the verifier shares polynomially many EPR pairs with the prover at the start of the protocol and reads only a constant number of qubits from the prover's messages. - We provide a stronger construction of qIOP for QMA in which the verifier not only reads a constant number of qubits but also operates on a constant number of qubits overall, including those in their private registers. However, in this stronger setting, the communication complexity becomes exponential. This leaves open the question of whether strong qIOPs for QMA, with polynomial communication complexity, exist. As a key component of our construction, we introduce a novel single prover many-qubits test, which may be of independent interest. | https://arxiv.org/abs/2601.12874 | Academic Papers | svg |
| 97444cb7aa6ef82d11265693a9f925b0148c6f6853a61d1ed00fd9eb97b87738 | 2026-01-21T00:00:00-05:00 | SWORD: A Secure LoW-Latency Offline-First Authentication and Data Sharing Scheme for Resource Constrained Distributed Networks | arXiv:2601.12875v1 Announce Type: new Abstract: While many resource-constrained networks, such as Internet of Things (IoT) and Internet of Vehicles (IoV), are inherently distributed, the majority still rely on central servers for fast authentication and data sharing. Blockchain-based solutions offer decentralized alternatives but often struggle to meet the stringent latency requirements of real-time applications. Even with the rollout of 5G, network latency between servers and peers remains a significant challenge. To address this, we introduce SWORD, a novel offline-first authentication and data-sharing scheme designed specifically for resource-constrained networks. SWORD utilizes a proximity-based clustering approach to enable offline authentication and data sharing, ensuring low-latency, secure operations even in intermittently connected scenarios. Our experimental results show that SWORD outperforms traditional blockchain-based solutions while offering similar resource efficiency and authentication latency to central-server-based solutions. Additionally, we provide a comprehensive security analysis, demonstrating that SWORD is resilient against spoofing, impersonation, replay, and man-in-the-middle attacks. | https://arxiv.org/abs/2601.12875 | Academic Papers | svg |
| 4296f1c62a0bc5838271f4ab2bee09c325c0c2670ce01ce8ccab178871b0b305 | 2026-01-21T00:00:00-05:00 | Exploring Talking Head Models With Adjacent Frame Prior for Speech-Preserving Facial Expression Manipulation | arXiv:2601.12876v1 Announce Type: new Abstract: Speech-Preserving Facial Expression Manipulation (SPFEM) is an innovative technique aimed at altering facial expressions in images and videos while retaining the original mouth movements. Despite advancements, SPFEM still struggles with accurate lip synchronization due to the complex interplay between facial expressions and mouth shapes. Capitalizing on the advanced capabilities of audio-driven talking head generation (AD-THG) models in synthesizing precise lip movements, our research introduces a novel integration of these models with SPFEM. We present a new framework, Talking Head Facial Expression Manipulation (THFEM), which utilizes AD-THG models to generate frames with accurately synchronized lip movements from audio inputs and SPFEM-altered images. However, increasing the number of frames generated by AD-THG models tends to compromise the realism and expression fidelity of the images. To counter this, we develop an adjacent frame learning strategy that finetunes AD-THG models to predict sequences of consecutive frames. This strategy enables the models to incorporate information from neighboring frames, significantly improving image quality during testing. Our extensive experimental evaluations demonstrate that this framework effectively preserves mouth shapes during expression manipulations, highlighting the substantial benefits of integrating AD-THG with SPFEM. | https://arxiv.org/abs/2601.12876 | Academic Papers | svg |
| f98c5739f63c73adf5e6934f6b530c5177e8910de01917439ebddf2be323c392 | 2026-01-21T00:00:00-05:00 | A hierarchical splitting approach for N-split differential equations | arXiv:2601.12878v1 Announce Type: new Abstract: We propose a hierarchical splitting approach to differential equations that provides a design principle for constructing splitting methods for $N$-split systems by iteratively applying splitting methods for two-split systems. We analyze the convergence order, derive explicit formulas for the leading-order error terms, and investigate self-adjointness. Moreover, we discuss compositions of hierarchical splitting methods in detail. We further augment the hierarchical splitting approach with multiple time-stepping techniques, turning the class into a promising framework at the intersection of geometric numerical integration and multirate integration. In this context, we characterize the computational order of a multirate integrator and establish conditions on the multirate factors that guarantee an increased convergence rate in practical computations up to a certain step size. Finally, we design several hierarchical splitting methods and perform numerical simulations for rigid body equations and a separable Hamiltonian system with multirate potential, confirming the theoretical findings and showcasing the computational efficiency of hierarchical splitting methods. | https://arxiv.org/abs/2601.12878 | Academic Papers | svg |
| e2a8045ffeed7cfd07435c035cdf330c96ef3bde0d5a4f925eb9785ce574316e | 2026-01-21T00:00:00-05:00 | Hierarchical Sparse Circuit Extraction from Billion-Parameter Language Models through Scalable Attribution Graph Decomposition | arXiv:2601.12879v1 Announce Type: new Abstract: Mechanistic interpretability seeks to reverse-engineer neural network computations into human-understandable algorithms, yet extracting sparse computational circuits from billion-parameter language models remains challenging due to exponential search complexity and pervasive polysemanticity. The proposed Hierarchical Attribution Graph Decomposition (HAGD) framework reduces circuit discovery complexity from O(2^n) exhaustive enumeration to O(n^2 log n) through multi-resolution abstraction hierarchies and differentiable circuit search. The methodology integrates cross-layer transcoders for monosemantic feature extraction, graph neural network meta-learning for topology prediction, and causal intervention protocols for validation. Empirical evaluation spans GPT-2 variants, Llama-7B through Llama-70B, and Pythia suite models across algorithmic tasks and natural language benchmarks. On modular arithmetic tasks, the framework achieves up to 91% behavioral preservation (±2.3% across runs) while maintaining interpretable subgraph sizes. Cross-architecture transfer experiments suggest that discovered circuits exhibit moderate structural similarity (averaging 67%) across model families, indicating potential shared computational patterns. These results provide preliminary foundations for interpretability at larger model scales while identifying significant limitations in current attribution methodologies that require future advances. | https://arxiv.org/abs/2601.12879 | Academic Papers | svg |
| bbccacb1a7eed90c3d391cbb302662bf5fc7a93768a74a8c14be2949841ba711 | 2026-01-21T00:00:00-05:00 | YOLO26: An Analysis of NMS-Free End to End Framework for Real-Time Object Detection | arXiv:2601.12882v1 Announce Type: new Abstract: The "You Only Look Once" (YOLO) framework has long served as the benchmark for real-time object detection, yet traditional iterations (YOLOv1 through YOLO11) remain constrained by the latency and hyperparameter sensitivity of Non-Maximum Suppression (NMS) post-processing. This paper presents a comprehensive analysis of YOLO26, an architecture that fundamentally redefines this paradigm by eliminating NMS in favor of a native end-to-end learning strategy. This study examines the critical innovations that enable this transition, specifically the introduction of the MuSGD optimizer for stabilizing lightweight backbones, STAL for small-target-aware assignment, and ProgLoss for dynamic supervision. Through a systematic review of official performance benchmarks, the results demonstrate that YOLO26 establishes a new Pareto front, outperforming a comprehensive suite of predecessors and state-of-the-art competitors (including RTMDet and DAMO-YOLO) in both inference speed and detection accuracy. The analysis confirms that by decoupling representation learning from heuristic post-processing, YOLO26 successfully resolves the historical trade-off between latency and precision, signaling the next evolutionary step in edge-based computer vision. | https://arxiv.org/abs/2601.12882 | Academic Papers | svg |
| 0d8b77cd8e3b89ed79fd1085c4d606aa5a020b49e315346f408a6e83438fe761 | 2026-01-21T00:00:00-05:00 | Does Motion Intensity Impair Cognition in HCI? The Critical Role of Physical Motion-Visual Target Directional Congruency | arXiv:2601.12884v1 Announce Type: new Abstract: Human-computer interaction (HCI) increasingly occurs in motion-rich environments. The ability to accurately and rapidly respond to directional visual cues is critical in these contexts. How whole-body motion and individual differences affect human perception and reaction to these directional cues is therefore a key yet underexplored question for HCI. This study used a 6-DOF motion platform to measure task performance on a visual direction judgment task. We analyzed performance by decomposing the complex motion into two distinct components: a task-irrelevant lateral interference component and a task-aligned directional congruency component. Results indicate that increased motion intensity lengthened reaction times. This effect was primarily driven by the lateral interference component, and this detrimental impact was disproportionately amplified for individuals with high motion sickness susceptibility. Conversely, directional congruency, where motion direction matched the visual cue, improved performance for all participants. These findings suggest that motion's impact on cognition is not monolithic, and that system design for mobile HCI can be informed by strategies that actively shape motion, such as minimizing lateral interference while maximizing directional congruency. | https://arxiv.org/abs/2601.12884 | Academic Papers | svg |
| 2159e04aeeced72c0065c33688a1058bc27102c087e306927f36fb3be9d9e989 | 2026-01-21T00:00:00-05:00 | From Vertices to Convex Hulls: Certifying Set-Wise Compatibility for CBF Constraints | arXiv:2601.12885v1 Announce Type: new Abstract: This paper develops certificates that propagate compatibility of multiple control barrier function (CBF) constraints from sampled vertices to their convex hull. Under mild concavity and affinity assumptions, we present three sufficient feasibility conditions under which feasible inputs over the convex hull can be obtained per coordinate, with a common input, or via convex blending. We also describe the associated computational methods, based on interval intersections or an offline linear program (LP). Beyond certifying compatibility, we give conditions under which the quadratic-program (QP) safety filter is affine in the state. This enables explicit implementations via convex combinations of vertex-feasible inputs. Case studies illustrate the results. | https://arxiv.org/abs/2601.12885 | Academic Papers | svg |
| 81e60d3538ecac0d1d1df606ca98f0ffa1a90d5cd87c98e158ddfe3bf5d7de67 | 2026-01-21T00:00:00-05:00 | Communication Methods in Multi-Agent Reinforcement Learning | arXiv:2601.12886v1 Announce Type: new Abstract: Multi-agent reinforcement learning is a promising research area that extends established reinforcement learning approaches to problems formulated as multi-agent systems. Recently, a multitude of communication methods have been introduced to this field to address problems such as partially observable environments, non-stationarity, and exponentially growing action spaces. Communication further enables efficient cooperation among all agents interacting in an environment. This work aims at providing an overview of communication techniques in multi-agent reinforcement learning. By an in-depth analysis of 29 publications on this topic, the strengths and weaknesses of explicit, implicit, attention-based, graph-based, and hierarchical/role-based communication are evaluated. The results of this comparison show that there is no general, optimal communication framework for every problem. On the contrary, the choice of communication depends heavily on the problem at hand. The comparison also highlights the importance of communication methods with low computational overhead to enable scalability to environments where many agents interact. Finally, the paper discusses current research gaps, emphasizing the need for standardized benchmarking of system-level metrics and improved robustness under realistic communication conditions to enhance the real-world applicability of these approaches. | https://arxiv.org/abs/2601.12886 | Academic Papers | svg |
| 3757e20656684f81bc3e6eea90c362fdafc55f5bd61a3ad3550b804baacbdb0c | 2026-01-21T00:00:00-05:00 | Simultaneous Detection of LSD and FMD in Cattle Using Ensemble Deep Learning | arXiv:2601.12889v1 Announce Type: new Abstract: Lumpy Skin Disease (LSD) and Foot-and-Mouth Disease (FMD) are highly contagious viral diseases affecting cattle, causing significant economic losses and welfare challenges. Their visual diagnosis is complicated by significant symptom overlap with each other and with benign conditions like insect bites or chemical burns, hindering timely control measures. Leveraging a comprehensive dataset of 10,516 expert-annotated images from 18 farms across India, Brazil, and the USA, this study presents a novel Ensemble Deep Learning framework integrating VGG16, ResNet50, and InceptionV3 with optimized weighted averaging for simultaneous LSD and FMD detection. The model achieves a state-of-the-art accuracy of 98.2%, with macro-averaged precision of 98.2%, recall of 98.1%, F1-score of 98.1%, and an AUC-ROC of 99.5%. This approach uniquely addresses the critical challenge of symptom overlap in multi-disease detection, enabling early, precise, and automated diagnosis. This tool has the potential to enhance disease management, support global agricultural sustainability, and is designed for future deployment in resource-limited settings. | https://arxiv.org/abs/2601.12889 | Academic Papers | svg |
| 6d843dd0ebbfd77c3f613793ffee336c8c964b1ad8f16739796c6e49e1433e30 | 2026-01-21T00:00:00-05:00 | Efficient Code Analysis via Graph-Guided Large Language Models | arXiv:2601.12890v1 Announce Type: new Abstract: Malicious behavior is often hidden in small, easily overlooked code fragments, especially within large and complex codebases. The cross-file dependencies of these fragments make it difficult for even powerful large language models (LLMs) to detect them reliably. We propose a graph-centric attention acquisition pipeline that enhances LLMs' ability to localize malicious behavior. The approach parses a project into a code graph, uses an LLM to encode nodes with semantic and structural signals, and trains a Graph Neural Network (GNN) under sparse supervision. The GNN performs an initial detection, and through backtracking of its predictions, identifies key code sections that are most likely to contain malicious behavior. These influential regions are then used to guide the LLM's attention for in-depth analysis. This strategy significantly reduces interference from irrelevant context while maintaining low annotation costs. Extensive experiments show that the method consistently outperforms existing methods on multiple public and self-built datasets, highlighting its potential for practical deployment in software security scenarios. | https://arxiv.org/abs/2601.12890 | Academic Papers | svg |
| 9dda35e7851dec6e7620fb9166b29b344c877694b218f98e094df45971ba3e15 | 2026-01-21T00:00:00-05:00 | AdaNODEs: Test Time Adaptation for Time Series Forecasting Using Neural ODEs | arXiv:2601.12893v1 Announce Type: new Abstract: Test time adaptation (TTA) has emerged as a promising solution to adapt pre-trained models to new, unseen data distributions using unlabeled target domain data. However, most TTA methods are designed for independent data, often overlooking time series data and rarely addressing forecasting tasks. This paper presents AdaNODEs, an innovative source-free TTA method tailored explicitly for time series forecasting. By leveraging Neural Ordinary Differential Equations (NODEs), we propose a novel adaptation framework that accommodates the unique characteristics of distribution shifts in time series data. Moreover, we innovatively propose a new loss function to tackle TTA for forecasting tasks. AdaNODEs only requires updating limited model parameters, showing effectiveness in capturing temporal dependencies while avoiding significant memory usage. Extensive experiments with one- and high-dimensional data demonstrate that AdaNODEs offers relative improvements of 5.88% and 28.4% over the SOTA baselines, especially demonstrating robustness across higher severity distribution shifts. | https://arxiv.org/abs/2601.12893 | Academic Papers | svg |
| b48af16991542b76fe865d2ed6eacc9c62ab7379f0f434c8d843574bc7b79676 | 2026-01-21T00:00:00-05:00 | Sparse ActionGen: Accelerating Diffusion Policy with Real-time Pruning | arXiv:2601.12894v1 Announce Type: new Abstract: Diffusion Policy has dominated action generation due to its strong capabilities for modeling multi-modal action distributions, but its multi-step denoising processes make it impractical for real-time visuomotor control. Existing caching-based acceleration methods typically rely on *static* schedules that fail to adapt to the *dynamics* of robot-environment interactions, thereby leading to suboptimal performance. In this paper, we propose Sparse ActionGen (SAG) for extremely sparse action generation. To accommodate the iterative interactions, SAG customizes a rollout-adaptive prune-then-reuse mechanism that first identifies prunable computations globally and then reuses cached activations to substitute them during action diffusion. To capture the rollout dynamics, SAG parameterizes an observation-conditioned diffusion pruner for environment-aware adaptation and instantiates it with a highly parameter- and inference-efficient design for real-time prediction. Furthermore, SAG introduces a one-for-all reusing strategy that reuses activations across both timesteps and blocks in a zig-zag manner, minimizing the global redundancy. Extensive experiments on multiple robotic benchmarks demonstrate that SAG achieves up to 4× generation speedup without sacrificing performance. Project Page: https://sparse-actiongen.github.io/. | https://arxiv.org/abs/2601.12894 | Academic Papers | svg |
| 57806b9992041e9f0a2f502ba60b4ba9c0a0c30e61d36b394f36ac989ec31e65 | 2026-01-21T00:00:00-05:00 | TwoHead-SwinFPN: A Unified DL Architecture for Synthetic Manipulation, Detection and Localization in Identity Documents | arXiv:2601.12895v1 Announce Type: new Abstract: The proliferation of sophisticated generative AI models has significantly escalated the threat of synthetic manipulations in identity documents, particularly through face swapping and text inpainting attacks. This paper presents TwoHead-SwinFPN, a unified deep learning architecture that simultaneously performs binary classification and precise localization of manipulated regions in ID documents. Our approach integrates a Swin Transformer backbone with Feature Pyramid Network (FPN) and UNet-style decoder, enhanced with Convolutional Block Attention Module (CBAM) for improved feature representation. The model employs a dual-head architecture for joint optimization of detection and segmentation tasks, utilizing uncertainty-weighted multi-task learning. Extensive experiments on the FantasyIDiap dataset demonstrate superior performance with 84.31% accuracy, 90.78% AUC for classification, and 57.24% mean Dice score for localization. The proposed method achieves an F1-score of 88.61% for binary classification while maintaining computational efficiency suitable for real-world deployment through FastAPI implementation. Our comprehensive evaluation includes ablation studies, cross-device generalization analysis, and detailed performance assessment across 10 languages and 3 acquisition devices. | https://arxiv.org/abs/2601.12895 | Academic Papers | svg |
| a8ce0db291ec5393ddb24e49b7d51c08f22d6e56993f3e9a2517cff378846bbd | 2026-01-21T00:00:00-05:00 | Supervised Learning for the (s,S) Inventory Model with General Interarrival Demands and General Lead Times | arXiv:2601.12900v1 Announce Type: new Abstract: The continuous-review (s,S) inventory model is a cornerstone of stochastic inventory theory, yet its analysis becomes analytically intractable when dealing with non-Markovian systems. In such systems, evaluating long-run performance measures typically relies on costly simulation. This paper proposes a supervised learning framework via a neural network model for approximating stationary performance measures of (s,S) inventory systems with general distributions for the interarrival time between demands and lead times under lost sales. Simulations are first used to generate training labels, after which the neural network is trained. After training, the neural network provides almost instantaneous predictions of various metrics of the system, such as the stationary distribution of inventory levels, the expected cycle time, and the probability of lost sales. We find that using a small number of low-order moments of the distributions as input is sufficient to train the neural networks and to accurately capture the steady-state distribution. Extensive numerical experiments demonstrate high accuracy over a wide range of system parameters. As such, it effectively replaces repeated and costly simulation runs. Our framework is easily extendable to other inventory models, offering an efficient and fast alternative for analyzing complex stochastic systems. | https://arxiv.org/abs/2601.12900 | Academic Papers | svg |
| 027578e11d0c86233cc0e0dc52f27f78a6d22a029f3999e0ff7629f8936d4cae | 2026-01-21T00:00:00-05:00 | PlannerRFT: Reinforcing Diffusion Planners through Closed-Loop and Sample-Efficient Fine-Tuning | arXiv:2601.12901v1 Announce Type: new Abstract: Diffusion-based planners have emerged as a promising approach for human-like trajectory generation in autonomous driving. Recent works incorporate reinforcement fine-tuning to enhance the robustness of diffusion planners through reward-oriented optimization in a generation-evaluation loop. However, they struggle to generate multi-modal, scenario-adaptive trajectories, hindering the exploitation efficiency of informative rewards during fine-tuning. To resolve this, we propose PlannerRFT, a sample-efficient reinforcement fine-tuning framework for diffusion-based planners. PlannerRFT adopts a dual-branch optimization that simultaneously refines the trajectory distribution and adaptively guides the denoising process toward more promising exploration, without altering the original inference pipeline. To support parallel learning at scale, we develop nuMax, an optimized simulator that achieves 10 times faster rollout compared to native nuPlan. Extensive experiments show that PlannerRFT yields state-of-the-art performance with distinct behaviors emerging during the learning process. | https://arxiv.org/abs/2601.12901 | Academic Papers | svg |
| 7f48d4758df788a01bdc58a14ef34df2eaf66b3040479604f2e007e8aeaad064 | 2026-01-21T00:00:00-05:00 | Audit of the information system and governance model of the Bibliothèque Numérique de l'Espace universitaire Francophone (BNEUF) of the Initiative pour le Développement du Numérique dans l'Espace Universitaire Francophone (IDNEUF) project | arXiv:2601.12902v1 Announce Type: new Abstract: This document provides an assessment of the overall structure of the BNEUF system and how it operates within the framework of the Initiative for Digital Development in French speaking Universities (IDNEUF). This report aims to support the AUF's new strategy for 2021-2025, with its new structural and governance foundations for the implementation of the Francophonie scientifique project. It was therefore decided to reorganize existing and future digital resources and services with a view to incorporating them into the future global collaborative platform for integrated services. This report provides an external assessment with new forms of organization and use of the BNEUF system. The aim is to provide the AUF project team with new avenues for optimized management of the compiled digital resources and to synergize them with the related modules of the Atlas of Expertise and the Francophone Social Network. | https://arxiv.org/abs/2601.12902 | Academic Papers | svg |
74d89418ad84f6fb7499d03401fcbc471352c13731aac4cdc78e050eabd894e3
|
2026-01-21T00:00:00-05:00
|
Deep Temporal Graph Clustering: A Comprehensive Benchmark and Datasets
|
arXiv:2601.12903v1 Announce Type: new Abstract: Temporal Graph Clustering (TGC) is a new task that has received little attention, focusing on node clustering in temporal graphs. Compared with existing static graph clustering, it can strike a balance between time and space requirements (Time-Space Balance) through an interaction sequence-based batch-processing pattern. However, two major challenges hinder the development of TGC, i.e., inapplicable clustering techniques and inapplicable datasets. To address these challenges, we propose a comprehensive benchmark called BenchTGC. Specifically, we design the BenchTGC Framework to illustrate the paradigm of temporal graph clustering and improve existing clustering techniques to fit temporal graphs. In addition, we also discuss problems with public temporal graph datasets and develop multiple datasets suitable for the TGC task, called BenchTGC Datasets. Through extensive experiments, we not only verify the advantages of BenchTGC but also demonstrate the necessity and importance of the TGC task. We wish to point out that the dynamically changing and complex scenarios of the real world are the foundation of temporal graph clustering. The code and data are available at: https://github.com/MGitHubL/BenchTGC.
|
https://arxiv.org/abs/2601.12903
|
Academic Papers
|
svg
|
037c99fde7e3e5d616a18f187da3f866b3e6681b525b236205cadd802c7d96a9
|
2026-01-21T00:00:00-05:00
|
From Prefix Cache to Fusion RAG Cache: Accelerating LLM Inference in Retrieval-Augmented Generation
|
arXiv:2601.12904v1 Announce Type: new Abstract: Retrieval-Augmented Generation enhances Large Language Models by integrating external knowledge, which reduces hallucinations but increases prompt length. This increase leads to higher computational costs and longer Time to First Token (TTFT). To mitigate this issue, existing solutions aim to reuse the preprocessed KV cache of each retrieved chunk to accelerate RAG. However, the lack of cross-chunk contextual information leads to a significant drop in generation quality, leaving the potential benefits of KV cache reuse largely unfulfilled. The challenge lies in how to reuse the precomputed KV cache of chunks while preserving generation quality. We propose FusionRAG, a novel inference framework that optimizes both the preprocessing and reprocessing stages of RAG. In the offline preprocessing stage, we embed information from other related text chunks into each chunk, while in the online reprocessing stage, we recompute the KV cache for tokens that the model focuses on. As a result, we achieve a better trade-off between generation quality and efficiency. According to our experiments, FusionRAG significantly improves generation quality at the same recomputation ratio compared to previous state-of-the-art solutions. By recomputing fewer than 15% of the tokens, FusionRAG achieves up to 70% higher normalized F1 scores than baselines and reduces TTFT by 2.66x-9.39x compared to Full Attention.
|
https://arxiv.org/abs/2601.12904
|
Academic Papers
|
svg
|
5082fdd12c7f0d43b2c8ad2a1cac76f84bc5c0bfb8be862dacd6037be15ae7ed
|
2026-01-21T00:00:00-05:00
|
Gated Differentiable Working Memory for Long-Context Language Modeling
|
arXiv:2601.12906v1 Announce Type: new Abstract: Long contexts challenge transformers: attention scores dilute across thousands of tokens, critical information is often lost in the middle, and models struggle to adapt to novel patterns at inference time. Recent work on test-time adaptation addresses this by maintaining a form of working memory -- transient parameters updated on the current context -- but existing approaches rely on uniform write policies that waste computation on low-utility regions and suffer from high gradient variance across semantically heterogeneous contexts. In this work, we reframe test-time adaptation as a budget-constrained memory consolidation problem, focusing on which parts of the context should be consolidated into working memory under limited computation. We propose Gdwm (Gated Differentiable Working Memory), a framework that introduces a write controller to gate the consolidation process. The controller estimates Contextual Utility, an information-theoretic measure of long-range contextual dependence, and allocates gradient steps accordingly while maintaining global coverage. Experiments on ZeroSCROLLS and LongBench v2 demonstrate that Gdwm achieves comparable or superior performance with 4$\times$ fewer gradient steps than uniform baselines, establishing a new efficiency-performance Pareto frontier for test-time adaptation.
|
https://arxiv.org/abs/2601.12906
|
Academic Papers
|
svg
|
7652722e274b02dc255b60fa3186bb3ff849147a432fe0e8ef95f60756ca0d7c
|
2026-01-21T00:00:00-05:00
|
Machine Learning for highly oscillatory differential equations
|
arXiv:2601.12907v1 Announce Type: new Abstract: Highly oscillatory differential equations, commonly encountered in multi-scale problems, are often too complex to solve analytically. However, several numerical methods have been developed to approximate their solutions. Although these methods have shown their efficiency, the first part of the strategy often involves heavy pre-computations from averaging theory. In this paper, we leverage neural networks (machine learning) to approximate the vector fields required by the pre-computations in the first part, and combine this with micro-macro techniques to efficiently solve the oscillatory problem. We illustrate our work by numerical simulations.
|
https://arxiv.org/abs/2601.12907
|
Academic Papers
|
svg
|
02c5d1b2b1c3df30308350a619d42a4a4dd021b5d559878416a12ff54ecae25c
|
2026-01-21T00:00:00-05:00
|
SciCoQA: Quality Assurance for Scientific Paper--Code Alignment
|
arXiv:2601.12910v1 Announce Type: new Abstract: We present SciCoQA, a dataset for detecting discrepancies between scientific publications and their codebases to ensure faithful implementations. We construct SciCoQA from GitHub issues and reproducibility papers, and to scale our dataset, we propose a synthetic data generation method for constructing paper-code discrepancies. We analyze the paper-code discrepancies in detail and propose discrepancy types and categories to better understand the occurring mismatches. In total, our dataset consists of 611 paper-code discrepancies (81 real, 530 synthetic), spanning diverse computational science disciplines, including AI, Physics, Quantitative Biology, and others. Our evaluation of 21 LLMs highlights the difficulty of SciCoQA, particularly for instances involving omitted paper details, long-context inputs, and data outside the models' pre-training corpus. The best performing model in our evaluation, GPT-5, can only detect 45.7\% of real-world paper-code discrepancies.
|
https://arxiv.org/abs/2601.12910
|
Academic Papers
|
svg
|
492c8aff34f0b0b088b68c422f1c002199632c85056c4f113c3745193f9c1cd4
|
2026-01-21T00:00:00-05:00
|
Human Emotion Verification by Action Languages via Answer Set Programming
|
arXiv:2601.12912v1 Announce Type: new Abstract: In this paper, we introduce the action language C-MT (Mind Transition Language). It is built on top of answer set programming (ASP) and transition systems to represent how human mental states evolve in response to sequences of observable actions. Drawing on well-established psychological theories, such as the Appraisal Theory of Emotion, we formalize mental states, such as emotions, as multi-dimensional configurations. With the objective of addressing the need for controlled agent behaviors and restricting unwanted mental side effects of actions, we extend the language with a novel causal rule, forbids to cause, along with expressions specialized for mental state dynamics, which enables the modeling of principles for valid transitions between mental states. These principles of mental change are translated into transition constraints and properties of invariance, which are rigorously evaluated using transition systems in terms of so-called trajectories. This enables controlled reasoning about the dynamic evolution of human mental states. Furthermore, the framework supports the comparison of different dynamics of change by analyzing trajectories that adhere to different psychological principles. We apply the action language to design models for emotion verification. Under consideration in Theory and Practice of Logic Programming (TPLP).
|
https://arxiv.org/abs/2601.12912
|
Academic Papers
|
svg
|
2b619278932c73d26bc887b081e08139f0613deffd054d6adc130c8b7081287b
|
2026-01-21T00:00:00-05:00
|
Actionable Interpretability Must Be Defined in Terms of Symmetries
|
arXiv:2601.12913v1 Announce Type: new Abstract: This paper argues that interpretability research in Artificial Intelligence is fundamentally ill-posed as existing definitions of interpretability are not *actionable*: they fail to provide formal principles from which concrete modelling and inferential rules can be derived. We posit that for a definition of interpretability to be actionable, it must be given in terms of *symmetries*. We hypothesise that four symmetries suffice to (i) motivate core interpretability properties, (ii) characterize the class of interpretable models, and (iii) derive a unified formulation of interpretable inference (e.g., alignment, interventions, and counterfactuals) as a form of Bayesian inversion.
|
https://arxiv.org/abs/2601.12913
|
Academic Papers
|
svg
|
7339fe497fc62886969a914e6ba3b72925bd6e0d5524c3b1d7826574e4418c78
|
2026-01-21T00:00:00-05:00
|
Static Detection of Core Structures in Tigress Virtualization-Based Obfuscation Using an LLVM Pass
|
arXiv:2601.12916v1 Announce Type: new Abstract: Malware often uses obfuscation to hinder security analysis. Among these techniques, virtualization-based obfuscation is particularly strong because it protects programs by translating original instructions into attacker-defined virtual machine (VM) bytecode, producing long and complex code that is difficult to analyze and deobfuscate. This paper aims to identify the structural components of virtualization-based obfuscation through static analysis. By examining the execution model of obfuscated code, we define and detect the key elements required for deobfuscation, namely the dispatch routine, handler blocks, and the VM region, using LLVM IR. Experimental results show that, in the absence of compiler optimizations, the proposed LLVM Pass successfully detects all core structures across major virtualization options, including switch, direct, and indirect modes.
|
https://arxiv.org/abs/2601.12916
|
Academic Papers
|
svg
|
678cccc07e8058c070b8f26fe92f1455e18d1a0779e2693a046bd419e75d1fb8
|
2026-01-21T00:00:00-05:00
|
CooperLLM: Cloud-Edge-End Cooperative Federated Fine-tuning for LLMs via ZOO-based Gradient Correction
|
arXiv:2601.12917v1 Announce Type: new Abstract: Large Language Models (LLMs) perform well on many NLP tasks, but fine-tuning them on resource-constrained mobile devices is challenging due to high memory and computation costs, despite growing demands for privacy-preserving personalization. Federated Learning (FL) enables local-data training, yet existing methods either rely on memory-intensive backpropagation or use zeroth-order optimization (ZOO), which avoids backward passes but suffers from slow convergence and degraded accuracy. We propose CooperLLM, a cloud-assisted edge-end cooperative federated fine-tuning framework that combines ZOO on mobile devices with cloud-guided gradient rectification. Mobile clients perform lightweight ZOO updates on private data, while the cloud fine-tunes on auxiliary public data using backpropagation and injects guided perturbations to rectify local updates, improving convergence and accuracy without violating privacy. To address system bottlenecks, CooperLLM introduces pipeline scheduling and adaptive compression to overlap computation and communication and reduce memory usage. Experiments on multiple Transformer models and datasets show that CooperLLM reduces on-device memory by up to $86.4\%$, accelerates convergence by $8.8 \times$, and improves accuracy by up to 10 percentage points over state-of-the-art ZOO-based baselines.
|
https://arxiv.org/abs/2601.12917
|
Academic Papers
|
svg
|
afeaff5589961619debac0a3562200d6a9201a9068a3ebdc74d7529a9a6041d1
|
2026-01-21T00:00:00-05:00
|
Dynamic Hand Gesture Recognition for Robot Manipulator Tasks
|
arXiv:2601.12918v1 Announce Type: new Abstract: This paper proposes a novel approach to recognizing dynamic hand gestures, facilitating seamless interaction between humans and robots. Here, each robot manipulator task is assigned a specific gesture. There may be several such tasks and, hence, several gestures. These gestures may be prone to several dynamic variations. All such variations for the different gestures shown to the robot are accurately recognized in real time using the proposed unsupervised model based on a Gaussian Mixture Model. The accuracy achieved during training and real-time testing proves the efficacy of this methodology.
|
https://arxiv.org/abs/2601.12918
|
Academic Papers
|
svg
|
529119baf9fad26404b0a4a92c654ca774e2e54acbb8a40dafae8833b59bfff9
|
2026-01-21T00:00:00-05:00
|
Supervision-by-Hallucination-and-Transfer: A Weakly-Supervised Approach for Robust and Precise Facial Landmark Detection
|
arXiv:2601.12919v1 Announce Type: new Abstract: High-precision facial landmark detection (FLD) relies on high-resolution deep feature representations. However, low-resolution face images or the compression (via pooling or strided convolution) of originally high-resolution images hinder the learning of such features, thereby reducing FLD accuracy. Moreover, insufficient training data and imprecise annotations further degrade performance. To address these challenges, we propose a weakly-supervised framework called Supervision-by-Hallucination-and-Transfer (SHT) for more robust and precise FLD. SHT contains two novel mutually enhancing modules: the Dual Hallucination Learning Network (DHLN) and the Facial Pose Transfer Network (FPTN). By incorporating FLD and face hallucination tasks, DHLN is able to learn high-resolution representations from low-resolution inputs, recovering both facial structures and local details and generating more effective landmark heatmaps. Then, by transforming faces from one pose to another, FPTN can further improve the landmark heatmaps and faces hallucinated by DHLN to detect more accurate landmarks. To the best of our knowledge, this is the first study to explore weakly-supervised FLD by integrating face hallucination and facial pose transfer tasks. Experimental results on both face hallucination and FLD demonstrate that our method surpasses state-of-the-art techniques.
|
https://arxiv.org/abs/2601.12919
|
Academic Papers
|
svg
|
765e5c70a7d9a347692f1afaba63041cc449ae6d7dc569d41a3b088fbd7cec2d
|
2026-01-21T00:00:00-05:00
|
Injecting Knowledge from Social Science Journals to Improve Indonesian Cultural Understanding by LLMs
|
arXiv:2601.12921v1 Announce Type: new Abstract: Recently there have been intensifying efforts to improve the understanding of Indonesian cultures by large language models (LLMs). An attractive source of cultural knowledge that has been largely overlooked is local social science journals, which likely contain substantial cultural studies from a native perspective. We present a novel text dataset of journal article passages, created from 151 open-source Indonesian social science journals, called IndoSoSci. We demonstrate an effective recipe for injecting the Indonesian cultural knowledge contained therein into LLMs: extracting the facts related to Indonesian culture, and applying retrieval-augmented generation (RAG) with LLM-generated hypothetical documents as queries during retrieval. The proposed recipe yields strong performance gains over several strong baselines on the IndoCulture benchmark. Additionally, by combining IndoSoSci with Indonesian Wikipedia, we set a new state-of-the-art accuracy on the IndoCulture benchmark.
|
https://arxiv.org/abs/2601.12921
|
Academic Papers
|
svg
|
ab082bed8727e3ac10084633b07feb0bc65025b11dc700e6870d07d9b6af0763
|
2026-01-21T00:00:00-05:00
|
Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy
|
arXiv:2601.12922v1 Announce Type: new Abstract: Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms: while conforming to the iDP guarantees, an individual's privacy risk is not solely governed by their own privacy budget, but critically depends on the privacy choices of all other data contributors. This creates a mismatch between the promise of individual privacy control and the reality of a system where risk is collectively determined. We demonstrate empirically that certain distributions of privacy preferences can unintentionally inflate the privacy risk of individuals, even when their formal guarantees are met. Moreover, this excess risk provides an exploitable attack vector. A central adversary or a set of colluding adversaries can deliberately choose privacy budgets to amplify vulnerabilities of targeted individuals. Most importantly, this attack operates entirely within the guarantees of DP, hiding this excess vulnerability. Our empirical evaluation demonstrates successful attacks against 62% of targeted individuals, substantially increasing their membership inference susceptibility. To mitigate this, we propose $(\varepsilon_i,\delta_i,\overline{\Delta})$-iDP, a privacy contract that uses $\Delta$-divergences to provide users with a hard upper bound on their excess vulnerability, while offering flexibility to mechanism design. Our findings expose a fundamental challenge to the current paradigm, demanding a re-evaluation of how iDP systems are designed, audited, communicated, and deployed to make excess risks transparent and controllable.
|
https://arxiv.org/abs/2601.12922
|
Academic Papers
|
svg
|
260293437e4245de4ce469f255e3bbd46fb8a64cb734be50b09eca2f4af553b7
|
2026-01-21T00:00:00-05:00
|
ForeDiffusion: Foresight-Conditioned Diffusion Policy via Future View Construction for Robot Manipulation
|
arXiv:2601.12925v1 Announce Type: new Abstract: Diffusion strategies have advanced visuomotor control by progressively denoising high-dimensional action sequences, providing a promising method for robot manipulation. However, as task complexity increases, the success rate of existing baseline models decreases considerably. Analysis indicates that current diffusion strategies are confronted with two limitations. First, these strategies rely only on short-term observations as conditions. Second, the training objective remains limited to a single denoising loss, which leads to error accumulation and causes grasping deviations. To address these limitations, this paper proposes Foresight-Conditioned Diffusion (ForeDiffusion), which injects the predicted future view representation into the diffusion process. As a result, the policy is guided to be forward-looking, enabling it to correct trajectory deviations. Following this design, ForeDiffusion employs a dual-loss mechanism, combining the traditional denoising loss and a consistency loss on future observations, to achieve unified optimization. Extensive evaluation on the Adroit suite and the MetaWorld benchmark demonstrates that ForeDiffusion achieves an average success rate of 80% overall, significantly outperforming existing mainstream diffusion methods by 23% on complex tasks, while maintaining more stable performance across all tasks.
|
https://arxiv.org/abs/2601.12925
|
Academic Papers
|
svg
|
5aa6caa83b96e714c6e0b175225d7a47f74ed72015fbeaa9e32337e80b7cb08b
|
2026-01-21T00:00:00-05:00
|
Dual-Stream Collaborative Transformer for Image Captioning
|
arXiv:2601.12926v1 Announce Type: new Abstract: Current region feature-based image captioning methods have progressed rapidly and achieved remarkable performance. However, they are still prone to generating irrelevant descriptions due to the lack of contextual information and the over-reliance on generated partial descriptions for predicting the remaining words. In this paper, we propose a Dual-Stream Collaborative Transformer (DSCT) to address this issue by introducing the segmentation feature. The proposed DSCT consolidates and then fuses the region and segmentation features to guide the generation of caption sentences. It contains multiple Pattern-Specific Mutual Attention Encoders (PSMAEs) and Dynamic Nomination Decoders (DNDs). The PSMAE effectively highlights and consolidates the private information of two representations by querying each other. The DND dynamically searches for the most relevant learning blocks to the input textual representations and exploits the homogeneous features between the consolidated region and segmentation features to generate more accurate and descriptive caption sentences. To the best of our knowledge, this is the first study to explore how to fuse different pattern-specific features in a dynamic way to bypass their semantic inconsistencies and spatial misalignment issues for image captioning. The experimental results from popular benchmark datasets demonstrate that our DSCT outperforms the state-of-the-art image captioning models in the literature.
|
https://arxiv.org/abs/2601.12926
|
Academic Papers
|
svg
|
aa795a022f64fe5ee00e10a580cf0f6decb9d054812e09e96be269d0e32b64fc
|
2026-01-21T00:00:00-05:00
|
A Benchmark for Language Models in Real-World System Building
|
arXiv:2601.12927v1 Announce Type: new Abstract: During migration across instruction set architectures (ISAs), software package build repair is a critical task for ensuring the reliability of software deployment and the stability of modern operating systems. While Large Language Models (LLMs) have shown promise in tackling this challenge, prior work has primarily focused on a single ISA and homogeneous programming languages. To address this limitation, we introduce a new benchmark designed for software package build repair across diverse architectures and languages. Comprising 268 real-world software package build failures, the benchmark provides a standardized evaluation pipeline. We evaluate six state-of-the-art LLMs on the benchmark, and the results show that cross-ISA software package repair remains difficult and requires further advances. By systematically exposing this challenge, the benchmark establishes a foundation for advancing future methods aimed at improving software portability and bridging architectural gaps.
|
https://arxiv.org/abs/2601.12927
|
Academic Papers
|
svg
|
6bb7aa73547af94a2da199429cd8ef2ba4c7af87d16562e112e8ddec9df85a20
|
2026-01-21T00:00:00-05:00
|
An efficient heuristic for geometric analysis of cell deformations
|
arXiv:2601.12928v1 Announce Type: new Abstract: Sickle cell disease causes erythrocytes to become sickle-shaped, affecting their movement in the bloodstream and reducing oxygen delivery. It has a high global prevalence and places a significant burden on healthcare systems, especially in resource-limited regions. Automated classification of sickle cells in blood images is crucial, allowing the specialist to reduce the effort required and avoid errors when quantifying the deformed cells and assessing the severity of a crisis. Recent studies have proposed various erythrocyte representation and classification methods. Since classification depends solely on cell shape, a suitable approach models erythrocytes as closed planar curves in shape space. This approach employs elastic distances between shapes, which are invariant under rotations, translations, scaling, and reparameterizations, ensuring consistent distance measurements regardless of the curves' position, starting point, or traversal speed. While previous methods exploiting shape space distances had achieved high accuracy, we refined the model by considering the geometric characteristics of healthy and sickled erythrocytes. Our method proposes (1) to employ a fixed parameterization based on the major axis of each cell to compute distances and (2) to align each cell with two templates using this parameterization before computing distances. Aligning shapes to templates before distance computation, a concept successfully applied in areas such as molecular dynamics, and using a fixed parameterization, instead of minimizing distances across all possible parameterizations, simplifies calculations. This strategy achieves a 96.03\% accuracy rate in both supervised classification and unsupervised clustering. Our method ensures efficient erythrocyte classification, maintaining or improving accuracy over shape space models while significantly reducing computational costs.
|
https://arxiv.org/abs/2601.12928
|
Academic Papers
|
svg
|
48e33f0fcca7872d20409c3431426bc09e1df0925fbad9cd3b9f0b54b367fcfc
|
2026-01-21T00:00:00-05:00
|
Membership Inference Test: Auditing Training Data in Object Classification Models
|
arXiv:2601.12929v1 Announce Type: new Abstract: In this research, we analyze the performance of Membership Inference Tests (MINT), focusing on determining whether given data were utilized during the training phase, specifically in the domain of object recognition. Within the area of object recognition, we propose and develop architectures tailored for MINT models. These architectures aim to optimize performance and efficiency in data utilization, offering a tailored solution to tackle the complexities inherent in the object recognition domain. We conducted experiments involving an object detection model, an embedding extractor, and a MINT module. These experiments were performed on three public databases, totaling over 174K images. The proposed architecture leverages convolutional layers to capture and model the activation patterns present in the data during the training process. Through our analysis, we are able to identify whether given data were used for training or testing, achieving precision rates ranging between 70% and 80%, contingent upon the depth of the detection module layer chosen as input to the MINT module. Additionally, our studies entail an analysis of the factors influencing the MINT module, delving into the contributing elements behind more transparent training processes.
|
https://arxiv.org/abs/2601.12929
|
Academic Papers
|
svg
|
b71fcb65c73fd15c0dba8a06fb1703b51da1435de42958a04a84ac9cd3fe33bc
|
2026-01-21T00:00:00-05:00
|
Online Continual Learning for Time Series: a Natural Score-driven Approach
|
arXiv:2601.12931v1 Announce Type: new Abstract: Online continual learning (OCL) methods adapt to changing environments without forgetting past knowledge. Similarly, online time series forecasting (OTSF) is a real-world problem where data evolve in time and success depends on both rapid adaptation and long-term memory. Indeed, time-varying and regime-switching forecasting models have been extensively studied, offering a strong justification for the use of OCL in these settings. Building on recent work that applies OCL to OTSF, this paper aims to strengthen the theoretical and practical connections between time series methods and OCL. First, we reframe neural network optimization as a parameter filtering problem, showing that natural gradient descent is a score-driven method and proving its information-theoretic optimality. Then, we show that using a Student's t likelihood in addition to natural gradient induces a bounded update, which improves robustness to outliers. Finally, we introduce Natural Score-driven Replay (NatSR), which combines our robust optimizer with a replay buffer and a dynamic scale heuristic that improves fast adaptation at regime drifts. Empirical results demonstrate that NatSR achieves stronger forecasting performance than more complex state-of-the-art methods.
|
https://arxiv.org/abs/2601.12931
|
Academic Papers
|
svg
|
9f70038ba157fb1d8408b1b7723449a8ba3b952a20341c122ecabf12c6f3b0d0
|
2026-01-21T00:00:00-05:00
|
Perception of Deepfakes among Bangladeshi Women
|
arXiv:2601.12933v1 Announce Type: new Abstract: As deepfake technology becomes more accessible, concerns about its misuse and societal impact are escalating, particularly in regions like the Global South where digital literacy and regulatory measures are often limited. While previous research has explored deepfakes in contexts such as detection and media manipulation, there is a noticeable gap in understanding how individuals in these regions perceive and interact with deepfake media. This study addresses this gap by investigating how Bangladeshi women perceive deepfakes and the socio-cultural factors influencing their awareness, concerns, and responses to this technology. Drawing on 15 semi-structured interviews, we uncover how cultural values, gendered norms, trust in institutions, and the prevalence of digital harassment shape their perceptions and coping mechanisms. Through this research, we aim to advance existing scholarship in HCI by offering insights into the design of culturally sensitive interventions, educational initiatives, and policy frameworks to address the challenges posed by deepfakes in the Global South.
|
https://arxiv.org/abs/2601.12933
|
Academic Papers
|
svg
|
e56b3582caa5a0a7a6a4a48124581b8951535045fa1571f590bef5736f5a08e1
|
2026-01-21T00:00:00-05:00
|
Bangladesh AI Readiness: Perspectives from the Academia, Industry, and Government
|
arXiv:2601.12934v1 Announce Type: new Abstract: Artificial Intelligence (AI) readiness in the Global South extends beyond infrastructure to include curriculum design, workforce development, and cross-sector collaboration. Bangladesh, ranked 82nd in the 2023 Oxford Insights AI Readiness Index, exhibits significant deficits in technology capacity and research ecosystems, despite strong governmental visions. While HCI and ICTD research have explored digital inclusion and responsible AI, little empirical work examines how educational, industrial, and policy domains intersect to shape readiness. We present a multi-method qualitative study of AI readiness in Bangladesh, combining institutional analyses, 59 stakeholder interviews, and curriculum benchmarking against global exemplars. Findings reveal outdated curricula, limited faculty upskilling, inadequate computing resources, entrenched gender disparities, and the near-total absence of AI ethics instruction. We contribute empirical mapping of current practices, identification of structural and cultural barriers, and actionable pathways for embedding human-centered, inclusive, and responsible AI practices into national agendas, advancing equitable innovation in emerging AI ecosystems.
|
https://arxiv.org/abs/2601.12934
|
Academic Papers
|
svg
|
1db870a453a42fd83caf6eded358d1b65e035074d875cde73bcbda5401e0a79d
|
2026-01-21T00:00:00-05:00
|
QASA: Quality-Guided K-Adaptive Slot Attention for Unsupervised Object-Centric Learning
|
arXiv:2601.12936v1 Announce Type: new Abstract: Slot Attention, an approach that binds different objects in a scene to a set of "slots", has become a leading method in unsupervised object-centric learning. Most methods assume a fixed slot count K, and to better accommodate the dynamic nature of object cardinality, a few works have explored K-adaptive variants. However, existing K-adaptive methods still suffer from two limitations. First, they do not explicitly constrain slot-binding quality, so low-quality slots lead to ambiguous feature attribution. Second, adding a slot-count penalty to the reconstruction objective creates conflicting optimization goals between reducing the number of active slots and maintaining reconstruction fidelity. As a result, they still lag significantly behind strong K-fixed baselines. To address these challenges, we propose Quality-Guided K-Adaptive Slot Attention (QASA). First, we decouple slot selection from reconstruction, eliminating the mutual constraints between the two objectives. Then, we propose an unsupervised Slot-Quality metric to assess per-slot quality, providing a principled signal for fine-grained slot--object binding. Based on this metric, we design a Quality-Guided Slot Selection scheme that dynamically selects a subset of high-quality slots and feeds them into our newly designed gated decoder for reconstruction during training. At inference, token-wise competition on slot attention yields a K-adaptive outcome. Experiments show that QASA substantially outperforms existing K-adaptive methods on both real and synthetic datasets. Moreover, on real-world datasets QASA surpasses K-fixed methods.
|
https://arxiv.org/abs/2601.12936
|
Academic Papers
|
svg
|
7311be2ee64c917687e9a8d8763b5358dac3c6f30eda8138b38b87709d19503e
|
2026-01-21T00:00:00-05:00
|
On the Evidentiary Limits of Membership Inference for Copyright Auditing
|
arXiv:2601.12937v1 Announce Type: new Abstract: As large language models (LLMs) are trained on increasingly opaque corpora, membership inference attacks (MIAs) have been proposed to audit whether copyrighted texts were used during training, despite growing concerns about their reliability under realistic conditions. We ask whether MIAs can serve as admissible evidence in adversarial copyright disputes where an accused model developer may obfuscate training data while preserving semantic content, and formalize this setting through a judge-prosecutor-accused communication protocol. To test robustness under this protocol, we introduce SAGE (Structure-Aware SAE-Guided Extraction), a paraphrasing framework guided by Sparse Autoencoders (SAEs) that rewrites training data to alter lexical structure while preserving semantic content and downstream utility. Our experiments show that state-of-the-art MIAs degrade when models are fine-tuned on SAGE-generated paraphrases, indicating that their signals are not robust to semantics-preserving transformations. While some leakage remains in certain fine-tuning regimes, these results suggest that MIAs are brittle in adversarial settings and insufficient, on their own, as a standalone mechanism for copyright auditing of LLMs.
|
https://arxiv.org/abs/2601.12937
|
Academic Papers
|
svg
|
a3f61688b4d159f0aa559ece7235472979596a4fb90bcd88035eab9686bfdf75
|
2026-01-21T00:00:00-05:00
|
The Post-Turing Condition: Conceptualising Artificial Subjectivity and Synthetic Sociality
|
arXiv:2601.12938v1 Announce Type: new Abstract: In the Post-Turing era, artificial intelligence increasingly shapes social coordination and meaning formation rather than merely automating cognitive tasks. The central challenge is therefore not whether machines become conscious, but whether processes of interpretation and shared reference are progressively automated in ways that marginalize human participation. This paper introduces the PRMO framework, relating AI design trajectories to four constitutive dimensions of human subjectivity: Perception, Representation, Meaning, and the Real. Within this framework, Synthetic Sociality denotes a technological horizon in which artificial agents negotiate coherence and social order primarily among themselves, raising the structural risk of human exclusion from meaning formation. To address this risk, the paper proposes Quadrangulation as a design principle for socially embedded AI systems, requiring artificial agents to treat the human subject as a constitutive reference within shared contexts of meaning. This work is a conceptual perspective that contributes a structural vocabulary for analyzing AI systems at the intersection of computation and society, without proposing a specific technical implementation.
|
https://arxiv.org/abs/2601.12938
|
Academic Papers
|
svg
|
e5d430f8233aa86aa4b3a57b02797849f01967dc425023cba45997ff8f7e64f2
|
2026-01-21T00:00:00-05:00
|
Active Inference-Driven World Modeling for Adaptive UAV Swarm Trajectory Design
|
arXiv:2601.12939v1 Announce Type: new Abstract: This paper proposes an Active Inference-based framework for autonomous trajectory design in UAV swarms. The method integrates probabilistic reasoning and self-learning to enable distributed mission allocation, route ordering, and motion planning. Expert trajectories generated using a Genetic Algorithm with Repulsion Forces (GA-RF) are employed to train a hierarchical World Model capturing swarm behavior across mission, route, and motion levels. During online operation, UAVs infer actions by minimizing divergence between current beliefs and model-predicted states, enabling adaptive responses to dynamic environments. Simulation results show faster convergence, higher stability, and safer navigation than Q-Learning, demonstrating the scalability and cognitive grounding of the proposed framework for intelligent UAV swarm control.
|
https://arxiv.org/abs/2601.12939
|
Academic Papers
|
svg
|
3201535e241695f73b077c5c2cfab0c82ea07224b2a690e20ad59a354d61e070
|
2026-01-21T00:00:00-05:00
|
Dependently-Typed AARA: A Non-Affine Approach for Resource Analysis of Higher-Order Programs
|
arXiv:2601.12943v1 Announce Type: new Abstract: Static resource analysis determines the resource consumption (e.g., time complexity) of a program without executing it. Among the numerous existing approaches for resource analysis, affine type systems have been one dominant approach. However, these affine type systems fall short of deriving precise resource behavior of higher-order programs, particularly in cases that involve partial applications. This article presents $\lambda_{\ms{amor}}^{\ms{na}}$, a non-affine AARA-style dependent type system for resource reasoning about higher-order functional programs. The key observation is that the main issue in previous approaches comes from (i) the close coupling of types and resources, and (ii) the conflict between affine and higher-order typing mechanisms. To derive precise resource behavior of higher-order functions, $\lambda_{\ms{amor}}^{\ms{na}}$ decouples resources from types and follows a non-affine typing mechanism. The non-affine type system of $\lambda_{\ms{amor}}^{\ms{na}}$ achieves this by using dependent types, which allows expressing type-level potential functions separate from ordinary types. This article formalizes $\lambda_{\ms{amor}}^{\ms{na}}$'s syntax and semantics, and proves its soundness, which guarantees the correctness of resource bounds. Several challenging classic and higher-order examples are presented to demonstrate the expressiveness and compositionality of $\lambda_{\ms{amor}}^{\ms{na}}$'s reasoning capability.
|
https://arxiv.org/abs/2601.12943
|
Academic Papers
|
svg
|
c378fbb6de75dd9fe685c2a9ebda8381c44ebfa385a369f142dee7c5251c82fd
|
2026-01-21T00:00:00-05:00
|
On the Concavity of Tsallis Entropy along the Heat Flow
|
arXiv:2601.12944v1 Announce Type: new Abstract: We demonstrate the concavity of the Tsallis entropy along the heat flow for general dimensions, expanding upon the findings of Wu et al. (2025) and Hung (2022), which were previously limited to the one-dimensional case. The core of the proof is a novel estimate of the terms in the second-order time derivative, and a rigorous validation of integration by parts. The resulting bound establishes a new functional inequality, which may be of interest for other areas of mathematical analysis.
|
https://arxiv.org/abs/2601.12944
|
Academic Papers
|
svg
|
0c9624a0fe4e4cc4320aec9f4d0c145c471ad258f0e97d9a94adb76bcf1699a1
|
2026-01-21T00:00:00-05:00
|
A Component-Based Survey of Interactions between Large Language Models and Multi-Armed Bandits
|
arXiv:2601.12945v1 Announce Type: new Abstract: Large language models (LLMs) have become powerful and widely used systems for language understanding and generation, while multi-armed bandit (MAB) algorithms provide a principled framework for adaptive decision-making under uncertainty. This survey explores the potential at the intersection of these two fields. To the best of our knowledge, this is the first survey to systematically review the bidirectional interaction between large language models and multi-armed bandits at the component level. We highlight the bidirectional benefits: MAB algorithms address critical LLM challenges, spanning from pre-training to retrieval-augmented generation (RAG) and personalization. Conversely, LLMs enhance MAB systems by redefining core components such as arm definition and environment modeling, thereby improving decision-making in sequential tasks. We analyze existing LLM-enhanced bandit systems and bandit-enhanced LLM systems, providing insights into their design, methodologies, and performance. Key challenges and representative findings are identified to help guide future research. An accompanying GitHub repository that indexes relevant literature is available at https://github.com/bucky1119/Awesome-LLM-Bandit-Interaction.
|
https://arxiv.org/abs/2601.12945
|
Academic Papers
|
svg
|
bf9bdaa22c0cc2ec08f0048846d3bd8e3edda7c0c2e471b1bfc8ff8a5f8c1bb6
|
2026-01-21T00:00:00-05:00
|
AI-generated data contamination erodes pathological variability and diagnostic reliability
|
arXiv:2601.12946v1 Announce Type: new Abstract: Generative artificial intelligence (AI) is rapidly populating medical records with synthetic content, creating a feedback loop where future models are increasingly at risk of training on uncurated AI-generated data. However, the clinical consequences of this AI-generated data contamination remain unexplored. Here, we show that in the absence of mandatory human verification, this self-referential cycle drives a rapid erosion of pathological variability and diagnostic reliability. By analysing more than 800,000 synthetic data points across clinical text generation, vision-language reporting, and medical image synthesis, we find that models progressively converge toward generic phenotypes regardless of the model architecture. Specifically, rare but critical findings, including pneumothorax and effusions, vanish from the synthetic content generated by AI models, while demographic representations skew heavily toward middle-aged male phenotypes. Crucially, this degradation is masked by false diagnostic confidence; models continue to issue reassuring reports while failing to detect life-threatening pathology, with false reassurance rates tripling to 40%. Blinded physician evaluation confirms that this decoupling of confidence and accuracy renders AI-generated documentation clinically useless after just two generations. We systematically evaluate three mitigation strategies, finding that while synthetic volume scaling fails to prevent collapse, mixing real data with quality-aware filtering effectively preserves diversity. Ultimately, our results suggest that without policy-mandated human oversight, the deployment of generative AI threatens to degrade the very healthcare data ecosystems it relies upon.
|
https://arxiv.org/abs/2601.12946
|
Academic Papers
|
svg
|
9f7faeb154e463efdaa155631d32d430e35382997749217006f40467546b8938
|
2026-01-21T00:00:00-05:00
|
GazeD: Context-Aware Diffusion for Accurate 3D Gaze Estimation
|
arXiv:2601.12948v1 Announce Type: new Abstract: We introduce GazeD, a new 3D gaze estimation method that jointly provides 3D gaze and human pose from a single RGB image. Leveraging the ability of diffusion models to deal with uncertainty, it generates multiple plausible 3D gaze and pose hypotheses based on the 2D context information extracted from the input image. Specifically, we condition the denoising process on the 2D pose, the surroundings of the subject, and the context of the scene. With GazeD we also introduce a novel way of representing the 3D gaze by positioning it as an additional body joint at a fixed distance from the eyes. The rationale is that the gaze is usually closely related to the pose, and thus it can benefit from being jointly denoised during the diffusion process. Evaluations across three benchmark datasets demonstrate that GazeD achieves state-of-the-art performance in 3D gaze estimation, even surpassing methods that rely on temporal information. Project details will be available at https://aimagelab.ing.unimore.it/go/gazed.
|
https://arxiv.org/abs/2601.12948
|
Academic Papers
|
svg
|
f75b47857e20958be9767a45f67064711a89eb13b0dcc5526669936a3ebd278e
|
2026-01-21T00:00:00-05:00
|
Beyond Accuracy: Characterizing Code Comprehension Capabilities in (Large) Language Models
|
arXiv:2601.12951v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly integrated into software engineering workflows, yet current benchmarks provide only coarse performance summaries that obscure the diverse capabilities and limitations of these models. This paper investigates whether LLMs' code-comprehension performance aligns with traditional human-centric software metrics or instead reflects distinct, non-human regularities. We introduce a diagnostic framework that reframes code understanding as a binary input-output consistency task, enabling the evaluation of classification and generative models. Using a large-scale dataset, we correlate model performance with traditional, human-centric complexity metrics, such as lexical size, control-flow complexity, and abstract syntax tree structure. Our analyses reveal minimal correlation between human-defined metrics and LLM success (AUROC 0.63), while shadow models achieve substantially higher predictive performance (AUROC 0.86), capturing complex, partially predictable patterns beyond traditional software measures. These findings suggest that LLM comprehension reflects model-specific regularities only partially accessible through either human-designed or learned features, emphasizing the need for benchmark methodologies that move beyond aggregate accuracy and toward instance-level diagnostics, while acknowledging fundamental limits in predicting correct outcomes.
|
https://arxiv.org/abs/2601.12951
|
Academic Papers
|
svg
|
a641654aa63048a8bed3acbc9f3522eaab3013b6fa834f7dd3bed88c275911f7
|
2026-01-21T00:00:00-05:00
|
Imitation learning-based spacecraft rendezvous and docking method with Expert Demonstration
|
arXiv:2601.12952v1 Announce Type: new Abstract: Existing spacecraft rendezvous and docking control methods largely rely on predefined dynamic models and often exhibit limited robustness in realistic on-orbit environments. To address this issue, this paper proposes an Imitation Learning-based spacecraft rendezvous and docking control framework (IL-SRD) that directly learns control policies from expert demonstrations, thereby reducing dependence on accurate modeling. We propose an anchored decoder target mechanism, which conditions the decoder queries on state-related anchors to explicitly constrain the control generation process. This mechanism enforces physically consistent control evolution and effectively suppresses implausible action deviations in sequential prediction, enabling reliable six-degree-of-freedom (6-DOF) rendezvous and docking control. To further enhance stability, a temporal aggregation mechanism is incorporated to mitigate error accumulation caused by the sequential prediction nature of Transformer-based models, where small inaccuracies at each time step can propagate and amplify over long horizons. Extensive simulation results demonstrate that the proposed IL-SRD framework achieves accurate and energy-efficient model-free rendezvous and docking control. Robustness evaluations further confirm its capability to maintain competitive performance under significant unknown disturbances. The source code is available at https://github.com/Dongzhou-1996/IL-SRD.
|
https://arxiv.org/abs/2601.12952
|
Academic Papers
|
svg
|
e75ebf657667362fb5eae3fb694a697e18f0fc3b3a69c6bd03daaab2b90d8071
|
2026-01-21T00:00:00-05:00
|
StyMam: A Mamba-Based Generator for Artistic Style Transfer
|
arXiv:2601.12954v1 Announce Type: new Abstract: Image style transfer aims to integrate the visual patterns of a specific artistic style into a content image while preserving its content structure. Existing methods mainly rely on the generative adversarial network (GAN) or stable diffusion (SD). GAN-based approaches using CNNs or Transformers struggle to jointly capture local and global dependencies, leading to artifacts and disharmonious patterns. SD-based methods reduce such issues but often fail to preserve content structures and suffer from slow inference. To address these issues, we revisit GANs and propose a Mamba-based generator, termed StyMam, to produce high-quality stylized images without introducing artifacts and disharmonious patterns. Specifically, we introduce a Mamba-based generator with a residual dual-path strip scanning mechanism and a channel-reweighted spatial attention module. The former efficiently captures local texture features, while the latter models global dependencies. Finally, extensive qualitative and quantitative experiments demonstrate that the proposed method outperforms state-of-the-art algorithms in both quality and speed.
|
https://arxiv.org/abs/2601.12954
|
Academic Papers
|
svg
|
8ec170461c9a3694a27a86ae2ea60b723d8529e940f9f86b34394525121e6c66
|
2026-01-21T00:00:00-05:00
|
Codes Correcting Few Restricted Errors
|
arXiv:2601.12959v1 Announce Type: new Abstract: We consider linear codes over a field in which the error values are restricted to a subgroup of its unit group. This scenario captures Lee distance codes as well as codes over the Gaussian or Eisenstein integers. Codes correcting restricted errors gained increased attention recently in the context of code-based cryptography. In this work we provide new constructions of codes over the Gaussian or Eisenstein integers correcting two or three errors. We adapt some techniques from Roth and Siegel's work on codes for the Lee metric. We propose two construction methods, which may be seen as geometric and algebraic in flavor, respectively.
|
https://arxiv.org/abs/2601.12959
|
Academic Papers
|
svg
|
f43e91bdd600cd04f9e83bda7bfdf11a066daf64c3406578b0d055e1ddaf1e51
|
2026-01-21T00:00:00-05:00
|
Trustworthy Data-driven Chronological Age Estimation from Panoramic Dental Images
|
arXiv:2601.12960v1 Announce Type: new Abstract: Integrating deep learning into healthcare enables personalized care but raises trust issues due to model opacity. To improve transparency, we propose a system for dental age estimation from panoramic images that combines an opaque and a transparent method within a natural language generation (NLG) module. This module produces clinician-friendly textual explanations about the age estimations, designed with dental experts through a rule-based approach. Following the best practices in the field, the quality of the generated explanations was manually validated by dental experts using a questionnaire. The results showed strong performance: the experts rated the explanations 4.77+/-0.12 (out of 5) on average across the five dimensions considered. We also performed a trustworthy self-assessment procedure following the ALTAI checklist, in which the system scored 4.40+/-0.27 (out of 5) across seven dimensions of the AI Trustworthiness Assessment List.
|
https://arxiv.org/abs/2601.12960
|
Academic Papers
|
svg
|
1a48122fbb86048390f4d8ef23e6e42904dc9a0aea262989c11c97cfbed70d78
|
2026-01-21T00:00:00-05:00
|
Supervised Learning for Game Music Segmentation
|
arXiv:2601.12961v1 Announce Type: new Abstract: At present, neural network-based models, including transformers, struggle to generate memorable and readily comprehensible music from unified and repetitive musical material due to a lack of understanding of musical structure. Consequently, these models are rarely employed by the games industry. It is hypothesised by many scholars that the modelling of musical structure may inform models at a higher level, thereby enhancing the quality of music generation. The aim of this study is to explore the performance of supervised learning methods in the task of structural segmentation, which is the initial step in music structure modelling. An audio game music dataset with 309 structural annotations was created to train the proposed method, which combines convolutional neural networks and recurrent neural networks, achieving performance comparable to the state-of-the-art unsupervised learning methods with fewer training resources.
|
https://arxiv.org/abs/2601.12961
|
Academic Papers
|
svg
|
f5057c2da0697dc448dcc1e7faeb393dd7f6c21f0ea8be06c23e143f36f22187
|
2026-01-21T00:00:00-05:00
|
ACE-Align: Attribute Causal Effect Alignment for Cultural Values under Varying Persona Granularities
|
arXiv:2601.12962v1 Announce Type: new Abstract: Ensuring that large language models (LLMs) respect diverse cultural values is crucial for social equity. However, existing approaches often treat cultural groups as homogeneous and overlook within-group heterogeneity induced by intersecting demographic attributes, leading to unstable behavior under varying persona granularity. We propose ACE-Align (Attribute Causal Effect Alignment), a causal-effect framework that aligns how specific demographic attributes shift different cultural values, rather than treating each culture as a homogeneous group. We evaluate ACE-Align across 14 countries spanning five continents, with personas specified by subsets of four attributes (gender, education, residence, and marital status) and granularity instantiated by the number of specified attributes. Across all persona granularities, ACE-Align consistently outperforms baselines. Moreover, it improves geographic equity by reducing the average alignment gap between high-resource and low-resource regions from 9.81 to 4.92 points, while Africa shows the largest average gain (+8.48 points). Code is available at https://github.com/Wells-Luo/ACE-Align.
|
https://arxiv.org/abs/2601.12962
|
Academic Papers
|
svg
|
34343f1d31401d942589d68c48087d3ef0362198105874f5d66182388fc1083c
|
2026-01-21T00:00:00-05:00
|
Cross-Scale Pretraining: Enhancing Self-Supervised Learning for Low-Resolution Satellite Imagery for Semantic Segmentation
|
arXiv:2601.12964v1 Announce Type: new Abstract: Self-supervised pretraining in remote sensing is mostly done using mid-spatial resolution (MR) image datasets due to their high availability. Given the release of high-resolution (HR) datasets, we ask how HR datasets can be included in self-supervised pretraining to enhance MR image representation learning and downstream segmentation performance on MR tasks. We design a spatial affinity component that can be added to existing self-supervised learning frameworks and that uses HR imagery to learn better representations of MR imagery. We test the spatial affinity component on two self-supervised learning frameworks and show that it outperforms models pretrained on HR or MR images alone.
|
https://arxiv.org/abs/2601.12964
|
Academic Papers
|
svg
|
fd86c0b958b066f02f4ed1ae694102ea56b2e96a85b9c47567742f08d9d96753
|
2026-01-21T00:00:00-05:00
|
Deterministic Dynamics of Sampling Processes in Score-Based Diffusion Models with Multiplicative Noise Conditioning
|
arXiv:2601.12965v1 Announce Type: new Abstract: Score-based diffusion models generate new samples by learning the score function associated with a diffusion process. While the effectiveness of these models can be theoretically explained using differential equations related to the sampling process, previous work by Song and Ermon (2020) demonstrated that neural networks using multiplicative noise conditioning can still generate satisfactory samples. In this setup, the model is expressed as the product of two functions: one depending on the spatial variable and the other on the noise magnitude. This structure limits the model's ability to represent a more general relationship between the spatial variable and the noise, indicating that it cannot fully learn the correct score. Despite this limitation, the models perform well in practice. In this work, we provide a theoretical explanation for this phenomenon by studying the deterministic dynamics of the associated differential equations, offering insight into how the model operates.
|
https://arxiv.org/abs/2601.12965
|
Academic Papers
|
svg
|
cd50604c97b164fabe62e128e36cf3f3a095b12cb636d6cac753c6d34497d1a3
|
2026-01-21T00:00:00-05:00
|
Lombard Speech Synthesis for Any Voice with Controllable Style Embeddings
|
arXiv:2601.12966v1 Announce Type: new Abstract: The Lombard effect plays a key role in natural communication, particularly in noisy environments or when addressing hearing-impaired listeners. We present a controllable text-to-speech (TTS) system capable of synthesizing Lombard speech for any speaker without requiring explicit Lombard data during training. Our approach leverages style embeddings learned from a large, prosodically diverse dataset and analyzes their correlation with Lombard attributes using principal component analysis (PCA). By shifting the relevant PCA components, we manipulate the style embeddings and incorporate them into our TTS model to generate speech at desired Lombard levels. Evaluations demonstrate that our method preserves naturalness and speaker identity, enhances intelligibility under noise, and provides fine-grained control over prosody, offering a robust solution for controllable Lombard TTS for any speaker.
|
https://arxiv.org/abs/2601.12966
|
Academic Papers
|
svg
|
463d5ce29d1d485a8b687c59f30382564928392a0cc20cae592c77d4067711d5
|
2026-01-21T00:00:00-05:00
|
Sutradhara: An Intelligent Orchestrator-Engine Co-design for Tool-based Agentic Inference
|
arXiv:2601.12967v1 Announce Type: new Abstract: Agentic applications are LLMs that iteratively invoke external tools to accomplish complex tasks. Such tool-based agents are rapidly becoming the dominant paradigm for deploying language models in production. Unlike traditional single-turn inference, agentic workloads chain together multiple LLM calls and tool executions before producing a final response, creating a new performance bottleneck that manifests as increased latency in First Token Rendered (FTR) of the final answer. Through analysis of synthetic requests at production scale, we reveal three critical challenges: tool calls account for 30-80% of FTR latency, KV cache hit rates collapse despite substantial context reuse across iterations, and sequential orchestration wastes potential intra-request parallelism by sequentially executing LLM calls and tools. These bottlenecks stem from a design gap in which orchestrators and LLM engines operate as decoupled black boxes, preventing cross-layer optimizations. We present SUTRADHARA, a co-designed agentic inference system that integrates orchestration with LLM serving through a thin API enabling three optimizations: overlapping tool execution with subsequent LLM prefill using tool-aware prompt splitting, streaming tool execution that dispatches tools incrementally during decode rather than waiting for complete output, and orchestrator-aware cache management that uses semantic hints to improve hit rates and reduce thrashing. Implemented on vLLM, SUTRADHARA reduces median FTR latency by 15% and end-to-end latency by 10% across workloads on A100 GPUs, demonstrating that co-design can systematically tame latency in agentic systems.
|
https://arxiv.org/abs/2601.12967
|
Academic Papers
|
svg
|
47195ffdba9df9b460e04f6201a4219572f4e939cf10396612413f7553a45e16
|
2026-01-21T00:00:00-05:00
|
Architecture-Optimization Co-Design for Physics-Informed Neural Networks Via Attentive Representations and Conflict-Resolved Gradients
|
arXiv:2601.12971v1 Announce Type: new Abstract: Physics-Informed Neural Networks (PINNs) provide a learning-based framework for solving partial differential equations (PDEs) by embedding governing physical laws into neural network training. In practice, however, their performance is often hindered by limited representational capacity and optimization difficulties caused by competing physical constraints and conflicting gradients. In this work, we study PINN training from a unified architecture-optimization perspective. We first propose a layer-wise dynamic attention mechanism to enhance representational flexibility, resulting in the Layer-wise Dynamic Attention PINN (LDA-PINN). We then reformulate PINN training as a multi-task learning problem and introduce a conflict-resolved gradient update strategy to alleviate gradient interference, leading to the Gradient-Conflict-Resolved PINN (GC-PINN). By integrating these two components, we develop the Architecture-Conflict-Resolved PINN (ACR-PINN), which combines attentive representations with conflict-aware optimization while preserving the standard PINN loss formulation. Extensive experiments on benchmark PDEs, including the Burgers, Helmholtz, Klein-Gordon, and lid-driven cavity flow problems, demonstrate that ACR-PINN achieves faster convergence and significantly lower relative $L_2$ and $L_\infty$ errors than standard PINNs. These results highlight the effectiveness of architecture-optimization co-design for improving the robustness and accuracy of PINN-based solvers.
|
https://arxiv.org/abs/2601.12971
|
Academic Papers
|
svg
|
9ad4ee3ca632216f3b2eb2943a7d6cf4ad3ba5ac4777d76ead7fbf220115d44e
|
2026-01-21T00:00:00-05:00
|
Pardon? Evaluating Conversational Repair in Large Audio-Language Models
|
arXiv:2601.12973v1 Announce Type: new Abstract: Large Audio-Language Models (LALMs) have demonstrated strong performance in spoken question answering (QA), with existing evaluations primarily focusing on answer accuracy and robustness to acoustic perturbations. However, such evaluations implicitly assume that spoken inputs remain semantically answerable, an assumption that often fails in real-world interaction when essential information is missing. In this work, we introduce a repair-aware evaluation setting that explicitly distinguishes between answerable and unanswerable audio inputs. We define answerability as a property of the input itself and construct paired evaluation conditions using a semantic-acoustic masking protocol. Based on this setting, we propose the Evaluability Awareness and Repair (EAR) score, a non-compensatory metric that jointly evaluates task competence under answerable conditions and repair behavior under unanswerable conditions. Experiments on two spoken QA benchmarks across diverse LALMs reveal a consistent gap between answer accuracy and conversational reliability: while many models perform well when inputs are answerable, most fail to recognize semantic unanswerability and initiate appropriate conversational repair. These findings expose a limitation of prevailing accuracy-centric evaluation practices and motivate reliability assessments that treat unanswerable inputs as cues for repair and continued interaction.
|
https://arxiv.org/abs/2601.12973
|
Academic Papers
|
svg
|
09ab1952fbdf4ea9ac4b703581bae0aa73f81df07fbb424748ebd93ef8c96b71
|
2026-01-21T00:00:00-05:00
|
Bridging the Knowledge-Action Gap by Evaluating LLMs in Dynamic Dental Clinical Scenarios
|
arXiv:2601.12974v1 Announce Type: new Abstract: The transition of Large Language Models (LLMs) from passive knowledge retrievers to autonomous clinical agents demands a shift in evaluation-from static accuracy to dynamic behavioral reliability. To explore this boundary in dentistry, a domain where high-quality AI advice uniquely empowers patient-participatory decision-making, we present the Standardized Clinical Management & Performance Evaluation (SCMPE) benchmark, which comprehensively assesses performance from knowledge-oriented evaluations (static objective tasks) to workflow-based simulations (multi-turn simulated patient interactions). Our analysis reveals that while models demonstrate high proficiency in static objective tasks, their performance drops sharply in dynamic clinical dialogues, identifying that the primary bottleneck lies not in knowledge retention, but in the critical challenges of active information gathering and dynamic state tracking. Mapping "Guideline Adherence" versus "Decision Quality" reveals a prevalent "High Efficacy, Low Safety" risk in general models. Furthermore, we quantify the impact of Retrieval-Augmented Generation (RAG). While RAG mitigates hallucinations in static tasks, its efficacy in dynamic workflows is limited and heterogeneous, sometimes causing degradation. This underscores that external knowledge alone cannot bridge the reasoning gap without domain-adaptive pre-training. This study empirically charts the capability boundaries of dental LLMs, providing a roadmap for bridging the gap between standardized knowledge and safe, autonomous clinical practice.
|
https://arxiv.org/abs/2601.12974
|
Academic Papers
|
svg
|
aeb67961e0ff24480bb2c93003cfa9f72248fcfcff8482a4ce3c6b44c07a6eb3
|
2026-01-21T00:00:00-05:00
|
Kd-tree Based Wasserstein Distance Approximation for High-Dimensional Data
|
arXiv:2601.12975v1 Announce Type: new Abstract: The Wasserstein distance is a discrepancy measure between probability distributions, defined by an optimal transport problem. It has been used for various tasks such as retrieving similar items in high-dimensional images or text data. In retrieval applications, however, the Wasserstein distance is calculated repeatedly, and its cubic time complexity with respect to input size renders it unsuitable for large-scale datasets. Recently, tree-based approximation methods have been proposed to address this bottleneck. For example, the Flowtree algorithm computes transport on a quadtree and evaluates cost using the ground metric, and clustering-tree approaches have been reported to achieve high accuracy. However, these existing trees often incur significant construction time for preprocessing, and crucially, standard quadtrees cannot grow deep enough in high-dimensional spaces, resulting in poor approximation accuracy. In this paper, we propose kd-Flowtree, a kd-tree-based Wasserstein distance approximation method that uses a kd-tree for data embedding. Since kd-trees can grow sufficiently deep and adaptively even in high-dimensional cases, kd-Flowtree is capable of maintaining good approximation accuracy for such cases. In addition, kd-trees can be constructed more quickly than quadtrees, which contributes to reducing the computation time required for nearest neighbor search, including preprocessing. We provide a probabilistic upper bound on the nearest-neighbor search accuracy of kd-Flowtree, and show that this bound is independent of the dataset size. In the numerical experiments, we demonstrated that kd-Flowtree outperformed the existing Wasserstein distance approximation methods for retrieval tasks with real-world data.
|
https://arxiv.org/abs/2601.12975
|
Academic Papers
|
svg
|
567b6e6ee92e663dff1f41f795a102dc4b5178443470749b85fc0ff45187855b
|
2026-01-21T00:00:00-05:00
|
Reproducibility in Event-Log Research: A Parametrised Generator and Benchmark for Event-based Signatures
|
arXiv:2601.12978v1 Announce Type: new Abstract: Event-based datasets are crucial for cybersecurity analysis. A key use case is detecting event-based signatures, which represent attacks spanning multiple events and can only be understood once the relevant events are identified and linked. Analysing event datasets is essential for monitoring system security, but their growing volume and frequency create significant scalability and processing difficulties. Researchers rely on these datasets to develop and test techniques for automatically identifying signatures. However, because real datasets are security-sensitive and rarely shared, it becomes difficult to perform meaningful comparative evaluation between different approaches. This work addresses this evaluation limitation by offering a systematic method for generating event logs with known ground truth, enabling reproducible and comparable research. We present a novel parametrised generation technique capable of producing synthetic event datasets that contain event-based signatures for discovery. To demonstrate the capabilities of the technique, we provide a benchmark in signature detection. Our benchmarking demonstrated the suitability of DBSCAN, achieving a score greater than 0.95 Adjusted Rand Index on most generated datasets. This work enhances the ability of researchers to develop and benchmark new cybersecurity techniques, ultimately contributing to more robust and effective cybersecurity measures.
|
https://arxiv.org/abs/2601.12978
|
Academic Papers
|
svg
|
68063f8de08cfae285b8419316632412b89e294fbb41cb72b32a50d51b32a2c9
|
2026-01-21T00:00:00-05:00
|
The Bitter Lesson of Diffusion Language Models for Agentic Workflows: A Comprehensive Reality Check
|
arXiv:2601.12979v1 Announce Type: new Abstract: The pursuit of real-time agentic interaction has driven interest in Diffusion-based Large Language Models (dLLMs) as alternatives to auto-regressive backbones, promising to break the sequential latency bottleneck. However, do such efficiency gains translate into effective agentic behavior? In this work, we present a comprehensive evaluation of dLLMs (e.g., LLaDA, Dream) across two distinct agentic paradigms: Embodied Agents (requiring long-horizon planning) and Tool-Calling Agents (requiring precise formatting). Contrary to the efficiency hype, our results on Agentboard and BFCL reveal a "bitter lesson": current dLLMs fail to serve as reliable agentic backbones, frequently leading to systematic failure. (1) In Embodied settings, dLLMs suffer from repeated attempts, failing to branch under temporal feedback. (2) In Tool-Calling settings, dLLMs fail to maintain symbolic precision (e.g. strict JSON schemas) under diffusion noise. To assess the potential of dLLMs in agentic workflows, we introduce DiffuAgent, a multi-agent evaluation framework that integrates dLLMs as plug-and-play cognitive cores. Our analysis shows that dLLMs are effective in non-causal roles (e.g., memory summarization and tool selection) but require the incorporation of causal, precise, and logically grounded reasoning mechanisms into the denoising process to be viable for agentic tasks.
|
https://arxiv.org/abs/2601.12979
|
Academic Papers
|
svg
|
0d74e1526814df96705dbc4a7b27e93c9af01fa00d19baf3a2f42e8fe7340863
|
2026-01-21T00:00:00-05:00
|
Path to Diversity: A Primer on ISAC-izing Commodity Wi-Fi for Practical Deployments
|
arXiv:2601.12980v1 Announce Type: new Abstract: Integrated Sensing and Communication (ISAC) has emerged as a key paradigm in next-generation wireless networks. While the ubiquity and low cost of commodity Wi-Fi make it an ideal platform for wide-scale sensing, it is the continuous evolution of Wi-Fi standards-towards higher frequency bands, wider bandwidths, and larger antenna arrays-that fundamentally unlocks the physical resources required for high-performance ISAC. To structure this rapidly expanding field, numerous surveys have appeared. However, prevailing literature predominantly adopts a top-down perspective, emphasizing upper-layer applications or deep learning models while treating the physical layer as an opaque abstraction. Consequently, these works often fail to touch the bottom layer of signal formation and lack technical guidance on overcoming the physical barriers that constrain sensing performance. To bridge this gap, this tutorial takes a bottom-up approach, systematically analyzing the sensing gains brought by Wi-Fi advancements through the lens of physical-layer diversity. We organize the framework around four orthogonal dimensions: i) Temporal Diversity addresses synchronization gaps to enable absolute ranging; ii) Frequency Diversity expands the effective bandwidth to sharpen range resolution; iii) Link Diversity leverages distributed topologies and digital feedback to achieve ubiquitous observability; and iv) Spatial Diversity utilizes multi-antenna arrays to combine passive angular discrimination with active directional control. Collectively, these orthogonal dimensions resolve fundamental ambiguities in time, range, and space, bridging physical capabilities with challenging sensing diversities. By synthesizing these dimensions, this tutorial provides a comprehensive guide for "ISAC-izing" commodity Wi-Fi, paving the way for future standardization and robust deployment.
|
https://arxiv.org/abs/2601.12980
|
Academic Papers
|
svg
|
172e3605763897c163680585f102813d0c1ea20caa7275557721a4dd294a39c6
|
2026-01-21T00:00:00-05:00
|
Early Prediction of Type 2 Diabetes Using Multimodal data and Tabular Transformers
|
arXiv:2601.12981v1 Announce Type: new Abstract: This study introduces a novel approach for early Type 2 Diabetes Mellitus (T2DM) risk prediction using a tabular transformer (TabTrans) architecture to analyze longitudinal patient data. By processing patients' longitudinal health records and bone-related tabular data, our model captures complex, long-range dependencies in disease progression that conventional methods often overlook. We validated our TabTrans model on a retrospective Qatar BioBank (QBB) cohort of 1,382 subjects, comprising 725 men (146 diabetic, 579 healthy) and 657 women (133 diabetic, 524 healthy). The study integrated electronic health records (EHR) with dual-energy X-ray absorptiometry (DXA) data. To address class imbalance, we employed SMOTE and SMOTE-ENN resampling techniques. The proposed model's performance is evaluated against conventional machine learning (ML) and generative AI models, including Claude 3.5 Sonnet (Anthropic's constitutional AI), GPT-4 (OpenAI's generative pre-trained transformer), and Gemini Pro (Google's multimodal language model). Our TabTrans model demonstrated superior predictive performance, achieving ROC AUC $\geq$ 79.7 % for T2DM prediction compared to both generative AI models and conventional ML approaches. Feature interpretation analysis identified key risk indicators, with visceral adipose tissue (VAT) mass and volume, ward bone mineral density (BMD) and bone mineral content (BMC), T and Z-scores, and L1-L4 scores emerging as the most important predictors associated with diabetes development in Qatari adults. These findings demonstrate the significant potential of TabTrans for analyzing complex tabular healthcare data, providing a powerful tool for proactive T2DM management and personalized clinical interventions in the Qatari population. Index Terms: tabular transformers, multimodal data, DXA data, diabetes, T2DM, feature interpretation, tabular data
|
https://arxiv.org/abs/2601.12981
|
Academic Papers
|
svg
|
031ecef6730b883037bb4520a84c35e94adf0775dfd3c7753c66dc444cff93e7
|
2026-01-21T00:00:00-05:00
|
ChartAttack: Testing the Vulnerability of LLMs to Malicious Prompting in Chart Generation
|
arXiv:2601.12983v1 Announce Type: new Abstract: Multimodal large language models (MLLMs) are increasingly used to automate chart generation from data tables, enabling efficient data analysis and reporting but also introducing new misuse risks. In this work, we introduce ChartAttack, a novel framework for evaluating how MLLMs can be misused to generate misleading charts at scale. ChartAttack injects misleaders into chart designs, aiming to induce incorrect interpretations of the underlying data. Furthermore, we create AttackViz, a chart question-answering (QA) dataset where each (chart specification, QA) pair is labeled with effective misleaders and their induced incorrect answers. Experiments in in-domain and cross-domain settings show that ChartAttack significantly degrades the QA performance of MLLM readers, reducing accuracy by an average of 19.6 points and 14.9 points, respectively. A human study further shows an average 20.2 point drop in accuracy for participants exposed to misleading charts generated by ChartAttack. Our findings highlight an urgent need for robustness and security considerations in the design, evaluation, and deployment of MLLM-based chart generation systems. We make our code and data publicly available.
|
https://arxiv.org/abs/2601.12983
|
Academic Papers
|
svg
|
b0da570fea1ba603eea29c68dd09203f12ef3a067e85d8201548e149701811ac
|
2026-01-21T00:00:00-05:00
|
Rules, Resources, and Restrictions: A Taxonomy of Task-Based Information Request Intents
|
arXiv:2601.12985v1 Announce Type: new Abstract: Understanding and classifying query intents can improve retrieval effectiveness by helping align search results with the motivations behind user queries. However, existing intent taxonomies are typically derived from system log data and capture mostly isolated information needs, while the broader task context often remains unaddressed. This limitation becomes increasingly relevant as interactions with Large Language Models (LLMs) expand user expectations from simple query answering toward comprehensive task support, for example, with purchasing decisions or in travel planning. At the same time, current LLMs still struggle to fully interpret complex and multifaceted tasks. To address this gap, we argue for a stronger task-based perspective on query intent. Drawing on a grounded-theory-based interview study with airport information clerks, we present a taxonomy of task-based information request intents that bridges the gap between traditional query-focused approaches and the emerging demands of AI-driven task-oriented search.
|
https://arxiv.org/abs/2601.12985
|
Academic Papers
|
svg
|
d535658779a991f3214714f7795c083eea37fa94e0c5591d4ebe75f98e886bfb
|
2026-01-21T00:00:00-05:00
|
KinGuard: Hierarchical Kinship-Aware Fingerprinting to Defend Against Large Language Model Stealing
|
arXiv:2601.12986v1 Announce Type: new Abstract: Protecting the intellectual property of large language models requires robust ownership verification. Conventional backdoor fingerprinting, however, is flawed by a stealth-robustness paradox: to be robust, these methods force models to memorize fixed responses to high-perplexity triggers, but this targeted overfitting creates detectable statistical artifacts. We resolve this paradox with KinGuard, a framework that embeds a private knowledge corpus built on structured kinship narratives. Instead of memorizing superficial triggers, the model internalizes this knowledge via incremental pre-training, and ownership is verified by probing its conceptual understanding. Extensive experiments demonstrate KinGuard's superior effectiveness, stealth, and resilience against a battery of attacks including fine-tuning, input perturbation, and model merging. Our work establishes knowledge-based embedding as a practical and secure paradigm for model fingerprinting.
|
https://arxiv.org/abs/2601.12986
|
Academic Papers
|
svg
|
8d36c2667d6d444d5ba46ebd20e9bb42cfaba93369256959e992567b62a375a1
|
2026-01-21T00:00:00-05:00
|
Guiding vector field-based guidance under wind disturbances applied to a tailsitter UAV
|
arXiv:2601.12987v1 Announce Type: new Abstract: This paper develops a guidance control law based on a parametric Guiding Vector Field (GVF) and integrates it with a state-of-the-art acceleration and attitude control architecture for tailsitters. The resulting framework enables a direct comparison between traditional trajectory-tracking guidance and GVF-based path-following guidance using a realistic tailsitter model operating under windy conditions. Through extensive simulations, it is shown that for agile flight scenarios with wind and small initial position error, both guidance strategies achieve comparable tracking performance, indicating that the additional complexity introduced by the GVF formulation is not always justified. However, the GVF-based approach exhibits an advantage when initial deviation from the path is present, yielding smooth and well-behaved convergence toward the desired path. Two additional contributions support this evaluation. First, a modification of the parametric GVF is proposed that guarantees exponential stability of the tracking error dynamics for a single integrator system. Second, the differential flatness transform of a tailsitter vehicle is extended to account for explicit knowledge of the wind velocity vector.
|
https://arxiv.org/abs/2601.12987
|
Academic Papers
|
svg
|
f29f0274fc97b6dd785623a90232ac16399ff8c6335152ddd5b71e90f8a39a44
|
2026-01-21T00:00:00-05:00
|
PaperGuide: Making Small Language-Model Paper-Reading Agents More Efficient
|
arXiv:2601.12988v1 Announce Type: new Abstract: The accelerating growth of the scientific literature makes it increasingly difficult for researchers to track new advances through manual reading alone. Recent progress in large language models (LLMs) has therefore spurred interest in autonomous agents that can read scientific papers and extract task-relevant information. However, most existing approaches rely either on heavily engineered prompting or on a conventional SFT-RL training pipeline, both of which often lead to excessive and low-yield exploration. Drawing inspiration from cognitive science, we propose PaperCompass, a framework that mitigates these issues by separating high-level planning from fine-grained execution. PaperCompass first drafts an explicit plan that outlines the intended sequence of actions, and then performs detailed reasoning to instantiate each step by selecting the parameters for the corresponding function calls. To train such behavior, we introduce Draft-and-Follow Policy Optimization (DFPO), a tailored RL method that jointly optimizes both the draft plan and the final solution. DFPO can be viewed as a lightweight form of hierarchical reinforcement learning, aimed at narrowing the 'knowing-doing' gap in LLMs. We provide a theoretical analysis that establishes DFPO's favorable optimization properties, supporting a stable and reliable training process. Experiments on paper-based question answering (Paper-QA) benchmarks show that PaperCompass improves efficiency over strong baselines without sacrificing performance, achieving results comparable to much larger models.
|
https://arxiv.org/abs/2601.12988
|
Academic Papers
|
svg
|
9dbde8f603935cc54fb624e240e0cd2c15edcb08da2bb247e9a06f8c3cd34df0
|
2026-01-21T00:00:00-05:00
|
Enshrined Proposer Builder Separation in the presence of Maximal Extractable Value
|
arXiv:2601.12989v1 Announce Type: new Abstract: In blockchain systems operating under the Proof-of-Stake (PoS) consensus mechanism, fairness in transaction processing is essential to preserving decentralization and maintaining user trust. However, with the emergence of Maximal Extractable Value (MEV), concerns about economic centralization and content manipulation have intensified. To address these vulnerabilities, the Ethereum community has introduced Proposer Builder Separation (PBS), which separates block construction from block proposal. Later, enshrined Proposer Builder Separation (ePBS) was also proposed in EIP-7732, which embeds PBS directly into the Ethereum consensus layer. Our work identifies key limitations of ePBS by developing a formal framework that combines mathematical analysis and agent-based simulations to evaluate its auction-based block-building mechanism, with particular emphasis on MEV dynamics. Our results reveal that, although ePBS redistributes responsibilities between builders and proposers, it significantly amplifies profit and content centralization: the Gini coefficient for profits rises from 0.1749 under standard PoS without ePBS to 0.8358 under ePBS. This sharp increase indicates that a small number of efficient builders capture most value via MEV-driven auctions. Moreover, 95.4% of the block value is rewarded to proposers in ePBS, revealing a strong economic bias despite their limited role in block assembly. These findings highlight that ePBS exacerbates incentives for builders to adopt aggressive MEV strategies, suggesting the need for future research into mechanism designs that better balance decentralization, fairness, and MEV mitigation.
|
https://arxiv.org/abs/2601.12989
|
Academic Papers
|
svg
|
f8e7ad8ef4dd956cd2310e47fece03d5abd23ae354043d90c7f9f5855e059847
|
2026-01-21T00:00:00-05:00
|
RAGExplorer: A Visual Analytics System for the Comparative Diagnosis of RAG Systems
|
arXiv:2601.12991v1 Announce Type: new Abstract: The advent of Retrieval-Augmented Generation (RAG) has significantly enhanced the ability of Large Language Models (LLMs) to produce factually accurate and up-to-date responses. However, the performance of a RAG system is not determined by a single component but emerges from a complex interplay of modular choices, such as embedding models and retrieval algorithms. This creates a vast and often opaque configuration space, making it challenging for developers to understand performance trade-offs and identify optimal designs. To address this challenge, we present RAGExplorer, a visual analytics system for the systematic comparison and diagnosis of RAG configurations. RAGExplorer guides users through a seamless macro-to-micro analytical workflow. Initially, it empowers developers to survey the performance landscape across numerous configurations, allowing for a high-level understanding of which design choices are most effective. For a deeper analysis, the system enables users to drill down into individual failure cases, investigate how differences in retrieved information contribute to errors, and interactively test hypotheses by manipulating the provided context to observe the resulting impact on the generated answer. We demonstrate the effectiveness of RAGExplorer through detailed case studies and user studies, validating its ability to empower developers in navigating the complex RAG design space. Our code and user guide are publicly available at https://github.com/Thymezzz/RAGExplorer.
|
https://arxiv.org/abs/2601.12991
|
Academic Papers
|
svg
|
b6235a1d082464c75bfe17a40780e427a1138aa681ca640fb39dbd5de5458b11
|
2026-01-21T00:00:00-05:00
|
Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
|
arXiv:2601.12993v1 Announce Type: new Abstract: We introduce Being-H0.5, a foundational Vision-Language-Action (VLA) model designed for robust cross-embodiment generalization across diverse robotic platforms. While existing VLAs often struggle with morphological heterogeneity and data scarcity, we propose a human-centric learning paradigm that treats human interaction traces as a universal "mother tongue" for physical interaction. To support this, we present UniHand-2.0, the largest embodied pre-training recipe to date, comprising over 35,000 hours of multimodal data across 30 distinct robotic embodiments. Our approach introduces a Unified Action Space that maps heterogeneous robot controls into semantically aligned slots, enabling low-resource robots to bootstrap skills from human data and high-resource platforms. Built upon this human-centric foundation, we design a unified sequential modeling and multi-task pre-training paradigm to bridge human demonstrations and robotic execution. Architecturally, Being-H0.5 utilizes a Mixture-of-Transformers design featuring a novel Mixture-of-Flow (MoF) framework to decouple shared motor primitives from specialized embodiment-specific experts. Finally, to make cross-embodiment policies stable in the real world, we introduce Manifold-Preserving Gating for robustness under sensory shift and Universal Async Chunking to universalize chunked control across embodiments with different latency and control profiles. We empirically demonstrate that Being-H0.5 achieves state-of-the-art results on simulated benchmarks, such as LIBERO (98.9%) and RoboCasa (53.9%), while also exhibiting strong cross-embodiment capabilities on five robotic platforms.
|
https://arxiv.org/abs/2601.12993
|
Academic Papers
|
svg
|
cdfa2045091f36f95e519086b3048db5984c593cd47c78e9cfb88d172fadfaac
|
2026-01-21T00:00:00-05:00
|
AsyncBEV: Cross-modal Flow Alignment in Asynchronous 3D Object Detection
|
arXiv:2601.12994v1 Announce Type: new Abstract: In autonomous driving, multi-modal perception tasks like 3D object detection typically rely on well-synchronized sensors, both at training and inference. However, despite the use of hardware- or software-based synchronization algorithms, perfect synchrony is rarely guaranteed: Sensors may operate at different frequencies, and real-world factors such as network latency, hardware failures, or processing bottlenecks often introduce time offsets between sensors. Such asynchrony degrades perception performance, especially for dynamic objects. To address this challenge, we propose AsyncBEV, a trainable lightweight and generic module to improve the robustness of 3D Bird's Eye View (BEV) object detection models against sensor asynchrony. Inspired by scene flow estimation, AsyncBEV first estimates the 2D flow from the BEV features of two different sensor modalities, taking into account the known time offset between these sensor measurements. The predicted feature flow is then used to warp and spatially align the feature maps, which we show can easily be integrated into different current BEV detector architectures (e.g., BEV grid-based and token-based). Extensive experiments demonstrate AsyncBEV improves robustness against both small and large asynchrony between LiDAR or camera sensors in both the token-based CMT and grid-based UniBEV, especially for dynamic objects. We significantly outperform the ego motion compensated CMT and UniBEV baselines, notably by $16.6$ % and $11.9$ % NDS on dynamic objects in the worst-case scenario of a $0.5 s$ time offset. Code will be released upon acceptance.
|
https://arxiv.org/abs/2601.12994
|
Academic Papers
|
svg
|
4e88ac9a2255248508135da5d43443c91bdc2aa4289ab0d8399438a082206741
|
2026-01-21T00:00:00-05:00
|
Graph Reasoning Paradigm: Structured and Symbolic Reasoning with Topology-Aware Reinforcement Learning for Large Language Models
|
arXiv:2601.12995v1 Announce Type: new Abstract: Long Chain-of-Thought (LCoT), achieved by Reinforcement Learning with Verifiable Rewards (RLVR), has proven effective in enhancing the reasoning capabilities of Large Language Models (LLMs). However, reasoning in current LLMs is primarily generated as plain text, where performing semantic evaluation on such unstructured data creates a computational bottleneck during training. Despite RLVR-based optimization, existing methods still suffer from coarse-grained supervision, reward hacking, high training costs, and poor generalization. To address these issues, we propose the Graph Reasoning Paradigm (GRP), which realizes structured and symbolic reasoning, implemented via graph-structured representations with step-level cognitive labels. Building upon GRP, we further design Process-Aware Stratified Clipping Group Relative Policy Optimization (PASC-GRPO), which leverages structured evaluation to replace semantic evaluation, achieves process-aware verification through graph-structured outcome rewards, and mitigates reward hacking via stratified clipping advantage estimation. Experiments demonstrate significant improvements across mathematical reasoning and code generation tasks. Data, models, and code will be released later.
|
https://arxiv.org/abs/2601.12995
|
Academic Papers
|
svg
|
44a2b2fedf2cd0fc18f698a396265baf89399a5f8f1a03373f71eae9ff20311a
|
2026-01-21T00:00:00-05:00
|
OFA-MAS: One-for-All Multi-Agent System Topology Design based on Mixture-of-Experts Graph Generative Models
|
arXiv:2601.12996v1 Announce Type: new Abstract: Multi-Agent Systems (MAS) offer a powerful paradigm for solving complex problems, yet their performance is critically dependent on the design of their underlying collaboration topology. As MAS become increasingly deployed in web services (e.g., search engines), designing adaptive topologies for diverse cross-domain user queries becomes essential. Current graph learning-based design methodologies often adhere to a "one-for-one" paradigm, where a specialized model is trained for each specific task domain. This approach suffers from poor generalization to unseen domains and fails to leverage shared structural knowledge across different tasks. To address this, we propose OFA-TAD, a one-for-all framework that generates adaptive collaboration graphs for any task described in natural language through a single universal model. Our approach integrates a Task-Aware Graph State Encoder (TAGSE) that filters task-relevant node information via sparse gating, and a Mixture-of-Experts (MoE) architecture that dynamically selects specialized sub-networks to drive node and edge prediction. We employ a three-stage training strategy: unconditional pre-training on canonical topologies for structural priors, large-scale conditional pre-training on LLM-generated datasets for task-topology mappings, and supervised fine-tuning on empirically validated graphs. Experiments across six diverse benchmarks show that OFA-TAD significantly outperforms specialized one-for-one models, generating highly adaptive MAS topologies. Code: https://github.com/Shiy-Li/OFA-MAS.
|
https://arxiv.org/abs/2601.12996
|
Academic Papers
|
svg
|
5497ff9098052d465208621ebca38c4c0056cd6002394238e65c4c5e75e7dba7
|
2026-01-21T00:00:00-05:00
|
Weighted-Hamming Metric: Bounds and Codes
|
arXiv:2601.12998v1 Announce Type: new Abstract: The weighted-Hamming metric generalizes the Hamming metric by assigning different weights to blocks of coordinates. It is well-suited for applications such as coding over independent parallel channels, each of which has a different level of importance or noise. From a coding-theoretic perspective, the actual error-correction capability of a code under this metric can exceed half its minimum distance. In this work, we establish direct bounds on this capability, tightening those obtained via minimum-distance arguments. We also propose a flexible code construction based on generalized concatenation and show that these codes can be efficiently decoded up to a lower bound on the error-correction capability.
|
https://arxiv.org/abs/2601.12998
|
Academic Papers
|
svg
|
47cefe685b21b95474b8fa99d0c3478106f06e32595a1d4f0cf2d1b5c26672d1
|
2026-01-21T00:00:00-05:00
|
PrivFly: A Privacy-Preserving Self-Supervised Framework for Rare Attack Detection in IoFT
|
arXiv:2601.13003v1 Announce Type: new Abstract: The Internet of Flying Things (IoFT) plays a vital role in modern applications such as aerial surveillance and smart mobility. However, it remains highly vulnerable to cyberattacks that threaten the confidentiality, integrity, and availability of sensitive data. Developing effective intrusion detection systems (IDS) for IoFT networks faces key challenges, including data imbalance, privacy concerns, and the limited capability of traditional models to detect rare but potentially damaging cyber threats. In this work, we propose PrivFly, a privacy-preserving IDS framework that integrates self-supervised representation learning and differential privacy (DP) to enhance detection performance in imbalanced IoFT network traffic. We propose a masked feature reconstruction module for self-supervised pretraining, improving feature representations and boosting rare-class detection. Differential privacy is applied during training to protect sensitive information without significantly compromising model performance. In addition, we conduct a SHapley additive explanations (SHAP)-based analysis to evaluate the impact of DP on feature importance and model behavior. Experimental results on the ECU-IoFT dataset show that PrivFly achieves up to 98% accuracy and 99% F1-score, effectively balancing privacy and detection performance for secure IoFT systems.
|
https://arxiv.org/abs/2601.13003
|
Academic Papers
|
svg
|
feaf49fb4eb2b2c3359f58a3b9c0510b5ce4e7d58c4977b1c46b55879c8647d9
|
2026-01-21T00:00:00-05:00
|
An iterative approach to a fluid-rigid body interaction problem
|
arXiv:2601.13004v1 Announce Type: new Abstract: We study a novel approach for the existence of solutions to an incompressible fluid-rigid body interaction problem in three dimensions. Our approach introduces an iteration based on a sequence of related problems posed on domains with prescribed evolution. In particular we prove the short-time existence of strong solutions to a system coupling the incompressible Navier--Stokes equations to the ordinary differential equations governing the motion of a rigid body, with no slip boundary conditions on the boundary of the rigid body, provided that the relative density $\frac{\rho}{\rho_B}$ is sufficiently small. We also discuss the use of our iterative approach in numerical methods for the moving boundary problem, and complement this with some numerical experiments in two dimensions which demonstrate the necessity of the smallness assumption on $\frac{\rho}{\rho_B}$.
|
https://arxiv.org/abs/2601.13004
|
Academic Papers
|
svg
|
e040b0d1be2694edb3133587e7f071cb5d169b850806164cbd36ee0ce9ffeb3c
|
2026-01-21T00:00:00-05:00
|
ArchAgent: Scalable Legacy Software Architecture Recovery with LLMs
|
arXiv:2601.13007v1 Announce Type: new Abstract: Recovering accurate architecture from large-scale legacy software is hindered by architectural drift, missing relations, and the limited context of Large Language Models (LLMs). We present ArchAgent, a scalable agent-based framework that combines static analysis, adaptive code segmentation, and LLM-powered synthesis to reconstruct multiview, business-aligned architectures from cross-repository codebases. ArchAgent introduces scalable diagram generation with contextual pruning and integrates cross-repository data to identify business-critical modules. Evaluations of typical large-scale GitHub projects show significant improvements over existing benchmarks. An ablation study confirms that dependency context improves the accuracy of generated architectures of production-level repositories, and a real-world case study demonstrates effective recovery of critical business logic from legacy projects. The dataset is available at https://github.com/panrusheng/arch-eval-benchmark.
|
https://arxiv.org/abs/2601.13007
|
Academic Papers
|
svg
|
29938fe09b7fef81d7ee6232271a38fc856b16a98e85f5b4883bfa989f5deaa3
|
2026-01-21T00:00:00-05:00
|
HT-GNN: Hyper-Temporal Graph Neural Network for Customer Lifetime Value Prediction in Baidu Ads
|
arXiv:2601.13013v1 Announce Type: new Abstract: Lifetime value (LTV) prediction is crucial for news feed advertising, enabling platforms to optimize bidding and budget allocation for long-term revenue growth. However, it faces two major challenges: (1) demographic-based targeting creates segment-specific LTV distributions with large value variations across user groups; and (2) dynamic marketing strategies generate irregular behavioral sequences where engagement patterns evolve rapidly. We propose a Hyper-Temporal Graph Neural Network (HT-GNN), which jointly models demographic heterogeneity and temporal dynamics through three key components: (i) a hypergraph-supervised module capturing inter-segment relationships; (ii) a transformer-based temporal encoder with adaptive weighting; and (iii) a task-adaptive mixture-of-experts with dynamic prediction towers for multi-horizon LTV forecasting. Experiments on \textit{Baidu Ads} with 15 million users demonstrate that HT-GNN consistently outperforms state-of-the-art methods across all metrics and prediction horizons.
|
https://arxiv.org/abs/2601.13013
|
Academic Papers
|
svg
|
633029a3f1a3e37e9d12eaa26dff403cde14ae3741833efebcbf8ca3bc3b9177
|
2026-01-21T00:00:00-05:00
|
MeltRTL: Multi-Expert LLMs with Inference-time Intervention for RTL Code Generation
|
arXiv:2601.13015v1 Announce Type: new Abstract: The automated generation of hardware register-transfer level (RTL) code with large language models (LLMs) shows promise, yet current solutions struggle to produce syntactically and functionally correct code for complex digital designs. This paper introduces MeltRTL, a novel framework that integrates multi-expert attention with inference-time intervention (ITI) to significantly improve LLM-based RTL code generation accuracy without retraining the base model. MeltRTL introduces three key innovations: (1) A multi-expert attention architecture that dynamically routes design specifications to specialized expert networks, enabling targeted reasoning across various hardware categories; (2) An inference-time intervention mechanism that employs non-linear probes to detect and correct hardware-specific inaccuracies during generation; and (3) An efficient intervention framework that selectively operates on expert-specific attention heads with minimal computational overhead. We evaluate MeltRTL on the VerilogEval benchmark, achieving 96% synthesizability and 60% functional correctness, compared to the base LLM's 85.3% and 45.3%, respectively. These improvements are obtained entirely at inference time, with only 27% computational overhead and no model fine-tuning, making MeltRTL immediately deployable on existing pre-trained LLMs. Ablation studies further show the complementary benefits of multi-expert architecture and ITI, highlighting their synergistic effects when combined.
|
https://arxiv.org/abs/2601.13015
|
Academic Papers
|
svg
|
17f893e2a3298f6be8c2dbc23c483eece61d7edfcc85e8760ea228a5f66f6248
|
2026-01-21T00:00:00-05:00
|
Bi-Attention HateXplain : Taking into account the sequential aspect of data during explainability in a multi-task context
|
arXiv:2601.13018v1 Announce Type: new Abstract: Technological advances in the Internet and online social networks have brought many benefits to humanity. At the same time, this growth has led to an increase in hate speech, a major global threat. To improve the reliability of black-box models used for hate speech detection, post-hoc approaches such as LIME, SHAP, and LRP provide an explanation after the classification model has been trained. In contrast, multi-task approaches based on the HateXplain benchmark learn to explain and classify simultaneously. However, results from HateXplain-based algorithms show that predicted attention varies considerably when it should be constant. This attention variability can lead to inconsistent interpretations, instability of predictions, and learning difficulties. To solve this problem, we propose the BiAtt-BiRNN-HateXplain (Bidirectional Attention BiRNN HateXplain) model, which is easier to explain than more complex LLMs, an important property given the need for transparency, and which takes the sequential aspect of the input data into account during explainability thanks to a BiRNN layer. Thus, if the explanation is correctly estimated through multi-task learning (explainability and classification tasks), the model can classify better and commit fewer unintentional bias errors related to communities. The experimental results on HateXplain data show a clear improvement in detection performance and explainability, and a reduction in unintentional bias.
|
https://arxiv.org/abs/2601.13018
|
Academic Papers
|
svg
|
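The multi-task idea in the Bi-Attention HateXplain abstract above, a classification loss plus supervision of the predicted token attention toward the human rationale, can be sketched as follows. The BiRNN encoder is abstracted away and `multitask_loss` is an invented helper; this is a minimal numpy illustration, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multitask_loss(cls_logits, label, attn_logits, rationale, lam=1.0):
    """Joint objective: classification cross-entropy plus an attention term
    pushing predicted token attention toward the human rationale mask
    (renormalized to a distribution over the annotated tokens)."""
    cls_loss = -np.log(softmax(cls_logits)[label] + 1e-12)
    attn = softmax(attn_logits)
    target = rationale / max(rationale.sum(), 1.0)
    attn_loss = -(target * np.log(attn + 1e-12)).sum()
    return cls_loss + lam * attn_loss
```

Attention that concentrates on the rationale tokens yields a lower joint loss than attention placed elsewhere, which is the mechanism that keeps explanations stable across similar inputs.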
97dac88272d0d24e033417448cb8ad0f5b35b563325ae82e43b08c8a219d4418
|
2026-01-21T00:00:00-05:00
|
PASs-MoE: Mitigating Misaligned Co-drift among Router and Experts via Pathway Activation Subspaces for Continual Learning
|
arXiv:2601.13020v1 Announce Type: new Abstract: Continual instruction tuning (CIT) requires multimodal large language models (MLLMs) to adapt to a stream of tasks without forgetting prior capabilities. A common strategy is to isolate updates by routing inputs to different LoRA experts. However, existing LoRA-based Mixture-of-Experts (MoE) methods often jointly update the router and experts in an indiscriminate way, causing the router's preferences to co-drift with experts' adaptation pathways and gradually deviate from early-stage input-expert specialization. We term this phenomenon Misaligned Co-drift, which blurs expert responsibilities and exacerbates forgetting. To address this, we introduce the pathway activation subspace (PASs), a LoRA-induced subspace that reflects which low-rank pathway directions an input activates in each expert, providing a capability-aligned coordinate system for routing and preservation. Based on PASs, we propose a fixed-capacity PASs-based MoE-LoRA method with two components: PAS-guided Reweighting, which calibrates routing using each expert's pathway activation signals, and PAS-aware Rank Stabilization, which selectively stabilizes rank directions important to previous tasks. Experiments on a CIT benchmark show that our approach consistently outperforms a range of conventional continual learning baselines and MoE-LoRA variants in both accuracy and anti-forgetting without adding parameters. Our code will be released upon acceptance.
|
https://arxiv.org/abs/2601.13020
|
Academic Papers
|
svg
|
7f5544673d1ec7ed01810f5eb97234028e7c5ed2e035374fb556e0e61b185eea
|
2026-01-21T00:00:00-05:00
|
Enhancing Generalization in Sickle Cell Disease Diagnosis through Ensemble Methods and Feature Importance Analysis
|
arXiv:2601.13021v1 Announce Type: new Abstract: This work presents a novel approach for selecting the optimal ensemble-based classification method and features, with a primary focus on achieving generalization relative to the state of the art, to provide diagnostic support for Sickle Cell Disease using peripheral blood smear images of red blood cells. We pre-processed and segmented the microscopic images to ensure the extraction of high-quality features. To ensure the reliability of our proposed system, we conducted an in-depth analysis of interpretability. Leveraging techniques established in the literature, we extracted features from blood cells and employed ensemble machine learning methods to classify their morphology. Furthermore, we have devised a methodology to identify the most critical features for classification, aimed at reducing complexity and training time and enhancing interpretability in opaque models. Lastly, we validated our results using a new dataset, on which our model outperformed state-of-the-art models in terms of generalization. The ensemble of the Random Forest and Extra Trees classifiers achieved a harmonic mean of precision and recall (F1-score) of 90.71\% and a Sickle Cell Disease diagnosis support score (SDS-score) of 93.33\%. These results represent a notable improvement over previous ones obtained with a Gradient Boosting classifier (F1-score 87.32\% and SDS-score 89.51\%). To foster scientific progress, we have made available the parameters for each model, the implemented code library, and the confusion matrices with the raw data.
|
https://arxiv.org/abs/2601.13021
|
Academic Papers
|
svg
|
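The winning configuration in the sickle-cell abstract above, a voting ensemble of Random Forest and Extra Trees with feature-importance analysis, can be sketched with scikit-learn on synthetic stand-in data. The real features come from segmented blood-smear images, and the hyperparameters below are library defaults, not the paper's tuned values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the extracted red-blood-cell morphology features
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft-voting ensemble of the two tree-based classifiers named in the abstract
clf = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("et", ExtraTreesClassifier(random_state=0))],
    voting="soft",
)
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))

# Feature-importance analysis (per base estimator), usable to prune features
top = np.argsort(clf.named_estimators_["rf"].feature_importances_)[::-1][:3]
print("most important feature indices:", top)
```

Ranking features by impurity-based importance and retraining on the top subset is one standard way to realize the complexity/training-time reduction the abstract describes.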
ab8507d1208dc8c881f66f17b7e2877631d30734acdb594f51e21bcdb484680e
|
2026-01-21T00:00:00-05:00
|
Tears or Cheers? Benchmarking LLMs via Culturally Elicited Distinct Affective Responses
|
arXiv:2601.13024v1 Announce Type: new Abstract: Culture serves as a fundamental determinant of human affective processing and profoundly shapes how individuals perceive and interpret emotional stimuli. Despite this intrinsic link, extant evaluations regarding cultural alignment within Large Language Models primarily prioritize declarative knowledge such as geographical facts or established societal customs. These benchmarks remain insufficient to capture the subjective interpretative variance inherent to diverse sociocultural lenses. To address this limitation, we introduce CEDAR, a multimodal benchmark constructed entirely from scenarios capturing Culturally \underline{\textsc{E}}licited \underline{\textsc{D}}istinct \underline{\textsc{A}}ffective \underline{\textsc{R}}esponses. To construct CEDAR, we implement a novel pipeline that leverages LLM-generated provisional labels to isolate instances yielding cross-cultural emotional distinctions, and subsequently derives reliable ground-truth annotations through rigorous human evaluation. The resulting benchmark comprises 10,962 instances across seven languages and 14 fine-grained emotion categories, with each language including 400 multimodal and 1,166 text-only samples. Comprehensive evaluations of 17 representative multilingual models reveal a dissociation between language consistency and cultural alignment, demonstrating that culturally grounded affective understanding remains a significant challenge for current models.
|
https://arxiv.org/abs/2601.13024
|
Academic Papers
|
svg
|
57dcedc75fdedd9f2ceb2a8ad45001724dc4b27255655328dac69a0c5490d7cc
|
2026-01-21T00:00:00-05:00
|
Think3D: Thinking with Space for Spatial Reasoning
|
arXiv:2601.13029v1 Announce Type: new Abstract: Understanding and reasoning about the physical world requires spatial intelligence: the ability to interpret geometry, perspective, and spatial relations beyond 2D perception. While recent large vision-language models (VLMs) excel at visual understanding, they remain fundamentally 2D perceivers and struggle with genuine 3D reasoning. We introduce Think3D, a framework that enables VLM agents to think with 3D space. By leveraging 3D reconstruction models that recover point clouds and camera poses from images or videos, Think3D allows the agent to actively manipulate space through camera-based operations and ego/global-view switching, transforming spatial reasoning into an interactive 3D chain-of-thought process. Without additional training, Think3D significantly improves the spatial reasoning performance of advanced models such as GPT-4.1 and Gemini 2.5 Pro, yielding average gains of +7.8% on BLINK Multi-view and MindCube, and +4.7% on VSI-Bench. We further show that smaller models, which struggle with spatial exploration, benefit significantly from a reinforcement learning policy that enables the model to select informative viewpoints and operations. With RL, the benefit from tool usage increases from +0.7% to +6.8%. Our findings demonstrate that training-free, tool-augmented spatial exploration is a viable path toward more flexible and human-like 3D reasoning in multimodal agents, establishing a new dimension of multimodal intelligence. Code and weights are released at https://github.com/zhangzaibin/spagent.
|
https://arxiv.org/abs/2601.13029
|
Academic Papers
|
svg
|
585cad05bcddf49553fe25a30a3ce6bbcb3af3debe26ebe3c96a88eb84de49fb
|
2026-01-21T00:00:00-05:00
|
Post-Quantum Secure Aggregation via Code-Based Homomorphic Encryption
|
arXiv:2601.13031v1 Announce Type: new Abstract: Secure aggregation enables aggregation of inputs from multiple parties without revealing individual contributions to the server or other clients. Existing post-quantum approaches based on homomorphic encryption offer practical efficiency but predominantly rely on lattice-based hardness assumptions. We present a code-based alternative for secure aggregation by instantiating a general framework based on key- and message-additive homomorphic encryption under the Learning Parity with Noise (LPN) assumption. Our construction employs a committee-based decryptor realized via secret sharing and incorporates a Chinese Remainder Theorem (CRT)-based optimization to reduce the communication costs of LPN-based instantiations. We analyze the security of the proposed scheme under a new Hint-LPN assumption and show that it is equivalent to standard LPN for suitable parameters. Finally, we evaluate performance and identify regimes in which our approach outperforms information-theoretically secure aggregation protocols.
|
https://arxiv.org/abs/2601.13031
|
Academic Papers
|
svg
|
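For intuition about what secure aggregation (the topic of the code-based paper above) guarantees, here is a generic pairwise-masking sketch: the server learns only the sum, because every random mask cancels. Note this is not the paper's construction, which instead uses LPN-based additively homomorphic encryption with a committee decryptor; this simpler protocol just illustrates the aggregation property.

```python
import random

P = 2**31 - 1  # public prime modulus; masks are uniform in Z_P

def mask_inputs(inputs):
    """Pairwise additive masking: clients i < j share a random mask m_ij;
    client i adds it and client j subtracts it, so every mask cancels in
    the sum and the server learns nothing but the aggregate."""
    masked = [x % P for x in inputs]
    for i in range(len(inputs)):
        for j in range(i + 1, len(inputs)):
            m = random.randrange(P)          # secret shared by i and j
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    return masked

def aggregate(masked):
    """Server-side sum of the masked contributions."""
    return sum(masked) % P
```

Homomorphic-encryption-based schemes achieve the same end result (server sees only the aggregate) while tolerating dropouts and reducing pairwise coordination.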
bf9f0956f378c7f1bb92f35fd17046c97122e898cb75168578623a4df2e02533
|
2026-01-21T00:00:00-05:00
|
SASA: Semantic-Aware Contrastive Learning Framework with Separated Attention for Triple Classification
|
arXiv:2601.13035v1 Announce Type: new Abstract: Knowledge Graphs~(KGs) often suffer from unreliable knowledge, which restricts their utility. Triple Classification~(TC) aims to determine the validity of triples from KGs. Recently, text-based methods learn entity and relation representations from natural language descriptions, significantly improving the generalization capabilities of TC models and setting new benchmarks in performance. However, there are still two critical challenges. First, existing methods often ignore the effective semantic interaction among different KG components. Second, most approaches adopt a single binary classification training objective, leading to insufficient semantic representation learning. To address these challenges, we propose \textbf{SASA}, a novel framework designed to enhance TC models via a separated attention mechanism and semantic-aware contrastive learning~(CL). Specifically, we first propose a separated attention mechanism to encode triples into decoupled contextual representations and then fuse them in a more effective interactive way. Then, we introduce semantic-aware hierarchical CL as an auxiliary training objective to guide models in improving their discriminative capabilities and achieving sufficient semantic learning, considering CL at both the local and global levels. Experimental results across two benchmark datasets demonstrate that SASA significantly outperforms state-of-the-art methods. In terms of accuracy, we advance the state-of-the-art by +5.9\% on FB15k-237 and +3.4\% on YAGO3-10.
|
https://arxiv.org/abs/2601.13035
|
Academic Papers
|
svg
|
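The auxiliary contrastive objective in the SASA abstract above can be illustrated with a generic InfoNCE loss. This is a simplification: the paper's hierarchical local/global formulation and separated-attention encoder are not reproduced, and `info_nce` is an invented helper.

```python
import numpy as np

def info_nce(anchor, candidates, pos_index, tau=0.1):
    """Generic InfoNCE: pull the anchor toward its positive candidate and
    push it away from the negatives, with temperature tau."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = norm(candidates) @ norm(anchor) / tau     # cosine similarities
    logp = sims - np.log(np.exp(sims).sum())         # log-softmax over candidates
    return -logp[pos_index]                          # NLL of the positive
```

The loss is small when the anchor embedding is close to its positive and far from the negatives, which is the discriminative pressure such an auxiliary objective adds on top of binary classification.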
3ca100a8bdf493d2f7d23f99e3e3f8f0a7b4aa6548999df3c61a9dd8f6cdd6c0
|
2026-01-21T00:00:00-05:00
|
Feedforward-Feedback Integration in Flight Control: Reinforcement Learning with Sliding Mode Control
|
arXiv:2601.13037v1 Announce Type: new Abstract: Learning-based controllers leverage nonlinear couplings and enhance transients but seldom offer guarantees under tight input constraints. Robust feedback like sliding-mode control (SMC) provides these guarantees but is conservative in isolation. This paper presents a learning-augmented framework where a deep reinforcement learning policy produces feedforward commands and an SMC law imposes actuator limits, bounds learned authority, and guarantees robustness. The policy is modeled as a matched, bounded input, and Lyapunov-based conditions link SMC gains to the admissible feedforward bound, guaranteeing stability under saturation. This formulation is applicable to nonlinear, underactuated plants with hard constraints. To illustrate the methodology, it is applied to a six-degree-of-freedom aircraft model and compared with Reinforcement Learning and isolated SMC. Simulation results show that the hybrid controller improves transient behavior and reduces control oscillations compared to standalone RL and SMC controllers, while preserving robustness under modeling uncertainties and disturbances. Even with partially trained policies, the SMC component stabilizes transients, whereas fully trained policies provide faster convergence, reduced constraint violations, and robustness. These results illustrate that learning-augmented control offers superior performance with robustness guarantees under tight input constraints.
|
https://arxiv.org/abs/2601.13037
|
Academic Papers
|
svg
|
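A one-dimensional toy version of the hybrid law in the abstract above: the learned feedforward is clipped to the bound assumed by the Lyapunov analysis, an SMC term rejects the matched disturbance, and the total command is saturated. The gains and the double-integrator plant are illustrative choices, not the paper's six-degree-of-freedom aircraft model.

```python
import numpy as np

def hybrid_control(x, v, u_ff, k=4.0, lam=2.0, ff_bound=1.0, u_max=5.0):
    """SMC feedback plus a bounded learned feedforward: the feedforward
    (a placeholder for the RL policy output) is clipped to the bound
    assumed by the stability analysis, and the total command is saturated."""
    s = v + lam * x                                   # sliding surface
    u = np.clip(u_ff, -ff_bound, ff_bound) - k * np.sign(s)
    return float(np.clip(u, -u_max, u_max))

def simulate(steps=2000, dt=0.005):
    """Regulate a double integrator to the origin under a bounded matched
    disturbance, with the feedforward channel left at zero."""
    x, v = 1.0, 0.0
    for i in range(steps):
        d = 0.5 * np.sin(0.01 * i)                    # |d| <= 0.5 < k
        u = hybrid_control(x, v, u_ff=0.0)
        v += (u + d) * dt                             # double-integrator dynamics
        x += v * dt
    return x, v
```

With the SMC gain exceeding the disturbance-plus-feedforward bound, the state reaches the sliding surface and decays regardless of what the (bounded) policy outputs, which is the guarantee the paper formalizes.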
0cbc58d8e1fd684dcb308a41d92f4f3893b0f38d9638f445442eaaac92b33515
|
2026-01-21T00:00:00-05:00
|
Solving Generalized Lyapunov Equations with guarantees: application to the Model Reduction of Switched Linear Systems
|
arXiv:2601.13039v1 Announce Type: new Abstract: We present an efficient strategy to approximate the solutions of large-scale generalized Lyapunov equations (GLEs) with rigorous, computable error guarantees. This work is motivated by applications in model order reduction (MOR) of switched linear systems (SLS) in control form, where GLEs play a central role. We analyze how inaccuracies in the numerical solution of GLEs propagate through the MOR procedure and affect the accuracy and reliability of the reduced order model. Furthermore, the classical balanced-truncation error estimates for SLS are neither theoretically nor practically viable, as they rely on restrictive assumptions requiring several linear matrix inequalities (LMIs) to be satisfied exactly by numerically computed solutions of the GLEs. To overcome these limitations, we propose a new MOR framework for SLS, called piecewise balanced reduction (PBR). The method is based on solving multiple GLEs and the construction of projection matrices that are piecewise constant in time to appropriately balance and subsequently reduce the SLS. We extend the standard balanced-truncation error bounds and demonstrate that the PBR formulation allows us to control the error arising from the inexact LMIs. In addition, our new error bound accounts for the influence of the piecewise constant time-varying projection matrices. Altogether, this renders the PBR approach for SLS applicable to a broad and flexible class of SLS. Numerical experiments are provided to corroborate our theoretical results.
|
https://arxiv.org/abs/2601.13039
|
Academic Papers
|
svg
|
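For intuition about the equations in the abstract above, a standard (non-generalized) Lyapunov equation A X + X A^T + Q = 0 can be solved by Kronecker vectorization. This dense approach is only viable at small scale, which is precisely why large-scale GLEs call for the iterative solvers with error guarantees the paper develops.

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A X + X A^T + Q = 0 by Kronecker vectorization:
    (I kron A + A kron I) vec(X) = -vec(Q), with column-major vec."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A) + np.kron(A, I)
    x = np.linalg.solve(K, -Q.flatten(order="F"))
    return x.reshape((n, n), order="F")
```

The K matrix has n^2 rows, so the cost grows like O(n^6) for a dense solve; large-scale methods exploit low-rank structure in Q and X instead.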
5f8da8d7e9b44dbc91f254d500ea58eba954204a92d75d7347eab6ca41a332c4
|
2026-01-21T00:00:00-05:00
|
CPU-less parallel execution of lambda calculus in digital logic
|
arXiv:2601.13040v1 Announce Type: new Abstract: While transistor density is still increasing, clock speeds are not, motivating the search for new parallel architectures. One approach is to completely abandon the concept of CPU -- and thus serial imperative programming -- and instead to specify and execute tasks in parallel, compiling from programming languages to data flow digital logic. It is well-known that pure functional languages are inherently parallel, due to the Church-Rosser theorem, and CPU-based parallel compilers exist for many functional languages. However, these still rely on conventional CPUs and their von Neumann bottlenecks. An alternative is to compile functional languages directly into digital logic to maximize available parallelism. It is difficult to work with complete modern functional languages due to their many features, so we demonstrate a proof-of-concept system using lambda calculus as the source language and compiling to digital logic. We show how functional hardware can be tailored to a simplistic functional language, forming the ground for a new model of CPU-less functional computation. At the algorithmic level, we use a tree-based representation, with data localized within nodes and communicated data passed between them. This is implemented by physical digital logic blocks corresponding to nodes, and buses enabling message passing. Node types and behaviors correspond to lambda grammar forms, and beta-reductions are performed in parallel allowing branches independent from one another to perform transformations simultaneously. As evidence for this approach, we present an implementation, along with simulation results, showcasing successful execution of a test suite of lambda expressions. This suggests that the approach could be scaled to larger functional languages.
|
https://arxiv.org/abs/2601.13040
|
Academic Papers
|
svg
|
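The tree-based reduction model in the abstract above can be sketched in software: terms are trees, and a single traversal fires every beta-redex it finds, mirroring what independent digital-logic nodes do in parallel. This is a simplification that assumes globally distinct bound-variable names, so no capture-avoiding renaming is needed.

```python
# Lambda terms as tuples: ('var', name), ('lam', name, body), ('app', f, a)

def subst(term, name, value):
    """Substitute `value` for free occurrences of `name` (assumes all bound
    variable names are globally distinct, so capture cannot occur)."""
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        if term[1] == name:
            return term                      # `name` is shadowed here
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One traversal of the tree, firing every beta-redex it encounters;
    redexes in independent branches reduce in the same pass."""
    tag = term[0]
    if tag == 'app':
        f, a = step(term[1]), step(term[2])
        if f[0] == 'lam':
            return subst(f[2], f[1], a)      # beta-reduction
        return ('app', f, a)
    if tag == 'lam':
        return ('lam', term[1], step(term[2]))
    return term
```

By the Church-Rosser theorem the order in which these redexes fire cannot change the final normal form, which is what licenses reducing them simultaneously in hardware.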
8707719c9c0497964a1955f6f217a84beb1dad93cacbb1e730e575a84b9aa804
|
2026-01-21T00:00:00-05:00
|
High-Throughput and Scalable Secure Inference Protocols for Deep Learning with Packed Secret Sharing
|
arXiv:2601.13041v1 Announce Type: new Abstract: Most existing secure neural network inference protocols based on secure multi-party computation (MPC) typically support at most four participants, demonstrating severely limited scalability. Liu et al. (USENIX Security'24) presented the first relatively practical approach by utilizing Shamir secret sharing with Mersenne prime fields. However, when processing deeper neural networks such as VGG16, their protocols incur substantial communication overhead, resulting in particularly significant latency in wide-area network (WAN) environments. In this paper, we propose a high-throughput and scalable MPC protocol for neural network inference against semi-honest adversaries in the honest-majority setting. The core of our approach lies in leveraging packed Shamir secret sharing (PSS) to enable parallel computation and reduce communication complexity. The main contributions are three-fold: i) We present a communication-efficient protocol for vector-matrix multiplication, based on our newly defined notion of vector-matrix multiplication-friendly random share tuples. ii) We design the filter packing approach that enables parallel convolution. iii) We further extend all non-linear protocols based on Shamir secret sharing to the PSS-based protocols for achieving parallel non-linear operations. Extensive experiments across various datasets and neural networks demonstrate the superiority of our approach in WAN. Compared to Liu et al. (USENIX Security'24), our scheme reduces communication overhead by up to 5.85x, 11.17x, and 6.83x in the offline, online, and total phases, respectively. In addition, our scheme is up to 1.59x, 2.61x, and 1.75x faster in offline, online, and total running time, respectively.
|
https://arxiv.org/abs/2601.13041
|
Academic Papers
|
svg
|
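The packing idea behind the abstract above can be illustrated with plain packed Shamir sharing over a Mersenne prime field: k secrets ride on a single polynomial, and adding shares pointwise adds the packed secrets, which is what enables parallel linear operations. The parameters and evaluation points below are illustrative; the paper's protocols add substantially more machinery (multiplication-friendly tuples, filter packing, non-linear protocols).

```python
import random

P = 2**61 - 1  # Mersenne prime field, as in Shamir-based MPC over Z_P

def _interp(points, x, p=P):
    """Lagrange-evaluate the unique polynomial through `points` at `x`."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # den^-1 mod p
    return total

def share_packed(secrets, n, t, p=P):
    """Pack k secrets on one degree-(t+k-1) polynomial: secrets sit at
    x = -1..-k, t random anchor points provide the privacy threshold, and
    the n shares are evaluations at x = 1..n (requires n >= k + t)."""
    k = len(secrets)
    anchors = [((-(i + 1)) % p, s % p) for i, s in enumerate(secrets)]
    anchors += [(n + 1 + j, random.randrange(p)) for j in range(t)]
    return [(x, _interp(anchors, x, p)) for x in range(1, n + 1)]

def reconstruct_packed(shares, k, p=P):
    """Recover the k packed secrets from at least t+k shares."""
    return [_interp(shares, (-(i + 1)) % p, p) for i in range(k)]
```

Since one share now carries k secrets, linear operations on whole vectors cost what a single-secret Shamir scheme would spend per element, which is the source of the communication savings.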
bd9aaab012f805eaeafb7abc98c70062a86c6113e5c64a08b35db327d8741b13
|
2026-01-21T00:00:00-05:00
|
Static Is Not Enough: A Comparative Study of VR and SpaceMouse in Static and Dynamic Teleoperation Tasks
|
arXiv:2601.13042v1 Announce Type: new Abstract: Imitation learning relies on high-quality demonstrations, and teleoperation is a primary way to collect them, making teleoperation interface choice crucial for the data. Prior work mainly focused on static tasks, i.e., discrete, segmented motions, yet demonstrations also include dynamic tasks requiring reactive control. As dynamic tasks impose fundamentally different interface demands, insights from static-task evaluations cannot generalize. To address this gap, we conduct a within-subjects study comparing a VR controller and a SpaceMouse across two static and two dynamic tasks ($N=25$). We assess success rate, task duration, cumulative success, alongside NASA-TLX, SUS, and open-ended feedback. Results show statistically significant advantages for VR: higher success rates, particularly on dynamic tasks, shorter successful execution times across tasks, and earlier successes across attempts, with significantly lower workload and higher usability. As existing VR teleoperation systems are rarely open-source or suited for dynamic tasks, we release our VR interface to fill this gap.
|
https://arxiv.org/abs/2601.13042
|
Academic Papers
|
svg
|
207a8986786747834f8f34d9bacefdeffe45c5dae30b90eba18100bf96e93af7
|
2026-01-21T00:00:00-05:00
|
Typhoon ASR Real-time: FastConformer-Transducer for Thai Automatic Speech Recognition
|
arXiv:2601.13044v1 Announce Type: new Abstract: Large encoder-decoder models like Whisper achieve strong offline transcription but remain impractical for streaming applications due to high latency. However, due to the accessibility of pre-trained checkpoints, the open Thai ASR landscape remains dominated by these offline architectures, leaving a critical gap in efficient streaming solutions. We present Typhoon ASR Real-time, a 115M-parameter FastConformer-Transducer model for low-latency Thai speech recognition. We demonstrate that rigorous text normalization can match the impact of model scaling: our compact model achieves a 45x reduction in computational cost compared to Whisper Large-v3 while delivering comparable accuracy. Our normalization pipeline resolves systemic ambiguities in Thai transcription -- including context-dependent number verbalization and repetition markers (mai yamok) -- creating consistent training targets. We further introduce a two-stage curriculum learning approach for Isan (north-eastern) dialect adaptation that preserves Central Thai performance. To address reproducibility challenges in Thai ASR, we release the Typhoon ASR Benchmark, a gold-standard human-labeled dataset with transcriptions following established Thai linguistic conventions, providing standardized evaluation protocols for the research community.
|
https://arxiv.org/abs/2601.13044
|
Academic Papers
|
svg
|