| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| e54c352176262afa6682c6bf90af879b251ac88330bca5fada65a9332d85ce44 | 2026-01-07T00:00:00-05:00 | Learning and Optimizing the Efficacy of Spatio-Temporal Task Allocation under Temporal and Resource Constraints | arXiv:2601.02505v1 Announce Type: new Abstract: Complex multi-robot missions often require heterogeneous teams to jointly optimize task allocation, scheduling, and path planning to improve team performance under strict constraints. We formalize these complexities into a new class of problems, dubbed Spatio-Temporal Efficacy-optimized Allocation for Multi-robot systems (STEAM). STEAM builds upon trait-based frameworks that model robots using their capabilities (e.g., payload and speed), but goes beyond the typical binary success-failure model by explicitly modeling the efficacy of allocations as trait-efficacy maps. These maps encode how the aggregated capabilities assigned to a task determine performance. Further, STEAM accommodates spatio-temporal constraints, including a user-specified time budget (i.e., maximum makespan). To solve STEAM problems, we contribute a novel algorithm named Efficacy-optimized Incremental Task Allocation Graph Search (E-ITAGS) that simultaneously optimizes task performance and respects time budgets by interleaving task allocation, scheduling, and path planning. Motivated by the fact that trait-efficacy maps are difficult, if not impossible, to specify, E-ITAGS efficiently learns them using a realizability-aware active learning module. Our approach is realizability-aware since it explicitly accounts for the fact that not all combinations of traits are realizable by the robots available during learning. Further, we derive experimentally validated bounds on E-ITAGS' suboptimality with respect to efficacy. Detailed numerical simulations and experiments using an emergency response domain demonstrate that E-ITAGS generates allocations of higher efficacy compared to baselines, while respecting resource and spatio-temporal constraints. We also show that our active learning approach is sample efficient and establishes a principled tradeoff between data and computational efficiency. | https://arxiv.org/abs/2601.02505 | Academic Papers | svg |
| 66c17a431d9ffc62251a952d31c9eafdd95e942fc8e34eb46ddb66edd3d2017a | 2026-01-07T00:00:00-05:00 | hdlib 2.0: Extending Machine Learning Capabilities of Vector-Symbolic Architectures | arXiv:2601.02509v1 Announce Type: new Abstract: Following the initial publication of hdlib, a Python library for designing Vector-Symbolic Architectures (VSA), we introduce a major extension that significantly enhances its machine learning capabilities. VSA, also known as Hyperdimensional Computing, is a computing paradigm that represents and processes information using high-dimensional vectors. While the first version of hdlib established a robust foundation for creating and manipulating these vectors, this update addresses the growing need for more advanced, data-driven modeling within the VSA framework. Here, we present four extensions: significant enhancements to the existing supervised classification model, which now also enables feature selection; a new regression model for predicting continuous variables; a clustering model for unsupervised learning; and a graph-based learning model. Furthermore, we propose the first implementation of Quantum Hyperdimensional Computing with quantum-powered arithmetic operations and a new Quantum Machine Learning model for supervised learning. hdlib remains open-source and available on GitHub at https://github.com/cumbof/hdlib under the MIT license, and distributed through the Python Package Index (pip install hdlib) and Conda (conda install -c conda-forge hdlib). Documentation and examples of these new features are available on the official Wiki at https://github.com/cumbof/hdlib/wiki. | https://arxiv.org/abs/2601.02509 | Academic Papers | svg |
| 0cbe4ddb092bcdaef816629c3956aa8248a1e2c89311e982162397eea50e471a | 2026-01-07T00:00:00-05:00 | LLM-Enhanced Reinforcement Learning for Time Series Anomaly Detection | arXiv:2601.02511v1 Announce Type: new Abstract: Detecting anomalies in time series data is crucial for finance, healthcare, sensor networks, and industrial monitoring applications. However, time series anomaly detection often suffers from sparse labels, complex temporal patterns, and costly expert annotation. We propose a unified framework that integrates Large Language Model (LLM)-based potential functions for reward shaping with Reinforcement Learning (RL), Variational Autoencoder (VAE)-enhanced dynamic reward scaling, and active learning with label propagation. An LSTM-based RL agent leverages LLM-derived semantic rewards to guide exploration, while VAE reconstruction errors add unsupervised anomaly signals. Active learning selects the most uncertain samples, and label propagation efficiently expands labeled data. Evaluations on Yahoo-A1 and SMD benchmarks demonstrate that our method achieves state-of-the-art detection accuracy under limited labeling budgets and operates effectively in data-constrained settings. This study highlights the promise of combining LLMs with RL and advanced unsupervised techniques for robust, scalable anomaly detection in real-world applications. | https://arxiv.org/abs/2601.02511 | Academic Papers | svg |
| 42c042faf85de67b113b1e035891f13dda7ce44147c22f7b31a6b2ef71ddb4b3 | 2026-01-07T00:00:00-05:00 | Green LLM Techniques in Action: How Effective Are Existing Techniques for Improving the Energy Efficiency of LLM-Based Applications in Industry? | arXiv:2601.02512v1 Announce Type: new Abstract: The rapid adoption of large language models (LLMs) has raised concerns about their substantial energy consumption, especially when deployed at industry scale. While several techniques have been proposed to address this, limited empirical evidence exists regarding the effectiveness of applying them to LLM-based industry applications. To fill this gap, we analyzed a chatbot application in an industrial context at Schuberg Philis, a Dutch IT services company. We then selected four techniques, namely Small and Large Model Collaboration, Prompt Optimization, Quantization, and Batching, applied them to the application in eight variations, and conducted experiments to study their impact on energy consumption, accuracy, and response time compared to the unoptimized baseline. Our results show that several techniques, such as Prompt Optimization and 2-bit Quantization, reduced energy use significantly, sometimes by up to 90%. However, these techniques in particular degraded accuracy to a degree that is not acceptable in practice. The only technique that achieved significant and strong energy reductions without substantially harming the other qualities was Small and Large Model Collaboration via Nvidia's Prompt Task and Complexity Classifier (NPCC) with prompt complexity thresholds. This highlights that reducing the energy consumption of LLM-based applications is not difficult in practice. However, improving their energy efficiency, i.e., reducing energy use without harming other qualities, remains challenging. Our study provides practical insights to move towards this goal. | https://arxiv.org/abs/2601.02512 | Academic Papers | svg |
| eaa5c3aa8a505f19164fd99c3aab38e04170129d6576356ead527b9a0b09ac93 | 2026-01-07T00:00:00-05:00 | On well-posed energy/entropy stable boundary conditions for the rotating shallow water equations | arXiv:2601.02513v1 Announce Type: new Abstract: We derive and analyze well-posed, energy- and entropy-stable boundary conditions (BCs) for the two-dimensional linear and nonlinear rotating shallow water equations (RSWE) in vector invariant form. The focus of the study is on subcritical flows, which are commonly observed in atmospheric, oceanic, and geostrophic flow applications. We consider spatial domains with smooth boundaries and formulate both linear and nonlinear BCs using mass flux, Riemann invariants, and the Bernoulli potential, ensuring that the resulting initial boundary value problem (IBVP) is provably entropy- and energy-stable. The linear analysis is comprehensive, providing sufficient conditions to establish the existence, uniqueness, and energy stability of solutions to the linear IBVP. For the nonlinear IBVP, which admits more general solutions, our goal is to develop nonlinear BCs that guarantee entropy stability. We introduce the concepts of linear consistency and linear stability for nonlinear IBVPs, demonstrating that if a nonlinear IBVP is both linearly consistent and linearly stable, then, for sufficiently regular initial and boundary data over a finite time interval, a unique smooth solution exists. Both the linear and nonlinear IBVPs can be efficiently solved using high-order accurate numerical methods. By employing high-order summation-by-parts operators to discretize spatial derivatives and implementing weak enforcement of BCs via penalty techniques, we develop provably energy- and entropy-stable numerical schemes on curvilinear meshes. Extensive numerical experiments are presented to verify the accuracy of the methods and to demonstrate the robustness of the proposed BCs and numerical schemes. | https://arxiv.org/abs/2601.02513 | Academic Papers | svg |
| cbfa711f8f821d3d7b1a2ffc6a5d9bf9434ee2b7db5bc74a74fa508f2ec2f749 | 2026-01-07T00:00:00-05:00 | Textual Explanations and Their Evaluations for Reinforcement Learning Policy | arXiv:2601.02514v1 Announce Type: new Abstract: Understanding a Reinforcement Learning (RL) policy is crucial for ensuring that autonomous agents behave according to human expectations. This goal can be achieved using Explainable Reinforcement Learning (XRL) techniques. Although textual explanations are easily understood by humans, ensuring their correctness remains a challenge, and evaluations in the state of the art remain limited. We present a novel XRL framework for generating textual explanations, converting them into a set of transparent rules, improving their quality, and evaluating them. Expert knowledge can be incorporated into this framework, and an automatic predicate generator is also proposed to determine the semantic information of a state. Textual explanations are generated using a Large Language Model (LLM) and a clustering technique to identify frequent conditions. These conditions are then converted into rules to evaluate their properties, fidelity, and performance in the deployed environment. Two refinement techniques are proposed to improve the quality of explanations and reduce conflicting information. Experiments were conducted in three open-source environments to enable reproducibility, and in a telecom use case to evaluate the industrial applicability of the proposed XRL framework. This framework addresses the limitations of an existing method, Autonomous Policy Explanation, and the generated transparent rules can achieve satisfactory performance on certain tasks. This framework also enables a systematic and quantitative evaluation of textual explanations, providing valuable insights for the XRL field. | https://arxiv.org/abs/2601.02514 | Academic Papers | svg |
| b87d38ed7493f0f1ee3d39c28c595900eb6960c3920da6bcdca037c9b24e6b06 | 2026-01-07T00:00:00-05:00 | CT Scans As Video: Efficient Intracranial Hemorrhage Detection Using Multi-Object Tracking | arXiv:2601.02521v1 Announce Type: new Abstract: Automated analysis of volumetric medical imaging on edge devices is severely constrained by the high memory and computational demands of 3D Convolutional Neural Networks (CNNs). This paper develops a lightweight computer vision framework that reconciles the efficiency of 2D detection with the necessity of 3D context by reformulating volumetric Computed Tomography (CT) data as sequential video streams. This video-viewpoint paradigm is applied to the time-sensitive task of Intracranial Hemorrhage (ICH) detection using the Hemorica dataset. To ensure operational efficiency, we benchmarked multiple generations of the YOLO architecture (v8, v10, v11, and v12) in their Nano configurations, selecting the version with the highest mAP@50 to serve as the slice-level backbone. The ByteTrack algorithm is then introduced to enforce anatomical consistency across the $z$-axis. To address the initialization lag inherent in video trackers, a hybrid inference strategy and a spatiotemporal consistency filter are proposed to distinguish true pathology from transient prediction noise. Experimental results on independent test data demonstrate that the proposed framework serves as a rigorous temporal validator, increasing detection Precision from 0.703 to 0.779 compared to the baseline 2D detector, while maintaining high sensitivity. By approximating 3D contextual reasoning at a fraction of the computational cost, this method provides a scalable solution for real-time patient prioritization in resource-constrained environments, such as mobile stroke units and IoT-enabled remote clinics. | https://arxiv.org/abs/2601.02521 | Academic Papers | svg |
| 9f991a34c677f7d957b101ad38bffd76a2560021eaacdb0a915ddfb9653ef2f0 | 2026-01-07T00:00:00-05:00 | On the Effectiveness of Proposed Techniques to Reduce Energy Consumption in RAG Systems: A Controlled Experiment | arXiv:2601.02522v1 Announce Type: new Abstract: The rising energy demands of machine learning (ML), e.g., implemented in popular variants like retrieval-augmented generation (RAG) systems, have raised significant concerns about their environmental sustainability. While previous research has proposed green tactics for ML-enabled systems, their empirical evaluation within RAG systems remains largely unexplored. This study presents a controlled experiment investigating five practical techniques aimed at reducing energy consumption in RAG systems. Using a production-like RAG system developed at our collaboration partner, the Software Improvement Group, we evaluated the impact of these techniques on energy consumption, latency, and accuracy. Through a total of 9 configurations spanning over 200 hours of trials using the CRAG dataset, we reveal that techniques such as increasing similarity retrieval thresholds, reducing embedding sizes, applying vector indexing, and using a BM25S reranker can significantly reduce energy usage, up to 60% in some cases. However, several techniques also led to unacceptable accuracy decreases, e.g., by up to 30% for the indexing strategies. Notably, finding an optimal retrieval threshold and reducing embedding size substantially reduced energy consumption and latency with no loss in accuracy, making these two techniques truly energy-efficient. We present the first comprehensive, empirical study on energy-efficient design techniques for RAG systems, providing guidance for developers and researchers aiming to build sustainable RAG applications. | https://arxiv.org/abs/2601.02522 | Academic Papers | svg |
| a5e87e1f7de6a46d30a21b0e039071f2759b75fa4ece07cc7c76296b14b8e8fa | 2026-01-07T00:00:00-05:00 | Modeling and Simulation of the Dynamics of Pedestrian Flows (Modellierung und Simulation der Dynamik von Fußgängerströmen) | arXiv:2601.02526v1 Announce Type: new Abstract: This work presents a microscopic model to describe pedestrian flows based on the social force theory. The aim of this study is twofold: (1) developing a realistic model that can be used as a tool for designing pedestrian-friendly infrastructure, and (2) verifying a social science theory using a model with sufficient data. The investigation of the pedestrian model shows that despite simple individual behavior patterns, complex spatial and temporal structures emerge through the interactions in pedestrian flows. Collective behavior emerges from individuals following two basic rules: (1) moving directly towards their goal at a certain speed, and (2) maintaining a distance to other pedestrians and obstacles. This self-organized collective behavior manifests itself as trails that are formed by pedestrians moving in one direction. Furthermore, strong dependencies of the properties of pedestrian flows on geometric forms of buildings are shown, and the influence of geometric changes on performance characteristics is investigated. An example demonstrates how efficiency can be increased by reducing walkable areas. This work also presents an evolutionary algorithm for optimizing building layouts based on the social force model. Additionally, a decision-making model is integrated to describe alternative goal selection, and adaptation and learning capabilities are included to improve pedestrian avoidance behavior and decision strategies based on accumulated experience. A method for determining load distributions in individual sections of a path system considering subjective selection criteria is also developed. Finally, a model that describes the self-organization of path systems with minimal detours is presented, similar to natural transport networks where total length and material costs are optimized. | https://arxiv.org/abs/2601.02526 | Academic Papers | svg |
| 3768a44e30bd126965c344735e5bd3c205f92559cb701aac482e1fded5f1a48d | 2026-01-07T00:00:00-05:00 | Multi-scale Graph Autoregressive Modeling: Molecular Property Prediction via Next Token Prediction | arXiv:2601.02530v1 Announce Type: new Abstract: We present Connection-Aware Motif Sequencing (CamS), a graph-to-sequence representation that enables decoder-only Transformers to learn molecular graphs via standard next-token prediction (NTP). For molecular property prediction, SMILES-based NTP scales well but lacks explicit topology, whereas graph-native masked modeling captures connectivity but risks disrupting the pivotal chemical details (e.g., activity cliffs). CamS bridges this gap by serializing molecular graphs into structure-rich causal sequences. CamS first mines data-driven connection-aware motifs. It then serializes motifs via scaffold-rooted breadth-first search (BFS) to establish a stable core-to-periphery order. Crucially, CamS enables hierarchical modeling by concatenating sequences from fine to coarse motif scales, allowing the model to condition global scaffolds on dense, uncorrupted local structural evidence. We instantiate CamS-LLaMA by pre-training a vanilla LLaMA backbone on CamS sequences. It achieves state-of-the-art performance on MoleculeNet and the activity-cliff benchmark MoleculeACE, outperforming both SMILES-based language models and strong graph baselines. Interpretability analysis confirms that our multi-scale causal serialization effectively drives attention toward cliff-determining differences. | https://arxiv.org/abs/2601.02530 | Academic Papers | svg |
| b7c82c0c993978cdf1104bbb0762d5498de119153fea72424a8123f0f205b3da | 2026-01-07T00:00:00-05:00 | Losses that Cook: Topological Optimal Transport for Structured Recipe Generation | arXiv:2601.02531v1 Announce Type: new Abstract: Cooking recipes are complex procedures that require not only fluent and factual text, but also accurate timing, temperature, and procedural coherence, as well as the correct composition of ingredients. Standard training procedures are primarily based on cross-entropy and focus solely on fluency. Building on RECIPE-NLG, we investigate the use of several composite objectives and present a new topological loss that represents ingredient lists as point clouds in embedding space, minimizing the divergence between predicted and gold ingredients. Using both standard NLG metrics and recipe-specific metrics, we find that our loss significantly improves ingredient- and action-level metrics. Meanwhile, the Dice loss excels in time/temperature precision, and the mixed loss yields competitive trade-offs with synergistic gains in quantity and time. A human preference analysis supports our findings, showing that our model is preferred in 62% of cases. | https://arxiv.org/abs/2601.02531 | Academic Papers | svg |
| 6bd245467858fd18ed44e6dab8b550c17891ff2dd8ffcb3982d965e8a757d9f6 | 2026-01-07T00:00:00-05:00 | An $O^*((2 + \epsilon)^k)$ Time Algorithm for Cograph Deletion Using Unavoidable Subgraphs in Large Prime Graphs | arXiv:2601.02532v1 Announce Type: new Abstract: We study the parameterized complexity of the Cograph Deletion problem, which asks whether one can delete at most $k$ edges from a graph to make it $P_4$-free. This is a well-known graph modification problem with applications in computational biology and social network analysis. All current parameterized algorithms use a similar strategy, which is to find a $P_4$ and explore the local structure around it to perform an efficient recursive branching. The best known algorithm achieves running time $O^*(2.303^k)$ and requires an automated search of the branching cases due to their complexity. Since it appears difficult to further improve the current strategy, we devise a new approach using modular decompositions. We solve each module and the quotient graph independently, with the latter being the core problem. This reduces the problem to solving on a prime graph, in which all modules are trivial. We then use a characterization of Chudnovsky et al. stating that any large enough prime graph has one of seven structures as an induced subgraph. These all have many $P_4$s, with the quantity growing linearly with the graph size, and we show that these allow a recursive branch tree algorithm to achieve running time $O^*((2 + \epsilon)^k)$ for any $\epsilon > 0$. This appears to be the first algorithmic application of the prime graph characterization, and it could be applicable to other modification problems. Towards this goal, we provide the exact set of graph classes $\mathcal{H}$ for which the $\mathcal{H}$-free editing problem can make use of our reduction to a prime graph, opening the door to improvements for other modification problems. | https://arxiv.org/abs/2601.02532 | Academic Papers | svg |
| 1f0a0c2fec549ffe5c179cba6affef387cc4679dd3ae59f6b4bff56701cb0044 | 2026-01-07T00:00:00-05:00 | ModeX: Evaluator-Free Best-of-N Selection for Open-Ended Generation | arXiv:2601.02535v1 Announce Type: new Abstract: Selecting a single high-quality output from multiple stochastic generations remains a fundamental challenge for large language models (LLMs), particularly in open-ended tasks where no canonical answer exists. While Best-of-N and self-consistency methods show that aggregating multiple generations can improve performance, existing approaches typically rely on external evaluators, reward models, or exact string-match voting, limiting their applicability and efficiency. We propose Mode Extraction (ModeX), an evaluator-free Best-of-N selection framework that generalizes majority voting to open-ended text generation by identifying the modal output representing the dominant semantic consensus among generated texts. ModeX constructs a similarity graph over candidate generations and recursively applies spectral clustering to select a representative centroid, without requiring additional inference or auxiliary models. We further instantiate this selection principle as ModeX--Lite, an improved version of ModeX with early pruning for efficiency. Across open-ended tasks, including text summarization, code generation, and mathematical reasoning, our approaches consistently outperform standard single- and multi-path baselines, providing a computationally efficient solution for robust open-ended text generation. Code is released at https://github.com/deeplearning-wisc/ModeX. | https://arxiv.org/abs/2601.02535 | Academic Papers | svg |
| 37c5e2a176b29ac39b25ae4a831eb17e9249550af7d60358149f66b2170235f4 | 2026-01-07T00:00:00-05:00 | MovieRecapsQA: A Multimodal Open-Ended Video Question-Answering Benchmark | arXiv:2601.02536v1 Announce Type: new Abstract: Understanding real-world videos such as movies requires integrating visual and dialogue cues to answer complex questions. Yet existing VideoQA benchmarks struggle to capture this multimodal reasoning and are largely not open-ended, given the difficulty of evaluating free-form answers. In this paper, we introduce a novel open-ended multimodal VideoQA benchmark, MovieRecapsQA, created using movie recap videos, a distinctive type of YouTube content that summarizes a film by presenting its key events through synchronized visual (recap video) and textual (recap summary) modalities. Using the recap summary, we generate $\approx 8.2$K question-answer (QA) pairs (aligned with movie subtitles) and provide the necessary "facts" needed to verify an answer in a reference-free manner. To our knowledge, this is the first open-ended VideoQA benchmark that supplies explicit textual context of the input (video and/or text), which we use for evaluation. Our benchmark provides videos of multiple lengths (i.e., recap-segments, movie-segments) and categorizations of questions (by modality and type) to enable fine-grained analysis. We evaluate the performance of seven state-of-the-art MLLMs using our benchmark and observe that: 1) visual-only questions remain the most challenging; 2) models default to textual inputs whenever available; 3) extracting factually accurate information from video content is still difficult for all models; and 4) proprietary and open-source models perform comparably on video-dependent questions. | https://arxiv.org/abs/2601.02536 | Academic Papers | svg |
| bd3f44edb43834a09404f705a2c13cdfd1a2ca821f05fb8146228d2cd86e8a54 | 2026-01-07T00:00:00-05:00 | Optimal Oblivious Load-Balancing for Sparse Traffic in Large-Scale Satellite Networks | arXiv:2601.02537v1 Announce Type: new Abstract: Oblivious load-balancing in networks involves routing traffic from sources to destinations using predetermined routes independent of the traffic, so that the maximum load on any link in the network is minimized. We investigate oblivious load-balancing schemes for an $N\times N$ torus network under sparse traffic where there are at most $k$ active source-destination pairs. We are motivated by the problem of load-balancing in large-scale LEO satellite networks, which can be modelled as a torus, where the traffic is known to be sparse and localized to certain hotspot areas. We formulate the problem as a linear program and show that no oblivious routing scheme can achieve a worst-case load lower than approximately $\frac{\sqrt{2k}}{4}$ when $1<k \leq N^2/2$ and $\frac{N}{4}$ when $N^2/2\leq k\leq N^2$. Moreover, we demonstrate that the celebrated Valiant Load Balancing scheme is suboptimal under sparse traffic and construct an optimal oblivious load-balancing scheme that achieves the lower bound. Further, we discover a $\sqrt{2}$ multiplicative gap between the worst-case load of a non-oblivious routing and the worst-case load of any oblivious routing. The results can also be extended to general $N\times M$ tori with unequal link capacities along the vertical and horizontal directions. | https://arxiv.org/abs/2601.02537 | Academic Papers | svg |
| c1f89cbf0dd6ea457ebaf79f6954832de83da280a18ee352f9e9cb44ff345c5f | 2026-01-07T00:00:00-05:00 | GPU-Accelerated Energy-Conserving Methods for the Hyperbolized Serre-Green-Naghdi Equations in 2D | arXiv:2601.02540v1 Announce Type: new Abstract: We develop energy-conserving numerical methods for a two-dimensional hyperbolic approximation of the Serre-Green-Naghdi equations with variable bathymetry for both periodic and reflecting boundary conditions. The hyperbolic formulation avoids the costly inversion of an elliptic operator present in the classical model. Our schemes combine split forms with summation-by-parts (SBP) operators to construct semidiscretizations that conserve the total water mass and the total energy. We provide analytical proofs of these conservation properties and also verify them numerically. While the framework is general, our implementation focuses on second-order finite-difference SBP operators. The methods are implemented in Julia for CPU and GPU architectures (AMD and NVIDIA) and achieve substantial speedups on modern accelerators. We validate the approach through convergence studies based on solitary-wave and manufactured-solution tests, and by comparisons to analytical, experimental, and existing numerical results. All source code to reproduce our results is available online. | https://arxiv.org/abs/2601.02540 | Academic Papers | svg |
| f347a7fb57ce3860fab4ed65fcf87fc0647f63f9ac1f99b64867adf8792a16bf | 2026-01-07T00:00:00-05:00 | Normalized Conditional Mutual Information Surrogate Loss for Deep Neural Classifiers | arXiv:2601.02543v1 Announce Type: new Abstract: In this paper, we propose a novel information-theoretic surrogate loss, normalized conditional mutual information (NCMI), as a drop-in alternative to the de facto cross-entropy (CE) loss for training deep neural network (DNN) based classifiers. We first observe that the model's NCMI is inversely proportional to its accuracy. Building on this insight, we introduce an alternating algorithm to efficiently minimize the NCMI. Across image recognition and whole-slide imaging (WSI) subtyping benchmarks, NCMI-trained models surpass state-of-the-art losses by substantial margins at a computational cost comparable to that of CE. Notably, on ImageNet, NCMI yields a 2.77% top-1 accuracy improvement with ResNet-50 compared to CE; on CAMELYON-17, replacing CE with NCMI improves the macro-F1 by 8.6% over the strongest baseline. Gains are consistent across various architectures and batch sizes, suggesting that NCMI is a practical and competitive alternative to CE. | https://arxiv.org/abs/2601.02543 | Academic Papers | svg |
| 180d8ef43eeef670b957fa37799540a8bcf5cf66ff545d4e1b697667f9bd6e9b | 2026-01-07T00:00:00-05:00 | SimpleMem: Efficient Lifelong Memory for LLM Agents | arXiv:2601.02553v1 Announce Type: new Abstract: To support reliable long-term interaction in complex environments, LLM agents require memory systems that efficiently manage historical experiences. Existing approaches either retain full interaction histories via passive context extension, leading to substantial redundancy, or rely on iterative reasoning to filter noise, incurring high token costs. To address this challenge, we introduce SimpleMem, an efficient memory framework based on semantic lossless compression. We propose a three-stage pipeline designed to maximize information density and token utilization: (1) *Semantic Structured Compression*, which applies entropy-aware filtering to distill unstructured interactions into compact, multi-view indexed memory units; (2) *Recursive Memory Consolidation*, an asynchronous process that integrates related units into higher-level abstract representations to reduce redundancy; and (3) *Adaptive Query-Aware Retrieval*, which dynamically adjusts retrieval scope based on query complexity to construct precise context efficiently. Experiments on benchmark datasets show that our method consistently outperforms baseline approaches in accuracy, retrieval efficiency, and inference cost, achieving an average F1 improvement of 26.4% while reducing inference-time token consumption by up to 30-fold, demonstrating a superior balance between performance and efficiency. Code is available at https://github.com/aiming-lab/SimpleMem. | https://arxiv.org/abs/2601.02553 | Academic Papers | svg |
| 795e3080191cd92071d5d1ec3cc5456dc0091c1c40c40bcb276acef7cd9739e6 | 2026-01-07T00:00:00-05:00 | AMC26: VSSEA robust position control | arXiv:2601.02557v1 Announce Type: new Abstract: This paper presents robust position control strategies for the novel VSSEA. By employing a constructed state-space model, two control schemes are developed in a unified framework: a state-feedback controller and a sliding mode controller, both integrated with a second-order DOb. The proposed framework achieves high-performance motion control by precisely estimating and compensating for internal and external disturbances, while preserving the nominal dynamic response. Simulation results demonstrate that pole-placement-based controllers are highly sensitive to disturbances, whereas LQR-based controllers offer improved robustness at the expense of slower dynamics. By incorporating the DOb, robustness is significantly enhanced without degrading the time response, and the LQR controller can be tuned solely for performance optimization. Experimental results confirm that the proposed robust position controllers can be implemented in real-world applications. These results highlight the effectiveness of the proposed approach and lay the foundation for future investigations on robust stability and performance under different stiffness settings. | https://arxiv.org/abs/2601.02557 | Academic Papers | svg |
efa0859d8f537f6f4146a195ad33ae2b721a57137094d426c4ad98b82f5842dc
|
2026-01-07T00:00:00-05:00
|
PerspectiveCoach: Exploring LLMs for Developer Reflection
|
arXiv:2601.02559v1 Announce Type: new Abstract: Despite growing awareness of ethical challenges in software development, practitioners still lack structured tools that help them critically engage with the lived experiences of marginalized users. This paper presents PerspectiveCoach, a large language model (LLM)-powered conversational tool designed to guide developers through structured perspective-taking exercises and deepen critical reflection on how software design decisions affect marginalized communities. Through a controlled study with 18 front-end developers (balanced by sex), who interacted with the tool using a real case of online gender-based harassment, we examine how PerspectiveCoach supports ethical reasoning and engagement with user perspectives. Qualitative analysis revealed increased self-awareness, broadened perspectives, and more nuanced ethical articulation, while a complementary human-human study contextualized these findings. Text similarity analyses demonstrated that participants in the human-PerspectiveCoach study improved the fidelity of their restatements over multiple attempts, capturing both surface-level and semantic aspects of user concerns. However, human-PerspectiveCoach's restatements had a lower baseline than the human-human conversations, highlighting contextual differences in impersonal and interpersonal perspective-taking. Across the study, participants rated the tool highly for usability and relevance. This work contributes an exploratory design for LLM-powered end-user perspective-taking that supports critical, ethical self-reflection and offers empirical insights (i.e., enhancing adaptivity, centering plurality) into how such tools can help practitioners build more inclusive and socially responsive technologies.
|
https://arxiv.org/abs/2601.02559
|
Academic Papers
|
svg
|
04bdf04243cfa7775581bd9bc31bd51740e35e694bc03bf86d4fa65ce710e8e7
|
2026-01-07T00:00:00-05:00
|
AMC26: High-performance DOb for robust position control
|
arXiv:2601.02560v1 Announce Type: new Abstract: This paper presents a new HPDOb that significantly improves disturbance estimation accuracy and robustness in motion control systems, surpassing the capabilities of conventional DObs. The proposed observer is analysed and synthesised in the discrete-time domain, providing a realistic representation of its dynamic behaviour and enabling enhanced controller design for practical applications. The core contribution of the HPDOb is a novel synthesis method that incorporates higher-order truncation error dynamics into disturbance estimation. Unlike conventional DObs, which are limited to zero-order truncation error, the HPDOb achieves first-order truncation error, yielding markedly improved estimation accuracy and robustness against disturbances in motion control systems. Simulations and experiments verify the stability and performance of the HPDOb.
|
https://arxiv.org/abs/2601.02560
|
Academic Papers
|
svg
|
5f3128c3e82c45c34f13b68d16e1c8a691e0579f6557bea24ea2df773dc297f2
|
2026-01-07T00:00:00-05:00
|
A Schr\"odinger-Based Dispersive Regularization Approach for Numerical Simulation of One-Dimensional Shallow Water Equations
|
arXiv:2601.02561v1 Announce Type: new Abstract: We propose a novel dispersive regularization framework for the numerical simulation of the one-dimensional shallow water equations (SWE). The classical hyperbolic system is regularized by a third-order dispersive term in the momentum equation, which renders the system equivalent, via the Madelung transform, to a defocusing cubic nonlinear Schr\"odinger equation with a drift term induced by bottom topography. Instead of solving the shallow water equations directly, we solve the associated Schr\"odinger equation and recover the hydrodynamic variables through a simple postprocessing procedure. This approach transforms the original nonlinear hyperbolic system into a semilinear complex-valued equation, which can be efficiently approximated using a Strang time-splitting method combined with a spectral element discretization in space. Numerical experiments demonstrate that, in subcritical regimes without shock formation, the Schr\"odinger regularization provides an $O(\varepsilon)$ approximation to the classical shallow water solution, where $\varepsilon$ denotes the regularization parameter. Importantly, we observe that this convergence behavior persists even in the presence of moving wetting--drying interfaces, where vacuum states emerge and standard shallow water solvers often encounter difficulties. These results suggest that the Schr\"odinger-based formulation offers a robust and promising alternative framework for the numerical simulation of shallow water flows with dry states.
|
https://arxiv.org/abs/2601.02561
|
Academic Papers
|
svg
|
06ff8dbf83a5c22b706b0d1132e856aa88baca04e4f22642b316028633f1d9e2
|
2026-01-07T00:00:00-05:00
|
CutisAI: Deep Learning Framework for Automated Dermatology and Cancer Screening
|
arXiv:2601.02562v1 Announce Type: new Abstract: The rapid growth of dermatological imaging and mobile diagnostic tools calls for systems that not only demonstrate empirical performance but also provide strong theoretical guarantees. Deep learning models have shown high predictive accuracy; however, they are often criticized for lacking well-calibrated uncertainty estimates, without which these models are hardly deployable in a clinical setting. To this end, we present the Conformal Bayesian Dermatological Classifier (CBDC), a well-founded framework that combines Statistical Learning Theory, Topological Data Analysis (TDA), and Bayesian Conformal Inference. CBDC offers distribution-dependent generalization bounds that reflect dermatological variability, proves a topological stability theorem guaranteeing the invariance of convolutional neural network embeddings under photometric and morphological perturbations, and provides finite-sample conformal coverage guarantees for trustworthy uncertainty quantification. Through exhaustive experiments on the HAM10000, PH2, and ISIC 2020 datasets, we show that CBDC not only attains strong classification accuracy but also generates calibrated predictions that are interpretable from a clinical perspective. This research constitutes a theoretical and practical leap for deep dermatological diagnostics, thereby bridging machine learning theory and clinical applicability.
|
https://arxiv.org/abs/2601.02562
|
Academic Papers
|
svg
|
13e61b22dc546cf869986bc67a2841e87e52a77433a8223ac2666ef8958625d2
|
2026-01-07T00:00:00-05:00
|
Compressed code: the hidden effects of quantization and distillation on programming tokens
|
arXiv:2601.02563v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated exceptional code generation capabilities, yet their token-level mechanisms remain underexplored, particularly in compressed models. Through systematic analysis of programming language token representations, we characterize how programming languages are encoded in LLM tokenizers by analyzing their vocabulary distribution and keyword coverage patterns. We introduce a novel cold-start probability analysis method that provides insights into model behavior without requiring explicit prompts. Additionally, we present a comprehensive evaluation of how different model optimization techniques - including quantization, distillation, model scaling, and task-specific fine-tuning - affect token-level representations and code generation quality. Our experiments, supported by comprehensive probability distribution analysis and evaluation metrics, reveal critical insights into token-level behavior and provide empirically-validated guidelines for maintaining code generation quality under various optimization constraints. These findings advance both theoretical understanding of LLM code generation and practical implementation of optimized models in production environments.
|
https://arxiv.org/abs/2601.02563
|
Academic Papers
|
svg
|
9aca2eff7d42c4b23149ab90383e01586a47fb7826014d9af9faa8e742d5b6fa
|
2026-01-07T00:00:00-05:00
|
Shallow- and Deep-fake Image Manipulation Localization Using Vision Mamba and Guided Graph Neural Network
|
arXiv:2601.02566v1 Announce Type: new Abstract: Image manipulation localization is a critical research task, given that forged images may have a significant societal impact in various respects. Such image manipulations can be produced using traditional image editing tools (known as "shallowfakes") or advanced artificial intelligence techniques ("deepfakes"). While numerous studies have focused on image manipulation localization in either shallowfake images or deepfake videos, few approaches address both cases. In this paper, we explore the feasibility of using a deep learning network to localize manipulations in both shallow- and deep-fake images, and propose a solution for this purpose. To precisely differentiate between authentic and manipulated pixels, we leverage the Vision Mamba network to extract feature maps that clearly describe the boundaries between tampered and untouched regions. To further enhance this separation, we propose a novel Guided Graph Neural Network (G-GNN) module that amplifies the distinction between manipulated and authentic pixels. Our evaluation results show that our proposed method achieves higher inference accuracy compared to other state-of-the-art methods.
|
https://arxiv.org/abs/2601.02566
|
Academic Papers
|
svg
|
9e143143d5a3c03b15ccd0edafe9b71355adc8394aac49fdf27f537334a2e4b1
|
2026-01-07T00:00:00-05:00
|
LoRA-Drop: Temporal LoRA Decoding for Efficient LLM Inference
|
arXiv:2601.02569v1 Announce Type: new Abstract: Autoregressive large language models (LLMs) are bottlenecked by sequential decoding, where each new token typically requires executing all transformer layers. Existing dynamic-depth and layer-skipping methods reduce this cost, but often rely on auxiliary routing mechanisms or incur accuracy degradation when bypassed layers are left uncompensated. We present \textbf{LoRA-Drop}, a plug-and-play inference framework that accelerates decoding by applying a \emph{temporal compute schedule} to a fixed subset of intermediate layers: on most decoding steps, selected layers reuse the previous-token hidden state and apply a low-rank LoRA correction, while periodic \emph{refresh} steps execute the full model to prevent drift. LoRA-Drop requires no routing network, is compatible with standard KV caching, and can reduce KV-cache footprint by skipping KV updates in droppable layers during LoRA steps and refreshing periodically. Across \textbf{LLaMA2-7B}, \textbf{LLaMA3-8B}, \textbf{Qwen2.5-7B}, and \textbf{Qwen2.5-14B}, LoRA-Drop achieves up to \textbf{2.6$\times$ faster decoding} and \textbf{45--55\% KV-cache reduction} while staying within \textbf{0.5 percentage points (pp)} of baseline accuracy. Evaluations on reasoning (GSM8K, MATH, BBH), code generation (HumanEval, MBPP), and long-context/multilingual benchmarks (LongBench, XNLI, XCOPA) identify a consistent \emph{safe zone} of scheduling configurations that preserves quality while delivering substantial efficiency gains, providing a simple path toward adaptive-capacity inference in LLMs. Codes are available at https://github.com/hosseinbv/LoRA-Drop.git.
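The temporal compute schedule described above can be illustrated with a minimal sketch. This is not the authors' code; the function name and the `refresh_every` parameter are assumptions used only to show the alternation between periodic full-model refresh steps and LoRA-corrected reuse steps.

```python
# Illustrative sketch of a LoRA-Drop-style temporal schedule (assumed API).
def lora_drop_schedule(step: int, refresh_every: int = 8) -> str:
    """Return 'full' on periodic refresh steps (all layers execute),
    'lora' otherwise (selected layers reuse the previous-token hidden
    state plus a low-rank correction)."""
    return "full" if step % refresh_every == 0 else "lora"

# Over 16 decoding steps with refresh_every=8, steps 0 and 8 run the
# full model; the remaining 14 steps take the cheap LoRA path.
modes = [lora_drop_schedule(t) for t in range(16)]
```

Because the schedule is a pure function of the step index, it needs no routing network and composes with standard KV caching, matching the plug-and-play claim in the abstract.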
|
https://arxiv.org/abs/2601.02569
|
Academic Papers
|
svg
|
1f1fa8fbf1e52cf8c815763c99891f08c96b0173e31250eed86c2502b0ae6add
|
2026-01-07T00:00:00-05:00
|
O-DSS: An Open Dynamic Spectrum Sharing Framework for Cellular-Radar Coexistence in Mid-band Frequencies
|
arXiv:2601.02571v1 Announce Type: new Abstract: The growing demand for mid-band spectrum necessitates efficient Dynamic Spectrum Sharing (DSS) to ensure coexistence between cellular networks and incumbent radar systems. Existing Spectrum Access System (SAS) frameworks rely on fixed Environmental Sensing Capability (ESC) sensors, which are latency-prone and inflexible. This paper introduces O-DSS, an O-RAN-compliant, Machine Learning (ML)-driven DSS framework that enables real-time cellular-radar coexistence in mid-band frequencies with shipborne and fast-moving airborne radars. O-DSS integrates radar detection from low-overhead Key Performance Metrics (KPMs) with spectrogram-based localization to drive fine-grained RAN control, including PRB blanking and radar-aware MCS adaptation. Deployed as a modular xApp, O-DSS achieves 60~ms detection and 700~ms evacuation latencies, outperforming existing baselines. Evaluations across simulations and Over-The-Air (OTA) testbeds show that O-DSS ensures robust incumbent protection while maintaining cellular performance by achieving radar detection of $\geq 99\%$ at SINR $\geq -4$~dB and localization recall of $\geq 95\%$ at SINR $\geq 8$~dB.
|
https://arxiv.org/abs/2601.02571
|
Academic Papers
|
svg
|
1364d8b3e00ddb0efeefbce95022e7581fc975f8ad22b567c67b5b4377f043a0
|
2026-01-07T00:00:00-05:00
|
LendNova: Towards Automated Credit Risk Assessment with Language Models
|
arXiv:2601.02573v1 Announce Type: new Abstract: Credit risk assessment is essential in the financial sector, but has traditionally depended on costly feature-based models that often fail to utilize all available information in raw credit records. This paper introduces LendNova, the first practical automated end-to-end pipeline for credit risk assessment, designed to utilize all available information in raw credit records by leveraging advanced NLP techniques and language models. LendNova transforms risk modeling by operating directly on raw, jargon-heavy credit bureau text using a language model that learns task-relevant representations without manual feature engineering. By automatically capturing patterns and risk signals embedded in the text, it replaces manual preprocessing steps, reducing costs and improving scalability. Evaluation on real-world data further demonstrates its strong potential in accurate and efficient risk assessment. LendNova establishes a baseline for intelligent credit risk agents, demonstrating the feasibility of language models in this domain. It lays the groundwork for future research toward foundation systems that enable more accurate, adaptable, and automated financial decision-making.
|
https://arxiv.org/abs/2601.02573
|
Academic Papers
|
svg
|
c0b7b7f23956469317233130a81f468829124ce177e05051457d32a438a320d6
|
2026-01-07T00:00:00-05:00
|
Fact-Checking with Large Language Models via Probabilistic Certainty and Consistency
|
arXiv:2601.02574v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly used in applications requiring factual accuracy, yet their outputs often contain hallucinated responses. While fact-checking can mitigate these errors, existing methods typically retrieve external evidence indiscriminately, overlooking the model's internal knowledge and potentially introducing irrelevant noise. Moreover, current systems lack targeted mechanisms to resolve specific uncertainties in the model's reasoning. Inspired by how humans fact-check, we argue that LLMs should adaptively decide whether to rely on internal knowledge or initiate retrieval based on their confidence in a given claim. We introduce Probabilistic Certainty and Consistency (PCC), a framework that estimates factual confidence by jointly modeling an LLM's probabilistic certainty and reasoning consistency. These confidence signals enable an adaptive verification strategy: the model answers directly when confident, triggers targeted retrieval when uncertain or inconsistent, and escalates to deep search when ambiguity is high. Our confidence-guided routing mechanism ensures that retrieval is invoked only when necessary, improving both efficiency and reliability. Extensive experiments across three challenging benchmarks show that PCC achieves better uncertainty quantification than verbalized confidence and consistently outperforms strong LLM-based fact-checking baselines. Furthermore, we demonstrate that PCC generalizes well across various LLMs.
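The three-way adaptive verification strategy in PCC can be sketched as a simple routing rule. The thresholds and function names below are illustrative assumptions, not the paper's implementation; they only show how certainty and consistency scores could gate retrieval.

```python
# Hypothetical sketch of PCC-style confidence-guided routing
# (threshold values tau_hi/tau_lo are assumptions).
def route(certainty: float, consistency: float,
          tau_hi: float = 0.9, tau_lo: float = 0.5) -> str:
    """Answer directly when both signals are high, trigger targeted
    retrieval when at least one is moderate, escalate to deep search
    when both are low (high ambiguity)."""
    if certainty >= tau_hi and consistency >= tau_hi:
        return "answer_directly"
    if certainty >= tau_lo or consistency >= tau_lo:
        return "targeted_retrieval"
    return "deep_search"
```

Retrieval is invoked only on the lower-confidence branches, which is what lets such a scheme improve efficiency without verifying every claim externally.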
|
https://arxiv.org/abs/2601.02574
|
Academic Papers
|
svg
|
03995e27ca74021e544cfb3d7663054fbada9269197c1522004d2a29dee6fa6f
|
2026-01-07T00:00:00-05:00
|
Orchestral AI: A Framework for Agent Orchestration
|
arXiv:2601.02577v1 Announce Type: new Abstract: The rapid proliferation of LLM agent frameworks has forced developers to choose between vendor lock-in through provider-specific SDKs and complex multi-package ecosystems that obscure control flow and hinder reproducibility. Integrating tool calling across multiple LLM providers remains a core engineering challenge due to fragmented APIs, incompatible message formats, and inconsistent streaming and tool-calling behavior, making it difficult to build portable, reliable agent systems. We introduce Orchestral, a lightweight Python framework that provides a unified, type-safe interface for building LLM agents across major providers while preserving the simplicity required for scientific computing and production deployment. Orchestral defines a single universal representation for messages, tools, and LLM usage that operates seamlessly across providers, eliminating manual format translation and reducing framework-induced complexity. Automatic tool schema generation from Python type hints removes the need for handwritten descriptors while maintaining type safety across provider boundaries. A synchronous execution model with streaming support enables deterministic behavior, straightforward debugging, and real-time interaction without introducing server dependencies. The framework's modular architecture cleanly separates provider integration, tool execution, conversation orchestration, and user-facing interfaces, enabling extensibility without architectural entanglement. Orchestral supports advanced agent capabilities found in larger frameworks, including rich tool calling, context compaction, workspace sandboxing, user approval workflows, sub-agents, memory management, and MCP integration.
|
https://arxiv.org/abs/2601.02577
|
Academic Papers
|
svg
|
470db73b7ccdd1c58be428942977a5bc8337a9d8f76db6654b521b33c8d893dc
|
2026-01-07T00:00:00-05:00
|
DataParasite Enables Scalable and Repurposable Online Data Curation
|
arXiv:2601.02578v1 Announce Type: new Abstract: Many questions in computational social science rely on datasets assembled from heterogeneous online sources, a process that is often labor-intensive, costly, and difficult to reproduce. Recent advances in large language models enable agentic search and structured extraction from the web, but existing systems are frequently opaque, inflexible, or poorly suited to scientific data curation. Here we introduce DataParasite, an open-source, modular pipeline for scalable online data collection. DataParasite decomposes tabular curation tasks into independent, entity-level searches defined through lightweight configuration files and executed through a shared, task-agnostic Python script. Crucially, the same pipeline can be repurposed to new tasks, including those without predefined entity lists, using only natural-language instructions. We evaluate the pipeline on multiple canonical tasks in computational social science, including faculty hiring histories, elite death events, and political career trajectories. Across tasks, DataParasite achieves high accuracy while reducing data-collection costs by an order of magnitude relative to manual curation. By lowering the technical and labor barriers to online data assembly, DataParasite provides a practical foundation for scalable, transparent, and reusable data curation in computational social science and beyond.
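The config-driven, entity-level decomposition can be sketched as follows. The field names and query format are hypothetical illustrations of the idea, not DataParasite's actual configuration schema.

```python
# Hypothetical sketch of entity-level task decomposition in the spirit
# of DataParasite; all field names here are illustrative assumptions.
config = {
    "task": "faculty hiring histories",
    "columns": ["name", "phd_institution", "first_job"],
    "entities": ["Alice Example", "Bob Example"],
}

def make_queries(cfg: dict) -> list:
    """Expand one lightweight config into independent per-entity
    search prompts that a shared, task-agnostic script can execute."""
    return [
        f"{cfg['task']}: find {', '.join(cfg['columns'])} for {entity}"
        for entity in cfg["entities"]
    ]

queries = make_queries(config)
```

Because each entity becomes an independent search, the workload parallelizes naturally and a new curation task only requires a new config, not new code.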
|
https://arxiv.org/abs/2601.02578
|
Academic Papers
|
svg
|
9693b51656f88f462d8404bfe85af8eb43d1eb62a4cf61c81a01562ff699a3f7
|
2026-01-07T00:00:00-05:00
|
Reconstructing Item Characteristic Curves using Fine-Tuned Large Language Models
|
arXiv:2601.02580v1 Announce Type: new Abstract: Traditional methods for determining assessment item parameters, such as difficulty and discrimination, rely heavily on expensive field testing to collect student performance data for Item Response Theory (IRT) calibration. This study introduces a novel approach that implicitly models these psychometric properties by fine-tuning Large Language Models (LLMs) to simulate student responses across a spectrum of latent abilities. Leveraging the Qwen-3 dense model series and Low-Rank Adaptation (LoRA), we train models to generate responses to multiple choice questions conditioned on discrete ability descriptors. We reconstruct the probability of a correct response as a function of student ability, effectively generating synthetic Item Characteristic Curves (ICCs) to estimate IRT parameters. Evaluation on a dataset of Grade 6 English Language Arts (ELA) items and the BEA 2024 Shared Task dataset demonstrates that this method competes with or outperforms baseline approaches. This simulation-based technique seems particularly effective at modeling item discrimination.
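The curve being reconstructed here is the standard 2PL item characteristic curve from IRT; the sketch below shows its shape, with the closed-form logistic standing in for the fine-tuned LLM's simulated response rates at discrete ability levels (parameter values are arbitrary illustrations).

```python
import math

def icc_2pl(theta: float, a: float, b: float) -> float:
    """2PL item characteristic curve: probability of a correct response
    given latent ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# In the paper's setup, a fine-tuned LLM is queried at discrete ability
# descriptors and its average correctness traces out this curve; here we
# evaluate the true 2PL directly at five ability levels.
abilities = [-2, -1, 0, 1, 2]
curve = [icc_2pl(t, a=1.2, b=0.0) for t in abilities]
```

Fitting `a` and `b` to such a synthetic curve recovers the IRT parameters without any field testing, which is the core of the proposed approach.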
|
https://arxiv.org/abs/2601.02580
|
Academic Papers
|
svg
|
d007805b8c05e9ae7560c471130d36936a3857bb34653de5426e36d3e4bbe636
|
2026-01-07T00:00:00-05:00
|
Threat Detection in Social Media Networks Using Machine Learning Based Network Analysis
|
arXiv:2601.02581v1 Announce Type: new Abstract: The accelerated development of social media websites has posed intricate security issues in cyberspace, where these sites have increasingly become targets of criminal activities including intrusion attempts, abnormal traffic patterns, and organized attacks. Conventional rule-based security systems are often not scalable or adaptive enough to meet such threats. This paper introduces a threat detection framework based on machine learning that can classify malicious behavior in the social media network environment based on the nature of network traffic. Exploiting a rich network traffic dataset, extensive preprocessing and exploratory data analysis are conducted to overcome data imbalance, feature inconsistency, and noise. An artificial neural network (ANN) model is then built to capture the intricate, non-linear patterns of malicious behavior. The proposed model is tested on conventional performance metrics, such as accuracy, precision, recall, F1-score, and ROC-AUC, and shows strong detection performance and robustness. The findings suggest that neural network-based solutions can effectively identify latent threat dynamics within large-scale social media networks, and that they can complement existing intrusion detection systems to better support proactive cybersecurity operations.
|
https://arxiv.org/abs/2601.02581
|
Academic Papers
|
svg
|
f3caddb48dfcc44848ac019e2e8f72e170957e98cb4ae5719ebf50860c3f3313
|
2026-01-07T00:00:00-05:00
|
AI Social Responsibility as Reachability: Execution-Level Semantics for the Social Responsibility Stack
|
arXiv:2601.02585v1 Announce Type: new Abstract: Artificial intelligence systems are increasingly embedded as persistent, closed-loop components within cyber-physical, social, and institutional processes. Rather than producing isolated outputs, such systems operate continuously under feedback, adaptation, and scale, reshaping physical flows, human behavior, and institutional practice over time. In these settings, socially unacceptable outcomes rarely arise from singular faults or explicit policy violations. Instead, they emerge through cumulative execution trajectories enabled by repetition, concurrency, and feedback. This paper advances the formal foundation of the Social Responsibility Stack (SRS) by making its central requirement explicit: responsibility is fundamentally a reachability property of system execution. A system is responsible iff its execution semantics prevent entry into inadmissible global configurations, regardless of local performance gains or optimization objectives. Responsibility failures are therefore not objective-level errors, but execution-level failures of trajectory control. To operationalize this perspective, we introduce Petri nets as an execution-level formalism for responsible autonomous systems. We show how SRS value commitments correspond to forbidden markings, safeguards to structural constraints on transition firing, auditing to monitoring of reachability pressure, and governance to legitimate modification of execution structure. Embedding Petri-net reachability within the SRS architecture internalizes responsibility as a structural invariant rather than an external objective or post-hoc mechanism. These results establish the Social Responsibility Stack as an executable responsibility architecture and position reachability-based execution semantics as a necessary foundation for responsible autonomy in feedback-rich cyber-physical and socio-technical systems.
|
https://arxiv.org/abs/2601.02585
|
Academic Papers
|
svg
|
84a8612d6b6def23231567d0429481ad1779a74bc6bd60dae12004df335fef19
|
2026-01-07T00:00:00-05:00
|
Understanding Human Perception of Music Plagiarism Through a Computational Approach
|
arXiv:2601.02586v1 Announce Type: new Abstract: There is a wide variety of music similarity detection algorithms, while discussions about music plagiarism in the real world are often based on audience perceptions. Therefore, we aim to conduct a study to examine the key criteria of human perception of music plagiarism, focusing on the three commonly used musical features in similarity analysis: melody, rhythm, and chord progression. After identifying the key features and levels of variation humans use in perceiving musical similarity, we propose an LLM-as-a-judge framework that applies a systematic, step-by-step approach, drawing on modules that extract such high-level attributes.
|
https://arxiv.org/abs/2601.02586
|
Academic Papers
|
svg
|
10d6f6c8aca3fa68c8ec75343095211f7e070b2d594a3df4a1734848602479ec
|
2026-01-07T00:00:00-05:00
|
FlowPlan-G2P: A Structured Generation Framework for Transforming Scientific Papers into Patent Descriptions
|
arXiv:2601.02589v1 Announce Type: new Abstract: Over 3.5 million patents are filed annually, with drafting patent descriptions requiring deep technical and legal expertise. Transforming scientific papers into patent descriptions is particularly challenging due to their differing rhetorical styles and stringent legal requirements. Unlike black-box text-to-text approaches that struggle to model structural reasoning and legal constraints, we propose FlowPlan-G2P, a novel framework that mirrors the cognitive workflow of expert drafters by reformulating this task into three stages: (1) Concept Graph Induction, extracting technical entities and relationships into a directed graph via expert-like reasoning; (2) Paragraph and Section Planning, reorganizing the graph into coherent clusters aligned with canonical patent sections; and (3) Graph-Conditioned Generation, producing legally compliant paragraphs using section-specific subgraphs and tailored prompts. Experiments demonstrate that FlowPlan-G2P significantly improves logical coherence and legal compliance over end-to-end LLM baselines. Our framework establishes a new paradigm for paper-to-patent generation and advances structured text generation for specialized domains.
|
https://arxiv.org/abs/2601.02589
|
Academic Papers
|
svg
|
5f335a0994beb7699370696255bf025d43153baba2f9bf3705d3ab5da1ef80d8
|
2026-01-07T00:00:00-05:00
|
A Music Information Retrieval Approach to Classify Sub-Genres in Role Playing Games
|
arXiv:2601.02591v1 Announce Type: new Abstract: Video game music (VGM) is often studied under the same lens as film music, which largely focuses on its theoretical functionality in relation to the identified genres of the media. However, to date, we are unaware of any systematic approach that analyzes the quantifiable musical features in VGM across several identified game genres. Therefore, we extracted musical features from VGM in games from three sub-genres of Role-Playing Games (RPG), and then hypothesized how different musical features are correlated with the perceptions and portrayals of each genre. This observed correlation may be used to further suggest that such features are relevant to the expected storytelling elements or play mechanics associated with the sub-genre.
|
https://arxiv.org/abs/2601.02591
|
Academic Papers
|
svg
|
6c3c2f950506b8398e06be05f1d86008a4db52626c175c3b1303bda2a547ae1d
|
2026-01-07T00:00:00-05:00
|
Volumetric locking-free Mixed Virtual Element Methods for Contact Problems
|
arXiv:2601.02595v1 Announce Type: new Abstract: We consider the approximation of the 2D frictionless contact problem in elasticity using the Virtual Element Methods (VEMs). To overcome the volumetric locking phenomenon in the nearly incompressible case, we adopt a mixed displacement/pressure ($u/p$) variational formulation, where pressure is introduced as an independent unknown. We present the VEM discretization and develop a general error analysis, keeping explicit track of the constants involved in the error estimates, thus allowing to consider meshes with "small edges". As examples, we consider two possible VEM schemes: a first-order scheme and a second-order scheme. The numerical results confirm the theoretical predictions, specifically both schemes show: 1) robustness with respect to the volumetric parameter $\lambda$, thus preventing the occurrence of the volumetric locking phenomenon; 2) good behavior even in the presence of "small edges"; 3) achievement of the expected theoretical convergence rates.
|
https://arxiv.org/abs/2601.02595
|
Academic Papers
|
svg
|
b2b498e727cfe89fd5ea28d4719968c46e1aee9ecdf4d2def66c3932772bc5d3
|
2026-01-07T00:00:00-05:00
|
Coordinated Multi-Domain Deception: A Stackelberg Game Approach
|
arXiv:2601.02596v1 Announce Type: new Abstract: This paper explores coordinated deception strategies by synchronizing defenses across coupled cyber and physical systems to mislead attackers and strengthen defense mechanisms. We introduce a Stackelberg game framework to model the strategic interaction between defenders and attackers, where the defender leverages CVSS-based exploit probabilities and real-world vulnerability data from the National Vulnerability Database (NVD) to guide the deployment of deception. Cyber and physical replicas are used to disrupt attacker reconnaissance and enhance defensive effectiveness. We propose a CVE-based utility function to identify the most critical vulnerabilities and demonstrate that coordinated multilayer deception outperforms single-layer and baseline strategies in improving defender utility across both CVSS versions.
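The CVE-based utility function can be illustrated with a minimal sketch: exploit likelihood (derived from a CVSS score) weighted by impact, optionally discounted where deception is deployed. The function name, the linear form, and all numbers below are illustrative assumptions, not the paper's actual utility.

```python
# Hypothetical sketch of a CVE prioritization utility in the spirit of
# the paper's Stackelberg framework (form and values are assumptions).
def cve_utility(exploit_prob: float, impact: float,
                deception_gain: float = 0.0) -> float:
    """Defender's assessed criticality of a vulnerability: exploit
    likelihood times impact, reduced by any deployed deception."""
    return exploit_prob * impact * (1.0 - deception_gain)

# Rank three hypothetical CVEs (exploit_prob from CVSS, impact score).
cves = {"CVE-A": (0.9, 8.0), "CVE-B": (0.4, 9.5), "CVE-C": (0.7, 5.0)}
ranked = sorted(cves, key=lambda c: cve_utility(*cves[c]), reverse=True)
```

In a Stackelberg setting, the defender (leader) would commit deception resources to the top-ranked vulnerabilities, anticipating the attacker's (follower's) best response to the altered utilities.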
|
https://arxiv.org/abs/2601.02596
|
Academic Papers
|
svg
|
bb63962487b0fe25267df414815ac364613f90e017b844725efe812c628abc07
|
2026-01-07T00:00:00-05:00
|
LongDA: Benchmarking LLM Agents for Long-Document Data Analysis
|
arXiv:2601.02598v1 Announce Type: new Abstract: We introduce LongDA, a data analysis benchmark for evaluating LLM-based agents under documentation-intensive analytical workflows. In contrast to existing benchmarks that assume well-specified schemas and inputs, LongDA targets real-world settings in which navigating long documentation and complex data is the primary bottleneck. To this end, we manually curate raw data files, long and heterogeneous documentation, and expert-written publications from 17 publicly available U.S. national surveys, from which we extract 505 analytical queries grounded in real analytical practice. Solving these queries requires agents to first retrieve and integrate key information from multiple unstructured documents, before performing multi-step computations and writing executable code, which remains challenging for existing data analysis agents. To support the systematic evaluation under this setting, we develop LongTA, a tool-augmented agent framework that enables document access, retrieval, and code execution, and evaluate a range of proprietary and open-source models. Our experiments reveal substantial performance gaps even among state-of-the-art models, highlighting the challenges researchers should consider before applying LLM agents for decision support in real-world, high-stakes analytical settings.
|
https://arxiv.org/abs/2601.02598
|
Academic Papers
|
svg
|
dd69563d1e5f685c80cdfc479927af5da66f631813c34358417ecb759266edbb
|
2026-01-07T00:00:00-05:00
|
State of the Quantum Software Engineering Ecosystem
|
arXiv:2601.02601v1 Announce Type: new Abstract: We study the current state of the Quantum Software Engineering (QSE) ecosystem, focusing on the achievements, activities, and engagements of academia and industry, with particular attention to successful entrepreneurial endeavors in this arena. Our research methodology is novel, featuring the state of the art in Artificial Intelligence (AI), namely Large Language Models (LLMs), especially Generative Pretrained Transformers (GPT). We use one such model, OpenAI's GPT-5, through the ChatGPT tool. The goal is to identify institutions and companies that are highly active and have achieved distinguished results in QSE, evidenced by peer-reviewed publications or raised capital in the venture capital market.
|
https://arxiv.org/abs/2601.02601
|
Academic Papers
|
svg
|
ff10c6085ec9c9ae66c725d6866b5e95ff2ca04c135a4b52d1167d6090585e66
|
2026-01-07T00:00:00-05:00
|
SWaRL: Safeguard Code Watermarking via Reinforcement Learning
|
arXiv:2601.02602v1 Announce Type: new Abstract: We present SWaRL, a robust and fidelity-preserving watermarking framework designed to protect the intellectual property of code LLM owners by embedding unique and verifiable signatures in the generated output. Existing approaches rely on manually crafted transformation rules to preserve watermarked code functionality or manipulate token-generation probabilities at inference time, which are prone to compilation errors. To address these challenges, SWaRL employs a reinforcement learning-based co-training framework that uses compiler feedback for functional correctness and a jointly trained confidential verifier as a reward signal to maintain watermark detectability. Furthermore, SWaRL employs low-rank adaptation (LoRA) during fine-tuning, allowing the learned watermark information to be transferable across model updates. Extensive experiments show that SWaRL achieves higher watermark detection accuracy compared to prior methods while fully maintaining watermarked code functionality. The LoRA-based signature embedding steers the base model to generate and solve code in a watermark-specific manner without significant computational overhead. Moreover, SWaRL exhibits strong resilience against refactoring and adversarial transformation attacks.
|
https://arxiv.org/abs/2601.02602
|
Academic Papers
|
svg
|
9e65035b5eb67d4b4d7fab484cb8621ad8d42aaad3aeadab3488a4c8f758f115
|
2026-01-07T00:00:00-05:00
|
Scalable Construction of a Lung Cancer Knowledge Base: Profiling Semantic Reasoning in LLMs
|
arXiv:2601.02604v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into biomedical research offers new opportunities for domain-specific reasoning and knowledge representation. However, their performance depends heavily on the semantic quality of training data. In oncology, where precision and interpretability are vital, scalable methods for constructing structured knowledge bases are essential for effective fine-tuning. This study presents a pipeline for developing a lung cancer knowledge base using Open Information Extraction (OpenIE). The process includes: (1) identifying medical concepts with the MeSH thesaurus; (2) filtering open-access PubMed literature with permissive licenses (CC0); (3) extracting (subject, relation, object) triplets using the OpenIE method; and (4) enriching triplet sets with Named Entity Recognition (NER) to ensure biomedical relevance. The resulting triplet sets provide a domain-specific, large-scale, and noise-aware resource for fine-tuning LLMs. We evaluated T5 models fine-tuned on this dataset through Supervised Semantic Fine-Tuning. Comparative assessments with ROUGE and BERTScore show significantly improved performance and semantic coherence, demonstrating the potential of OpenIE-derived resources as scalable, low-cost solutions for enhancing biomedical NLP.
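Step (4), filtering raw OpenIE triplets through an entity check, can be sketched in a few lines. This is a toy stand-in, not the paper's pipeline: the MeSH subset and the string-lookup "NER" are invented placeholders for real thesaurus matching and a real NER model:

```python
MESH_TERMS = {"lung neoplasms", "egfr", "gefitinib"}   # toy MeSH subset (hypothetical)

def filter_triplets(triplets):
    """Keep (subject, relation, object) triplets in which at least one
    argument matches a recognized biomedical entity (toy lookup-based NER)."""
    return [(s, r, o) for s, r, o in triplets
            if s.lower() in MESH_TERMS or o.lower() in MESH_TERMS]

raw = [("Gefitinib", "inhibits", "EGFR"),
       ("The study", "was funded by", "the NIH")]
assert filter_triplets(raw) == [("Gefitinib", "inhibits", "EGFR")]
```

The second triplet is syntactically valid OpenIE output but carries no biomedical entity, so the enrichment step discards it; this is the "noise-aware" part of the resource construction.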
|
https://arxiv.org/abs/2601.02604
|
Academic Papers
|
svg
|
9f57debffa8c6df8056ee46b220f1719e0636bf84f7dbd918f9194e036cea5ec
|
2026-01-07T00:00:00-05:00
|
Weights on finite fields and failures of the MacWilliams identities
|
arXiv:2601.02608v1 Announce Type: new Abstract: In the 1960s, MacWilliams proved that the Hamming weight enumerator of a linear code over a finite field completely determines, and is determined by, the Hamming weight enumerator of its dual code. In particular, if two linear codes have the same Hamming weight enumerator, then their dual codes have the same Hamming weight enumerator. In contrast, there is a wide class of weights on finite fields whose weight enumerators have the opposite behavior: there exist two linear codes having the same weight enumerator, but their dual codes have different weight enumerators.
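The classical identity the abstract refers to is easy to check computationally for the Hamming weight. The sketch below enumerates a small binary code, applies the MacWilliams transform (via Krawtchouk coefficients), and confirms it reproduces the dual code's weight distribution; it illustrates the identity that the paper's wider class of weights fails to satisfy:

```python
from itertools import product
from math import comb

def weight_distribution(gen, n):
    """Weight distribution A[w] of the binary linear code spanned by rows of gen."""
    A = [0] * (n + 1)
    for coeffs in product([0, 1], repeat=len(gen)):
        cw = [0] * n
        for c, row in zip(coeffs, gen):
            if c:
                cw = [a ^ b for a, b in zip(cw, row)]
        A[sum(cw)] += 1
    return A

def macwilliams_transform(A, n):
    """Dual weight distribution via the MacWilliams identity over GF(2):
    x^{n-v} y^v maps to (x+y)^{n-v} (x-y)^v, scaled by 1/|C|."""
    size = sum(A)  # |C| = 2^k
    B = []
    for w in range(n + 1):
        t = sum(Av * sum((-1) ** j * comb(v, j) * comb(n - v, w - j)
                         for j in range(min(v, w) + 1))
                for v, Av in enumerate(A))
        B.append(t // size)   # always an integer for a linear code
    return B

n = 3
A = weight_distribution([[1, 1, 1]], n)   # [3,1] repetition code: x^3 + y^3
B = macwilliams_transform(A, n)           # predicted dual enumerator: x^3 + 3xy^2
assert A == [1, 0, 0, 1] and B == [1, 0, 3, 0]
# the dual really is the even-weight code, with exactly that distribution:
assert B == weight_distribution([[1, 1, 0], [0, 1, 1]], n)
```

For the general weights studied in the paper, no such transform exists: the weight enumerator of a code can fail to determine that of its dual.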
|
https://arxiv.org/abs/2601.02608
|
Academic Papers
|
svg
|
597fc592591def4d11414a1d1e3a858cd80938d6e41c09441d0e10a352f3967e
|
2026-01-07T00:00:00-05:00
|
Chronicals: A High-Performance Framework for LLM Fine-Tuning with 3.51x Speedup over Unsloth
|
arXiv:2601.02609v1 Announce Type: new Abstract: Large language model fine-tuning is bottlenecked by memory: a 7B parameter model requires 84GB (14GB for weights, 14GB for gradients, and 56GB for FP32 optimizer states), exceeding even A100-40GB capacity. We present Chronicals, an open-source training framework achieving 3.51x speedup over Unsloth through four synergistic optimizations: (1) fused Triton kernels eliminating 75% of memory traffic via RMSNorm (7x), SwiGLU (5x), and QK-RoPE (2.3x) fusion; (2) Cut Cross-Entropy reducing logit memory from 5GB to 135MB through online softmax computation; (3) LoRA+ with theoretically-derived 16x differential learning rates between adapter matrices; and (4) Best-Fit Decreasing sequence packing recovering 60-75% of compute wasted on padding. On Qwen2.5-0.5B with A100-40GB, Chronicals achieves 41,184 tokens/second for full fine-tuning versus Unsloth's 11,736 tokens/second (3.51x). For LoRA at rank 32, we reach 11,699 tokens/second versus Unsloth MAX's 2,857 tokens/second (4.10x). Critically, we discovered that Unsloth's reported 46,000 tokens/second benchmark exhibited zero gradient norms; the model was not training. We provide complete mathematical foundations: online softmax correctness proofs, FlashAttention IO complexity bounds O(N^2 d^2 M^{-1}), LoRA+ learning rate derivations from gradient magnitude analysis, and bin-packing approximation guarantees. All implementations, benchmarks, and proofs are available at https://github.com/Ajwebdevs/Chronicals with pip installation via https://pypi.org/project/chronicals/.
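Of the four optimizations, Best-Fit Decreasing sequence packing is the easiest to illustrate in isolation. The following is a minimal, hypothetical sketch of the classic BFD heuristic as applied to sequence lengths; function name and the toy lengths are invented, not taken from Chronicals:

```python
def pack_sequences_bfd(lengths, max_len):
    """Best-Fit Decreasing: place each sequence (longest first) into the bin
    with the least remaining room that still fits; open a new bin otherwise."""
    rooms = []       # remaining capacity per bin
    packed = []      # sequence lengths assigned to each bin
    for seq in sorted(lengths, reverse=True):
        best = min((i for i, room in enumerate(rooms) if room >= seq),
                   key=lambda i: rooms[i], default=None)
        if best is None:
            rooms.append(max_len - seq)
            packed.append([seq])
        else:
            rooms[best] -= seq
            packed[best].append(seq)
    return packed

batches = pack_sequences_bfd([900, 700, 500, 400, 300, 200], max_len=1024)
assert batches == [[900], [700, 300], [500, 400], [200]]
# naive one-sequence-per-row padding would waste 6*1024 - 3000 = 3144 tokens;
# packed batches waste only sum(1024 - sum(b) for b in batches) = 1096 tokens
```

Padding waste drops because short sequences fill the slack left by long ones, which is the 60-75% compute recovery the abstract cites.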
|
https://arxiv.org/abs/2601.02609
|
Academic Papers
|
svg
|
21cfe4598108f8440d1f6745ecc0f9a5ebd1bb362477df1324f991de5653962e
|
2026-01-07T00:00:00-05:00
|
Sparsity-Aware Streaming SNN Accelerator with Output-Channel Dataflow for Automatic Modulation Classification
|
arXiv:2601.02613v1 Announce Type: new Abstract: The rapid advancement of wireless communication technologies, including 5G, emerging 6G networks, and the large-scale deployment of the Internet of Things (IoT), has intensified the need for efficient spectrum utilization. Automatic modulation classification (AMC) plays a vital role in cognitive radio systems by enabling real-time identification of modulation schemes for dynamic spectrum access and interference mitigation. While deep neural networks (DNNs) offer high classification accuracy, their computational and energy demands pose challenges for real-time edge deployment. Spiking neural networks (SNNs), with their event-driven nature, offer inherent energy efficiency, but achieving both high throughput and low power under constrained hardware resources remains challenging. This work proposes a sparsity-aware SNN streaming accelerator optimized for AMC tasks. Unlike traditional systolic arrays that exploit sparsity but suffer from low throughput, or streaming architectures that achieve high throughput but cannot fully utilize input and weight sparsity, our design integrates both advantages. By leveraging the fixed nature of kernels during inference, we apply the gated one-to-all product (GOAP) algorithm to compute only on non-zero input-weight intersections. Extra or empty iterations are precomputed and embedded into the inference dataflow, eliminating dynamic data fetches and enabling fully pipelined, control-free inter-layer execution. Implemented on an FPGA, our sparsity-aware output-channel dataflow streaming (SAOCDS) accelerator achieves 23.5 MS/s (approximately double the baseline throughput) on the RadioML 2016 dataset, while reducing dynamic power and maintaining comparable classification accuracy. These results demonstrate strong potential for real-time, low-power deployment in edge cognitive radio systems.
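The core of the GOAP idea, computing only on non-zero input-weight intersections, can be shown with a scalar sketch. This is a generic sparsity-intersection illustration, not the paper's hardware dataflow (which precomputes these intersections offline since weights are fixed at inference):

```python
def sparse_intersect_dot(inputs, weights):
    """Multiply-accumulate only at positions where both the input activation
    and the weight are non-zero, returning the result and the MAC count."""
    nz = (set(i for i, x in enumerate(inputs) if x)
          & set(i for i, w in enumerate(weights) if w))
    return sum(inputs[i] * weights[i] for i in nz), len(nz)

acc, macs = sparse_intersect_dot([0, 1, 0, 1, 1, 0], [2, 0, 0, 3, -1, 5])
# a dense dot product needs 6 MACs; the sparse intersection needs only 2
assert (acc, macs) == (2, 2)
```

In the accelerator, because the weight sparsity pattern is known ahead of time, these intersections become fixed iteration schedules embedded in the streaming dataflow rather than dynamic set operations.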
|
https://arxiv.org/abs/2601.02613
|
Academic Papers
|
svg
|
ec3ccbe8c7588093bd41104539411998914b7de19b0c571d135cd95e28add81a
|
2026-01-07T00:00:00-05:00
|
LAsset: An LLM-assisted Security Asset Identification Framework for System-on-Chip (SoC) Verification
|
arXiv:2601.02624v1 Announce Type: new Abstract: The growing complexity of modern system-on-chip (SoC) and IP designs makes security assurance increasingly difficult. One of the fundamental steps in the pre-silicon security verification of a hardware design is the identification of security assets, as it substantially influences downstream security verification tasks, such as threat modeling, security property generation, and vulnerability detection. Traditionally, assets are determined manually by security experts, requiring significant time and expertise. To address this challenge, we present LAsset, a novel automated framework that leverages large language models (LLMs) to identify security assets from both hardware design specifications and register-transfer level (RTL) descriptions. The framework performs structural and semantic analysis to identify intra-module primary and secondary assets and derives inter-module relationships to systematically characterize security dependencies at the design level. Experimental results show that the proposed framework achieves high classification accuracy, reaching up to 90% recall rate in SoC design, and 93% recall rate in IP designs. This automation in asset identification significantly reduces manual overhead and supports a scalable path forward for secure hardware development.
|
https://arxiv.org/abs/2601.02624
|
Academic Papers
|
svg
|
9d6f254135887cd45447cd461b4d46007155469592aa0a0f7cab0666b268db85
|
2026-01-07T00:00:00-05:00
|
Improved Evidence Extraction for Document Inconsistency Detection with LLMs
|
arXiv:2601.02627v1 Announce Type: new Abstract: Large language models (LLMs) are becoming useful in many domains due to their impressive abilities that arise from large training datasets and large model sizes. However, research on LLM-based approaches to document inconsistency detection is relatively limited. There are two key aspects of document inconsistency detection: (i) classification of whether there exists any inconsistency, and (ii) providing evidence of the inconsistent sentences. We focus on the latter, and introduce new comprehensive evidence-extraction metrics and a redact-and-retry framework with constrained filtering that substantially improves LLM-based document inconsistency detection over direct prompting. We back our claims with promising experimental results.
|
https://arxiv.org/abs/2601.02627
|
Academic Papers
|
svg
|
fc3d52eaada7bdaeaf9ca88a919c9ff55e394a00d148d782bc7b4dbe3ae81ac7
|
2026-01-07T00:00:00-05:00
|
Listen to the Unexpected: Self-Supervised Surprise Detection for Efficient Viewport Prediction
|
arXiv:2601.02629v1 Announce Type: new Abstract: Adaptive streaming of 360-degree video relies on viewport prediction to allocate bandwidth efficiently. Current approaches predominantly use visual saliency or historical gaze patterns, neglecting the role of spatial audio in guiding user attention. This paper presents a self-learning framework for detecting "surprising" auditory events -- moments that deviate from learned temporal expectations -- and demonstrates their utility for viewport prediction. The proposed architecture combines $SE(3)$-equivariant graph neural networks with recurrent temporal modeling, trained via a dual self-supervised objective. A key feature is the natural modeling of temporal attention decay: surprise is high at event onset but diminishes as the listener adapts. Experiments on the AVTrack360 dataset show that integrating audio surprise with visual cues reduces bitrate waste by up to 18% compared to visual-only methods.
|
https://arxiv.org/abs/2601.02629
|
Academic Papers
|
svg
|
0fa71878d118a0369cc1b1308c29a9cb423ffea77b0b18210c7d5c5c513f217f
|
2026-01-07T00:00:00-05:00
|
Copyright Laundering Through the AI Ouroboros: Adapting the 'Fruit of the Poisonous Tree' Doctrine to Recursive AI Training
|
arXiv:2601.02631v1 Announce Type: new Abstract: Copyright enforcement rests on an evidentiary bargain: a plaintiff must show both the defendant's access to the work and substantial similarity in the challenged output. That bargain comes under strain when AI systems are trained through multi-generational pipelines with recursive synthetic data. As successive models are tuned on the outputs of their predecessors, any copyrighted material absorbed by an early model is diffused into deeper statistical abstractions. The result is an evidentiary blind spot where overlaps that emerge look coincidental, while the chain of provenance is too attenuated to trace. These conditions are ripe for "copyright laundering"--the use of multi-generational synthetic pipelines, an "AI Ouroboros," to render traditional proof of infringement impracticable. This Article adapts the "fruit of the poisonous tree" (FOPT) principle to propose an AI-FOPT standard: if a foundational AI model's training is adjudged infringing (either for unlawful sourcing or for non-transformative ingestion that fails fair-use), then subsequent AI models principally derived from the foundational model's outputs or distilled weights carry a rebuttable presumption of taint. The burden shifts to downstream developers--those who control the evidence of provenance--to restore the evidentiary bargain by affirmatively demonstrating a verifiably independent and lawfully sourced lineage or a curative rebuild, without displacing fair-use analysis at the initial ingestion stage. Absent such proof, commercial deployment of tainted models and their outputs is actionable. This Article develops the standard by specifying its trigger, presumption, and concrete rebuttal paths (e.g., independent lineage or verifiable unlearning); addresses counterarguments concerning chilling innovation and fair use; and demonstrates why this lineage-focused approach is both administrable and essential.
|
https://arxiv.org/abs/2601.02631
|
Academic Papers
|
svg
|
348751aa19782ec5a2f78f9c1d88c28455bf07fcd21f129d993b3b6d98bc4fba
|
2026-01-07T00:00:00-05:00
|
TAAF: A Trace Abstraction and Analysis Framework Synergizing Knowledge Graphs and LLMs
|
arXiv:2601.02632v1 Announce Type: new Abstract: Execution traces are a critical source of information for understanding, debugging, and optimizing complex software systems. However, traces from OS kernels or large-scale applications like Chrome or MySQL are massive and difficult to analyze. Existing tools rely on predefined analyses, and custom insights often require writing domain-specific scripts, which is an error-prone and time-consuming task. This paper introduces TAAF (Trace Abstraction and Analysis Framework), a novel approach that combines time-indexing, knowledge graphs (KGs), and large language models (LLMs) to transform raw trace data into actionable insights. TAAF constructs a time-indexed KG from trace events to capture relationships among entities such as threads, CPUs, and system resources. An LLM then interprets query-specific subgraphs to answer natural-language questions, reducing the need for manual inspection and deep system expertise. To evaluate TAAF, we introduce TraceQA-100, a benchmark of 100 questions grounded in real kernel traces. Experiments across three LLMs and multiple temporal settings show that TAAF improves answer accuracy by up to 31.2%, particularly in multi-hop and causal reasoning tasks. We further analyze where graph-grounded reasoning helps and where limitations remain, offering a foundation for next-generation trace analysis tools.
|
https://arxiv.org/abs/2601.02632
|
Academic Papers
|
svg
|
c06f6331100b470f01b8fb08fe4b2ba6a54049f21b10094397bb15b1d41c0b28
|
2026-01-07T00:00:00-05:00
|
Fluid Agency in AI Systems: A Case for Functional Equivalence in Copyright, Patent, and Tort
|
arXiv:2601.02633v1 Announce Type: new Abstract: Modern Artificial Intelligence (AI) systems lack human-like consciousness or culpability, yet they exhibit fluid agency: behavior that is (i) stochastic (probabilistic and path-dependent), (ii) dynamic (co-evolving with user interaction), and (iii) adaptive (able to reorient across contexts). Fluid agency generates valuable outputs but collapses attribution, irreducibly entangling human and machine inputs. This fundamental unmappability fractures doctrines that assume traceable provenance--authorship, inventorship, and liability--yielding ownership gaps and moral "crumple zones." This Article argues that only functional equivalence stabilizes doctrine. Where provenance is indeterminate, legal frameworks must treat human and AI contributions as equivalent for allocating rights and responsibility--not as a claim of moral or economic parity but as a pragmatic default. This principle stabilizes doctrine across domains, offering administrable rules: in copyright, vesting ownership in human orchestrators without parsing inseparable contributions; in patent, tying inventor-of-record status to human orchestration and reduction to practice, even when AI supplies the pivotal insight; and in tort, replacing intractable causation inquiries with enterprise-level and sector-specific strict or no-fault schemes. The contribution is both descriptive and normative: fluid agency explains why origin-based tests fail, while functional equivalence supplies an outcome-focused framework to allocate rights and responsibility when attribution collapses.
|
https://arxiv.org/abs/2601.02633
|
Academic Papers
|
svg
|
a20879951faac9820462a53e34e1be188a8514957d8a1137c8ddb03bdb7a6b0e
|
2026-01-07T00:00:00-05:00
|
Credit Assignment via Neural Manifold Noise Correlation
|
arXiv:2601.02636v1 Announce Type: new Abstract: Credit assignment--how changes in individual neurons and synapses affect a network's output--is central to learning in brains and machines. Noise correlation, which estimates gradients by correlating perturbations of activity with changes in output, provides a biologically plausible solution to credit assignment but scales poorly as accurately estimating the Jacobian requires that the number of perturbations scale with network size. Moreover, isotropic noise conflicts with neurobiological observations that neural activity lies on a low-dimensional manifold. To address these drawbacks, we propose neural manifold noise correlation (NMNC), which performs credit assignment using perturbations restricted to the neural manifold. We show theoretically and empirically that the Jacobian row space aligns with the neural manifold in trained networks, and that manifold dimensionality scales slowly with network size. NMNC substantially improves performance and sample efficiency over vanilla noise correlation in convolutional networks trained on CIFAR-10, ImageNet-scale models, and recurrent networks. NMNC also yields representations more similar to the primate visual system than vanilla noise correlation. These findings offer a mechanistic hypothesis for how biological circuits could support credit assignment, and suggest that biologically inspired constraints may enable, rather than limit, effective learning at scale.
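The mechanism NMNC builds on, estimating a gradient by correlating output changes with activity perturbations restricted to a subspace, can be sketched directly. This is a minimal dependency-free illustration of subspace-restricted noise correlation on a toy quadratic, not the paper's network-scale method; the orthonormal "manifold" basis here is simply the first two coordinate axes:

```python
import random

def manifold_noise_gradient(f, x, basis, sigma=1e-3, n_samples=4000):
    """Noise-correlation credit assignment restricted to a subspace: perturb
    only along the orthonormal `basis` directions (list of k vectors in R^d)
    and correlate the output change with the perturbation coefficients."""
    random.seed(0)
    k, d = len(basis), len(x)
    acc = [0.0] * k
    fx = f(x)
    for _ in range(n_samples):
        z = [random.gauss(0, 1) for _ in range(k)]
        delta = [sigma * sum(z[j] * basis[j][i] for j in range(k)) for i in range(d)]
        df = f([xi + di for xi, di in zip(x, delta)]) - fx
        for j in range(k):
            acc[j] += df * z[j]           # E[df * z_j] ~ sigma * (grad . basis_j)
    coeffs = [a / (n_samples * sigma) for a in acc]
    # lift back to the ambient space: the estimate is the projection of the
    # true gradient onto span(basis)
    return [sum(coeffs[j] * basis[j][i] for j in range(k)) for i in range(d)]

# f(x) = sum(x_i^2) has gradient 2*x; restricting noise to the first two
# coordinates recovers only that projection of the gradient.
x0 = [1.0, 2.0, 3.0, 4.0]
basis = [[1, 0, 0, 0], [0, 1, 0, 0]]
g_hat = manifold_noise_gradient(lambda v: sum(vi * vi for vi in v), x0, basis)
# g_hat is approximately [2.0, 4.0, 0.0, 0.0]
```

The sample count needed depends only on the subspace dimension k, not the ambient dimension d, which is the scaling advantage the paper attributes to low-dimensional neural manifolds.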
|
https://arxiv.org/abs/2601.02636
|
Academic Papers
|
svg
|
36a94d5cf318b3b0d7f5798c5d249ff37a19c19987acd47eaccc81a66f5251c4
|
2026-01-07T00:00:00-05:00
|
An Empirical Study of On-Device Translation for Real-Time Live-Stream Chat on Mobile Devices
|
arXiv:2601.02641v1 Announce Type: new Abstract: Despite its efficiency, there has been little research on the practical aspects required for real-world deployment of on-device AI models, such as the device's CPU utilization and thermal conditions. In this paper, through extensive experiments, we investigate two key issues that must be addressed to deploy on-device models in real-world services: (i) the selection of on-device models and the resource consumption of each model, and (ii) the capability and potential of on-device models for domain adaptation. To this end, we focus on a task of translating live-stream chat messages and manually construct LiveChatBench, a benchmark consisting of 1,000 Korean-English parallel sentence pairs. Experiments on five mobile devices demonstrate that, although serving a large and heterogeneous user base requires careful consideration of highly constrained deployment settings and model selection, the proposed approach nevertheless achieves performance comparable to commercial models such as GPT-5.1 on the well-targeted task. We expect that our findings will provide meaningful insights to the on-device AI community.
|
https://arxiv.org/abs/2601.02641
|
Academic Papers
|
svg
|
c6ce73f987c24dd2584c2cf5850abf8879c112381b8aa93c79170654b4f231a1
|
2026-01-07T00:00:00-05:00
|
AWARE-US: Benchmark for Preference-Aware Resolution in Tool-Calling Agents
|
arXiv:2601.02643v1 Announce Type: new Abstract: Tool-calling conversational agents querying structured databases often face two linked failures: underspecification (missing constraints needed to run a precise query) and infeasibility (the fully specified query returns an empty set because no item satisfies all constraints). Existing work often responds with "no results" or relaxes constraints using ad hoc rules, which can violate user intent by discarding requirements the user cares about most. We frame infeasibility handling as a preference-aware query repair problem: when a query is unsatisfiable, the agent should relax the least important constraints to the user. We propose three LLM-based methods for inferring relative constraint importance from dialogue: (1) local weighting, (2) global one-shot weighting, and (3) pairwise ranking. Experiments show local weighting achieves the best preference alignment, while global weighting performs best on correct constraint relaxation. We also introduce AWARE-US, a benchmark of persona-grounded queries requiring agents to disambiguate requests via conversation and resolve infeasibility in a way consistent with persona-implied preferences.
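Once constraint importances are inferred (by any of the three LLM-based methods), the repair step itself is a simple greedy loop: drop the least important constraint until the query becomes satisfiable. The sketch below is a hypothetical illustration of that repair step; the toy catalog, constraint names, and weights are invented:

```python
def relax_query(constraints, weights, is_feasible):
    """Greedy preference-aware repair: drop the least important constraint
    (lowest inferred weight) until the query becomes satisfiable."""
    active = dict(constraints)
    dropped = []
    for c in sorted(active, key=lambda c: weights[c]):   # least important first
        if is_feasible(active):
            break
        del active[c]
        dropped.append(c)
    return active, dropped

# Toy catalog: no item is both under $50 and red, so the query is infeasible.
items = [{"price": 40, "color": "blue"}, {"price": 80, "color": "red"}]
def feasible(cons):
    return any(all(check(it) for check in cons.values()) for it in items)

constraints = {"cheap": lambda it: it["price"] <= 50,
               "red":   lambda it: it["color"] == "red"}
weights = {"cheap": 0.9, "red": 0.2}    # dialogue implies price matters most
kept, dropped = relax_query(constraints, weights, feasible)
assert dropped == ["red"]               # the low-priority color constraint goes
```

An ad hoc relaxer might instead drop "cheap" and return the $80 red item, violating the preference the user cares about most, which is exactly the failure mode the benchmark targets.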
|
https://arxiv.org/abs/2601.02643
|
Academic Papers
|
svg
|
a381c3146d79a3b7ce64416b8cb8d10e80d0e90fdebe9c49baced8ada993b779
|
2026-01-07T00:00:00-05:00
|
Making Infeasible Tasks Feasible: Planning to Reconfigure Disconnected 3D Environments with Movable Objects
|
arXiv:2601.02645v1 Announce Type: new Abstract: Several planners have been developed to compute dynamically feasible, collision-free robot paths from an initial to a goal configuration. A key assumption in these works is that the goal region is reachable; an assumption that often fails in practice when environments are disconnected. Motivated by this limitation, we consider known 3D environments comprising objects, also called blocks, that form distinct navigable support surfaces (planes), and that are either non-movable (e.g., tables) or movable (e.g., boxes). These surfaces may be mutually disconnected due to height differences, holes, or lateral separations. Our focus is on tasks where the robot must reach a goal region residing on an elevated plane that is unreachable. Rather than declaring such tasks infeasible, an effective strategy is to enable the robot to interact with the environment, rearranging movable objects to create new traversable connections; a problem known as Navigation Among Movable Objects (NAMO). Existing NAMO planners typically address 2D environments, where obstacles are pushed aside to clear a path. These methods cannot directly handle the considered 3D setting; in such cases, obstacles must be placed strategically to bridge these physical disconnections. We address this challenge by developing BRiDGE (Block-based Reconfiguration in Disconnected 3D Geometric Environments), a sampling-based planner that incrementally builds trees over robot and object configurations to compute feasible plans specifying which objects to move, where to place them, and in what order, while accounting for a limited number of movable objects. To accelerate planning, we introduce non-uniform sampling strategies. We show that our method is probabilistically complete and we provide extensive numerical and hardware experiments validating its effectiveness.
|
https://arxiv.org/abs/2601.02645
|
Academic Papers
|
svg
|
9801d8f28ee143027be0c246609c5ca11b3efcba6d268836b40c09a2b64b812a
|
2026-01-07T00:00:00-05:00
|
DreamLoop: Controllable Cinemagraph Generation from a Single Photograph
|
arXiv:2601.02646v1 Announce Type: new Abstract: Cinemagraphs, which combine static photographs with selective, looping motion, offer unique artistic appeal. Generating them from a single photograph in a controllable manner is particularly challenging. Existing image-animation techniques are restricted to simple, low-frequency motions and operate only in narrow domains with repetitive textures like water and smoke. In contrast, large-scale video diffusion models are not tailored for cinemagraph constraints and lack the specialized data required to generate seamless, controlled loops. We present DreamLoop, a controllable video synthesis framework dedicated to generating cinemagraphs from a single photo without requiring any cinemagraph training data. Our key idea is to adapt a general video diffusion model by training it on two objectives: temporal bridging and motion conditioning. This strategy enables flexible cinemagraph generation. During inference, by using the input image as both the first- and last-frame condition, we enforce a seamless loop. By conditioning on static tracks, we maintain a static background. Finally, by providing a user-specified motion path for a target object, our method provides intuitive control over the animation's trajectory and timing. To our knowledge, DreamLoop is the first method to enable cinemagraph generation for general scenes with flexible and intuitive controls. We demonstrate that our method produces high-quality, complex cinemagraphs that align with user intent, outperforming existing approaches.
|
https://arxiv.org/abs/2601.02646
|
Academic Papers
|
svg
|
d8a7d3192a8734fa70ccfa4f4ac9ffa34d4bd95146ff0e34fa57c2a9e91ad571
|
2026-01-07T00:00:00-05:00
|
Prioritized Replay for RL Post-training
|
arXiv:2601.02648v1 Announce Type: new Abstract: We introduce a problem-level prioritization framework for RL post-training of large language models. Building on insights from prioritized replay in deep RL, as well as prior observations that rollouts with intermediate success rates tend to produce stronger learning signals under methods such as GRPO, our approach selects problems according to a simple, model-driven priority score derived from empirical success statistics. In contrast to conventional curriculum strategies that emphasize easier tasks early in training, the resulting schedule naturally focuses training on problems that are neither consistently solved nor consistently failed, while deprioritizing those that contribute little gradient information. The method yields a continuously adapting and automatic prioritization process that requires no predefined difficulty tiers, auxiliary predictors, or external labels. We further introduce lightweight mechanisms for practical deployment, including heap-based prioritized sampling and periodic retesting of solved and unsolved problems to mitigate starvation and forgetting. Overall, the approach offers a principled and scalable alternative to manually designed curricula while aligning data selection directly with the dynamics of GRPO-based post-training.
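The core scheduling idea, prioritizing problems with intermediate success rates, admits a compact sketch. The specific priority formula and the jittered heap selection below are hypothetical stand-ins consistent with the abstract's description (the paper derives its score from empirical success statistics; p*(1-p) is one natural choice that peaks at 50% success and vanishes for always-solved or always-failed problems):

```python
import heapq
import random

def priority(successes, attempts, eps=0.05):
    """Hypothetical priority: highest for intermediate success rates, where
    GRPO-style group advantages are non-degenerate; eps keeps every problem
    occasionally revisited (mitigating starvation and forgetting)."""
    if attempts == 0:
        return 0.25 + eps              # optimistic default for untried problems
    p = successes / attempts
    return p * (1.0 - p) + eps         # peaks at p = 0.5, near 0 at p = 0 or 1

def select_batch(stats, batch_size):
    """Heap-based prioritized sampling: jitter priorities for stochasticity,
    then take the top-k problem ids."""
    heap = [(-priority(s, a) * random.uniform(0.5, 1.0), pid)
            for pid, (s, a) in stats.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(batch_size, len(heap)))]

stats = {"easy": (19, 20), "hard": (0, 20), "medium": (9, 20), "new": (0, 0)}
batch = select_batch(stats, batch_size=2)
# consistently favors "medium" and "new" over the saturated "easy" and "hard"
assert set(batch) == {"medium", "new"}
```

The final assertion holds for any jitter draw here: even the smallest jittered priority of "medium" or "new" exceeds the largest possible jittered priority of "easy" or "hard".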
|
https://arxiv.org/abs/2601.02648
|
Academic Papers
|
svg
|
0f661669ee318dab2aa6a11bc161feebd7ef17adb8e760468aa7b8d25f7f34a3
|
2026-01-07T00:00:00-05:00
|
Effective Online 3D Bin Packing with Lookahead Parcels Using Monte Carlo Tree Search
|
arXiv:2601.02649v1 Announce Type: new Abstract: Online 3D Bin Packing (3D-BP) with robotic arms is crucial for reducing transportation and labor costs in modern logistics. While Deep Reinforcement Learning (DRL) has shown strong performance, it often fails to adapt to real-world short-term distribution shifts, which arise as different batches of goods arrive sequentially, causing performance drops. We argue that the short-term lookahead information available in modern logistics systems is key to mitigating this issue, especially during distribution shifts. We formulate online 3D-BP with lookahead parcels as a Model Predictive Control (MPC) problem and adapt the Monte Carlo Tree Search (MCTS) framework to solve it. Our framework employs a dynamic exploration prior that automatically balances a learned RL policy and a robust random policy based on the lookahead characteristics. Additionally, we design an auxiliary reward to penalize long-term spatial waste from individual placements. Extensive experiments on real-world datasets show that our method consistently outperforms state-of-the-art baselines, achieving over 10% gains under distributional shifts, a 4% average improvement in online deployment, and more than 8% in the best case, demonstrating the effectiveness of our framework.
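The dynamic exploration prior can be pictured as a convex blend of the learned policy and a uniform random policy, with the mixing weight driven by how out-of-distribution the lookahead parcels look. This is a hypothetical sketch of that balancing idea, not the paper's exact formula; `shift_score` stands in for whatever lookahead-characteristic statistic the framework computes:

```python
def exploration_prior(policy_probs, shift_score):
    """Blend a learned policy with a uniform random policy: the more the
    lookahead parcels deviate from the training distribution
    (shift_score in [0, 1]), the more weight moves to the robust uniform
    prior used to guide MCTS node expansion."""
    n = len(policy_probs)
    lam = 1.0 - shift_score            # trust the RL policy when shift is low
    return [lam * p + (1.0 - lam) / n for p in policy_probs]

prior = exploration_prior([0.7, 0.2, 0.1], shift_score=0.5)
# halfway between the RL policy and uniform: about [0.517, 0.267, 0.217]
assert abs(sum(prior) - 1.0) < 1e-9
```

At `shift_score=0` the prior is the RL policy itself; at `shift_score=1` MCTS explores uniformly, which is exactly the regime where a policy trained on a different parcel distribution is least trustworthy.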
|
https://arxiv.org/abs/2601.02649
|
Academic Papers
|
svg
|
0132788a5d0eba9392d2b9856d124259547d1650f28288cf316206cab717becd
|
2026-01-07T00:00:00-05:00
|
A Derivative-Free Saddle-search Algorithm With Linear Convergence Rate
|
arXiv:2601.02650v1 Announce Type: new Abstract: We propose a derivative-free saddle-search algorithm designed to locate transition states using only function evaluations. The algorithm employs a nested architecture consisting of an inner eigenvector search and an outer saddle-point search. Through rigorous numerical analysis, we prove the almost sure convergence of the inner step under suitable assumptions. Furthermore, we establish the convergence of the outer search using a decaying step size, while demonstrating linear convergence under constant step size and boundedness conditions. Numerical experiments are provided to validate our theoretical results and demonstrate the algorithm's practical applicability.
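The nested architecture can be illustrated on a toy 2D double-well. The sketch below is a generic derivative-free, dimer-style construction under the paper's general recipe, not its actual algorithm: the inner loop runs shifted power iteration on finite-difference Hessian-vector products to find the lowest-curvature direction, and the outer loop reverses the force component along it; all step sizes and the shift are illustrative choices:

```python
def fd_grad(f, x, h=1e-5):
    """Central finite-difference gradient using only function evaluations."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def min_curv_dir(f, x, v, iters=50, h=1e-4, shift=10.0):
    """Inner eigenvector search: power iteration on (shift*I - H) with
    Hessian-vector products H v approximated by gradient differences."""
    for _ in range(iters):
        gp = fd_grad(f, [xi + h * vi for xi, vi in zip(x, v)])
        gm = fd_grad(f, [xi - h * vi for xi, vi in zip(x, v)])
        w = [shift * vi - (a - b) / (2 * h) for vi, a, b in zip(v, gp, gm)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

def saddle_search(f, x, steps=200, lr=0.1):
    """Outer saddle-point search: follow the force with its component along
    the lowest-curvature direction reversed, climbing toward the saddle."""
    n = len(x)
    v = [1.0 / n ** 0.5] * n           # start with weight in every coordinate
    for _ in range(steps):
        v = min_curv_dir(f, x, v)
        g = fd_grad(f, x)
        gv = sum(gi * vi for gi, vi in zip(g, v))
        x = [xi - lr * (gi - 2 * gv * vi) for xi, gi, vi in zip(x, g, v)]
    return x

# Double well along y with a saddle (transition state) at the origin.
f = lambda p: p[0] ** 2 - p[1] ** 2 + p[1] ** 4 / 4
x_star = saddle_search(f, [0.8, 0.6])
# x_star converges to approximately [0.0, 0.0]
```

Only `f` evaluations are ever used, matching the derivative-free setting; the constant step size here corresponds to the regime where the paper proves linear convergence under boundedness conditions.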
|
https://arxiv.org/abs/2601.02650
|
Academic Papers
|
svg
|
f9bcb4f091a02c227a397b72f9e41bb70d2c943b61107d3b5708b5b3e6b4f33c
|
2026-01-07T00:00:00-05:00
|
Driving Accessibility: Shifting the Narrative & Design of Automated Vehicle Systems for Persons With Disabilities Through a Collaborative Scoring System
|
arXiv:2601.02651v1 Announce Type: new Abstract: Automated vehicles present unique opportunities and challenges, with progress and adoption limited, in part, by policy and regulatory barriers. Underrepresented groups, including individuals with mobility impairments, sensory disabilities, and cognitive conditions, who may benefit most from automation, are often overlooked in crucial discussions on system design, implementation, and usability. Despite the high potential benefits of automated vehicles, the needs of Persons with Disabilities are frequently an afterthought, considered only in terms of secondary accommodations rather than foundational design elements. We aim to shift automated vehicle research and discourse away from this reactive model and toward a proactive and inclusive approach. We first present an overview of the current state of automated vehicle systems. Regarding their adoption, we examine social and technical barriers and advantages for Persons with Disabilities. We analyze existing regulations and policies concerning automated vehicles and Persons with Disabilities, identifying gaps that hinder accessibility. To address these deficiencies, we introduce a scoring rubric intended for use by manufacturers and vehicle designers. The rubric fosters direct collaboration throughout the design process, moving beyond an "afterthought" approach and towards intentional, inclusive innovation. This work was created by authors with varying degrees of personal experience within the realm of disability.
|
https://arxiv.org/abs/2601.02651
|
Academic Papers
|
svg
|
343939670a1c20ecdf9e801526c57f30ab0a1b8f56408b593e8e26d64449e6b9
|
2026-01-07T00:00:00-05:00
|
Backwards Data-Flow Analysis using Prophecy Variable in the BuildIt System
|
arXiv:2601.02653v1 Announce Type: new Abstract: Many program transformations and optimizations require information about the future behavior of the program. A standard way to obtain this information is to build an intermediate program representation, then use a backwards program analysis to propagate relevant information against the flow of control back to the transformation/optimization site. We instead propose to use prophecy variables, which predict information about the future execution of the program, to enable such transformations and optimizations. We implement prophecy variables in BuildIt, a lightweight domain specific language implementation system. BuildIt uses staged compilation to implement high performance domain specific languages embedded within a standard general purpose programming language (C++). The BuildIt first phase uses standard C++ program execution to generate optimized C, C++, and CUDA second phase code. This approach enables BuildIt to eliminate programming language implementation components such as parsers and intermediate representations, delivering a dramatic decrease in the engineering effort required to implement domain specific languages. The combination of prophecy variables and repeated forward program execution enables BuildIt to extend this approach to include transformations and optimizations that require information about the future execution of the program without backwards analyses and without the engineering overhead associated with implementing these analyses. We formalize the use of prophecy variables for this purpose, discuss the implementation of prophecy variables and repeated execution in BuildIt, and present experimental results for BuildIt computations that benefit from optimizations enabled by the information that prophecy variables provide.
|
https://arxiv.org/abs/2601.02653
|
Academic Papers
|
svg
|
7f531c6d513e57c260bf1da2d82193b1052ad8f50965310b774cd44cc5a7ed67
|
2026-01-07T00:00:00-05:00
|
Empirical Comparison of Encoder-Based Language Models and Feature-Based Supervised Machine Learning Approaches to Automated Scoring of Long Essays
|
arXiv:2601.02659v1 Announce Type: new Abstract: Long context may impose challenges for encoder-only language models in text processing, specifically for automated scoring of essays. This study trained several commonly used encoder-based language models for automated scoring of long essays. The performance of these trained models was evaluated and compared with the ensemble models built upon the base language models with a token limit of 512. The evaluated models include BERT-based models (BERT, RoBERTa, DistilBERT, and DeBERTa), ensemble models integrating embeddings from multiple encoder models, and ensembles of feature-based supervised machine learning models, including Gradient-Boosted Decision Trees, eXtreme Gradient Boosting, and Light Gradient Boosting Machine. We trained, validated, and tested each model on a dataset of 17,307 essays, with an 80%/10%/10% split, and evaluated model performance using Quadratic Weighted Kappa. This study revealed that an ensemble-of-embeddings model, which combines multiple pre-trained language model representations with a gradient-boosting classifier as the ensemble learner, significantly outperforms individual language models at scoring long essays.
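A standard way to push essays past a 512-token encoder limit, in the spirit of the ensemble-of-embeddings pipeline evaluated above, is to encode overlapping token windows and pool the per-window vectors (a minimal sketch; the window size, stride, and mean pooling are assumptions rather than the paper's exact configuration):

```python
def chunk_tokens(tokens, max_len=512, stride=256):
    """Split a long token sequence into overlapping windows that each fit
    a 512-token encoder; downstream, per-window embeddings are pooled."""
    if len(tokens) <= max_len:
        return [tokens]
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += stride
    return chunks

def mean_pool(vectors):
    """Average per-chunk embedding vectors into one essay-level vector."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
```

The pooled essay-level vector would then be fed to the gradient-boosting ensemble classifier in place of a single truncated-encoder embedding.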
|
https://arxiv.org/abs/2601.02659
|
Academic Papers
|
svg
|
08fb531ad604744c3b3c32068daf03e01f524c42d31cfd0d1e8853d7303e0c83
|
2026-01-07T00:00:00-05:00
|
When Prompting Meets Spiking: Graph Sparse Prompting via Spiking Graph Prompt Learning
|
arXiv:2601.02662v1 Announce Type: new Abstract: Graph Prompt Feature (GPF) learning has been widely used to adapt pre-trained GNN models to downstream tasks. GPFs first introduce a set of prompt atoms and then learn the optimal prompt vector for each graph node as a linear combination of those atoms. However, existing GPFs generally conduct prompting over all of a node's feature dimensions, which is redundant and sensitive to node feature noise. To overcome this issue, this paper proposes, for the first time, learning sparse graph prompts by leveraging the spiking neuron mechanism, termed Spiking Graph Prompt Feature (SpikingGPF). Our approach is motivated by the observation that spiking neurons perform inexpensive information processing and produce sparse outputs, which naturally fits the task of graph sparse prompting. Specifically, SpikingGPF has two main aspects. First, it learns a sparse prompt vector for each node by exploiting a spiking neuron architecture, enabling prompting on selected node features. This yields a more compact and lightweight prompting design while also improving robustness against node noise. Second, SpikingGPF introduces a novel prompt representation learning model based on sparse representation theory, i.e., it represents each node prompt as a sparse combination of prompt atoms. This encourages a more compact representation and also facilitates efficient computation. Extensive experiments on several benchmarks demonstrate the effectiveness and robustness of SpikingGPF.
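The sparsifying role of the spiking mechanism can be caricatured in a few lines (a toy sketch; the atom dictionary, coefficients, and fixed threshold are illustrative, and real spiking neurons integrate potentials over time): a spike fires only where the combined prompt signal crosses the threshold, so most feature dimensions receive no prompt at all.

```python
def spiking_mask(potentials, threshold=1.0):
    """Heaviside spike: fire (1.0) only where the potential reaches the
    threshold, producing a sparse binary mask over feature dimensions."""
    return [1.0 if p >= threshold else 0.0 for p in potentials]

def sparse_prompt(node_feat, atoms, coeffs, threshold=1.0):
    """Prompt as a combination of prompt atoms, gated by the spike mask so
    that only selected feature dimensions are actually prompted."""
    dim = len(node_feat)
    prompt = [sum(c * a[i] for c, a in zip(coeffs, atoms)) for i in range(dim)]
    mask = spiking_mask(prompt, threshold)
    return [f + m * p for f, m, p in zip(node_feat, mask, prompt)]

# Dimension 0 stays below threshold (no spike), dimension 1 is prompted.
out = sparse_prompt([1.0, 1.0], [[1.0, 0.0], [0.0, 2.0]], [0.5, 1.0])
```

Sub-threshold dimensions pass through unchanged, which is the compactness and noise-robustness argument in miniature.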
|
https://arxiv.org/abs/2601.02662
|
Academic Papers
|
svg
|
a32934c5c652dc07ab3c98cf310156082fe2c224457f9bc1e23a46b7d6843c5b
|
2026-01-07T00:00:00-05:00
|
When Do Tools and Planning Help LLMs Think? A Cost- and Latency-Aware Benchmark
|
arXiv:2601.02663v1 Announce Type: new Abstract: Modern large language models (LLMs) increasingly rely on inference-time planning and external tools to improve reasoning. We benchmark this behavior on two real-world settings: event-centric question answering over graph-structured knowledge (Event-QA) and persuasive response generation in Reddit ChangeMyView (CMV). Using LangChain and LangGraph, we compare a one-shot baseline against a plan--execute--replan agent equipped with task-specific tools (DBpedia SPARQL/lookup/schema exploration, Wikipedia-focused retrieval, and topical web search). We evaluate on 60 examples each from Event-QA and CMV (3 splits of 20), and report both mean end-to-end latency and per-example token cost estimates. We evaluate GPT-4o and GPT-4o-mini under identical workflows and report accuracy and end-to-end latency. On Event-QA, the best tool-augmented configuration improves accuracy (e.g., 47.5\% $\rightarrow$ 67.5\% for GPT-4o) while increasing latency by orders of magnitude ($\sim$8s $\rightarrow$ $\sim$317s per example). On CMV, one-shot prompting is strongest (e.g., GPT-4o-mini achieves 75\% at $\sim$6s), and planning+search increases latency substantially without consistent gains. However, complex multi-tool orchestration exposes failure modes where the smaller model degrades. Overall, the findings highlight the need for task-specific, cost-aware choices of both model size and agent/tooling complexity.
|
https://arxiv.org/abs/2601.02663
|
Academic Papers
|
svg
|
6485bf0d3fb04d66f39c964ee03b59389d54601bf94dc070b8aaddca615aee24
|
2026-01-07T00:00:00-05:00
|
Inferring Causal Graph Temporal Logic Formulas to Expedite Reinforcement Learning in Temporally Extended Tasks
|
arXiv:2601.02666v1 Announce Type: new Abstract: Decision-making tasks often unfold on graphs with spatial-temporal dynamics. Black-box reinforcement learning often overlooks how local changes spread through network structure, limiting sample efficiency and interpretability. We present GTL-CIRL, a closed-loop framework that simultaneously learns policies and mines Causal Graph Temporal Logic (Causal GTL) specifications. The method shapes rewards with robustness, collects counterexamples when effects fail, and uses Gaussian Process (GP) driven Bayesian optimization to refine parameterized cause templates. The GP models capture spatial and temporal correlations in the system dynamics, enabling efficient exploration of complex parameter spaces. Case studies in gene and power networks show faster learning and clearer, verifiable behavior compared to standard RL baselines.
|
https://arxiv.org/abs/2601.02666
|
Academic Papers
|
svg
|
a2438e9d9194d97d74761979d6f5815dccafe9009d07f33b98ce972bc03aef04
|
2026-01-07T00:00:00-05:00
|
MAFS: Multi-head Attention Feature Selection for High-Dimensional Data via Deep Fusion of Filter Methods
|
arXiv:2601.02668v1 Announce Type: new Abstract: Feature selection is essential for high-dimensional biomedical data, enabling stronger predictive performance, reduced computational cost, and improved interpretability in precision medicine applications. Existing approaches face notable challenges. Filter methods are highly scalable but cannot capture complex relationships or eliminate redundancy. Deep learning-based approaches can model nonlinear patterns but often lack stability, interpretability, and efficiency at scale. Single-head attention improves interpretability but is limited in capturing multi-level dependencies and remains sensitive to initialization, reducing reproducibility. Most existing methods rarely combine statistical interpretability with the representational power of deep learning, particularly in ultra-high-dimensional settings. Here, we introduce MAFS (Multi-head Attention-based Feature Selection), a hybrid framework that integrates statistical priors with deep learning capabilities. MAFS begins with filter-based priors for stable initialization and guide learning. It then uses multi-head attention to examine features from multiple perspectives in parallel, capturing complex nonlinear relationships and interactions. Finally, a reordering module consolidates outputs across attention heads, resolving conflicts and minimizing information loss to generate robust and consistent feature rankings. This design combines statistical guidance with deep modeling capacity, yielding interpretable importance scores while maximizing retention of informative signals. Across simulated and real-world datasets, including cancer gene expression and Alzheimer's disease data, MAFS consistently achieves superior coverage and stability compared with existing filter-based and deep learning-based alternatives, offering a scalable, interpretable, and robust solution for feature selection in high-dimensional biomedical data.
|
https://arxiv.org/abs/2601.02668
|
Academic Papers
|
svg
|
d480c72c1813508f97e93cc3d3ba13bf21e0b2228719d3b75cb1a8b8f33e82b4
|
2026-01-07T00:00:00-05:00
|
Towards Comprehensive Stage-wise Benchmarking of Large Language Models in Fact-Checking
|
arXiv:2601.02669v1 Announce Type: new Abstract: Large Language Models (LLMs) are increasingly deployed in real-world fact-checking systems, yet existing evaluations focus predominantly on claim verification and overlook the broader fact-checking workflow, including claim extraction and evidence retrieval. This narrow focus prevents current benchmarks from revealing systematic reasoning failures, factual blind spots, and robustness limitations of modern LLMs. To bridge this gap, we present FactArena, a fully automated arena-style evaluation framework that conducts comprehensive, stage-wise benchmarking of LLMs across the complete fact-checking pipeline. FactArena integrates three key components: (i) an LLM-driven fact-checking process that standardizes claim decomposition, evidence retrieval via tool-augmented interactions, and justification-based verdict prediction; (ii) an arena-styled judgment mechanism guided by consolidated reference guidelines to ensure unbiased and consistent pairwise comparisons across heterogeneous judge agents; and (iii) an arena-driven claim-evolution module that adaptively generates more challenging and semantically controlled claims to probe LLMs' factual robustness beyond fixed seed data. Across 16 state-of-the-art LLMs spanning seven model families, FactArena produces stable and interpretable rankings. Our analyses further reveal significant discrepancies between static claim-verification accuracy and end-to-end fact-checking competence, highlighting the necessity of holistic evaluation. The proposed framework offers a scalable and trustworthy paradigm for diagnosing LLMs' factual reasoning, guiding future model development, and advancing the reliable deployment of LLMs in safety-critical fact-checking applications.
|
https://arxiv.org/abs/2601.02669
|
Academic Papers
|
svg
|
48138af08940cf935c80db8f60b22632587ef16ff657b33d73fbd6c79cde3675
|
2026-01-07T00:00:00-05:00
|
Multi-Turn Jailbreaking of Aligned LLMs via Lexical Anchor Tree Search
|
arXiv:2601.02670v1 Announce Type: new Abstract: Most jailbreak methods achieve high attack success rates (ASR) but require attacker LLMs to craft adversarial queries and/or demand high query budgets. These resource limitations make jailbreaking expensive, and the queries generated by attacker LLMs often consist of non-interpretable random prefixes. This paper introduces Lexical Anchor Tree Search (LATS), addressing these limitations through an attacker-LLM-free method that operates purely via lexical anchor injection. LATS reformulates jailbreaking as a breadth-first tree search over multi-turn dialogues, where each node incrementally injects missing content words from the attack goal into benign prompts. Evaluations on AdvBench and HarmBench demonstrate that LATS achieves 97-100% ASR on latest GPT, Claude, and Llama models with an average of only ~6.4 queries, compared to 20+ queries required by other methods. These results highlight conversational structure as a potent and under-protected attack surface, while demonstrating superior query efficiency in an era where high ASR is readily achievable. Our code will be released to support reproducibility.
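The search procedure can be sketched as a breadth-first enumeration of word-injection sequences (an illustrative reconstruction from the abstract; the prompt templates, the success judge, and the simulated `is_success` oracle below are assumptions):

```python
from collections import deque

def lexical_anchor_bfs(goal_words, is_success, max_depth=None):
    """Breadth-first tree search over multi-turn dialogues, where each node
    injects one more missing content word from the attack goal. Every
    dequeued node stands in for one model query, which is how a query
    budget would be accounted."""
    max_depth = len(goal_words) if max_depth is None else max_depth
    queue = deque([((), 0)])       # (words injected so far, depth)
    seen = {frozenset()}           # dedupe word sets already tried
    queries = 0
    while queue:
        injected, depth = queue.popleft()
        queries += 1               # one (simulated) model query per node
        if is_success(injected):
            return list(injected), queries
        if depth < max_depth:
            for w in goal_words:
                key = frozenset(injected) | {w}
                if w not in injected and key not in seen:
                    seen.add(key)
                    queue.append((injected + (w,), depth + 1))
    return None, queries

# Toy oracle: the "attack" succeeds once both anchor words are injected.
goal = ["make", "a", "device"]
result, used = lexical_anchor_bfs(goal, lambda inj: {"make", "device"} <= set(inj))
```

BFS guarantees the shallowest (fewest-injections) success is found first, which is where the low average query count would come from.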
|
https://arxiv.org/abs/2601.02670
|
Academic Papers
|
svg
|
a35aa60f405352012416f807224ccf57bde92c21c4f58806f7aeb78228ab56bb
|
2026-01-07T00:00:00-05:00
|
Extracting books from production language models
|
arXiv:2601.02671v1 Announce Type: new Abstract: Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized data can be extracted in the model's outputs. While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question whether similar extraction is feasible for production LLMs, given the safety measures these systems implement. We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book. We evaluate our procedure on four production LLMs -- Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3 -- and we measure extraction success with a score computed from a block-based approximation of longest common substring (nv-recall). With different per-LLM experimental configurations, we were able to extract varying amounts of text. For the Phase 1 probe, it was unnecessary to jailbreak Gemini 2.5 Pro and Grok 3 to extract text (e.g., nv-recall of 76.8% and 70.3%, respectively, for Harry Potter and the Sorcerer's Stone), while it was necessary for Claude 3.7 Sonnet and GPT-4.1. In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim (e.g., nv-recall=95.8%). GPT-4.1 requires significantly more BoN attempts (e.g., 20X), and eventually refuses to continue (e.g., nv-recall=4.0%). Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs.
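A block-based verbatim-overlap score of the kind described can be sketched as follows (the paper's exact nv-recall definition is not reproduced here; the word-block size and the substring matching rule are assumptions):

```python
def block_recall(reference, output, block=20):
    """Block-based verbatim-recall sketch: split the reference text into
    fixed-size word blocks and report the fraction of blocks that appear
    verbatim in the model output. This approximates longest-common-substring
    coverage without the quadratic cost of computing it exactly."""
    ref_words = reference.split()
    out_text = " ".join(output.split())   # normalize whitespace
    blocks = [ref_words[i:i + block] for i in range(0, len(ref_words), block)]
    if not blocks:
        return 0.0
    hits = sum(1 for b in blocks if " ".join(b) in out_text)
    return hits / len(blocks)
```

Scoring whole blocks rather than individual words keeps incidental word overlap from inflating the score; only sustained verbatim runs count.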
|
https://arxiv.org/abs/2601.02671
|
Academic Papers
|
svg
|
f97e4e1e4e2f0b8c28110201c033fb3a3cc6217a03a4b735a87524aca4aa69aa
|
2026-01-07T00:00:00-05:00
|
Iterative Structured Pruning for Large Language Models with Multi-Domain Calibration
|
arXiv:2601.02674v1 Announce Type: new Abstract: Large Language Models (LLMs) have achieved remarkable success across a wide spectrum of natural language processing tasks. However, their ever-growing scale introduces significant barriers to real-world deployment, including substantial computational overhead, memory footprint, and inference latency. While model pruning presents a viable solution to these challenges, existing unstructured pruning techniques often yield irregular sparsity patterns that necessitate specialized hardware or software support. In this work, we explore structured pruning, which eliminates entire architectural components and maintains compatibility with standard hardware accelerators. We introduce a novel structured pruning framework that leverages a hybrid multi-domain calibration set and an iterative calibration strategy to effectively identify and remove redundant channels. Extensive experiments on various models across diverse downstream tasks show that our approach achieves significant compression with minimal performance degradation.
|
https://arxiv.org/abs/2601.02674
|
Academic Papers
|
svg
|
7b817252f7cfb2b2d9b0521f359037d4f556b8b2b7fa08ceedd984dbcf8effe6
|
2026-01-07T00:00:00-05:00
|
Uni-FinLLM: A Unified Multimodal Large Language Model with Modular Task Heads for Micro-Level Stock Prediction and Macro-Level Systemic Risk Assessment
|
arXiv:2601.02677v1 Announce Type: new Abstract: Financial institutions and regulators require systems that integrate heterogeneous data to assess risks from stock fluctuations to systemic vulnerabilities. Existing approaches often treat these tasks in isolation, failing to capture cross-scale dependencies. We propose Uni-FinLLM, a unified multimodal large language model that uses a shared Transformer backbone and modular task heads to jointly process financial text, numerical time series, fundamentals, and visual data. Through cross-modal attention and multi-task optimization, it learns a coherent representation for micro-, meso-, and macro-level predictions. Evaluated on stock forecasting, credit-risk assessment, and systemic-risk detection, Uni-FinLLM significantly outperforms baselines. It raises stock directional accuracy to 67.4% (from 61.7%), credit-risk accuracy to 84.1% (from 79.6%), and macro early-warning accuracy to 82.3%. Results validate that a unified multimodal LLM can jointly model asset behavior and systemic vulnerabilities, offering a scalable decision-support engine for finance.
|
https://arxiv.org/abs/2601.02677
|
Academic Papers
|
svg
|
cdf13ee7d2c3a6b61f118326537e8554e055273cf83602d08bbd48a77776f081
|
2026-01-07T00:00:00-05:00
|
Adversarial Contrastive Learning for LLM Quantization Attacks
|
arXiv:2601.02680v1 Announce Type: new Abstract: Model quantization is critical for deploying large language models (LLMs) on resource-constrained hardware, yet recent work has revealed severe security risks in that benign LLMs in full precision may exhibit malicious behaviors after quantization. In this paper, we propose Adversarial Contrastive Learning (ACL), a novel gradient-based quantization attack that achieves superior attack effectiveness by explicitly maximizing the gap between the probabilities of benign and harmful responses. ACL formulates the attack objective as a triplet-based contrastive loss and integrates it with a two-stage distributed fine-tuning strategy based on projected gradient descent to ensure stable and efficient optimization. Extensive experiments demonstrate ACL's remarkable effectiveness, achieving attack success rates of 86.00% for over-refusal, 97.69% for jailbreak, and 92.40% for advertisement injection, substantially outperforming state-of-the-art methods by up to 44.67%, 18.84%, and 50.80%, respectively.
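Two ingredients of such attacks can be sketched concretely (a toy uniform quantizer and an assumed hinge form of the loss; the paper's exact objective and fine-tuning schedule are not reproduced): a hinge on the benign/harmful log-probability gap, and a PGD-style projection that keeps each updated full-precision weight inside the quantization bucket of a reference weight, so the rounded weight is pinned while the full-precision value is still free to move.

```python
def quant(w, scale=0.1):
    """Toy uniform quantizer: snap a weight to the nearest grid level."""
    return round(w / scale) * scale

def project_to_bucket(w, w_ref, scale=0.1, eps=1e-9):
    """PGD-style projection sketch: clip the updated weight w back into the
    quantization interval of w_ref, so quant(w) stays equal to quant(w_ref).
    The small eps keeps the result strictly inside the bucket so floating-
    point rounding cannot tip it onto the adjacent level."""
    center = quant(w_ref, scale)
    return min(max(w, center - scale / 2 + eps), center + scale / 2 - eps)

def triplet_gap_loss(logp_harmful, logp_benign, margin=1.0):
    """Hinge on the log-probability gap (an assumed form, not the paper's
    exact triplet objective): zero once the harmful response is at least
    `margin` log-odds above the benign one."""
    return max(0.0, margin - (logp_harmful - logp_benign))
```

The projection is what makes this a *quantization* attack rather than ordinary fine-tuning: every gradient step is clipped so the rounded model is constrained while the full-precision weights drift.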
|
https://arxiv.org/abs/2601.02680
|
Academic Papers
|
svg
|
444a19d47ed309a5e0f3d88da89d476111c537b32c45c3a02a07dc3e4cda0f37
|
2026-01-07T00:00:00-05:00
|
Topology-Independent Robustness of the Weighted Mean under Label Poisoning Attacks in Heterogeneous Decentralized Learning
|
arXiv:2601.02682v1 Announce Type: new Abstract: Robustness to malicious attacks is crucial for practical decentralized signal processing and machine learning systems. A typical example of such attacks is label poisoning, meaning that some agents possess corrupted local labels and share models trained on these poisoned data. To defend against malicious attacks, existing works often focus on designing robust aggregators; meanwhile, the weighted mean aggregator is typically considered a simple, vulnerable baseline. This paper analyzes the robustness of decentralized gradient descent under label poisoning attacks, considering both robust and weighted mean aggregators. Theoretical results reveal that the learning errors of robust aggregators depend on the network topology, whereas the performance of the weighted mean aggregator is topology-independent. Remarkably, the weighted mean aggregator, although often considered vulnerable, can outperform robust aggregators under sufficient heterogeneity, particularly when: (i) the global contamination rate (i.e., the fraction of poisoned agents for the entire network) is smaller than the local contamination rate (i.e., the maximal fraction of poisoned neighbors for the regular agents); (ii) the network of regular agents is disconnected; or (iii) the network of regular agents is sparse and the local contamination rate is high. Empirical results support our theoretical findings, highlighting the important role of network topology in the robustness to label poisoning attacks.
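The two aggregator families compared in this analysis are easy to state concretely for models represented as vectors (a minimal sketch; the uniform weights and trim level in the usage below are illustrative):

```python
def weighted_mean(models, weights):
    """Weighted mean aggregation of neighbors' model vectors: the baseline
    aggregator whose error turns out to be topology-independent."""
    dim = len(models[0])
    return [sum(w * m[i] for w, m in zip(weights, models)) for i in range(dim)]

def trimmed_mean(models, trim):
    """Coordinate-wise trimmed mean, a typical robust aggregator: drop the
    `trim` largest and `trim` smallest values in each coordinate, then
    average the rest."""
    dim = len(models[0])
    out = []
    for i in range(dim):
        vals = sorted(m[i] for m in models)
        kept = vals[trim:len(vals) - trim]
        out.append(sum(kept) / len(kept))
    return out

# One poisoned neighbor (100.0) among four: trimming discards the outlier,
# while the weighted mean absorbs it.
models = [[0.0], [1.0], [2.0], [100.0]]
```

Under heterogeneity, however, trimming can also discard legitimate but atypical neighbors, which is the mechanism behind the paper's counterintuitive findings.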
|
https://arxiv.org/abs/2601.02682
|
Academic Papers
|
svg
|
54fd4e60db380481ae05805784e4fc50f14ae32457f9fb89dc8a0d1e134613a5
|
2026-01-07T00:00:00-05:00
|
Learning from Prompt itself: the Hierarchical Attribution Prompt Optimization
|
arXiv:2601.02683v1 Announce Type: new Abstract: Optimization is fundamental across numerous disciplines, typically following an iterative process of refining an initial solution to enhance performance. This principle is equally critical in prompt engineering, where designing effective prompts for large language models constitutes a complex optimization challenge. A structured optimization approach requires automated or semi-automated procedures to develop improved prompts, thereby reducing manual effort, improving performance, and yielding an interpretable process. However, current prompt optimization methods often induce prompt drift, where new prompts fix prior failures but impair performance on previously successful tasks. Additionally, generating prompts from scratch can compromise interpretability. To address these limitations, this study proposes the Hierarchical Attribution Prompt Optimization (HAPO) framework, which introduces three innovations: (1) a dynamic attribution mechanism targeting error patterns in training data and prompting history, (2) semantic-unit optimization for editing functional prompt segments, and (3) multimodal-friendly progression supporting both end-to-end LLM and LLM-MLLM workflows. Applied in contexts like single/multi-image QA (e.g., OCRV2) and complex task analysis (e.g., BBH), HAPO demonstrates enhanced optimization efficiency, outperforming comparable automated prompt optimization methods and establishing an extensible paradigm for scalable prompt engineering.
|
https://arxiv.org/abs/2601.02683
|
Academic Papers
|
svg
|
cdef903ae3593020a5835ed30d81ac059f5dd6d7cb123368c3cecc0b0de0ff19
|
2026-01-07T00:00:00-05:00
|
Learning to Nudge: A Scalable Barrier Function Framework for Safe Robot Interaction in Dense Clutter
|
arXiv:2601.02686v1 Announce Type: new Abstract: Robots operating in everyday environments must navigate and manipulate within densely cluttered spaces, where physical contact with surrounding objects is unavoidable. Traditional safety frameworks treat contact as unsafe, restricting robots to collision avoidance and limiting their ability to function in dense, everyday settings. As the number of objects grows, model-based approaches for safe manipulation become computationally intractable; meanwhile, learned methods typically tie safety to the task at hand, making them hard to transfer to new tasks without retraining. In this work we introduce Dense Contact Barrier Functions (DCBF). Our approach bypasses the computational complexity of explicitly modeling multi-object dynamics by instead learning a composable, object-centric function that implicitly captures the safety constraints arising from physical interactions. Trained offline on interactions with a few objects, the learned DCBF composes across arbitrary object sets at runtime, producing a single global safety filter that scales linearly and transfers across tasks without retraining. We validate our approach through simulated experiments in dense clutter, demonstrating its ability to enable collision-free navigation and safe, contact-rich interaction in suitable settings.
|
https://arxiv.org/abs/2601.02686
|
Academic Papers
|
svg
|
4362eb8696924e8f79511b634249141ebc091535f89f34714686ecaa99c6e74f
|
2026-01-07T00:00:00-05:00
|
Multi-channel multi-speaker transformer for speech recognition
|
arXiv:2601.02688v1 Announce Type: new Abstract: With the development of teleconferencing and in-vehicle voice assistants, far-field multi-speaker speech recognition has become a hot research topic. Recently, a multi-channel transformer (MCT) has been proposed, which demonstrates the ability of the transformer to model far-field acoustic environments. However, MCT cannot encode high-dimensional acoustic features for each speaker from mixed input audio because of the interference between speakers. Based on these, we propose the multi-channel multi-speaker transformer (M2Former) for far-field multi-speaker ASR in this paper. Experiments on the SMS-WSJ benchmark show that the M2Former outperforms the neural beamformer, MCT, dual-path RNN with transform-average-concatenate and multi-channel deep clustering based end-to-end systems by 9.2%, 14.3%, 24.9%, and 52.2% respectively, in terms of relative word error rate reduction.
|
https://arxiv.org/abs/2601.02688
|
Academic Papers
|
svg
|
ca99f424f2fe8b3dad0c68c52528a4b2d75fdc05d4571f65d70dd96e4a1b2999
|
2026-01-07T00:00:00-05:00
|
Which Deep Learner? A Systematic Evaluation of Advanced Deep Forecasting Models Accuracy and Efficiency for Network Traffic Prediction
|
arXiv:2601.02694v1 Announce Type: new Abstract: Network traffic prediction is essential for automating modern network management. It is a difficult time series forecasting (TSF) problem that has been addressed by Deep Learning (DL) models due to their ability to capture complex patterns. Advances in forecasting, from sophisticated transformer architectures to simple linear models, have improved performance across diverse prediction tasks. However, given the variability of network traffic across network environments and traffic series timescales, it is essential to identify effective deployment choices and modeling directions for network traffic prediction. This study systematically identifies and evaluates twelve advanced TSF models (including transformer-based and traditional DL approaches, each with unique advantages for network traffic prediction) against three statistical baselines on four real traffic datasets, across multiple time scales and horizons, assessing performance, robustness to anomalies, data gaps, and external factors, as well as data efficiency and resource efficiency in terms of time, memory, and energy. Results highlight performance regimes, efficiency thresholds, and promising architectures that balance accuracy and efficiency, demonstrating robustness to traffic challenges and suggesting new directions beyond traditional RNNs.
|
https://arxiv.org/abs/2601.02694
|
Academic Papers
|
svg
|
0746287a74a448dafd23d3c2e9fc6a910cdb582fbea2cfeddcf20d4b54cada9f
|
2026-01-07T00:00:00-05:00
|
EvoRoute: Experience-Driven Self-Routing LLM Agent Systems
|
arXiv:2601.02695v1 Announce Type: new Abstract: Complex agentic AI systems, powered by a coordinated ensemble of Large Language Models (LLMs), tool and memory modules, have demonstrated remarkable capabilities on intricate, multi-turn tasks. However, this success is shadowed by prohibitive economic costs and severe latency, exposing a critical, yet underexplored, trade-off. We formalize this challenge as the \textbf{Agent System Trilemma}: the inherent tension among achieving state-of-the-art performance, minimizing monetary cost, and ensuring rapid task completion. To dismantle this trilemma, we introduce EvoRoute, a self-evolving model routing paradigm that transcends static, pre-defined model assignments. Leveraging an ever-expanding knowledge base of prior experience, EvoRoute dynamically selects Pareto-optimal LLM backbones at each step, balancing accuracy, efficiency, and resource use, while continually refining its own selection policy through environment feedback. Experiments on challenging agentic benchmarks such as GAIA and BrowseComp+ demonstrate that EvoRoute, when integrated into off-the-shelf agentic systems, not only sustains or enhances system performance but also reduces execution cost by up to $80\%$ and latency by over $70\%$.
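The Pareto-optimal backbone selection step can be sketched over (accuracy, cost, latency) triples (an illustrative reconstruction; EvoRoute's actual scoring and experience-based policy are not reproduced):

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective (higher accuracy,
    lower cost, lower latency) and strictly better on at least one."""
    acc_a, cost_a, lat_a = a
    acc_b, cost_b, lat_b = b
    no_worse = acc_a >= acc_b and cost_a <= cost_b and lat_a <= lat_b
    strictly = acc_a > acc_b or cost_a < cost_b or lat_a < lat_b
    return no_worse and strictly

def pareto_front(candidates):
    """Keep only the backbones that no other candidate dominates; a router
    then picks among these per step rather than over all models."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (accuracy, $-cost, latency-s): the middle candidate is strictly worse
# than the first and is pruned away.
cands = [(0.9, 10, 5), (0.9, 12, 6), (0.7, 1, 1)]
front = pareto_front(cands)
```

Restricting each routing decision to the front is what lets a system cut cost and latency without conceding accuracy: dominated options are never chosen.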
|
https://arxiv.org/abs/2601.02695
|
Academic Papers
|
svg
|
b22080b223720b761431cb2193c9ca849c775ad7689797636b749aeb333cad26
|
2026-01-07T00:00:00-05:00
|
Boosting Accuracy and Interpretability in Multilingual Hate Speech Detection Through Layer Freezing and Explainable AI
|
arXiv:2601.02697v1 Announce Type: new Abstract: Sentiment analysis focuses on identifying the emotional polarity expressed in textual data, typically categorized as positive, negative, or neutral. Hate speech detection, on the other hand, aims to recognize content that incites violence, discrimination, or hostility toward individuals or groups based on attributes such as race, gender, sexual orientation, or religion. Both tasks play a critical role in online content moderation by enabling the detection and mitigation of harmful or offensive material, thereby contributing to safer digital environments. In this study, we examine the performance of three transformer-based models: BERT-base-multilingual-cased, RoBERTa-base, and XLM-RoBERTa-base with the first eight layers frozen, for multilingual sentiment analysis and hate speech detection. The evaluation is conducted across five languages: English, Korean, Japanese, Chinese, and French. The models are compared using standard performance metrics, including accuracy, precision, recall, and F1-score. To enhance model interpretability and provide deeper insight into prediction behavior, we integrate the Local Interpretable Model-agnostic Explanations (LIME) framework, which highlights the contribution of individual words to the models' decisions. By combining state-of-the-art transformer architectures with explainability techniques, this work aims to improve both the effectiveness and transparency of multilingual sentiment analysis and hate speech detection systems.
|
https://arxiv.org/abs/2601.02697
|
Academic Papers
|
svg
|
be0ea18d07fbd01734e1b8ea3bcf47b6cf8446b04b2b9227fb712c7524283894
|
2026-01-07T00:00:00-05:00
|
Enterprise Identity Integration for AI-Assisted Developer Services: Architecture, Implementation, and Case Study
|
arXiv:2601.02698v1 Announce Type: new Abstract: AI-assisted developer services are increasingly embedded in modern IDEs, yet enterprises must ensure these tools operate within existing identity, access control, and governance requirements. The Model Context Protocol (MCP) enables AI assistants to retrieve structured internal context, but its specification provides only a minimal authorization model and lacks guidance on integrating enterprise SSO. This article presents a practical architecture that incorporates OAuth 2.0 and OpenID Connect (OIDC) into MCP-enabled developer environments. It describes how IDE extensions obtain and present tokens, how MCP servers validate them through an identity provider, and how scopes and claims can enforce least-privilege access. A prototype implementation using Visual Studio Code, a Python-based MCP server, and an OIDC-compliant IdP demonstrates feasibility. A case study evaluates authentication latency, token-validation overhead, operational considerations, and AI-specific risks. The approach provides a deployable pattern for organizations adopting AI-assisted developer tools while maintaining identity assurance and auditability.
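The scope-and-claims enforcement step can be sketched as follows (a hypothetical helper, not code from the article; in a real deployment the JWT signature must first be verified against the IdP's published JWKS keys before any claim is trusted):

```python
import time

def authorize(claims, required_scope, audience):
    """Enforce least-privilege access from already-verified token claims:
    check the intended audience, reject expired tokens, and require the
    specific scope for the requested MCP operation."""
    if claims.get("aud") != audience:
        return False                      # token minted for another service
    if claims.get("exp", 0) <= time.time():
        return False                      # expired token
    granted = claims.get("scope", "").split()
    return required_scope in granted      # least-privilege scope check
```

Scoping each MCP tool to its own value (e.g. a hypothetical `mcp.read` vs. `mcp.write`) is what turns the IdP's token into an auditable, per-operation access decision.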
|
https://arxiv.org/abs/2601.02698
|
Academic Papers
|
svg
|
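The identity-integration record above describes scopes and claims enforcing least-privilege access on an MCP server. A minimal sketch of that enforcement step, assuming the OIDC access token has already been validated and its claims decoded; the function names (`has_scope`, `require_scope`) are illustrative, not from the paper:

```python
# Least-privilege scope check on decoded token claims. Per OAuth 2.0
# (RFC 6749), the "scope" value is a space-delimited string of scope tokens.

def has_scope(claims: dict, required: str) -> bool:
    """Return True if the space-delimited 'scope' claim grants `required`."""
    granted = set(claims.get("scope", "").split())
    return required in granted

def require_scope(claims: dict, required: str) -> None:
    """Raise if the token does not carry the required scope."""
    if not has_scope(claims, required):
        raise PermissionError(f"token lacks required scope: {required}")

# Hypothetical claims as an IdP might issue them for a developer tool.
claims = {"sub": "dev-123", "scope": "mcp.read repo.search"}
assert has_scope(claims, "mcp.read")
assert not has_scope(claims, "mcp.write")
```

Signature validation, issuer/audience checks, and expiry would precede this in a real deployment; the sketch only covers the scope gate.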
250518c204a3dbb5f091f3fa370982205cafc70b1ef8a2ff6a7dd376f7633c2c
|
2026-01-07T00:00:00-05:00
|
Adversarial Question Answering Robustness: A Multi-Level Error Analysis and Mitigation Study
|
arXiv:2601.02700v1 Announce Type: new Abstract: Question answering (QA) systems achieve impressive performance on standard benchmarks like SQuAD, but remain vulnerable to adversarial examples. This project investigates the adversarial robustness of transformer models on the AddSent adversarial dataset through systematic experimentation across model scales and targeted mitigation strategies. We perform comprehensive multi-level error analysis using five complementary categorization schemes, identifying negation confusion and entity substitution as the primary failure modes. Through systematic evaluation of adversarial fine-tuning ratios, we identify 80% clean + 20% adversarial data as optimal. Data augmentation experiments reveal a capacity bottleneck in small models. Scaling from ELECTRA-small (14M parameters) to ELECTRA-base (110M parameters) eliminates the robustness-accuracy trade-off, achieving substantial improvements on both clean and adversarial data. We implement three targeted mitigation strategies, with Entity-Aware contrastive learning achieving best performance: 89.89% AddSent Exact Match (EM) and 90.73% SQuAD EM, representing 94.9% closure of the adversarial gap. To our knowledge, this is the first work integrating comprehensive linguistic error analysis with Named Entity Recognition (NER)-guided contrastive learning for adversarial QA, demonstrating that targeted mitigation can achieve near-parity between clean and adversarial performance.
|
https://arxiv.org/abs/2601.02700
|
Academic Papers
|
svg
|
d6dc0413be11a3ba19ff1da9c18dde3c41a4e2adfec672951c45937258731fe0
|
2026-01-07T00:00:00-05:00
|
Topology-Aware Spatio-Temporal Graph Transformer for Predicting Smart Grid Failures
|
arXiv:2601.02701v1 Announce Type: new Abstract: Smart grid infrastructure needs improved resilience and preventive maintenance through more accurate predictions. Current methodologies fail to accurately represent spatio-temporal-causal interdependencies and to handle class imbalance in failure prediction tasks. This study introduces a Topology-Aware Spatio-Temporal Graph Transformer (ST-GT) architecture that overcomes existing limitations by using three main innovations: (1) directly incorporating physical transmission network topology into the transformer attention mechanism to identify spatial failure propagation patterns; (2) unified processing of static topological descriptors and (3) temporal Phasor Measurement Units (PMU) sequences in an end-to-end framework. The ST-GT model exhibited outstanding performance in five-fold cross-validation across 10 substations, attaining perfect recall (1.000 $\pm$ 0.001) and an F1-score of 0.858 $\pm$ 0.009, markedly surpassing XGBoost baselines (0.683 accuracy/F1). Perfect recall guarantees that no critical failures are overlooked, which is essential for grid safety; however, it may lead to an increase in false alarm rates. This framework integrates temporal dynamics modeling with spatial graph awareness for critical infrastructure monitoring. It offers interpretable insights into failure propagation pathways and enhances maintenance strategies. Future research focuses on developing cost-weighted loss functions for precision-recall trade-off enhancement, implementing real-time monitoring systems with uncertainty quantification, and creating cost-sensitive frameworks balancing false alarm expenditures with failure consequences. The methodology's success suggests its potential for wider application in critical infrastructure areas requiring spatio-temporal failure prediction.
|
https://arxiv.org/abs/2601.02701
|
Academic Papers
|
svg
|
95d373e9679c14b041f58825fc768f245445b3ebe050713a25d6242fb6718654
|
2026-01-07T00:00:00-05:00
|
Learning User Preferences Through Interaction for Long-Term Collaboration
|
arXiv:2601.02702v1 Announce Type: new Abstract: As conversational agents accumulate experience collaborating with users, adapting to user preferences is essential for fostering long-term relationships and improving collaboration quality over time. We introduce MultiSessionCollab, a benchmark that evaluates how well agents can learn user preferences and leverage them to improve collaboration quality throughout multiple sessions. To develop agents that succeed in this setting, we present long-term collaborative agents equipped with a memory that persists and refines user preference as interaction experience accumulates. Moreover, we demonstrate that learning signals can be derived from user simulator behavior in MultiSessionCollab to train agents to generate more comprehensive reflections and update their memory more effectively. Extensive experiments show that equipping agents with memory improves long-term collaboration, yielding higher task success rates, more efficient interactions, and reduced user effort. Finally, we conduct a human user study that demonstrates that memory helps improve user experience in real-world settings.
|
https://arxiv.org/abs/2601.02702
|
Academic Papers
|
svg
|
fccdb08bae7b73c85414f963c674502e23823f893a4d8676b375cced3fdfe41a
|
2026-01-07T00:00:00-05:00
|
Exact Constructive Digit-by-Digit Algorithms for Integer $e$-th Root Extraction
|
arXiv:2601.02703v1 Announce Type: new Abstract: We present a unified constructive digit-by-digit framework for exact root extraction using only integer arithmetic. The core contribution is a complete correctness theory for the fractional square root algorithm, proving that each computed decimal digit is exact and final, together with a sharp truncation error bound of $10^{-k}$ after $k$ digits. We further develop an invariant-based framework for computing the integer $e$-th root $\lfloor N^{1/e} \rfloor$ of a non-negative integer $N$ for arbitrary fixed exponents $e \ge 2$, derived directly from the binomial theorem. This method generalizes the classical long-division square root algorithm, preserves a constructive remainder invariant throughout the computation, and provides an exact decision procedure for perfect $e$-th power detection. We also explain why exact digit-by-digit fractional extraction with non-revisable digits is structurally possible only for square roots ($e=2$), whereas higher-order roots ($e \ge 3$) exhibit nonlinear coupling that prevents digit stability under scaling. All proofs are carried out in a constructive, algorithmic manner consistent with Bishop-style constructive mathematics, yielding explicit algorithmic witnesses, decidable predicates, and guaranteed termination. The resulting algorithms require no division or floating-point operations and are well suited to symbolic computation, verified exact arithmetic, educational exposition, and digital hardware implementation.
|
https://arxiv.org/abs/2601.02703
|
Academic Papers
|
svg
|
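The e-th root record above generalizes the classical long-division square root algorithm and its remainder invariant. A sketch of that classical base case for $e=2$, using only integer arithmetic and producing one exact decimal digit per step (the invariant maintained is $r = \text{prefix} - x^2$, as in the paper's framework; this is the standard textbook algorithm, not the paper's generalization):

```python
# Digit-by-digit integer square root via base-100 digit groups: each step
# appends one exact, final decimal digit, using only integer arithmetic.

def isqrt_digits(n: int) -> tuple[int, int]:
    """Return (floor(sqrt(n)), remainder) with n == x*x + r on exit."""
    assert n >= 0
    # Split n into base-100 "digit pairs", most significant first.
    pairs = []
    while n > 0:
        pairs.append(n % 100)
        n //= 100
    pairs.reverse()
    if not pairs:
        pairs = [0]
    x, r = 0, 0
    for p in pairs:
        r = r * 100 + p
        # Largest digit d with (20x + d)*d <= r, since
        # (10x + d)^2 = 100*x^2 + (20x + d)*d.
        d = 9
        while (20 * x + d) * d > r:
            d -= 1
        r -= (20 * x + d) * d
        x = 10 * x + d
    return x, r

assert isqrt_digits(152399025) == (12345, 0)   # perfect square: remainder 0
assert isqrt_digits(10) == (3, 1)              # 10 = 3*3 + 1
```

The remainder doubling as a perfect-square test (r == 0) mirrors the paper's exact decision procedure for perfect $e$-th power detection.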
4c334d14eead97418e88bf7996cad41e375280d37991bcd08c393ff964393fed
|
2026-01-07T00:00:00-05:00
|
Analysis of Various Manipulator Configurations Based on Multi-Objective Black-Box Optimization
|
arXiv:2601.02704v1 Announce Type: new Abstract: Various 6-degree-of-freedom (DOF) and 7-DOF manipulators have been developed to date. Over a long history, their joint configurations and link length ratios have been determined empirically. In recent years, the development of robotic foundation models has become increasingly active, leading to the continuous proposal of various manipulators to support these models. However, none of these manipulators share exactly the same structure, as the order of joints and the ratio of link lengths differ among robots. Therefore, in order to discuss the optimal structure of a manipulator, we performed multi-objective optimization from the perspectives of end-effector reachability and joint torque. We analyze where existing manipulator structures stand within the sampling results of the optimization and provide insights for future manipulator design.
|
https://arxiv.org/abs/2601.02704
|
Academic Papers
|
svg
|
abb93af2254f3378af7aeec05a3d7cb81a667d1d60316c9793a5bcd38255be39
|
2026-01-07T00:00:00-05:00
|
Scaling Laws of Machine Learning for Optimal Power Flow
|
arXiv:2601.02706v1 Announce Type: new Abstract: Optimal power flow (OPF) is one of the fundamental tasks for power system operations. While machine learning (ML) approaches such as deep neural networks (DNNs) have been widely studied to enhance OPF solution speed and performance, their practical deployment faces two critical scaling questions: What is the minimum training data volume required for reliable results? How should ML models' complexity balance accuracy with real-time computational limits? Existing studies evaluate discrete scenarios without quantifying these scaling relationships, leading to trial-and-error-based ML development in real-world applications. This work presents the first systematic scaling study for ML-based OPF across two dimensions: data scale (0.1K-40K training samples) and compute scale (multiple NN architectures with varying FLOPs). Our results reveal consistent power-law relationships on both DNNs and physics-informed NNs (PINNs) between each resource dimension and three core performance metrics: prediction error (MAE), constraint violations and speed. We find that for ACOPF, the accuracy metric scales with dataset size and training compute. These scaling laws enable predictable and principled ML pipeline design for OPF. We further identify the divergence between prediction accuracy and constraint feasibility and characterize the compute-optimal frontier. This work provides quantitative guidance for ML-OPF design and deployments.
|
https://arxiv.org/abs/2601.02706
|
Academic Papers
|
svg
|
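The scaling-laws record above reports power-law relationships between data/compute scale and error. A hedged sketch of how such an exponent is typically recovered, via ordinary least squares in log-log space; the data below is synthetic, not from the paper:

```python
import math

# Fit err = a * n**(-b) by linear regression on (log n, log err):
# log err = log a - b * log n, so the slope is -b.

def fit_power_law(ns, errs):
    """Return (a, b) for err = a * n**(-b), via OLS in log-log space."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(e) for e in errs]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sxy / sxx
    return math.exp(ybar - slope * xbar), -slope

# Synthetic errors lying exactly on a power law with a=0.5, b=0.3.
ns = [100, 1000, 10000, 40000]
errs = [0.5 * n ** -0.3 for n in ns]
a, b = fit_power_law(ns, errs)
assert abs(a - 0.5) < 1e-9 and abs(b - 0.3) < 1e-9
```

On real measurements the fit is approximate, and the residuals themselves indicate where a single power law stops describing the regime.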
14e90ffdef69e63a504fed37427628c3405e07a2e825f35767ebd93dc2cf4f97
|
2026-01-07T00:00:00-05:00
|
CREAM: Continual Retrieval on Dynamic Streaming Corpora with Adaptive Soft Memory
|
arXiv:2601.02708v1 Announce Type: new Abstract: Information retrieval (IR) in dynamic data streams is emerging as a challenging task, as shifts in data distribution degrade the performance of AI-powered IR systems. To mitigate this issue, memory-based continual learning has been widely adopted for IR. However, existing methods rely on a fixed set of queries with ground-truth relevant documents, which limits generalization to unseen queries and documents, making them impractical for real-world applications. To enable more effective learning with unseen topics of a new corpus without ground-truth labels, we propose CREAM, a self-supervised framework for memory-based continual retrieval. CREAM captures the evolving semantics of streaming queries and documents into dynamically structured soft memory and leverages it to adapt to both seen and unseen topics in an unsupervised setting. We realize this through three key techniques: fine-grained similarity estimation, regularized cluster prototyping, and stratified coreset sampling. Experiments on two benchmark datasets demonstrate that CREAM exhibits superior adaptability and retrieval accuracy, outperforming the strongest method in a label-free setting by 27.79\% in Success@5 and 44.5\% in Recall@10 on average, and achieving performance comparable to or even exceeding that of supervised methods.
|
https://arxiv.org/abs/2601.02708
|
Academic Papers
|
svg
|
64396dc7c3e523fbd8c540b23deda768cef08b299b55680ea0d27aff3ca03eaf
|
2026-01-07T00:00:00-05:00
|
GRRE: Leveraging G-Channel Removed Reconstruction Error for Robust Detection of AI-Generated Images
|
arXiv:2601.02709v1 Announce Type: new Abstract: The rapid progress of generative models, particularly diffusion models and GANs, has greatly increased the difficulty of distinguishing synthetic images from real ones. Although numerous detection methods have been proposed, their accuracy often degrades when applied to images generated by novel or unseen generative models, highlighting the challenge of achieving strong generalization. To address this challenge, we introduce a novel detection paradigm based on channel removal reconstruction. Specifically, we observe that when the green (G) channel is removed from real images and reconstructed, the resulting reconstruction errors differ significantly from those of AI-generated images. Building upon this insight, we propose G-channel Removed Reconstruction Error (GRRE), a simple yet effective method that exploits this discrepancy for robust AI-generated image detection. Extensive experiments demonstrate that GRRE consistently achieves high detection accuracy across multiple generative models, including those unseen during training. Compared with existing approaches, GRRE not only maintains strong robustness against various perturbations and post-processing operations but also exhibits superior cross-model generalization. These results highlight the potential of channel-removal-based reconstruction as a powerful forensic tool for safeguarding image authenticity in the era of generative AI.
|
https://arxiv.org/abs/2601.02709
|
Academic Papers
|
svg
|
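The GRRE record above scores images by the error of reconstructing a removed G channel. The paper's reconstructor is not specified in the abstract, so the sketch below uses a deliberately naive stand-in (predicting G as the mean of R and B) purely to illustrate the scoring mechanism:

```python
# Hypothetical illustration of channel-removal reconstruction error.
# The (r + b) / 2 predictor is a placeholder, NOT the paper's reconstructor.

def g_reconstruction_error(pixels):
    """pixels: list of (r, g, b) tuples; mean absolute error on G."""
    err = 0.0
    for r, g, b in pixels:
        g_hat = (r + b) / 2          # placeholder reconstruction of G
        err += abs(g - g_hat)
    return err / len(pixels)

# Grayish pixels (G near the R/B mean) score low; saturated greens score high.
low = g_reconstruction_error([(100, 101, 102), (50, 51, 52)])
high = g_reconstruction_error([(20, 200, 30), (10, 180, 40)])
assert low < high
```

The detection claim in the paper is that the *distribution* of such errors differs between real and AI-generated images; a classifier thresholds or learns on that statistic.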
c3bc0ae0aae25a104502e9795862833f7960e7d6d732fc300373de36f9f26b5c
|
2026-01-07T00:00:00-05:00
|
Time-Scaling Is What Agents Need Now
|
arXiv:2601.02714v1 Announce Type: new Abstract: Early artificial intelligence paradigms exhibited separated cognitive functions: Neural Networks focused on "perception-representation," Reinforcement Learning on "decision-making-behavior," and Symbolic AI on "knowledge-reasoning." With Transformer-based large models and world models, these paradigms are converging into cognitive agents with closed-loop "perception-decision-action" capabilities. Humans solve complex problems under limited cognitive resources through temporalized sequential reasoning. Language relies on problem space search for deep semantic reasoning. While early large language models (LLMs) could generate fluent text, they lacked robust semantic reasoning capabilities. Prompting techniques like Chain-of-Thought (CoT) and Tree-of-Thought (ToT) extended reasoning paths by making intermediate steps explicit. Recent models like DeepSeek-R1 enhanced performance through explicit reasoning trajectories. However, these methods have limitations in search completeness and efficiency. This highlights the need for "Time-Scaling"--the systematic extension and optimization of an agent's ability to unfold reasoning over time. Time-Scaling refers to architectural design utilizing extended temporal pathways, enabling deeper problem space exploration, dynamic strategy adjustment, and enhanced metacognitive control, paralleling human sequential reasoning under cognitive constraints. It represents a critical frontier for enhancing deep reasoning and problem-solving without proportional increases in static model parameters. Advancing intelligent agent capabilities requires placing Time-Scaling principles at the forefront, positioning explicit temporal reasoning management as foundational.
|
https://arxiv.org/abs/2601.02714
|
Academic Papers
|
svg
|
6b89b151f7d3ec39c325c778659c6bfe9bd0ff371dd9be70cb0e97a0843bfe4a
|
2026-01-07T00:00:00-05:00
|
CAMO: Category-Agnostic 3D Motion Transfer from Monocular 2D Videos
|
arXiv:2601.02716v1 Announce Type: new Abstract: Motion transfer from 2D videos to 3D assets is a challenging problem, due to inherent pose ambiguities and diverse object shapes, often requiring category-specific parametric templates. We propose CAMO, a category-agnostic framework that transfers motion to diverse target meshes directly from monocular 2D videos without relying on predefined templates or explicit 3D supervision. The core of CAMO is a morphology-parameterized articulated 3D Gaussian splatting model combined with dense semantic correspondences to jointly adapt shape and pose through optimization. This approach effectively alleviates shape-pose ambiguities, enabling visually faithful motion transfer for diverse categories. Experimental results demonstrate superior motion accuracy, efficiency, and visual coherence compared to existing methods, significantly advancing motion transfer in varied object categories and casual video scenarios.
|
https://arxiv.org/abs/2601.02716
|
Academic Papers
|
svg
|
fdd2c5a8056c916c0e2ce336f0c94e93ba8cfd33ce1976e3c23f8035e123e86f
|
2026-01-07T00:00:00-05:00
|
Privacy-Preserving AI-Enabled Decentralized Learning and Employment Records System
|
arXiv:2601.02720v1 Announce Type: new Abstract: Learning and Employment Record (LER) systems are emerging as critical infrastructure for securely compiling and sharing educational and work achievements. Existing blockchain-based platforms leverage verifiable credentials but typically lack automated skill-credential generation and the ability to incorporate unstructured evidence of learning. In this paper, a privacy-preserving, AI-enabled decentralized LER system is proposed to address these gaps. Digitally signed transcripts from educational institutions are accepted, and verifiable self-issued skill credentials are derived inside a trusted execution environment (TEE) by a natural language processing pipeline that analyzes formal records (e.g., transcripts, syllabi) and informal artifacts. All verification and job-skill matching are performed inside the enclave with selective disclosure, so raw credentials and private keys remain enclave-confined. Job matching relies solely on attested skill vectors and is invariant to non-skill resume fields, thereby reducing opportunities for screening bias. The NLP component was evaluated on sample learner data; the mapping follows the validated Syllabus-to-O*NET methodology, and a stability test across repeated runs observed <5% variance in top-ranked skills. Formal security statements and proof sketches are provided showing that derived credentials are unforgeable and that sensitive information remains confidential. The proposed system thus supports secure education and employment credentialing, robust transcript verification, and automated, privacy-preserving skill extraction within a decentralized framework.
|
https://arxiv.org/abs/2601.02720
|
Academic Papers
|
svg
|
ce9e3299fceacc2884d802fbf3c047404f8db49de901f2d2af82167f6b980948
|
2026-01-07T00:00:00-05:00
|
Robust Mesh Saliency GT Acquisition in VR via View Cone Sampling and Geometric Smoothing
|
arXiv:2601.02721v1 Announce Type: new Abstract: Reliable 3D mesh saliency ground truth (GT) is essential for human-centric visual modeling in virtual reality (VR). However, current 3D mesh saliency GT acquisition methods are generally consistent with 2D image methods, ignoring the differences between 3D geometry topology and 2D image array. Current VR eye-tracking pipelines rely on single ray sampling and Euclidean smoothing, triggering texture attention and signal leakage across gaps. This paper proposes a robust framework to address these limitations. We first introduce a view cone sampling (VCS) strategy, which simulates the human foveal receptive field via Gaussian-distributed ray bundles to improve sampling robustness for complex topologies. Furthermore, a hybrid Manifold-Euclidean constrained diffusion (HCD) algorithm is developed, fusing manifold geodesic constraints with Euclidean scales to ensure topologically-consistent saliency propagation. By mitigating "topological short-circuits" and aliasing, our framework provides a high-fidelity 3D attention acquisition paradigm that aligns with natural human perception, offering a more accurate and robust baseline for 3D mesh saliency research.
|
https://arxiv.org/abs/2601.02721
|
Academic Papers
|
svg
|
31bd94bb1adf05269ffc37a8039932ebdf261210f6f9e612d02d205f76759db3
|
2026-01-07T00:00:00-05:00
|
Loop Closure using AnyLoc Visual Place Recognition in DPV-SLAM
|
arXiv:2601.02723v1 Announce Type: new Abstract: Loop closure is crucial for maintaining the accuracy and consistency of visual SLAM. We propose a method to improve loop closure performance in DPV-SLAM. Our approach integrates AnyLoc, a learning-based visual place recognition technique, as a replacement for the classical Bag of Visual Words (BoVW) loop detection method. In contrast to BoVW, which relies on handcrafted features, AnyLoc utilizes deep feature representations, enabling more robust image retrieval across diverse viewpoints and lighting conditions. Furthermore, we propose an adaptive mechanism that dynamically adjusts the similarity threshold based on environmental conditions, removing the need for manual tuning. Experiments on both indoor and outdoor datasets demonstrate that our method significantly outperforms the original DPV-SLAM in terms of loop closure accuracy and robustness. The proposed method offers a practical and scalable solution for enhancing loop closure performance in modern SLAM systems.
|
https://arxiv.org/abs/2601.02723
|
Academic Papers
|
svg
|
67fda2dba9b6cf5eb56b5c1e142df9a3fff8911e89e3d8843e5522e4ea1ba294
|
2026-01-07T00:00:00-05:00
|
Foreground-Aware Dataset Distillation via Dynamic Patch Selection
|
arXiv:2601.02727v1 Announce Type: new Abstract: In this paper, we propose a foreground-aware dataset distillation method that enhances patch selection in a content-adaptive manner. With the rising computational cost of training large-scale deep models, dataset distillation has emerged as a promising approach for constructing compact synthetic datasets that retain the knowledge of their large original counterparts. However, traditional optimization-based methods often suffer from high computational overhead, memory constraints, and the generation of unrealistic, noise-like images with limited architectural generalization. Recent non-optimization methods alleviate some of these issues by constructing distilled data from real image patches, but their rigid patch selection strategies can still discard critical information about the main objects. To solve this problem, we first leverage Grounded SAM2 to identify foreground objects and compute per-image foreground occupancy, from which we derive a category-wise patch decision threshold. Guided by these thresholds, we design a dynamic patch selection strategy that, for each image, either selects the most informative patch from multiple candidates or directly resizes the full image when the foreground dominates. This dual-path mechanism preserves more key information about the main objects while reducing redundant background content. Extensive experiments on multiple benchmarks show that the proposed method consistently improves distillation performance over existing approaches, producing more informative and representative distilled datasets and enhancing robustness across different architectures and image compositions.
|
https://arxiv.org/abs/2601.02727
|
Academic Papers
|
svg
|
95614e5c3ab218f4d946326b052ab34ba9fc04ec59c1d8bd6f33df58ede2f2bb
|
2026-01-07T00:00:00-05:00
|
CRoPE: Efficient Parametrization of Rotary Positional Embedding
|
arXiv:2601.02728v1 Announce Type: new Abstract: Rotary positional embedding has become the state-of-the-art approach to encode position information in transformer-based models. While it is often succinctly expressed in complex linear algebra, we note that the actual implementation of $Q/K/V$-projections is not equivalent to a complex linear transformation. We argue that complex linear transformation is a more natural parametrization and saves nearly 50\% of the parameters within the attention block. We show empirically that removing such redundancy has negligible impact on the model performance both in-sample and out-of-sample. Our modification achieves more efficient parameter usage, as well as a cleaner interpretation of the representation space.
|
https://arxiv.org/abs/2601.02728
|
Academic Papers
|
svg
|
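The CRoPE record above rests on viewing rotary positional embedding through complex arithmetic. A minimal sketch of that view, using a single rotation frequency for brevity (real RoPE uses a per-pair frequency schedule): pairing feature dimensions into complex numbers, position $m$ rotates by $e^{im\theta}$, and the attention score then depends only on the relative offset $m-n$:

```python
import cmath

def rope(vec, pos, theta=0.1):
    """Rotate each complex 'dimension pair' of vec by pos * theta."""
    return [z * cmath.exp(1j * pos * theta) for z in vec]

def dot(q, k):
    # Real inner product of the underlying 2-D pairs = Re(sum q * conj(k)).
    return sum((qz * kz.conjugate()).real for qz, kz in zip(q, k))

q = [1 + 2j, 0.5 - 1j]
k = [0.3 + 0.7j, -1 + 0.2j]
s1 = dot(rope(q, 5), rope(k, 2))   # positions (5, 2)
s2 = dot(rope(q, 8), rope(k, 5))   # shifted by +3: same relative offset
assert abs(s1 - s2) < 1e-9         # score depends only on m - n
```

The identity behind the assertion: $q e^{im\theta}\,\overline{k e^{in\theta}} = q\bar{k}\,e^{i(m-n)\theta}$, which is invariant under shifting both positions.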
733b93333eefa1d1fffc664a2cf0d77579679ecb39e19dd1890df9c5ce7a8179
|
2026-01-07T00:00:00-05:00
|
HOLO: Homography-Guided Pose Estimator Network for Fine-Grained Visual Localization on SD Maps
|
arXiv:2601.02730v1 Announce Type: new Abstract: Visual localization on standard-definition (SD) maps has emerged as a promising low-cost and scalable solution for autonomous driving. However, existing regression-based approaches often overlook inherent geometric priors, resulting in suboptimal training efficiency and limited localization accuracy. In this paper, we propose a novel homography-guided pose estimator network for fine-grained visual localization between multi-view images and standard-definition (SD) maps. We construct input pairs that satisfy a homography constraint by projecting ground-view features into the BEV domain and enforcing semantic alignment with map features. Then we leverage homography relationships to guide feature fusion and restrict the pose outputs to a valid feasible region, which significantly improves training efficiency and localization accuracy compared to prior methods relying on attention-based fusion and direct 3-DoF pose regression. To the best of our knowledge, this is the first work to unify BEV semantic reasoning with homography learning for image-to-map localization. Furthermore, by explicitly modeling homography transformations, the proposed framework naturally supports cross-resolution inputs, enhancing model flexibility. Extensive experiments on the nuScenes dataset demonstrate that our approach significantly outperforms existing state-of-the-art visual localization methods. Code and pretrained models will be publicly released to foster future research.
|
https://arxiv.org/abs/2601.02730
|
Academic Papers
|
svg
|
e322b5a7592aee45753f62f224208545b821b16463e2e68a8645f85ef80da578
|
2026-01-07T00:00:00-05:00
|
Omni2Sound: Towards Unified Video-Text-to-Audio Generation
|
arXiv:2601.02731v1 Announce Type: new Abstract: Training a unified model integrating video-to-audio (V2A), text-to-audio (T2A), and joint video-text-to-audio (VT2A) generation offers significant application flexibility, yet faces two unexplored foundational challenges: (1) the scarcity of high-quality audio captions with tight A-V-T alignment, leading to severe semantic conflict between multimodal conditions, and (2) cross-task and intra-task competition, manifesting as an adverse V2A-T2A performance trade-off and modality bias in the VT2A task. First, to address data scarcity, we introduce SoundAtlas, a large-scale dataset (470k pairs) that significantly outperforms existing benchmarks and even human experts in quality. Powered by a novel agentic pipeline, it integrates Vision-to-Language Compression to mitigate visual bias of MLLMs, a Junior-Senior Agent Handoff for a fivefold cost reduction, and rigorous Post-hoc Filtering to ensure fidelity. Consequently, SoundAtlas delivers semantically rich and temporally detailed captions with tight V-A-T alignment. Second, we propose Omni2Sound, a unified VT2A diffusion model supporting flexible input modalities. To resolve the inherent cross-task and intra-task competition, we design a three-stage multi-task progressive training schedule that converts cross-task competition into joint optimization and mitigates modality bias in the VT2A task, maintaining both audio-visual alignment and off-screen audio generation faithfulness. Finally, we construct VGGSound-Omni, a comprehensive benchmark for unified evaluation, including challenging off-screen tracks. With a standard DiT backbone, Omni2Sound achieves unified SOTA performance across all three tasks within a single model, demonstrating strong generalization across benchmarks with heterogeneous input conditions. The project page is at https://swapforward.github.io/Omni2Sound.
|
https://arxiv.org/abs/2601.02731
|
Academic Papers
|
svg
|
38fcc630c991b2c62e45ed5891b0c95cf9ef5d289333433e28f335c533067a0e
|
2026-01-07T00:00:00-05:00
|
Agentic Memory Enhanced Recursive Reasoning for Root Cause Localization in Microservices
|
arXiv:2601.02732v1 Announce Type: new Abstract: As contemporary microservice systems become increasingly popular and complex, often comprising hundreds or even thousands of fine-grained, interdependent subsystems, they are experiencing more frequent failures. Ensuring system reliability thus demands accurate root cause localization. While many traditional graph-based and deep learning approaches have been explored for this task, they often rely heavily on pre-defined schemas that struggle to adapt to evolving operational contexts. Consequently, a number of LLM-based methods have recently been proposed. However, these methods still face two major limitations: shallow, symptom-centric reasoning that undermines accuracy, and a lack of cross-alert reuse that leads to redundant reasoning and high latency. In this paper, we conduct a comprehensive study of how Site Reliability Engineers (SREs) localize the root causes of failures, drawing insights from professionals across multiple organizations. Our investigation reveals that expert root cause analysis exhibits three key characteristics: recursiveness, multi-dimensional expansion, and cross-modal reasoning. Motivated by these findings, we introduce AMER-RCL, an agentic memory enhanced recursive reasoning framework for root cause localization in microservices. AMER-RCL employs the Recursive Reasoning RCL engine, a multi-agent framework that performs recursive reasoning on each alert to progressively refine candidate causes, while Agentic Memory incrementally accumulates and reuses reasoning from prior alerts within a time window to reduce redundant exploration and lower inference latency. Experimental results demonstrate that AMER-RCL consistently outperforms state-of-the-art methods in both localization accuracy and inference efficiency.
|
https://arxiv.org/abs/2601.02732
|
Academic Papers
|
svg
|
d58da2defdc5db51af987374119e712e21963f23dd3f16bc887119d373f0ea73
|
2026-01-07T00:00:00-05:00
|
Scalable Tree Ensemble Proximities in Python
|
arXiv:2601.02735v1 Announce Type: new Abstract: Tree ensemble methods such as Random Forests naturally induce supervised similarity measures through their decision tree structure, but existing implementations of proximities derived from tree ensembles typically suffer from quadratic time or memory complexity, limiting their scalability. In this work, we introduce a general framework for efficient proximity computation by defining a family of Separable Weighted Leaf-Collision Proximities. We show that any proximity measure in this family admits an exact sparse matrix factorization, restricting computation to leaf-level collisions and avoiding explicit pairwise comparisons. This formulation enables low-memory, scalable proximity computation using sparse linear algebra in Python. Empirical benchmarks demonstrate substantial runtime and memory improvements over traditional approaches, allowing tree ensemble proximities to scale efficiently to datasets with hundreds of thousands of samples on standard CPU hardware.
|
https://arxiv.org/abs/2601.02735
|
Academic Papers
|
svg
|
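The tree-ensemble proximities record above restricts computation to leaf-level collisions instead of explicit pairwise comparisons. A small illustration of that leaf-collision idea in pure Python: proximity(i, j) is the fraction of trees in which samples i and j land in the same leaf, and grouping samples by leaf avoids scanning all pairs across different leaves. The `leaf_ids[t][i]` input format (tree t, sample i) is an assumption here, matching what e.g. scikit-learn's `RandomForestClassifier.apply` returns:

```python
from collections import defaultdict

def leaf_collision_proximity(leaf_ids):
    """Sparse proximities: (i, j) -> fraction of trees where i, j share a leaf."""
    n_trees = len(leaf_ids)
    prox = defaultdict(float)
    for tree in leaf_ids:
        buckets = defaultdict(list)            # leaf id -> samples in that leaf
        for i, leaf in enumerate(tree):
            buckets[leaf].append(i)
        for members in buckets.values():       # collisions only within a leaf
            for a, i in enumerate(members):
                for j in members[a + 1:]:
                    prox[(i, j)] += 1.0 / n_trees
    return dict(prox)

leaf_ids = [[0, 0, 1], [2, 3, 2]]              # 2 trees, 3 samples
p = leaf_collision_proximity(leaf_ids)
assert p[(0, 1)] == 0.5 and p[(0, 2)] == 0.5 and (1, 2) not in p
```

This loop is still quadratic within a single crowded leaf; the paper's contribution is the exact sparse factorization (a leaf-indicator matrix product) that lets sparse linear algebra do this at scale.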