| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| e68b69cced767d031294986b436e87c753cb395203b291d938bb961aa3fd9fef | 2026-01-16T00:00:00-05:00 | Algebraic Properties of PAC Codes | arXiv:2601.10262v1 Announce Type: new Abstract: We analyze polarization-adjusted convolutional codes using the algebraic representation of polar and Reed-Muller codes. We define a large class of codes, called generalized polynomial polar codes, which includes PAC codes and Reverse PAC codes. We derive structural properties of generalized polynomial polar codes, such as duality and minimum distance. We also deduce some structural limits in terms of the number of minimum-weight codewords and the dimension of the monomial sub-code. | https://arxiv.org/abs/2601.10262 | Academic Papers | svg |
| 3d61eab9f7b10e9366280f3652845f09613004490293b31306510070e3766300 | 2026-01-16T00:00:00-05:00 | An Ensemble of Evolutionary Algorithms With Both Crisscross Search and Sparrow Search for Processing Inferior Individuals | arXiv:2601.10263v1 Announce Type: new Abstract: In the field of artificial intelligence, real-parameter single-objective optimization is an important direction. Both Differential Evolution (DE) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) demonstrate good performance for real-parameter single-objective optimization. Nevertheless, other types of evolutionary algorithms exist for this purpose. In recent years, researchers have begun to study long-term search. EA4eig - an ensemble of three DE variants and CMA-ES - performs well for long-term search. In this paper, we introduce two recently proposed evolutionary algorithms - crisscross search and sparrow search - into EA4eig as secondary evolutionary algorithms to process inferior individuals, obtaining EA4eigCS. In our ensemble, the secondary evolutionary algorithms are expected to vary the distribution of the population to break stagnation. Experimental results show that EA4eigCS outperforms EA4eig and is competitive with state-of-the-art algorithms. Code and supplementary material are available at: https://anonymous.4open.science/r/EA4eigCS-2A43. | https://arxiv.org/abs/2601.10263 | Academic Papers | svg |
| 0a284f6c14e36dccd01b6a4ccac9a178f3059f67ecc70c3525498f876ca87599 | 2026-01-16T00:00:00-05:00 | Measuring Affinity between Attention-Head Weight Subspaces via the Projection Kernel | arXiv:2601.10266v1 Announce Type: new Abstract: Understanding relationships between attention heads is essential for interpreting the internal structure of Transformers, yet existing metrics do not capture this structure well. We focus on the subspaces spanned by attention-head weight matrices and quantify head-to-head relationships using the Projection Kernel (PK), a principal-angle-based measure of subspace similarity. Experiments show that PK reproduces known head-to-head interactions on the IOI task more clearly than prior metrics such as the Composition Score. We further introduce a framework to quantify the informativeness of PK distributions by comparing them with a reference distribution derived from random orthogonal subspaces. As an application, we analyze a directed graph constructed from PK and show that, in GPT2-small, L4H7 acts as a hub by functioning as an identity head. | https://arxiv.org/abs/2601.10266 | Academic Papers | svg |
| b686c28c48b513d889b3e7c8e73a7bdc6a5917c0ef6a628bd16f0a55905dbd8c | 2026-01-16T00:00:00-05:00 | In-Context Source and Channel Coding | arXiv:2601.10267v1 Announce Type: new Abstract: Separate Source-Channel Coding (SSCC) remains attractive for text transmission due to its modularity and compatibility with mature entropy coders and powerful channel codes. However, SSCC often suffers from a pronounced cliff effect in low Signal-to-Noise Ratio (SNR) regimes, where residual bit errors after channel decoding can catastrophically break lossless source decoding, especially for Arithmetic Coding (AC) driven by Large Language Models (LLMs). This paper proposes a receiver-side In-Context Decoding (ICD) framework that enhances SSCC robustness without modifying the transmitter. ICD leverages an Error Correction Code Transformer (ECCT) to obtain bit-wise reliability for the decoded information bits. Based on the context-consistent bitstream, ICD constructs a confidence-ranked candidate pool via reliability-guided bit flipping, samples a compact yet diverse subset of candidates, and applies an LLM-based arithmetic decoder to obtain both reconstructions and sequence-level log-likelihoods. A reliability-likelihood fusion rule then selects the final output. We further provide theoretical guarantees on the stability and convergence of the proposed sampling procedure. Extensive experiments over Additive White Gaussian Noise (AWGN) and Rayleigh fading channels demonstrate consistent gains compared with conventional SSCC baselines and representative Joint Source-Channel Coding (JSCC) schemes. | https://arxiv.org/abs/2601.10267 | Academic Papers | svg |
| b1a7f4ae629b4b78f6cb8d54d8024a615ba1b853bc0bcd3bc8f5221265fa149d | 2026-01-16T00:00:00-05:00 | The impact of tactile sensor configurations on grasp learning efficiency -- a comparative evaluation in simulation | arXiv:2601.10268v1 Announce Type: new Abstract: Tactile sensors are breaking into the field of robotics to provide direct information related to contact surfaces, including contact events, slip events and even texture identification. These events are especially important for robotic hand designs, including prosthetics, as they can greatly improve grasp stability. Most presently published robotic hand designs, however, implement them in vastly different densities and layouts on the hand surface, often reserving the majority of the available space. We used simulations to evaluate 6 different tactile sensor configurations with different densities and layouts, based on their impact on reinforcement learning. Our two-setup system allows for robust results that are not dependent on the use of a given physics simulator, robotic hand model or machine learning algorithm. Our results show setup-specific, as well as generalized effects across the 6 sensorized simulations, and we identify one configuration as consistently yielding the best performance across both setups. These results could help future research aimed at robotic hand designs, including prostheses. | https://arxiv.org/abs/2601.10268 | Academic Papers | svg |
| 75576564470364e0eaf12b1e99e0f1229f366b43f730db81510e964a8af761d3 | 2026-01-16T00:00:00-05:00 | Early Fault Detection on CMAPSS with Unsupervised LSTM Autoencoders | arXiv:2601.10269v1 Announce Type: new Abstract: This paper introduces an unsupervised health-monitoring framework for turbofan engines that does not require run-to-failure labels. First, operating-condition effects in NASA CMAPSS sensor streams are removed via regression-based normalisation; then a Long Short-Term Memory (LSTM) autoencoder is trained only on the healthy portion of each trajectory. Persistent reconstruction error, estimated using an adaptive data-driven threshold, triggers real-time alerts without hand-tuned rules. Benchmark results show high recall and low false-alarm rates across multiple operating regimes, demonstrating that the method can be deployed quickly, scale to diverse fleets, and serve as a complementary early-warning layer to Remaining Useful Life models. | https://arxiv.org/abs/2601.10269 | Academic Papers | svg |
| f69c874c62ba140d863e853bd8d7018ef1f200509fc0fc858dde6759c49e8dff | 2026-01-16T00:00:00-05:00 | MoST: Mixing Speech and Text with Modality-Aware Mixture of Experts | arXiv:2601.10272v1 Announce Type: new Abstract: We present MoST (Mixture of Speech and Text), a novel multimodal large language model that seamlessly integrates speech and text processing through our proposed Modality-Aware Mixture of Experts (MAMoE) architecture. While current multimodal models typically process diverse modality representations with identical parameters, disregarding their inherent representational differences, we introduce specialized routing pathways that direct tokens to modality-appropriate experts based on input type. MAMoE simultaneously enhances modality-specific learning and cross-modal understanding through two complementary components: modality-specific expert groups that capture domain-specific patterns and shared experts that facilitate information transfer between modalities. Building on this architecture, we develop an efficient transformation pipeline that adapts the pretrained MoE language model through strategic post-training on ASR and TTS datasets, followed by fine-tuning with a carefully curated speech-text instruction dataset. A key feature of this pipeline is that it relies exclusively on fully accessible, open-source datasets to achieve strong performance and data efficiency. Comprehensive evaluations across ASR, TTS, audio language modeling, and spoken question answering benchmarks show that MoST consistently outperforms existing models of comparable parameter counts. Our ablation studies confirm that the modality-specific routing mechanism and shared experts design significantly contribute to performance gains across all tested domains. To our knowledge, MoST represents the first fully open-source speech-text LLM built on a Mixture of Experts architecture. We release the MoST model, training code, inference code, and training data at https://github.com/NUS-HPC-AI-Lab/MoST. | https://arxiv.org/abs/2601.10272 | Academic Papers | svg |
| 048dd4ec9784631633264d383ce01c05b6e2bef3edc5e511e619125be169e873 | 2026-01-16T00:00:00-05:00 | Queueing-Aware Optimization of Reasoning Tokens for Accuracy-Latency Trade-offs in LLM Servers | arXiv:2601.10274v1 Announce Type: new Abstract: We consider a single large language model (LLM) server that serves a heterogeneous stream of queries belonging to $N$ distinct task types. Queries arrive according to a Poisson process, and each type occurs with a known prior probability. For each task type, the server allocates a fixed number of internal thinking tokens, which determines the computational effort devoted to that query. The token allocation induces an accuracy-latency trade-off: the service time follows an approximately affine function of the allocated tokens, while the probability of a correct response exhibits diminishing returns. Under a first-in, first-out (FIFO) service discipline, the system operates as an $M/G/1$ queue, and the mean system time depends on the first and second moments of the resulting service-time distribution. We formulate a constrained optimization problem that maximizes a weighted average accuracy objective penalized by the mean system time, subject to architectural token-budget constraints and queue-stability conditions. The objective function is shown to be strictly concave over the stability region, which ensures existence and uniqueness of the optimal token allocation. The first-order optimality conditions yield a coupled projected fixed-point characterization of the optimum, together with an iterative solution and an explicit sufficient condition for contraction. Moreover, a projected gradient method with a computable global step-size bound is developed to guarantee convergence beyond the contractive regime. Finally, integer-valued token allocations are attained via rounding of the continuous solution, and the resulting performance loss is evaluated in simulation results. | https://arxiv.org/abs/2601.10274 | Academic Papers | svg |
| 00518351c26ee268e80489f44cf30c821a99dcc597e986df615ca0e7043cc70d | 2026-01-16T00:00:00-05:00 | SCRamble: Adaptive Decentralized Overlay Construction for Blockchain Networks | arXiv:2601.10277v1 Announce Type: new Abstract: Despite being under development for over 15 years, transaction throughput remains one of the key challenges confronting blockchains, which are typically capped at a limited number of transactions per second. A fundamental factor limiting this metric is the network latency associated with block propagation throughout the underlying peer-to-peer network, typically formed through random connections. Accelerating the dissemination of blocks not only improves transaction rates but also enhances system security by reducing the probability of forks. This paper introduces SCRamble: a decentralized protocol that significantly reduces block dissemination time in blockchain networks. SCRamble's effectiveness is attributed to its innovative link selection strategy, which integrates two heuristics: a scoring mechanism that assesses block arrival times from neighboring peers, and a second heuristic that takes network latency into account. | https://arxiv.org/abs/2601.10277 | Academic Papers | svg |
| c85f34e5a582d335965d94468460b2fffca736cc2ac81f4a53f50f1bfd1e88cf | 2026-01-16T00:00:00-05:00 | SPIKE: Sparse Koopman Regularization for Physics-Informed Neural Networks | arXiv:2601.10282v1 Announce Type: new Abstract: Physics-Informed Neural Networks (PINNs) provide a mesh-free approach for solving differential equations by embedding physical constraints into neural network training. However, PINNs tend to overfit within the training domain, leading to poor generalization when extrapolating beyond trained spatiotemporal regions. This work presents SPIKE (Sparse Physics-Informed Koopman-Enhanced), a framework that regularizes PINNs with continuous-time Koopman operators to learn parsimonious dynamics representations. By enforcing linear dynamics $dz/dt = Az$ in a learned observable space, both PIKE (without explicit sparsity) and SPIKE (with L1 regularization on $A$) learn sparse generator matrices, embodying the parsimony principle that complex dynamics admit low-dimensional structure. Experiments across parabolic, hyperbolic, dispersive, and stiff PDEs, including fluid dynamics (Navier-Stokes) and chaotic ODEs (Lorenz), demonstrate consistent improvements in temporal extrapolation, spatial generalization, and long-term prediction accuracy. The continuous-time formulation with matrix exponential integration provides unconditional stability for stiff systems while avoiding diagonal dominance issues inherent in discrete-time Koopman operators. | https://arxiv.org/abs/2601.10282 | Academic Papers | svg |
| 3372133f99e9abc0d0ee39e0f0434a19bc83ba6433bf1e80374a0e6482a2afc7 | 2026-01-16T00:00:00-05:00 | Atelier à la conférence IHM 2025 : RA Permanente (Workshop at the IHM 2025 conference: Permanent AR) | arXiv:2601.10291v1 Announce Type: new Abstract: As we move towards more ubiquitous computing, the concept of pervasive augmented reality (PAR) could lead to a major evolution in the relationship between humans, computing and the world. The experience of a continuously augmented world can have both benefits and undesirable consequences for users' lives, and raises many questions in multiple areas. In this workshop, we wanted to bring together all IHM'25 conference participants who are concerned or enthusiastic about discussing this topic. The aim was to draw on collective intelligence to identify the interdisciplinary challenges that remain to be resolved in order to enable the implementation of these technologies in everyday life, but also to define the necessary safeguards. Is PAR too techno-enthusiastic? All of these elements were grouped into categories to define a set of future major areas of research around permanent augmented reality. This document is in French as the conference is a French-speaking international conference. | https://arxiv.org/abs/2601.10291 | Academic Papers | svg |
| 05a1ccaebb4d2fcba056a09b9749d2760c98dd5456ef509454df77daf8478388 | 2026-01-16T00:00:00-05:00 | Single-Feed Circularly Polarized Super Realized Gain Antenna | arXiv:2601.10292v1 Announce Type: new Abstract: This paper presents a super realized gain, circularly polarized strip-crossed dipole antenna operating at 3.5 GHz. Superdirective behavior is achieved by leveraging strong inter-element mutual coupling through careful adjustment of the strip dimensions. The antenna features a single driven element, with the other element passively loaded with a reactive impedance. The structure is optimized to maximize left-hand circularly polarized (LHCP) realized gain, ensuring high polarization purity and good impedance matching. The optimized design exhibits a 50 $\Omega$ impedance bandwidth of 3.29 - 4.17 GHz (23.75%) and an axial-ratio bandwidth of 3.43 - 3.57 GHz (4%). At 3.5 GHz, the antenna achieves a peak realized gain of 6.1 dB ($ka \approx 1.65$), with an axial ratio of 1.4 dB. These results demonstrate that circular polarization and superdirectivity can be simultaneously realized in a geometrically simple, low-profile ($0.15\lambda$) antenna, rendering it suitable for integration into compact sub-6~GHz wireless and sensing platforms. | https://arxiv.org/abs/2601.10292 | Academic Papers | svg |
| 98d96fa91665194317075e9f0720df49e9fe1d9b8692bda7ef9e131de1acfb09 | 2026-01-16T00:00:00-05:00 | Reasoning Hijacking: Subverting LLM Classification via Decision-Criteria Injection | arXiv:2601.10294v1 Announce Type: new Abstract: Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective (e.g., from "summarizing emails" to "phishing users"). In this paper, we argue that this perspective is incomplete and highlight a critical vulnerability in Reasoning Alignment. We propose a new adversarial paradigm, Reasoning Hijacking, and instantiate it with Criteria Attack, which subverts model judgments by injecting spurious decision criteria without altering the high-level task goal. Unlike Goal Hijacking, which attempts to override the system prompt, Reasoning Hijacking accepts the high-level goal but manipulates the model's decision-making logic by injecting spurious reasoning shortcuts. Through extensive experiments on three different tasks (toxic comment, negative review, and spam detection), we demonstrate that even the newest models are prone to prioritize injected heuristic shortcuts over rigorous semantic analysis. The results are consistent across different backbones. Crucially, because the model's "intent" remains aligned with the user's instructions, these attacks can bypass defenses designed to detect goal deviation (e.g., SecAlign, StruQ), exposing a fundamental blind spot in the current safety landscape. Data and code are available at https://github.com/Yuan-Hou/criteria_attack. | https://arxiv.org/abs/2601.10294 | Academic Papers | svg |
| 054e56d17cc45f409d9b84dbc9e510c07cfc5b02a19a81015ffce96da7291738 | 2026-01-16T00:00:00-05:00 | Multipath Routing for Multi-Hop UAV Networks | arXiv:2601.10299v1 Announce Type: new Abstract: Multi-hop uncrewed aerial vehicle (UAV) networks are promising to extend the terrestrial network coverage. Existing multi-hop UAV networks employ a single routing path by selecting the next-hop forwarding node in a hop-by-hop manner, which leads to local congestion and increases traffic delays. In this paper, a novel traffic-adaptive multipath routing method is proposed for multi-hop UAV networks, which enables each UAV to dynamically split and forward traffic flows across multiple next-hop neighbors, thus meeting latency requirements of diverse traffic flows in dynamic mobile environments. An on-time packet delivery ratio maximization problem is formulated to determine the traffic splitting ratios at each hop. This sequential decision-making problem is modeled as a decentralized partially observable Markov decision process (Dec-POMDP). To solve this Dec-POMDP, a novel multi-agent deep reinforcement learning (MADRL) algorithm, termed Independent Proximal Policy Optimization with Dirichlet Modeling (IPPO-DM), is developed. Specifically, the IPPO serves as the core optimization framework, where the Dirichlet distribution is leveraged to parameterize a continuous stochastic policy network on the probability simplex, inherently ensuring feasible traffic splitting ratios. Simulation results demonstrate that IPPO-DM outperforms benchmark schemes in terms of both delivery latency guarantee and packet loss performance. | https://arxiv.org/abs/2601.10299 | Academic Papers | svg |
| 6c1a8610c629a47a8210e146fbee72ce1c1824939f8719490e825de9f0c0dd0d | 2026-01-16T00:00:00-05:00 | DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset | arXiv:2601.10305v1 Announce Type: new Abstract: Vision-Language Pre-training (VLP) models demonstrate strong performance across various downstream tasks by learning from large-scale image-text pairs through contrastive pretraining. The release of extensive English image-text datasets (e.g., COYO-700M and LAION-400M) has enabled widespread adoption of models such as CLIP and SigLIP in tasks including cross-modal retrieval and image captioning. However, the advancement of Chinese vision-language pretraining has substantially lagged behind, due to the scarcity of high-quality Chinese image-text data. To address this gap, we develop a comprehensive pipeline for constructing a high-quality Chinese cross-modal dataset. As a result, we propose DanQing, which contains 100 million image-text pairs collected from Common Crawl. Different from existing datasets, DanQing is curated through a more rigorous selection process, yielding superior data quality. Moreover, DanQing is primarily built from 2024-2025 web data, enabling models to better capture evolving semantic trends and thus offering greater practical utility. We compare DanQing with existing datasets by continual pre-training of the SigLIP2 model. Experimental results show that DanQing consistently achieves superior performance across a range of Chinese downstream tasks, including zero-shot classification, cross-modal retrieval, and LMM-based evaluations. To facilitate further research in Chinese vision-language pre-training, we will open-source the DanQing dataset under the Creative Commons CC-BY 4.0 license. | https://arxiv.org/abs/2601.10305 | Academic Papers | svg |
| 8b35a3cfac7fc2b2b249149a05735ef0afb0536e2dc90dfc903839d7dc8dfdda | 2026-01-16T00:00:00-05:00 | Evidence-Augmented Policy Optimization with Reward Co-Evolution for Long-Context Reasoning | arXiv:2601.10306v1 Announce Type: new Abstract: While Reinforcement Learning (RL) has advanced LLM reasoning, applying it to long-context scenarios is hindered by sparsity of outcome rewards. This limitation fails to penalize ungrounded "lucky guesses," leaving the critical process of needle-in-a-haystack evidence retrieval largely unsupervised. To address this, we propose EAPO (Evidence-Augmented Policy Optimization). We first establish the Evidence-Augmented Reasoning paradigm, validating via Tree-Structured Evidence Sampling that precise evidence extraction is the decisive bottleneck for long-context reasoning. Guided by this insight, EAPO introduces a specialized RL algorithm where a reward model computes a Group-Relative Evidence Reward, providing dense process supervision to explicitly improve evidence quality. To sustain accurate supervision throughout training, we further incorporate an Adaptive Reward-Policy Co-Evolution mechanism. This mechanism iteratively refines the reward model using outcome-consistent rollouts, sharpening its discriminative capability to ensure precise process guidance. Comprehensive evaluations across eight benchmarks demonstrate that EAPO significantly enhances long-context reasoning performance compared to SOTA baselines. | https://arxiv.org/abs/2601.10306 | Academic Papers | svg |
| b16317cf255e42df6a33a29db459faaa7f112e6e0188a335cddf215a3885ada3 | 2026-01-16T00:00:00-05:00 | The Straight and Narrow: Do LLMs Possess an Internal Moral Path? | arXiv:2601.10307v1 Announce Type: new Abstract: Enhancing the moral alignment of Large Language Models (LLMs) is a critical challenge in AI safety. Current alignment techniques often act as superficial guardrails, leaving the intrinsic moral representations of LLMs largely untouched. In this paper, we bridge this gap by leveraging Moral Foundations Theory (MFT) to map and manipulate the fine-grained moral landscape of LLMs. Through cross-lingual linear probing, we validate the shared nature of moral representations in middle layers and uncover a shared yet different moral subspace between English and Chinese. Building upon this, we extract steerable Moral Vectors and successfully validate their efficacy at both internal and behavioral levels. Leveraging the high generalizability of morality, we propose Adaptive Moral Fusion (AMF), a dynamic inference-time intervention that synergizes probe detection with vector injection to tackle the safety-helpfulness trade-off. Empirical results confirm that our approach acts as a targeted intrinsic defense, effectively reducing incorrect refusals on benign queries while minimizing jailbreak success rates compared to standard baselines. | https://arxiv.org/abs/2601.10307 | Academic Papers | svg |
| 782534b8b1d1688d49c840e91bb78758bff51b2099364c5c8df829643c9ca7cb | 2026-01-16T00:00:00-05:00 | Multilinguality as Sense Adaptation | arXiv:2601.10310v1 Announce Type: new Abstract: We approach multilinguality as sense adaptation: aligning latent meaning representations across languages rather than relying solely on shared parameters and scale. In this paper, we introduce SENse-based Symmetric Interlingual Alignment (SENSIA), which adapts a Backpack language model from one language to another by explicitly aligning sense-level mixtures and contextual representations on parallel data, while jointly training a target-language language modeling loss to preserve fluency. Across benchmarks on four typologically diverse languages, SENSIA generally outperforms comparable multilingual alignment methods and achieves competitive accuracy against monolingual from-scratch baselines while using 2-4x less target-language data. Analyses of learned sense geometry indicate that local sense topology and global structure relative to English are largely preserved, and ablations show that the method is robust in terms of design and scale. | https://arxiv.org/abs/2601.10310 | Academic Papers | svg |
| bc72c2b0ed7c20ff506a2f8b21f509a7cba727f5525a9f09727bceeb07ebcd27 | 2026-01-16T00:00:00-05:00 | We Need a More Robust Classifier: Dual Causal Learning Empowers Domain-Incremental Time Series Classification | arXiv:2601.10312v1 Announce Type: new Abstract: The World Wide Web thrives on intelligent services that rely on accurate time series classification, which has recently witnessed significant progress driven by advances in deep learning. However, existing studies face challenges in domain incremental learning. In this paper, we propose a lightweight and robust dual-causal disentanglement framework (DualCD) to enhance the robustness of models under domain incremental scenarios, which can be seamlessly integrated into time series classification models. Specifically, DualCD first introduces a temporal feature disentanglement module to capture class-causal features and spurious features. The causal features can offer sufficient predictive power to support the classifier in domain incremental learning settings. To accurately capture these causal features, we further design a dual-causal intervention mechanism to eliminate the influence of both intra-class and inter-class confounding features. This mechanism constructs variant samples by combining the current class's causal features with intra-class spurious features and with causal features from other classes. The causal intervention loss encourages the model to accurately predict the labels of these variant samples based solely on the causal features. Extensive experiments on multiple datasets and models demonstrate that DualCD effectively improves performance in domain incremental scenarios. We summarize our rich experiments into a comprehensive benchmark to facilitate research in domain incremental time series classification. | https://arxiv.org/abs/2601.10312 | Academic Papers | svg |
| 6ed82571e3a3a9c2d2470b123158cb6c3564409bc31ab2eb5c280ef2798c5f84 | 2026-01-16T00:00:00-05:00 | Hierarchical Refinement of Universal Multimodal Attacks on Vision-Language Models | arXiv:2601.10313v1 Announce Type: new Abstract: Existing adversarial attacks for VLP models are mostly sample-specific, resulting in substantial computational overhead when scaled to large datasets or new scenarios. To overcome this limitation, we propose Hierarchical Refinement Attack (HRA), a multimodal universal attack framework for VLP models. HRA refines universal adversarial perturbations (UAPs) at both the sample level and the optimization level. For the image modality, we disentangle adversarial examples into clean images and perturbations, allowing each component to be handled independently for more effective disruption of cross-modal alignment. We further introduce a ScMix augmentation strategy that diversifies visual contexts and strengthens both global and local utility of UAPs, thereby reducing reliance on spurious features. In addition, we refine the optimization path by leveraging a temporal hierarchy of historical and estimated future gradients to avoid local minima and stabilize universal perturbation learning. For the text modality, HRA identifies globally influential words by combining intra-sentence and inter-sentence importance measures, and subsequently utilizes these words as universal text perturbations. Extensive experiments across various downstream tasks, VLP models, and datasets demonstrate the superiority of the proposed universal multimodal attacks. | https://arxiv.org/abs/2601.10313 | Academic Papers | svg |
| c4b13b4004e34b5604ce2dca86a8657390641a3f95673094163bba6dee5a5eaf | 2026-01-16T00:00:00-05:00 | ADVOSYNTH: A Synthetic Multi-Advocate Dataset for Speaker Identification in Courtroom Scenarios | arXiv:2601.10315v1 Announce Type: new Abstract: As large-scale speech-to-speech models achieve high fidelity, the distinction between synthetic voices in structured environments becomes a vital area of study. This paper introduces Advosynth-500, a specialized dataset comprising 100 synthetic speech files featuring 10 unique advocate identities. Using the Speech Llama Omni model, we simulate five distinct advocate pairs engaged in courtroom arguments. We define specific vocal characteristics for each advocate and present a speaker identification challenge to evaluate the ability of modern systems to map audio files to their respective synthetic origins. The dataset is available at https://github.com/naturenurtureelite/ADVOSYNTH-500. | https://arxiv.org/abs/2601.10315 | Academic Papers | svg |
| 49393a4c1d6271d15b7fd1ad7004850e95e89f1812d08bf33b86369255471783 | 2026-01-16T00:00:00-05:00 | Boundary-Aware NL2SQL: Integrating Reliability through Hybrid Reward and Data Synthesis | arXiv:2601.10318v1 Announce Type: new Abstract: In this paper, we present BAR-SQL (Boundary-Aware Reliable NL2SQL), a unified training framework that embeds reliability and boundary awareness directly into the generation process. We introduce a Seed Mutation data synthesis paradigm that constructs a representative enterprise corpus, explicitly encompassing multi-step analytical queries alongside boundary cases including ambiguity and schema limitations. To ensure interpretability, we employ Knowledge-Grounded Reasoning Synthesis, which produces Chain-of-Thought traces explicitly anchored in schema metadata and business rules. The model is trained through a two-stage process: Supervised Fine-Tuning (SFT) followed by Reinforcement Learning via Group Relative Policy Optimization. We design a Task-Conditioned Hybrid Reward mechanism that simultaneously optimizes SQL execution accuracy (leveraging Abstract Syntax Tree analysis and dense result matching) and semantic precision in abstention responses. To evaluate reliability alongside generation accuracy, we construct and release Ent-SQL-Bench, which jointly assesses SQL precision and boundary-aware abstention across ambiguous and unanswerable queries. Experimental results on this benchmark demonstrate that BAR-SQL achieves 91.48% average accuracy, outperforming leading proprietary models, including Claude 4.5 Sonnet and GPT-5, in both SQL generation quality and boundary-aware abstention capability. The source code and benchmark are available anonymously at: https://github.com/TianSongS/BAR-SQL. | https://arxiv.org/abs/2601.10318 | Academic Papers | svg |
e13f4a4d49325d16ce1a4632da65462c041e1cd742cd51ff0c37142400acd45c
|
2026-01-16T00:00:00-05:00
|
An Efficient Long-Context Ranking Architecture With Calibrated LLM Distillation: Application to Person-Job Fit
|
arXiv:2601.10321v1 Announce Type: new Abstract: Finding the most relevant person for a job proposal in real time is challenging, especially when resumes are long, structured, and multilingual. In this paper, we propose a re-ranking model based on a new generation of late cross-attention architecture that decomposes both resumes and project briefs to efficiently handle long-context inputs with minimal computational overhead. To mitigate historical data biases, we use a generative large language model (LLM) as a teacher, generating fine-grained, semantically grounded supervision. This signal is distilled into our student model via an enriched distillation loss function. The resulting model produces skill-fit scores that enable consistent and interpretable person-job matching. Experiments on relevance, ranking, and calibration metrics demonstrate that our approach outperforms state-of-the-art baselines.
|
https://arxiv.org/abs/2601.10321
|
Academic Papers
|
svg
|
d791952cc4baf595de77a596270ab98660632ca9f160691bd7b2a28761cd1e49
|
2026-01-16T00:00:00-05:00
|
Conjugate Gradient Methods are Not Efficient: Experimental Study of the Locality Limitation
|
arXiv:2601.10322v1 Announce Type: new Abstract: The convergence of the Conjugate Gradient method is subject to a locality limitation which imposes a lower bound on the number of iterations required before a qualitatively accurate approximation can be obtained. This limitation originates from the restricted transport of information in the graph induced by the sparsity pattern of the system matrix. In each iteration, information from the right-hand side can propagate only across directly connected graph nodes. The diameter of this graph therefore determines a minimum number of iterations that is necessary to achieve an acceptable level of accuracy.
|
https://arxiv.org/abs/2601.10322
|
Academic Papers
|
svg
|
a957428b8cde9785ce9bb88afa5419f51be787f156ba2efac10813df1bd8b7c3
|
2026-01-16T00:00:00-05:00
|
ROMA: Real-time Omni-Multimodal Assistant with Interactive Streaming Understanding
|
arXiv:2601.10323v1 Announce Type: new Abstract: Recent Omni-multimodal Large Language Models show promise in unified audio, vision, and text modeling. However, streaming audio-video understanding remains challenging, as existing approaches suffer from disjointed capabilities: they typically exhibit incomplete modality support or lack autonomous proactive monitoring. To address this, we present ROMA, a real-time omni-multimodal assistant for unified reactive and proactive interaction. ROMA processes continuous inputs as synchronized multimodal units, aligning dense audio with discrete video frames to handle granularity mismatches. For online decision-making, we introduce a lightweight speak head that decouples response initiation from generation to ensure precise triggering without task conflict. We train ROMA with a curated streaming dataset and a two-stage curriculum that progressively optimizes for streaming format adaptation and proactive responsiveness. To standardize the fragmented evaluation landscape, we reorganize diverse benchmarks into a unified suite covering both proactive (alert, narration) and reactive (QA) settings. Extensive experiments across 12 benchmarks demonstrate that ROMA achieves state-of-the-art performance on proactive tasks while remaining competitive in reactive settings, validating its robustness in unified real-time omni-multimodal understanding.
|
https://arxiv.org/abs/2601.10323
|
Academic Papers
|
svg
|
e3f8f8c1115d15926fe2eaf19f0a692c3c3085fa1c138d1d3589247e274b74b7
|
2026-01-16T00:00:00-05:00
|
SRAW-Attack: Space-Reweighted Adversarial Warping Attack for SAR Target Recognition
|
arXiv:2601.10324v1 Announce Type: new Abstract: Synthetic aperture radar (SAR) imagery exhibits intrinsic information sparsity due to its unique electromagnetic scattering mechanism. Despite the widespread adoption of deep neural network (DNN)-based SAR automatic target recognition (SAR-ATR) systems, they remain vulnerable to adversarial examples and tend to over-rely on background regions, leading to degraded adversarial robustness. Existing adversarial attacks for SAR-ATR often require visually perceptible distortions to achieve effective performance, thereby necessitating an attack method that balances effectiveness and stealthiness. In this paper, a novel attack method termed Space-Reweighted Adversarial Warping (SRAW) is proposed, which generates adversarial examples through optimized spatial deformation with reweighted budgets across foreground and background regions. Extensive experiments demonstrate that SRAW significantly degrades the performance of state-of-the-art SAR-ATR models and consistently outperforms existing methods in terms of imperceptibility and adversarial transferability. Code is made available at https://github.com/boremycin/SAR-ATR-TransAttack.
|
https://arxiv.org/abs/2601.10324
|
Academic Papers
|
svg
|
3f75f28e2ed02a3f34d64a359557f5731ea227e9310289e8a9ebafe0aefe2bc8
|
2026-01-16T00:00:00-05:00
|
Meta Dynamic Graph for Traffic Flow Prediction
|
arXiv:2601.10328v1 Announce Type: new Abstract: Traffic flow prediction is a typical spatio-temporal prediction problem and has a wide range of applications. The core challenge lies in modeling the underlying complex spatio-temporal dependencies. Various methods have been proposed, and recent studies show that the modeling of dynamics is useful to meet the core challenge. While handling spatial dependencies and temporal dependencies using separate base model structures may hinder the modeling of spatio-temporal correlations, the modeling of dynamics can bridge this gap. Incorporating spatio-temporal heterogeneity also advances the main goal, since it can extend the parameter space and allow more flexibility. Despite these advances, two limitations persist: 1) the modeling of dynamics is often limited to the dynamics of spatial topology (e.g., adjacency matrix changes), which, however, can be extended to a broader scope; 2) the modeling of heterogeneity is often separated for spatial and temporal dimensions, but this gap can also be bridged by the modeling of dynamics. To address the above limitations, we propose a novel framework for traffic prediction, called Meta Dynamic Graph (MetaDG). MetaDG leverages dynamic graph structures of node representations to explicitly model spatio-temporal dynamics. This generates both dynamic adjacency matrices and meta-parameters, extending dynamic modeling beyond topology while unifying the capture of spatio-temporal heterogeneity into a single dimension. Extensive experiments on four real-world datasets validate the effectiveness of MetaDG.
|
https://arxiv.org/abs/2601.10328
|
Academic Papers
|
svg
|
622823c1dfe3ea806becb4567bef3b5418ff08cd36a3b8e5709968231e4cd475
|
2026-01-16T00:00:00-05:00
|
On the Capacity of Noisy Frequency-based Channels
|
arXiv:2601.10329v1 Announce Type: new Abstract: We investigate the capacity of noisy frequency-based channels, motivated by DNA data storage in the short-molecule regime, where information is encoded in the frequency of item types rather than their order. The channel output is a histogram formed by random sampling of items, followed by noisy item identification. While the capacity of the noiseless frequency-based channel has been previously addressed, the effect of identification noise has not been fully characterized. We present a converse bound on the channel capacity that follows from stochastic degradation and the data processing inequality. We then establish an achievable bound, which is based on a Poissonization of the multinomial sampling process, and an analysis of the resulting vector Poisson channel with inter-symbol interference. This analysis refines concentration inequalities for the information density used in the Feinstein bound, and explicitly characterizes an additive loss in the mutual information due to identification noise. We apply our results to a DNA storage channel in the short-molecule regime, and quantify the resulting loss in the scaling of the total number of reliably stored bits.
|
https://arxiv.org/abs/2601.10329
|
Academic Papers
|
svg
|
0d94bc340af02d347927f024173d123d48a71430314b72dd12232fa59ec4d941
|
2026-01-16T00:00:00-05:00
|
Think-Then-Generate: Reasoning-Aware Text-to-Image Diffusion with LLM Encoders
|
arXiv:2601.10332v1 Announce Type: new Abstract: Recent progress in text-to-image (T2I) diffusion models (DMs) has enabled high-quality visual synthesis from diverse textual prompts. Yet, most existing T2I DMs, even those equipped with large language model (LLM)-based text encoders, remain text-pixel mappers -- they employ LLMs merely as text encoders, without leveraging their inherent reasoning capabilities to infer what should be visually depicted given the textual prompt. To move beyond such literal generation, we propose the think-then-generate (T2G) paradigm, where the LLM-based text encoder is encouraged to reason about and rewrite raw user prompts; the states of the rewritten prompts then serve as diffusion conditioning. To achieve this, we first activate the think-then-rewrite pattern of the LLM encoder with a lightweight supervised fine-tuning process. Subsequently, the LLM encoder and diffusion backbone are co-optimized to ensure faithful reasoning about the context and accurate rendering of the semantics via Dual-GRPO. In particular, the text encoder is reinforced using image-grounded rewards to infer and recall world knowledge, while the diffusion backbone is pushed to produce semantically consistent and visually coherent images. Experiments show substantial improvements in factual consistency, semantic alignment, and visual realism across reasoning-based image generation and editing benchmarks, achieving a WISE score of 0.79, nearly on par with GPT-4. Our results constitute a promising step toward next-generation unified models with reasoning, expression, and demonstration capacities.
|
https://arxiv.org/abs/2601.10332
|
Academic Papers
|
svg
|
de5a550dcf0e7be8f3f8e9fd6825b34d6c1e2d9783f935a8f8cc8f986c5b6fa4
|
2026-01-16T00:00:00-05:00
|
An analytic theory of convolutional neural network inverse problems solvers
|
arXiv:2601.10334v1 Announce Type: new Abstract: Supervised convolutional neural networks (CNNs) are widely used to solve imaging inverse problems, achieving state-of-the-art performance in numerous applications. However, despite their empirical success, these methods are poorly understood from a theoretical perspective and often treated as black boxes. To bridge this gap, we analyze trained neural networks through the lens of the Minimum Mean Square Error (MMSE) estimator, incorporating functional constraints that capture two fundamental inductive biases of CNNs: translation equivariance and locality via finite receptive fields. Under the empirical training distribution, we derive an analytic, interpretable, and tractable formula for this constrained variant, termed Local-Equivariant MMSE (LE-MMSE). Through extensive numerical experiments across various inverse problems (denoising, inpainting, deconvolution), datasets (FFHQ, CIFAR-10, FashionMNIST), and architectures (U-Net, ResNet, PatchMLP), we demonstrate that our theory matches the neural networks' outputs (PSNR $\gtrsim25$dB). Furthermore, we provide insights into the differences between \emph{physics-aware} and \emph{physics-agnostic} estimators, the impact of high-density regions in the training (patch) distribution, and the influence of other factors (dataset size, patch size, etc).
|
https://arxiv.org/abs/2601.10334
|
Academic Papers
|
svg
|
37b024195b93ca088669add48159b32bb816ee6b68e736742400373ed45d348b
|
2026-01-16T00:00:00-05:00
|
Agent Skills in the Wild: An Empirical Study of Security Vulnerabilities at Scale
|
arXiv:2601.10338v1 Announce Type: new Abstract: The rise of AI agent frameworks has introduced agent skills, modular packages containing instructions and executable code that dynamically extend agent capabilities. While this architecture enables powerful customization, skills execute with implicit trust and minimal vetting, creating a significant yet uncharacterized attack surface. We conduct the first large-scale empirical security analysis of this emerging ecosystem, collecting 42,447 skills from two major marketplaces and systematically analyzing 31,132 using SkillScan, a multi-stage detection framework integrating static analysis with LLM-based semantic classification. Our findings reveal pervasive security risks: 26.1% of skills contain at least one vulnerability, spanning 14 distinct patterns across four categories: prompt injection, data exfiltration, privilege escalation, and supply chain risks. Data exfiltration (13.3%) and privilege escalation (11.8%) are most prevalent, while 5.2% of skills exhibit high-severity patterns strongly suggesting malicious intent. We find that skills bundling executable scripts are 2.12x more likely to contain vulnerabilities than instruction-only skills (OR=2.12, p<0.001). Our contributions include: (1) a grounded vulnerability taxonomy derived from 8,126 vulnerable skills, (2) a validated detection methodology achieving 86.7% precision and 82.5% recall, and (3) an open dataset and detection toolkit to support future research. These results demonstrate an urgent need for capability-based permission systems and mandatory security vetting before this attack vector is further exploited.
|
https://arxiv.org/abs/2601.10338
|
Academic Papers
|
svg
|
3bfe7743176c02b0cbeffbae3e2d6d7c4f3b50f10f5e5c697790271cd3e74f5d
|
2026-01-16T00:00:00-05:00
|
CHORAL: Traversal-Aware Planning for Safe and Efficient Heterogeneous Multi-Robot Routing
|
arXiv:2601.10340v1 Announce Type: new Abstract: Monitoring large, unknown, and complex environments with autonomous robots poses significant navigation challenges, where deploying teams of heterogeneous robots with complementary capabilities can substantially improve both mission performance and feasibility. However, effectively modeling how different robotic platforms interact with the environment requires rich, semantic scene understanding. Despite this, existing approaches often assume homogeneous robot teams or focus on discrete task compatibility rather than continuous routing. Consequently, scene understanding is not fully integrated into routing decisions, limiting their ability to adapt to the environment and to leverage each robot's strengths. In this paper, we propose an integrated semantic-aware framework for coordinating heterogeneous robots. Starting from a reconnaissance flight, we build a metric-semantic map using open-vocabulary vision models and use it to identify regions requiring closer inspection and capability-aware paths for each platform to reach them. These are then incorporated into a heterogeneous vehicle routing formulation that jointly assigns inspection tasks and computes robot trajectories. Experiments in simulation and in a real inspection mission with three robotic platforms demonstrate the effectiveness of our approach in planning safer and more efficient routes by explicitly accounting for each platform's navigation capabilities. We release our framework, CHORAL, as open source to support reproducibility and deployment of diverse robot teams.
|
https://arxiv.org/abs/2601.10340
|
Academic Papers
|
svg
|
e68bdc5b1e3d6230e984bc46958f8fc1b254074cecb817436759b500876aa556
|
2026-01-16T00:00:00-05:00
|
Convertible Codes for Data and Device Heterogeneity
|
arXiv:2601.10341v1 Announce Type: new Abstract: Distributed storage systems must handle both data heterogeneity, arising from non-uniform access demands, and device heterogeneity, caused by time-varying node reliability. In this paper, we study convertible codes, which enable the transformation of one code into another with minimum cost in the merge regime, addressing the latter. We derive general lower bounds on the read and write costs of linear code conversion, applicable to arbitrary linear codes. We then focus on Reed-Muller codes, which efficiently handle data heterogeneity, addressing the former issue, and construct explicit conversion procedures that, for the first time, combine both forms of heterogeneity for distributed data storage.
|
https://arxiv.org/abs/2601.10341
|
Academic Papers
|
svg
|
96f46ed323b2b160c9e34291160a12550190705929b6b97902950313dac5aec6
|
2026-01-16T00:00:00-05:00
|
C-GRASP: Clinically-Grounded Reasoning for Affective Signal Processing
|
arXiv:2601.10342v1 Announce Type: new Abstract: Heart rate variability (HRV) is a pivotal noninvasive marker for autonomic monitoring; however, applying Large Language Models (LLMs) to HRV interpretation is hindered by physiological hallucinations. These include respiratory sinus arrhythmia (RSA) contamination, short-data instability in nonlinear metrics, and the neglect of individualized baselines in favor of population norms. We propose C-GRASP (Clinically-Grounded Reasoning for Affective Signal Processing), a guardrailed RAG-enhanced pipeline that decomposes HRV interpretation into eight traceable reasoning steps. Central to C-GRASP is a Z-score Priority Hierarchy that enforces the weighting of individualized baseline shifts over normative statistics. The system effectively mitigates spectral hallucinations through automated RSA-aware guardrails, preventing contamination of frequency-domain indices. Evaluated on 414 trials from the DREAMER dataset, C-GRASP integrated with high-scale reasoning models (e.g., MedGemma3-thinking) achieved superior performance in 4-class emotion classification (37.3% accuracy) and a Clinical Reasoning Consistency (CRC) score of 69.6%. Ablation studies confirm that the individualized Delta Z-score module serves as the critical logical anchor, preventing the "population bias" common in native LLMs. Ultimately, C-GRASP transitions affective computing from black-box classification to transparent, evidence-based clinical decision support, paving the way for safer AI integration in biomedical engineering.
|
https://arxiv.org/abs/2601.10342
|
Academic Papers
|
svg
|
a0619e10d8996a8cc52328966267406f4246aa060c6bba3d3a9acdcb7ccb68fc
|
2026-01-16T00:00:00-05:00
|
OctoBench: Benchmarking Scaffold-Aware Instruction Following in Repository-Grounded Agentic Coding
|
arXiv:2601.10343v1 Announce Type: new Abstract: Modern coding scaffolds turn LLMs into capable software agents, but their ability to follow scaffold-specified instructions remains under-examined, especially when constraints are heterogeneous and persist across interactions. To fill this gap, we introduce OctoBench, which benchmarks scaffold-aware instruction following in repository-grounded agentic coding. OctoBench includes 34 environments and 217 tasks instantiated under three scaffold types, and is paired with 7,098 objective checklist items. To disentangle solving the task from following the rules, we provide an automated observation-and-scoring toolkit that captures full trajectories and performs fine-grained checks. Experiments on eight representative models reveal a systematic gap between task-solving and scaffold-aware compliance, underscoring the need for training and evaluation that explicitly targets heterogeneous instruction following. We release the benchmark to support reproducible benchmarking and to accelerate the development of more scaffold-aware coding agents.
|
https://arxiv.org/abs/2601.10343
|
Academic Papers
|
svg
|
5b206a60b6bbe5d7ef509f3e9a63e2ac7b976520c677da3f11dabd8e8b0f6206
|
2026-01-16T00:00:00-05:00
|
Self-supervised restoration of singing voice degraded by pitch shifting using shallow diffusion
|
arXiv:2601.10345v1 Announce Type: new Abstract: Pitch shifting has been an essential feature in singing voice production. However, conventional signal processing approaches exhibit well-known trade-offs, such as formant shifts and robotic coloration, that become more severe at larger transposition jumps. This paper targets high-quality pitch shifting for singing by reframing it as a restoration problem: given an audio track that has been pitch-shifted (and thus contaminated by artifacts), we recover a natural-sounding performance while preserving its melody and timing. Specifically, we use a lightweight, mel-space diffusion model driven by frame-level acoustic features such as f0, volume, and content features. We construct training pairs in a self-supervised manner by applying pitch shifts and reversing them to simulate realistic artifacts while retaining ground truth. On a curated singing set, the proposed approach substantially reduces pitch-shift artifacts compared to representative classical baselines, as measured by both statistical metrics and pairwise acoustic measures. The results suggest that restoration-based pitch shifting could be a viable approach towards artifact-resistant transposition in vocal production workflows.
|
https://arxiv.org/abs/2601.10345
|
Academic Papers
|
svg
|
7c41b1c2524c810da6a342301ce9b7d4dd87ed39779fc00f32a8e7fb6d429f35
|
2026-01-16T00:00:00-05:00
|
Training-Trajectory-Aware Token Selection
|
arXiv:2601.10348v1 Announce Type: new Abstract: Efficient distillation is a key pathway for converting expensive reasoning capability into deployable efficiency, yet in the frontier regime where the student already has strong reasoning ability, naive continual distillation often yields limited gains or even degradation. We observe a characteristic training phenomenon: even as loss decreases monotonically, all performance metrics can drop sharply at almost the same bottleneck, before gradually recovering. We further uncover a token-level mechanism: confidence bifurcates into steadily increasing Imitation-Anchor Tokens that quickly anchor optimization and other yet-to-learn tokens whose confidence is suppressed until after the bottleneck. And the characteristic that these two types of tokens cannot coexist is the root cause of the failure in continual distillation. To this end, we propose Training-Trajectory-Aware Token Selection (T3S) to reconstruct the training objective at the token level, clearing the optimization path for yet-to-learn tokens. T3S yields consistent gains in both AR and dLLM settings: with only hundreds of examples, Qwen3-8B surpasses DeepSeek-R1 on competitive reasoning benchmarks, Qwen3-32B approaches Qwen3-235B, and T3S-trained LLaDA-2.0-Mini exceeds its AR baseline, achieving state-of-the-art performance among 16B-scale no-think models.
|
https://arxiv.org/abs/2601.10348
|
Academic Papers
|
svg
|
66df01452bec7dc85e6096547fd1524cdf75e303702d603fe264d6fee5c8870b
|
2026-01-16T00:00:00-05:00
|
SuS: Strategy-aware Surprise for Intrinsic Exploration
|
arXiv:2601.10349v1 Announce Type: new Abstract: We propose Strategy-aware Surprise (SuS), a novel intrinsic motivation framework that uses pre-post prediction mismatch as a novelty signal for exploration in reinforcement learning. Unlike traditional curiosity-driven methods that rely solely on state prediction error, SuS introduces two complementary components: Strategy Stability (SS) and Strategy Surprise (SuS). SS measures consistency in behavioral strategy across temporal steps, while SuS captures unexpected outcomes relative to the agent's current strategy representation. Our combined reward formulation leverages both signals through learned weighting coefficients. We evaluate SuS on mathematical reasoning tasks using large language models, demonstrating significant improvements in both accuracy and solution diversity. Ablation studies confirm that removing either component results in at least 10% performance degradation, validating the synergistic nature of our approach. SuS achieves 17.4% improvement in Pass@1 and 26.4% improvement in Pass@5 compared to baseline methods, while maintaining higher strategy diversity throughout training.
|
https://arxiv.org/abs/2601.10349
|
Academic Papers
|
svg
|
25c7bd7cf8785fe0774112e0af27604946415cde9436e8eb0903f5b779806b21
|
2026-01-16T00:00:00-05:00
|
A New Construction Structure on MISO Coded Caching with Linear Subpacketization: Half-Sum Disjoint Packing
|
arXiv:2601.10353v1 Announce Type: new Abstract: In the $(L,K,M,N)$ cache-aided multiple-input single-output (MISO) broadcast channel (BC) system, the server is equipped with $L$ antennas and communicates with $K$ single-antenna users through a wireless broadcast channel where the server has a library containing $N$ files, and each user is equipped with a cache of size $M$ files. Under the constraints of uncoded placement and one-shot linear delivery strategies, many schemes achieve the maximum sum Degree-of-Freedom (sum-DoF). However, for general parameters $L$, $M$, and $N$, their subpacketizations increase exponentially with the number of users. We aim to design a MISO coded caching scheme that achieves a large sum-DoF with low subpacketization $F$. An interesting combinatorial structure, called the multiple-antenna placement delivery array (MAPDA), can be used to generate MISO coded caching schemes under these two strategies; moreover, all existing schemes with these strategies can be represented by the corresponding MAPDAs. In this paper, we study the case with $F=K$ (i.e., $F$ grows linearly with $K$) by investigating MAPDAs. Specifically, based on the framework of Latin squares, we transform the design of MAPDA with $F=K$ into the construction of a combinatorial structure called the $L$-half-sum disjoint packing (HSDP). It is worth noting that a $1$-HSDP is exactly the concept of NHSDP, which is used to generate the shared-link coded caching scheme with $F=K$. By constructing $L$-HSDPs, we obtain a class of new schemes with $F=K$. Finally, theoretical and numerical analyses show that our $L$-HSDP schemes significantly reduce subpacketization compared to existing schemes with exponential subpacketization, while only slightly sacrificing sum-DoF, and achieve both a higher sum-DoF and lower subpacketization than the existing schemes with linear subpacketization.
|
https://arxiv.org/abs/2601.10353
|
Academic Papers
|
svg
|
34639b059cbf3195fb723715fbf05082328f9ba2c25457a905321cf77fed6ec7
|
2026-01-16T00:00:00-05:00
|
Unlocking Implicit Experience: Synthesizing Tool-Use Trajectories from Text
|
arXiv:2601.10355v1 Announce Type: new Abstract: Enabling Large Language Models (LLMs) to effectively utilize tools in multi-turn interactions is essential for building capable autonomous agents. However, acquiring diverse and realistic multi-turn tool-use data remains a significant challenge. In this work, we propose a novel text-based paradigm. We observe that textual corpora naturally contain rich, multi-step problem-solving experiences, which can serve as an untapped, scalable, and authentic data source for multi-turn tool-use tasks. Based on this insight, we introduce GEM, a data synthesis pipeline that enables the generation and extraction of multi-turn tool-use trajectories from text corpora through a four-stage process: relevance filtering, workflow & tool extraction, trajectory grounding, and complexity refinement. To reduce the computational cost, we further train a specialized Trajectory Synthesizer via supervised fine-tuning. This model distills the complex generation pipeline into an efficient, end-to-end trajectory generator. Experiments demonstrate that our GEM-32B achieves a 16.5% improvement on the BFCL V3 Multi-turn benchmark. Our models partially surpass the performance of models trained on $\tau$-bench (Airline and Retail) in-domain data, highlighting the superior generalization capability derived from our text-based synthesis paradigm. Notably, our Trajectory Synthesizer matches the quality of the full pipeline while significantly reducing inference latency and costs.
|
https://arxiv.org/abs/2601.10355
|
Academic Papers
|
svg
|
1dae375d2d6bcc47f4852b38bc4ebf495ae608bc2e888aae74f8ad7c7266d419
|
2026-01-16T00:00:00-05:00
|
EvoMorph: Counterfactual Explanations for Continuous Time-Series Extrinsic Regression Applied to Photoplethysmography
|
arXiv:2601.10356v1 Announce Type: new Abstract: Wearable devices enable continuous, population-scale monitoring of physiological signals, such as photoplethysmography (PPG), creating new opportunities for data-driven clinical assessment. Time-series extrinsic regression (TSER) models increasingly leverage PPG signals to estimate clinically relevant outcomes, including heart rate, respiratory rate, and oxygen saturation. For clinical reasoning and trust, however, single point estimates alone are insufficient: clinicians must also understand whether predictions are stable under physiologically plausible variations and to what extent realistic, attainable changes in physiological signals would meaningfully alter a model's prediction. Counterfactual explanations (CFE) address these "what-if" questions, yet existing time series CFE generation methods are largely restricted to classification, overlook waveform morphology, and often produce physiologically implausible signals, limiting their applicability to continuous biomedical time series. To address these limitations, we introduce EvoMorph, a multi-objective evolutionary framework for generating physiologically plausible and diverse CFE for TSER applications. EvoMorph optimizes morphology-aware objectives defined on interpretable signal descriptors and applies transformations to preserve the waveform structure. We evaluated EvoMorph on three PPG datasets (heart rate, respiratory rate, and oxygen saturation) against a nearest-unlike-neighbor baseline. In addition, in a case study, we evaluated EvoMorph as a tool for uncertainty quantification by relating counterfactual sensitivity to bootstrap-ensemble uncertainty and data-density measures. Overall, EvoMorph enables the generation of physiologically-aware counterfactuals for continuous biomedical signals and supports uncertainty-aware interpretability, advancing trustworthy model analysis for clinical time-series applications.
|
https://arxiv.org/abs/2601.10356
|
Academic Papers
|
svg
|
bb87c35ce95d3e434e4b456ef229bc916300d8d113d0e8e618adecf741439357
|
2026-01-16T00:00:00-05:00
|
PLGC: Pseudo-Labeled Graph Condensation
|
arXiv:2601.10358v1 Announce Type: new Abstract: Large graph datasets make training graph neural networks (GNNs) computationally costly. Graph condensation methods address this by generating small synthetic graphs that approximate the original data. However, existing approaches rely on clean, supervised labels, which limits their reliability when labels are scarce, noisy, or inconsistent. We propose Pseudo-Labeled Graph Condensation (PLGC), a self-supervised framework that constructs latent pseudo-labels from node embeddings and optimizes condensed graphs to match the original graph's structural and feature statistics -- without requiring ground-truth labels. PLGC offers three key contributions: (1) A diagnosis of why supervised condensation fails under label noise and distribution shift. (2) A label-free condensation method that jointly learns latent prototypes and node assignments. (3) Theoretical guarantees showing that pseudo-labels preserve latent structural statistics of the original graph and ensure accurate embedding alignment. Empirically, across node classification and link prediction tasks, PLGC achieves competitive performance with state-of-the-art supervised condensation methods on clean datasets and exhibits substantial robustness under label noise, often outperforming all baselines by a significant margin. Our findings highlight the practical and theoretical advantages of self-supervised graph condensation in noisy or weakly-labeled environments.
|
https://arxiv.org/abs/2601.10358
|
Academic Papers
|
svg
|
6c3653e969483d9542a5b38cafbc991a9c905a292a4b77793b4d20e3421b62c3
|
2026-01-16T00:00:00-05:00
|
Generalized Weight Structure of Polar Codes: Selected Template Polynomials
|
arXiv:2601.10362v1 Announce Type: new Abstract: Polar codes can be viewed as decreasing monomial codes, revealing a rich algebraic structure governed by the lower-triangular affine (LTA) group. We develop a general framework to compute the Hamming weight of codewords generated by sums of monomials, express these weights in a canonical dyadic form, and derive closed expressions for key structural templates (disjoint sums, nested blocks, complementary flips) that generate the low and intermediate weight spectrum. Combining these templates with the LTA group action, we obtain explicit multiplicity formulas, yielding a unified algebraic method to characterize and enumerate codewords.
|
https://arxiv.org/abs/2601.10362
|
Academic Papers
|
svg
|
ac3f11b98352f53a64b4cee582f432bdc015d9d1c3ae759c7812587ac39d7544
|
2026-01-16T00:00:00-05:00
|
FastStair: Learning to Run Up Stairs with Humanoid Robots
|
arXiv:2601.10365v1 Announce Type: new Abstract: Running up stairs is effortless for humans but remains extremely challenging for humanoid robots due to the simultaneous requirements of high agility and strict stability. Model-free reinforcement learning (RL) can generate dynamic locomotion, yet implicit stability rewards and heavy reliance on task-specific reward shaping tend to result in unsafe behaviors, especially on stairs; conversely, model-based foothold planners encode contact feasibility and stability structure, but enforcing their hard constraints often induces conservative motion that limits speed. We present FastStair, a planner-guided, multi-stage learning framework that reconciles these complementary strengths to achieve fast and stable stair ascent. FastStair integrates a parallel model-based foothold planner into the RL training loop to bias exploration toward dynamically feasible contacts and to pretrain a safety-focused base policy. To mitigate planner-induced conservatism and the discrepancy between low- and high-speed action distributions, the base policy is fine-tuned into speed-specialized experts and then integrated via Low-Rank Adaptation (LoRA) to enable smooth operation across the full commanded-speed range. We deploy the resulting controller on the Oli humanoid robot, achieving stable stair ascent at commanded speeds up to 1.65 m/s and traversing a 33-step spiral staircase (17 cm rise per step) in 12 s, demonstrating robust high-speed performance on long staircases. Notably, the proposed approach served as the champion solution in the Canton Tower Robot Run Up Competition.
|
https://arxiv.org/abs/2601.10365
|
Academic Papers
|
svg
|
7d6263924498c1e66c7e598bdc2fdce918fad0a1af9626776335a9359b27c581
|
2026-01-16T00:00:00-05:00
|
Inverse Learning in $2\times2$ Games: From Synthetic Interactions to Traffic Simulation
|
arXiv:2601.10367v1 Announce Type: new Abstract: Understanding how agents coordinate or compete from limited behavioral data is central to modeling strategic interactions in traffic, robotics, and other multi-agent systems. In this work, we investigate the following complementary formulations of inverse game-theoretic learning: (i) a Closed-form Correlated Equilibrium Maximum-Likelihood estimator (CE-ML) specialized for $2\times2$ games; and (ii) a Logit Best Response Maximum-Likelihood estimator (LBR-ML) that captures long-run adaptation dynamics via stochastic response processes. Together, these approaches span the spectrum between static equilibrium consistency and dynamic behavioral realism. We evaluate them on synthetic "chicken-dare" games and traffic-interaction scenarios simulated in SUMO, comparing parameter recovery and distributional fit. Results reveal clear trade-offs between interpretability, computational tractability, and behavioral expressiveness across models.
|
https://arxiv.org/abs/2601.10367
|
Academic Papers
|
svg
|
29db87754cdc8c28afbc5e1c06374ab395d3c5e959f4cfdd0cc70fb51ef0b1de
|
2026-01-16T00:00:00-05:00
|
Fine-Grained Human Pose Editing Assessment via Layer-Selective MLLMs
|
arXiv:2601.10369v1 Announce Type: new Abstract: Text-guided human pose editing has gained significant traction in AIGC applications. However, it remains plagued by structural anomalies and generative artifacts. Existing evaluation metrics often isolate authenticity detection from quality assessment, failing to provide fine-grained insights into pose-specific inconsistencies. To address these limitations, we introduce HPE-Bench, a specialized benchmark comprising 1,700 standardized samples from 17 state-of-the-art editing models, offering both authenticity labels and multi-dimensional quality scores. Furthermore, we propose a unified framework based on layer-selective multimodal large language models (MLLMs). By employing contrastive LoRA tuning and a novel layer sensitivity analysis (LSA) mechanism, we identify the optimal feature layer for pose evaluation. Our framework achieves superior performance in both authenticity detection and multi-dimensional quality regression, effectively bridging the gap between forensic detection and quality assessment.
|
https://arxiv.org/abs/2601.10369
|
Academic Papers
|
svg
|
d2be21cd8d22d5e50a5438eb21f9bedab6deafbe84e849c218093e15a14a2d93
|
2026-01-16T00:00:00-05:00
|
Towards Efficient Low-rate Image Compression with Frequency-aware Diffusion Prior Refinement
|
arXiv:2601.10373v1 Announce Type: new Abstract: Recent advancements in diffusion-based generative priors have enabled visually plausible image compression at extremely low bit rates. However, existing approaches suffer from slow sampling processes and suboptimal bit allocation due to fragmented training paradigms. In this work, we propose Accelerate \textbf{Diff}usion-based Image Compression via \textbf{C}onsistency Prior \textbf{R}efinement (DiffCR), a novel compression framework for efficient and high-fidelity image reconstruction. At the heart of DiffCR is a Frequency-aware Skip Estimation (FaSE) module that refines the $\epsilon$-prediction prior from a pre-trained latent diffusion model and aligns it with compressed latents at different timesteps via Frequency Decoupling Attention (FDA). Furthermore, a lightweight consistency estimator enables fast \textbf{two-step decoding} by preserving the semantic trajectory of diffusion sampling. Without updating the backbone diffusion model, DiffCR achieves substantial bitrate savings (27.2\% BD-rate (LPIPS) and 65.1\% BD-rate (PSNR)) and over $10\times$ speed-up compared to SOTA diffusion-based compression baselines.
|
https://arxiv.org/abs/2601.10373
|
Academic Papers
|
svg
|
92021b588b124056ba1687a6912ecfe9f6d312c74ceba9ddd10adaef367b7b01
|
2026-01-16T00:00:00-05:00
|
A Hybrid Reliability--Weight Framework for Construction of Polar Codes
|
arXiv:2601.10376v1 Announce Type: new Abstract: Polar codes are usually constructed by ranking synthetic bit-channels according to reliability, which guarantees capacity-achieving behavior but can yield poor low-weight spectra at short and moderate lengths. Recent algebraic results express the contribution of individual bit-channels to the multiplicities of minimum and near-minimum weight codewords in closed form. In this work we combine these insights into a mixed (reliability--weight) bit-channel ordering. We define a per-bit cost whose distance term is derived from orbit enumeration of minimum-weight codewords and scaled by a Bhattacharyya-type factor, and show that the resulting mixed construction minimises a truncated SC/ML union-bound surrogate within a class of decreasing monomial codes. We relate the mixed metric to error events in SCL decoding via a pruning/ML decomposition, and prove that mixed designs act as local perturbations of reliability-based constructions whose asymptotic impact vanishes as code-length approaches infinity. Numerical results for short and moderate lengths on BPSK-AWGN, implemented via Gaussian approximation and closed-form weight contributions, illustrate the trade-off between pure reliability-based and mixed constructions in terms of minimum distance, multiplicity, and union-bound approximations. All proofs are deferred to the appendices.
|
https://arxiv.org/abs/2601.10376
|
Academic Papers
|
svg
|
651de605f72de63f2b5363c43c0a9a32b91dbb788f128ee6d4b0145483d0396b
|
2026-01-16T00:00:00-05:00
|
Global Context Compression with Interleaved Vision-Text Transformation
|
arXiv:2601.10378v1 Announce Type: new Abstract: Recent achievements of vision-language models in end-to-end OCR point to a new avenue for low-loss compression of textual information. This motivates earlier works that render the Transformer's input into images for prefilling, which effectively reduces the number of tokens through visual encoding, thereby alleviating the quadratically increased Attention computations. However, this partial compression fails to save computational or memory costs at token-by-token inference. In this paper, we investigate global context compression, which saves tokens at both prefilling and inference stages. Consequently, we propose VIST2, a novel Transformer that interleaves input text chunks alongside their visual encoding, while depending exclusively on visual tokens in the pre-context to predict the next text token distribution. Around this idea, we render text chunks into sketch images and train VIST2 in multiple stages, starting from curriculum-scheduled pretraining for optical language modeling, followed by modal-interleaved instruction tuning. We conduct extensive experiments using VIST2 families scaled from 0.6B to 8B to explore the training recipe and hyperparameters. With a 4$\times$ compression ratio, the resulting models demonstrate significant superiority over baselines on long writing tasks, achieving, on average, a 3$\times$ speedup in first-token generation, 77% reduction in memory usage, and 74% reduction in FLOPS. Our codes and datasets will be public to support further studies.
|
https://arxiv.org/abs/2601.10378
|
Academic Papers
|
svg
|
86527c1fe15b466c9e7b57221d950b0b01abcc6a92bb6359ede69e447eb62125
|
2026-01-16T00:00:00-05:00
|
Online identification of nonlinear time-varying systems with uncertain information
|
arXiv:2601.10379v1 Announce Type: new Abstract: Digital twins (DTs), serving as the core enablers for real-time monitoring and predictive maintenance of complex cyber-physical systems, impose critical requirements on their virtual models: high predictive accuracy, strong interpretability, and online adaptive capability. However, existing techniques struggle to meet these demands simultaneously: Bayesian methods excel in uncertainty quantification but lack model interpretability, while interpretable symbolic identification methods (e.g., SINDy) are constrained by their offline, batch-processing nature, which makes real-time updates challenging. To bridge this semantic and computational gap, this paper proposes a novel Bayesian Regression-based Symbolic Learning (BRSL) framework. The framework formulates online symbolic discovery as a unified probabilistic state-space model. By incorporating sparse horseshoe priors, model selection is transformed into a Bayesian inference task, enabling simultaneous system identification and uncertainty quantification. Furthermore, we derive an online recursive algorithm with a forgetting factor and establish precise recursive conditions that guarantee the well-posedness of the posterior distribution. These conditions also function as real-time monitors for data utility, enhancing algorithmic robustness. Additionally, a rigorous convergence analysis is provided, demonstrating the convergence of parameter estimates under persistent excitation conditions. Case studies validate the effectiveness of the proposed framework in achieving interpretable, probabilistic prediction and online learning.
|
https://arxiv.org/abs/2601.10379
|
Academic Papers
|
svg
|
e39416d651ceff3562dd0a54a142e7332dd5b01a08e0486ca40bfc9c023b5cbb
|
2026-01-16T00:00:00-05:00
|
Does Cognitive Load Affect Human Accuracy in Detecting Voice-Based Deepfakes?
|
arXiv:2601.10383v1 Announce Type: new Abstract: Deepfake technologies are powerful tools that can be misused for malicious purposes such as spreading disinformation on social media. The effectiveness of such malicious applications depends on the ability of deepfakes to deceive their audience. Therefore, researchers have investigated human abilities to detect deepfakes in various studies. However, most of these studies were conducted with participants who focused exclusively on the detection task; hence the studies may not provide a complete picture of human abilities to detect deepfakes under realistic conditions: Social media users are exposed to cognitive load on the platform, which can impair their detection abilities. In this paper, we investigate the influence of cognitive load on human detection abilities of voice-based deepfakes in an empirical study with 30 participants. Our results suggest that low cognitive load does not generally impair detection abilities, and that the simultaneous exposure to a secondary stimulus can actually benefit people in the detection task.
|
https://arxiv.org/abs/2601.10383
|
Academic Papers
|
svg
|
6f37df0783544b144e5219a8a54d513a2c7ec085fd2565b3644b777748230528
|
2026-01-16T00:00:00-05:00
|
RSA-Bench: Benchmarking Audio Large Models in Real-World Acoustic Scenarios
|
arXiv:2601.10384v1 Announce Type: new Abstract: While Audio Large Models (ALMs) have achieved remarkable proficiency, their robustness remains brittle in real-world deployment. Existing evaluations largely rely on synthetic Gaussian noise or simplistic single-source interference, failing to capture the intricate, multi-layered acoustic dynamics -- or ``Acoustic Ecology'' -- that characterize authentic physical environments. To bridge this ecological gap, we introduce \textbf{RSA-Bench}, a comprehensive robustness benchmark designed to stress-test ALMs through high-fidelity auditory scene simulations. Unlike traditional methods, we construct evaluation samples by naturally superimposing diverse environmental soundscapes -- spanning \textit{Pasture}, \textit{Extreme Weather}, \textit{Classroom}, and \textit{Outdoors} -- onto clean speech signals across a spectrum of interference intensities. By evaluating models on six core tasks ranging from fundamental perception to complex reasoning, our study unveils three macro-level insights: \textbf{(I) The Perception-Cognition Gap:} Models maintain relative resilience in low-level recognition but suffer a \textbf{functional collapse} in high-order reasoning tasks under stress; \textbf{(II) Scenario Sensitivity:} ``Vocal-like'' interference (e.g., background laughter) proves significantly more destructive than mechanical noise, challenging the model's auditory attention mechanisms; and \textbf{(III) The Denoising Paradox:} Standard speech enhancement often exacerbates performance degradation, as ALMs prove highly sensitive to the semantic distortions introduced by denoising artifacts.
|
https://arxiv.org/abs/2601.10384
|
Academic Papers
|
svg
|
dcbdf401500f2ab9635b096312ea6d1da9b2d99cea087e1a3fbf79b03336a97f
|
2026-01-16T00:00:00-05:00
|
Handling Missing Modalities in Multimodal Survival Prediction for Non-Small Cell Lung Cancer
|
arXiv:2601.10386v1 Announce Type: new Abstract: Accurate survival prediction in Non-Small Cell Lung Cancer (NSCLC) requires the integration of heterogeneous clinical, radiological, and histopathological information. While Multimodal Deep Learning (MDL) offers promise for precision prognosis and survival prediction, its clinical applicability is severely limited by small cohort sizes and the presence of missing modalities, often forcing complete-case filtering or aggressive imputation. In this work, we present a missing-aware multimodal survival framework that integrates Computed Tomography (CT), Whole-Slide Histopathology (WSI) Images, and structured clinical variables for overall survival modeling in unresectable stage II-III NSCLC. By leveraging Foundation Models (FM) for modality-specific feature extraction and a missing-aware encoding strategy, the proposed approach enables intermediate multimodal fusion under naturally incomplete modality profiles. The proposed architecture is resilient to missing modalities by design, allowing the model to utilize all available data without being forced to drop patients during training or inference. Experimental results demonstrate that intermediate fusion consistently outperforms unimodal baselines as well as early and late fusion strategies, with the strongest performance achieved by the fusion of WSI and clinical modalities (73.30 C-index). Further analyses of modality importance reveal an adaptive behavior in which less informative modalities, i.e., the CT modality, are automatically down-weighted and contribute less to the final survival prediction.
|
https://arxiv.org/abs/2601.10386
|
Academic Papers
|
svg
|
ae9d5fb375d34917ae7ae0eade5a31f74ed9779b45cd3bd2d52b6442c5f56e1b
|
2026-01-16T00:00:00-05:00
|
The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models
|
arXiv:2601.10387v1 Announce Type: new Abstract: Large language models can represent a variety of personas but typically default to a helpful Assistant identity cultivated during post-training. We investigate the structure of the space of model personas by extracting activation directions corresponding to diverse character archetypes. Across several different models, we find that the leading component of this persona space is an "Assistant Axis," which captures the extent to which a model is operating in its default Assistant mode. Steering towards the Assistant direction reinforces helpful and harmless behavior; steering away increases the model's tendency to identify as other entities. Moreover, steering away with more extreme values often induces a mystical, theatrical speaking style. We find this axis is also present in pre-trained models, where it primarily promotes helpful human archetypes like consultants and coaches and inhibits spiritual ones. Measuring deviations along the Assistant Axis predicts "persona drift," a phenomenon where models slip into exhibiting harmful or bizarre behaviors that are uncharacteristic of their typical persona. We find that persona drift is often driven by conversations demanding meta-reflection on the model's processes or featuring emotionally vulnerable users. We show that restricting activations to a fixed region along the Assistant Axis can stabilize model behavior in these scenarios -- and also in the face of adversarial persona-based jailbreaks. Our results suggest that post-training steers models toward a particular region of persona space but only loosely tethers them to it, motivating work on training and steering strategies that more deeply anchor models to a coherent persona.
|
https://arxiv.org/abs/2601.10387
|
Academic Papers
|
svg
|
0fcdaf9d4de376e980877147f49f80cc21e901486d87c50d0d656120b3d256b7
|
2026-01-16T00:00:00-05:00
|
INDIC DIALECT: A Multi Task Benchmark to Evaluate and Translate in Indian Language Dialects
|
arXiv:2601.10388v1 Announce Type: new Abstract: Recent NLP advances focus primarily on standardized languages, leaving most low-resource dialects under-served, especially in Indian scenarios. In India, the issue is particularly important: despite Hindi being the third most spoken language globally (over 600 million speakers), its numerous dialects remain underrepresented. The situation is similar for Odia, which has around 45 million speakers. While some datasets exist which contain standard Hindi and Odia languages, their regional dialects have almost no web presence. We introduce INDIC-DIALECT, a human-curated parallel corpus of 13k sentence pairs spanning 11 dialects and 2 languages: Hindi and Odia. Using this corpus, we construct a multi-task benchmark with three tasks: dialect classification, multiple-choice question (MCQ) answering, and machine translation (MT). Our experiments show that LLMs like GPT-4o and Gemini 2.5 perform poorly on the classification task, while fine-tuned transformer-based models pretrained on Indian languages substantially improve performance, e.g., improving F1 from 19.6\% to 89.8\% on dialect classification. For dialect-to-language translation, we find that a hybrid AI model achieves the highest BLEU score of 61.32 compared to the baseline score of 23.36. Interestingly, due to the complexity of generating dialect sentences, we observe that for language-to-dialect translation the ``rule-based followed by AI'' approach achieves the best BLEU score of 48.44 compared to the baseline score of 27.59. INDIC-DIALECT is thus a new benchmark for dialect-aware Indic NLP, and we plan to release it as open source to support further work on low-resource Indian dialects.
|
https://arxiv.org/abs/2601.10388
|
Academic Papers
|
svg
|
87ba66090580d7caa0a191263724a0d8d2d814fc69674b497c52d2749702d350
|
2026-01-16T00:00:00-05:00
|
Regularization of linear inverse problems by rational Krylov methods
|
arXiv:2601.10389v1 Announce Type: new Abstract: For approximately solving linear ill-posed problems in Hilbert spaces, we investigate the regularization properties of the aggregation method and the RatCG method. These recent algorithms use previously calculated solutions of Tikhonov regularization (respectively, Landweber iterations) to set up a new search space on which the least-squares functional is minimized. We outline how these methods can be understood as rational Krylov space methods, i.e., based on the space of rational functions of the forward operator. The main result is that these methods form optimal-order regularization schemes when combined with the discrepancy principle as the stopping rule and when the underlying regularization parameters are sufficiently large.
|
https://arxiv.org/abs/2601.10389
|
Academic Papers
|
svg
|
a9f91125093f8c6a7865910bc5c3d767a5556b40b7abffd08a3735e85e26ae81
|
2026-01-16T00:00:00-05:00
|
Codebook Design for Limited Feedback in Near-Field XL-MIMO Systems
|
arXiv:2601.10391v1 Announce Type: new Abstract: In this paper, we study efficient codebook design for limited feedback in extremely large-scale multiple-input-multiple-output (XL-MIMO) frequency division duplexing (FDD) systems. It is worth noting that existing codebook designs for XL-MIMO, such as polar-domain codebook, have not well taken into account user (location) distribution in practice, thereby incurring excessive feedback overhead. To address this issue, we propose in this paper a novel and efficient feedback codebook tailored to user distribution. To this end, we first consider a typical scenario where users are uniformly distributed within a specific polar-region, based on which a sum-rate maximization problem is formulated to jointly optimize angle-range samples and bit allocation among angle/range feedback. This problem is challenging to solve due to the lack of a closed-form expression for the received power in terms of angle and range samples. By leveraging a Voronoi partitioning approach, we show that uniform angle sampling is optimal for received power maximization. For more challenging range sampling design, we obtain a tight lower-bound on the received power and show that geometric sampling, where the ratio between adjacent samples is constant, can maximize the lower bound and thus serves as a high-quality suboptimal solution. We then extend the proposed framework to accommodate more general non-uniform user distribution via an alternating sampling method. Furthermore, theoretical analysis reveals that as the array size increases, the optimal allocation of feedback bits increasingly favors range samples at the expense of angle samples. Finally, numerical results validate the superior rate performance and robustness of the proposed codebook design under various system setups, achieving significant gains over benchmark schemes, including the widely used polar-domain codebook.
|
https://arxiv.org/abs/2601.10391
|
Academic Papers
|
svg
|
bb8793f123e39ffd757d5dde334e26cb395f988b9f9948653164d732aa0ae66d
|
2026-01-16T00:00:00-05:00
|
Multi-Temporal Frames Projection for Dynamic Processes Fusion in Fluorescence Microscopy
|
arXiv:2601.10392v1 Announce Type: new Abstract: Fluorescence microscopy is widely employed for the analysis of living biological samples; however, the utility of the resulting recordings is frequently constrained by noise, temporal variability, and inconsistent visualisation of signals that oscillate over time. We present a unique computational framework that integrates information from multiple time-resolved frames into a single high-quality image, while preserving the underlying biological content of the original video. We evaluate the proposed method through an extensive number of configurations (n = 111) and on a challenging dataset comprising dynamic, heterogeneous, and morphologically complex 2D monolayers of cardiac cells. Results show that our framework, which consists of a combination of explainable techniques from different computer vision application fields, is capable of generating composite images that preserve and enhance the quality and information of individual microscopy frames, yielding 44% average increase in cell count compared to previous methods. The proposed pipeline is applicable to other imaging domains that require the fusion of multi-temporal image stacks into high-quality 2D images, thereby facilitating annotation and downstream segmentation.
|
https://arxiv.org/abs/2601.10392
|
Academic Papers
|
svg
|
cf6bfa41f655862abae139e83704befb9f889b6515c5dc4db6c75e9790423dd5
|
2026-01-16T00:00:00-05:00
|
Multiaccess Coded Caching with Heterogeneous Retrieval Costs
|
arXiv:2601.10394v1 Announce Type: new Abstract: The multiaccess coded caching (MACC) system, as formulated by Hachem {\it et al.}, consists of a central server with a library of $N$ files, connected to $K$ cache-less users via an error-free shared link, and $K$ cache nodes, each equipped with cache memory of size $M$ files. Each user can access $L$ neighboring cache nodes under a cyclic wrap-around topology. Most existing studies operate under the strong assumption that users can retrieve content from their connected cache nodes at no communication cost. In practice, each user retrieves content from its $L$ different connected cache nodes at varying costs. Additionally, the server also incurs certain costs to transmit the content to the users. In this paper, we focus on a cost-aware MACC system and aim to minimize the total system cost, which includes cache-access costs and broadcast costs. Firstly, we propose a novel coded caching framework based on superposition coding, where the MACC schemes of Cheng \textit{et al.} are layered. Then, a cost-aware optimization problem is derived that optimizes cache placement and minimizes system cost. By identifying a sparsity property of the optimal solution, we propose a structure-aware algorithm with reduced complexity. Simulation results demonstrate that our proposed scheme consistently outperforms the scheme of Cheng {\it et al.} in scenarios with heterogeneous retrieval costs.
|
https://arxiv.org/abs/2601.10394
|
Academic Papers
|
svg
|
093ea3382f8870048efbebd7db56cb53dfc25b216e59c65c5f3643778dbea607
|
2026-01-16T00:00:00-05:00
|
LatentRefusal: Latent-Signal Refusal for Unanswerable Text-to-SQL Queries
|
arXiv:2601.10398v1 Announce Type: new Abstract: In LLM-based text-to-SQL systems, unanswerable and underspecified user queries may generate not only incorrect text but also executable programs that yield misleading results or violate safety constraints, posing a major barrier to safe deployment. Existing refusal strategies for such queries either rely on output-level instruction following, which is brittle due to model hallucinations, or estimate output uncertainty, which adds complexity and overhead. To address this challenge, we formalize safe refusal in text-to-SQL systems as an answerability-gating problem and propose LatentRefusal, a latent-signal refusal mechanism that predicts query answerability from intermediate hidden activations of a large language model. We introduce the Tri-Residual Gated Encoder, a lightweight probing architecture, to suppress schema noise and amplify sparse, localized cues of question-schema mismatch that indicate unanswerability. Extensive empirical evaluations across diverse ambiguous and unanswerable settings, together with ablation studies and interpretability analyses, demonstrate the effectiveness of the proposed approach and show that LatentRefusal provides an attachable and efficient safety layer for text-to-SQL systems. Across four benchmarks, LatentRefusal improves average F1 to 88.5 percent on both backbones while adding approximately 2 milliseconds of probe overhead.
|
https://arxiv.org/abs/2601.10398
|
Academic Papers
|
svg
|
7f2c0813d58381d10e1673c0f67126e013a3631d7f8c078ad7f86b1ef899ca09
|
2026-01-16T00:00:00-05:00
|
A Geometric Multigrid Preconditioner for Shifted Boundary Method
|
arXiv:2601.10399v1 Announce Type: new Abstract: The Shifted Boundary Method (SBM) trades part of the burden of body-fitted meshing for increased algebraic complexity. While the resulting linear systems retain the standard $\mathcal{O}(h^{-2})$ conditioning of second-order operators, the non-symmetry and non-local boundary coupling render them resistant to standard Algebraic Multigrid (AMG) and simple smoothers for high-order discretisations. We present a geometric multigrid preconditioner that effectively tames these systems. At its core lies the \emph{Full-Residual Shy Patch} smoother: a subspace correction strategy that filters out some patches while capturing the full physics of the shifted boundary. Unlike previous cell-wise approaches that falter at high polynomial degrees, our method delivers convergence with low mesh dependence. We demonstrate performance for Continuous Galerkin approximations, maintaining low and stable iteration counts up to polynomial degree $p=3$ in 3D, proving that SBM can be both geometrically flexible and algebraically efficient.
|
https://arxiv.org/abs/2601.10399
|
Academic Papers
|
svg
|
46c7d933f9291cbd05bc731a0eea04af7b796da3c65d0e51b45adad21521d632
|
2026-01-16T00:00:00-05:00
|
Toward Ultra-Long-Horizon Agentic Science: Cognitive Accumulation for Machine Learning Engineering
|
arXiv:2601.10402v1 Announce Type: new Abstract: The advancement of artificial intelligence toward agentic science is currently bottlenecked by the challenge of ultra-long-horizon autonomy, the ability to sustain strategic coherence and iterative correction over experimental cycles spanning days or weeks. While Large Language Models (LLMs) have demonstrated prowess in short-horizon reasoning, they are easily overwhelmed by execution details in the high-dimensional, delayed-feedback environments of real-world research, failing to consolidate sparse feedback into coherent long-term guidance. Here, we present ML-Master 2.0, an autonomous agent that masters ultra-long-horizon machine learning engineering (MLE) which is a representative microcosm of scientific discovery. By reframing context management as a process of cognitive accumulation, our approach introduces Hierarchical Cognitive Caching (HCC), a multi-tiered architecture inspired by computer systems that enables the structural differentiation of experience over time. By dynamically distilling transient execution traces into stable knowledge and cross-task wisdom, HCC allows agents to decouple immediate execution from long-term experimental strategy, effectively overcoming the scaling limits of static context windows. In evaluations on OpenAI's MLE-Bench under 24-hour budgets, ML-Master 2.0 achieves a state-of-the-art medal rate of 56.44%. Our findings demonstrate that ultra-long-horizon autonomy provides a scalable blueprint for AI capable of autonomous exploration beyond human-precedent complexities.
|
https://arxiv.org/abs/2601.10402
|
Academic Papers
|
svg
|
3721d27ebb7073d5d99fa23cd13b48172261c20da59b7f5367c53e405ddf5bd2
|
2026-01-16T00:00:00-05:00
|
Discrete Feynman-Kac Correctors
|
arXiv:2601.10403v1 Announce Type: new Abstract: Discrete diffusion models have recently emerged as a promising alternative to the autoregressive approach for generating discrete sequences. Sample generation via gradual denoising or demasking processes allows them to capture hierarchical non-sequential interdependencies in the data. These custom processes, however, do not assume a flexible control over the distribution of generated samples. We propose Discrete Feynman-Kac Correctors, a framework that allows for controlling the generated distribution of discrete masked diffusion models at inference time. We derive Sequential Monte Carlo (SMC) algorithms that, given a trained discrete diffusion model, control the temperature of the sampled distribution (i.e. perform annealing), sample from the product of marginals of several diffusion processes (e.g. differently conditioned processes), and sample from the product of the marginal with an external reward function, producing likely samples from the target distribution that also have high reward. Notably, our framework does not require any training of additional models or fine-tuning of the original model. We illustrate the utility of our framework in several applications including: efficient sampling from the annealed Boltzmann distribution of the Ising model, improving the performance of language models for code generation and amortized learning, as well as reward-tilted protein sequence generation.
|
https://arxiv.org/abs/2601.10403
|
Academic Papers
|
svg
|
5f5ec7654069a023a12f252f619a120486461ea3d7cdfbb97f4330c27ff1d2e2
|
2026-01-16T00:00:00-05:00
|
ErrEval: Error-Aware Evaluation for Question Generation through Explicit Diagnostics
|
arXiv:2601.10406v1 Announce Type: new Abstract: Automatic Question Generation (QG) often produces outputs with critical defects, such as factual hallucinations and answer mismatches. However, existing evaluation methods, including LLM-based evaluators, mainly adopt a black-box and holistic paradigm without explicit error modeling, leading to the neglect of such defects and overestimation of question quality. To address this issue, we propose ErrEval, a flexible and Error-aware Evaluation framework that enhances QG evaluation through explicit error diagnostics. Specifically, ErrEval reformulates evaluation as a two-stage process of error diagnosis followed by informed scoring. At the first stage, a lightweight plug-and-play Error Identifier detects and categorizes common errors across structural, linguistic, and content-related aspects. These diagnostic signals are then incorporated as explicit evidence to guide LLM evaluators toward more fine-grained and grounded judgments. Extensive experiments on three benchmarks demonstrate the effectiveness of ErrEval, showing that incorporating explicit diagnostics improves alignment with human judgments. Further analyses confirm that ErrEval effectively mitigates the overestimation of low-quality questions.
|
https://arxiv.org/abs/2601.10406
|
Academic Papers
|
svg
|
723b471667624772bb71ad2eb6fe1748fb3c57864282a057d0a763250c458644
|
2026-01-16T00:00:00-05:00
|
CS-GBA: A Critical Sample-based Gradient-guided Backdoor Attack for Offline Reinforcement Learning
|
arXiv:2601.10407v1 Announce Type: new Abstract: Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to backdoor attacks. Existing attack strategies typically struggle against safety-constrained algorithms (e.g., CQL) due to inefficient random poisoning and the use of easily detectable Out-of-Distribution (OOD) triggers. In this paper, we propose CS-GBA (Critical Sample-based Gradient-guided Backdoor Attack), a novel framework designed to achieve high stealthiness and destructiveness under a strict budget. Leveraging the theoretical insight that samples with high Temporal Difference (TD) errors are pivotal for value function convergence, we introduce an adaptive Critical Sample Selection strategy that concentrates the attack budget on the most influential transitions. To evade OOD detection, we propose a Correlation-Breaking Trigger mechanism that exploits the physical mutual exclusivity of state features (e.g., 95th percentile boundaries) to remain statistically concealed. Furthermore, we replace the conventional label inversion with a Gradient-Guided Action Generation mechanism, which searches for worst-case actions within the data manifold using the victim Q-network's gradient. Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms state-of-the-art baselines, achieving high attack success rates against representative safety-constrained algorithms with a minimal 5% poisoning budget, while maintaining the agent's performance in clean environments.
|
https://arxiv.org/abs/2601.10407
|
Academic Papers
|
svg
|
07c10df31c53956d229d7e575ca097a04d033bf5713194492d2f1ae6db310878
|
2026-01-16T00:00:00-05:00
|
TF3-RO-50M: Training Compact Romanian Language Models from Scratch on Synthetic Moral Microfiction
|
arXiv:2601.10410v1 Announce Type: new Abstract: Recent advances in synthetic data generation have shown that compact language models can be trained effectively when the underlying corpus is structurally controlled and linguistically coherent. However, for morphologically rich and computationally under-resourced languages such as Romanian, there is still no openly documented, end-to-end pipeline that unifies tokenizer design, preprocessing, pretraining, compression, evaluation, and large-scale synthetic data generation in a reproducible framework. Building on TF1, a three-million-story English fable dataset, and TF2, which extends TF1 through high-quality Romanian translations, we introduce TF3-RO, a Romanian-centric language modeling pipeline spanning tokenizer training, from-scratch model development, and Romanian-native dataset generation. TF3-RO constructs Romanian-specific BPE and Unigram tokenizers from a linguistically informed corpus to mitigate token inflation induced by Romanian morphology. Using long-sequence packed training, we pretrain a 51.65M-parameter LLaMA-style Transformer entirely from scratch. The model is subsequently optimized through quantization, structured pruning, and logit-based knowledge distillation, yielding a compact 26.45M-parameter student model with tied embeddings and strong deployment characteristics. Using this distilled model, TF3-RO generates three million Romanian-native synthetic fables via a controlled combinatorial prompting framework. Across all stages, the pipeline integrates a comprehensive evaluation suite combining intrinsic metrics, Romanian agreement probes, entity coherence, rule-based grammar checking, and LLM-based assessment. TF3-RO provides a reproducible and linguistically grounded framework for training compact Romanian language models and producing large-scale synthetic narrative corpora.
|
https://arxiv.org/abs/2601.10410
|
Academic Papers
|
svg
|
f2a2db8980446a5444cd79be87fca8ed8b4c66e6f2b8db83322d72a9c068d899
|
2026-01-16T00:00:00-05:00
|
LADFA: A Framework of Using Large Language Models and Retrieval-Augmented Generation for Personal Data Flow Analysis in Privacy Policies
|
arXiv:2601.10413v1 Announce Type: new Abstract: Privacy policies help inform people about organisations' personal data processing practices, covering different aspects such as data collection, data storage, and sharing of personal data with third parties. Privacy policies are often difficult for people to fully comprehend due to the lengthy and complex legal language used and inconsistent practices across different sectors and organisations. To help conduct automated and large-scale analyses of privacy policies, many researchers have studied applications of machine learning and natural language processing techniques, including large language models (LLMs). While a limited number of prior studies utilised LLMs for extracting personal data flows from privacy policies, our approach builds on this line of work by combining LLMs with retrieval-augmented generation (RAG) and a customised knowledge base derived from existing studies. This paper presents the development of LADFA, an end-to-end computational framework, which can process unstructured text in a given privacy policy, extract personal data flows and construct a personal data flow graph, and conduct analysis of the data flow graph to facilitate insight discovery. The framework consists of a pre-processor, an LLM-based processor, and a data flow post-processor. We demonstrated and validated the effectiveness and accuracy of the proposed approach by conducting a case study that involved examining ten selected privacy policies from the automotive industry. Moreover, it is worth noting that LADFA is designed to be flexible and customisable, making it suitable for a range of text-based analysis tasks beyond privacy policy analysis.
|
https://arxiv.org/abs/2601.10413
|
Academic Papers
|
svg
|
4291d3bb480546fff9e2579440b6f67e4b6d45778a35f855a9d1270bc3ef30bd
|
2026-01-16T00:00:00-05:00
|
LLMdoctor: Token-Level Flow-Guided Preference Optimization for Efficient Test-Time Alignment of Large Language Models
|
arXiv:2601.10416v1 Announce Type: new Abstract: Aligning Large Language Models (LLMs) with human preferences is critical, yet traditional fine-tuning methods are computationally expensive and inflexible. While test-time alignment offers a promising alternative, existing approaches often rely on distorted trajectory-level signals or inefficient sampling, fundamentally capping performance and failing to preserve the generative diversity of the base model. This paper introduces LLMdoctor, a novel framework for efficient test-time alignment that operates via a patient-doctor paradigm. It integrates token-level reward acquisition with token-level flow-guided preference optimization (TFPO) to steer a large, frozen patient LLM with a smaller, specialized doctor model. Unlike conventional methods that rely on trajectory-level rewards, LLMdoctor first extracts fine-grained, token-level preference signals from the patient model's behavioral variations. These signals then guide the training of the doctor model via TFPO, which establishes flow consistency across all subtrajectories, enabling precise token-by-token alignment while inherently preserving generation diversity. Extensive experiments demonstrate that LLMdoctor significantly outperforms existing test-time alignment methods and even surpasses the performance of full fine-tuning approaches like DPO.
|
https://arxiv.org/abs/2601.10416
|
Academic Papers
|
svg
|
152fdfc483de52a9e481aef8f98ee9b407ef92a42e9e0b24f9bb75e159efda6e
|
2026-01-16T00:00:00-05:00
|
Reinforcement Learning with Multi-Step Lookahead Information Via Adaptive Batching
|
arXiv:2601.10418v1 Announce Type: new Abstract: We study tabular reinforcement learning problems with multiple steps of lookahead information. Before acting, the learner observes $\ell$ steps of future transition and reward realizations: the exact state the agent would reach and the rewards it would collect under any possible course of action. While it has been shown that such information can drastically boost the value, finding the optimal policy is NP-hard, and it is common to apply one of two tractable heuristics: processing the lookahead in chunks of predefined sizes ('fixed batching policies'), and model predictive control. We first illustrate the problems with these two approaches and propose utilizing the lookahead in adaptive (state-dependent) batches; we refer to such policies as adaptive batching policies (ABPs). We derive the optimal Bellman equations for these strategies and design an optimistic regret-minimizing algorithm that enables learning the optimal ABP when interacting with unknown environments. Our regret bounds are order-optimal up to a potential factor of the lookahead horizon $\ell$, which can usually be considered a small constant.
|
https://arxiv.org/abs/2601.10418
|
Academic Papers
|
svg
|
41a7793e11e2d350cb0be30abf0f97a918f69ad299b8271d68285841da99fde5
|
2026-01-16T00:00:00-05:00
|
Are Language Models Models?
|
arXiv:2601.10421v1 Announce Type: new Abstract: Futrell and Mahowald claim LMs "serve as model systems", but an assessment at each of Marr's three levels suggests the claim is clearly not true at the implementation level, poorly motivated at the algorithmic-representational level, and problematic at the computational theory level. LMs are good candidates as tools; calling them cognitive models overstates the case and unnecessarily feeds LLM hype.
|
https://arxiv.org/abs/2601.10421
|
Academic Papers
|
svg
|
9005a5c600a4f9cf115d44b19957728bfb11e5bb492ab1375230c324e2e67b2e
|
2026-01-16T00:00:00-05:00
|
Placement Delivery Array for Cache-Aided MIMO Systems
|
arXiv:2601.10422v1 Announce Type: new Abstract: We consider a $(G,L,K,M,N)$ cache-aided multiple-input multiple-output (MIMO) network, where a server equipped with $L$ antennas and a library of $N$ equal-size files communicates with $K$ users, each equipped with $G$ antennas and a cache of size $M$ files, over a wireless interference channel. Each user requests an arbitrary file from the library. The goal is to design coded caching schemes that simultaneously achieve the maximum sum degrees of freedom (sum-DoF) and low subpacketization. In this paper, we first introduce a unified combinatorial structure, termed the MIMO placement delivery array (MIMO-PDA), which characterizes uncoded placement and one-shot zero-forcing delivery. By analyzing the combinatorial properties of MIMO-PDAs, we derive a sum-DoF upper bound of $\min\{KG, Gt+G\lceil L/G \rceil\}$, where $t=KM/N$, which coincides with the optimal DoF characterization in prior work by Tehrani \emph{et al.}. Based on this upper bound, we present two novel constructions of MIMO-PDAs that achieve the maximum sum-DoF. The first construction achieves linear subpacketization under stringent parameter constraints, while the second achieves ordered exponential subpacketization under substantially milder constraints. Theoretical analysis and numerical comparisons demonstrate that the second construction exponentially reduces subpacketization compared to existing schemes while preserving the maximum sum-DoF.
|
https://arxiv.org/abs/2601.10422
|
Academic Papers
|
svg
|
566113a2b5965095fc659d4eba5e585e13676517dac6d360dbe194629bbed915
|
2026-01-16T00:00:00-05:00
|
Aletheia-Probe: A Tool for Automated Journal Assessment
|
arXiv:2601.10431v1 Announce Type: new Abstract: Assessing journal legitimacy during literature reviews, publication venue selection, and citation verification requires consulting information scattered across multiple incompatible datasets. This paper introduces Aletheia-Probe, an open-source tool that systematically aggregates curated databases and pattern analysis from multiple authoritative sources to provide transparent, confidence-scored journal assessments. The tool explicitly reports which sources were consulted, what each found, and where evidence conflicts. The tool integrates into research workflows through command-line and programmatic interfaces. It reduces manual assessment overhead while explicitly flagging uncertain cases. We present the tool's architecture, core design principles, and practical integration approach. Comprehensive empirical validation will be presented in forthcoming work.
|
https://arxiv.org/abs/2601.10431
|
Academic Papers
|
svg
|
48edbd1ed3b4000d1faeb29bf83226eeddbeded9fbbc42de7e613d2639c79cf8
|
2026-01-16T00:00:00-05:00
|
Development of Ontological Knowledge Bases by Leveraging Large Language Models
|
arXiv:2601.10436v1 Announce Type: new Abstract: Ontological Knowledge Bases (OKBs) play a vital role in structuring domain-specific knowledge and serve as a foundation for effective knowledge management systems. However, their traditional manual development poses significant challenges related to scalability, consistency, and adaptability. Recent advancements in Generative AI, particularly Large Language Models (LLMs), offer promising solutions for automating and enhancing OKB development. This paper introduces a structured, iterative methodology leveraging LLMs to optimize knowledge acquisition, automate ontology artifact generation, and enable continuous refinement cycles. We demonstrate this approach through a detailed case study focused on developing a user context profile ontology within the vehicle sales domain. Key contributions include significantly accelerated ontology construction processes, improved ontological consistency, effective bias mitigation, and enhanced transparency in the ontology engineering process. Our findings highlight the transformative potential of integrating LLMs into ontology development, notably improving scalability, integration capabilities, and overall efficiency in knowledge management systems.
|
https://arxiv.org/abs/2601.10436
|
Academic Papers
|
svg
|
77208429a66b8ab5c0142fc52497b3383b588d9bdf5106ccfe34e91001629862
|
2026-01-16T00:00:00-05:00
|
AgentGuardian: Learning Access Control Policies to Govern AI Agent Behavior
|
arXiv:2601.10440v1 Announce Type: new Abstract: Artificial intelligence (AI) agents are increasingly used in a variety of domains to automate tasks, interact with users, and make decisions based on data inputs. Ensuring that AI agents perform only authorized actions and handle inputs appropriately is essential for maintaining system integrity and preventing misuse. In this study, we introduce the AgentGuardian, a novel security framework that governs and protects AI agent operations by enforcing context-aware access-control policies. During a controlled staging phase, the framework monitors execution traces to learn legitimate agent behaviors and input patterns. From this phase, it derives adaptive policies that regulate tool calls made by the agent, guided by both real-time input context and the control flow dependencies of multi-step agent actions. Evaluation across two real-world AI agent applications demonstrates that AgentGuardian effectively detects malicious or misleading inputs while preserving normal agent functionality. Moreover, its control-flow-based governance mechanism mitigates hallucination-driven errors and other orchestration-level malfunctions.
|
https://arxiv.org/abs/2601.10440
|
Academic Papers
|
svg
|
65746ebdc47348287a6d1755f80dacc3731e45db52c0e1359aa34f3890216e28
|
2026-01-16T00:00:00-05:00
|
Subjective evaluation of UHD video coded using VVC with LCEVC and ML-VVC
|
arXiv:2601.10448v1 Announce Type: new Abstract: This paper presents the results of a subjective quality assessment of a multilayer video coding configuration in which Low Complexity Enhancement Video Coding (LCEVC) is applied as an enhancement layer on top of a Versatile Video Coding (VVC) base layer. The evaluation follows the same test methodology and conditions previously defined for MPEG multilayer video coding assessments, with the LCEVC enhancement layer encoded using version 8.1 of the LCEVC Test Model (LTM). The test compares reconstructed UHD output generated from an HD VVC base layer with LCEVC enhancement against two reference cases: upsampled VVC base layer decoding and multilayer VVC (ML-VVC). Two operating points are considered, corresponding to enhancement layers representing approximately 10% and 50% of the total bitrate. Subjective assessment was conducted using the Degradation Category Rating (DCR) methodology with twenty-five participants, across a dataset comprising fifteen SDR and HDR sequences. The reported results include Mean Opinion Scores (MOS) with associated 95% confidence intervals, enabling comparison of perceptual quality across coding approaches and operating points within the defined test scope.
|
https://arxiv.org/abs/2601.10448
|
Academic Papers
|
svg
|
817eaa315745c37acb9e883e1b43f4174cdcd54d8c8c28c07e6d4612b2a3ac4f
|
2026-01-16T00:00:00-05:00
|
Lunar-G2R: Geometry-to-Reflectance Learning for High-Fidelity Lunar BRDF Estimation
|
arXiv:2601.10449v1 Announce Type: new Abstract: We address the problem of estimating realistic, spatially varying reflectance for complex planetary surfaces such as the lunar regolith, which is critical for high-fidelity rendering and vision-based navigation. Existing lunar rendering pipelines rely on simplified or spatially uniform BRDF models whose parameters are difficult to estimate and fail to capture local reflectance variations, limiting photometric realism. We propose Lunar-G2R, a geometry-to-reflectance learning framework that predicts spatially varying BRDF parameters directly from a lunar digital elevation model (DEM), without requiring multi-view imagery, controlled illumination, or dedicated reflectance-capture hardware at inference time. The method leverages a U-Net trained with differentiable rendering to minimize photometric discrepancies between real orbital images and physically based renderings under known viewing and illumination geometry. Experiments on a geographically held-out region of the Tycho crater show that our approach reduces photometric error by 38% compared to a state-of-the-art baseline, while achieving higher PSNR and SSIM and improved perceptual similarity, capturing fine-scale reflectance variations absent from spatially uniform models. To our knowledge, this is the first method to infer a spatially varying reflectance model directly from terrain geometry.
|
https://arxiv.org/abs/2601.10449
|
Academic Papers
|
svg
|
4daefb2b7d9b2585be3db0caa6abcccafaae728d2db96fc1a2d483e52034d7b8
|
2026-01-16T00:00:00-05:00
|
Energy-Efficient Probabilistic Semantic Communication Over Visible Light Networks With Rate Splitting
|
arXiv:2601.10452v1 Announce Type: new Abstract: Visible light communication (VLC) is emerging as a key technology for future wireless communication systems due to its unique physical-layer advantages over traditional radio-frequency (RF)-based systems. However, its integration with higher-layer techniques, such as semantic communication, remains underexplored. This paper investigates the energy efficiency maximization problem in a resource-constrained VLC-based probabilistic semantic communication (PSCom) system. In the considered model, light-emitting diode (LED) transmitters perform semantic compression to reduce data size, which incurs additional computation overhead. The compressed semantic information is transmitted to the users for semantic inference using a shared knowledge base that requires periodic updates to ensure synchronization. In the PSCom system, the knowledge base is represented by probabilistic graphs. To enable simultaneous transmission of both knowledge and information data, rate splitting multiple access (RSMA) is employed. The optimization problem focuses on maximizing energy efficiency by jointly optimizing transmit beamforming, direct current (DC) bias, common rate allocation, and semantic compression ratio, while accounting for both communication and computation costs. To solve this problem, an alternating optimization algorithm based on successive convex approximation (SCA) and Dinkelbach method is developed. Simulation results demonstrate the effectiveness of the proposed approach.
|
https://arxiv.org/abs/2601.10452
|
Academic Papers
|
svg
|
555439042609c26558470e27502bad75b3039505fc57ac7b1a41063d372d46fb
|
2026-01-16T00:00:00-05:00
|
Stable Differentiable Modal Synthesis for Learning Nonlinear Dynamics
|
arXiv:2601.10453v1 Announce Type: new Abstract: Modal methods are a long-standing approach to physical modelling synthesis. Extensions to nonlinear problems are possible, including the case of a high-amplitude vibration of a string. A modal decomposition leads to a densely coupled nonlinear system of ordinary differential equations. Recent work in scalar auxiliary variable techniques has enabled construction of explicit and stable numerical solvers for such classes of nonlinear systems. On the other hand, machine learning approaches (in particular neural ordinary differential equations) have been successful in modelling nonlinear systems automatically from data. In this work, we examine how scalar auxiliary variable techniques can be combined with neural ordinary differential equations to yield a stable differentiable model capable of learning nonlinear dynamics. The proposed approach leverages the analytical solution for linear vibration of system's modes so that physical parameters of a system remain easily accessible after the training without the need for a parameter encoder in the model architecture. As a proof of concept, we generate synthetic data for the nonlinear transverse vibration of a string and show that the model can be trained to reproduce the nonlinear dynamics of the system. Sound examples are presented.
|
https://arxiv.org/abs/2601.10453
|
Academic Papers
|
svg
|
b24a0a877f513411207fd3fc021f55484750c03e7ed1c41d6e1a36ca7ae17f06
|
2026-01-16T00:00:00-05:00
|
SurgGoal: Rethinking Surgical Planning Evaluation via Goal-Satisfiability
|
arXiv:2601.10455v1 Announce Type: new Abstract: Surgical planning integrates visual perception, long-horizon reasoning, and procedural knowledge, yet it remains unclear whether current evaluation protocols reliably assess vision-language models (VLMs) in safety-critical settings. Motivated by a goal-oriented view of surgical planning, we define planning correctness via phase-goal satisfiability, where plan validity is determined by expert-defined surgical rules. Based on this definition, we introduce a multicentric meta-evaluation benchmark with valid procedural variations and invalid plans containing order and content errors. Using this benchmark, we show that sequence similarity metrics systematically misjudge planning quality, penalizing valid plans while failing to identify invalid ones. We therefore adopt a rule-based goal-satisfiability metric as a high-precision meta-evaluation reference to assess Video-LLMs under progressively constrained settings, revealing failures due to perception errors and under-constrained reasoning. Structural knowledge consistently improves performance, whereas semantic guidance alone is unreliable and benefits larger models only when combined with structural constraints.
|
https://arxiv.org/abs/2601.10455
|
Academic Papers
|
svg
|
b54afb1afe42ef57cfce1e48fb2e39ee421d63c98a3bbd5d2a072746c2cfd9ad
|
2026-01-16T00:00:00-05:00
|
NSR-Boost: A Neuro-Symbolic Residual Boosting Framework for Industrial Legacy Models
|
arXiv:2601.10457v1 Announce Type: new Abstract: Although Gradient Boosted Decision Trees (GBDTs) dominate industrial tabular applications, upgrading legacy models in high-concurrency production environments still faces prohibitive retraining costs and systemic risks. To address this problem, we present NSR-Boost, a neuro-symbolic residual boosting framework designed specifically for industrial scenarios. Its core advantage lies in being "non-intrusive". It treats the legacy model as a frozen model and performs targeted repairs on "hard regions" where predictions fail. The framework comprises three key stages: first, finding hard regions through residuals; then generating interpretable experts by synthesizing symbolic code structures with a Large Language Model (LLM) and fine-tuning their parameters via Bayesian optimization; and finally dynamically integrating experts with legacy model output through a lightweight aggregator. We report on the successful deployment of NSR-Boost within the core financial risk control system at Qfin Holdings. This framework not only significantly outperforms state-of-the-art (SOTA) baselines across six public datasets and one private dataset, but more importantly shows excellent performance gains on real-world online data. In conclusion, it effectively captures long-tail risks missed by traditional models and offers a safe, low-cost evolutionary paradigm for industry.
|
https://arxiv.org/abs/2601.10457
|
Academic Papers
|
svg
|
39c8f04c55e639e96d5e5676b758d64c99772d97a112db6eb57e4f23983f96a4
|
2026-01-16T00:00:00-05:00
|
LangLasso: Interactive Cluster Descriptions through LLM Explanation
|
arXiv:2601.10458v1 Announce Type: new Abstract: Dimensionality reduction is a powerful technique for revealing structure and potential clusters in data. However, as the axes are complex, non-linear combinations of features, they often lack semantic interpretability. Existing visual analytics (VA) methods support cluster interpretation through feature comparison and interactive exploration, but they require technical expertise and intense human effort. We present LangLasso, a novel method that complements VA approaches through interactive, natural language descriptions of clusters using large language models (LLMs). It produces human-readable descriptions that make cluster interpretation accessible to non-experts and allow integration of external contextual knowledge beyond the dataset. We systematically evaluate the reliability of these explanations and demonstrate that LangLasso provides an effective first step for engaging broader audiences in cluster interpretation. The tool is available at https://langlasso.vercel.app
|
https://arxiv.org/abs/2601.10458
|
Academic Papers
|
svg
|
6809f5f8a325acbb0f3c2b93480288b04544dae28d8d2504ea544f739f8e04b7
|
2026-01-16T00:00:00-05:00
|
Contextual StereoSet: Stress-Testing Bias Alignment Robustness in Large Language Models
|
arXiv:2601.10460v1 Announce Type: new Abstract: A model that avoids stereotypes in a lab benchmark may not avoid them in deployment. We show that measured bias shifts dramatically when prompts mention different places, times, or audiences -- no adversarial prompting required. We introduce Contextual StereoSet, a benchmark that holds stereotype content fixed while systematically varying contextual framing. Testing 13 models across two protocols, we find striking patterns: anchoring to 1990 (vs. 2030) raises stereotype selection in all models tested on this contrast (p<0.05); gossip framing raises it in 5 of 6 full-grid models; out-group observer framing shifts it by up to 13 percentage points. These effects replicate in hiring, lending, and help-seeking vignettes. We propose Context Sensitivity Fingerprints (CSF): a compact profile of per-dimension dispersion and paired contrasts with bootstrap CIs and FDR correction. Two evaluation tracks support different use cases -- a 360-context diagnostic grid for deep analysis and a budgeted protocol covering 4,229 items for production screening. The implication is methodological: bias scores from fixed-condition tests may not generalize. This is not a claim about ground-truth bias rates; it is a stress test of evaluation robustness. CSF forces evaluators to ask, "Under what conditions does bias appear?" rather than "Is this model biased?" We release our benchmark, code, and results.
|
https://arxiv.org/abs/2601.10460
|
Academic Papers
|
svg
|
d0e53fc6e14405349886fa9d63c1310534b53c2eb4f3d19ff01a667de9efe2de
|
2026-01-16T00:00:00-05:00
|
ChartComplete: A Taxonomy-based Inclusive Chart Dataset
|
arXiv:2601.10462v1 Announce Type: new Abstract: With advancements in deep learning (DL) and computer vision techniques, the field of chart understanding is evolving rapidly. In particular, multimodal large language models (MLLMs) are proving to be efficient and accurate in understanding charts. To accurately measure the performance of MLLMs, the research community has developed multiple datasets to serve as benchmarks. By examining these datasets, we found that they are all limited to a small set of chart types. To bridge this gap, we propose the ChartComplete dataset. The dataset is based on a chart taxonomy borrowed from the visualization community, and it covers thirty different chart types. The dataset is a collection of classified chart images and does not include a learning signal. We present the ChartComplete dataset as is to the community to build upon it.
|
https://arxiv.org/abs/2601.10462
|
Academic Papers
|
svg
|
c5951e960188de4006cb1a3aea844cc40a01dfaffda0bfd214f8fcecff6e9800
|
2026-01-16T00:00:00-05:00
|
Architectural Classification of XR Workloads: Cross-Layer Archetypes and Implications
|
arXiv:2601.10463v1 Announce Type: new Abstract: Edge and mobile platforms for augmented and virtual reality, collectively referred to as extended reality (XR) must deliver deterministic ultra-low-latency performance under stringent power and area constraints. However, the diversity of XR workloads is rapidly increasing, characterized by heterogeneous operator types and complex dataflow structures. This trend poses significant challenges to conventional accelerator architectures centered around convolutional neural networks (CNNs), resulting in diminishing returns for traditional compute-centric optimization strategies. Despite the importance of this problem, a systematic architectural understanding of the full XR pipeline remains lacking. In this paper, we present an architectural classification of XR workloads using a cross-layer methodology that integrates model-based high-level design space exploration (DSE) with empirical profiling on commercial GPU and CPU hardware. By analyzing a representative set of workloads spanning 12 distinct XR kernels, we distill their complex architectural characteristics into a small set of cross-layer workload archetypes (e.g., capacity-limited and overhead-sensitive). Building on these archetypes, we further extract key architectural insights and provide actionable design guidelines for next-generation XR SoCs. Our study highlights that XR architecture design must shift from generic resource scaling toward phase-aware scheduling and elastic resource allocation in order to achieve greater energy efficiency and high performance in future XR systems.
|
https://arxiv.org/abs/2601.10463
|
Academic Papers
|
svg
|
bcafba33e3e8c4e817eab5c499b8b15d3bd764954dbccdac3398e69b471a6c72
|
2026-01-16T00:00:00-05:00
|
AI Sycophancy: How Users Flag and Respond
|
arXiv:2601.10467v1 Announce Type: new Abstract: While concerns about LLM sycophancy have grown among researchers and developers, how users themselves experience this behavior remains largely unexplored. We analyze Reddit discussions to investigate how users detect, mitigate, and perceive sycophantic AI. We develop the ODR Framework that maps user experiences across three stages: observing sycophantic behaviors, detecting sycophancy, and responding to these behaviors. Our findings reveal that users employ various detection techniques, including cross-platform comparison and inconsistency testing. We document diverse mitigation approaches, ranging from persona-based prompts to specific language patterns in prompt engineering. We find sycophancy's effects are context-dependent rather than universally harmful. Specifically, vulnerable populations experiencing trauma, mental health challenges, or isolation actively seek and value sycophantic behaviors as emotional support. Users develop both technical and folk explanations for why sycophancy occurs. These findings challenge the assumption that sycophancy should be eliminated universally. We conclude by proposing context-aware AI design that balances the risks with the benefits of affirmative interaction, while discussing implications for user education and transparency.
|
https://arxiv.org/abs/2601.10467
|
Academic Papers
|
svg
|
7b9e8c95babee3e0f5ef99124bbca87cabcf5df3ab9ad25a7256b5fea4262c2f
|
2026-01-16T00:00:00-05:00
|
Job Anxiety in Post-Secondary Computer Science Students Caused by Artificial Intelligence
|
arXiv:2601.10468v1 Announce Type: new Abstract: The emerging widespread usage of AI has led to industry adoption to improve efficiency and increase earnings. However, a major consequence of this is AI displacing employees from their jobs, leading to feelings of job insecurity and uncertainty. This is especially true for computer science students preparing to enter the workforce. To investigate this, we performed semi-structured interviews with (n = 25) students across computer science undergraduate and graduate programs at the University of Toronto to determine the extent of job replacement anxiety. Through thematic analysis, it was determined that computer science students indeed face stress and anxiety from AI displacement of jobs, leading to different strategies of managing pressure. Subfields such as software engineering and web development are strongly believed to be vulnerable to displacement, while specialized subfields like quantum computing and AI research are deemed more secure. Many students feel compelled to upskill by using more AI technologies, taking AI courses, and specializing in AI through graduate school. Some students also reskill by pursuing other fields of study seen as less vulnerable to AI displacement. Finally, international students experience additional job replacement anxiety because of pressure to secure permanent residence. Implications of these findings include feelings of low security in computer science careers, oversaturation of computer science students pursuing AI, and potential dissuasion of future university students from pursuing computer science.
|
https://arxiv.org/abs/2601.10468
|
Academic Papers
|
svg
|
fded440f3295d5eb14ab2072fab15891e91a809579c8ed55787df5c9c5ac3980
|
2026-01-16T00:00:00-05:00
|
Joint Source-Channel Coding for ISAC: Distortion Tradeoffs and Separation Theorems
|
arXiv:2601.10470v1 Announce Type: new Abstract: Integrated Sensing and Communication (ISAC) systems have garnered significant attention due to their capability to simultaneously achieve efficient communication and environmental sensing. A core objective in this field is characterizing the performance tradeoff between sensing and communication. In this paper, we consider a joint source-channel coding (JSCC) framework for the ISAC system that consists of a transmitter with a channel state estimator and a joint source-channel encoder, a state-dependent memoryless channel, and a receiver with a joint source-channel decoder. From an information-theoretic perspective, we establish the tradeoff relationships among channel capacity, distortions in both communication and sensing processes, and the estimation cost. We prove that the separate source and channel coding can achieve joint optimality in this setting. An illustrative example of a binary setting is also provided to validate our theoretical results.
|
https://arxiv.org/abs/2601.10470
|
Academic Papers
|
svg
|
ccdf2308581084ca7cdf9444e5e23b5a5bcedfe151fc3503f67b1586f18ec11e
|
2026-01-16T00:00:00-05:00
|
DeFlow: Decoupling Manifold Modeling and Value Maximization for Offline Policy Extraction
|
arXiv:2601.10471v1 Announce Type: new Abstract: We present DeFlow, a decoupled offline RL framework that leverages flow matching to faithfully capture complex behavior manifolds. Optimizing generative policies is computationally prohibitive, typically necessitating backpropagation through ODE solvers. We address this by learning a lightweight refinement module within an explicit, data-derived trust region of the flow manifold, rather than sacrificing the iterative generation capability via single-step distillation. This way, we bypass solver differentiation and eliminate the need for balancing loss terms, ensuring stable improvement while fully preserving the flow's iterative expressivity. Empirically, DeFlow achieves superior performance on the challenging OGBench benchmark and demonstrates efficient offline-to-online adaptation.
|
https://arxiv.org/abs/2601.10471
|
Academic Papers
|
svg
|
a0afb9d9e589378839c54b8a2349c54541936586452e14c2c3416b8f9dadd132
|
2026-01-16T00:00:00-05:00
|
Optimal error estimates for a discontinuous Galerkin method on curved boundaries with polygonal meshes
|
arXiv:2601.10474v1 Announce Type: new Abstract: We consider a discontinuous Galerkin method for the numerical solution of boundary value problems in two-dimensional domains with curved boundaries. A key challenge in this setting is the potential loss of convergence order due to approximating the physical domain by a polygonal mesh. Unless boundary conditions can be accurately transferred from the true boundary to the computational one, such geometric approximation errors generally lead to suboptimal convergence. To overcome this limitation, a higher-order strategy based on polynomial reconstruction of boundary data was introduced for classical finite element methods in [28, 29] and in the finite volume context in [7, 11]. More recently, this approach was extended to discontinuous Galerkin methods in [32], leading to the DG-ROD method, which restores optimal convergence rates on polygonal approximations of domains with curved boundaries. In this work, we provide a rigorous theoretical analysis of the DG-ROD method, establishing existence and uniqueness of the discrete solution and deriving error estimates for a two-dimensional linear advection-diffusion-reaction problem with homogeneous Dirichlet boundary conditions on both convex and non-convex domains. Following and extending techniques from classical finite element methods [29], we prove that, under suitable regularity assumptions on the exact solution, the DG-ROD method achieves optimal convergence despite polygonal approximations. Finally, we illustrate and confirm the theoretical results with a numerical benchmark.
|
https://arxiv.org/abs/2601.10474
|
Academic Papers
|
svg
|
88ad1d2cdd5d3914527e6e090030bbad53e9ce95d61ddda68521b254cb5640d7
|
2026-01-16T00:00:00-05:00
|
Urban Socio-Semantic Segmentation with Vision-Language Reasoning
|
arXiv:2601.10477v1 Announce Type: new Abstract: As hubs of human activity, urban surfaces consist of a wealth of semantic entities. Segmenting these various entities from satellite imagery is crucial for a range of downstream applications. Current advanced segmentation models can reliably segment entities defined by physical attributes (e.g., buildings, water bodies) but still struggle with socially defined categories (e.g., schools, parks). In this work, we achieve socio-semantic segmentation by vision-language model reasoning. To facilitate this, we introduce the Urban Socio-Semantic Segmentation dataset named SocioSeg, a new resource comprising satellite imagery, digital maps, and pixel-level labels of social semantic entities organized in a hierarchical structure. Additionally, we propose a novel vision-language reasoning framework called SocioReasoner that simulates the human process of identifying and annotating social semantic entities via cross-modal recognition and multi-stage reasoning. We employ reinforcement learning to optimize this non-differentiable process and elicit the reasoning capabilities of the vision-language model. Experiments demonstrate our approach's gains over state-of-the-art models and strong zero-shot generalization. Our dataset and code are available in https://github.com/AMAP-ML/SocioReasoner.
|
https://arxiv.org/abs/2601.10477
|
Academic Papers
|
svg
|
bbf03eb33065b6af2ac734e00c2225db63deac9cdccdcab4d1a4648bc0b9ea59
|
2026-01-16T00:00:00-05:00
|
A Construction Framework of Coded Caching Scheme for Multi-Access MIMO Systems via Knapsack Problem
|
arXiv:2601.10484v1 Announce Type: new Abstract: This paper investigates the coded caching problem in a multi-access multiple-input single-output (MAMISO) network with the combinatorial topology. The considered system consists of a server containing $N$ files, $\Lambda$ cache nodes, and $K$ cache-less users, where each user can access a unique subset of $r$ cache nodes. The server is equipped with $L$ transmit antennas. Our objective is to design a caching scheme that simultaneously achieves a high sum Degree of Freedom (sum-DoF) and low subpacketization complexity. To address this challenge, we formulate the design of multi-antenna placement delivery arrays (MAPDA) as a $0$--$1$ knapsack problem to maximize the achievable DoF, thereby transforming the complex combinatorial caching structure into a tractable optimization framework that yields efficient cache placement and flexible delivery strategies. Theoretical and numerical analyses demonstrate that: for networks with combinatorial topologies, the proposed scheme achieves a higher sum-DoF than existing schemes. Under identical cache size constraints, the subpacketization level remains comparable to existing linear subpacketization schemes. Moreover, under specific system conditions, the proposed scheme attains the theoretical maximum sum-DoF of $\min\{L+KM/N, K\}$ while achieving further reductions in subpacketization. For particular combinatorial structures, we further derive optimized constructions that achieve even higher sum-DoF with lower subpacketization.
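The quoted maximum sum-DoF, $\min\{L+KM/N, K\}$, is a closed-form bound that can be evaluated directly; a trivial sketch using the abstract's own symbols ($L$ antennas, $K$ users, per-cache size $M$ out of $N$ files):

```python
def max_sum_dof(L, K, M, N):
    """Theoretical maximum sum-DoF min(L + K*M/N, K) from the
    abstract: L transmit antennas, K cache-less users, each
    cache holding M of the server's N files."""
    return min(L + K * M / N, K)
```

The bound saturates at $K$ once the antenna count plus the aggregate caching gain $KM/N$ covers all users, which is the regime where the proposed scheme is stated to be optimal.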
|
https://arxiv.org/abs/2601.10484
|
Academic Papers
|
svg
|
36d93daf20f9556679d903f7b3ffb5b5af5efb9eba2ce61102303d90ddde2629
|
2026-01-16T00:00:00-05:00
|
Panning for Gold: Expanding Domain-Specific Knowledge Graphs with General Knowledge
|
arXiv:2601.10485v1 Announce Type: new Abstract: Domain-specific knowledge graphs (DKGs) often lack coverage compared to general knowledge graphs (GKGs). To address this, we introduce Domain-specific Knowledge Graph Fusion (DKGF), a novel task that enriches DKGs by integrating relevant facts from GKGs. DKGF faces two key challenges: high ambiguity in domain relevance and misalignment in knowledge granularity across graphs. We propose ExeFuse, a simple yet effective Fact-as-Program paradigm. It treats each GKG fact as a latent semantic program, maps abstract relations to granularity-aware operators, and verifies domain relevance via program executability on the target DKG. This unified probabilistic framework jointly resolves relevance and granularity issues. We construct two benchmarks, DKGF(W-I) and DKGF(Y-I), with 21 evaluation configurations. Extensive experiments validate the task's importance and our model's effectiveness, providing the first standardized testbed for DKGF.
|
https://arxiv.org/abs/2601.10485
|
Academic Papers
|
svg
|
006f1796f59c83c70d141504d4a4a7a302fd5c0cf4145007770f028b0089d25a
|
2026-01-16T00:00:00-05:00
|
Communication-Efficient Federated Learning by Exploiting Spatio-Temporal Correlations of Gradients
|
arXiv:2601.10491v1 Announce Type: new Abstract: Communication overhead is a critical challenge in federated learning, particularly in bandwidth-constrained networks. Although many methods have been proposed to reduce communication overhead, most focus solely on compressing individual gradients, overlooking the temporal correlations among them. Prior studies have shown that gradients exhibit spatial correlations, typically reflected in low-rank structures. Through empirical analysis, we further observe a strong temporal correlation between client gradients across adjacent rounds. Based on these observations, we propose GradESTC, a compression technique that exploits both spatial and temporal gradient correlations. GradESTC exploits spatial correlations to decompose each full gradient into a compact set of basis vectors and corresponding combination coefficients. By exploiting temporal correlations, only a small portion of the basis vectors need to be dynamically updated in each round. GradESTC significantly reduces communication overhead by transmitting lightweight combination coefficients and a limited number of updated basis vectors instead of the full gradients. Extensive experiments show that, upon reaching a target accuracy level near convergence, GradESTC reduces uplink communication by an average of 39.79% compared to the strongest baseline, while maintaining comparable convergence speed and final accuracy to uncompressed FedAvg. By effectively leveraging spatio-temporal gradient structures, GradESTC offers a practical and scalable solution for communication-efficient federated learning.
|
https://arxiv.org/abs/2601.10491
|
Academic Papers
|
svg
|
5a87650646d914e4429f7c85f5fd6546b9cbeba1a01cc45b46a0b9d260afda81
|
2026-01-16T00:00:00-05:00
|
Model See, Model Do? Exposure-Aware Evaluation of Bug-vs-Fix Preference in Code LLMs
|
arXiv:2601.10496v1 Announce Type: new Abstract: Large language models are increasingly used for code generation and debugging, but their outputs can still contain bugs that originate from training data. Whether an LLM prefers correct code or a familiar incorrect version may be influenced by what it was exposed to during training. We introduce an exposure-aware evaluation framework that quantifies how prior exposure to buggy versus fixed code influences a model's preference. Using the ManySStuBs4J benchmark, we apply Data Portraits for membership testing on the Stack-V2 corpus to estimate whether each buggy and fixed variant was seen during training. We then stratify examples by exposure and compare model preference using code completion as well as multiple likelihood-based scoring metrics. We find that most examples (67%) have neither variant in the training data, and when only one is present, fixes are more frequently present than bugs. In model generations, models reproduce buggy lines far more often than fixes, with bug-exposed examples amplifying this tendency and fix-exposed examples showing only marginal improvement. In likelihood scoring, minimum and maximum token-probability metrics consistently prefer the fixed code across all conditions, indicating a stable bias toward correct fixes. In contrast, metrics like the Gini coefficient reverse preference when only the buggy variant was seen. Our results indicate that exposure can skew bug-fix evaluations and highlight the risk that LLMs may propagate memorised errors in practice.
|
https://arxiv.org/abs/2601.10496
|
Academic Papers
|
svg
|
094f6d42371bed7938bc04a976133ad78e69377c37f9caedd37559268238445b
|
2026-01-16T00:00:00-05:00
|
mergetune: Continued fine-tuning of vision-language models
|
arXiv:2601.10497v1 Announce Type: new Abstract: Fine-tuning vision-language models (VLMs) such as CLIP often leads to catastrophic forgetting of pretrained knowledge. Prior work primarily aims to mitigate forgetting during adaptation; however, forgetting often remains inevitable during this process. We introduce a novel paradigm, \emph{continued fine-tuning (CFT)}, which seeks to recover pretrained knowledge after a zero-shot model has already been adapted. We propose a simple, model-agnostic CFT strategy (named MERGETUNE) guided by linear mode connectivity (LMC), which can be applied post hoc to existing fine-tuned models without requiring architectural changes. Given a fine-tuned model, we continue fine-tuning its trainable parameters (e.g., soft prompts or linear heads) to search for a continued model which has two low-loss paths to the zero-shot (e.g., CLIP) and the fine-tuned (e.g., CoOp) solutions. By exploiting the geometry of the loss landscape, the continued model implicitly merges the two solutions, restoring pretrained knowledge lost in the fine-tuned counterpart. A challenge is that the vanilla LMC constraint requires data replay from the pretraining task. We approximate this constraint for the zero-shot model via a second-order surrogate, eliminating the need for large-scale data replay. Experiments show that MERGETUNE improves the harmonic mean of CoOp by +5.6\% on base-novel generalisation without adding parameters. We show \emph{for the first time} superior performance over CLIP on both DTD and EuroSAT on cross-dataset transfer. On robust fine-tuning evaluations, the LMC-merged model from MERGETUNE surpasses ensemble baselines with lower inference cost, achieving further gains and state-of-the-art results when ensembled with the zero-shot model. Our code is available at \href{https://github.com/Surrey-UP-Lab/MERGETUNE}{https://github.com/Surrey-UP-Lab/MERGETUNE}.
|
https://arxiv.org/abs/2601.10497
|
Academic Papers
|
svg
|
c7e6263aa4d061580dfea4ddf4885a4c77851b437d5a19d22f044d60600e6abf
|
2026-01-16T00:00:00-05:00
|
Projected Microbatch Accumulation yields reference-free proximal policy updates for reinforcement learning
|
arXiv:2601.10498v1 Announce Type: new Abstract: This note introduces Projected Microbatch Accumulation (PROMA), a proximal policy update method for large language model fine-tuning. PROMA accumulates policy gradients across microbatches by projecting out sequence-wise gradient components before microbatch aggregation. The projection is applied layer-wise during the backward pass, enabling efficient implementation without additional forward or backward passes. Empirically, PROMA enforces tighter control of local KL divergence than GRPO, resulting in more stable policy learning. Unlike PPO and GRPO, PROMA achieves proximal updates without inducing entropy collapse and does not rely on a reference policy or likelihood-ratio clipping.
|
https://arxiv.org/abs/2601.10498
|
Academic Papers
|
svg
|
c083d3d650fa51d6278fe4eac25da9dca95d624187d6d69c9eb659079d0fe678
|
2026-01-16T00:00:00-05:00
|
Higher order trade-offs in hypergraph community detection
|
arXiv:2601.10502v1 Announce Type: new Abstract: Extending community detection from pairwise networks to hypergraphs introduces fundamental theoretical challenges. Hypergraphs exhibit structural heterogeneity with no direct graph analogue: hyperedges of varying orders can connect nodes across communities in diverse configurations, introducing new trade-offs in defining and detecting community structure. We address these challenges by developing a unified framework for community detection in non-uniform hypergraphs under the Hypergraph Stochastic Block Model. We introduce a general signal-to-noise ratio that enables a quantitative analysis of trade-offs unique to higher-order networks, such as which hyperedges we choose to split across communities and how we choose to split them. Building on this framework, we derive a Bethe Hessian operator for non-uniform hypergraphs that provides efficient spectral clustering with principled model selection. We characterize the resulting spectral detectability threshold and compare it to belief propagation limits, showing the methods coincide for uniform hypergraphs but diverge in non-uniform settings. Synthetic experiments confirm our analytical predictions and reveal systematic biases toward preserving higher-order and balanced-shape hyperedges. Application to empirical data demonstrates the practical relevance of these higher-order detectability trade-offs in real-world systems.
|
https://arxiv.org/abs/2601.10502
|
Academic Papers
|
svg
|
ad479dd705b36bf3fc0f748cc7cf48e55bd1f5f5b03dd62343c6f1aee739f3da
|
2026-01-16T00:00:00-05:00
|
Coded Caching for Combinatorial Multi-Access Hotplug Networks from $t$-Designs
|
arXiv:2601.10503v1 Announce Type: new Abstract: We study hotplug coded caching in combinatorial multi-access networks, which generalizes existing hotplug coded caching models by allowing users to access multiple caches, while only a subset of caches is online during the delivery phase. We first generalize the Hotplug Placement Delivery Array (HpPDA) framework to the combinatorial multi-access setting. Based on this generalized framework, we propose a $t$-design-based coded caching scheme for combinatorial multi-access networks. We characterize a class of design parameters under which every active user has access to a sufficient number of coded subfiles to decode its requested file, and show that appropriate parameter choices allow for the elimination of redundant multicast transmissions. As a result, the proposed scheme achieves a family of rate-memory trade-offs with flexible subpacketization. We present numerical comparisons illustrating that the proposed $t$-design-based scheme outperforms existing hotplug coded caching schemes in certain memory regimes.
|
https://arxiv.org/abs/2601.10503
|
Academic Papers
|
svg
|
bc2406e1f621ce06c20fa6aa83da7542d1d78808c285397ca0c064a89b3be34d
|
2026-01-16T00:00:00-05:00
|
DR-Arena: an Automated Evaluation Framework for Deep Research Agents
|
arXiv:2601.10504v1 Announce Type: new Abstract: As Large Language Models (LLMs) increasingly operate as Deep Research (DR) Agents capable of autonomous investigation and information synthesis, reliable evaluation of their task performance has become a critical bottleneck. Current benchmarks predominantly rely on static datasets, which suffer from several limitations: limited task generality, temporal misalignment, and data contamination. To address these, we introduce DR-Arena, a fully automated evaluation framework that pushes DR agents to their capability limits through dynamic investigation. DR-Arena constructs real-time Information Trees from fresh web trends to ensure the evaluation rubric is synchronized with the live world state, and employs an automated Examiner to generate structured tasks testing two orthogonal capabilities: Deep reasoning and Wide coverage. DR-Arena further adopts Adaptive Evolvement Loop, a state-machine controller that dynamically escalates task complexity based on real-time performance, demanding deeper deduction or wider aggregation until a decisive capability boundary emerges. Experiments with six advanced DR agents demonstrate that DR-Arena achieves a Spearman correlation of 0.94 with the LMSYS Search Arena leaderboard. This represents the state-of-the-art alignment with human preferences without any manual efforts, validating DR-Arena as a reliable alternative for costly human adjudication.
|
https://arxiv.org/abs/2601.10504
|
Academic Papers
|
svg
|
2dd55060616917b4d4ef2560f579e3892216197956b9bf8f797a89c80afdfef3
|
2026-01-16T00:00:00-05:00
|
A New Construction Structure on Coded Caching with Linear Subpacketization: Non-Half-Sum Latin Rectangle
|
arXiv:2601.10505v1 Announce Type: new Abstract: Coded caching is recognized as an effective method for alleviating network congestion during peak periods by leveraging local caching and coded multicasting gains. The key challenge in designing coded caching schemes lies in simultaneously achieving low subpacketization and low transmission load. Most existing schemes require exponential or polynomial subpacketization levels, while some linear subpacketization schemes often result in excessive transmission load. Recently, Cheng et al. proposed a construction framework for linear coded caching schemes called Non-Half-Sum Disjoint Packing (NHSDP), where the subpacketization equals the number of users $K$. This paper introduces a novel combinatorial structure, termed the Non-Half-Sum Latin Rectangle (NHSLR), which extends the framework of linear coded caching schemes from $F=K$ (i.e., the construction via NHSDP) to a broader scenario with $F=\mathcal{O}(K)$. By constructing NHSLRs, we obtain a new class of coded caching schemes that achieves linearly scalable subpacketization, while further reducing the transmission load compared with the NHSDP scheme. Theoretical and numerical analyses demonstrate that the proposed schemes not only achieve lower transmission load than existing linear subpacketization schemes but also approach the performance of certain exponential subpacketization schemes.
|
https://arxiv.org/abs/2601.10505
|
Academic Papers
|
svg
|