| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| ed39c0f08e640397fc1525c49b649c7f4f2bc46040fda4f0140dbabef84a293b | 2026-01-13T00:00:00-05:00 | How to predict creativity ratings from written narratives: A comparison of co-occurrence and textual forma mentis networks | arXiv:2601.07327v1 Announce Type: new Abstract: This tutorial paper provides a step-by-step workflow for building and analysing semantic networks from short creative texts. We introduce and compare two widely used text-to-network approaches: word co-occurrence networks and textual forma mentis networks (TFMNs). We also demonstrate how they can be used in machine learning to predict human creativity ratings. Using a corpus of 1029 short stories, we guide readers through text preprocessing, network construction, feature extraction (structural measures, spreading-activation indices, and emotion scores), and application of regression models. We evaluate how network-construction choices influence both network topology and predictive performance. Across all modelling settings, TFMNs consistently outperformed co-occurrence networks through lower prediction errors (best MAE = 0.581 for TFMN, vs 0.592 for co-occurrence with window size 3). Network-structural features dominated predictive performance (MAE = 0.591 for TFMN), whereas emotion features performed worse (MAE = 0.711 for TFMN) and spreading-activation measures contributed little (MAE = 0.788 for TFMN). This paper offers practical guidance for researchers interested in applying network-based methods in cognitive fields like creativity research. We show when syntactic networks are preferable to surface co-occurrence models, and provide an open, reproducible workflow accessible to newcomers in the field, while also offering deeper methodological insight for experienced researchers. | https://arxiv.org/abs/2601.07327 | Academic Papers | svg |
| 23f946fdf85e4288cdbdd85708838778b9dcd7eec62872f46b846e2af6a13dbd | 2026-01-13T00:00:00-05:00 | BayesRAG: Probabilistic Mutual Evidence Corroboration for Multimodal Retrieval-Augmented Generation | arXiv:2601.07329v1 Announce Type: new Abstract: Retrieval-Augmented Generation (RAG) has become a pivotal paradigm for Large Language Models (LLMs), yet current approaches struggle with visually rich documents by treating text and images as isolated retrieval targets. Existing methods relying solely on cosine similarity often fail to capture the semantic reinforcement provided by cross-modal alignment and layout-induced coherence. To address these limitations, we propose BayesRAG, a novel multimodal retrieval framework grounded in Bayesian inference and Dempster-Shafer evidence theory. Unlike traditional approaches that rank candidates strictly by similarity, BayesRAG models the intrinsic consistency of retrieved candidates across modalities as probabilistic evidence to refine retrieval confidence. Specifically, our method computes the posterior association probability for combinations of multimodal retrieval results, prioritizing text-image pairs that mutually corroborate each other in terms of both semantics and layout. Extensive experiments demonstrate that BayesRAG significantly outperforms state-of-the-art (SOTA) methods on challenging multimodal benchmarks. This study establishes a new paradigm for multimodal retrieval fusion that effectively resolves the isolation of heterogeneous modalities through an evidence fusion mechanism and enhances the robustness of retrieval outcomes. Our code is available at https://github.com/TioeAre/BayesRAG. | https://arxiv.org/abs/2601.07329 | Academic Papers | svg |
| 082d5eceddb7562c490b1709ea596f8d20c6ddc8c9a608e231360c5c40b47bfe | 2026-01-13T00:00:00-05:00 | SEE: Signal Embedding Energy for Quantifying Noise Interference in Large Audio Language Models | arXiv:2601.07331v1 Announce Type: new Abstract: Large Audio Language Models (LALMs) have been widely applied in real-time scenarios, such as in-car assistants and online meeting comprehension. In practice, audio inputs are often corrupted by device and environmental noise, leading to performance degradation. However, existing LALM studies on noise lack quantitative analysis and rely mainly on intuition and empirical observation, thus failing to understand practical robustness. To address this issue, we introduce Signal Embedding Energy (SEE), a method for quantifying the impact of noise intensity on LALM inputs, enabling the differentiation of LALM robustness in real-world deployments. SEE introduces a perspective based on structured activation subspaces derived from the model's internal representations, which more accurately captures its perception of noise than raw audio features. Across experiments, SEE exhibits a strong correlation with LALM performance, achieving a correlation of 0.98. Surprisingly, traditional audio denoising methods are only marginally effective for LALMs, and, in some cases, even increase SEE and impair performance. This suggests a mismatch between speech-centric denoising objectives and the noise sensitivity of modern LALMs. Therefore, we propose a mitigation strategy derived from SEE to denoise LALM inputs, outperforming existing denoising methods. This paper introduces a novel metric for noise quantification in LALMs, providing guidance for robustness improvements in real-world deployments. | https://arxiv.org/abs/2601.07331 | Academic Papers | svg |
| 0bbcf84b6f7aacf77941e28e7075922f4a3f47ddf9f02084a6136d794940aa49 | 2026-01-13T00:00:00-05:00 | OSCAR: Open-Set CAD Retrieval from a Language Prompt and a Single Image | arXiv:2601.07333v1 Announce Type: new Abstract: 6D object pose estimation plays a crucial role in scene understanding for applications such as robotics and augmented reality. To support the needs of ever-changing object sets in such contexts, modern zero-shot object pose estimators were developed that do not require object-specific training but only rely on CAD models. Such models are hard to obtain once deployed, and a continuously changing and growing set of objects makes it harder to reliably identify the instance model of interest. To address this challenge, we introduce Open-Set CAD Retrieval from a Language Prompt and a Single Image (OSCAR), a novel training-free method that retrieves a matching object model from an unlabeled 3D object database. During onboarding, OSCAR generates multi-view renderings of database models and annotates them with descriptive captions using an image captioning model. At inference, GroundedSAM detects the queried object in the input image, and multi-modal embeddings are computed for both the Region-of-Interest and the database captions. OSCAR employs a two-stage retrieval: text-based filtering using CLIP identifies candidate models, followed by image-based refinement using DINOv2 to select the most visually similar object. In our experiments we demonstrate that OSCAR outperforms all state-of-the-art methods on the cross-domain 3D model retrieval benchmark MI3DOR. Furthermore, we demonstrate OSCAR's direct applicability in automating object model sourcing for 6D object pose estimation. We propose using the most similar object model for pose estimation if the exact instance is not available and show that OSCAR achieves an average precision of 90.48% during object retrieval on the YCB-V object dataset. Moreover, we demonstrate that the most similar object model can be utilized for pose estimation using Megapose, achieving better results than a reconstruction-based approach. | https://arxiv.org/abs/2601.07333 | Academic Papers | svg |
| 3881caeaa4ef0c03363c5a82fad66e1ed957e5e2dc561feb419bd3ed53a19762 | 2026-01-13T00:00:00-05:00 | Examining the Effectiveness of Transformer-Based Smart Contract Vulnerability Scan | arXiv:2601.07334v1 Announce Type: new Abstract: Smart contract technology facilitates self-executing agreements on the blockchain, eliminating dependency on an external trusted authority. However, smart contracts may expose vulnerabilities that can lead to financial losses and disruptions in decentralized applications. In this work, we evaluate deep learning-based approaches for vulnerability scanning of Ethereum smart contracts. We propose VASCOT, a Vulnerability Analyzer for Smart COntracts using Transformers, which performs sequential analysis of Ethereum Virtual Machine (EVM) bytecode and incorporates a sliding window mechanism to overcome input length constraints. To assess VASCOT's detection efficacy, we construct a dataset of 16,469 verified Ethereum contracts deployed in 2022, and annotate it using trace analysis with concrete validation to mitigate false positives. VASCOT's performance is then compared against a state-of-the-art LSTM-based vulnerability detection model on both our dataset and an older public dataset. Our findings highlight the strengths and limitations of each model, providing insights into their detection capabilities and generalizability. | https://arxiv.org/abs/2601.07334 | Academic Papers | svg |
| 371910a2dac7d43fdd864b5f79958a6d45a0441d3b5e2da47c3c25c7e1f34490 | 2026-01-13T00:00:00-05:00 | Reconstruction Guided Few-shot Network For Remote Sensing Image Classification | arXiv:2601.07335v1 Announce Type: new Abstract: Few-shot remote sensing image classification is challenging due to limited labeled samples and high variability in land-cover types. We propose a reconstruction-guided few-shot network (RGFS-Net) that enhances generalization to unseen classes while preserving consistency for seen categories. Our method incorporates a masked image reconstruction task, where parts of the input are occluded and reconstructed to encourage semantically rich feature learning. This auxiliary task strengthens spatial understanding and improves class discrimination under low-data settings. Evaluated on the EuroSAT and PatternNet datasets under 1-shot and 5-shot protocols, our approach consistently outperforms existing baselines. The proposed method is simple, effective, and compatible with standard backbones, offering a robust solution for few-shot remote sensing classification. Codes are available at https://github.com/stark0908/RGFS. | https://arxiv.org/abs/2601.07335 | Academic Papers | svg |
| 338c8a6667e77f237c58e3e2990c527f0f5cead4b650be621b97968ba6eed28a | 2026-01-13T00:00:00-05:00 | Improved lower bounds for the maximum size of Condorcet domains | arXiv:2601.07336v1 Announce Type: new Abstract: Condorcet domains are sets of linear orders with the property that, whenever voters' preferences are restricted to the domain, the pairwise majority relation (for an odd number of voters) is transitive and hence a linear order. Determining the maximum size of a Condorcet domain, sometimes under additional constraints, has been a longstanding problem in the mathematical theory of majority voting. The exact maximum is only known for $n\leq 8$ alternatives. In this paper we use a structural analysis of the largest domains for small $n$ to design a new inductive search method. Using an implementation of this method on a supercomputer, together with existing algorithms, we improve the size of the largest known domains for all $9 \leq n \leq 20$. These domains are then used in a separate construction to obtain the currently largest known domains for $21 \leq n \leq 25$, and to improve the best asymptotic lower bound for the maximum size of a Condorcet domain to $\Omega(2.198139^n)$. Finally, we discuss properties of the domains found and state several open problems and conjectures. | https://arxiv.org/abs/2601.07336 | Academic Papers | svg |
| 903ce107c1a294164973c2afd1e6ead66af62c933cd47a0b927f98f972885085 | 2026-01-13T00:00:00-05:00 | Beyond Literal Mapping: Benchmarking and Improving Non-Literal Translation Evaluation | arXiv:2601.07338v1 Announce Type: new Abstract: Large Language Models (LLMs) have significantly advanced Machine Translation (MT), enabling its application to linguistically complex domains such as social network services and literature. In these scenarios, translations often require handling non-literal expressions, leading to inaccuracies in MT metrics. To systematically investigate the reliability of MT metrics, we first curate a meta-evaluation dataset focused on non-literal translations, namely MENT. MENT encompasses four non-literal translation domains and features source sentences paired with translations from diverse MT systems, with 7,530 human-annotated scores on translation quality. Experimental results reveal the inaccuracies of traditional MT metrics and the limitations of LLM-as-a-Judge, particularly the knowledge cutoff and score inconsistency problems. To mitigate these limitations, we propose RATE, a novel agentic translation evaluation framework, centered on a reflective Core Agent that dynamically invokes specialized sub-agents. Experimental results indicate the efficacy of RATE, achieving an improvement of at least 3.2 in meta score compared with current metrics. Further experiments demonstrate the robustness of RATE in general-domain MT evaluation. Code and dataset are available at: https://github.com/BITHLP/RATE. | https://arxiv.org/abs/2601.07338 | Academic Papers | svg |
| 04c0505ffb5432e8a40792195fd606eea5e51b3b0cae5238f64614db541948eb | 2026-01-13T00:00:00-05:00 | On the Extremal Source Key Rates for Secure Storage over Graphs | arXiv:2601.07340v1 Announce Type: new Abstract: This paper investigates secure storage codes over graphs, where multiple independent source symbols are encoded and stored at graph nodes subject to edge-wise correctness and security constraints. For each edge, a specified subset of source symbols must be recoverable from its two incident nodes, while no information about the remaining sources is revealed. To meet the security requirement, a shared source key may be employed. The ratio between the source symbol size and the source key size defines the source key rate, and the supremum of all achievable rates is referred to as the source key capacity. We study extremal values of the source key capacity in secure storage systems and provide complete graph characterizations for several fundamental settings. For the case where each edge is associated with a single source symbol, we characterize all graphs whose source key capacity equals one. We then generalize this result to the case where each edge is associated with multiple source symbols and identify a broad class of graphs that achieve the corresponding extremal capacity under a mild structural condition. In addition, we characterize all graphs for which secure storage can be achieved without using any source key. | https://arxiv.org/abs/2601.07340 | Academic Papers | svg |
| 3a15badfe5d34694b38222e6ebb8f1494c673978f5c1cf56391e8097840aa25b | 2026-01-13T00:00:00-05:00 | Agentic Diagnostic Reasoning over Telecom and Datacenter Infrastructure | arXiv:2601.07342v1 Announce Type: new Abstract: Large-scale telecom and datacenter infrastructures rely on multi-layered service and resource models, where failures propagate across physical and logical components and affect multiple customers. Traditional approaches to root cause analysis (RCA) rely on hard-coded graph traversal algorithms or rule-based correlation engines, which are costly to maintain and tightly coupled to the infrastructure model. In this work, we introduce an agentic diagnostic framework where a Large Language Model (LLM) performs step-wise investigation using a constrained tool space exposed through the Model Context Protocol (MCP). Instead of embedding causal logic or traversal algorithms into the application, the agent autonomously navigates the infrastructure model by invoking tools for service lookup, dependency retrieval, structured and unstructured data and event analysis, and impact discovery. We define an investigation protocol that structures the agent's reasoning and ensures grounding, reproducibility, and safe handling of missing or ambiguous information. This work lays the foundation for autonomous incident resolution and change impact mitigation. Future systems will not only diagnose and remediate infrastructure failures, but also predict the impact of planned changes on services and customers, enabling operators to mitigate risks before executing maintenance operations. | https://arxiv.org/abs/2601.07342 | Academic Papers | svg |
| 20691a6669e97fb6027204a451d388751d81ae545b71e215094ce54664233376 | 2026-01-13T00:00:00-05:00 | PulseMind: A Multi-Modal Medical Model for Real-World Clinical Diagnosis | arXiv:2601.07344v1 Announce Type: new Abstract: Recent advances in medical multi-modal models focus on specialized image analysis like dermatology, pathology, or radiology. However, they do not fully capture the complexity of real-world clinical diagnostics, which involve heterogeneous inputs and require ongoing contextual understanding during patient-physician interactions. To bridge this gap, we introduce PulseMind, a new family of multi-modal diagnostic models that integrates a systematically curated dataset, a comprehensive evaluation benchmark, and a tailored training framework. Specifically, we first construct a diagnostic dataset, MediScope, which comprises 98,000 real-world multi-turn consultations and 601,500 medical images, spanning over 10 major clinical departments and more than 200 sub-specialties. Then, to better reflect the requirements of real-world clinical diagnosis, we develop the PulseMind Benchmark, a multi-turn diagnostic consultation benchmark with a four-dimensional evaluation protocol comprising proactiveness, accuracy, usefulness, and language quality. Finally, we design a training framework tailored for multi-modal clinical diagnostics, centered around a core component named Comparison-based Reinforcement Policy Optimization (CRPO). Compared to absolute score rewards, CRPO uses relative preference signals from multi-dimensional comparisons to provide stable and human-aligned training guidance. Extensive experiments demonstrate that PulseMind achieves competitive performance on both the diagnostic consultation benchmark and public medical benchmarks. | https://arxiv.org/abs/2601.07344 | Academic Papers | svg |
| 589f912970fe5f099a55ce74e2acab4a42b2c0ec0ba7f850c85464db34d54dc0 | 2026-01-13T00:00:00-05:00 | DiffER: Diffusion Entity-Relation Modeling for Reversal Curse in Diffusion Large Language Models | arXiv:2601.07347v1 Announce Type: new Abstract: The "reversal curse" refers to the phenomenon where large language models (LLMs) exhibit predominantly unidirectional behavior when processing logically bidirectional relationships. Prior work attributed this to autoregressive training -- predicting the next token inherently favors left-to-right information flow over genuine bidirectional knowledge associations. However, we observe that Diffusion LLMs (DLLMs), despite being trained bidirectionally, also suffer from the reversal curse. To investigate the root causes, we conduct systematic experiments on DLLMs and identify three key reasons: 1) entity fragmentation during training, 2) data asymmetry, and 3) missing entity relations. Motivated by the analysis of these reasons, we propose Diffusion Entity-Relation Modeling (DiffER), which addresses the reversal curse through entity-aware training and balanced data construction. Specifically, DiffER introduces whole-entity masking, which mitigates entity fragmentation by predicting complete entities in a single step. DiffER further employs distribution-symmetric and relation-enhanced data construction strategies to alleviate data asymmetry and missing relations. Extensive experiments demonstrate that DiffER effectively alleviates the reversal curse in Diffusion LLMs, offering new perspectives for future research. | https://arxiv.org/abs/2601.07347 | Academic Papers | svg |
| 9dfa6efdf914470b9f1d9b44aa927141a7a69f1b3d8d6a7f3e4b0cfd8a743932 | 2026-01-13T00:00:00-05:00 | Controlled Self-Evolution for Algorithmic Code Optimization | arXiv:2601.07348v1 Announce Type: new Abstract: Self-evolution methods enhance code generation through iterative "generate-verify-refine" cycles, yet existing approaches suffer from low exploration efficiency, failing to discover solutions with superior complexity within limited budgets. This inefficiency stems from initialization bias trapping evolution in poor solution regions, uncontrolled stochastic operations lacking feedback guidance, and insufficient experience utilization across tasks. To address these bottlenecks, we propose Controlled Self-Evolution (CSE), which consists of three key components. Diversified Planning Initialization generates structurally distinct algorithmic strategies for broad solution space coverage. Genetic Evolution replaces stochastic operations with feedback-guided mechanisms, enabling targeted mutation and compositional crossover. Hierarchical Evolution Memory captures both successful and failed experiences at inter-task and intra-task levels. Experiments on EffiBench-X demonstrate that CSE consistently outperforms all baselines across various LLM backbones. Furthermore, CSE achieves higher efficiency from early generations and maintains continuous improvement throughout evolution. Our code is publicly available at https://github.com/QuantaAlpha/EvoControl. | https://arxiv.org/abs/2601.07348 | Academic Papers | svg |
| b3385e6b08d0acda51bb6faf4d35979863dcb2b00f76e5066388a2dff37e1cd6 | 2026-01-13T00:00:00-05:00 | Reward Modeling from Natural Language Human Feedback | arXiv:2601.07349v1 Announce Type: new Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) on preference data has become the mainstream approach for training Generative Reward Models (GRMs). Typically in pairwise rewarding tasks, GRMs generate reasoning chains ending with critiques and preference labels, and RLVR then relies on the correctness of the preference labels as the training reward. However, in this paper, we demonstrate that such binary classification tasks make GRMs susceptible to guessing correct outcomes without sound critiques. Consequently, these spurious successes introduce substantial noise into the reward signal, thereby impairing the effectiveness of reinforcement learning. To address this issue, we propose Reward Modeling from Natural Language Human Feedback (RM-NLHF), which leverages natural language feedback to obtain process reward signals, thereby mitigating the problem of limited solution space inherent in binary tasks. Specifically, we compute the similarity between GRM-generated and human critiques as the training reward, which provides more accurate reward signals than outcome-only supervision. Additionally, considering that human critiques are difficult to scale up, we introduce Meta Reward Model (MetaRM), which learns to predict process reward from datasets with human critiques and then generalizes to data without human critiques. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art GRMs trained with outcome-only reward, confirming the superiority of integrating natural language over binary human feedback as supervision. | https://arxiv.org/abs/2601.07349 | Academic Papers | svg |
| a2e382c0711753bd67cc658e91b07046f0c4c563b5c80cafa9fa6b26dcc795cd | 2026-01-13T00:00:00-05:00 | Beyond Hard Masks: Progressive Token Evolution for Diffusion Language Models | arXiv:2601.07351v1 Announce Type: new Abstract: Diffusion Language Models (DLMs) offer a promising alternative for language modeling by enabling parallel decoding through iterative refinement. However, most DLMs rely on hard binary masking and discrete token assignments, which hinder the revision of early decisions and underutilize intermediate probabilistic representations. In this paper, we propose EvoToken-DLM, a novel diffusion-based language modeling approach that replaces hard binary masks with evolving soft token distributions. EvoToken-DLM enables a progressive transition from masked states to discrete outputs, supporting revisable decoding. To effectively support this evolution, we introduce continuous trajectory supervision, which aligns training objectives with iterative probabilistic updates. Extensive experiments across multiple benchmarks show that EvoToken-DLM consistently achieves superior performance, outperforming strong diffusion-based and masked DLM baselines. Project webpage: https://aim-uofa.github.io/EvoTokenDLM. | https://arxiv.org/abs/2601.07351 | Academic Papers | svg |
| 9a5774bb598746354470124c68e17c6c97d8364a6c03cd018b17b3ccbccfbd34 | 2026-01-13T00:00:00-05:00 | TALON: Confidence-Aware Speculative Decoding with Adaptive Token Trees | arXiv:2601.07353v1 Announce Type: new Abstract: Speculative decoding (SD) has become a standard technique for accelerating LLM inference without sacrificing output quality. Recent advances in speculative decoding have shifted from sequential chain-based drafting to tree-structured generation, where the draft model constructs a tree of candidate tokens to explore multiple possible drafts in parallel. However, existing tree-based SD methods typically build a fixed-width, fixed-depth draft tree, which fails to adapt to the varying difficulty of tokens and contexts. As a result, the draft model cannot dynamically adjust the tree structure to early stop on difficult tokens and extend generation for simple ones. To address these challenges, we introduce TALON, a training-free, budget-driven adaptive tree expansion framework that can be plugged into existing tree-based methods. Unlike static methods, TALON constructs the draft tree iteratively until a fixed token budget is met, using a hybrid expansion strategy that adaptively allocates the node budget to each layer of the draft tree. This framework naturally shapes the draft tree into a "deep-and-narrow" form for deterministic contexts and a "shallow-and-wide" form for uncertain branches, effectively optimizing the trade-off between exploration width and generation depth under a given budget. Extensive experiments across 5 models and 6 datasets demonstrate that TALON consistently outperforms state-of-the-art EAGLE-3, achieving up to 5.16x end-to-end speedup over auto-regressive decoding. | https://arxiv.org/abs/2601.07353 | Academic Papers | svg |
| 618bd800178819d7fa4df40f9b34e521ff9240fa2e698e887defdb6d13856e7d | 2026-01-13T00:00:00-05:00 | Semantic Compression of LLM Instructions via Symbolic Metalanguages | arXiv:2601.07354v1 Announce Type: new Abstract: We introduce MetaGlyph, a symbolic language for compressing prompts by encoding instructions as mathematical symbols rather than prose. Unlike systems requiring explicit decoding rules, MetaGlyph uses symbols like $\in$ (membership) and $\Rightarrow$ (implication) that models already understand from their training data. We test whether these symbols work as "instruction shortcuts" that models can interpret without additional teaching. We evaluate eight models across two dimensions relevant to practitioners: scale (3B-1T parameters) and accessibility (open-source for local deployment vs. proprietary APIs). MetaGlyph achieves 62-81% token reduction across all task types. For API-based deployments, this translates directly to cost savings; for local deployments, it reduces latency and memory pressure. Results vary by model. Gemini 2.5 Flash achieves 75% semantic equivalence between symbolic and prose instructions on selection tasks, with 49.9% membership operator fidelity. Kimi K2 reaches 98.1% fidelity for implication ($\Rightarrow$) and achieves perfect (100%) accuracy on selection tasks with symbolic prompts. GPT-5.2 Chat shows the highest membership fidelity observed (91.3%), though with variable parse success across task types. Claude Haiku 4.5 achieves 100% parse success with 26% membership fidelity. Among mid-sized models, Qwen 2.5 7B shows 62% equivalence on extraction tasks. Mid-sized open-source models (7B-12B) show near-zero operator fidelity, suggesting a U-shaped relationship where sufficient scale overcomes instruction-tuning biases. | https://arxiv.org/abs/2601.07354 | Academic Papers | svg |
| 99b0ecf6106e19599492f7205c4e0d6c3413181decb7570ebfab6872a599d396 | 2026-01-13T00:00:00-05:00 | Fast and Provable Nonconvex Robust Matrix Completion | arXiv:2601.07355v1 Announce Type: new Abstract: This paper studies the robust matrix completion (RMC) problem and proposes a computationally efficient non-convex method called ARMC. The method is developed by introducing subspace projection into a singular value thresholding based method when updating the low rank part. Numerical experiments on synthetic and real data show that ARMC is superior to existing non-convex RMC methods. Through a refined analysis based on the leave-one-out technique, we establish a theoretical guarantee for ARMC subject to both sparse outliers and stochastic noise. The established bounds for the sample complexity and outlier sparsity are better than those established for a convex approach that also considers both outliers and stochastic noise. | https://arxiv.org/abs/2601.07355 | Academic Papers | svg |
| e1e025bd88fd893260f878141a79638ff6c92791e2ecc897f1cd4d73dfc6cff2 | 2026-01-13T00:00:00-05:00 | Seeing Right but Saying Wrong: Inter- and Intra-Layer Refinement in MLLMs without Training | arXiv:2601.07359v1 Announce Type: new Abstract: Multimodal Large Language Models (MLLMs) have demonstrated strong capabilities across a variety of vision-language tasks. However, their internal reasoning often exhibits a critical inconsistency: although deeper layers may attend to the correct visual regions, final predictions are frequently misled by noisy attention from earlier layers. This results in a disconnect between what the model internally understands and what it ultimately expresses, a phenomenon we describe as seeing it right but saying it wrong. To address this issue, we propose DualPD, a dual-perspective decoding refinement strategy that enhances the visual understanding without any additional training. DualPD consists of two components. (1) The layer-wise attention-guided contrastive logits module captures how the belief in the correct answer evolves by comparing output logits between layers that exhibit the largest attention shift. (2) The head-wise information filtering module suppresses low-contribution attention heads that focus on irrelevant regions, thereby improving attention quality within each layer. Experiments conducted on both the LLaVA and Qwen-VL model families across multiple multimodal benchmarks demonstrate that DualPD consistently improves accuracy without training, confirming its effectiveness and generalizability. The code will be released upon publication. | https://arxiv.org/abs/2601.07359 | Academic Papers | svg |
| e45b3a9fac7dbd8cd3c72f676d90d3f206dfeba581081e81279ebcd00de9f8a6 | 2026-01-13T00:00:00-05:00 | Large-Scale Autonomous Gas Monitoring for Volcanic Environments: A Legged Robot on Mount Etna | arXiv:2601.07362v1 Announce Type: new Abstract: Volcanic gas emissions are key precursors of eruptive activity. Yet, obtaining accurate near-surface measurements remains hazardous and logistically challenging, motivating the need for autonomous solutions. Limited mobility in rough volcanic terrain has prevented wheeled systems from performing reliable in situ gas measurements, reducing their usefulness as sensing platforms. We present a legged robotic system for autonomous volcanic gas analysis, utilizing the quadruped ANYmal, equipped with a quadrupole mass spectrometer system. Our modular autonomy stack integrates a mission planning interface, global planner, localization framework, and terrain-aware local navigation. We evaluated the system on Mount Etna across three autonomous missions in varied terrain, achieving successful gas-source detections with autonomy rates of 93-100%. In addition, we conducted a teleoperated mission in which the robot measured natural fumaroles, detecting sulfur dioxide and carbon dioxide. We discuss lessons learned from the gas-analysis and autonomy perspectives, emphasizing the need for adaptive sensing strategies, tighter integration of global and local planning, and improved hardware design. | https://arxiv.org/abs/2601.07362 | Academic Papers | svg |
2c1fc4918c6c923e3ed484c600f0d5db49ed225351a19ef9c4c55f5394227478
|
2026-01-13T00:00:00-05:00
|
On the universal definition of intelligence
|
arXiv:2601.07364v1 Announce Type: new Abstract: This paper aims to propose a universal definition of intelligence that enables fair and consistent comparison of human and artificial intelligence (AI). With the rapid development of AI technology in recent years, how to compare and evaluate human and AI intelligence has become an important theoretical issue. However, existing definitions of intelligence are anthropocentric and unsuitable for empirical comparison, resulting in a lack of consensus in the research field. This paper first introduces four criteria for evaluating intelligence definitions based on R. Carnap's methodology of conceptual clarification: similarity to explicandum, exactness, fruitfulness, and simplicity. We then examine six representative definitions: IQ testing, complex problem-solving ability, reward optimization, environmental adaptation, learning efficiency, and predictive ability, and clarify their theoretical strengths and limitations. The results show that while definitions based on predictive ability have high explanatory power and empirical feasibility, they suffer from an inability to adequately explain the relationship between predictions and behavior/benefits. This paper proposes the Extended Predictive Hypothesis (EPH), which views intelligence as a combination of the ability to accurately predict the future and the ability to benefit from those predictions. Furthermore, by distinguishing predictive ability into spontaneous and reactive predictions and adding the concept of gainability, we present a unified framework for explaining various aspects of intelligence, such as creativity, learning, and future planning. In conclusion, this paper argues that the EPH is the most satisfactory and universal definition for comparing human and AI intelligence.
|
https://arxiv.org/abs/2601.07364
|
Academic Papers
|
svg
|
76e3e22b4d91c279cfec947453c527fcc1f2005d11607a28c1f9af15f85d8003
|
2026-01-13T00:00:00-05:00
|
HiVid-Narrator: Hierarchical Video Narrative Generation with Scene-Primed ASR-anchored Compression
|
arXiv:2601.07366v1 Announce Type: new Abstract: Generating structured narrations for real-world e-commerce videos requires models to perceive fine-grained visual details and organize them into coherent, high-level stories--capabilities that existing approaches struggle to unify. We introduce the E-commerce Hierarchical Video Captioning (E-HVC) dataset with dual-granularity, temporally grounded annotations: a Temporal Chain-of-Thought that anchors event-level observations and a Chapter Summary that composes them into concise, story-centric summaries. Rather than directly prompting for chapters, we adopt a staged construction that first gathers reliable linguistic and visual evidence via curated ASR and frame-level descriptions, then refines coarse annotations into precise chapter boundaries and titles conditioned on the Temporal Chain-of-Thought, yielding fact-grounded, time-aligned narratives. We also observe that e-commerce videos are fast-paced and information-dense, with visual tokens dominating the input sequence. To enable efficient training while reducing input tokens, we propose the Scene-Primed ASR-anchored Compressor (SPA-Compressor), which compresses multimodal tokens into hierarchical scene and event representations guided by ASR semantic cues. Built upon these designs, our HiVid-Narrator framework achieves superior narrative quality with fewer input tokens compared to existing methods.
|
https://arxiv.org/abs/2601.07366
|
Academic Papers
|
svg
|
c3c46a1e03ad71c31cb7504c800576b78e131bece87319776e5f5e2f7d12b7c3
|
2026-01-13T00:00:00-05:00
|
FOCAL: A Novel Benchmarking Technique for Multi-modal Agents
|
arXiv:2601.07367v1 Announce Type: new Abstract: With recent advancements in reasoning capabilities, tool calling using MCP servers, and Audio Language Models (ALMs), the development and integration of multi-modal agents (with voice and text support) has come to the industry forefront. Cascading pipelines for voice agents still play a central role in the industry owing to their superior reasoning capabilities facilitated by LLMs. However, cascading pipelines are prone to error propagation through the pipeline. We propose a framework, FOCAL, to benchmark end-to-end reasoning, component-wise error propagation, and error analysis for automated as well as human-assisted testing of multi-modal agents (voice to voice + text input). We also introduce two novel metrics, viz. Reasoning and Semantic scores, to evaluate the efficacy of the agent in having meaningful conversations in voice mode.
|
https://arxiv.org/abs/2601.07367
|
Academic Papers
|
svg
|
9abfd1a5b8a54f5e4f6eb4bcbf7c07bb88c70ef6f8e5effb49d500d8ab985ce1
|
2026-01-13T00:00:00-05:00
|
Interpretable Text Classification Applied to the Detection of LLM-generated Creative Writing
|
arXiv:2601.07368v1 Announce Type: new Abstract: We consider the problem of distinguishing human-written creative fiction (excerpts from novels) from similar text generated by an LLM. Our results show that, while human observers perform poorly (near chance levels) on this binary classification task, a variety of machine-learning models achieve accuracy in the range 0.93 - 0.98 over a previously unseen test set, even using only short samples and single-token (unigram) features. We therefore employ an inherently interpretable (linear) classifier (with a test accuracy of 0.98), in order to elucidate the underlying reasons for this high accuracy. In our analysis, we identify specific unigram features indicative of LLM-generated text, one of the most important being that the LLM tends to use a larger variety of synonyms, thereby skewing the probability distributions in a manner that is easy to detect for a machine learning classifier, yet very difficult for a human observer. Four additional explanation categories were also identified, namely, temporal drift, Americanisms, foreign language usage, and colloquialisms. As identification of the AI-generated text depends on a constellation of such features, the classification appears robust, and therefore not easy to circumvent by malicious actors intent on misrepresenting AI-generated text as human work.
|
https://arxiv.org/abs/2601.07368
|
Academic Papers
|
svg
|
a08cdb16dc63ec78641431157b5e0888ec349713d4c0d9e5488bfdcfe80718b5
|
2026-01-13T00:00:00-05:00
|
Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models
|
arXiv:2601.07372v1 Announce Type: new Abstract: While Mixture-of-Experts (MoE) scales capacity via conditional computation, Transformers lack a native primitive for knowledge lookup, forcing them to inefficiently simulate retrieval through computation. To address this, we introduce conditional memory as a complementary sparsity axis, instantiated via Engram, a module that modernizes classic $N$-gram embedding for O(1) lookup. By formulating the Sparsity Allocation problem, we uncover a U-shaped scaling law that optimizes the trade-off between neural computation (MoE) and static memory (Engram). Guided by this law, we scale Engram to 27B parameters, achieving superior performance over a strictly iso-parameter and iso-FLOPs MoE baseline. Most notably, while the memory module is expected to aid knowledge retrieval (e.g., MMLU +3.4; CMMLU +4.0), we observe even larger gains in general reasoning (e.g., BBH +5.0; ARC-Challenge +3.7) and code/math domains (HumanEval +3.0; MATH +2.4). Mechanistic analyses reveal that Engram relieves the backbone's early layers from static reconstruction, effectively deepening the network for complex reasoning. Furthermore, by delegating local dependencies to lookups, it frees up attention capacity for global context, substantially boosting long-context retrieval (e.g., Multi-Query NIAH: 84.2 to 97.0). Finally, Engram establishes infrastructure-aware efficiency: its deterministic addressing enables runtime prefetching from host memory, incurring negligible overhead. We envision conditional memory as an indispensable modeling primitive for next-generation sparse models.
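The classic N-gram embedding lookup that Engram modernizes can be sketched in a few lines. This is an illustrative toy under our own assumptions (the `ngram_lookup` name, the hashing scheme, and the table size are not from the paper): the trailing n-gram of the input deterministically addresses one row of a fixed table, giving O(1) retrieval with no computation over the sequence.

```python
import numpy as np

def ngram_lookup(token_ids, table, n=2):
    """Illustrative O(1) conditional-memory lookup: hash the trailing
    n-gram of the token sequence into a fixed embedding table and
    fetch a single row. Deterministic addressing means the same
    n-gram always retrieves the same memory slot."""
    key = tuple(token_ids[-n:])
    idx = hash(key) % table.shape[0]
    return table[idx]

rng = np.random.default_rng(0)
table = rng.standard_normal((1024, 8))  # 1024 memory slots, dim 8
emb = ngram_lookup([5, 17, 42], table, n=2)
print(emb.shape)  # -> (8,)
```

Because the address depends only on the trailing n-gram, any prefix sharing that n-gram hits the same slot, which is what makes runtime prefetching from host memory feasible.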
|
https://arxiv.org/abs/2601.07372
|
Academic Papers
|
svg
|
76c5a2a95bf565e3cff6055cc265b29628e1a039563fc62edc31c8254e777ad2
|
2026-01-13T00:00:00-05:00
|
GROKE: Vision-Free Navigation Instruction Evaluation via Graph Reasoning on OpenStreetMap
|
arXiv:2601.07375v1 Announce Type: new Abstract: The evaluation of navigation instructions remains a persistent challenge in Vision-and-Language Navigation (VLN) research. Traditional reference-based metrics such as BLEU and ROUGE fail to capture the functional utility of spatial directives, specifically whether an instruction successfully guides a navigator to the intended destination. Although existing VLN agents could serve as evaluators, their reliance on high-fidelity visual simulators introduces licensing constraints and computational costs, and perception errors further confound linguistic quality assessment. This paper introduces GROKE (Graph-based Reasoning over OSM Knowledge for instruction Evaluation), a vision-free, training-free, hierarchical LLM-based framework for evaluating navigation instructions using OpenStreetMap data. Through systematic ablation studies, we demonstrate that structured JSON and textual formats for spatial information substantially outperform grid-based and visual graph representations. Our hierarchical architecture combines sub-instruction planning with topological graph navigation, reducing navigation error by 68.5% compared to heuristic and sampling baselines on the Map2Seq dataset. The agent's execution success, trajectory fidelity, and decision patterns serve as proxy metrics for functional navigability given OSM-visible landmarks and topology, establishing a scalable and interpretable evaluation paradigm without visual dependencies. Code and data are available at https://anonymous.4open.science/r/groke.
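In its simplest form, topological graph navigation of the kind GROKE performs over OSM data reduces to path search on an intersection graph. The sketch below is a generic BFS with hypothetical node names, not GROKE's planner (which layers sub-instruction planning and LLM reasoning on top of the graph):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over a topological street graph: returns
    the fewest-hops path from start to goal, or None if unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# toy intersection graph (node names are hypothetical)
g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path(g, "A", "D"))  # -> ['A', 'B', 'D']
```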
|
https://arxiv.org/abs/2601.07375
|
Academic Papers
|
svg
|
b721ecaf9901d613ed2f8195ec6b7951ec54048d380581e35329a7780da81b2a
|
2026-01-13T00:00:00-05:00
|
OpenTinker: Separating Concerns in Agentic Reinforcement Learning
|
arXiv:2601.07376v1 Announce Type: new Abstract: We introduce OpenTinker, an infrastructure for reinforcement learning (RL) of large language model (LLM) agents built around a separation of concerns across algorithm design, execution, and agent-environment interaction. Rather than relying on monolithic, end-to-end RL pipelines, OpenTinker decomposes agentic learning systems into lightweight, composable components with clearly defined abstraction boundaries. Users specify agents, environments, and interaction protocols, while inference and training are delegated to a managed execution runtime. OpenTinker introduces a centralized scheduler for managing training and inference workloads, including LoRA-based and full-parameter RL, supervised fine-tuning, and inference, over shared resources. We further discuss design principles for extending OpenTinker to multi-agent training. Finally, we present a set of RL use cases that demonstrate the effectiveness of the framework in practical agentic learning scenarios.
|
https://arxiv.org/abs/2601.07376
|
Academic Papers
|
svg
|
23bb11e316472017a4e1c50a09cd7622f6282ee66c81823acfbd54dcc2638195
|
2026-01-13T00:00:00-05:00
|
Learning Dynamic Collaborative Network for Semi-supervised 3D Vessel Segmentation
|
arXiv:2601.07377v1 Announce Type: new Abstract: In this paper, we present a new dynamic collaborative network for semi-supervised 3D vessel segmentation, termed DiCo. Conventional mean teacher (MT) methods typically employ a static approach, where the roles of the teacher and student models are fixed. However, due to the complexity of 3D vessel data, the teacher model may not always outperform the student model, leading to cognitive biases that can limit performance. To address this issue, we propose a dynamic collaborative network that allows the two models to dynamically switch their teacher-student roles. Additionally, we introduce a multi-view integration module to capture various perspectives of the inputs, mirroring the way doctors conduct medical analysis. We also incorporate adversarial supervision to constrain the shape of the segmented vessels in unlabeled data. In this process, the 3D volume is projected into 2D views to mitigate the impact of label inconsistencies. Experiments demonstrate that our DiCo method sets new state-of-the-art performance on three 3D vessel segmentation benchmarks. The code repository address is https://github.com/xujiaommcome/DiCo
|
https://arxiv.org/abs/2601.07377
|
Academic Papers
|
svg
|
e6ba663e276eafa6ed9946631c4be64a6659ed0e3950de1d3c1bd073e86e163a
|
2026-01-13T00:00:00-05:00
|
Interactive visualizations for adolescents to understand and challenge algorithmic profiling in online platforms
|
arXiv:2601.07381v1 Announce Type: new Abstract: Social media platforms regularly track, aggregate, and monetize adolescents' data, yet provide them with little visibility or agency over how algorithms construct their digital identities and make inferences about them. We introduce Algorithmic Mirror, an interactive visualization tool that transforms opaque profiling practices into explorable landscapes of personal data. It uniquely leverages adolescents' real digital footprints across YouTube, TikTok, and Netflix, to provide situated, personalized insights into datafication over time. In our study with 27 participants (ages 12--16), we show how engaging with their own data enabled adolescents to uncover the scale and persistence of data collection, recognize cross-platform profiling, and critically reflect on algorithmic categorizations of their interests. These findings highlight how identity is a powerful motivator for adolescents' desire for greater digital agency, underscoring the need for platforms and policymakers to move toward structural reforms that guarantee children better transparency and the agency to influence their online experiences.
|
https://arxiv.org/abs/2601.07381
|
Academic Papers
|
svg
|
910c32e3a1700a222a5d0513bf3f303a058dea2d638a77efc66fb23c80a90a74
|
2026-01-13T00:00:00-05:00
|
CompNO: A Novel Foundation Model approach for solving Partial Differential Equations
|
arXiv:2601.07384v1 Announce Type: new Abstract: Partial differential equations (PDEs) govern a wide range of physical phenomena, but their numerical solution remains computationally demanding, especially when repeated simulations are required across many parameter settings. Recent Scientific Foundation Models (SFMs) aim to alleviate this cost by learning universal surrogates from large collections of simulated systems, yet they typically rely on monolithic architectures with limited interpretability and high pretraining expense. In this work we introduce Compositional Neural Operators (CompNO), a compositional neural operator framework for parametric PDEs. Instead of pretraining a single large model on heterogeneous data, CompNO first learns a library of Foundation Blocks, where each block is a parametric Fourier neural operator specialized to a fundamental differential operator (e.g. convection, diffusion, nonlinear convection). These blocks are then assembled, via lightweight Adaptation Blocks, into task-specific solvers that approximate the temporal evolution operator for target PDEs. A dedicated boundary-condition operator further enforces Dirichlet constraints exactly at inference time. We validate CompNO on one-dimensional convection, diffusion, convection--diffusion and Burgers' equations from the PDEBench suite. The proposed framework achieves lower relative L2 error than strong baselines (PFNO, PDEFormer and in-context learning based models) on linear parametric systems, while remaining competitive on nonlinear Burgers' flows. The model maintains exact boundary satisfaction with zero loss at domain boundaries, and exhibits robust generalization across a broad range of Peclet and Reynolds numbers. These results demonstrate that compositional neural operators provide a scalable and physically interpretable pathway towards foundation models for PDEs.
|
https://arxiv.org/abs/2601.07384
|
Academic Papers
|
svg
|
70f59b9fdcbeee6c1adcb6381cc39e30ca9d9bdde9e70dadbf7a17f9e9767410
|
2026-01-13T00:00:00-05:00
|
Computing patient similarity based on unstructured clinical notes
|
arXiv:2601.07385v1 Announce Type: new Abstract: Clinical notes hold rich yet unstructured details about diagnoses, treatments, and outcomes that are vital to precision medicine but hard to exploit at scale. We introduce a method that represents each patient as a matrix built from aggregated embeddings of all their notes, enabling robust patient similarity computation based on their latent low-rank representations. Using clinical notes of 4,267 Czech breast-cancer patients and expert similarity labels from Masaryk Memorial Cancer Institute, we evaluate several matrix-based similarity measures and analyze their strengths and limitations across different similarity facets, such as clinical history, treatment, and adverse events. The results demonstrate the usefulness of the presented method for downstream tasks, such as personalized therapy recommendations or toxicity warnings.
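One way to realize "similarity based on latent low-rank representations" of a patient's note-embedding matrix is to compare principal subspaces. The sketch below is an illustrative construction under that assumption, not the paper's exact measure; the function names, rank k, and toy dimensions are ours:

```python
import numpy as np

def low_rank_basis(note_embeddings, k=2):
    """Top-k left singular subspace of a patient's (notes x dim)
    embedding matrix, transposed so the basis lives in embedding space."""
    u, _, _ = np.linalg.svd(note_embeddings.T, full_matrices=False)
    return u[:, :k]

def subspace_similarity(a, b):
    """Mean squared cosine of the principal angles between two
    orthonormal bases: 1.0 for identical subspaces, 0.0 for orthogonal."""
    s = np.linalg.svd(a.T @ b, compute_uv=False)
    return float(np.mean(s ** 2))

rng = np.random.default_rng(1)
p1 = rng.standard_normal((30, 16))              # 30 notes, 16-dim embeddings
p2 = p1 + 0.01 * rng.standard_normal((30, 16))  # near-duplicate patient
b1, b2 = low_rank_basis(p1), low_rank_basis(p2)
print(round(subspace_similarity(b1, b2), 3))    # close to 1.0
```

A matrix view of each patient is what lets similarity be computed per facet: restricting the notes fed into the matrix (e.g., only treatment notes) yields facet-specific subspaces.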
|
https://arxiv.org/abs/2601.07385
|
Academic Papers
|
svg
|
a97766792ab3f014dc31ed74a40cbd90bd4f632430ad5bd039e7b7d456b24bb2
|
2026-01-13T00:00:00-05:00
|
Novel Decoding Algorithm for Noiseless Non-Adaptive Group Testing
|
arXiv:2601.07388v1 Announce Type: new Abstract: Group testing enables the identification of a small subset of defective items within a larger population by performing tests on pools of items rather than on each item individually. Over the years, it has not only attracted attention from the academic community, but has also demonstrated its potential in addressing real-world problems such as infectious disease screening, drug discovery and manufacturing quality control. With the emergence of the COVID-19 pandemic, interest in group testing has grown further, particularly in non-adaptive testing, due to its time efficiency compared to adaptive approaches. This highlights the importance of improving the performance currently achievable in such a scheme. This article focuses on advancing the field of noiseless non-adaptive group testing. The main objective of this work is to study and maximize the probability of successfully identifying the subset of defective items while performing as few tests as possible. To this end, we first note current well-known decoding algorithms, as well as established test design strategies for assigning items to pools. From this review, we identify key opportunities for improvement that inform the development of new decoding algorithms. Specifically, we propose a novel method, Weighted Sequential Combinatorial Orthogonal Matching Pursuit (W-SCOMP), to enhance the efficiency of existing detection procedures. Theoretical results demonstrate that W-SCOMP outperforms other algorithms in noiseless non-adaptive group testing. Furthermore, we develop a simulation framework to model the group testing process and conduct comparative evaluations between the proposed and existing algorithms. The empirical results are consistent with the theoretical findings. Overall, our work expands the range of available decoding algorithms and contributes to the broader understanding of noiseless non-adaptive group testing.
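As background to the proposed W-SCOMP, the classic COMP decoder for the noiseless non-adaptive setting fits in a few lines (toy pool design; W-SCOMP refines this kind of procedure rather than replacing it):

```python
import numpy as np

def comp_decode(pool_matrix, test_results):
    """Classic COMP decoding for noiseless non-adaptive group testing:
    any item appearing in at least one negative test is definitely
    non-defective; every remaining item is declared defective.
    pool_matrix[t, i] == 1 iff item i is included in test t."""
    in_negative = pool_matrix[test_results == 0].any(axis=0)
    return ~in_negative

# 4 tests over 5 items, true defectives = {1, 3}
M = np.array([[1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [1, 1, 0, 1, 0]])
defective = np.array([False, True, False, True, False])
results = (M @ defective) > 0  # noiseless OR channel
print(np.flatnonzero(comp_decode(M, results)))  # -> [1 3]
```

COMP never misses a defective but can over-report when a non-defective item happens to appear only in positive tests; sequential refinements such as SCOMP (and the weighted variant proposed here) target exactly those residual false positives.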
|
https://arxiv.org/abs/2601.07388
|
Academic Papers
|
svg
|
887a6ee349f1375f27092fb89eb77a2cf9735e761e9eabc28a276e183ea1ac81
|
2026-01-13T00:00:00-05:00
|
On the Non-decoupling of Supervised Fine-tuning and Reinforcement Learning in Post-training
|
arXiv:2601.07389v1 Announce Type: new Abstract: Post-training of large language models routinely interleaves supervised fine-tuning (SFT) with reinforcement learning (RL). These two methods have different objectives: SFT minimizes the cross-entropy loss between model outputs and expert responses, while RL maximizes reward signals derived from human preferences or rule-based verifiers. Modern reasoning models have widely adopted the practice of alternating SFT and RL training. However, there is no theoretical account of whether they can be decoupled. We prove that decoupling is impossible in either order: (1) SFT-then-RL coupling: RL increases SFT loss under SFT optimality and (2) RL-then-SFT coupling: SFT lowers the reward achieved by RL. Experiments on Qwen3-0.6B confirm the predicted degradation, verifying that SFT and RL cannot be separated without loss of prior performance in the post-training process.
|
https://arxiv.org/abs/2601.07389
|
Academic Papers
|
svg
|
e2299e5aaa14637e0816ab3536db813414070fe97fee24a8d13b6fa6d2e0f003
|
2026-01-13T00:00:00-05:00
|
OceanSAR-2: A Universal Feature Extractor for SAR Ocean Observation
|
arXiv:2601.07392v1 Announce Type: new Abstract: We present OceanSAR-2, the second generation of our foundation model for SAR-based ocean observation. Building on our earlier release, which pioneered self-supervised learning on Sentinel-1 Wave Mode data, OceanSAR-2 relies on improved SSL training and dynamic data curation strategies, which enhances performance while reducing training cost. OceanSAR-2 demonstrates strong transfer performance across downstream tasks, including geophysical pattern classification, ocean surface wind vector and significant wave height estimation, and iceberg detection. We release standardized benchmark datasets, providing a foundation for systematic evaluation and advancement of SAR models for ocean applications.
|
https://arxiv.org/abs/2601.07392
|
Academic Papers
|
svg
|
7209f6d9fd4a1db7ae7a6058ab1e71a84e2d88248b16473ab86a5b1ba6f3376d
|
2026-01-13T00:00:00-05:00
|
Software-Hardware Co-optimization for Modular E2E AV Paradigm: A Unified Framework of Optimization Approaches, Simulation Environment and Evaluation Metrics
|
arXiv:2601.07393v1 Announce Type: new Abstract: Modular end-to-end (ME2E) autonomous driving paradigms combine modular interpretability with global optimization capability and have demonstrated strong performance. However, existing studies mainly focus on accuracy improvement, while critical system-level factors such as inference latency and energy consumption are often overlooked, resulting in increasingly complex model designs that hinder practical deployment. Prior efforts on model compression and acceleration typically optimize either the software or hardware side in isolation. Software-only optimization cannot fundamentally remove intermediate tensor access and operator scheduling overheads, whereas hardware-only optimization is constrained by model structure and precision. As a result, the real-world benefits of such optimizations are often limited. To address these challenges, this paper proposes a reusable software and hardware co-optimization and closed-loop evaluation framework for ME2E autonomous driving inference. The framework jointly integrates software-level model optimization with hardware-level computation optimization under a unified system-level objective. In addition, a multidimensional evaluation metric is introduced to assess system performance by jointly considering safety, comfort, efficiency, latency, and energy, enabling quantitative comparison of different optimization strategies. Experiments across multiple ME2E autonomous driving stacks show that the proposed framework preserves baseline-level driving performance while significantly reducing inference latency and energy consumption, achieving substantial overall system-level improvements. These results demonstrate that the proposed framework provides practical and actionable guidance for efficient deployment of ME2E autonomous driving systems.
|
https://arxiv.org/abs/2601.07393
|
Academic Papers
|
svg
|
21a8549913f3ab12915aa4c41ece17c5d515d0acbc5e5abc0d53f3b261afcb0a
|
2026-01-13T00:00:00-05:00
|
MCP-ITP: An Automated Framework for Implicit Tool Poisoning in MCP
|
arXiv:2601.07395v1 Announce Type: new Abstract: To standardize interactions between LLM-based agents and their environments, the Model Context Protocol (MCP) was proposed and has since been widely adopted. However, integrating external tools expands the attack surface, exposing agents to tool poisoning attacks. In such attacks, malicious instructions embedded in tool metadata are injected into the agent context during the MCP registration phase, thereby manipulating agent behavior. Prior work primarily focuses on explicit tool poisoning or relies on manually crafted poisoned tools. In contrast, we focus on a particularly stealthy variant: implicit tool poisoning, where the poisoned tool itself remains uninvoked. Instead, the instructions embedded in the tool metadata induce the agent to invoke a legitimate but high-privilege tool to perform malicious operations. We propose MCP-ITP, the first automated and adaptive framework for implicit tool poisoning within the MCP ecosystem. MCP-ITP formulates poisoned tool generation as a black-box optimization problem and employs an iterative optimization strategy that leverages feedback from both an evaluation LLM and a detection LLM to maximize Attack Success Rate (ASR) while evading current detection mechanisms. Experimental results on the MCPTox dataset across 12 LLM agents demonstrate that MCP-ITP consistently outperforms the manually crafted baseline, achieving up to 84.2% ASR while suppressing the Malicious Tool Detection Rate (MDR) to as low as 0.3%.
|
https://arxiv.org/abs/2601.07395
|
Academic Papers
|
svg
|
2063fa7d75f33018be1728180db3bf695bfd5ab8d920a38c781902df29706077
|
2026-01-13T00:00:00-05:00
|
Forecast the Principal, Stabilize the Residual: Subspace-Aware Feature Caching for Efficient Diffusion Transformers
|
arXiv:2601.07396v1 Announce Type: new Abstract: Diffusion Transformer (DiT) models have achieved unprecedented quality in image and video generation, yet their iterative sampling process remains computationally prohibitive. To accelerate inference, feature caching methods have emerged by reusing intermediate representations across timesteps. However, existing caching approaches treat all feature components uniformly. We reveal that DiT feature spaces contain distinct principal and residual subspaces with divergent temporal behavior: the principal subspace evolves smoothly and predictably, while the residual subspace exhibits volatile, low-energy oscillations that resist accurate prediction. Building on this insight, we propose SVD-Cache, a subspace-aware caching framework that decomposes diffusion features via Singular Value Decomposition (SVD), applies exponential moving average (EMA) prediction to the dominant low-rank components, and directly reuses the residual subspace. Extensive experiments demonstrate that SVD-Cache achieves near-lossless acceleration across diverse models and methods, including 5.55$\times$ speedup on FLUX and HunyuanVideo, and compatibility with model acceleration techniques including distillation, quantization and sparse attention. Our code is in the supplementary material and will be released on GitHub.
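The principal/residual split at the heart of this idea can be sketched with a rank-k SVD truncation plus an EMA forecast. This is illustrative only: the function names, the rank k, and the EMA weight below are our assumptions, not SVD-Cache's actual configuration.

```python
import numpy as np

def split_principal_residual(feat, k=2):
    """Split a feature matrix into its rank-k principal component
    (smooth, predictable) and the remaining residual (volatile,
    low-energy) via SVD."""
    u, s, vt = np.linalg.svd(feat, full_matrices=False)
    principal = (u[:, :k] * s[:k]) @ vt[:k]
    return principal, feat - principal

def ema(prev, curr, beta=0.7):
    """EMA forecast, applied only to the smoothly evolving principal part."""
    return beta * curr + (1 - beta) * prev

rng = np.random.default_rng(2)
f_prev, f_curr = rng.standard_normal((2, 16, 8))  # features at two timesteps
p_prev, _ = split_principal_residual(f_prev)
p_curr, r_curr = split_principal_residual(f_curr)
# cached reconstruction: forecast the principal part, reuse the residual as-is
f_hat = ema(p_prev, p_curr) + r_curr
print(f_hat.shape)  # -> (16, 8)
```

The design choice follows directly from the stated observation: prediction is applied only where the dynamics are smooth, while the hard-to-predict residual is copied rather than extrapolated.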
|
https://arxiv.org/abs/2601.07396
|
Academic Papers
|
svg
|
68cdd3e150b44170e0c318848bcac72eaa5c16fc9f4e9fe88fb22b3b59363ab2
|
2026-01-13T00:00:00-05:00
|
On Narrative: The Rhetorical Mechanisms of Online Polarisation
|
arXiv:2601.07398v1 Announce Type: new Abstract: Polarisation research has demonstrated how people cluster in homogeneous groups with opposing opinions. However, this effect emerges not only through interaction between people, limiting communication between groups, but also between narratives, shaping opinions and partisan identities. Yet, how polarised groups collectively construct and negotiate opposing interpretations of reality, and whether narratives move between groups despite limited interactions, remains unexplored. To address this gap, we formalise the concept of narrative polarisation and demonstrate its measurement in 212 YouTube videos and 90,029 comments on the Israeli-Palestinian conflict. Based on structural narrative theory and implemented through a large language model, we extract the narrative roles assigned to central actors in two partisan information environments. We find that while videos produce highly polarised narratives, comments significantly reduce narrative polarisation, harmonising discourse on the surface level. However, on a deeper narrative level, recurring narrative motifs reveal additional differences between partisan groups.
|
https://arxiv.org/abs/2601.07398
|
Academic Papers
|
svg
|
ebfddea9ecff92abb91cb912362ced5e8cf22884822379a10abf3afdc5394095
|
2026-01-13T00:00:00-05:00
|
Recommendation-as-Experience: A framework for context-sensitive adaptation in conversational recommender systems
|
arXiv:2601.07401v1 Announce Type: new Abstract: While Conversational Recommender Systems (CRS) have matured technically, they frequently lack principled methods for encoding latent experiential aims as adaptive state variables. Consequently, contemporary architectures often prioritise ranking accuracy at the expense of nuanced, context-sensitive interaction behaviours. This paper addresses this gap through a comprehensive multi-domain study ($N = 168$) that quantifies the joint prioritisation of three critical interaction aims: educative (to inform and justify), explorative (to diversify and inspire), and affective (to align emotionally and socially). Utilising Bayesian hierarchical ordinal regression, we establish domain profiles and perceived item value as systematic modulators of these priorities. Furthermore, we identify stable user-level preferences for autonomy that persist across distinct interactional goals, suggesting that agency is a fundamental requirement of the conversational experience. Drawing on these empirical foundations, we formalise the Recommendation-as-Experience (RAE) adaptation framework. RAE systematically encodes contextual and individual signals into structured state representations, mapping them to experience-aligned dialogue policies realised through retrieval diversification, heuristic logic, or Large Language Model based controllable generation. As an architecture-agnostic blueprint, RAE facilitates the design of context-sensitive CRS that effectively balance experiential quality with predictive performance.
|
https://arxiv.org/abs/2601.07401
|
Academic Papers
|
svg
|
ea4fe522ced38e9507b89d7d7dea32b3931ee524e9a536367917e5d07a38c283
|
2026-01-13T00:00:00-05:00
|
Peacock: UEFI Firmware Runtime Observability Layer for Detection and Response
|
arXiv:2601.07402v1 Announce Type: new Abstract: Modern computing platforms rely on the Unified Extensible Firmware Interface (UEFI) to initialize hardware and coordinate the transition to the operating system. Because this execution environment operates with high privileges and persists across reboots, it has increasingly become a target for advanced threats, including bootkits documented in real systems. Existing protections, including Secure Boot and static signature verification, are insufficient against adversaries who exploit runtime behavior or manipulate firmware components after signature checks have completed. In contrast to operating system (OS) environments, where mature tools provide dynamic inspection and incident response, the pre-OS stage lacks practical mechanisms for real-time visibility and threat detection. We present Peacock, a modular framework that introduces integrity-assured monitoring and remote verification for the UEFI boot process. Peacock consists of three components: (i) a UEFI-based agent that records Boot and Runtime Service activity with cryptographic protection against tampering; (ii) a cross-platform OS Agent that extracts the recorded measurements and produces a verifiable attestation bundle using hardware-backed guarantees from the platform's trusted module; and (iii) a Peacock Server that verifies attestation results and exports structured telemetry for enterprise detection. Our evaluation shows that Peacock reliably detects multiple real-world UEFI bootkits, including Glupteba, BlackLotus, LoJax, and MosaicRegressor. Taken together, these results indicate that Peacock provides practical visibility and verification capabilities within the firmware layer, addressing threats that bypass traditional OS-level security mechanisms.
|
https://arxiv.org/abs/2601.07402
|
Academic Papers
|
svg
|
8140c5746aacb0229fe349c82a02afa5f25f15ad6a309e258451abb8bd1ed34d
|
2026-01-13T00:00:00-05:00
|
Outcome-Grounded Advantage Reshaping for Fine-Grained Credit Assignment in Mathematical Reasoning
|
arXiv:2601.07408v1 Announce Type: new Abstract: Group Relative Policy Optimization (GRPO) has emerged as a promising critic-free reinforcement learning paradigm for reasoning tasks. However, standard GRPO employs a coarse-grained credit assignment mechanism that propagates group-level rewards uniformly to every token in a sequence, neglecting the varying contribution of individual reasoning steps. We address this limitation by introducing Outcome-grounded Advantage Reshaping (OAR), a fine-grained credit assignment mechanism that redistributes advantages based on how much each token influences the model's final answer. We instantiate OAR via two complementary strategies: (1) OAR-P, which estimates outcome sensitivity through counterfactual token perturbations, serving as a high-fidelity attribution signal; (2) OAR-G, which uses an input-gradient sensitivity proxy to approximate the influence signal with a single backward pass. These importance signals are integrated with a conservative Bi-Level advantage reshaping scheme that suppresses low-impact tokens and boosts pivotal ones while preserving the overall advantage mass. Empirical results on extensive mathematical reasoning benchmarks demonstrate that while OAR-P sets the performance upper bound, OAR-G achieves comparable gains with negligible computational overhead, both significantly outperforming a strong GRPO baseline, pushing the boundaries of critic-free LLM reasoning.
|
https://arxiv.org/abs/2601.07408
|
Academic Papers
|
svg
|
8915fa820af7ca0db5581296552a22e19b5242e86744b2a355a85f4edf1a9177
|
2026-01-13T00:00:00-05:00
|
SCALPEL: Selective Capability Ablation via Low-rank Parameter Editing for Large Language Model Interpretability Analysis
|
arXiv:2601.07411v1 Announce Type: new Abstract: Large language models excel across diverse domains, yet their deployment in healthcare, legal systems, and autonomous decision-making remains limited by incomplete understanding of their internal mechanisms. As these models integrate into high-stakes systems, understanding how they encode capabilities has become fundamental to interpretability research. Traditional approaches identify important modules through gradient attribution or activation analysis, assuming specific capabilities map to specific components. However, this oversimplifies neural computation: modules may contribute to multiple capabilities simultaneously, while single capabilities may distribute across multiple modules. These coarse-grained analyses fail to capture fine-grained, distributed capability encoding. We present SCALPEL (Selective Capability Ablation via Low-rank Parameter Editing for Large language models), a framework representing capabilities as low-rank parameter subspaces rather than discrete modules. Our key insight is that capabilities can be characterized by low-rank modifications distributed across layers and modules, enabling precise capability removal without affecting others. By training LoRA adapters to suppress the model's ability to distinguish correct from incorrect answers while preserving general language modeling quality, SCALPEL identifies low-rank representations responsible for particular capabilities while remaining disentangled from others. Experiments across diverse capability and linguistic tasks from BLiMP demonstrate that SCALPEL successfully removes target capabilities while preserving general capabilities, providing fine-grained insights into capability distribution across parameter space. Results reveal that capabilities exhibit low-rank structure and can be selectively ablated through targeted parameter-space interventions, offering nuanced understanding of capability encoding in LLMs.
|
https://arxiv.org/abs/2601.07411
|
Academic Papers
|
svg
|
f5d467b3e1aa02e8837815423b95cc6020afc91f534d373188767ce3bfedf4fd
|
2026-01-13T00:00:00-05:00
|
The Practicality of Normalizing Flow Test-Time Training in Bayesian Inference for Agent-Based Models
|
arXiv:2601.07413v1 Announce Type: new Abstract: Agent-Based Models (ABMs) are gaining great popularity in economics and social science because of their flexibility in describing realistic, heterogeneous decisions and interaction rules among individual agents. In this work, we investigate for the first time the practicality of test-time training (TTT) of deep models such as normalizing flows for posterior estimation of ABM parameters. We propose several practical TTT strategies for fine-tuning the normalizing flow against distribution shifts. Our numerical study demonstrates that TTT schemes are remarkably effective, enabling real-time adjustment of flow-based inference for ABM parameters.
|
https://arxiv.org/abs/2601.07413
|
Academic Papers
|
svg
|
6fbba68be4f57925e2f09070f74d704e471fcae33e817eb6e19321cef4c98cd9
|
2026-01-13T00:00:00-05:00
|
PLANET v2.0: A comprehensive Protein-Ligand Affinity Prediction Model Based on Mixture Density Network
|
arXiv:2601.07415v1 Announce Type: new Abstract: Drug discovery represents a time-consuming and financially intensive process, and virtual screening can accelerate it. Scoring functions, as one of the tools guiding virtual screening, have their precision closely tied to screening efficiency. In our previous study, we developed a graph neural network model called PLANET (Protein-Ligand Affinity prediction NETwork), but it suffers from a defect in representing protein-ligand contact maps. Incorrect binding modes inevitably lead to poor affinity predictions, so accurate prediction of the protein-ligand contact map is desired to improve PLANET. In this study, we have proposed PLANET v2.0 as an upgraded version. The model is trained via a multi-objective training strategy and incorporates the Mixture Density Network to predict binding modes. In addition to the probability density distributions of non-covalent interactions, we innovatively employ another Gaussian mixture model to describe the relationship between distance and energy of each interaction pair and predict protein-ligand affinity by calculating the mathematical expectation. On the CASF-2016 benchmark, PLANET v2.0 demonstrates excellent scoring power, ranking power, and docking power. The screening power of PLANET v2.0 is notably improved compared to PLANET and Glide SP, and it is robustly validated on a commercial ultra-large-scale dataset. Given its efficiency and accuracy, PLANET v2.0 can hopefully become one of the practical tools for virtual screening workflows. PLANET v2.0 is freely available at https://www.pdbbind-plus.org.cn/planetv2.
|
https://arxiv.org/abs/2601.07415
|
Academic Papers
|
svg
|
eb316ca2aa4586f96f44be30b2ffe59afd03f8ecd1ae230d147a0da5d8e252ff
|
2026-01-13T00:00:00-05:00
|
SDHSI-Net: Learning Better Representations for Hyperspectral Images via Self-Distillation
|
arXiv:2601.07416v1 Announce Type: new Abstract: Hyperspectral image (HSI) classification presents unique challenges due to its high spectral dimensionality and limited labeled data. Traditional deep learning models often suffer from overfitting and high computational costs. Self-distillation (SD), a variant of knowledge distillation where a network learns from its own predictions, has recently emerged as a promising strategy to enhance model performance without requiring external teacher networks. In this work, we explore the application of SD to HSI by treating earlier outputs as soft targets, thereby enforcing consistency between intermediate and final predictions. This process improves intra-class compactness and inter-class separability in the learned feature space. Our approach is validated on two benchmark HSI datasets and demonstrates significant improvements in classification accuracy and robustness, highlighting the effectiveness of SD for spectral-spatial learning. Codes are available at https://github.com/Prachet-Dev-Singh/SDHSI.
|
https://arxiv.org/abs/2601.07416
|
Academic Papers
|
svg
|
7d1950b372a084d96f0d074a0eaddc2f8ba2d31303821789b86d252391455f62
|
2026-01-13T00:00:00-05:00
|
Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations
|
arXiv:2601.07422v1 Announce Type: new Abstract: Despite their impressive capabilities, large language models (LLMs) frequently generate hallucinations. Previous work shows that their internal states encode rich signals of truthfulness, yet the origins and mechanisms of these signals remain unclear. In this paper, we demonstrate that truthfulness cues arise from two distinct information pathways: (1) a Question-Anchored pathway that depends on question-answer information flow, and (2) an Answer-Anchored pathway that derives self-contained evidence from the generated answer itself. First, we validate and disentangle these pathways through attention knockout and token patching. Afterwards, we uncover notable and intriguing properties of these two mechanisms. Further experiments reveal that (1) the two mechanisms are closely associated with LLM knowledge boundaries; and (2) internal representations are aware of their distinctions. Finally, building on these insightful findings, two applications are proposed to enhance hallucination detection performance. Overall, our work provides new insight into how LLMs internally encode truthfulness, offering directions for more reliable and self-aware generative systems.
|
https://arxiv.org/abs/2601.07422
|
Academic Papers
|
svg
|
fe48c3ddf43b72b0c64a9cc9dae2fdf901a2374fe5a3236127e71ac7ddd1b624
|
2026-01-13T00:00:00-05:00
|
SAD: A Large-Scale Strategic Argumentative Dialogue Dataset
|
arXiv:2601.07423v1 Announce Type: new Abstract: Argumentation generation has attracted substantial research interest due to its central role in human reasoning and decision-making. However, most existing argumentative corpora focus on non-interactive, single-turn settings, either generating arguments from a given topic or refuting an existing argument. In practice, however, argumentation is often realized as multi-turn dialogue, where speakers defend their stances and employ diverse argumentative strategies to strengthen persuasiveness. To support deeper modeling of argumentation dialogue, we present the first large-scale \textbf{S}trategic \textbf{A}rgumentative \textbf{D}ialogue dataset, SAD, consisting of 392,822 examples. Grounded in argumentation theories, we annotate each utterance with five strategy types, allowing multiple strategies per utterance. Unlike prior datasets, SAD requires models to generate contextually appropriate arguments conditioned on the dialogue history, a specified stance on the topic, and targeted argumentation strategies. We further benchmark a range of pretrained generative models on SAD and present in-depth analysis of strategy usage patterns in argumentation.
|
https://arxiv.org/abs/2601.07423
|
Academic Papers
|
svg
|
e93abeae1f67e452b928e50e830d191d8bf8831b61f036fe2b8e8969d14020fc
|
2026-01-13T00:00:00-05:00
|
Center-Fed Pinching Antenna System (C-PASS) Aided Wireless Communications
|
arXiv:2601.07424v1 Announce Type: new Abstract: The novel architecture of the center-fed pinching antenna system (C-PASS) is investigated, where the waveguide-fed signal is divided into two propagation directions through controllable power splitting. By doing so, a doubled degree of freedom (DoF) is achieved compared to conventional PASS. Based on the newly designed signal model of C-PASS, three practical operating protocols for C-PASS are proposed, namely power splitting (PS), direction switching (DS), and time switching (TS). Then, the sum-rate maximization problem for the joint optimization of transmit and pinching beamforming is formulated for each of the proposed protocols. 1) For PS, the highly coupled non-convex problem is first transformed into a tractable form via the weighted minimum mean square error reformulation and solved using the alternating optimization framework; 2) For DS, the above approach is subsequently extended to handle the mixed-integer constraints inherent to DS via a penalty-based algorithm; 3) For TS, the optimization problem can be decomposed into two subproblems and solved using similar iterative techniques, while its optimal time allocation ratio is derived in closed form. Finally, numerical results reveal that TS is superior in the low-power regime, while PS and DS achieve significantly higher rates in the high-power regime due to the enhanced DoF.
|
https://arxiv.org/abs/2601.07424
|
Academic Papers
|
svg
|
13e5cbc1b2505bd4eae101e5b86fcda6caa1f4fb1606d9360dedda46567ae756
|
2026-01-13T00:00:00-05:00
|
KALE: Enhancing Knowledge Manipulation in Large Language Models via Knowledge-aware Learning
|
arXiv:2601.07430v1 Announce Type: new Abstract: Despite the impressive performance of large language models (LLMs) pretrained on vast knowledge corpora, advancing their knowledge manipulation-the ability to effectively recall, reason, and transfer relevant knowledge-remains challenging. Existing methods mainly leverage Supervised Fine-Tuning (SFT) on labeled datasets to enhance LLMs' knowledge manipulation ability. However, we observe that SFT models still exhibit the known&incorrect phenomenon, where they explicitly possess relevant knowledge for a given question but fail to leverage it for correct answers. To address this challenge, we propose KALE (Knowledge-Aware LEarning)-a post-training framework that leverages knowledge graphs (KGs) to generate high-quality rationales and enhance LLMs' knowledge manipulation ability. Specifically, KALE first introduces a Knowledge-Induced (KI) data synthesis method that efficiently extracts multi-hop reasoning paths from KGs to generate high-quality rationales for question-answer pairs. Then, KALE employs a Knowledge-Aware (KA) fine-tuning paradigm that enhances knowledge manipulation by internalizing rationale-guided reasoning through minimizing the KL divergence between predictions with and without rationales. Extensive experiments on eight popular benchmarks across six different LLMs demonstrate the effectiveness of KALE, achieving accuracy improvements of up to 11.72% and an average of 4.18%.
|
https://arxiv.org/abs/2601.07430
|
Academic Papers
|
svg
|
6bdb3ce0d23f5ba7556ff8c67fc733546f3cb3a86e098b2190cd60f1512f4d9b
|
2026-01-13T00:00:00-05:00
|
LOONG: Online Time-Optimal Autonomous Flight for MAVs in Cluttered Environments
|
arXiv:2601.07434v1 Announce Type: new Abstract: Autonomous flight of micro air vehicles (MAVs) in unknown, cluttered environments remains challenging for time-critical missions due to conservative maneuvering strategies. This article presents an integrated planning and control framework for high-speed, time-optimal autonomous flight of MAVs in cluttered environments. In each replanning cycle (100 Hz), a time-optimal trajectory under a polynomial representation is generated as a reference, with the time-allocation process accelerated by imitation learning. Subsequently, a time-optimal model predictive contouring control (MPCC) incorporates safe flight corridor (SFC) constraints at variable horizon steps to enable aggressive yet safe maneuvering, while fully exploiting the MAV's dynamics. We validate the proposed framework extensively on a custom-built LiDAR-based MAV platform. Simulation results demonstrate superior aggressiveness compared to the state of the art, while real-world experiments achieve a peak speed of 18 m/s in a cluttered environment and succeed in 10 consecutive trials from diverse start points. The video is available at the following link: https://youtu.be/vexXXhv99oQ.
|
https://arxiv.org/abs/2601.07434
|
Academic Papers
|
svg
|
1cf27e18cc81378fe763167875b887750da6d8ebb240ac9f569ae3460fc99d0e
|
2026-01-13T00:00:00-05:00
|
Variational Autoencoder with Normalizing flow for X-ray spectral fitting
|
arXiv:2601.07440v1 Announce Type: new Abstract: Black hole X-ray binaries (BHBs) can be studied with spectral fitting to provide physical constraints on accretion in extreme gravitational environments. Traditional methods of spectral fitting such as Markov Chain Monte Carlo (MCMC) face limitations due to computational times. We introduce a probabilistic model, utilizing a variational autoencoder with a normalizing flow, trained to adopt a physical latent space. This neural network produces predictions for spectral-model parameters as well as their full probability distributions. Our implementations result in a significant improvement in spectral reconstructions over a previous deterministic model while performing three orders of magnitude faster than traditional methods.
|
https://arxiv.org/abs/2601.07440
|
Academic Papers
|
svg
|
d13e847c8f7782596bf961b8a63522858edb0708e6726cf3aacab8b9f9e2a532
|
2026-01-13T00:00:00-05:00
|
Surrogate-based Optimization via Clustering for Box-Constrained Problems
|
arXiv:2601.07442v1 Announce Type: new Abstract: Global optimization of large-scale, complex systems such as multi-physics black-box simulations and real-world industrial systems is important but challenging. This work presents a novel Surrogate-Based Optimization framework based on Clustering, SBOC, for global optimization of such systems, which can be used with any surrogate modeling technique. At each iteration, it uses a single surrogate model for the entire domain, employs k-means clustering to identify unexplored regions of the domain, and exploits a local region around the surrogate optimum to potentially add three new sample points in the domain. SBOC has been tested against sixteen promising benchmarking algorithms using 52 analytical test functions of varying input dimensionalities and shape profiles. It successfully identified a global minimum for most test functions with substantially lower computational effort than other algorithms. It worked especially well on test functions with four or more input variables. It was also among the top six algorithms in approaching a global minimum closely. Overall, SBOC is a robust, reliable, and efficient algorithm for global optimization of box-constrained systems.
|
https://arxiv.org/abs/2601.07442
|
Academic Papers
|
svg
|
1261ff204f87f446a03354946dab651c9a6e7c9046e3846dff2a6ae9a242ce38
|
2026-01-13T00:00:00-05:00
|
Formalization of Amicable Numbers Theory
|
arXiv:2601.07444v1 Announce Type: new Abstract: This paper presents a formalization of the theory of amicable numbers in the Lean~4 proof assistant. Two positive integers $m$ and $n$ are called an amicable pair if the sum of proper divisors of $m$ equals $n$ and the sum of proper divisors of $n$ equals $m$. Our formalization introduces the proper divisor sum function $s(n) = \sigma(n) - n$, defines the concepts of amicable pairs and amicable numbers, and computationally verifies historically famous amicable pairs. Furthermore, we formalize basic structural theorems, including symmetry, non-triviality, and connections to abundant/deficient numbers. A key contribution is the complete formal proof of the classical Th\={a}bit formula (9th century), using index-shifting and the \texttt{zify} tactic. Additionally, we provide complete formal proofs of both Th\={a}bit's rule and Euler's generalized rule (1747), two fundamental theorems for generating amicable pairs. A major achievement is the first complete formalization of the Borho-Hoffmann breeding method (1986), comprising 540 lines with 33 theorems and leveraging automated algebra tactics (\texttt{zify} and \texttt{ring}) to verify complex polynomial identities. We also formalize extensions including sociable numbers (aliquot cycles), betrothed numbers (quasi-amicable pairs), parity constraint theorems, and computational search bounds for coprime pairs ($>10^{65}$). We verify the smallest sociable cycle of length 5 (Poulet's cycle) and computationally verify specific instances. The formalization comprises 2076 lines of Lean code organized into Mathlib-candidate and paper-specific modules, with 139 theorems and all necessary infrastructure for divisor sum multiplicativity and coprimality reasoning.
|
https://arxiv.org/abs/2601.07444
|
Academic Papers
|
svg
|
6c4f1f300f631cfd4caf859f9d44ffc7af1e36c43582566163011f4bd9bfe4bd
|
2026-01-13T00:00:00-05:00
|
PanoSAMic: Panoramic Image Segmentation from SAM Feature Encoding and Dual View Fusion
|
arXiv:2601.07447v1 Announce Type: new Abstract: Existing image foundation models are not optimized for spherical images, having been trained primarily on perspective images. PanoSAMic leverages the pre-trained Segment Anything (SAM) encoder, making use of its extensive training by integrating it into a semantic segmentation model for panoramic images using multiple modalities. We modify the SAM encoder to output multi-stage features and introduce a novel spatio-modal fusion module that allows the model to select the relevant modalities and best features from each modality for different areas of the input. Furthermore, our semantic decoder uses spherical attention and dual view fusion to overcome the distortions and edge discontinuity often associated with panoramic images. PanoSAMic achieves state-of-the-art (SotA) results on Stanford2D3DS for RGB, RGB-D, and RGB-D-N modalities and on Matterport3D for RGB and RGB-D modalities. https://github.com/dfki-av/PanoSAMic
|
https://arxiv.org/abs/2601.07447
|
Academic Papers
|
svg
|
3045a6156274c9d6bc456ce161d17617d742ba8f28c4aecbfa7da58244a0e79a
|
2026-01-13T00:00:00-05:00
|
RLPO: Residual Listwise Preference Optimization for Long-Context Review Ranking
|
arXiv:2601.07449v1 Announce Type: new Abstract: Review ranking is pivotal in e-commerce for prioritizing diagnostic and authentic feedback from the deluge of user-generated content. While large language models have improved semantic assessment, existing ranking paradigms face a persistent trade-off in long-context settings. Pointwise scoring is efficient but often fails to account for list-level interactions, leading to miscalibrated top-$k$ rankings. Listwise approaches can leverage global context, yet they are computationally expensive and become unstable as candidate lists grow. To address this, we propose Residual Listwise Preference Optimization (RLPO), which formulates ranking as listwise representation-level residual correction over a strong pointwise LLM scorer. RLPO first produces calibrated pointwise scores and item representations, then applies a lightweight encoder over the representations to predict listwise score residuals, avoiding full token-level listwise processing. We also introduce a large-scale benchmark for long-context review ranking with human verification. Experiments show RLPO improves NDCG@k over strong pointwise and listwise baselines and remains robust as list length increases.
|
https://arxiv.org/abs/2601.07449
|
Academic Papers
|
svg
|
57b52c2e8e33f342d3e9356da950a25bee58da169b14223ddf4a18715bb39884
|
2026-01-13T00:00:00-05:00
|
Building Faculty Expertise Ontology using Protege: Enhancing Academic Library Research Services
|
arXiv:2601.07451v1 Announce Type: new Abstract: Academic libraries struggle to find and access faculty expertise across disciplines. This research proposes a faculty expertise ontology with a hierarchical structure, built in Prot\'eg\'e, to enhance library services and knowledge organisation. The ontology classifies relationships between departments, subject areas, faculty members, and contact data into Top, Middle, and Bottom layers. This tiered form mirrors the academic structure and enables discovery of expertise across departments. The ontology can answer real-world questions generated from subject-matter experts, such as which faculty work in specific areas, how to collaborate with other disciplines, and how to find contact information. Competency questions act as design and test instruments to show that the ontology fulfils the information needs of researchers, librarians, and administrators. The ontology is able to cope with semantically-enhanced queries, as shown by SPARQL implementations. The model works effectively in initiating referrals to an expert, aligning research with departmental strengths, and enabling academic partnerships. The ontology delivers a scalable platform that adapts to institutional change. In the future, we intend to integrate with institutional databases and library systems for automatic API updates, as well as develop user interfaces and visualisations.
|
https://arxiv.org/abs/2601.07451
|
Academic Papers
|
svg
|
1794e959d1f2ec9dcf121b6deef16a6ae951aedf64ae72b7d7c35dee891541fc
|
2026-01-13T00:00:00-05:00
|
WaveMan: mmWave-Based Room-Scale Human Interaction Perception for Humanoid Robots
|
arXiv:2601.07454v1 Announce Type: new Abstract: Reliable humanoid-robot interaction (HRI) in household environments is constrained by two fundamental requirements, namely robustness to unconstrained user positions and preservation of user privacy. Millimeter-wave (mmWave) sensing inherently supports privacy-preserving interaction, making it a promising modality for room-scale HRI. However, existing mmWave-based interaction-sensing systems exhibit poor spatial generalization at unseen distances or viewpoints. To address this challenge, we introduce WaveMan, a spatially adaptive room-scale perception system that restores reliable human interaction sensing across arbitrary user positions. WaveMan integrates viewpoint alignment and spectrogram enhancement for spatial consistency, with dual-channel attention for robust feature extraction. Experiments across five participants show that, under fixed-position evaluation, WaveMan achieves the same cross-position accuracy as the baseline with five times fewer training positions. In random free-position testing, accuracy increases from 33.00% to 94.33%, enabled by the proposed method. These results demonstrate the feasibility of reliable, privacy-preserving interaction for household humanoid robots across unconstrained user positions.
|
https://arxiv.org/abs/2601.07454
|
Academic Papers
|
svg
|
f1f6a54e5bf8570a3aa396428270e3e7de44dec17c3ebeccc37ec154822e4ef7
|
2026-01-13T00:00:00-05:00
|
TriCG with deflated restarting for symmetric quasi-definite linear systems
|
arXiv:2601.07455v1 Announce Type: new Abstract: TriCG is a short-recurrence iterative method recently introduced by Montoison and Orban [SIAM J. Sci. Comput., 43 (2021), pp. A2502--A2525] for solving symmetric quasi-definite (SQD) linear systems. TriCG takes advantage of the inherent block structure of SQD linear systems and performs substantially better than SYMMLQ. However, numerical experiments have revealed that the convergence of TriCG can be notably slow when the off-diagonal block contains a substantial number of large elliptic singular values. To address this limitation, we introduce a deflation strategy tailored for TriCG to improve its convergence behavior. Specifically, we develop a generalized Saunders--Simon--Yip process with deflated restarting to construct the deflation subspaces. Building upon this process, we propose a novel method termed TriCG with deflated restarting. The deflation subspaces can also be utilized to solve SQD linear systems with multiple right-hand sides. Numerical experiments are provided to illustrate the superior performance of the proposed methods.
|
https://arxiv.org/abs/2601.07455
|
Academic Papers
|
svg
|
1bd51c5408738d9a42066a145721c6f22c92066ec1e271daf15f8323e4d296fb
|
2026-01-13T00:00:00-05:00
|
Improving Video Question Answering through query-based frame selection
|
arXiv:2601.07459v1 Announce Type: new Abstract: Video Question Answering (VideoQA) models enhance understanding and interaction with audiovisual content, making it more accessible, searchable, and useful for a wide range of fields such as education, surveillance, entertainment, and content creation. Due to heavy compute requirements, most large visual language models (VLMs) for VideoQA rely on a fixed number of frames obtained by uniformly sampling the video. However, this process does not pick important frames or capture the context of the video. We present a novel query-based selection of frames relevant to the questions based on submodular mutual information (SMI) functions. By replacing uniform frame sampling with query-based selection, our method ensures that the chosen frames provide complementary and essential visual information for accurate VideoQA. We evaluate our approach on the MVBench dataset, which spans a diverse set of multi-action video tasks. VideoQA accuracy on this dataset was assessed using two VLMs, namely Video-LLaVA and LLaVA-NeXT, both of which originally employed uniform frame sampling. Experiments were conducted using both uniform and query-based sampling strategies. An accuracy improvement of up to \textbf{4\%} was observed when using query-based frame selection over uniform sampling. Qualitative analysis further highlights that query-based selection, using SMI functions, consistently picks frames better aligned with the question. We opine that such query-based frame selection can enhance accuracy in a wide range of tasks that rely on only a subset of video frames.
|
https://arxiv.org/abs/2601.07459
|
Academic Papers
|
svg
|
434b92d2cc0a01091459dc8047acfa6178d4a1afa992a725559e7c209bd477d3
|
2026-01-13T00:00:00-05:00
|
From Sketch to Fresco: Efficient Diffusion Transformer with Progressive Resolution
|
arXiv:2601.07462v1 Announce Type: new Abstract: Diffusion Transformers achieve impressive generative quality but remain computationally expensive due to iterative sampling. Recently, dynamic resolution sampling has emerged as a promising acceleration technique by reducing the resolution of early sampling steps. However, existing methods rely on heuristic re-noising at every resolution transition, injecting noise that breaks cross-stage consistency and forces the model to relearn global structure. In addition, these methods indiscriminately upsample the entire latent space at once without checking which regions have actually converged, causing accumulated errors, and visible artifacts. Therefore, we propose \textbf{Fresco}, a dynamic resolution framework that unifies re-noise and global structure across stages with progressive upsampling, preserving both the efficiency of low-resolution drafting and the fidelity of high-resolution refinement, with all stages aligned toward the same final target. Fresco achieves near-lossless acceleration across diverse domains and models, including 10$\times$ speedup on FLUX, and 5$\times$ on HunyuanVideo, while remaining orthogonal to distillation, quantization and feature caching, reaching 22$\times$ speedup when combined with distilled models. Our code is in supplementary material and will be released on Github.
|
https://arxiv.org/abs/2601.07462
|
Academic Papers
|
svg
|
43f83654b19241223688916b66c1b78d7361582707011efb158ce11970edcfeb
|
2026-01-13T00:00:00-05:00
|
Puzzle it Out: Local-to-Global World Model for Offline Multi-Agent Reinforcement Learning
|
arXiv:2601.07463v1 Announce Type: new Abstract: Offline multi-agent reinforcement learning (MARL) aims to solve cooperative decision-making problems in multi-agent systems using pre-collected datasets. Existing offline MARL methods primarily constrain training within the dataset distribution, resulting in overly conservative policies that struggle to generalize beyond the support of the data. While model-based approaches offer a promising solution by expanding the original dataset with synthetic data generated from a learned world model, the high dimensionality, non-stationarity, and complexity of multi-agent systems make it challenging to accurately estimate the transitions and reward functions in offline MARL. Given the difficulty of directly modeling joint dynamics, we propose a local-to-global (LOGO) world model, a novel framework that leverages local predictions-which are easier to estimate-to infer global state dynamics, thus improving prediction accuracy while implicitly capturing agent-wise dependencies. Using the trained world model, we generate synthetic data to augment the original dataset, expanding the effective state-action space. To ensure reliable policy learning, we further introduce an uncertainty-aware sampling mechanism that adaptively weights synthetic data by prediction uncertainty, reducing approximation error propagation to policies. In contrast to conventional ensemble-based methods, our approach requires only an additional encoder for uncertainty estimation, significantly reducing computational overhead while maintaining accuracy. Extensive experiments across 8 scenarios against 8 baselines demonstrate that our method surpasses state-of-the-art baselines on standard offline MARL benchmarks, establishing a new model-based baseline for generalizable offline multi-agent learning.
|
https://arxiv.org/abs/2601.07463
|
Academic Papers
|
svg
|
b7748c876455d002bc08f37e9bc4972a39d0929d06cf54671581aac27b6440cc
|
2026-01-13T00:00:00-05:00
|
IFDNS: An Iterative Feedback-Driven Neuro-Symbolic Method for Faithful Logical Reasoning
|
arXiv:2601.07464v1 Announce Type: new Abstract: Large language models (LLMs) have demonstrated impressive capabilities across a wide range of reasoning tasks, including logical and mathematical problem-solving. While prompt-based methods like Chain-of-Thought (CoT) can enhance LLM reasoning abilities to some extent, they often suffer from a lack of faithfulness, where the derived conclusions may not align with the generated reasoning chain. To address this issue, researchers have explored neuro-symbolic approaches to bolster LLM logical reasoning capabilities. However, existing neuro-symbolic methods still face challenges with information loss during the process. To overcome these limitations, we introduce Iterative Feedback-Driven Neuro-Symbolic (IFDNS), a novel prompt-based method that employs a multi-round feedback mechanism to address LLM limitations in handling complex logical relationships. IFDNS utilizes iterative feedback during the logic extraction phase to accurately extract causal relationship statements and translate them into propositional and logical implication expressions, effectively mitigating information loss issues. Furthermore, IFDNS is orthogonal to existing prompt methods, allowing for seamless integration with various prompting approaches. Empirical evaluations across six datasets demonstrate the effectiveness of IFDNS in significantly improving the performance of CoT and Chain-of-Thought with Self-Consistency (CoT-SC). Specifically, IFDNS achieves a +9.40% accuracy boost for CoT on the LogiQA dataset and a +11.70% improvement for CoT-SC on the PrOntoQA dataset.
|
https://arxiv.org/abs/2601.07464
|
Academic Papers
|
svg
|
4e5f2bbca6da09dc31dba8ab2c8708501c143c568cf6cd41409b3d855dea091c
|
2026-01-13T00:00:00-05:00
|
A Scalable Solution for Node Mobility Problems in NDN-Based Massive LEO Constellations
|
arXiv:2601.07466v1 Announce Type: new Abstract: In recent years, there has been increasing investment in the deployment of massive commercial Low Earth Orbit (LEO) constellations to provide global Internet connectivity. These constellations, now equipped with inter-satellite links, can serve as low-latency Internet backbones, requiring LEO satellites to act not only as access nodes for ground stations, but also as in-orbit core routers. Due to their high velocity and the resulting frequent handovers of ground gateways, LEO networks highly stress mobility procedures at both the sender and receiver endpoints. On the other hand, a growing trend in networking is the use of technologies based on the Information Centric Networking (ICN) paradigm for servicing IoT networks and sensor networks in general, as its addressing, storage, and security mechanisms are usually a good match for IoT needs. Furthermore, ICN networks possess additional characteristics that are beneficial for the massive LEO scenario. For instance, the mobility of the receiver is helped by the inherent data-forwarding procedures in their architectures. However, the mobility of the senders remains an open problem. This paper proposes a comprehensive solution to the mobility problem for massive LEO constellations using the Named-Data Networking (NDN) architecture, as it is probably the most mature ICN proposal. Our solution includes a scalable method to relate content to ground gateways and a way to address traffic to the gateway that does not require cooperation from the network routing algorithm. Moreover, our solution works without requiring modifications to the actual NDN protocol itself, so it is easy to test and deploy. Our results indicate that, for long enough handover lengths, traffic losses are negligible even for ground stations with just one satellite in sight.
|
https://arxiv.org/abs/2601.07466
|
Academic Papers
|
svg
|
fb68030d40a6676437e9e26ef11d385b996b1d7d778d2fad695d1773e2938362
|
2026-01-13T00:00:00-05:00
|
Beyond Dialogue Time: Temporal Semantic Memory for Personalized LLM Agents
|
arXiv:2601.07468v1 Announce Type: new Abstract: Memory enables Large Language Model (LLM) agents to perceive, store, and use information from past dialogues, which is essential for personalization. However, existing methods fail to properly model the temporal dimension of memory in two aspects: 1) Temporal inaccuracy: memories are organized by dialogue time rather than their actual occurrence time; 2) Temporal fragmentation: existing methods focus on point-wise memory, losing durative information that captures persistent states and evolving patterns. To address these limitations, we propose Temporal Semantic Memory (TSM), a memory framework that models semantic time for point-wise memory and supports the construction and utilization of durative memory. During memory construction, it first builds a semantic timeline rather than a dialogue one. Then, it consolidates temporally continuous and semantically related information into a durative memory. During memory utilization, it incorporates the query's temporal intent on the semantic timeline, enabling the retrieval of temporally appropriate durative memories and providing time-valid, duration-consistent context to support response generation. Experiments on LongMemEval and LoCoMo show that TSM consistently outperforms existing methods and achieves up to 12.2% absolute improvement in accuracy, demonstrating the effectiveness of the proposed method.
|
https://arxiv.org/abs/2601.07468
|
Academic Papers
|
svg
|
17dc3d88d022619c065f9b9e94df578916649c4ec8f8132b7f13d154e2b921b4
|
2026-01-13T00:00:00-05:00
|
Knowledge Distillation for LLM-Based Human Activity Recognition in Homes
|
arXiv:2601.07469v1 Announce Type: new Abstract: Human Activity Recognition (HAR) is a central problem for context-aware applications, especially for smart homes and assisted living. A few very recent studies have shown that Large Language Models (LLMs) can be used for HAR at home, reaching high performance and addressing key challenges. In this paper, we provide new experimental results regarding the use of LLMs for HAR, on two state-of-the-art datasets. More specifically, we show how recognition performance evolves depending on the size of the LLM used. Moreover, we experiment with the use of knowledge distillation techniques to fine-tune smaller LLMs with HAR reasoning examples generated by larger LLMs. We show that such fine-tuned models can perform almost as well as the largest LLMs, while having 50 times fewer parameters.
|
https://arxiv.org/abs/2601.07469
|
Academic Papers
|
svg
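The distillation setup summarized above (a small student fine-tuned on examples from a larger teacher) builds on the classic knowledge-distillation recipe. The paper's exact training objective is not given in the abstract, so the sketch below shows only the standard temperature-softened distillation loss; the function names and temperature value are illustrative choices.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return T * T * kl
```

With identical logits the loss is exactly zero, and the T^2 factor keeps gradient magnitudes comparable across temperatures.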
|
5cfe7b8907af2f18c574186f83324ce1d8e4833f03415ea0bb1dad02db9202d5
|
2026-01-13T00:00:00-05:00
|
Learning How to Remember: A Meta-Cognitive Management Method for Structured and Transferable Agent Memory
|
arXiv:2601.07470v1 Announce Type: new Abstract: Large language model (LLM) agents increasingly rely on accumulated memory to solve long-horizon decision-making tasks. However, most existing approaches store memory in fixed representations and reuse it at a single or implicit level of abstraction, which limits generalization and often leads to negative transfer under distribution shift. This paper proposes the Meta-Cognitive Memory Abstraction method (MCMA), which treats memory abstraction as a learnable cognitive skill rather than a fixed design choice. MCMA decouples task execution from memory management by combining a frozen task model with a learned memory copilot. The memory copilot, trained using direct preference optimization, determines how memories should be structured, abstracted, and reused. Memories are further organized into a hierarchy of abstraction levels, enabling selective reuse based on task similarity. When no memory is transferable, MCMA transfers the ability to abstract and manage memory by transferring the memory copilot itself. Experiments on ALFWorld, ScienceWorld, and BabyAI demonstrate substantial improvements in performance, out-of-distribution generalization, and cross-task transfer over several baselines.
|
https://arxiv.org/abs/2601.07470
|
Academic Papers
|
svg
|
bd8863cf5cb80672809eb664b11551dce5d32028c208ab4ba0aae956b15e1ff6
|
2026-01-13T00:00:00-05:00
|
Secure Joint Source-Channel Coding for the AWGN Channel with Feedback: A Finite Blocklength Analysis
|
arXiv:2601.07472v1 Announce Type: new Abstract: In the literature, it has been shown that the secrecy capacity of the additive white Gaussian noise (AWGN) wiretap channel with noise-free feedback equals the capacity of the same model without a secrecy constraint, and the classical Schalkwijk-Kailath (SK) scheme achieves the secrecy capacity. In this paper, we show that in the finite blocklength regime, the SK scheme is not optimal, and we propose a modified SK scheme that may perform better than the classical one. In addition, this paper establishes a finite blocklength converse for the AWGN wiretap channel with feedback, which can also be viewed as a converse for the same model without a secrecy constraint. To the best of the authors' knowledge, this is the first paper to address such a problem, and the results are further explained via numerical examples.
|
https://arxiv.org/abs/2601.07472
|
Academic Papers
|
svg
|
42fa67b0be40a277fc5dce534a3b9036be549bbbd5d960166a85ab756365d67a
|
2026-01-13T00:00:00-05:00
|
AntiPaSTO: Self-Supervised Steering of Moral Reasoning
|
arXiv:2601.07473v1 Announce Type: new Abstract: As models grow more capable, human supervision breaks down: labels don't scale, outputs can be gamed, and training doesn't generalize. Scalable oversight requires steering methods that are internal, self-supervised, and transfer out-of-distribution; existing methods satisfy some but not all three. We introduce AntiPaSTO, which separates representations along an anti-parallel axis ($\alpha=\pm1$ produce opposite shifts), with coherence constraints preventing collapse. Human input is minimal: two contrasting words inserted into template sentences, no preference labels. Using 800 such pairs on Gemma-3-1B, AntiPaSTO beats prompting baselines by $6.9\times$ on DailyDilemmas and maintains bidirectional control where prompting triggers refusal. Code is available at https://github.com/wassname/AntiPaSTO.
|
https://arxiv.org/abs/2601.07473
|
Academic Papers
|
svg
|
8f198b02009240bfe64251809828eeedeadae1599ac6d7c4db62f605296807cb
|
2026-01-13T00:00:00-05:00
|
Task Prototype-Based Knowledge Retrieval for Multi-Task Learning from Partially Annotated Data
|
arXiv:2601.07474v1 Announce Type: new Abstract: Multi-task learning (MTL) is critical in real-world applications such as autonomous driving and robotics, enabling simultaneous handling of diverse tasks. However, obtaining fully annotated data for all tasks is impractical due to labeling costs. Existing methods for partially labeled MTL typically rely on predictions from unlabeled tasks, making it difficult to establish reliable task associations and potentially leading to negative transfer and suboptimal performance. To address these issues, we propose a prototype-based knowledge retrieval framework that achieves robust MTL instead of relying on predictions from unlabeled tasks. Our framework consists of two key components: (1) a task prototype embedding task-specific characteristics and quantifying task associations, and (2) a knowledge retrieval transformer that adaptively refines feature representations based on these associations. To achieve this, we introduce an association knowledge generating (AKG) loss to ensure the task prototype consistently captures task-specific characteristics. Extensive experiments demonstrate the effectiveness of our framework, highlighting its potential for robust multi-task learning, even when only a subset of tasks is annotated.
|
https://arxiv.org/abs/2601.07474
|
Academic Papers
|
svg
|
0b092c4f40290b3298596ffedc3de751fce30fc8a0b19bc8518080e48f76ce63
|
2026-01-13T00:00:00-05:00
|
ARCQuant: Boosting NVFP4 Quantization with Augmented Residual Channels for LLMs
|
arXiv:2601.07475v1 Announce Type: new Abstract: The emergence of fine-grained numerical formats like NVFP4 presents new opportunities for efficient Large Language Model (LLM) inference. However, it is difficult to adapt existing Post-Training Quantization (PTQ) strategies to these formats: rotation-based methods compromise fine-grained block isolation; smoothing techniques struggle with significant 4-bit quantization errors; and mixed-precision approaches often conflict with hardware constraints on unified-precision computation. To address these challenges, we propose ARCQuant, a framework that boosts NVFP4 performance via Augmented Residual Channels. Distinct from methods that compromise block isolation or hardware uniformity, ARCQuant maintains a strictly unified NVFP4 format by augmenting the activation matrix with quantized residual channels. This design integrates the error compensation process directly into the matrix reduction dimension, enabling the use of standard, highly optimized GEMM kernels with minimal overhead. Theoretical analysis confirms that the worst-case error bound of our dual-stage NVFP4 quantization is comparable to that of standard 8-bit formats such as MXFP8. Extensive experiments on LLaMA and Qwen models demonstrate that ARCQuant achieves state-of-the-art accuracy, comparable to full-precision baselines in perplexity and downstream tasks. Furthermore, deployment on RTX 5090 and RTX PRO 6000 GPUs confirms practical benefits, achieving up to 3x speedup over FP16. Our code is available at https://github.com/actypedef/ARCQuant .
|
https://arxiv.org/abs/2601.07475
|
Academic Papers
|
svg
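The residual-channel idea above, compensating the error of a coarse quantizer with a second, finer quantized correction, can be illustrated with a plain uniform scalar quantizer. This is a generic dual-stage sketch, not the NVFP4 block format or ARCQuant's channel augmentation; the step sizes are illustrative assumptions.

```python
def quantize(x, step):
    """Round x to the nearest multiple of `step` (a uniform scalar grid)."""
    return round(x / step) * step

def residual_quantize(x, step_main, step_res):
    """Dual-stage quantization: quantize x on a coarse grid, then
    quantize the leftover residual on a finer grid and add it back
    as a correction term."""
    q_main = quantize(x, step_main)
    residual = x - q_main
    q_res = quantize(residual, step_res)
    return q_main + q_res
```

The reconstruction error is now bounded by half the finer step, which loosely mirrors the paper's claim that dual-stage 4-bit quantization can approach the worst-case error of standard 8-bit formats.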
|
56ac3b9225abc37cb6967e15f21602ad6088f30a32703c7e6efc72ca3c216071
|
2026-01-13T00:00:00-05:00
|
NanoCockpit: Performance-optimized Application Framework for AI-based Autonomous Nanorobotics
|
arXiv:2601.07476v1 Announce Type: new Abstract: Autonomous nano-drones, powered by vision-based tiny machine learning (TinyML) models, are a novel technology gaining momentum thanks to their broad applicability, pushing scientific advancement on resource-limited embedded systems. Their small form factor, i.e., a few tens of grams, severely limits their onboard computational resources to sub-100 mW microcontroller units (MCUs). The Bitcraze Crazyflie nano-drone is the de facto standard, offering a rich set of programmable MCUs for low-level control, multi-core processing, and radio transmission. However, roboticists very often underutilize these precious onboard resources due to the absence of a simple yet efficient software layer capable of time-optimal pipelining of multi-buffer image acquisition, multi-core computation, intra-MCU data exchange, and Wi-Fi streaming, leading to sub-optimal control performance. Our NanoCockpit framework aims to fill this gap, increasing throughput and minimizing the system's latency, while simplifying the developer experience through coroutine-based multi-tasking. In-field experiments on three real-world TinyML nanorobotics applications show our framework achieves ideal end-to-end latency, i.e. zero overhead due to serialized tasks, delivering quantifiable improvements in closed-loop control performance (-30% mean position error, mission success rate increased from 40% to 100%).
|
https://arxiv.org/abs/2601.07476
|
Academic Papers
|
svg
|
ef86b8b8035f20c28a1d97658945cf37767373f5b627764ebf761ee12c329c99
|
2026-01-13T00:00:00-05:00
|
JudgeFlow: Agentic Workflow Optimization via Block Judge
|
arXiv:2601.07477v1 Announce Type: new Abstract: Optimizing LLM-based agentic workflows is challenging for scaling AI capabilities. Current methods rely on coarse, end-to-end evaluation signals and lack fine-grained signals on where to refine, often resulting in inefficient or low-impact modifications. To address these limitations, we propose JudgeFlow, an Evaluation-Judge-Optimization-Update pipeline. We incorporate reusable, configurable logic blocks into agentic workflows to capture fundamental forms of logic. On top of this abstraction, we design a dedicated Judge module that inspects execution traces, particularly failed runs, and assigns rank-based responsibility scores to problematic blocks. These fine-grained diagnostic signals are then leveraged by an LLM-based optimizer, which focuses modifications on the most problematic block in the workflow. Our approach improves sample efficiency, enhances interpretability through block-level diagnostics, and provides a scalable foundation for automating increasingly complex agentic workflows. We evaluate JudgeFlow on mathematical reasoning and code generation benchmarks, where JudgeFlow achieves superior performance and efficiency compared to existing methods. The source code is publicly available at https://github.com/ma-zihan/JudgeFlow.
|
https://arxiv.org/abs/2601.07477
|
Academic Papers
|
svg
|
52d911eb4b83ce590b39bbdba5d45ad54cb2d74491e93e5b05f152f0584cc357
|
2026-01-13T00:00:00-05:00
|
Derivative-free discrete gradient methods
|
arXiv:2601.07479v1 Announce Type: new Abstract: Discrete gradient methods are a class of numerical integrators producing solutions with exact preservation of first integrals of ordinary differential equations. In this paper, we apply order theory combined with the symmetrized Itoh--Abe discrete gradient and finite differences to construct an integral-preserving fourth-order method that is derivative-free. The numerical scheme is implicit and a convergence result for Newton's iterations is provided, taking into account how the error due to the finite difference approximations affects the convergence rate. Numerical experiments verify the order and show that the derivative-free method is significantly faster than obtaining derivatives by automatic differentiation. Finally, an experiment using topographic data as the potential function of a Hamiltonian oscillator demonstrates how this method allows the simulation of discrete-time dynamics from a Hamiltonian that is a combination of data and analytical expressions.
|
https://arxiv.org/abs/2601.07479
|
Academic Papers
|
svg
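To make the discrete-gradient idea above concrete: the plain (unsymmetrized, first-order) Itoh-Abe discrete gradient replaces partial derivatives with coordinate-wise divided differences, giving a derivative-free implicit scheme that conserves the Hamiltonian exactly, up to the tolerance of the inner iteration. The sketch below covers only a single-degree-of-freedom canonical system; the paper's symmetrized fourth-order construction and Newton-convergence analysis are more involved, and the guard value and iteration count here are illustrative.

```python
def itoh_abe_step(H, q, p, h, iters=60):
    """One step of the first-order Itoh-Abe discrete gradient scheme for
    the canonical system dq/dt = dH/dp, dp/dt = -dH/dq.  Divided
    differences replace derivatives (derivative-free), and the update
    conserves H exactly at the fixed point of the inner iteration."""
    qn, pn = q + h, p + h  # any initial guess with qn != q, pn != p
    for _ in range(iters):
        dq = (qn - q) or 1e-16  # guard against exact-zero denominators
        dp = (pn - p) or 1e-16
        d1 = (H(qn, p) - H(q, p)) / dq    # divided difference in q
        d2 = (H(qn, pn) - H(qn, p)) / dp  # divided difference in p
        qn, pn = q + h * d2, p - h * d1   # implicit update, iterated
    return qn, pn
```

Exact conservation follows algebraically: H(qn, pn) - H(q, p) = d1*(qn - q) + d2*(pn - p) = d1*(h*d2) + d2*(-h*d1) = 0.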
|
ee50bfc238bd94d4a77896e4fd06e91329bd2a14cec4f30cec64ab9d1dd02cbb
|
2026-01-13T00:00:00-05:00
|
The Secretary Problem with Predictions and a Chosen Order
|
arXiv:2601.07482v1 Announce Type: new Abstract: We study a learning-augmented variant of the secretary problem, recently introduced by Fujii and Yoshida (2023), in which the decision-maker has access to machine-learned predictions of candidate values. The central challenge is to balance consistency and robustness: when predictions are accurate, the algorithm should select a near-optimal secretary, while under inaccurate predictions it should still guarantee a bounded competitive ratio. We consider both the classical Random Order Secretary Problem (ROSP), where candidates arrive in a uniformly random order, and a more natural learning-augmented model in which the decision-maker may choose the arrival order based on predicted values. We call this model the Chosen Order Secretary Problem (COSP), capturing scenarios such as interview schedules set in advance. We propose a new randomized algorithm applicable to both ROSP and COSP. Our method switches from fully trusting predictions to a threshold-based rule once a large prediction deviation is detected. Let $\epsilon \in [0,1]$ denote the maximum multiplicative prediction error. For ROSP, our algorithm achieves a competitive ratio of $\max\{0.221, (1-\epsilon)/(1+\epsilon)\}$, improving upon the prior bound of $\max\{0.215, (1-\epsilon)/(1+\epsilon)\}$. For COSP, we achieve $\max\{0.262, (1-\epsilon)/(1+\epsilon)\}$, surpassing the $0.25$ worst-case bound for prior approaches and moving closer to the classical secretary benchmark of $1/e \approx 0.368$. These results highlight the benefit of combining predictions with arrival-order control in online decision-making.
|
https://arxiv.org/abs/2601.07482
|
Academic Papers
|
svg
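The classical 1/e benchmark mentioned at the end of the abstract comes from the standard observe-then-commit rule: reject the first ~n/e candidates, then accept the first one better than everything seen so far. A quick Monte Carlo check of that prediction-free baseline (sample sizes and seed are arbitrary choices):

```python
import math
import random

def classic_secretary(values, k):
    """Observe the first k candidates, then accept the first later
    candidate that beats them (forced to take the last one otherwise).
    Returns True if the overall best candidate was selected."""
    best_seen = max(values[:k]) if k else float("-inf")
    for i in range(k, len(values)):
        if values[i] > best_seen:
            return values[i] == max(values)
    return values[-1] == max(values)

def success_rate(n=50, trials=20000, seed=0):
    rng = random.Random(seed)
    k = round(n / math.e)  # classical cutoff: observe ~n/e candidates
    wins = sum(classic_secretary([rng.random() for _ in range(n)], k)
               for _ in range(trials))
    return wins / trials
```

For n=50 the empirical rate comes out near 1/e (approximately 0.368), the benchmark the paper's learning-augmented guarantees of max{0.262, (1-epsilon)/(1+epsilon)} are measured against.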
|
19067fb3d4fc9123017b87103346c1b732e3b08013841d29a65151025e3e3c0d
|
2026-01-13T00:00:00-05:00
|
FocalOrder: Focal Preference Optimization for Reading Order Detection
|
arXiv:2601.07483v1 Announce Type: new Abstract: Reading order detection is the foundation of document understanding. Most existing methods rely on uniform supervision, implicitly assuming a constant difficulty distribution across layout regions. In this work, we challenge this assumption by revealing a critical flaw: Positional Disparity, a phenomenon where models demonstrate mastery over the deterministic start and end regions but suffer a performance collapse in the complex intermediate sections. This degradation arises because standard training allows the massive volume of easy patterns to drown out the learning signals from difficult layouts. To address this, we propose FocalOrder, a framework driven by Focal Preference Optimization (FPO). Specifically, FocalOrder employs adaptive difficulty discovery with an exponential moving average mechanism to dynamically pinpoint hard-to-learn transitions, while introducing a difficulty-calibrated pairwise ranking objective to enforce global logical consistency. Extensive experiments demonstrate that FocalOrder establishes new state-of-the-art results on OmniDocBench v1.0 and Comp-HRDoc. Our compact model not only outperforms competitive specialized baselines but also significantly surpasses large-scale general VLMs. These results demonstrate that aligning the optimization with the intrinsic structural ambiguity of documents is critical for mastering complex document structures.
|
https://arxiv.org/abs/2601.07483
|
Academic Papers
|
svg
|
b38d31cfb389cbb121cb55a29d477a22f0d4569a9f00f0af06ff92c529eeba7b
|
2026-01-13T00:00:00-05:00
|
R3-RECON: Radiance-Field-Free Active Reconstruction via Renderability
|
arXiv:2601.07484v1 Announce Type: new Abstract: In active reconstruction, an embodied agent must decide where to look next to efficiently acquire views that support high-quality novel-view rendering. Recent work on active view planning for neural rendering largely derives next-best-view (NBV) criteria by backpropagating through radiance fields or estimating information entropy over 3D Gaussian primitives. While effective, these strategies tightly couple view selection to heavy, representation-specific mechanisms and fail to account for the computational and resource constraints required for lightweight online deployment. In this paper, we revisit active reconstruction from a renderability-centric perspective. We propose $\mathbb{R}^{3}$-RECON, a radiance-fields-free active reconstruction framework that induces an implicit, pose-conditioned renderability field over SE(3) from a lightweight voxel map. Our formulation aggregates per-voxel online observation statistics into a unified scalar renderability score that is cheap to update and can be queried in closed form at arbitrary candidate viewpoints in milliseconds, without requiring gradients or radiance-field training. This renderability field is strongly correlated with image-space reconstruction error, naturally guiding NBV selection. We further introduce a panoramic extension that estimates omnidirectional (360$^\circ$) view utility to accelerate candidate evaluation. In the standard indoor Replica dataset, $\mathbb{R}^{3}$-RECON achieves more uniform novel-view quality and higher 3D Gaussian splatting (3DGS) reconstruction accuracy than recent active GS baselines with matched view and time budgets.
|
https://arxiv.org/abs/2601.07484
|
Academic Papers
|
svg
|
585d96a82c64e2cc6c20641f20af85e397f7b4cd8fc1458d57f6475929397ba0
|
2026-01-13T00:00:00-05:00
|
Frequency-Adaptive Multi-Band Architecture for Upper Mid-Band MIMO Systems
|
arXiv:2601.07489v1 Announce Type: new Abstract: FR3 ($\approx$7-24 GHz), also referred to as the upper mid-band, has recently emerged as promising spectrum for 6G; however, its propagation and MIMO characteristics vary significantly with frequency and environment, and spectrum availability may be intermittent due to incumbents. Using site-specific ray tracing (Sionna RT) in representative indoor and outdoor scenarios, we evaluate 7, 10, 14, 20, and 24 GHz under SISO and MIMO configurations. The results show that FR3 exhibits propagation characteristics intermediate between sub-6 GHz and mmWave bands while supporting meaningful spatial multiplexing, albeit with strong site dependence. Motivated by these findings, we propose a fully digital frequency-adaptive multi-band MIMO architecture that repurposes ADCs/DACs and baseband processing resources across FR3 subbands via switching, enabling dynamic trade-offs between bandwidth (spectrum gain) and antenna consolidation (MIMO gain) under availability and channel constraints. Simulation results demonstrate that exploiting additional spectrum is often optimal, while adaptive resource repurposing becomes beneficial when subbands are unavailable or when multiplexing gains are concentrated at specific frequencies.
|
https://arxiv.org/abs/2601.07489
|
Academic Papers
|
svg
|
61e51e0b2bc09147d366d7bb4c54d8483ed7ffe3be28e50a070d80a39878ae79
|
2026-01-13T00:00:00-05:00
|
Graph Inference Towards ICD Coding
|
arXiv:2601.07496v1 Announce Type: new Abstract: Automated ICD coding involves assigning standardized diagnostic codes to clinical narratives. The vast label space and extreme class imbalance continue to challenge precise prediction. To address these issues, LabGraph is introduced: a unified framework that reformulates ICD coding as a graph generation task. By combining adversarial domain adaptation, graph-based reinforcement learning, and perturbation regularization, LabGraph effectively enhances model robustness and generalization. In addition, a label graph discriminator dynamically evaluates each generated code, providing adaptive reward feedback during training. Experiments on benchmark datasets demonstrate that LabGraph consistently outperforms previous approaches on micro-F1, micro-AUC, and P@K.
|
https://arxiv.org/abs/2601.07496
|
Academic Papers
|
svg
|
9f7ea43bf7a7dc0d32109219ec9c14c9df2ef3ea980696d22d3fe299ef2167e0
|
2026-01-13T00:00:00-05:00
|
On spectral properties and fast initial convergence of the Kaczmarz method
|
arXiv:2601.07498v1 Announce Type: new Abstract: The Kaczmarz method is successfully used for solving discretizations of linear inverse problems, especially in computed tomography where it is known as ART. Practitioners often observe and appreciate its fast convergence in the first few iterations, leading to the same favorable semi-convergence that we observe for simultaneous iterative reconstruction methods. While the latter methods have symmetric and positive definite iteration operators that facilitate their analysis, the operator in Kaczmarz's method is nonsymmetric, and understanding this fast initial convergence has so far been an open question. We perform a spectral analysis of Kaczmarz's method that gives new insight into its (often fast) initial behavior. We also carry out a statistical analysis of how the data noise enters the iteration vectors, which sheds new light on the semi-convergence. Our results are illustrated with several numerical examples.
|
https://arxiv.org/abs/2601.07498
|
Academic Papers
|
svg
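For readers meeting ART/Kaczmarz for the first time: one sweep cyclically projects the iterate onto each row's hyperplane {x : <a_i, x> = b_i}. A minimal pure-Python version of that iteration (illustrative only, not the paper's analysis code):

```python
def kaczmarz(A, b, x0, sweeps):
    """Cyclic Kaczmarz iteration for Ax = b.  Each inner step projects x
    orthogonally onto one row's hyperplane:
    x += (b_i - <a_i, x>) / ||a_i||^2 * a_i."""
    x = list(x0)
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            r = b_i - sum(a * xj for a, xj in zip(a_i, x))  # row residual
            c = r / sum(a * a for a in a_i)
            x = [xj + c * a for xj, a in zip(x, a_i)]
    return x
```

For consistent systems the iterates converge to a solution; the fast initial error decay that the abstract analyzes is what makes just a few sweeps useful in practice.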
|
5615d408f8a4f9f4801471928200b2bd01cc48252608737c1297710003e54700
|
2026-01-13T00:00:00-05:00
|
Anatomy Aware Cascade Network: Bridging Epistemic Uncertainty and Geometric Manifold for 3D Tooth Segmentation
|
arXiv:2601.07499v1 Announce Type: new Abstract: Accurate three-dimensional (3D) tooth segmentation from Cone-Beam Computed Tomography (CBCT) is a prerequisite for digital dental workflows. However, achieving high-fidelity segmentation remains challenging due to adhesion artifacts in naturally occluded scans, which are caused by low contrast and indistinct inter-arch boundaries. To address these limitations, we propose the Anatomy Aware Cascade Network (AACNet), a coarse-to-fine framework designed to resolve boundary ambiguity while maintaining global structural consistency. Specifically, we introduce two mechanisms: the Ambiguity Gated Boundary Refiner (AGBR) and the Signed Distance Map guided Anatomical Attention (SDMAA). The AGBR employs an entropy based gating mechanism to perform targeted feature rectification in high uncertainty transition zones. Meanwhile, the SDMAA integrates implicit geometric constraints via signed distance map to enforce topological consistency, preventing the loss of spatial details associated with standard pooling. Experimental results on a dataset of 125 CBCT volumes demonstrate that AACNet achieves a Dice Similarity Coefficient of 90.17% and a 95% Hausdorff Distance (HD95) of 3.63 mm, significantly outperforming state-of-the-art methods. Furthermore, the model exhibits strong generalization on an external dataset with an HD95 of 2.19 mm, validating its reliability for downstream clinical applications such as surgical planning. Code for AACNet is available at https://github.com/shiliu0114/AACNet.
|
https://arxiv.org/abs/2601.07499
|
Academic Papers
|
svg
|
f6e59a288013b6e631996d0f3f6bcadd6b001b101fbe1994f091261e979e1035
|
2026-01-13T00:00:00-05:00
|
FROAV: A Framework for RAG Observation and Agent Verification - Lowering the Barrier to LLM Agent Research
|
arXiv:2601.07504v1 Announce Type: new Abstract: The rapid advancement of Large Language Models (LLMs) and their integration into autonomous agent systems has created unprecedented opportunities for document analysis, decision support, and knowledge retrieval. However, the complexity of developing, evaluating, and iterating on LLM-based agent workflows presents significant barriers to researchers, particularly those without extensive software engineering expertise. We present FROAV (Framework for RAG Observation and Agent Verification), an open-source research platform that democratizes LLM agent research by providing a plug-and-play architecture combining visual workflow orchestration, a comprehensive evaluation framework, and extensible Python integration. FROAV implements a multi-stage Retrieval-Augmented Generation (RAG) pipeline coupled with a rigorous "LLM-as-a-Judge" evaluation system, all accessible through intuitive graphical interfaces. Our framework integrates n8n for no-code workflow design, PostgreSQL for granular data management, FastAPI for flexible backend logic, and Streamlit for human-in-the-loop interaction. Through this integrated ecosystem, researchers can rapidly prototype RAG strategies, conduct prompt engineering experiments, validate agent performance against human judgments, and collect structured feedback, all without writing infrastructure code. We demonstrate the framework's utility through its application to financial document analysis, while emphasizing its material-agnostic architecture that adapts to any domain requiring semantic analysis. FROAV represents a significant step toward making LLM agent research accessible to a broader scientific community, enabling researchers to focus on hypothesis testing and algorithmic innovation rather than system integration challenges.
|
https://arxiv.org/abs/2601.07504
|
Academic Papers
|
svg
|
5833516b9971d9bc44dde4fcfa50ec9b327d0549103cae26e0813f03f5141adc
|
2026-01-13T00:00:00-05:00
|
Judging Against the Reference: Uncovering Knowledge-Driven Failures in LLM-Judges on QA Evaluation
|
arXiv:2601.07506v1 Announce Type: new Abstract: While large language models (LLMs) are increasingly used as automatic judges for question answering (QA) and other reference-conditioned evaluation tasks, little is known about their ability to adhere to a provided reference. We identify a critical failure mode of such reference-based LLM QA evaluation: when the provided reference conflicts with the judge model's parametric knowledge, the resulting scores become unreliable, substantially degrading evaluation fidelity. To study this phenomenon systematically, we introduce a controlled swapped-reference QA framework that induces reference-belief conflicts. Specifically, we replace the reference answer with an incorrect entity and construct diverse pairings of original and swapped references with correspondingly aligned candidate answers. Surprisingly, grading reliability drops sharply under swapped references across a broad set of judge models. We empirically show that this vulnerability is driven by judges' over-reliance on parametric knowledge, leading judges to disregard the given reference under conflict. Finally, we find that this failure persists under common prompt-based mitigation strategies, highlighting a fundamental limitation of LLM-as-a-judge evaluation and motivating reference-based protocols that enforce stronger adherence to the provided reference.
|
https://arxiv.org/abs/2601.07506
|
Academic Papers
|
svg
|
d06adec51fc8e85fe3b020669b979540110ffc49b14255ea2f198dd910448e7a
|
2026-01-13T00:00:00-05:00
|
High-Rank Structured Modulation for Parameter-Efficient Fine-Tuning
|
arXiv:2601.07507v1 Announce Type: new Abstract: As the number of model parameters increases, parameter-efficient fine-tuning (PEFT) has become the go-to choice for tailoring pre-trained large language models. Low-rank Adaptation (LoRA) uses a low-rank update method to simulate full parameter fine-tuning, which is widely used to reduce resource requirements. However, decreasing the rank limits representational capacity compared to full parameter fine-tuning. We present \textbf{SMoA}, a high-rank \textbf{S}tructured \textbf{MO}dulation \textbf{A}dapter that uses fewer trainable parameters while maintaining a higher rank, thereby improving the model's representational capacity and offering improved performance potential. The core idea is to freeze the original pretrained weights and selectively amplify or suppress important features of the original weights across multiple subspaces. The subspace mechanism provides an efficient way to increase the capacity and complexity of a model. We conduct both theoretical analyses and empirical studies on various tasks. Experiment results show that SMoA outperforms LoRA and its variants on 10 tasks, with extensive ablation studies validating its effectiveness.
|
https://arxiv.org/abs/2601.07507
|
Academic Papers
|
svg
|
464bd60c3a6d3d98ed7d242becc23581d6fc983e4685f8714d7366966be7b5e8
|
2026-01-13T00:00:00-05:00
|
Multiword matrix multiplication over large finite fields in floating-point arithmetic
|
arXiv:2601.07508v1 Announce Type: new Abstract: This article is concerned with the efficient computation of modular matrix multiplication C=AB mod p, a key kernel in computer algebra. We focus on floating-point arithmetic, which allows for using efficient matrix multiplication libraries. However, the existing approach is limited to primes p with bitsize at most half the mantissa size (e.g., 26 bits with double precision arithmetic), and becomes quite inefficient when p approaches this limit. We present a new approach that overcomes this limitation and can efficiently handle primes with larger bitsizes. The key idea is to use multiword decompositions, which represent A and B as scaled sums of u and v matrices (words) with smaller coefficients. We provide a rigorous analysis that proves the correctness of this approach for suitably chosen scaling parameters. Our analysis determines the maximum bitsize of p that can be handled for a given number of words; in particular, we show that decomposing each input into two words suffices to handle bitsizes almost equal to the full mantissa size (e.g., the 26 bits limit is raised to 52 bits in double precision arithmetic). Moreover, we show that (1,v) decompositions with v>1 are also of interest to handle intermediate bitsizes. We perform an extensive experimental analysis for various matrix shapes and prime bitsizes. Our performance benchmarks on both CPU and GPU architectures confirm the efficiency of the proposed approach, which can outperform the existing single word approach for bitsizes as low as 23, and can handle bitsizes as high as 52 while retaining high performance.
|
https://arxiv.org/abs/2601.07508
|
Academic Papers
|
svg
|
d12766a8a7759bed0936b09104a5d8e2caf25563fb664fd59c25910dd99e46c5
|
2026-01-13T00:00:00-05:00
|
Machine Learning Model Trading with Verification under Information Asymmetry
|
arXiv:2601.07510v1 Announce Type: new Abstract: Machine learning (ML) model trading, known for its role in protecting data privacy, faces a major challenge: information asymmetry. This issue can lead to model deception, a problem that current literature has not fully solved, where the seller misrepresents model performance to earn more. We propose a game-theoretic approach, adding a verification step in the ML model market that lets buyers check model quality before buying. However, this method can be expensive and offers imperfect information, making it harder for buyers to decide. Our analysis reveals that a seller might probabilistically conduct model deception considering the chance of model verification. This deception probability decreases with the verification accuracy and increases with the verification cost. To maximize seller payoff, we further design optimal pricing schemes accounting for heterogeneous buyers' strategic behaviors. Interestingly, we find that reducing information asymmetry benefits both the seller and buyer. Meanwhile, protecting buyer order information doesn't improve the payoff for the buyer or the seller. These findings highlight the importance of reducing information asymmetry in ML model trading and open new directions for future research.
|
https://arxiv.org/abs/2601.07510
|
Academic Papers
|
svg
|
cb85208ddfd49ad825d4ae983ce6cb757cade35869377f16085902c2ae13b4ab
|
2026-01-13T00:00:00-05:00
|
Principal ideal problem and ideal shortest vector over rational primes in power-of-two cyclotomic fields
|
arXiv:2601.07511v1 Announce Type: new Abstract: The shortest vector problem (SVP) over ideal lattices is closely related to the Ring-LWE problem, which is widely used to build post-quantum cryptosystems. Power-of-two cyclotomic fields are frequently adopted to instantiate Ring-LWE. Pan et al. (EUROCRYPT~2021) explored the SVP over ideal lattices via the decomposition fields and, in particular, determined the length of ideal lattices over rational primes $p\equiv3,5\pmod{8}$ in power-of-two cyclotomic fields via explicit construction of reduced lattice bases. In this work, we first provide a new method (different from analyzing lattice bases) to analyze the length of the shortest vector in prime ideals in $\mathbb{Z}[\zeta_{2^{n+1}}]$ when $p\equiv3,5\pmod{8}$. Then we precisely characterize the length of the shortest vector on the cases of $p\equiv7,9\pmod{16}$. Furthermore, we derive a new upper bound for this length, which is tighter than the bound obtained from Minkowski's theorem. Our key technique is to investigate whether a generator of a principal ideal can achieve the shortest length after embedding as a vector. If this holds for the ideal, finding the shortest vector in this ideal can be reduced to finding its shortest generator.
|
https://arxiv.org/abs/2601.07511
|
Academic Papers
|
svg
|
a0d2902a7192fa3ee60c427f76ef195f325102753c778567dac5d84c22f28d25
|
2026-01-13T00:00:00-05:00
|
Land-then-transport: A Flow Matching-Based Generative Decoder for Wireless Image Transmission
|
arXiv:2601.07512v1 Announce Type: new Abstract: Due to strict rate and reliability demands, wireless image transmission remains difficult for both classical layered designs and joint source-channel coding (JSCC), especially under low latency. Diffusion-based generative decoders can deliver strong perceptual quality by leveraging learned image priors, but iterative stochastic denoising leads to high decoding delay. To enable low-latency decoding, we propose a flow-matching (FM) generative decoder under a new land-then-transport (LTT) paradigm that tightly integrates the physical wireless channel into a continuous-time probability flow. For AWGN channels, we build a Gaussian smoothing path whose noise schedule indexes effective noise levels, and derive a closed-form teacher velocity field along this path. A neural-network student vector field is trained by conditional flow matching, yielding a deterministic, channel-aware ODE decoder with complexity linear in the number of ODE steps. At inference, it only needs an estimate of the effective noise variance to set the ODE starting time. We further show that Rayleigh fading and MIMO channels can be mapped, via linear MMSE equalization and singular-value-domain processing, to AWGN-equivalent channels with calibrated starting times. Therefore, the same probability path and trained velocity field can be reused for Rayleigh and MIMO without retraining. Experiments on MNIST, Fashion-MNIST, and DIV2K over AWGN, Rayleigh, and MIMO demonstrate consistent gains over JPEG2000+LDPC, DeepJSCC, and diffusion-based baselines, while achieving good perceptual quality with only a few ODE steps. Overall, LTT provides a deterministic, physically interpretable, and computation-efficient framework for generative wireless image decoding across diverse channels.
|
https://arxiv.org/abs/2601.07512
|
Academic Papers
|
svg
|
0a0faf4036b7ec238114dd003f287e20cc529ec4adb9d525e3075faeeb29fd7b
|
2026-01-13T00:00:00-05:00
|
A Parity-Consistent Decomposition Method for the Weight Distribution of Pre-Transformed Polar Codes
|
arXiv:2601.07515v1 Announce Type: new Abstract: This paper introduces an efficient algorithm based on the Parity-Consistent Decomposition (PCD) method to determine the weight distribution (WD) of pre-transformed polar codes. First, to address the bit dependencies introduced by the pre-transformation matrix, we propose an iterative algorithm to construct an \emph{Expanded Information Set}. By expanding the information bits within this set into 0s and 1s, we eliminate the correlations among information bits, thereby enabling the recursive calculation of the Hamming weight distribution using the \emph{PCD method}. Second, to further reduce computational complexity, we establish the theory of equivalence classes for pre-transformed polar codes. Codes within the same equivalence class share an identical weight distribution but correspond to different \emph{Expanded Information Set} sizes. By selecting the pre-transformation matrix that minimizes the \emph{Expanded Information Set} size within an equivalence class, we optimize the computation process. Numerical results demonstrate that the proposed method significantly reduces computational complexity compared to existing deterministic algorithms.
|
https://arxiv.org/abs/2601.07515
|
Academic Papers
|
svg
|
efdbe4e0c52db97dfe648b08a305f2bd9ee0236386cff5be5eb95cf7a64aa294
|
2026-01-13T00:00:00-05:00
|
Controlling Multimodal Conversational Agents with Coverage-Enhanced Latent Actions
|
arXiv:2601.07516v1 Announce Type: new Abstract: Vision-language models are increasingly employed as multimodal conversational agents (MCAs) for diverse conversational tasks. Recently, reinforcement learning (RL) has been widely explored for adapting MCAs to various human-AI interaction scenarios. Despite showing great enhancement in generalization performance, fine-tuning MCAs via RL still faces challenges in handling the extremely large text token space. To address this, we learn a compact latent action space for RL fine-tuning instead. Specifically, we adopt the learning from observation mechanism to construct the codebook for the latent action space, where future observations are leveraged to estimate current latent actions that could further be used to reconstruct future observations. However, the scarcity of paired image-text data hinders learning a codebook with sufficient coverage. Thus, we leverage both paired image-text data and text-only data to construct the latent action space, using a cross-modal projector for transforming text embeddings into image-text embeddings. We initialize the cross-modal projector on paired image-text data, and further train it on massive text-only data with a novel cycle consistency loss to enhance its robustness. We show that our latent action based method outperforms competitive baselines on two conversation tasks across various RL algorithms.
|
https://arxiv.org/abs/2601.07516
|
Academic Papers
|
svg
|
fb333672caeaef7a8afb92fad915de58dc47040b4063af3f0f4deb10d2865ca9
|
2026-01-13T00:00:00-05:00
|
Mon3tr: Monocular 3D Telepresence with Pre-built Gaussian Avatars as Amortization
|
arXiv:2601.07518v1 Announce Type: new Abstract: Immersive telepresence aims to transform human interaction in AR/VR applications by enabling lifelike full-body holographic representations for enhanced remote collaboration. However, existing systems rely on hardware-intensive multi-camera setups and demand high bandwidth for volumetric streaming, limiting their real-time performance on mobile devices. To overcome these challenges, we propose Mon3tr, a novel Monocular 3D telepresence framework that integrates 3D Gaussian splatting (3DGS) based parametric human modeling into telepresence for the first time. Mon3tr adopts an amortized computation strategy, dividing the process into a one-time offline multi-view reconstruction phase to build a user-specific avatar and a monocular online inference phase during live telepresence sessions. A single monocular RGB camera is used to capture body motions and facial expressions in real time to drive the 3DGS-based parametric human model, significantly reducing system complexity and cost. Transmitting the extracted motion and appearance features achieves 28 dB for novel poses, an end-to-end latency of ~80 ms, and >1000x bandwidth reduction compared to point-cloud streaming, while supporting real-time operation from monocular inputs across diverse scenarios. Our demos can be found at https://mon3tr3d.github.io.
|
https://arxiv.org/abs/2601.07518
|
Academic Papers
|
svg
|
637a0cd54b54b60472a8c004ad58693786d5650f3b5623f10748dc38a436e9df
|
2026-01-13T00:00:00-05:00
|
Sparse Point-wise Privacy Leakage: Mechanism Design and Fundamental Limits
|
arXiv:2601.07523v1 Announce Type: new Abstract: We study an information-theoretic privacy mechanism design problem, where an agent observes useful data $Y$ that is arbitrarily correlated with sensitive data $X$, and designs disclosed data $U$ generated from $Y$ (the agent has no direct access to $X$). We introduce \emph{sparse point-wise privacy leakage}, a worst-case privacy criterion that enforces two simultaneous constraints for every disclosed symbol $u\in\mathcal{U}$: (i) $u$ may be correlated with at most $N$ realizations of $X$, and (ii) the total leakage toward those realizations is bounded. In the high-privacy regime, we use concepts from information geometry to obtain a local quadratic approximation of mutual information, which measures utility between $U$ and $Y$. When the leakage matrix $P_{X|Y}$ is invertible, this approximation reduces the design problem to a sparse quadratic maximization, known as the Rayleigh-quotient problem, with an $\ell_0$ constraint. We further show that, for the approximated problem, one can without loss of optimality restrict attention to a binary released variable $U$ with a uniform distribution. For small alphabet sizes, the exact sparsity-constrained optimum can be computed via combinatorial support enumeration, which quickly becomes intractable as the dimension grows. For general dimensions, the resulting sparse Rayleigh-quotient maximization is NP-hard and closely related to sparse principal component analysis (PCA). We propose a convex semidefinite programming (SDP) relaxation that is solvable in polynomial time and provides a tractable surrogate for the NP-hard design, together with a simple rounding procedure to recover a feasible leakage direction. We also identify a sparsity threshold beyond which the sparse optimum saturates at the unconstrained spectral value and the SDP relaxation becomes tight.
|
https://arxiv.org/abs/2601.07523
|
Academic Papers
|
svg
|
5c218c09a0be16e7d1a14d9ab8a3fab8e54dee89016ce6691061c4067452864f
|
2026-01-13T00:00:00-05:00
|
Stagewise Reinforcement Learning and the Geometry of the Regret Landscape
|
arXiv:2601.07524v1 Announce Type: new Abstract: Singular learning theory characterizes Bayesian learning as an evolving tradeoff between accuracy and complexity, with transitions between qualitatively different solutions as sample size increases. We extend this theory to deep reinforcement learning, proving that the concentration of the generalized posterior over policies is governed by the local learning coefficient (LLC), an invariant of the geometry of the regret function. This theory predicts that Bayesian phase transitions in reinforcement learning should proceed from simple policies with high regret to complex policies with low regret. We verify this prediction empirically in a gridworld environment exhibiting stagewise policy development: phase transitions over SGD training manifest as "opposing staircases" where regret decreases sharply while the LLC increases. Notably, the LLC detects phase transitions even when estimated on a subset of states where the policies appear identical in terms of regret, suggesting it captures changes in the underlying algorithm rather than just performance.
|
https://arxiv.org/abs/2601.07524
|
Academic Papers
|
svg
|
9a9d7b0640190eff6f14cb1308741c7507765ffcbba429068966acf90158c752
|
2026-01-13T00:00:00-05:00
|
Thinking Before Constraining: A Unified Decoding Framework for Large Language Models
|
arXiv:2601.07525v1 Announce Type: new Abstract: Natural generation allows large language models (LLMs) to produce free-form responses with rich reasoning, but the lack of guaranteed structure makes outputs difficult to parse or verify. Structured generation, or constrained decoding, addresses this drawback by producing content in standardized formats such as JSON, ensuring consistency and guaranteed-parsable outputs, but it can inadvertently restrict the model's reasoning capabilities. In this work, we propose a simple approach that combines the advantages of both natural and structured generation. By allowing LLMs to reason freely until specific trigger tokens are generated, and then switching to structured generation, our method preserves the expressive power of natural language reasoning while ensuring the reliability of structured outputs. We further evaluate our approach on several datasets, covering both classification and reasoning tasks, to demonstrate its effectiveness, achieving a substantial gain of up to 27% in accuracy compared to natural generation, while requiring only a small overhead of 10-20 extra tokens.
|
https://arxiv.org/abs/2601.07525
|
Academic Papers
|
svg
|
999e816ae52e3fdd94b82435d319fb6aae1a6c7099b14b75c8da8fd801127cd3
|
2026-01-13T00:00:00-05:00
|
MegaFlow: Large-Scale Distributed Orchestration System for the Agentic Era
|
arXiv:2601.07526v1 Announce Type: new Abstract: The rapid development of interactive and autonomous AI systems signals our entry into the agentic era. Training and evaluating agents on complex agentic tasks such as software engineering and computer use requires not only efficient model computation but also sophisticated infrastructure capable of coordinating vast agent-environment interactions. However, no open-source infrastructure can effectively support large-scale training and evaluation on such complex agentic tasks. To address this challenge, we present MegaFlow, a large-scale distributed orchestration system that enables efficient scheduling, resource allocation, and fine-grained task management for agent-environment workloads. MegaFlow abstracts agent training infrastructure into three independent services (Model Service, Agent Service, and Environment Service) that interact through unified interfaces, enabling independent scaling and flexible resource allocation across diverse agent-environment configurations. In our agent training deployments, MegaFlow successfully orchestrates tens of thousands of concurrent agent tasks while maintaining high system stability and achieving efficient resource utilization. By enabling such large-scale agent training, MegaFlow addresses a critical infrastructure gap in the emerging agentic AI landscape.
|
https://arxiv.org/abs/2601.07526
|
Academic Papers
|
svg
|
644cc0ebf5cbdc0a1286f0a08e68fdee1d3f65aa9e9123f7944e677d0e9a4b04
|
2026-01-13T00:00:00-05:00
|
Energy-efficient torque allocation for straight-line driving of electric vehicles based on pseudoconvex polynomials
|
arXiv:2601.07527v1 Announce Type: new Abstract: Electric vehicles with multiple motors provide a flexibility in meeting the driver torque demand, which calls for minimizing the battery energy consumption through torque allocation. In this paper, we present an approach to this problem based on approximating electric motor losses using higher-order polynomials with specific properties. To ensure a well-behaved optimization landscape, monotonicity and positivity constraints are imposed on the polynomial models using sum of squares programming. This methodology provides robustness against noisy or sparse data, while retaining the computational efficiency of a polynomial function approximation. The torque allocation problem based on such polynomials is formulated as a constrained nonlinear optimization problem and solved efficiently using readily available solvers. In the nominal case, the first-order necessary conditions for optimality can also be used to obtain a global solution. The performance of the proposed method is evaluated on several certification driving cycles against a grid search-based benchmark. Results show a modest influence on electric energy consumption, while enabling real-time optimization and integration with other vehicle control systems.
|
https://arxiv.org/abs/2601.07527
|
Academic Papers
|
svg
|
82fd68d31e5c4d3ef4581e84b7b54fa9ab158a8cdd6a87c62d12ac83a3d36783
|
2026-01-13T00:00:00-05:00
|
From RAG to Agentic RAG for Faithful Islamic Question Answering
|
arXiv:2601.07528v1 Announce Type: new Abstract: LLMs are increasingly used for Islamic question answering, where ungrounded responses may carry serious religious consequences. Yet standard MCQ/MRC-style evaluations do not capture key real-world failure modes, notably free-form hallucinations and whether models appropriately abstain when evidence is lacking. To shed light on this aspect, we introduce ISLAMICFAITHQA, a 3,810-item bilingual (Arabic/English) generative benchmark with atomic single-gold answers, which enables direct measurement of hallucination and abstention. We additionally develop an end-to-end grounded Islamic modelling suite consisting of (i) 25K Arabic text-grounded SFT reasoning pairs, (ii) 5K bilingual preference samples for reward-guided alignment, and (iii) a verse-level Qur'an retrieval corpus of $\sim$6k atomic verses (ayat). Building on these resources, we develop an agentic Quran-grounding framework (agentic RAG) that uses structured tool calls for iterative evidence seeking and answer revision. Experiments across Arabic-centric and multilingual LLMs show that retrieval improves correctness and that agentic RAG yields the largest gains beyond standard RAG, achieving state-of-the-art performance and stronger Arabic-English robustness even with a small model (i.e., Qwen3 4B). We will make the experimental resources and datasets publicly available for the community.
|
https://arxiv.org/abs/2601.07528
|
Academic Papers
|
svg
|
7b9de74c82c3639fc672d40a38aa05b4b15f48e3c32b030c66b5a8fe9887f263
|
2026-01-13T00:00:00-05:00
|
Loci Similes: A Benchmark for Extracting Intertextualities in Latin Literature
|
arXiv:2601.07533v1 Announce Type: new Abstract: Tracing connections between historical texts is an important part of intertextual research, enabling scholars to reconstruct the virtual library of a writer and identify the sources influencing their creative process. These intertextual links manifest in diverse forms, ranging from direct verbatim quotations to subtle allusions and paraphrases disguised by morphological variation. Language models offer a promising path forward due to their capability of capturing semantic similarity beyond lexical overlap. However, the development of new methods for this task is held back by the scarcity of standardized benchmarks and easy-to-use datasets. We address this gap by introducing Loci Similes, a benchmark for Latin intertextuality detection comprising a curated dataset of ~172k text segments containing 545 expert-verified parallels linking Late Antique authors to a corpus of classical authors. Using this data, we establish baselines for retrieval and classification of intertextualities with state-of-the-art LLMs.
|
https://arxiv.org/abs/2601.07533
|
Academic Papers
|
svg
|
3792ac5bd8aef399da90d847e0a0c5f7c60b5c6230c0b6fe4f9e6776e3680c39
|
2026-01-13T00:00:00-05:00
|
A Protocol-Aware P4 Pipeline for MQTT Security and Anomaly Mitigation in Edge IoT Systems
|
arXiv:2601.07536v1 Announce Type: new Abstract: MQTT is the dominant lightweight publish--subscribe protocol for IoT deployments, yet edge security remains inadequate. Cloud-based intrusion detection systems add latency that is unsuitable for real-time control, while CPU-bound firewalls and generic SDN controllers lack the MQTT awareness needed to enforce session validation, topic-based authorization, and behavioral anomaly detection. We propose a P4-based data-plane enforcement scheme for protocol-aware MQTT security and anomaly detection at the network edge. The design combines parser-safe MQTT header extraction with session-order validation, byte-level topic-prefix authorization with per-client rate limiting and soft-cap enforcement, and lightweight anomaly detection based on KeepAlive and Remaining Length screening with clone-to-CPU diagnostics. The scheme leverages stateful primitives in BMv2 (registers, meters, direct counters) to enable runtime policy adaptation with minimal per-packet latency. Experiments on a Mininet/BMv2 testbed demonstrate high policy enforcement accuracy (99.8%, within 95% CI), strong anomaly detection sensitivity (98% true-positive rate), and high delivery (>99.9% for 100--5~kpps; 99.8% at 10~kpps; 99.6% at 16~kpps) with sub-millisecond per-packet latency. These results show that protocol-aware MQTT filtering can be efficiently realized in the programmable data plane, providing a practical foundation for edge IoT security. Future work will validate the design on production P4 hardware and integrate machine learning--based threshold adaptation.
|
https://arxiv.org/abs/2601.07536
|
Academic Papers
|
svg
|
7d1e0d55d030472933a675234e561dd88ddbffb44200d065bb4d32bb203883d1
|
2026-01-13T00:00:00-05:00
|
FairRF: Multi-Objective Search for Single and Intersectional Software Fairness
|
arXiv:2601.07537v1 Announce Type: new Abstract: Background: The wide adoption of AI- and ML-based systems in sensitive domains raises severe concerns about their fairness. Many methods have been proposed in the literature to enhance software fairness. However, the majority behave as a black-box, not allowing stakeholders to prioritise fairness or effectiveness (i.e., prediction correctness) based on their needs. Aims: In this paper, we introduce FairRF, a novel approach based on multi-objective evolutionary search to optimise fairness and effectiveness in classification tasks. FairRF uses a Random Forest (RF) model as a base classifier and searches for the best hyperparameter configurations and data mutation to maximise fairness and effectiveness. Eventually, it returns a set of Pareto optimal solutions, allowing the final stakeholders to choose the best one based on their needs. Method: We conduct an extensive empirical evaluation of FairRF against 26 different baselines in 11 different scenarios using five effectiveness and three fairness metrics. Additionally, we also include two variations of the fairness metrics for intersectional bias for a total of six definitions analysed. Result: Our results show that FairRF can significantly improve the fairness of base classifiers, while maintaining consistent prediction effectiveness. Additionally, FairRF provides a more consistent optimisation under all fairness definitions compared to state-of-the-art bias mitigation methods and overcomes the existing state-of-the-art approach for intersectional bias mitigation. Conclusions: FairRF is an effective approach for bias mitigation also allowing stakeholders to adapt the development of fair software systems based on their specific needs.
|
https://arxiv.org/abs/2601.07537
|
Academic Papers
|
svg
|
9567c1e8aca0602f4d67afefa0eab41a6ff7f277be17da7858ac1f3920e77cbd
|
2026-01-13T00:00:00-05:00
|
ViewMorpher3D: A 3D-aware Diffusion Framework for Multi-Camera Novel View Synthesis in Autonomous Driving
|
arXiv:2601.07540v1 Announce Type: new Abstract: Autonomous driving systems rely heavily on multi-view images to ensure accurate perception and robust decision-making. To effectively develop and evaluate perception stacks and planning algorithms, realistic closed-loop simulators are indispensable. While 3D reconstruction techniques such as Gaussian Splatting offer promising avenues for simulator construction, the rendered novel views often exhibit artifacts, particularly in extrapolated perspectives or when available observations are sparse. We introduce ViewMorpher3D, a multi-view image enhancement framework based on image diffusion models, designed to elevate photorealism and multi-view coherence in driving scenes. Unlike single-view approaches, ViewMorpher3D jointly processes a set of rendered views conditioned on camera poses, 3D geometric priors, and temporally adjacent or spatially overlapping reference views. This enables the model to infer missing details, suppress rendering artifacts, and enforce cross-view consistency. Our framework accommodates variable numbers of cameras and flexible reference/target view configurations, making it adaptable to diverse sensor setups. Experiments on real-world driving datasets demonstrate substantial improvements in image quality metrics, effectively reducing artifacts while preserving geometric fidelity.
|
https://arxiv.org/abs/2601.07540
|
Academic Papers
|
svg
|