| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
| 05518beaa86e2c4b9b72b2496fe5b02d1dafd864a8dd65b973a8f332595e9146 | 2026-01-13T00:00:00-05:00 | Generating readily synthesizable small molecule fluorophore scaffolds with reinforcement learning | arXiv:2601.07145v1 Announce Type: new Abstract: Developing new fluorophores for advanced imaging techniques requires exploring new chemical space. While generative AI approaches have shown promise in designing novel dye scaffolds, prior efforts often produced synthetically intractable candidates due to a lack of reaction constraints. Here, we developed SyntheFluor-RL, a generative AI model that employs known reaction libraries and molecular building blocks to create readily synthesizable fluorescent molecule scaffolds via reinforcement learning. To guide the generation of fluorophores, SyntheFluor-RL employs a scoring function built on multiple graph neural networks (GNNs) that predict key photophysical properties, including photoluminescence quantum yield, absorption, and emission wavelengths. These outputs are dynamically weighted and combined with a computed pi-conjugation score to prioritize candidates with desirable optical characteristics and synthetic feasibility. SyntheFluor-RL generated 11,590 candidate molecules, which were filtered to 19 structures predicted to possess dye-like properties. Of the 19 molecules, 14 were synthesized and 13 were experimentally confirmed. The top three were characterized, with the lead compound featuring a benzothiadiazole chromophore and exhibiting strong fluorescence (PLQY = 0.62), a large Stokes shift (97 nm), and a long excited-state lifetime (11.5 ns). These results demonstrate the effectiveness of SyntheFluor-RL in the identification of synthetically accessible fluorophores for further development. | https://arxiv.org/abs/2601.07145 | Academic Papers | svg |
| 0b7946dfa025490278acad4d3be032feb1eca3abe79a5805a10dd7cea1263e30 | 2026-01-13T00:00:00-05:00 | PASS-Enabled Covert Communications With Distributed Cooperative Wardens | arXiv:2601.07147v1 Announce Type: new Abstract: This paper investigates PASS-enabled downlink covert communication in the presence of distributed surveillance, where multiple wardens perform signal detection and fuse their local binary decisions via a majority-voting rule. We consider a dual-waveguide architecture that simultaneously delivers covert information and randomized jamming to hide the transmission footprint, incorporating three representative PASS power-radiation laws: general, proportional, and equal. To characterize the system-level detectability, we derive closed-form expressions for local false-alarm and miss-detection probabilities. By leveraging a probability-generating-function (PGF) and elementary-symmetric-polynomial (ESP) framework, combined with a breakpoint-based partition of the threshold domain, we obtain explicit closed-form characterizations of the system-level detection error probability (DEP) under non-i.i.d. majority-voting fusion. Building on this analytical framework, we formulate a robust optimization problem to maximize the average covert rate subject to a covertness constraint. To solve the resulting nonconvex design, we develop an MM-BCD-SCA algorithm that produces tractable alternating updates for power/radiation variables and PA positions via convex surrogates and inner approximations of the DEP value function. Numerical results validate the theoretical analysis and demonstrate the impact of cooperative monitoring and PASS radiation laws on the covertness-rate tradeoff. | https://arxiv.org/abs/2601.07147 | Academic Papers | svg |
| 8abf910cea2e068e101dd4632aa1ccd1478cf4b1ae41f369bbd7faf5e126d456 | 2026-01-13T00:00:00-05:00 | Measuring Iterative Temporal Reasoning with TimePuzzles | arXiv:2601.07148v1 Announce Type: new Abstract: We introduce TimePuzzles, a constraint-based date inference task for evaluating iterative temporal reasoning. Each puzzle combines factual temporal anchors with (cross-cultural) calendar relations, admits one or multiple valid solution dates, and is algorithmically generated for controlled, dynamic, and continual evaluation. Across 13 diverse LLMs, TimePuzzles clearly distinguishes their iterative temporal reasoning capabilities and remains challenging without tools: GPT-5 reaches only 49.3% accuracy and all other models stay below 31%, despite the dataset's simplicity. Web search consistently yields substantial gains and using a code interpreter shows mixed effects, but all models perform much better when constraints are rewritten with explicit dates, revealing a gap in reliable tool use. Overall, TimePuzzles presents a simple, cost-effective diagnostic for tool-augmented iterative temporal reasoning. | https://arxiv.org/abs/2601.07148 | Academic Papers | svg |
| 0dbb0308cb2050a0fea2be4f4981c9b3927cc7bada4ff04b449a5d77f8c26fd6 | 2026-01-13T00:00:00-05:00 | Rewarding Creativity: A Human-Aligned Generative Reward Model for Reinforcement Learning in Storytelling | arXiv:2601.07149v1 Announce Type: new Abstract: While Large Language Models (LLMs) can generate fluent text, producing high-quality creative stories remains challenging. Reinforcement Learning (RL) offers a promising solution but faces two critical obstacles: designing reliable reward signals for subjective storytelling quality and mitigating training instability. This paper introduces the Reinforcement Learning for Creative Storytelling (RLCS) framework to systematically address both challenges. First, we develop a Generative Reward Model (GenRM) that provides multi-dimensional analysis and explicit reasoning about story preferences, trained through supervised fine-tuning on demonstrations with reasoning chains distilled from strong teacher models, followed by GRPO-based refinement on expanded preference data. Second, we introduce an entropy-based reward shaping strategy that dynamically prioritizes learning on confident errors and uncertain correct predictions, preventing overfitting on already-mastered patterns. Experiments demonstrate that GenRM achieves 68% alignment with human creativity judgments, and RLCS significantly outperforms strong baselines including Gemini-2.5-Pro in overall story quality. This work provides a practical pipeline for applying RL to creative domains, effectively navigating the dual challenges of reward modeling and training stability. | https://arxiv.org/abs/2601.07149 | Academic Papers | svg |
| 449a52695b5d801e8c9af6e212a684e240b3d755b121def6e292c1d5f400f1d2 | 2026-01-13T00:00:00-05:00 | Fault detection of nonlinear industrial processes based on control theory-informed machine learning methods | arXiv:2601.07150v1 Announce Type: new Abstract: This paper deals with the analysis, simultaneous detection of faults and attacks, and fault-tolerant and attack-resilient control of cyber-physical control systems. In our recent work, it has been observed that an attack detector driven by an input residual signal is capable of reliably detecting attacks. In particular, observing system dynamics from the perspective of the system input-output signal space reveals that attacks and system uncertainties act on different system subspaces. These results motivate our exploration of secure and safe cyber-physical control systems in the unified framework of control and detection. The unified framework is proposed to handle control and detection issues uniformly and in subspaces of system input-output data. Its mathematical and control-theoretic basis is system coprime factorizations with the Bezout identity at its core. We first explore the methods and schemes of the unified framework, which serve as the major control-theoretic tool in our work. We then revisit and examine established attack detection and resilient control schemes. The major part of our work is the endeavour to develop a control-theoretic paradigm in which analysis, simultaneous detection of faults and attacks, and fault-tolerant and attack-resilient control of cyber-physical control systems are addressed in a unified manner. | https://arxiv.org/abs/2601.07150 | Academic Papers | svg |
| 5c2c21072d8fde5cdfb2fa76d187576aec0409156899c0ce6e1ed6c47177f8a6 | 2026-01-13T00:00:00-05:00 | Agents of Diffusion: Enhancing Diffusion Language Models with Multi-Agent Reinforcement Learning for Structured Data Generation (Extended Version) | arXiv:2601.07152v1 Announce Type: new Abstract: Generating high-quality structured data, such as JSON records, remains a fundamental challenge for large language models (LLMs), particularly when semantic richness must coexist with strict schema adherence. While autoregressive LLMs offer strong structural consistency, they often struggle with semantic variation and output diversity. In contrast, diffusion language models (DLMs) introduce powerful mechanisms for semantic richness and bidirectional decoding, yet lack the inductive biases needed for reliable structure preservation. We present Agents of Diffusion (AoD), a novel framework that unifies the generative flexibility of DLMs with the reasoning capabilities of autoregressive models through language-mediated reinforcement learning. AoD frames structured text generation as a multi-agent alignment process, where a prompt optimization agent collaborates with a judge agent to iteratively guide a DLM using natural language feedback. This approach enables controllable, schema-consistent generation without modifying model parameters or relying on handcrafted constraints. AoD advances the state of controllable generation by demonstrating that diffusion models, when supervised by cooperative agents, can achieve both high semantic novelty and structural fidelity. Across multiple structured data benchmarks, AoD consistently outperforms diffusion and autoregressive baselines, establishing a new path forward for structure-aware, diversity-enhanced text synthesis. | https://arxiv.org/abs/2601.07152 | Academic Papers | svg |
| 682d5874034607c0d41be6ca0e8faab11387cfbe03425c52714713bf82997733 | 2026-01-13T00:00:00-05:00 | Can Large Language Models Understand, Reason About, and Generate Code-Switched Text? | arXiv:2601.07153v1 Announce Type: new Abstract: Code-switching is a pervasive phenomenon in multilingual communication, yet the robustness of large language models (LLMs) in mixed-language settings remains insufficiently understood. In this work, we present a comprehensive evaluation of LLM capabilities in understanding, reasoning over, and generating code-switched text. We introduce CodeMixQA, a novel benchmark with high-quality human annotations, comprising 16 diverse parallel code-switched language-pair variants that span multiple geographic regions and code-switching patterns, and include both original scripts and their transliterated forms. Using this benchmark, we analyze the reasoning behavior of LLMs on code-switched question-answering tasks, shedding light on how models process and reason over mixed-language inputs. We further conduct a systematic evaluation of LLM-generated synthetic code-switched text, focusing on both naturalness and semantic fidelity, and uncover key limitations in current generation capabilities. Our findings reveal persistent challenges in both reasoning and generation under code-switching conditions and provide actionable insights for building more robust multilingual LLMs. We release the dataset and code as open source. | https://arxiv.org/abs/2601.07153 | Academic Papers | svg |
| 3821319cc5e323f360ab7bf11dc4f1f32305a8c6f5286135de2da1529da93225 | 2026-01-13T00:00:00-05:00 | Motion Focus Recognition in Fast-Moving Egocentric Video | arXiv:2601.07154v1 Announce Type: new Abstract: From Vision-Language-Action (VLA) systems to robotics, existing egocentric datasets primarily focus on action recognition tasks, while largely overlooking the inherent role of motion analysis in sports and other fast-movement scenarios. To bridge this gap, we propose a real-time motion focus recognition method that estimates the subject's locomotion intention from any egocentric video. Our approach leverages the foundation model for camera pose estimation and introduces system-level optimizations to enable efficient and scalable inference. Evaluated on a collected egocentric action dataset, our method achieves real-time performance with manageable memory consumption through a sliding batch inference strategy. This work makes motion-centric analysis practical for edge deployment and offers a complementary perspective to existing egocentric studies on sports and fast-movement activities. | https://arxiv.org/abs/2601.07154 | Academic Papers | svg |
| 700534950fb3e5e34435e319aaf93a9179f7685cac9f8467c9f74a46e7032f4f | 2026-01-13T00:00:00-05:00 | Stable On-Policy Distillation through Adaptive Target Reformulation | arXiv:2601.07155v1 Announce Type: new Abstract: Knowledge distillation (KD) is a widely adopted technique for transferring knowledge from large language models to smaller student models; however, conventional supervised KD often suffers from a distribution mismatch between training and inference. While on-policy KD approaches attempt to mitigate this issue by learning directly from student-generated outputs, they frequently encounter training instabilities because the distributional gap between the novice student and the expert teacher is often too wide to bridge directly. These challenges manifest as pathological gradients in forward KL objectives or diversity collapse in reverse KL regimes. To address these limitations, we propose Veto, an objective-level reformulation that constructs a geometric bridge in the logit space. Unlike prior methods that mix data samples, Veto creates an intermediate target distribution that promotes alignment between the teacher and the student. By introducing a tunable parameter beta, Veto serves as an Adaptive Gradient Veto that stabilizes optimization by suppressing harmful gradients on low-confidence tokens, while simultaneously acting as a Decisiveness Knob to balance reward-driven performance with output diversity. Extensive experiments across various reasoning and generation tasks demonstrate that Veto consistently outperforms supervised fine-tuning and existing on-policy baselines. | https://arxiv.org/abs/2601.07155 | Academic Papers | svg |
| 623be2feb0dfd465ad087e9bcac4ac8b4a6333f94733eccdb598e02aa661a6d4 | 2026-01-13T00:00:00-05:00 | Nonlinear Observer Design for Visual-Inertial Odometry | arXiv:2601.07156v1 Announce Type: new Abstract: This paper addresses the problem of Visual-Inertial Odometry (VIO) for rigid body systems evolving in three-dimensional space. We introduce a novel matrix Lie group structure, denoted SE_{3+n}(3), that unifies the pose, gravity, linear velocity, and landmark positions within a consistent geometric framework tailored to the VIO problem. Building upon this formulation, we design an almost globally asymptotically stable nonlinear geometric observer that tightly integrates data from an Inertial Measurement Unit (IMU) and visual sensors. Unlike conventional Extended Kalman Filter (EKF)-based estimators that rely on local linearization and thus ensure only local convergence, the proposed observer achieves almost global stability through the decoupling of the rotational and translational dynamics. A globally exponentially stable Riccati-based translational observer along with an almost global input-to-state stable attitude observer are designed such that the overall cascaded observer enjoys almost global asymptotic stability. This cascaded architecture guarantees robust and consistent estimation of the extended state, including orientation, position, velocity, gravity, and landmark positions, up to the VIO unobservable directions (i.e., a global translation and rotation about gravity). The effectiveness of the proposed scheme is demonstrated through numerical simulations as well as experimental validation on the EuRoC MAV dataset, highlighting its robustness and suitability for real-world VIO applications. | https://arxiv.org/abs/2601.07156 | Academic Papers | svg |
| acb8d061001708f9a998ee441f5f74c3826e02de25fffcf1daaa9939d2a17ab7 | 2026-01-13T00:00:00-05:00 | AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units | arXiv:2601.07160v1 Announce Type: new Abstract: To meet the ever-increasing demand for computational efficiency, Neural Processing Units (NPUs) have become critical in modern AI infrastructure. However, unlocking their full potential requires developing high-performance compute kernels using vendor-specific Domain-Specific Languages (DSLs), a task that demands deep hardware expertise and is labor-intensive. While Large Language Models (LLMs) have shown promise in general code generation, they struggle with the strict constraints and scarcity of training data in the NPU domain. Our preliminary study reveals that state-of-the-art general-purpose LLMs fail to generate functional complex kernels for Ascend NPUs, yielding a near-zero success rate. To address these challenges, we propose AscendKernelGen, a generation-evaluation integrated framework for NPU kernel development. We introduce Ascend-CoT, a high-quality dataset incorporating chain-of-thought reasoning derived from real-world kernel implementations, and KernelGen-LM, a domain-adaptive model trained via supervised fine-tuning and reinforcement learning with execution feedback. Furthermore, we design NPUKernelBench, a comprehensive benchmark for assessing compilation, correctness, and performance across varying complexity levels. Experimental results demonstrate that our approach significantly bridges the gap between general LLMs and hardware-specific coding. Specifically, the compilation success rate on complex Level-2 kernels improves from 0% to 95.5% (Pass@10), while functional correctness achieves 64.3% compared to the baseline's complete failure. These results highlight the critical role of domain-specific reasoning and rigorous evaluation in automating accelerator-aware code generation. | https://arxiv.org/abs/2601.07160 | Academic Papers | svg |
| 21af26387153e134c29ef0d8807cacd59c100ff82c49d4810edd38f666f9c3bf | 2026-01-13T00:00:00-05:00 | Test-time Adaptive Hierarchical Co-enhanced Denoising Network for Reliable Multimodal Classification | arXiv:2601.07163v1 Announce Type: new Abstract: Reliable learning on low-quality multimodal data is a widely concerning issue, especially in safety-critical applications. However, multimodal noise poses a major challenge in this domain and leads existing methods to suffer from two key limitations. First, they struggle to reliably remove heterogeneous data noise, hindering robust multimodal representation learning. Second, they exhibit limited adaptability and generalization when encountering previously unseen noise. To address these issues, we propose the Test-time Adaptive Hierarchical Co-enhanced Denoising Network (TAHCD). On one hand, TAHCD introduces Adaptive Stable Subspace Alignment and Sample-Adaptive Confidence Alignment to reliably remove heterogeneous noise. They account for noise at both global and instance levels and enable joint removal of modality-specific and cross-modality noise, achieving robust learning. On the other hand, TAHCD introduces test-time cooperative enhancement, which adaptively updates the model in response to input noise in a label-free manner, improving adaptability and generalization. This is achieved by collaboratively enhancing the joint removal process of modality-specific and cross-modality noise across global and instance levels according to sample noise. Experiments on multiple benchmarks demonstrate that the proposed method achieves superior classification performance, robustness, and generalization compared with state-of-the-art reliable multimodal learning approaches. | https://arxiv.org/abs/2601.07163 | Academic Papers | svg |
| 57709152d54be698fdbccb472be82b57e58da28b6de74471dfdbda6091423977 | 2026-01-13T00:00:00-05:00 | Offline Meta-Reinforcement Learning with Flow-Based Task Inference and Adaptive Correction of Feature Overgeneralization | arXiv:2601.07164v1 Announce Type: new Abstract: Offline meta-reinforcement learning (OMRL) combines the strengths of learning from diverse datasets in offline RL with the adaptability to new tasks of meta-RL, promising safe and efficient knowledge acquisition by RL agents. However, OMRL still suffers from extrapolation errors due to out-of-distribution (OOD) actions, exacerbated by broad task distributions and Markov Decision Process (MDP) ambiguity in meta-RL setups. Existing research indicates that the generalization of the $Q$ network affects the extrapolation error in offline RL. This paper investigates this relationship by decomposing the $Q$ value into feature and weight components, observing that while decomposition enhances adaptability and convergence in the case of high-quality data, it often leads to policy degeneration or collapse in complex tasks. We observe that decomposed $Q$ values introduce a large estimation bias when the feature encounters OOD samples, a phenomenon we term "feature overgeneralization". To address this issue, we propose FLORA, which identifies OOD samples by modeling feature distributions and estimating their uncertainties. FLORA integrates a return feedback mechanism to adaptively adjust feature components. Furthermore, to learn precise task representations, FLORA explicitly models the complex task distribution using a chain of invertible transformations. We theoretically and empirically demonstrate that FLORA achieves rapid adaptation and meta-policy improvement compared to baselines across various environments. | https://arxiv.org/abs/2601.07164 | Academic Papers | svg |
| dc43eeb17cf00d25802bf784c4e6df75c83dce2a1b7a12ce7799526c54650b22 | 2026-01-13T00:00:00-05:00 | TranSC: Hardware-Aware Design of Transcendental Functions Using Stochastic Logic | arXiv:2601.07172v1 Announce Type: new Abstract: The hardware-friendly implementation of transcendental functions remains a longstanding challenge in design automation. These functions, which cannot be expressed as finite combinations of algebraic operations, pose significant complexity in digital circuit design. This study introduces a novel approach, TranSC, that utilizes stochastic computing (SC) for lightweight yet accurate implementation of transcendental functions. Building on established SC techniques, our method explores alternative random sources, specifically quasi-random Van der Corput low-discrepancy (LD) sequences, instead of conventional pseudo-randomness. This shift enhances both the accuracy and efficiency of SC-based computations. We validate our approach through extensive experiments on various function types, including trigonometric, hyperbolic, and activation functions. The proposed design approach significantly reduces MSE by up to 98% compared to the state-of-the-art solutions while reducing hardware area, power consumption, and energy usage by 33%, 72%, and 64%, respectively. | https://arxiv.org/abs/2601.07172 | Academic Papers | svg |
| 5869d7ea9a8b13f3858b324b8914b3ecb2b18f351a615d7dabf55040e439ce2b | 2026-01-13T00:00:00-05:00 | The MAC scheme for linear elasticity in displacement-stress formulation on non-uniform staggered grids | arXiv:2601.07174v1 Announce Type: new Abstract: A marker-and-cell finite difference method is developed for solving two- and three-dimensional linear elasticity in the displacement-stress formulation on staggered grids. The method employs a staggered grid arrangement, where the displacement components are approximated on the midpoints of cell edges, the normal stresses are defined at the cell centers, and the shear stresses are defined at the grid points. This structure ensures local conservation properties and avoids spurious oscillations in stress approximation. A rigorous mathematical analysis is presented, establishing the stability of the scheme and proving the second-order L2-error super-convergence for both displacement and stress. The proposed method is locking-free with respect to the Lamé constant, making it suitable for both compressible and nearly incompressible elastic materials. Numerical experiments demonstrate the efficiency and robustness of the finite difference scheme, and the computed results show excellent agreement with the theoretical predictions. | https://arxiv.org/abs/2601.07174 | Academic Papers | svg |
| 53bd36ed443fc2f99bce5a548f91829dde5756eea9a2e8dc013cd1b70a5169c1 | 2026-01-13T00:00:00-05:00 | Safe-FedLLM: Delving into the Safety of Federated Large Language Models | arXiv:2601.07177v1 Announce Type: new Abstract: Federated learning (FL) addresses data privacy and silo issues in large language models (LLMs). Most prior work focuses on improving the training efficiency of federated LLMs. However, security in open environments is overlooked, particularly defenses against malicious clients. To investigate the safety of LLMs during FL, we conduct preliminary experiments to analyze potential attack surfaces and defensible characteristics from the perspective of Low-Rank Adaptation (LoRA) weights. We find two key properties of FL: 1) LLMs are vulnerable to attacks from malicious clients in FL, and 2) LoRA weights exhibit distinct behavioral patterns that can be filtered through simple classifiers. Based on these properties, we propose Safe-FedLLM, a probe-based defense framework for federated LLMs, constructing defenses across three dimensions: Step-Level, Client-Level, and Shadow-Level. The core concept of Safe-FedLLM is to perform probe-based discrimination on the LoRA weights locally trained by each client during FL, treating them as high-dimensional behavioral features and using lightweight classification models to determine whether they possess malicious attributes. Extensive experiments demonstrate that Safe-FedLLM effectively enhances the defense capability of federated LLMs without compromising performance on benign data. Notably, our method effectively suppresses malicious data impact without significant impact on training speed, and remains effective even with many malicious clients. Our code is available at: https://github.com/dmqx/Safe-FedLLM. | https://arxiv.org/abs/2601.07177 | Academic Papers | svg |
| de53f36f6cccc3c3570ffcb2e55b8196cb110fab3b2a1791c61b5f56c73e7c35 | 2026-01-13T00:00:00-05:00 | DIVER: Dynamic Iterative Visual Evidence Reasoning for Multimodal Fake News Detection | arXiv:2601.07178v1 Announce Type: new Abstract: Multimodal fake news detection is crucial for mitigating adversarial misinformation. Existing methods, relying on static fusion or LLMs, face computational redundancy and hallucination risks due to weak visual foundations. To address this, we propose DIVER (Dynamic Iterative Visual Evidence Reasoning), a framework grounded in a progressive, evidence-driven reasoning paradigm. DIVER first establishes a strong text-based baseline through language analysis, leveraging intra-modal consistency to filter unreliable or hallucinated claims. Only when textual evidence is insufficient does the framework introduce visual information, where inter-modal alignment verification adaptively determines whether deeper visual inspection is necessary. For samples exhibiting significant cross-modal semantic discrepancies, DIVER selectively invokes fine-grained visual tools (e.g., OCR and dense captioning) to extract task-relevant evidence, which is iteratively aggregated via uncertainty-aware fusion to refine multimodal reasoning. Experiments on Weibo, Weibo21, and GossipCop demonstrate that DIVER outperforms state-of-the-art baselines by an average of 2.72%, while optimizing inference efficiency with a reduced latency of 4.12 s. | https://arxiv.org/abs/2601.07178 | Academic Papers | svg |
| 080efae4e2377f3bc7b1a5c75a619b73e450b7dd86298522e4deb6106de2af95 | 2026-01-13T00:00:00-05:00 | Structured Reasoning for Large Language Models | arXiv:2601.07180v1 Announce Type: new Abstract: Large language models (LLMs) achieve strong performance by generating long chains of thought, but longer traces often introduce redundant or ineffective reasoning steps. One typical behavior is that they often perform unnecessary verification and revisions even after they have reached the correct answer. This limitation stems from the unstructured nature of reasoning trajectories and the lack of targeted supervision for critical reasoning abilities. To address this, we propose Structured Reasoning (SCR), a framework that decouples reasoning trajectories into explicit, evaluable, and trainable components. We mainly implement SCR using a Generate-Verify-Revise paradigm. Specifically, we construct structured training data and apply Dynamic Termination Supervision to guide the model in deciding when to terminate reasoning. To avoid interference between learning signals for different reasoning abilities, we adopt a progressive two-stage reinforcement learning strategy: the first stage targets initial generation and self-verification, and the second stage focuses on revision. Extensive experiments on three backbone models show that SCR substantially improves reasoning efficiency and self-verification. Moreover, compared with existing reasoning paradigms, it reduces output token length by up to 50%. | https://arxiv.org/abs/2601.07180 | Academic Papers | svg |
| 210abc55e2089f32aa7ca6d6f9014ce371bd041321aebd2bd0097ff98a14f3e1 | 2026-01-13T00:00:00-05:00 | ShowUI-Aloha: Human-Taught GUI Agent | arXiv:2601.07181v1 Announce Type: new Abstract: Graphical User Interfaces (GUIs) are central to human-computer interaction, yet automating complex GUI tasks remains a major challenge for autonomous agents, largely due to a lack of scalable, high-quality training data. While recordings of human demonstrations offer a rich data source, they are typically long, unstructured, and lack annotations, making them difficult for agents to learn from. To address this, we introduce ShowUI-Aloha, a comprehensive pipeline that transforms unstructured, in-the-wild human screen recordings from desktop environments into structured, actionable tasks. Our framework includes four key components: a recorder that captures screen video along with precise user interactions like mouse clicks, keystrokes, and scrolls; a learner that semantically interprets these raw interactions and the surrounding visual context, translating them into descriptive natural language captions; a planner that reads the parsed demonstrations, maintains task states, and dynamically formulates the next high-level action plan based on contextual reasoning; and an executor that faithfully carries out these action plans at the OS level, performing precise clicks, drags, text inputs, and window operations with safety checks and real-time feedback. Together, these components provide a scalable solution for collecting and parsing real-world human data, demonstrating a viable path toward building general-purpose GUI agents that can learn effectively from simply observing humans. | https://arxiv.org/abs/2601.07181 | Academic Papers | svg |
| c7a1466bc7968127bd8d0738f871ac1d2b154d52629ed0d0312992f95dfe0065 | 2026-01-13T00:00:00-05:00 | PRPO: Aligning Process Reward with Outcome Reward in Policy Optimization | arXiv:2601.07182v1 Announce Type: new Abstract: Policy optimization for large language models often suffers from sparse reward signals in multi-step reasoning tasks. Critic-free methods like GRPO assign a single normalized outcome reward to all tokens, providing limited guidance for intermediate reasoning. While Process Reward Models (PRMs) offer dense feedback, they risk premature collapse when used alone, as early low-reward tokens can drive policies toward truncated outputs. We introduce Process Relative Policy Optimization (PRPO), which combines outcome reliability with process-level guidance in a critic-free framework. PRPO segments reasoning sequences based on semantic clues, normalizes PRM scores into token-level advantages, and aligns their distribution with outcome advantages through location-parameter shift. On MATH500, PRPO improves Qwen2.5-Math-1.5B accuracy from 61.2% to 64.4% over GRPO using only eight rollouts and no value network, demonstrating efficient fine-grained credit assignment within critic-free optimization. | https://arxiv.org/abs/2601.07182 | Academic Papers | svg |
b50989e3ea71b859c1f869dedbc73d9dffcca610bad23052d78674418d7f4b61
|
2026-01-13T00:00:00-05:00
|
RAIRS: Optimizing Redundant Assignment and List Layout for IVF-Based ANN Search
|
arXiv:2601.07183v1 Announce Type: new Abstract: IVF is one of the most widely used ANNS (Approximate Nearest Neighbors Search) methods in vector databases. The idea of redundant assignment is to assign a data vector to more than one IVF list to reduce the chance of missing true neighbors in IVF search. However, the naive strategy, which selects the second IVF list based on the distance between a data vector and the list centroids, performs poorly. Previous work focuses only on the inner product distance, while there is no optimized list selection study for the most popular Euclidean space. Moreover, the IVF search may access the same vector in more than one list, resulting in redundant distance computation and decreasing query throughput. In this paper, we present RAIRS to address the above two challenges. For the challenge of list selection, we propose an optimized AIR metric for the Euclidean space. AIR takes not only distances but also directions into consideration in order to support queries that are closer to the data vector but farther away from the first chosen list's centroid. For the challenge of redundant distance computation, we propose SEIL, an optimized list layout that exploits shared cells to reduce repeated distance computations for IVF search. Our experimental results using representative real-world data sets show that RAIRS outperforms existing redundant assignment solutions and achieves up to 1.33x improvement over the best-performing IVF method, IVF-PQ Fast Scan with refinement.
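For context, the naive redundant-assignment baseline the abstract critiques can be sketched in a few lines. This is the distance-only strategy, not the paper's AIR metric; function and parameter names are ours.

```python
import numpy as np

def redundant_assign(vectors, centroids, k=2):
    """Naive redundant assignment baseline (the strategy the abstract says
    performs poorly, shown for reference): each data vector is assigned to
    its k nearest IVF list centroids by Euclidean distance alone.

    vectors: (n, d) data vectors; centroids: (m, d) IVF list centroids.
    Returns an (n, k) array of list indices per vector.
    """
    # Pairwise Euclidean distances between every vector and every centroid.
    d = np.linalg.norm(vectors[:, None, :] - centroids[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :k]  # indices of the k closest lists
```

The paper's AIR metric improves on this by also considering direction, favoring a second list on the side where a query could land close to the vector but far from the first centroid.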
|
https://arxiv.org/abs/2601.07183
|
Academic Papers
|
svg
|
1924375ab1c365cf2f4fd13dbfc50137e893dc16c454b4aa729ac8e6b08f8594
|
2026-01-13T00:00:00-05:00
|
Defenses Against Prompt Attacks Learn Surface Heuristics
|
arXiv:2601.07185v1 Announce Type: new Abstract: Large language models (LLMs) are increasingly deployed in security-sensitive applications, where they must follow system- or developer-specified instructions that define the intended task behavior, while completing benign user requests. When adversarial instructions appear in user queries or externally retrieved content, models may override intended logic. Recent defenses rely on supervised fine-tuning with benign and malicious labels. Although these methods achieve high attack rejection rates, we find that they rely on narrow correlations in defense data rather than harmful intent, leading to systematic rejection of safe inputs. We analyze three recurring shortcut behaviors induced by defense fine-tuning. \emph{Position bias} arises when benign content placed later in a prompt is rejected at much higher rates; across reasoning benchmarks, suffix-task rejection rises from below \textbf{10\%} to as high as \textbf{90\%}. \emph{Token trigger bias} occurs when strings common in attack data raise rejection probability even in benign contexts; inserting a single trigger token increases false refusals by up to \textbf{50\%}. \emph{Topic generalization bias} reflects poor generalization beyond the defense data distribution, with defended models suffering test-time accuracy drops of up to \textbf{40\%}. These findings suggest that current prompt-injection defenses frequently respond to attack-like surface patterns rather than the underlying intent. We introduce controlled diagnostic datasets and a systematic evaluation across two base models and multiple defense pipelines, highlighting limitations of supervised fine-tuning for reliable LLM security.
|
https://arxiv.org/abs/2601.07185
|
Academic Papers
|
svg
|
8047c27f87b4e4ff4fbe1a35d099994e30e14deb1fc3c2a117dd7f08cd402314
|
2026-01-13T00:00:00-05:00
|
PROTEA: Securing Robot Task Planning and Execution
|
arXiv:2601.07186v1 Announce Type: new Abstract: Robots need task planning methods to generate action sequences for complex tasks. Recent work on adversarial attacks has revealed significant vulnerabilities in existing robot task planners, especially those built on foundation models. In this paper, we aim to address these security challenges by introducing PROTEA, an LLM-as-a-Judge defense mechanism, to evaluate the security of task plans. PROTEA is developed to address the dimensionality and history challenges in plan safety assessment. We used different LLMs to implement multiple versions of PROTEA for comparison purposes. For systemic evaluations, we created a dataset containing both benign and malicious task plans, where the harmful behaviors were injected at varying levels of stealthiness. Our results provide actionable insights for robotic system practitioners seeking to enhance robustness and security of their task planning systems. Details, dataset and demos are provided: https://protea-secure.github.io/PROTEA/
|
https://arxiv.org/abs/2601.07186
|
Academic Papers
|
svg
|
e6984da0385dbc45dd16c99270d09d23b2fb62d70111ab55e2258d32b86695c5
|
2026-01-13T00:00:00-05:00
|
Standardization of Post-Publication Code Verification by Journals is Possible with the Support of the Community
|
arXiv:2601.07189v1 Announce Type: new Abstract: Reproducibility remains a challenge in machine learning research. While code and data availability requirements have become increasingly common, post-publication verification in journals is still limited and unformalized. This position paper argues that it is plausible for journals and conference proceedings to implement post-publication verification. We propose a modification to ACM pre-publication verification badges that allows independent researchers to submit post-publication code replications to the journal, leading to visible verification badges included in the article metadata. Each article may earn up to two badges, each linked to verified code in its corresponding public repository. We describe the motivation, related initiatives, a formal framework, the potential impact, possible limitations, and alternative views.
|
https://arxiv.org/abs/2601.07189
|
Academic Papers
|
svg
|
112d68222d63a21282c96ad92028ded62f984a1d8bec1aead6c0c4e74feb632d
|
2026-01-13T00:00:00-05:00
|
Active Context Compression: Autonomous Memory Management in LLM Agents
|
arXiv:2601.07190v1 Announce Type: new Abstract: Large Language Model (LLM) agents struggle with long-horizon software engineering tasks due to "Context Bloat." As interaction history grows, computational costs explode, latency increases, and reasoning capabilities degrade due to distraction by irrelevant past errors. Existing solutions often rely on passive, external summarization mechanisms that the agent cannot control. This paper proposes Focus, an agent-centric architecture inspired by the biological exploration strategies of Physarum polycephalum (slime mold). The Focus Agent autonomously decides when to consolidate key learnings into a persistent "Knowledge" block and actively withdraws (prunes) the raw interaction history. Using an optimized scaffold matching industry best practices (persistent bash + string-replacement editor), we evaluated Focus on N=5 context-intensive instances from SWE-bench Lite using Claude Haiku 4.5. With aggressive prompting that encourages frequent compression, Focus achieves 22.7% token reduction (14.9M -> 11.5M tokens) while maintaining identical accuracy (3/5 = 60% for both agents). Focus performed 6.0 autonomous compressions per task on average, with token savings up to 57% on individual instances. We demonstrate that capable models can autonomously self-regulate their context when given appropriate tools and prompting, opening pathways for cost-aware agentic systems without sacrificing task performance.
|
https://arxiv.org/abs/2601.07190
|
Academic Papers
|
svg
|
1a70cc0458a283e8178293e0bb9756ab8d7a83372bab6ce09883c428caec11f3
|
2026-01-13T00:00:00-05:00
|
Relink: Constructing Query-Driven Evidence Graph On-the-Fly for GraphRAG
|
arXiv:2601.07192v1 Announce Type: new Abstract: Graph-based Retrieval-Augmented Generation (GraphRAG) mitigates hallucinations in Large Language Models (LLMs) by grounding them in structured knowledge. However, current GraphRAG methods are constrained by a prevailing \textit{build-then-reason} paradigm, which relies on a static, pre-constructed Knowledge Graph (KG). This paradigm faces two critical challenges. First, the KG's inherent incompleteness often breaks reasoning paths. Second, the graph's low signal-to-noise ratio introduces distractor facts, presenting query-relevant but misleading knowledge that disrupts the reasoning process. To address these challenges, we argue for a \textit{reason-and-construct} paradigm and propose Relink, a framework that dynamically builds a query-specific evidence graph. To tackle incompleteness, \textbf{Relink} instantiates required facts from a latent relation pool derived from the original text corpus, repairing broken paths on the fly. To handle misleading or distractor facts, Relink employs a unified, query-aware evaluation strategy that jointly considers candidates from both the KG and latent relations, selecting those most useful for answering the query rather than relying on their pre-existence. This empowers Relink to actively discard distractor facts and construct the most faithful and precise evidence path for each query. Extensive experiments on five Open-Domain Question Answering benchmarks show that Relink achieves significant average improvements of 5.4\% in EM and 5.2\% in F1 over leading GraphRAG baselines, demonstrating the superiority of our proposed framework.
|
https://arxiv.org/abs/2601.07192
|
Academic Papers
|
svg
|
b5173a860d2848b9526544ffff96c91e06738c9e203b1d1937aaade0f701a6e5
|
2026-01-13T00:00:00-05:00
|
Beyond Variance: Knowledge-Aware LLM Compression via Fisher-Aligned Subspace Diagnostics
|
arXiv:2601.07197v1 Announce Type: new Abstract: Post-training activation compression is essential for deploying Large Language Models (LLMs) on resource-constrained hardware. However, standard methods like Singular Value Decomposition (SVD) are gradient-blind: they preserve high-variance dimensions regardless of their impact on factual knowledge preservation. We introduce Fisher-Aligned Subspace Compression (FASC), a knowledge-aware compression framework that selects subspaces by directly modeling activation-gradient coupling, minimizing a second-order surrogate of the loss function. FASC leverages the Fisher Information Matrix to identify dimensions critical for factual knowledge, which often reside in low-variance but high-gradient-sensitivity subspaces. We propose the Dependence Violation Score (rho) as a general-purpose diagnostic metric that quantifies activation-gradient coupling, revealing where factual knowledge is stored within transformer architectures. Extensive experiments on Mistral-7B and Llama-3-8B demonstrate that FASC preserves 6-8% more accuracy on knowledge-intensive benchmarks (MMLU, LAMA) compared to variance-based methods at 50% rank reduction, effectively enabling a 7B model to match the factual recall of a 13B uncompressed model. Our analysis reveals that rho serves as a fundamental signal of stored knowledge, with high-rho layers emerging only when models internalize factual associations during training.
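The contrast the abstract draws between variance-based and gradient-aware dimension selection can be sketched as below. This is our illustration of the idea, not FASC's exact objective; names and shapes are assumptions.

```python
import numpy as np

def subspace_scores(activations, grads):
    """Sketch contrasting two per-dimension importance scores at one layer
    (our simplification, not the paper's FASC formulation).

    activations, grads: (n_samples, d) matrices of activations and the loss
    gradients with respect to them, sampled at the same layer.
    """
    # What SVD-style, gradient-blind methods effectively keep: high variance.
    var_score = activations.var(axis=0)
    # Activation-gradient coupling: a dimension matters if activations and
    # gradients co-vary, even when its variance is small.
    fisher_score = (activations * grads).mean(axis=0) ** 2
    return var_score, fisher_score
```

A low-variance dimension whose activations move with the loss gradient scores high under the coupling term but would be discarded by variance alone, which is the failure mode FASC targets.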
|
https://arxiv.org/abs/2601.07197
|
Academic Papers
|
svg
|
79c48ebd9a2abb11c13538ccee55567ad6877e14723aff3c09184d702fb3d270
|
2026-01-13T00:00:00-05:00
|
Forward versus Backward: Comparing Reasoning Objectives in Direct Preference Optimization
|
arXiv:2601.07199v1 Announce Type: new Abstract: Large language models exhibit impressive reasoning capabilities yet frequently generate plausible but incorrect solutions, a phenomenon commonly termed hallucination. This paper investigates the effect of training objective composition on reasoning reliability through Direct Preference Optimization. Two complementary training signals are examined: forward chain-of-thought generation, which trains the model to produce correct reasoning traces, and backward verification, which trains the model to verify and acknowledge errors in candidate solutions. Experiments on GSM8K reveal a fundamental trade-off between these objectives. Forward-only DPO training achieves the highest accuracy improvement, increasing from 83.1% to 86.6% (+3.5 percentage points), while backward-only training yields minimal accuracy gains but substantially reduces the false positive rate from 13.4% to 4.3%. Notably, both training variants reduce acknowledgement rate compared to the baseline, suggesting that preference optimization increases model confidence in its outputs. These findings indicate that forward and backward reasoning objectives provide distinct and complementary learning signals: forward training improves problem-solving capability, while backward training improves verification calibration. The complete training and evaluation pipeline, implemented efficiently through Low-Rank Adaptation, is released to facilitate further research.
|
https://arxiv.org/abs/2601.07199
|
Academic Papers
|
svg
|
ae2258313903700fec68af19839685b2616e8b6ff3e1b22145857a4c4fd085ed
|
2026-01-13T00:00:00-05:00
|
Safeguarding LLM Fine-tuning via Push-Pull Distributional Alignment
|
arXiv:2601.07200v1 Announce Type: new Abstract: The inherent safety alignment of Large Language Models (LLMs) is prone to erosion during fine-tuning, even when using seemingly innocuous datasets. While existing defenses attempt to mitigate this via data selection, they typically rely on heuristic, instance-level assessments that neglect the global geometry of the data distribution and fail to explicitly repel harmful patterns. To address this, we introduce Safety Optimal Transport (SOT), a novel framework that reframes safe fine-tuning from an instance-level filtering challenge to a distribution-level alignment task grounded in Optimal Transport (OT). At its core is a dual-reference "push-pull" weight-learning mechanism: SOT optimizes sample importance by actively pulling the downstream distribution towards a trusted safe anchor while simultaneously pushing it away from a general harmful reference. This establishes a robust geometric safety boundary that effectively purifies the training data. Extensive experiments across diverse model families and domains demonstrate that SOT significantly enhances model safety while maintaining competitive downstream performance, achieving a superior safety-utility trade-off compared to baselines.
|
https://arxiv.org/abs/2601.07200
|
Academic Papers
|
svg
|
593aacab0a10bb51bc38203826edb6072d95afe81f9bf22a22da2a4d71a610a5
|
2026-01-13T00:00:00-05:00
|
CalPro: Prior-Aware Evidential--Conformal Prediction with Structure-Aware Guarantees for Protein Structures
|
arXiv:2601.07201v1 Announce Type: new Abstract: Deep protein structure predictors such as AlphaFold provide confidence estimates (e.g., pLDDT) that are often miscalibrated and degrade under distribution shifts across experimental modalities, temporal changes, and intrinsically disordered regions. We introduce CalPro, a prior-aware evidential-conformal framework for shift-robust uncertainty quantification. CalPro combines (i) a geometric evidential head that outputs Normal-Inverse-Gamma predictive distributions via a graph-based architecture; (ii) a differentiable conformal layer that enables end-to-end training with finite-sample coverage guarantees; and (iii) domain priors (disorder, flexibility) encoded as soft constraints. We derive structure-aware coverage guarantees under distribution shift using PAC-Bayesian bounds over ambiguity sets, and show that CalPro maintains near-nominal coverage while producing tighter intervals than standard conformal methods in regions where priors are informative. Empirically, CalPro exhibits at most 5% coverage degradation across modalities (vs. 15-25% for baselines), reduces calibration error by 30-50%, and improves downstream ligand-docking success by 25%. Beyond proteins, CalPro applies to structured regression tasks in which priors encode local reliability, validated on non-biological benchmarks.
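As background for the abstract's conformal layer, standard split-conformal prediction (the non-differentiable baseline CalPro builds on, not CalPro itself) can be sketched in a few lines; names and the calibration setup are our assumptions.

```python
import numpy as np

def conformal_interval(residuals_cal, y_pred, alpha=0.1):
    """Standard split-conformal interval (background sketch, not CalPro's
    differentiable, prior-aware layer).

    residuals_cal: |y - y_hat| on a held-out calibration set.
    y_pred: a new point prediction. Returns an interval with finite-sample
    coverage >= 1 - alpha under exchangeability.
    """
    n = len(residuals_cal)
    # Finite-sample-corrected quantile level: ceil((n+1)(1-alpha)) / n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals_cal, level)
    return y_pred - q, y_pred + q
```

CalPro's contribution, per the abstract, is making this layer differentiable for end-to-end training and tightening the intervals where domain priors (disorder, flexibility) are informative, while keeping the coverage guarantee.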
|
https://arxiv.org/abs/2601.07201
|
Academic Papers
|
svg
|
d02ad6ab50acee037a4e24b2e5d8eb6b1318b24866a3c4cc274bb386a8691392
|
2026-01-13T00:00:00-05:00
|
Intercultural Communication Strategies of a Technology Brand: A Comparative Quantitative Analysis of Xiaomi's Digital Marketing in China and Russia
|
arXiv:2601.07204v1 Announce Type: new Abstract: In the 21st century, the era of globalization, consumers are dispersed across the globe, and brands compete for their attention and loyalty, largely within the digital realm. This reality elevates the importance of effective communication and the transmission of product value across diverse cultural contexts. This study presents a comparative quantitative analysis of the digital marketing strategies of Xiaomi, a leading Chinese technology brand, on two major social media platforms: Sina Weibo in China and VK (VKontakte) in Russia. The research investigates how Xiaomi adapts its communication to align with local cultural values, as defined by the theoretical frameworks of Hofstede and Hall. Through a frequency analysis of text-based posts and emoji usage, this paper demonstrates the significant differences in Xiaomi's communication strategies in these two markets. The findings reveal that in China, a market characterized by high masculinity and low uncertainty avoidance, Xiaomi's messaging focuses on innovation, authority, and aspiration. In contrast, in Russia, a market with high uncertainty avoidance and lower masculinity, the brand's communication is more pragmatic, emphasizing tangible product benefits and building emotional connections. This study contributes to the field of intercultural digital marketing by providing empirical evidence of how a global brand adapts its communication strategies to different cultural contexts. The findings offer valuable insights for multinational corporations seeking to develop effective global marketing strategies in an increasingly interconnected world.
|
https://arxiv.org/abs/2601.07204
|
Academic Papers
|
svg
|
3871aa1a25212b5b9228c24c89df3c6dba11ef2d908abd95dc12bc9725b547b0
|
2026-01-13T00:00:00-05:00
|
LLMRouterBench: A Massive Benchmark and Unified Framework for LLM Routing
|
arXiv:2601.07206v1 Announce Type: new Abstract: Large language model (LLM) routing assigns each query to the most suitable model from an ensemble. We introduce LLMRouterBench, a large-scale benchmark and unified framework for LLM routing. It comprises over 400K instances from 21 datasets and 33 models. Moreover, it provides comprehensive metrics for both performance-oriented routing and performance-cost trade-off routing, and integrates 10 representative routing baselines. Using LLMRouterBench, we systematically re-evaluate the field. While confirming strong model complementarity (the central premise of LLM routing), we find that many routing methods exhibit similar performance under unified evaluation, and several recent approaches, including commercial routers, fail to reliably outperform a simple baseline. Meanwhile, a substantial gap remains to the Oracle, driven primarily by persistent model-recall failures. We further show that backbone embedding models have limited impact, that larger ensembles exhibit diminishing returns compared to careful model curation, and that the benchmark also enables latency-aware analysis. All code and data are available at https://github.com/ynulihao/LLMRouterBench.
|
https://arxiv.org/abs/2601.07206
|
Academic Papers
|
svg
|
44c1db4c2b685ede7179e6b4e84627b0dd2b798f56dfcbcedd333fb48a628241
|
2026-01-13T00:00:00-05:00
|
MAESTRO: Meta-learning Adaptive Estimation of Scalarization Trade-offs for Reward Optimization
|
arXiv:2601.07208v1 Announce Type: new Abstract: Group-Relative Policy Optimization (GRPO) has emerged as an efficient paradigm for aligning Large Language Models (LLMs), yet its efficacy is primarily confined to domains with verifiable ground truths. Extending GRPO to open-domain settings remains a critical challenge, as unconstrained generation entails multi-faceted and often conflicting objectives - such as creativity versus factuality - where rigid, static reward scalarization is inherently suboptimal. To address this, we propose MAESTRO (Meta-learning Adaptive Estimation of Scalarization Trade-offs for Reward Optimization), which introduces a meta-cognitive orchestration layer that treats reward scalarization as a dynamic latent policy, leveraging the model's terminal hidden states as a semantic bottleneck to perceive task-specific priorities. We formulate this as a contextual bandit problem within a bi-level optimization framework, where a lightweight Conductor network co-evolves with the policy by utilizing group-relative advantages as a meta-reward signal. Across seven benchmarks, MAESTRO consistently outperforms single-reward and static multi-objective baselines, while preserving the efficiency advantages of GRPO, and in some settings even reducing redundant generation.
|
https://arxiv.org/abs/2601.07208
|
Academic Papers
|
svg
|
c1e4c21a11070bfe40eeaf0a76c777d44a5b7268c44db9a1f42de2d5efd52cd1
|
2026-01-13T00:00:00-05:00
|
SIRR-LMM: Single-image Reflection Removal via Large Multimodal Model
|
arXiv:2601.07209v1 Announce Type: new Abstract: Glass surfaces create complex interactions of reflected and transmitted light, making single-image reflection removal (SIRR) challenging. Existing datasets suffer from limited physical realism in synthetic data or insufficient scale in real captures. We introduce a synthetic dataset generation framework that path-traces 3D glass models over real background imagery to create physically accurate reflection scenarios with varied glass properties, camera settings, and post-processing effects. To leverage the capabilities of Large Multimodal Model (LMM), we concatenate the image layers into a single composite input, apply joint captioning, and fine-tune the model using task-specific LoRA rather than full-parameter training. This enables our approach to achieve improved reflection removal and separation performance compared to state-of-the-art methods.
|
https://arxiv.org/abs/2601.07209
|
Academic Papers
|
svg
|
08200310ee9c18e3765ea60c5214f58d75b9b790e7c10d62e687536cb15afa74
|
2026-01-13T00:00:00-05:00
|
MI-PRUN: Optimize Large Language Model Pruning via Mutual Information
|
arXiv:2601.07212v1 Announce Type: new Abstract: Large Language Models (LLMs) have become indispensable across various domains, but this comes at the cost of substantial computational and memory resources. Model pruning addresses this by removing redundant components from models. In particular, block pruning can achieve significant compression and inference acceleration. However, existing block pruning methods are often unstable and struggle to attain globally optimal solutions. In this paper, we propose MI-PRUN, a mutual information-based pruning method for LLMs. Specifically, we leverage mutual information to identify redundant blocks by evaluating transitions in hidden states. Additionally, we incorporate the Data Processing Inequality (DPI) to reveal the relationship between the importance of entire contiguous blocks and that of individual blocks. Moreover, we develop the Fast-Block-Select algorithm, which iteratively updates block combinations to achieve a globally optimal solution while significantly improving efficiency. Extensive experiments across various models and datasets demonstrate the stability and effectiveness of our method.
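The core idea of scoring a block by the dependence between its input and output hidden states can be sketched with a simple Gaussian mutual-information proxy. This is our simplification for illustration, not the paper's estimator or its DPI-based selection.

```python
import numpy as np

def block_redundancy(h_in, h_out):
    """Sketch: score a transformer block's redundancy by the statistical
    dependence between its input and output hidden states (Gaussian MI
    proxy; our simplification, not MI-PRUN's estimator).

    h_in, h_out: arrays of hidden states before and after the block.
    High mutual information means the block changes its input little,
    marking it as a pruning candidate.
    """
    x = np.asarray(h_in, dtype=float).ravel()
    y = np.asarray(h_out, dtype=float).ravel()
    r = np.corrcoef(x, y)[0, 1]
    # Gaussian mutual information: MI = -0.5 * log(1 - r^2), clipped for
    # numerical safety when the correlation is (near) perfect.
    return -0.5 * np.log(1.0 - min(r ** 2, 0.999999))
```

A block that merely rescales its input yields near-perfect correlation and a high score (very redundant), while a block that substantially transforms its input yields a lower score and is kept.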
|
https://arxiv.org/abs/2601.07212
|
Academic Papers
|
svg
|
d7aae4f3766469ae293dc097b251e80cd4183c1ede1a43c6742ca2903b393086
|
2026-01-13T00:00:00-05:00
|
BlindU: Blind Machine Unlearning without Revealing Erasing Data
|
arXiv:2601.07214v1 Announce Type: new Abstract: Machine unlearning enables data holders to remove the contribution of their specified samples from trained models to protect their privacy. However, it is paradoxical that most unlearning methods require the unlearning requesters to first upload their data to the server as a prerequisite for unlearning. These methods are infeasible in many privacy-preserving scenarios where servers are prohibited from accessing users' data, such as federated learning (FL). In this paper, we explore how to implement unlearning without revealing the erasing data to the server. We propose \textbf{Blind Unlearning (BlindU)}, which carries out unlearning using compressed representations instead of original inputs. BlindU only involves the server and the unlearning user: the user locally generates privacy-preserving representations, and the server performs unlearning solely on these representations and their labels. For the FL model training, we employ the information bottleneck (IB) mechanism. The encoder of the IB-based FL model learns representations that distort maximum task-irrelevant information from inputs, allowing FL users to generate compressed representations locally. For effective unlearning using compressed representations, BlindU integrates two dedicated unlearning modules tailored explicitly for IB-based models and uses a multiple gradient descent algorithm to balance forgetting and utility retention. While IB compression already protects task-irrelevant information in the inputs, to further enhance privacy protection, we introduce a noise-free differential privacy (DP) masking method applied to the raw erasing data before compression. Theoretical analysis and extensive experimental results illustrate the superiority of BlindU in privacy protection and unlearning effectiveness compared with the best existing privacy-preserving unlearning benchmarks.
|
https://arxiv.org/abs/2601.07214
|
Academic Papers
|
svg
|
91d3863888d5b49e5da4b54b3c3d5c06d1f8bb1653e5189d660466e327772fed
|
2026-01-13T00:00:00-05:00
|
SceneNAT: Masked Generative Modeling for Language-Guided Indoor Scene Synthesis
|
arXiv:2601.07218v1 Announce Type: new Abstract: We present SceneNAT, a single-stage masked non-autoregressive Transformer that synthesizes complete 3D indoor scenes from natural language instructions through only a few parallel decoding passes, offering improved performance and efficiency compared to prior state-of-the-art approaches. SceneNAT is trained via masked modeling over fully discretized representations of both semantic and spatial attributes. By applying a masking strategy at both the attribute level and the instance level, the model can better capture intra-object and inter-object structure. To boost relational reasoning, SceneNAT employs a dedicated triplet predictor for modeling the scene's layout and object relationships by mapping a set of learnable relation queries to a sparse set of symbolic triplets (subject, predicate, object). Extensive experiments on the 3D-FRONT dataset demonstrate that SceneNAT achieves superior performance compared to state-of-the-art autoregressive and diffusion baselines in both semantic compliance and spatial arrangement accuracy, while operating with substantially lower computational cost.
|
https://arxiv.org/abs/2601.07218
|
Academic Papers
|
svg
|
bd19c70d1a8e158715c81a090b816c8e731d217028b8fa42e751d6fd24145983
|
2026-01-13T00:00:00-05:00
|
VENUS: Visual Editing with Noise Inversion Using Scene Graphs
|
arXiv:2601.07219v1 Announce Type: new Abstract: State-of-the-art text-based image editing models often struggle to balance background preservation with semantic consistency, frequently resulting either in the synthesis of entirely new images or in outputs that fail to realize the intended edits. In contrast, scene graph-based image editing addresses this limitation by providing a structured representation of semantic entities and their relations, thereby offering improved controllability. However, existing scene graph editing methods typically depend on model fine-tuning, which incurs high computational cost and limits scalability. To this end, we introduce VENUS (Visual Editing with Noise inversion Using Scene graphs), a training-free framework for scene graph-guided image editing. Specifically, VENUS employs a split prompt conditioning strategy that disentangles the target object of the edit from its background context, while simultaneously leveraging noise inversion to preserve fidelity in unedited regions. Moreover, our proposed approach integrates scene graphs extracted from multimodal large language models with diffusion backbones, without requiring any additional training. Empirically, VENUS substantially improves both background preservation and semantic alignment on PIE-Bench, increasing PSNR from 22.45 to 24.80, SSIM from 0.79 to 0.84, and reducing LPIPS from 0.100 to 0.070 relative to the state-of-the-art scene graph editing model (SGEdit). In addition, VENUS enhances semantic consistency as measured by CLIP similarity (24.97 vs. 24.19). On EditVal, VENUS achieves the highest fidelity with a 0.87 DINO score and, crucially, reduces per-image runtime from 6-10 minutes to only 20-30 seconds. Beyond scene graph-based editing, VENUS also surpasses strong text-based editing baselines such as LEDIT++ and P2P+DirInv, thereby demonstrating consistent improvements across both paradigms.
|
https://arxiv.org/abs/2601.07219
|
Academic Papers
|
svg
|
b65c146b91be13f56ca626100414154120bd4bd22d7d736f2a39fbc548bcb65f
|
2026-01-13T00:00:00-05:00
|
The Roots of Performance Disparity in Multilingual Language Models: Intrinsic Modeling Difficulty or Design Choices?
|
arXiv:2601.07220v1 Announce Type: new Abstract: Multilingual language models (LMs) promise broader NLP access, yet current systems deliver uneven performance across the world's languages. This survey examines why these gaps persist and whether they reflect intrinsic linguistic difficulty or modeling artifacts. We organize the literature around two questions: do linguistic disparities arise from representation and allocation choices (e.g., tokenization, encoding, data exposure, parameter sharing) rather than inherent complexity; and which design choices mitigate inequities across typologically diverse languages. We review linguistic features, such as orthography, morphology, lexical diversity, syntax, information density, and typological distance, linking each to concrete modeling mechanisms. Gaps often shrink when segmentation, encoding, and data exposure are normalized, suggesting much apparent difficulty stems from current modeling choices. We synthesize these insights into design recommendations for tokenization, sampling, architectures, and evaluation to support more balanced multilingual LMs.
|
https://arxiv.org/abs/2601.07220
|
Academic Papers
|
svg
|
3969e636c0c1fe4667cda1ca7c459c2d35685ef8ce371cfc27fc50f4564b70e7
|
2026-01-13T00:00:00-05:00
|
Language-Grounded Multi-Domain Image Translation via Semantic Difference Guidance
|
arXiv:2601.07221v1 Announce Type: new Abstract: Multi-domain image-to-image translation requires grounding semantic differences expressed in natural language prompts into corresponding visual transformations, while preserving unrelated structural and semantic content. Existing methods struggle to maintain structural integrity and provide fine-grained, attribute-specific control, especially when multiple domains are involved. We propose LACE (Language-grounded Attribute Controllable Translation), built on two components: (1) a GLIP-Adapter that fuses global semantics with local structural features to preserve consistency, and (2) a Multi-Domain Control Guidance mechanism that explicitly grounds the semantic delta between source and target prompts into per-attribute translation vectors, aligning linguistic semantics with domain-level visual changes. Together, these modules enable compositional multi-domain control with independent strength modulation for each attribute. Experiments on CelebA(Dialog) and BDD100K demonstrate that LACE achieves high visual fidelity, structural preservation, and interpretable domain-specific control, surpassing prior baselines. This positions LACE as a cross-modal content generation framework bridging language semantics and controllable visual translation.
|
https://arxiv.org/abs/2601.07221
|
Academic Papers
|
svg
|
e84258f876121162b3bc634d4b6972fd7f628d05fc3dddd7bb741093b87481a5
|
2026-01-13T00:00:00-05:00
|
Consolidation or Adaptation? PRISM: Disentangling SFT and RL Data via Gradient Concentration
|
arXiv:2601.07224v1 Announce Type: new Abstract: While Hybrid Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has become the standard paradigm for training LLM agents, effective mechanisms for data allocation between these stages remain largely underexplored. Current data arbitration strategies often rely on surface-level heuristics that fail to diagnose intrinsic learning needs. Since SFT targets pattern consolidation through imitation while RL drives structural adaptation via exploration, misaligning data with these functional roles causes severe optimization interference. We propose PRISM, a dynamics-aware framework grounded in Schema Theory that arbitrates data based on its degree of cognitive conflict with the model's existing knowledge. By analyzing the spatial geometric structure of gradients, PRISM identifies data triggering high spatial concentration as high-conflict signals that require RL for structural restructuring. In contrast, data yielding diffuse updates is routed to SFT for efficient consolidation. Extensive experiments on WebShop and ALFWorld demonstrate that PRISM achieves a Pareto improvement, outperforming state-of-the-art hybrid methods while reducing computational costs by up to 3.22$\times$. Our findings suggest that disentangling data based on internal optimization regimes is crucial for scalable and robust agent alignment.
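A minimal sketch of the gradient-concentration routing idea PRISM describes: measure how spatially concentrated a sample's gradient is, then route concentrated (high-conflict) samples to RL and diffuse ones to SFT. The entropy-based concentration measure and the threshold `tau` are illustrative assumptions; the abstract does not specify the paper's exact geometric statistic.

```python
import math

def concentration(grad, eps=1e-12):
    """Spatial concentration of a gradient vector, measured here as
    1 - normalized entropy of its squared-magnitude distribution
    (an assumed proxy, not the paper's exact statistic).
    Near 1: updates focus on a few coordinates (high conflict);
    near 0: diffuse updates spread across many coordinates."""
    energy = [g * g for g in grad]
    total = sum(energy) + eps
    probs = [e / total for e in energy]
    ent = -sum(p * math.log(p + eps) for p in probs)
    return 1.0 - ent / math.log(len(grad))

def route(grad, tau=0.5):
    """Route a sample to RL (structural adaptation) when its gradient
    is spatially concentrated, otherwise to SFT (consolidation)."""
    return "RL" if concentration(grad) > tau else "SFT"
```

With this proxy, a gradient dominated by one coordinate is routed to RL, while a uniform gradient is routed to SFT.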
|
https://arxiv.org/abs/2601.07224
|
Academic Papers
|
svg
|
12761f743f7b50ba323f84bb313f536fc0731691de9a70e05fb4e9e823963a35
|
2026-01-13T00:00:00-05:00
|
Lost in the Noise: How Reasoning Models Fail with Contextual Distractors
|
arXiv:2601.07226v1 Announce Type: new Abstract: Recent advances in reasoning models and agentic AI systems have led to an increased reliance on diverse external information. However, this shift introduces input contexts that are inherently noisy, a reality that current sanitized benchmarks fail to capture. We introduce NoisyBench, a comprehensive benchmark that systematically evaluates model robustness across 11 datasets in RAG, reasoning, alignment, and tool-use tasks against diverse noise types, including random documents, irrelevant chat histories, and hard negative distractors. Our evaluation reveals a catastrophic performance drop of up to 80% in state-of-the-art models when faced with contextual distractors. Crucially, we find that agentic workflows often amplify these errors by over-trusting noisy tool outputs, and distractors can trigger emergent misalignment even without adversarial intent. We find that prompting, context engineering, SFT, and outcome-reward only RL fail to ensure robustness; in contrast, our proposed Rationale-Aware Reward (RARE) significantly strengthens resilience by incentivizing the identification of helpful information within noise. Finally, we uncover an inverse scaling trend where increased test-time computation leads to worse performance in noisy settings and demonstrate via attention visualization that models disproportionately focus on distractor tokens, providing vital insights for building the next generation of robust, reasoning-capable agents.
|
https://arxiv.org/abs/2601.07226
|
Academic Papers
|
svg
|
bf529396e1a7729ec83f8a3279b0e4d5a22755581c172c0b448f4bea1719d36f
|
2026-01-13T00:00:00-05:00
|
DiSCo: Making Absence Visible in Intelligent Summarization Interfaces
|
arXiv:2601.07229v1 Announce Type: new Abstract: Intelligent interfaces increasingly use large language models to summarize user-generated content, yet these summaries emphasize what is mentioned while overlooking what is missing. This presence bias can mislead users who rely on summaries to make decisions. We present Domain Informed Summarization through Contrast (DiSCo), an expectation-based computational approach that makes absences visible by comparing each entity's content with domain topical expectations captured in reference distributions of aspects typically discussed in comparable accommodations. This comparison identifies aspects that are either unusually emphasized or missing relative to domain norms and integrates them into the generated text. In a user study across three accommodation domains, namely ski, beach, and city center, DiSCo summaries were rated as more detailed and useful for decision making than baseline large language model summaries, although slightly harder to read. The findings show that modeling expectations reduces presence bias and improves both transparency and decision support in intelligent summarization interfaces.
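The expectation-based contrast DiSCo describes can be sketched as below: compare an entity's aspect frequencies against a domain reference distribution and flag aspects that are unusually emphasized or missing. The aspect names, reference shares, and threshold are illustrative assumptions, not the paper's implementation.

```python
def contrast_aspects(entity_counts, reference_dist, threshold=0.10):
    """Flag aspects that deviate from domain norms.

    entity_counts: dict aspect -> mention count for one accommodation
    reference_dist: dict aspect -> expected share in the domain
    Returns (over, missing): aspects unusually emphasized or absent
    relative to the reference distribution.
    """
    total = sum(entity_counts.values()) or 1
    over, missing = [], []
    for aspect, expected in reference_dist.items():
        observed = entity_counts.get(aspect, 0) / total
        if observed - expected > threshold:
            over.append(aspect)
        elif expected - observed > threshold:
            missing.append(aspect)
    return over, missing
```

The `missing` list is what a presence-biased summary would silently omit; DiSCo's contribution is to surface it in the generated text.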
|
https://arxiv.org/abs/2601.07229
|
Academic Papers
|
svg
|
e4b49b55263f1686867b8b6acf848e619a47a71ec1ab35cac4081e9273be86a8
|
2026-01-13T00:00:00-05:00
|
Yes FLoReNce, I Will Do Better Next Time! Agentic Feedback Reasoning for Humorous Meme Detection
|
arXiv:2601.07232v1 Announce Type: new Abstract: Humorous memes blend visual and textual cues to convey irony, satire, or social commentary, posing unique challenges for AI systems that must interpret intent rather than surface correlations. Existing multimodal or prompting-based models generate explanations for humor but operate in an open loop, lacking the ability to critique or refine their reasoning once a prediction is made. We propose FLoReNce, an agentic feedback reasoning framework that treats meme understanding as a closed-loop process during learning and an open-loop process during inference. In the closed loop, a reasoning agent is critiqued by a judge; the error and semantic feedback are converted into control signals and stored in a feedback-informed, non-parametric knowledge base. At inference, the model retrieves similar judged experiences from this KB and uses them to modulate its prompt, enabling better, self-aligned reasoning without finetuning. On the PrideMM dataset, FLoReNce improves both predictive performance and explanation quality over static multimodal baselines, showing that feedback-regulated prompting is a viable path to adaptive meme humor understanding.
|
https://arxiv.org/abs/2601.07232
|
Academic Papers
|
svg
|
3d3e368bbbd6ebb2916a13f6c9c679c031b59d8c1ecc6c32b70c64e39ffcd094
|
2026-01-13T00:00:00-05:00
|
From "Thinking" to "Justifying": Aligning High-Stakes Explainability with Professional Communication Standards
|
arXiv:2601.07233v1 Announce Type: new Abstract: Explainable AI (XAI) in high-stakes domains should help stakeholders trust and verify system outputs. Yet Chain-of-Thought methods reason before concluding, and logical gaps or hallucinations can yield conclusions that do not reliably align with their rationale. Thus, we propose "Result -> Justify", which constrains the output communication to present a conclusion before its structured justification. We introduce SEF (Structured Explainability Framework), operationalizing professional conventions (e.g., CREAC, BLUF) via six metrics for structure and grounding. Experiments across four tasks in three domains validate this approach: all six metrics correlate with correctness (r=0.20-0.42; p<0.001), and SEF achieves 83.9% accuracy (+5.3 over CoT). These results suggest structured justification can improve verifiability and may also improve reliability.
|
https://arxiv.org/abs/2601.07233
|
Academic Papers
|
svg
|
6c9cfb3e46455186fea606789af63223cf97b9fb7dd4a623cd1205b055df042c
|
2026-01-13T00:00:00-05:00
|
Making Absence Visible: The Roles of Reference and Prompting in Recognizing Missing Information
|
arXiv:2601.07234v1 Announce Type: new Abstract: Interactive systems that explain data or support decision making often emphasize what is present while overlooking what is expected but missing. This presence bias limits users' ability to form complete mental models of a dataset or situation. Detecting absence depends on expectations about what should be there, yet interfaces rarely help users form such expectations. We present an experimental study examining how reference framing and prompting influence people's ability to recognize expected but missing categories in datasets. Participants compared distributions across three domains (energy, wealth, and regime) under two reference conditions: Global, presenting a unified population baseline, and Partial, showing several concrete exemplars. Results indicate that absence detection was higher with Partial reference than with Global reference, suggesting that partial, samples-based framing can support expectation formation and absence detection. When participants were prompted to look for what was missing, absence detection rose sharply. We discuss implications for interactive user interfaces and expectation-based visualization design, while considering cognitive trade-offs of reference structures and guided attention.
|
https://arxiv.org/abs/2601.07234
|
Academic Papers
|
svg
|
d5d8ea9634a5f8da43d7398f9fbf8015c53d78da5cdb7136ec602f2f293e92bd
|
2026-01-13T00:00:00-05:00
|
Sentiment Analysis on Movie Reviews: A Deep Dive into Modern Techniques and Open Challenges
|
arXiv:2601.07235v1 Announce Type: new Abstract: This paper presents a comprehensive survey of sentiment analysis methods for movie reviews, a benchmark task that has played a central role in advancing natural language processing. We review the evolution of techniques from early lexicon-based and classical machine learning approaches to modern deep learning architectures and large language models, covering widely used datasets such as IMDb, Rotten Tomatoes, and SST-2, and models ranging from Naive Bayes and support vector machines to LSTM networks, BERT, and attention-based transformers. Beyond summarizing prior work, this survey differentiates itself by offering a comparative, challenge-driven analysis of how these modeling paradigms address domain-specific issues such as sarcasm, negation, contextual ambiguity, and domain shift, which remain open problems in existing literature. Unlike earlier reviews that focus primarily on text-only pipelines, we also synthesize recent advances in multimodal sentiment analysis that integrate textual, audio, and visual cues from movie trailers and clips. In addition, we examine emerging concerns related to interpretability, fairness, and robustness that are often underexplored in prior surveys, and we outline future research directions including zero-shot and few-shot learning, hybrid symbolic--neural models, and real-time deployment considerations. Overall, this abstract provides a domain-focused roadmap that highlights both established solutions and unresolved challenges toward building more accurate, generalizable, and explainable sentiment analysis systems for movie review data.
|
https://arxiv.org/abs/2601.07235
|
Academic Papers
|
svg
|
3583fd6de8dbee87b6fb00a7112013970f3eb8aa38ad847143290c784c937cd3
|
2026-01-13T00:00:00-05:00
|
Group Pattern Selection Optimization: Let LRMs Pick the Right Pattern for Reasoning
|
arXiv:2601.07238v1 Announce Type: new Abstract: Large reasoning models (LRMs) exhibit diverse high-level reasoning patterns (e.g., direct solution, reflection-and-verification, and exploring multiple solutions), yet prevailing training recipes implicitly bias models toward a limited set of dominant patterns. Through a systematic analysis, we identify substantial accuracy variance across these patterns on mathematics and science benchmarks, revealing that a model's default reasoning pattern is often sub-optimal for a given problem. To address this, we introduce Group Pattern Selection Optimization (GPSO), a reinforcement learning framework that extends GRPO by incorporating multi-pattern rollouts, verifier-guided optimal pattern selection per problem, and attention masking during optimization to prevent the leakage of explicit pattern suffixes into the learned policy. By exploring a portfolio of diverse reasoning strategies and optimizing the policy on the most effective ones, GPSO enables the model to internalize the mapping from problem characteristics to optimal reasoning patterns. Extensive experiments demonstrate that GPSO delivers consistent and substantial performance gains across various model backbones and benchmarks, effectively mitigating pattern sub-optimality and fostering more robust, adaptable reasoning. All data and codes are available at https://github.com/wanghanbinpanda/GPSO.
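The verifier-guided pattern-selection step in GPSO can be sketched as follows: roll out every reasoning pattern per problem, score the completions with a verifier, and keep the pattern that scores highest. The data layout and function names are illustrative assumptions; the actual GRPO-style optimization and attention masking are not shown.

```python
def select_patterns(rollouts, verifier):
    """For each problem, keep the reasoning pattern whose rollouts
    the verifier scores highest on average.

    rollouts: dict problem -> dict pattern -> list of completions
    verifier: callable (problem, completion) -> score in [0, 1]
    Returns dict problem -> best pattern for that problem.
    """
    best = {}
    for problem, by_pattern in rollouts.items():
        def avg_score(pattern):
            outs = by_pattern[pattern]
            return sum(verifier(problem, o) for o in outs) / len(outs)
        # Iterating a dict yields its keys, i.e. the pattern names.
        best[problem] = max(by_pattern, key=avg_score)
    return best
```

In GPSO the policy is then optimized on the selected pattern's rollouts, so the model internalizes the problem-to-pattern mapping rather than memorizing explicit pattern suffixes.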
|
https://arxiv.org/abs/2601.07238
|
Academic Papers
|
svg
|
648131ea51c1a6e2aca6850166e4ea031e07254f57174d13744ff0b41d5c6e98
|
2026-01-13T00:00:00-05:00
|
Stochastic CHAOS: Why Deterministic Inference Kills, and Distributional Variability Is the Heartbeat of Artificial Cognition
|
arXiv:2601.07239v1 Announce Type: new Abstract: Deterministic inference is a comforting ideal in classical software: the same program on the same input should always produce the same output. As large language models move into real-world deployment, this ideal has been imported wholesale into inference stacks. Recent work from the Thinking Machines Lab has presented a detailed analysis of nondeterminism in LLM inference, showing how batch-invariant kernels and deterministic attention can enforce bitwise-identical outputs, positioning deterministic inference as a prerequisite for reproducibility and enterprise reliability. In this paper, we take the opposite stance. We argue that, for LLMs, deterministic inference kills. It kills the ability to model uncertainty, suppresses emergent abilities, collapses reasoning into a single brittle path, and weakens safety alignment by hiding tail risks. LLMs implement conditional distributions over outputs, not fixed functions. Collapsing these distributions to a single canonical completion may appear reassuring, but it systematically conceals properties central to artificial cognition. We instead advocate Stochastic CHAOS, treating distributional variability as a signal to be measured and controlled. Empirically, we show that deterministic inference is systematically misleading. Single-sample deterministic evaluation underestimates both capability and fragility, masking failure probability under paraphrases and noise. Phase-like transitions associated with emergent abilities disappear under greedy decoding. Multi-path reasoning degrades when forced onto deterministic backbones, reducing accuracy and diagnostic insight. Finally, deterministic evaluation underestimates safety risk by hiding rare but dangerous behaviors that appear only under multi-sample evaluation.
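The multi-sample evaluation the paper advocates, as opposed to judging one greedy completion, can be sketched as a simple failure-probability estimator. The interface is an illustrative assumption; any sampler and correctness check can be plugged in.

```python
def failure_probability(sample_fn, n_samples, is_correct):
    """Estimate a per-instance failure probability by sampling the
    model's output distribution n_samples times, instead of scoring a
    single deterministic (greedy) completion.

    sample_fn: callable () -> one sampled completion
    is_correct: callable completion -> bool
    """
    failures = sum(not is_correct(sample_fn()) for _ in range(n_samples))
    return failures / n_samples
```

A greedy evaluation corresponds to `n_samples=1` with a deterministic sampler, which is exactly the regime the paper argues underestimates both capability and fragility.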
|
https://arxiv.org/abs/2601.07239
|
Academic Papers
|
svg
|
77803ecc149bc17cd1cfd465e8ba335fd40aa592dc4f9dc218ad84389142a408
|
2026-01-13T00:00:00-05:00
|
Bias-Aware BP Decoding of Quantum Codes via Directional Degeneracy
|
arXiv:2601.07240v1 Announce Type: new Abstract: We study directionally informed belief propagation (BP) decoding for quantum CSS codes, where anisotropic Tanner-graph structure and biased noise concentrate degeneracy along preferred directions. We formalize this by placing orientation weights on Tanner-graph edges, aggregating them into per-qubit directional weights, and defining a \emph{directional degeneracy enumerator} that summarizes how degeneracy concentrates along those directions. A single bias parameter~$\beta$ maps these weights into site-dependent log-likelihood ratios (LLRs), yielding anisotropic priors that plug directly into standard BP$\rightarrow$OSD decoders without changing the code construction. We derive bounds relating directional and Hamming distances, upper bound the number of degenerate error classes per syndrome as a function of distance, rate, and directional bias, and give a MacWilliams-type expression for the directional enumerator. Finite-length simulations under code-capacity noise show significant logical error-rate reductions -- often an order of magnitude at moderate physical error rates -- confirming that modest anisotropy is a simple and effective route to hardware-aware decoding gains.
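One plausible reading of the bias-parameter step is sketched below: tilt the uniform code-capacity prior log((1-p)/p) by each qubit's directional weight, scaled by beta, to obtain site-dependent LLRs for a standard BP decoder. The additive form of the tilt is an assumption made for illustration; the abstract does not give the exact map from beta and the weights to the LLRs.

```python
import math

def directional_llrs(p, weights, beta):
    """Map per-qubit directional weights into site-dependent LLRs.

    p: physical error rate (code-capacity noise)
    weights: per-qubit directional weights aggregated from the
             Tanner-graph edge orientations
    beta: single bias parameter controlling the anisotropy
    Assumed form: uniform prior log((1-p)/p) plus an additive tilt.
    """
    base = math.log((1.0 - p) / p)
    return [base + beta * w for w in weights]
```

With beta = 0 this reduces to the standard isotropic prior, so the anisotropic decoder degrades gracefully to ordinary BP.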
|
https://arxiv.org/abs/2601.07240
|
Academic Papers
|
svg
|
dd876cd0f4cd5e25e3b37ab4209376d727e8e400e7d07826b9825dd021813175
|
2026-01-13T00:00:00-05:00
|
HERE: Hierarchical Active Exploration of Radiance Field with Epistemic Uncertainty Minimization
|
arXiv:2601.07242v1 Announce Type: new Abstract: We present HERE, an active 3D scene reconstruction framework based on neural radiance fields, enabling high-fidelity implicit mapping. Our approach centers around an active learning strategy for camera trajectory generation, driven by accurate identification of unseen regions, which supports efficient data acquisition and precise scene reconstruction. The key to our approach is epistemic uncertainty quantification based on evidential deep learning, which directly captures data insufficiency and exhibits a strong correlation with reconstruction errors. This allows our framework to more reliably identify unexplored or poorly reconstructed regions compared to existing methods, leading to more informed and targeted exploration. Additionally, we design a hierarchical exploration strategy that leverages learned epistemic uncertainty, where local planning extracts target viewpoints from high-uncertainty voxels based on visibility for trajectory generation, and global planning uses uncertainty to guide large-scale coverage for efficient and comprehensive reconstruction. The effectiveness of the proposed method in active 3D reconstruction is demonstrated by achieving higher reconstruction completeness compared to previous approaches on photorealistic simulated scenes across varying scales, while a hardware demonstration further validates its real-world applicability.
|
https://arxiv.org/abs/2601.07242
|
Academic Papers
|
svg
|
fb8bdb36304ae36458d9fdb163fefa7b6e782cd008b71ffe87dfec77c37c3f8b
|
2026-01-13T00:00:00-05:00
|
Learning to Trust the Crowd: A Multi-Model Consensus Reasoning Engine for Large Language Models
|
arXiv:2601.07245v1 Announce Type: new Abstract: Large language models (LLMs) achieve strong average performance yet remain unreliable at the instance level, with frequent hallucinations, brittle failures, and poorly calibrated confidence. We study reliability through the lens of multi-model consensus: given responses from several heterogeneous LLMs, can we learn which answer is most likely correct for a given query? We introduce a Multi-Model Consensus Reasoning Engine that treats the set of LLM outputs as input to a supervised meta-learner. The system maps natural language responses into structured features using semantic embeddings, pairwise similarity and clustering statistics, lexical and structural cues, reasoning-quality scores, confidence estimates, and model-specific priors, and then applies gradient-boosted trees, listwise ranking, and graph neural networks over similarity graphs of answers. Using three open-weight LLMs evaluated on compact, resource-constrained subsets of GSM8K, ARC-Challenge, HellaSwag, and TruthfulQA, our best graph-attention-based consensus model improves macro-average accuracy by 4.6 percentage points over the strongest single LLM and by 8.1 points over majority vote, while also yielding lower Brier scores and fewer TruthfulQA hallucinations. Ablation and feature-importance analyses show that semantic agreement and clustering features are most influential, with reasoning-quality and model-prior features providing complementary gains, suggesting supervised multi-model consensus is a practical route toward more reliable LLM behavior, even in a modest single-machine setup.
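A toy version of the consensus idea is sketched below: combine an agreement feature (here exact-match agreement, standing in for embedding similarity) with per-model prior accuracy, and pick the highest-scoring answer. The product scoring rule is an illustrative stand-in for the learned meta-learner, not the paper's trained model.

```python
def agreement_features(answers):
    """For each candidate answer, the fraction of models (including
    itself) that gave an exactly matching answer. This stands in for
    the paper's embedding-based semantic-agreement features."""
    n = len(answers)
    return [sum(a == b for b in answers) / n for a in answers]

def consensus_pick(answers, model_priors):
    """Score each answer as agreement * that model's prior accuracy
    and return the top-scoring answer (an assumed, untrained stand-in
    for the supervised meta-learner)."""
    feats = agreement_features(answers)
    scores = [f * p for f, p in zip(feats, model_priors)]
    return answers[scores.index(max(scores))]
```

Unlike plain majority vote, this scheme can side with a lone dissenting model whose prior accuracy is high enough to outweigh the agreement of weaker models.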
|
https://arxiv.org/abs/2601.07245
|
Academic Papers
|
svg
|
692bf74daed8c52791519622b9f6c9a690a03ffb6b2ea67b79e455c45f09dbab
|
2026-01-13T00:00:00-05:00
|
Rate-distortion Theory on Non-compact Spaces: A Concentration-compactness Approach
|
arXiv:2601.07246v1 Announce Type: new Abstract: In this paper, we study rate-distortion theory for general sources with an emphasis on the existence of optimal reconstruction distributions. Classical existence results rely on compactness assumptions that are often violated in non-compact settings. By introducing the concentration-compactness principle into the analysis of the rate-distortion functional, we establish the existence of optimal reconstructions under mild coercivity conditions on the distortion function. Our results provide a unified and transparent existence theorem for rate-distortion problems on general non-compact spaces.
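For context, the rate-distortion functional whose optimal reconstruction distributions the paper studies is the classical one; the coercivity condition shown is a generic example of the kind of assumption the abstract refers to, not the paper's exact hypothesis.

```latex
R(D) \;=\; \inf_{P_{\hat X \mid X} \,:\, \mathbb{E}[d(X,\hat X)] \le D} I(X;\hat X),
\qquad
\text{with } d \text{ coercive, e.g. } d(x,\hat x) \to \infty \text{ as } \hat x \text{ leaves compact sets.}
```

On non-compact reconstruction spaces the infimum need not be attained without such a condition, which is the gap the concentration-compactness argument closes.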
|
https://arxiv.org/abs/2601.07246
|
Academic Papers
|
svg
|
f6060f4c65c034ed5dcc471eae19028ff61d68692e3c05ca66e0697ade5fee31
|
2026-01-13T00:00:00-05:00
|
DarwinTOD: LLM Driven Lifelong Self Evolution for Task Oriented Dialog Systems
|
arXiv:2601.07248v1 Announce Type: new Abstract: Traditional task-oriented dialog systems are unable to evolve from ongoing interactions or adapt to new domains after deployment, a critical limitation in dynamic real-world environments. Continual learning approaches depend on episodic retraining with human-curated data, failing to achieve autonomous lifelong improvement. While evolutionary computation and LLM-driven self-improvement offer promising mechanisms for dialog optimization, they lack a unified framework for holistic, iterative strategy refinement. To bridge this gap, we propose DarwinTOD, a lifelong self-evolving dialog framework that systematically integrates these two paradigms, enabling continuous strategy optimization from a zero-shot base without task-specific fine-tuning. DarwinTOD maintains an Evolvable Strategy Bank and operates through a dual-loop process: online multi-agent dialog execution with peer critique, and offline structured evolutionary operations that refine the strategy bank using accumulated feedback. This closed-loop design enables autonomous continuous improvement without human intervention. Extensive experiments show that DarwinTOD surpasses previous state-of-the-art methods and exhibits continuous performance gains throughout evolution. Our work provides a novel framework for building dialog systems with lifelong self-evolution capabilities.
|
https://arxiv.org/abs/2601.07248
|
Academic Papers
|
svg
|
ed3079dbf5fbc6e9e0b214fdd6a5161b089b59dd48d25f29037338ef8bdf19f8
|
2026-01-13T00:00:00-05:00
|
DDT: A Dual-Masking Dual-Expert Transformer for Energy Time-Series Forecasting
|
arXiv:2601.07250v1 Announce Type: new Abstract: Accurate energy time-series forecasting is crucial for ensuring grid stability and promoting the integration of renewable energy, yet it faces significant challenges from complex temporal dependencies and the heterogeneity of multi-source data. To address these issues, we propose DDT, a novel and robust deep learning framework for high-precision time-series forecasting. At its core, DDT introduces two key innovations. First, we design a dual-masking mechanism that synergistically combines a strict causal mask with a data-driven dynamic mask. This novel design ensures theoretical causal consistency while adaptively focusing on the most salient historical information, overcoming the rigidity of traditional masking techniques. Second, our architecture features a dual-expert system that decouples the modeling of temporal dynamics and cross-variable correlations into parallel, specialized pathways, which are then intelligently integrated through a dynamic gated fusion module. We conducted extensive experiments on 7 challenging energy benchmark datasets, including ETTh, Electricity, and Solar. The results demonstrate that DDT consistently outperforms strong state-of-the-art baselines across all prediction horizons, establishing a new benchmark for the task.
|
https://arxiv.org/abs/2601.07250
|
Academic Papers
|
svg
|
d542ee3a30f58fc8f3cb641ae484e87fa0ebba41cd80ea8cfda35d14b07e74df
|
2026-01-13T00:00:00-05:00
|
MeepleLM: A Virtual Playtester Simulating Diverse Subjective Experiences
|
arXiv:2601.07251v1 Announce Type: new Abstract: Recent advancements have expanded the role of Large Language Models in board games from playing agents to creative co-designers. However, a critical gap remains: current systems lack the capacity to offer constructive critique grounded in the emergent user experience. Bridging this gap is fundamental for harmonizing Human-AI collaboration, as it empowers designers to refine their creations via external perspectives while steering models away from biased or unpredictable outcomes. Automating critique for board games presents two challenges: inferring the latent dynamics connecting rules to gameplay without an explicit engine, and modeling the subjective heterogeneity of diverse player groups. To address these, we curate a dataset of 1,727 structurally corrected rulebooks and 150K reviews selected via quality scoring and facet-aware sampling. We augment this data with Mechanics-Dynamics-Aesthetics (MDA) reasoning to explicitly bridge the causal gap between written rules and player experience. We further distill player personas and introduce MeepleLM, a specialized model that internalizes persona-specific reasoning patterns to accurately simulate the subjective feedback of diverse player archetypes. Experiments demonstrate that MeepleLM significantly outperforms latest commercial models (e.g., GPT-5.1, Gemini3-Pro) in community alignment and critique quality, achieving a 70% preference rate in user studies assessing utility. MeepleLM serves as a reliable virtual playtester for general interactive systems, marking a pivotal step towards audience-aligned, experience-aware Human-AI collaboration.
|
https://arxiv.org/abs/2601.07251
|
Academic Papers
|
svg
|
bd5f05be1a1580dedd235fb5b33327dd976073b17f2215318f28c508b1aec00a
|
2026-01-13T00:00:00-05:00
|
SwarmFoam: An OpenFOAM Multi-Agent System Based on Multiple Types of Large Language Models
|
arXiv:2601.07252v1 Announce Type: new Abstract: Numerical simulation is one of the mainstream methods in scientific research, typically performed by professional engineers. With the advancement of multi-agent technology, using collaborating agents to replicate human behavior shows immense potential for intelligent Computational Fluid Dynamics (CFD) simulations. Some multi-agent systems based on Large Language Models have been proposed; however, they exhibit significant limitations when dealing with complex geometries. This paper introduces a new multi-agent simulation framework, SwarmFoam. SwarmFoam integrates functionalities such as multi-modal perception, intelligent error correction, and Retrieval-Augmented Generation, aiming to achieve more complex simulations through dual parsing of images and high-level instructions. Experimental results demonstrate that SwarmFoam adapts well to simulation inputs from different modalities. The overall pass rate for 25 test cases was 84%, with natural language and multi-modal input cases achieving pass rates of 80% and 86.7%, respectively. The work presented by SwarmFoam will further promote the development of intelligent agent methods for CFD.
|
https://arxiv.org/abs/2601.07252
|
Academic Papers
|
svg
|
5257630e65d61f6e8a4f31b9583c6054d04b3e0b8b5a35344f6dce3da0e9ce58
|
2026-01-13T00:00:00-05:00
|
Universal Adversarial Purification with DDIM Metric Loss for Stable Diffusion
|
arXiv:2601.07253v1 Announce Type: new Abstract: Stable Diffusion (SD) often produces degraded outputs when the training dataset contains adversarial noise. Adversarial purification offers a promising solution by removing adversarial noise from contaminated data. However, existing purification methods are primarily designed for classification tasks and fail to address SD-specific adversarial strategies, such as attacks targeting the VAE encoder, UNet denoiser, or both. To address the gap in SD security, we propose Universal Diffusion Adversarial Purification (UDAP), a novel framework tailored for defending adversarial attacks targeting SD models. UDAP leverages the distinct reconstruction behaviors of clean and adversarial images during Denoising Diffusion Implicit Models (DDIM) inversion to optimize the purification process. By minimizing the DDIM metric loss, UDAP can effectively remove adversarial noise. Additionally, we introduce a dynamic epoch adjustment strategy that adapts optimization iterations based on reconstruction errors, significantly improving efficiency without sacrificing purification quality. Experiments demonstrate UDAP's robustness against diverse adversarial methods, including PID (VAE-targeted), Anti-DreamBooth (UNet-targeted), MIST (hybrid), and robustness-enhanced variants like Anti-Diffusion (Anti-DF) and MetaCloak. UDAP also generalizes well across SD versions and text prompts, showcasing its practical applicability in real-world scenarios.
|
https://arxiv.org/abs/2601.07253
|
Academic Papers
|
svg
|
5650d8d3e6fd8cc4361536ef4fca5611f76dca00f248f800494585b96a42de2c
|
2026-01-13T00:00:00-05:00
|
Innovation Capacity of Dynamical Learning Systems
|
arXiv:2601.07257v1 Announce Type: new Abstract: In noisy physical reservoirs, the classical information-processing capacity $C_{\mathrm{ip}}$ quantifies how well a linear readout can realize tasks measurable from the input history, yet $C_{\mathrm{ip}}$ can be far smaller than the observed rank of the readout covariance. We explain this ``missing capacity'' by introducing the innovation capacity $C_{\mathrm{i}}$, the total capacity allocated to readout components orthogonal to the input filtration (Doob innovations, including input-noise mixing). Using a basis-free Hilbert-space formulation of the predictable/innovation decomposition, we prove the conservation law $C_{\mathrm{ip}}+C_{\mathrm{i}}=\mathrm{rank}(\Sigma_{XX})\le d$, so predictable and innovation capacities exactly partition the rank of the observable readout dimension covariance $\Sigma_{XX}\in \mathbb{R}^{\rm d\times d}$. In linear-Gaussian Johnson-Nyquist regimes, $\Sigma_{XX}(T)=S+T N_0$, the split becomes a generalized-eigenvalue shrinkage rule and gives an explicit monotone tradeoff between temperature and predictable capacity. Geometrically, in whitened coordinates the predictable and innovation components correspond to complementary covariance ellipsoids, making $C_{\mathrm{i}}$ a trace-controlled innovation budget. A large $C_{\mathrm{i}}$ forces a high-dimensional innovation subspace with a variance floor and under mild mixing and anti-concentration assumptions this yields extensive innovation-block differential entropy and exponentially many distinguishable histories. Finally, we give an information-theoretic lower bound showing that learning the induced innovation-block law in total variation requires a number of samples that scales with the effective innovation dimension, supporting the generative utility of noisy physical reservoirs.
|
https://arxiv.org/abs/2601.07257
|
Academic Papers
|
svg
|
d1b2bace51307282f8ce3e24f278fab8756de65d133b793abd34bb3789716df0
|
2026-01-13T00:00:00-05:00
|
Simulated Annealing-based Candidate Optimization for Batch Acquisition Functions
|
arXiv:2601.07258v1 Announce Type: new Abstract: Bayesian Optimization with multi-objective acquisition functions such as q-Expected Hypervolume Improvement (qEHVI) requires efficient candidate optimization to maximize acquisition function values. Traditional approaches rely on continuous optimization methods like Sequential Least Squares Programming (SLSQP) for candidate selection. However, these gradient-based methods can become trapped in local optima, particularly in complex or high-dimensional objective landscapes. This paper presents a simulated annealing-based approach for candidate optimization in batch acquisition functions as an alternative to conventional continuous optimization methods. We evaluate our simulated annealing approach against SLSQP across four benchmark multi-objective optimization problems: ZDT1 (30D, 2 objectives), DTLZ2 (7D, 3 objectives), Kursawe (3D, 2 objectives), and Latent-Aware (4D, 2 objectives). Our results demonstrate that simulated annealing consistently achieves superior hypervolume performance compared to SLSQP in most test functions. The improvement is particularly pronounced for DTLZ2 and Latent-Aware problems, where simulated annealing reaches significantly higher hypervolume values and maintains better convergence characteristics. The histogram analysis of objective space coverage further reveals that simulated annealing explores more diverse and optimal regions of the Pareto front. These findings suggest that metaheuristic optimization approaches like simulated annealing can provide more robust and effective candidate optimization for multi-objective Bayesian optimization, offering a promising alternative to traditional gradient-based methods for batch acquisition function optimization.
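A minimal sketch of the kind of simulated-annealing candidate search the paper compares against SLSQP: propose Gaussian perturbations, always accept uphill moves, accept downhill moves with probability exp(delta / T), and cool T geometrically. The acquisition function, proposal scale, and cooling schedule here are illustrative assumptions, not the paper's tuned settings.

```python
import math
import random

def anneal(acquisition, x0, steps=2000, t0=1.0, cooling=0.995,
           scale=0.1, seed=0):
    """Maximize an acquisition function with simulated annealing.

    acquisition: callable list[float] -> float (higher is better)
    x0: starting candidate point
    Returns (best point found, its acquisition value).
    """
    rng = random.Random(seed)
    x, fx = list(x0), acquisition(x0)
    best, fbest, t = list(x), fx, t0
    for _ in range(steps):
        # Gaussian proposal around the current point.
        cand = [xi + rng.gauss(0.0, scale) for xi in x]
        fc = acquisition(cand)
        delta = fc - fx
        # Metropolis acceptance: uphill always, downhill sometimes.
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

Because downhill moves are accepted at high temperature, the search can escape the local optima that trap gradient-based candidate optimizers such as SLSQP.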
|
https://arxiv.org/abs/2601.07258
|
Academic Papers
|
svg
|
a6812e8e43f498a5f3f9ac69f26ff2627097483c22387b624ef1e1bd0103b9d0
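The candidate-optimization idea above can be sketched with a generic simulated annealing loop. Everything here (the acquisition surface, cooling schedule, and step size) is an illustrative stand-in, not the paper's exact configuration: the point is that Boltzmann acceptance of downhill moves lets the search escape the local optima that trap gradient-based optimizers like SLSQP.

```python
import math
import random

def simulated_annealing(acq, dim, iters=2000, t0=1.0, seed=0):
    """Maximize a (possibly multimodal) acquisition function over [0, 1]^dim."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(dim)]
    fx = acq(x)
    best_x, best_f = x[:], fx
    for k in range(iters):
        temp = max(t0 * 0.99 ** k, 1e-6)   # geometric cooling schedule
        cand = [min(1.0, max(0.0, xi + rng.uniform(-0.1, 0.1))) for xi in x]
        fc = acq(cand)
        # Always accept uphill moves; accept downhill moves with Boltzmann
        # probability exp(delta/temp), which decays as the system cools.
        if fc >= fx or rng.random() < math.exp((fc - fx) / temp):
            x, fx = cand, fc
            if fx > best_f:
                best_x, best_f = x[:], fx
    return best_x, best_f

# Toy multimodal acquisition surface: a quadratic peak at (0.7, 0.7) plus a
# high-frequency ripple that creates many shallow local optima.
def acq(x):
    return -sum((xi - 0.7) ** 2 for xi in x) + 0.05 * math.cos(40.0 * x[0])

x_star, f_star = simulated_annealing(acq, dim=2)
```

In a batch setting such as qEHVI, `acq` would score a whole batch of q candidate points and the state `x` would be the flattened batch.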
|
2026-01-13T00:00:00-05:00
|
ActiShade: Activating Overshadowed Knowledge to Guide Multi-Hop Reasoning in Large Language Models
|
arXiv:2601.07260v1 Announce Type: new Abstract: In multi-hop reasoning, multi-round retrieval-augmented generation (RAG) methods typically rely on LLM-generated content as the retrieval query. However, these approaches are inherently vulnerable to knowledge overshadowing - a phenomenon where critical information is overshadowed during generation. As a result, the LLM-generated content may be incomplete or inaccurate, leading to irrelevant retrieval and causing error accumulation during the iteration process. To address this challenge, we propose ActiShade, which detects and activates overshadowed knowledge to guide large language models (LLMs) in multi-hop reasoning. Specifically, ActiShade iteratively detects the overshadowed keyphrase in the given query, retrieves documents relevant to both the query and the overshadowed keyphrase, and generates a new query based on the retrieved documents to guide the next-round iteration. By supplementing the overshadowed knowledge during the formulation of next-round queries while minimizing the introduction of irrelevant noise, ActiShade reduces the error accumulation caused by knowledge overshadowing. Extensive experiments show that ActiShade outperforms existing methods across multiple datasets and LLMs.
|
https://arxiv.org/abs/2601.07260
|
Academic Papers
|
svg
|
412c307d1cb4314c4676fe806e1df14f03d50e4461334a82c653f10618a7c68a
|
2026-01-13T00:00:00-05:00
|
Pseudodata-guided Invariant Representation Learning Boosts the Out-of-Distribution Generalization in Enzymatic Kinetic Parameter Prediction
|
arXiv:2601.07261v1 Announce Type: new Abstract: Accurate prediction of enzyme kinetic parameters is essential for understanding catalytic mechanisms and guiding enzyme engineering. However, existing deep learning-based enzyme-substrate interaction (ESI) predictors often exhibit performance degradation on sequence-divergent, out-of-distribution (OOD) cases, limiting robustness under biologically relevant perturbations. We propose O$^2$DENet, a lightweight, plug-and-play module that enhances OOD generalization via biologically and chemically informed perturbation augmentation and invariant representation learning. O$^2$DENet introduces enzyme-substrate perturbations and enforces consistency between original and augmented enzyme-substrate-pair representations to encourage invariance to distributional shifts. When integrated with representative ESI models, O$^2$DENet consistently improves predictive performance for both $k_{cat}$ and $K_m$ across stringent sequence-identity-based OOD benchmarks, achieving state-of-the-art results among the evaluated methods in terms of accuracy and robustness metrics. Overall, O$^2$DENet provides a general and effective strategy to enhance the stability and deployability of data-driven enzyme kinetics predictors for real-world enzyme engineering applications.
|
https://arxiv.org/abs/2601.07261
|
Academic Papers
|
svg
|
4b8d90b2d5382e4a5082199693167b5407dda099551283c7787ecc4a6061dc73
|
2026-01-13T00:00:00-05:00
|
ColorBrowserAgent: An Intelligent GUI Agent for Complex Long-Horizon Web Automation
|
arXiv:2601.07262v1 Announce Type: new Abstract: The web browser serves as a primary interface for daily human activities, making its automation a critical frontier for Human-Centred AI. While Large Language Models (LLMs) have enabled autonomous agents to interact with web GUIs, their reliability in real-world scenarios is hampered by long-horizon instability and the vast heterogeneity of site designs. In this paper, we introduce ColorBrowserAgent, a framework designed for Collaborative Autonomy in complex web tasks. Our approach integrates two human-centred mechanisms: (1) Progressive Progress Summarization, which mimics human short-term memory to maintain coherence over extended interactions; and (2) Human-in-the-Loop Knowledge Adaptation, which bridges the knowledge gap in diverse environments by soliciting expert intervention only when necessary. This symbiotic design allows the agent to learn from human tips without extensive retraining, effectively combining the scalability of AI with the adaptability of human cognition. Evaluated on the WebArena benchmark using GPT-5, ColorBrowserAgent achieves a state-of-the-art success rate of 71.2\%, demonstrating the efficacy of interactive human assistance in robust web automation.
|
https://arxiv.org/abs/2601.07262
|
Academic Papers
|
svg
|
a8372c3fe899308c662a036c66b8b0e18fadd7eb49750d177feed7a1f74d59ac
|
2026-01-13T00:00:00-05:00
|
When Bots Take the Bait: Exposing and Mitigating the Emerging Social Engineering Attack in Web Automation Agent
|
arXiv:2601.07263v1 Announce Type: new Abstract: Web agents, powered by large language models (LLMs), are increasingly deployed to automate complex web interactions. The rise of open-source frameworks (e.g., Browser Use, Skyvern-AI) has accelerated adoption, but also broadened the attack surface. While prior research has focused on model threats such as prompt injection and backdoors, the risks of social engineering remain largely unexplored. We present the first systematic study of social engineering attacks against web automation agents and design a pluggable runtime mitigation solution. On the attack side, we introduce the AgentBait paradigm, which exploits intrinsic weaknesses in agent execution: inducement contexts can distort the agent's reasoning and steer it toward malicious objectives misaligned with the intended task. On the defense side, we propose SUPERVISOR, a lightweight runtime module that enforces environment and intention consistency alignment between webpage context and intended goals to mitigate unsafe operations before execution. Empirical results show that mainstream frameworks are highly vulnerable to AgentBait, with an average attack success rate of 67.5% and peaks above 80% under specific strategies (e.g., trusted identity forgery). Compared with existing lightweight defenses, our module can be seamlessly integrated across different web automation frameworks and reduces attack success rates by up to 78.1% on average while incurring only a 7.7% runtime overhead and preserving usability. This work reveals AgentBait as a critical new threat surface for web agents and establishes a practical, generalizable defense, advancing the security of this rapidly emerging ecosystem. We reported the details of this attack to the framework developers and received acknowledgment before submission.
|
https://arxiv.org/abs/2601.07263
|
Academic Papers
|
svg
|
c88ad021bd6332dcfc6c639b0b730b1da00bf2338b67d6f799f875e2db31daa4
|
2026-01-13T00:00:00-05:00
|
The Confidence Dichotomy: Analyzing and Mitigating Miscalibration in Tool-Use Agents
|
arXiv:2601.07264v1 Announce Type: new Abstract: Autonomous agents based on large language models (LLMs) are rapidly evolving to handle multi-turn tasks, but ensuring their trustworthiness remains a critical challenge. A fundamental pillar of this trustworthiness is calibration, which refers to an agent's ability to express confidence that reliably reflects its actual performance. While calibration is well-established for static models, its dynamics in tool-integrated agentic workflows remain underexplored. In this work, we systematically investigate verbalized calibration in tool-use agents, revealing a fundamental confidence dichotomy driven by tool type. Specifically, our pilot study identifies that evidence tools (e.g., web search) systematically induce severe overconfidence due to inherent noise in retrieved information, while verification tools (e.g., code interpreters) can ground reasoning through deterministic feedback and mitigate miscalibration. To robustly improve calibration across tool types, we propose a reinforcement learning (RL) fine-tuning framework that jointly optimizes task accuracy and calibration, supported by a holistic benchmark of reward designs. We demonstrate that our trained agents not only achieve superior calibration but also exhibit robust generalization from local training environments to noisy web settings and to distinct domains such as mathematical reasoning. Our results highlight the necessity of domain-specific calibration strategies for tool-use agents. More broadly, this work establishes a foundation for building self-aware agents that can reliably communicate uncertainty in high-stakes, real-world deployments.
|
https://arxiv.org/abs/2601.07264
|
Academic Papers
|
svg
|
1f3d52e615511612b78085a587a66addcf8c20b6e40ad4b803aec04f910ce722
|
2026-01-13T00:00:00-05:00
|
From Landslide Conditioning Factors to Satellite Embeddings: Evaluating the Utilisation of Google AlphaEarth for Landslide Susceptibility Mapping using Deep Learning
|
arXiv:2601.07268v1 Announce Type: new Abstract: Data-driven landslide susceptibility mapping (LSM) typically relies on landslide conditioning factors (LCFs), whose availability, heterogeneity, and preprocessing-related uncertainties can constrain mapping reliability. Recently, Google AlphaEarth (AE) embeddings, derived from multi-source geospatial observations, have emerged as a unified representation of Earth surface conditions. This study evaluated the potential of AE embeddings as alternative predictors for LSM. Two AE representations, including retained principal components and the full set of 64 embedding bands, were systematically compared with conventional LCFs across three study areas (Nantou County, Taiwan; Hong Kong; and part of Emilia-Romagna, Italy) using three deep learning models (CNN1D, CNN2D, and Vision Transformer). Performance was assessed using multiple evaluation metrics, ROC-AUC analysis, error statistics, and spatial pattern assessment. Results showed that AE-based models consistently outperformed LCFs across all regions and models, yielding higher F1-scores, AUC values, and more stable error distributions. Such improvement was most pronounced when using the full 64-band AE representation, with F1-score improvements of approximately 4% to 15% and AUC increases ranging from 0.04 to 0.11, depending on the study area and model. AE-based susceptibility maps also exhibited clearer spatial correspondence with observed landslide occurrences and enhanced sensitivity to localised landslide-prone conditions. Performance improvements were more evident in Nantou and Emilia than in Hong Kong, revealing that closer temporal alignment between AE embeddings and landslide inventories may lead to more effective LSM outcomes. These findings highlight the strong potential of AE embeddings as a standardised and information-rich alternative to conventional LCFs for LSM.
|
https://arxiv.org/abs/2601.07268
|
Academic Papers
|
svg
|
08581961904381716443389a9a1c4098c08ce54e01b17b792144a935cce8a037
|
2026-01-13T00:00:00-05:00
|
Document-Level Zero-Shot Relation Extraction with Entity Side Information
|
arXiv:2601.07271v1 Announce Type: new Abstract: Document-Level Zero-Shot Relation Extraction (DocZSRE) aims to predict unseen relation labels in text documents without prior training on specific relations. Existing approaches rely on Large Language Models (LLMs) to generate synthetic data for unseen labels, which poses challenges for low-resource languages like Malaysian English. These challenges include the incorporation of local linguistic nuances and the risk of factual inaccuracies in LLM-generated data. This paper introduces Document-Level Zero-Shot Relation Extraction with Entity Side Information (DocZSRE-SI) to address limitations in the existing DocZSRE approach. The DocZSRE-SI framework leverages Entity Side Information, such as Entity Mention Descriptions and Entity Mention Hypernyms, to perform ZSRE without depending on LLM-generated synthetic data. The proposed low-complexity model achieves an average improvement of 11.6% in the macro F1-Score compared to baseline models and existing benchmarks. By utilizing Entity Side Information, DocZSRE-SI offers a robust and efficient alternative to error-prone, LLM-based methods, demonstrating significant advancements in handling low-resource languages and linguistic diversity in relation extraction tasks. This research provides a scalable and reliable solution for ZSRE, particularly in contexts like Malaysian English news articles, where traditional LLM-based approaches fall short.
|
https://arxiv.org/abs/2601.07271
|
Academic Papers
|
svg
|
03a829de4af01ee5a53a8e274642bcd4af8287155e10a093465a206f6aafd980
|
2026-01-13T00:00:00-05:00
|
PALUM: Part-based Attention Learning for Unified Motion Retargeting
|
arXiv:2601.07272v1 Announce Type: new Abstract: Retargeting motion between characters with different skeleton structures is a fundamental challenge in computer animation. When source and target characters have vastly different bone arrangements, maintaining the original motion's semantics and quality becomes increasingly difficult. We present PALUM, a novel approach that learns common motion representations across diverse skeleton topologies by partitioning joints into semantic body parts and applying attention mechanisms to capture spatio-temporal relationships. Our method transfers motion to target skeletons by leveraging these skeleton-agnostic representations alongside target-specific structural information. To ensure robust learning and preserve motion fidelity, we introduce a cycle consistency mechanism that maintains semantic coherence throughout the retargeting process. Extensive experiments demonstrate superior performance in handling diverse skeletal structures while maintaining motion realism and semantic fidelity, even when generalizing to previously unseen skeleton-motion combinations. We will make our implementation publicly available to support future research.
|
https://arxiv.org/abs/2601.07272
|
Academic Papers
|
svg
|
7d5bd67b23a0d44b519df0fc693d0ca1de924d8865824665791c3e8a057726c5
|
2026-01-13T00:00:00-05:00
|
GenDet: Painting Colored Bounding Boxes on Images via Diffusion Model for Object Detection
|
arXiv:2601.07273v1 Announce Type: new Abstract: This paper presents GenDet, a novel framework that redefines object detection as an image generation task. In contrast to traditional approaches, GenDet adopts a pioneering approach by leveraging generative modeling: it conditions on the input image and directly generates bounding boxes with semantic annotations in the original image space. GenDet establishes a conditional generation architecture built upon the large-scale pre-trained Stable Diffusion model, formulating the detection task as semantic constraints within the latent space. It enables precise control over bounding box positions and category attributes, while preserving the flexibility of the generative model. This novel methodology effectively bridges the gap between generative models and discriminative tasks, providing a fresh perspective for constructing unified visual understanding systems. Systematic experiments demonstrate that GenDet achieves competitive accuracy compared to discriminative detectors, while retaining the flexibility characteristic of generative methods.
|
https://arxiv.org/abs/2601.07273
|
Academic Papers
|
svg
|
bddf551a4afbf8f0a335a6da37a9ca2cecb5cedc19b419f00359cb3c27d81ec5
|
2026-01-13T00:00:00-05:00
|
Towards Comprehensive Semantic Speech Embeddings for Chinese Dialects
|
arXiv:2601.07274v1 Announce Type: new Abstract: Despite having hundreds of millions of speakers, Chinese dialects lag behind Mandarin in speech and language technologies. Most varieties are primarily spoken, making dialect-to-Mandarin speech-LLMs (large language models) more practical than dialect LLMs. Building dialect-to-Mandarin speech-LLMs requires speech representations with cross-dialect semantic alignment between Chinese dialects and Mandarin. In this paper, we achieve such a cross-dialect semantic alignment by training a speech encoder with ASR (automatic speech recognition)-only data, as demonstrated by speech-to-speech retrieval on a new benchmark of spoken Chinese varieties that we contribute. Our speech encoder further demonstrates state-of-the-art ASR performance on Chinese dialects. Together, our Chinese dialect benchmark, semantically aligned speech representations, and speech-to-speech retrieval evaluation lay the groundwork for future Chinese dialect speech-LLMs. We release the benchmark at https://github.com/kalvinchang/yubao.
|
https://arxiv.org/abs/2601.07274
|
Academic Papers
|
svg
|
b6af1e67baf6aeeda2517b880a2fb65130f5164589a1bc24a97ad3a9e5ee500c
|
2026-01-13T00:00:00-05:00
|
A High-Recall Cost-Sensitive Machine Learning Framework for Real-Time Online Banking Transaction Fraud Detection
|
arXiv:2601.07276v1 Announce Type: new Abstract: Fraudulent activities on digital banking services are becoming more intricate by the day, challenging existing defenses. While older rule-driven methods struggle to keep pace, even precision-focused algorithms fall short when new scams are introduced. These tools typically overlook subtle shifts in criminal behavior, missing crucial signals. Because silent breaches cost institutions far more than flagged but legitimate actions, catching every possible case is crucial. High sensitivity to actual threats becomes essential when oversight leads to heavy losses. One key aim here involves reducing missed fraud cases without excessively increasing false alerts. This study builds a system using group learning methods adjusted through smart threshold choices. Using real-world transaction records shared openly, where fraudulent acts rarely appear among normal activities, tests are run under practical skewed distributions. The outcomes reveal that approximately 91 percent of actual fraud is detected, outperforming standard setups that rely on unchanging rules when dealing with uneven examples across classes. When tested in live settings, the fraud detection system connects directly to an online banking transaction flow, stopping questionable activities before they are completed. Alongside this setup, a browser add-on built for Chrome is designed to flag deceptive web links and reduce threats from harmful sites. These results show that adjusting decisions by cost impact and validating across entire systems makes deployment more stable and realistic for today's digital banking platforms.
|
https://arxiv.org/abs/2601.07276
|
Academic Papers
|
svg
|
163965fcfae1bb54a90c2d6cddd5a9e4e788ec4863559b6d0fdd08b962fc8ad1
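The cost-sensitive thresholding idea above can be sketched on synthetic data. The score distributions, class imbalance, and the 50:1 false-negative-to-false-positive cost ratio below are illustrative assumptions, not the paper's values; the sketch only shows how making missed fraud expensive pushes the chosen decision threshold down and the recall up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scored transactions: 950 legitimate, 50 fraudulent (5% fraud rate),
# with fraud scoring higher on average but overlapping the legitimate mass.
scores = np.concatenate([rng.normal(0.30, 0.15, 950),
                         rng.normal(0.75, 0.15, 50)])
labels = np.concatenate([np.zeros(950), np.ones(50)])

def expected_cost(threshold, c_fn=50.0, c_fp=1.0):
    # A missed fraud (false negative) is assumed to cost 50x a false alarm.
    pred = scores >= threshold
    fn = np.sum(~pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    return c_fn * fn + c_fp * fp

# Sweep candidate thresholds and keep the cost-minimizing one.
grid = np.linspace(0.0, 1.0, 201)
best_t = min(grid, key=expected_cost)

recall = np.sum((scores >= best_t) & (labels == 1)) / 50.0
```

Because each false negative is weighted 50x, the minimizer sits well below the threshold a plain accuracy criterion would pick, trading extra false alarms for high recall on the rare fraud class.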
|
2026-01-13T00:00:00-05:00
|
Parametric Probabilistic Manifold Decomposition for Nonlinear Model Reduction
|
arXiv:2601.07278v1 Announce Type: new Abstract: Probabilistic Manifold Decomposition (PMD) \cite{doi:10.1137/25M1738863}, developed in our earlier work, provides nonlinear model reduction by embedding high-dimensional dynamics onto low-dimensional probabilistic manifolds. PMD has demonstrated strong performance for time-dependent systems. However, its formulation is restricted to temporal dynamics and does not directly accommodate parametric variability, which limits its applicability to tasks such as design optimization, control, and uncertainty quantification. To address these limitations, a \emph{Parametric Probabilistic Manifold Decomposition} (PPMD) is presented for parametric problems. The central advantage of PPMD is its ability to construct continuous, high-fidelity parametric surrogates while retaining the transparency and non-intrusive workflow of PMD. By integrating probabilistic-manifold embeddings with parameter-aware latent learning, PPMD enables smooth predictions across unseen parameter values (such as different boundary or initial conditions). To validate the proposed method, a comprehensive convergence analysis is established for PPMD, covering the approximation of the linear principal subspace, the geometric recovery of the nonlinear solution manifold, and the statistical consistency of the kernel ridge regression used for latent learning. The framework is then numerically demonstrated on two classic flow configurations: flow past a cylinder and backward-facing step flow. Results confirm that PPMD achieves superior accuracy and generalization beyond the training parameter range compared to the conventional proper orthogonal decomposition with Gaussian process regression (POD+GPR) method.
|
https://arxiv.org/abs/2601.07278
|
Academic Papers
|
svg
|
a931f29b5cf4fc0515496888020961ea683ea998ac0f55d0c14175237462e550
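Two of the ingredients named above (a linear principal subspace and kernel ridge regression for latent learning) can be sketched on a toy parametric family. This is not PPMD itself; it is the simpler POD-plus-KRR skeleton those convergence results refer to, with an invented 1-D solution family u(x; p) = sin(p*pi*x) standing in for PDE snapshots.

```python
import numpy as np

# Toy parametric family: 1-D "solution fields" u(x; p) sampled at 20
# training parameter values (stand-ins for expensive PDE snapshots).
params = np.linspace(0.5, 2.0, 20)
xgrid = np.linspace(0.0, 1.0, 100)
snaps = np.array([np.sin(p * np.pi * xgrid) for p in params])  # (20, 100)

# Step 1: linear principal subspace via SVD of centered snapshots (POD).
mean = snaps.mean(axis=0)
U, s, Vt = np.linalg.svd(snaps - mean, full_matrices=False)
r = 5
latent = U[:, :r] * s[:r]            # training latent coordinates, (20, r)

# Step 2: kernel ridge regression from parameter p to latent coordinates.
def rbf(a, b, ell=0.2):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * ell ** 2))

lam = 1e-6
alpha = np.linalg.solve(rbf(params, params) + lam * np.eye(len(params)),
                        latent)

def predict(p_new):
    z = rbf(np.atleast_1d(p_new), params) @ alpha   # predicted latent coords
    return (mean + z @ Vt[:r])[0]                   # lift back to full field

u_hat = predict(1.23)                # unseen parameter inside training range
u_true = np.sin(1.23 * np.pi * xgrid)
rel_err = np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true)
```

PPMD replaces the purely linear subspace with probabilistic-manifold embeddings, but the non-intrusive workflow (snapshots in, continuous parametric surrogate out) is the same shape as this sketch.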
|
2026-01-13T00:00:00-05:00
|
Coalition Tactics: Bribery and Control in Parliamentary Elections
|
arXiv:2601.07279v1 Announce Type: new Abstract: Strategic manipulation of elections is typically studied in the context of promoting individual candidates. In parliamentary elections, however, the focus shifts: voters may care more about the overall governing coalition than the individual parties' seat counts. This paper studies this new problem: manipulating parliamentary elections with the goal of promoting the collective seat count of a coalition of parties. We focus on proportional representation elections, and consider two variants of the problem; one in which the sole goal is to maximize the total number of seats held by the desired coalition, and the other with a dual objective of both promoting the coalition and promoting the relative power of some favorite party within the coalition. We examine two types of strategic manipulations: \emph{bribery}, which allows modifying voters' preferences, and \emph{control}, which allows changing the sets of voters and parties. We consider multiple bribery types, presenting polynomial-time algorithms for some, while proving NP-hardness for others. For control, we provide polynomial-time algorithms for control by adding and deleting voters. In contrast, control by adding and deleting parties, we show, is either impossible (i.e., the problem is immune to control) or computationally hard, in particular, W[1]-hard when parameterized by the number of parties that can be added or deleted.
|
https://arxiv.org/abs/2601.07279
|
Academic Papers
|
svg
|
428faaafbfdf5be89a3b0a2923aed7f45a103fdd76f133469fde4a6b5c0e79c0
|
2026-01-13T00:00:00-05:00
|
ReasonTabQA: A Comprehensive Benchmark for Table Question Answering from Real World Industrial Scenarios
|
arXiv:2601.07280v1 Announce Type: new Abstract: Recent advancements in Large Language Models (LLMs) have significantly catalyzed table-based question answering (TableQA). However, existing TableQA benchmarks often overlook the intricacies of industrial scenarios, which are characterized by multi-table structures, nested headers, and massive scales. These environments demand robust table reasoning through deep structured inference, presenting a significant challenge that remains inadequately addressed by current methodologies. To bridge this gap, we present ReasonTabQA, a large-scale bilingual benchmark encompassing 1,932 tables across 30 industry domains such as energy and automotive. ReasonTabQA provides high-quality annotations for both final answers and explicit reasoning chains, supporting both thinking and no-thinking paradigms. Furthermore, we introduce TabCodeRL, a reinforcement learning method that leverages table-aware verifiable rewards to guide the generation of logical reasoning paths. Extensive experiments on ReasonTabQA and 4 TableQA datasets demonstrate that while TabCodeRL yields substantial performance gains on open-source LLMs, the persistent performance gap on ReasonTabQA underscores the inherent complexity of real-world industrial TableQA.
|
https://arxiv.org/abs/2601.07280
|
Academic Papers
|
svg
|
a978b04b10a01303d2c595e69ecaa455535cb377d97bb5a6a5df25c12db5cfc6
|
2026-01-13T00:00:00-05:00
|
AdaMorph: Unified Motion Retargeting via Embodiment-Aware Adaptive Transformers
|
arXiv:2601.07284v1 Announce Type: new Abstract: Retargeting human motion to heterogeneous robots is a fundamental challenge in robotics, primarily due to the severe kinematic and dynamic discrepancies between varying embodiments. Existing solutions typically resort to training embodiment-specific models, which scales poorly and fails to exploit shared motion semantics. To address this, we present AdaMorph, a unified neural retargeting framework that enables a single model to adapt human motion to diverse robot morphologies. Our approach treats retargeting as a conditional generation task. We map human motion into a morphology-agnostic latent intent space and utilize a dual-purpose prompting mechanism to condition the generation. Instead of simple input concatenation, we leverage Adaptive Layer Normalization (AdaLN) to dynamically modulate the decoder's feature space based on embodiment constraints. Furthermore, we enforce physical plausibility through a curriculum-based training objective that ensures orientation and trajectory consistency via integration. Experimental results on 12 distinct humanoid robots demonstrate that AdaMorph effectively unifies control across heterogeneous topologies, exhibiting strong zero-shot generalization to unseen complex motions while preserving the dynamic essence of the source behaviors.
|
https://arxiv.org/abs/2601.07284
|
Academic Papers
|
svg
|
8759ff3ac45c9e8dfb46ae3844628ac282ad00ca18dd5cc7e455e3bba48e4ea8
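The AdaLN conditioning mechanism mentioned above has a standard form that is easy to show concretely: normalize features, then modulate them with per-channel scale and shift vectors predicted from a conditioning embedding. The embodiment embedding, dimensions, and the random linear head below are hypothetical stand-ins for AdaMorph's learned components.

```python
import numpy as np

def adaln(x, cond_scale, cond_shift, eps=1e-5):
    """Adaptive LayerNorm: normalize along the feature axis, then apply a
    condition-dependent per-channel scale (1 + s) and shift b."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return (1.0 + cond_scale) * x_hat + cond_shift

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 64))   # decoder features (seq_len, dim)
embodiment = rng.standard_normal(32)     # hypothetical robot-morphology embedding

# A small linear head maps the embodiment embedding to per-channel scale and
# shift; the weights here are random stand-ins for learned parameters.
W = rng.standard_normal((32, 128)) * 0.02
scale, shift = np.split(embodiment @ W, 2)

out = adaln(tokens, scale, shift)
```

With a zero scale and shift this reduces to plain LayerNorm, which is why AdaLN is a strictly more expressive drop-in than concatenating the condition to the input.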
|
2026-01-13T00:00:00-05:00
|
Focal Guidance: Unlocking Controllability from Semantic-Weak Layers in Video Diffusion Models
|
arXiv:2601.07287v1 Announce Type: new Abstract: The task of Image-to-Video (I2V) generation aims to synthesize a video from a reference image and a text prompt. This requires diffusion models to reconcile high-frequency visual constraints and low-frequency textual guidance during the denoising process. However, while existing I2V models prioritize visual consistency, how to effectively couple this dual guidance to ensure strong adherence to the text prompt remains underexplored. In this work, we observe that in Diffusion Transformer (DiT)-based I2V models, certain intermediate layers exhibit weak semantic responses (termed Semantic-Weak Layers), as indicated by a measurable drop in text-visual similarity. We attribute this to a phenomenon called Condition Isolation, where attention to visual features becomes partially detached from text guidance and overly relies on learned visual priors. To address this, we propose Focal Guidance (FG), which enhances the controllability from Semantic-Weak Layers. FG comprises two mechanisms: (1) Fine-grained Semantic Guidance (FSG) leverages CLIP to identify key regions in the reference frame and uses them as anchors to guide Semantic-Weak Layers. (2) Attention Cache transfers attention maps from semantically responsive layers to Semantic-Weak Layers, injecting explicit semantic signals and alleviating their over-reliance on the model's learned visual priors, thereby enhancing adherence to textual instructions. To further validate our approach and address the lack of evaluation in this direction, we introduce a benchmark for assessing instruction following in I2V models. On this benchmark, Focal Guidance proves its effectiveness and generalizability, raising the total score on Wan2.1-I2V to 0.7250 (+3.97\%) and boosting the MMDiT-based HunyuanVideo-I2V to 0.5571 (+7.44\%).
|
https://arxiv.org/abs/2601.07287
|
Academic Papers
|
svg
|
ada90e17a38c9ed97c201aa3547d918e5682b043b6a80254a4a811af623d51e1
|
2026-01-13T00:00:00-05:00
|
Kernel Alignment-based Multi-view Unsupervised Feature Selection with Sample-level Adaptive Graph Learning
|
arXiv:2601.07288v1 Announce Type: new Abstract: Although multi-view unsupervised feature selection (MUFS) has demonstrated success in dimensionality reduction for unlabeled multi-view data, most existing methods reduce feature redundancy by focusing on linear correlations among features but often overlook complex nonlinear dependencies. This limits the effectiveness of feature selection. In addition, existing methods fuse similarity graphs from multiple views by employing sample-invariant weights to preserve local structure. However, this process fails to account for differences in local neighborhood clarity among samples within each view, thereby hindering accurate characterization of the intrinsic local structure of the data. In this paper, we propose a Kernel Alignment-based multi-view unsupervised FeatUre selection with Sample-level adaptive graph lEarning method (KAFUSE) to address these issues. Specifically, we first employ kernel alignment with an orthogonal constraint to reduce feature redundancy in both linear and nonlinear relationships. Then, a cross-view consistent similarity graph is learned by applying sample-level fusion to each slice of a tensor formed by stacking similarity graphs from different views, which automatically adjusts the view weights for each sample during fusion. These two steps are integrated into a unified model for feature selection, enabling mutual enhancement between them. Extensive experiments on real multi-view datasets demonstrate the superiority of KAFUSE over state-of-the-art methods.
|
https://arxiv.org/abs/2601.07288
|
Academic Papers
|
svg
|
5dbfdcee7f41abacae155a4deb11f691e0f3a134341a74010e1b482cb57c4731
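Kernel alignment, used above to capture nonlinear feature redundancy, can be illustrated with the standard centered kernel alignment (CKA) statistic: the cosine similarity between centered Gram matrices. KAFUSE's objective adds an orthogonal constraint and operates per feature subset, so this sketch only shows the underlying alignment measure, on invented data where one view is a redundant copy of another.

```python
import numpy as np

def centered_kernel_alignment(K1, K2):
    """Cosine similarity between centered Gram matrices (linear CKA)."""
    n = K1.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K1c, K2c = H @ K1 @ H, H @ K2 @ H
    return np.sum(K1c * K2c) / (np.linalg.norm(K1c) * np.linalg.norm(K2c))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))                 # one feature block
Y = X + 0.05 * rng.standard_normal(X.shape)      # nearly redundant block
Z = rng.standard_normal((50, 6))                 # unrelated block

a_redundant = centered_kernel_alignment(X @ X.T, Y @ Y.T)
a_unrelated = centered_kernel_alignment(X @ X.T, Z @ Z.T)
```

High alignment flags redundant feature groups (`a_redundant` near 1), which is exactly the signal a redundancy-penalizing feature selector would minimize; nonlinear dependencies are captured by swapping the linear Gram matrices for, e.g., RBF kernels.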
|
2026-01-13T00:00:00-05:00
|
VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding
|
arXiv:2601.07290v1 Announce Type: new Abstract: This paper presents VideoLoom, a unified Video Large Language Model (Video LLM) for joint spatial-temporal understanding. To facilitate the development of fine-grained spatial and temporal localization capabilities, we curate LoomData-8.7k, a human-centric video dataset with temporally grounded and spatially localized captions. With this, VideoLoom achieves state-of-the-art or highly competitive performance across a variety of spatial and temporal benchmarks (e.g., 63.1 J&F on ReVOS for referring video object segmentation, and 48.3 R1@0.7 on Charades-STA for temporal grounding). In addition, we introduce LoomBench, a novel benchmark consisting of temporal, spatial, and compositional video-question pairs, enabling a comprehensive evaluation of Video LLMs from diverse aspects. Collectively, these contributions offer a universal and effective suite for joint spatial-temporal video understanding, setting a new standard in multimodal intelligence.
|
https://arxiv.org/abs/2601.07290
|
Academic Papers
|
svg
|
c40d8f0b2119a6052bb6f5c6e6fe07b088a7aa8bfed30859ba5c7d58520e292e
|
2026-01-13T00:00:00-05:00
|
A Visual Semantic Adaptive Watermark grounded by Prefix-Tuning for Large Vision-Language Model
|
arXiv:2601.07291v1 Announce Type: new Abstract: Watermarking has emerged as a pivotal solution for content traceability and intellectual property protection in Large Vision-Language Models (LVLMs). However, vision-agnostic watermarks introduce visually irrelevant tokens and disrupt visual grounding by enforcing indiscriminate pseudo-random biases, while some semantic-aware methods incur prohibitive inference latency due to rejection sampling. In this paper, we propose the VIsual Semantic Adaptive Watermark (VISA-Mark), a novel framework that embeds detectable signals while strictly preserving visual fidelity. Our approach employs a lightweight, efficiently trained prefix-tuner to extract dynamic Visual-Evidence Weights, which quantify the evidentiary support for candidate tokens based on the visual input. These weights guide an adaptive vocabulary partitioning and logits perturbation mechanism, concentrating watermark strength specifically on visually-supported tokens. By actively aligning the watermark with visual evidence, VISA-Mark effectively maintains visual fidelity. Empirical results confirm that VISA-Mark outperforms conventional methods with a 7.8% improvement in visual consistency (Chair-I) and superior semantic fidelity. The framework maintains highly competitive detection accuracy (96.88% AUC) and robust attack resilience (99.3%) without sacrificing inference efficiency, effectively establishing a new standard for reliability-preserving multimodal watermarking.
|
https://arxiv.org/abs/2601.07291
|
Academic Papers
|
svg
|
d83f587aae0756e5a062bc0d4e8065b2d4e4bdf51421c3f7d2b0d836c76094b7
|
2026-01-13T00:00:00-05:00
|
Inference-Time Scaling for Visual AutoRegressive modeling by Searching Representative Samples
|
arXiv:2601.07293v1 Announce Type: new Abstract: While inference-time scaling has significantly enhanced generative quality in large language and diffusion models, its application to vector-quantized (VQ) visual autoregressive modeling (VAR) remains unexplored. We introduce VAR-Scaling, the first general framework for inference-time scaling in VAR, addressing the critical challenge of discrete latent spaces that prohibit continuous path search. We find that VAR scales exhibit two distinct pattern types: general patterns and specific patterns, where later-stage specific patterns conditionally optimize early-stage general patterns. To overcome the discrete latent space barrier in VQ models, we map sampling spaces to quasi-continuous feature spaces via kernel density estimation (KDE), where high-density samples approximate stable, high-quality solutions. This transformation enables effective navigation of sampling distributions. We propose a density-adaptive hybrid sampling strategy: Top-k sampling focuses on high-density regions to preserve quality near distribution modes, while Random-k sampling explores low-density areas to maintain diversity and prevent premature convergence. Consequently, VAR-Scaling optimizes sample fidelity at critical scales to enhance output quality. Experiments in class-conditional and text-to-image evaluations demonstrate significant improvements in the inference process. The code is available at https://github.com/WD7ang/VAR-Scaling.
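The density-adaptive hybrid sampling described in this abstract can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: `hybrid_sample` and `top_frac` are hypothetical names, candidates are scored with SciPy's `gaussian_kde`, and the Top-k/Random-k split is a plain density ranking plus uniform draw from the remainder.

```python
import numpy as np
from scipy.stats import gaussian_kde

def hybrid_sample(features, k, top_frac=0.5, seed=0):
    """Score candidates by KDE density over a quasi-continuous feature
    space, then keep a mix of high-density (quality) and randomly drawn
    low-density (diversity) candidates."""
    rng = np.random.default_rng(seed)
    # gaussian_kde expects shape (n_dims, n_samples).
    kde = gaussian_kde(features.T)
    density = kde(features.T)
    order = np.argsort(density)[::-1]          # highest density first
    n_top = int(round(k * top_frac))
    top_idx = order[:n_top]                    # Top-k: near distribution modes
    rest = order[n_top:]
    rand_idx = rng.choice(rest, size=k - n_top, replace=False)  # Random-k
    return np.concatenate([top_idx, rand_idx])
```

In this sketch, `top_frac` trades off exploitation of distribution modes against exploration of low-density regions; the paper's actual strategy operates on VAR token distributions rather than a generic feature matrix.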
|
https://arxiv.org/abs/2601.07293
|
Academic Papers
|
svg
|
2f0c362b43a7f53d019ed4c067b20613e65b38d9874e45445de978de59edb450
|
2026-01-13T00:00:00-05:00
|
Towards Multi-Behavior Multi-Task Recommendation via Behavior-informed Graph Embedding Learning
|
arXiv:2601.07294v1 Announce Type: new Abstract: Multi-behavior recommendation (MBR) aims to improve the performance w.r.t. the target behavior (i.e., purchase) by leveraging auxiliary behaviors (e.g., click, favourite). However, in real-world scenarios, a recommendation method often needs to process different types of behaviors and generate personalized lists for each task (i.e., each behavior type). Such a new recommendation problem is referred to as multi-behavior multi-task recommendation (MMR). So far, the most powerful MBR methods usually model multi-behavior interactions using a cascading graph paradigm. Although significant progress has been made in optimizing the performance of the target behavior, it often neglects the performance of auxiliary behaviors. To compensate for the deficiencies of the cascading paradigm, we propose a novel solution for MMR, i.e., behavior-informed graph embedding learning (BiGEL). Specifically, we first obtain a set of behavior-aware embeddings by using a cascading graph paradigm. Subsequently, we introduce three key modules to improve the performance of the model. The cascading gated feedback (CGF) module enables a feedback-driven optimization process by integrating feedback from the target behavior to refine the auxiliary behaviors preferences. The global context enhancement (GCE) module integrates the global context to maintain the user's overall preferences, preventing the loss of key preferences due to individual behavior graph modeling. Finally, the contrastive preference alignment (CPA) module addresses the potential changes in user preferences during the cascading process by aligning the preferences of the target behaviors with the global preferences through contrastive learning. Extensive experiments on two real-world datasets demonstrate the effectiveness of our BiGEL compared with ten very competitive methods.
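The contrastive preference alignment (CPA) module above aligns target-behavior preferences with global preferences via contrastive learning. A generic InfoNCE-style loss is the usual building block for such alignment; the sketch below is an assumption-laden illustration of that loss on plain vectors, not BiGEL's actual objective.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.2):
    """InfoNCE-style contrastive loss: pull `anchor` toward `positive`
    and push it away from each vector in `negatives`."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return float(-np.log(pos / (pos + neg)))
```

Here `anchor` would play the role of a target-behavior embedding and `positive` the corresponding global-preference embedding; the temperature `tau` and the choice of negatives are illustrative placeholders.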
|
https://arxiv.org/abs/2601.07294
|
Academic Papers
|
svg
|
20196ca73d5562829f078725f86db809d1ddbcb95294a6319405910c60218985
|
2026-01-13T00:00:00-05:00
|
Stochastic Power-Water Coordination: Unlocking Flexibility in Hybrid RO Desalination Plants via Variable-Speed Pumps and Tank Mixing
|
arXiv:2601.07295v1 Announce Type: new Abstract: Water desalination plants (DPs) are among the most critical infrastructures and largest electricity loads in water-scarce regions worldwide. Although reverse osmosis (RO) desalination is the most energy-efficient and dominant technology, it remains energy-intensive but can offer substantial flexibility potential for power systems. This paper proposes a coordinated operation framework for power systems and DPs that explicitly accounts for both systems' operational constraints and fully unlocks DP flexibility. To achieve this, a detailed DP model is developed, incorporating the characteristics of an actual high-pressure pump with variable-speed operation, on-off operation with flushing requirements, water quality constraints, and water dynamics and salt mixing in the storage tank. By proactively managing freshwater storage and tank salinity in a closed-loop coordinated scheduling framework, the operational flexibility of the DP is significantly enhanced. With appropriate simplification and linearization, the resulting coordinated scheduling problem is formulated as a tractable mixed-integer linear programming (MILP) model, and a two-step decomposed commitment-scheduling stochastic optimization (TDCSO) is proposed to efficiently address uncertainties. Case studies validate the proposed approach and demonstrate up to a 6% operating cost reduction.
|
https://arxiv.org/abs/2601.07295
|
Academic Papers
|
svg
|
048cb82e2e65ae3bae6f05149182fdedc5d857985326865ea7993d9865404dcd
|
2026-01-13T00:00:00-05:00
|
LRAS: Advanced Legal Reasoning with Agentic Search
|
arXiv:2601.07296v1 Announce Type: new Abstract: While Large Reasoning Models (LRMs) have demonstrated exceptional logical capabilities in mathematical domains, their application to the legal field remains hindered by the strict requirements for procedural rigor and adherence to legal logic. Existing legal LLMs, which rely on "closed-loop reasoning" derived solely from internal parametric knowledge, frequently suffer from lack of self-awareness regarding their knowledge boundaries, leading to confident yet incorrect conclusions. To address this challenge, we present Legal Reasoning with Agentic Search (LRAS), the first framework designed to transition legal LLMs from static and parametric "closed-loop thinking" to dynamic and interactive "Active Inquiry". By integrating Introspective Imitation Learning and Difficulty-aware Reinforcement Learning, LRAS enables LRMs to identify knowledge boundaries and handle legal reasoning complexity. Empirical results demonstrate that LRAS outperforms state-of-the-art baselines by 8.2-32%, with the most substantial gains observed in tasks requiring deep reasoning with reliable knowledge. We will release our data and models for further exploration soon.
|
https://arxiv.org/abs/2601.07296
|
Academic Papers
|
svg
|
f3b2c99010012a290f42559b9ad6c14bdfa54c327efaf482b431aacd6db072d4
|
2026-01-13T00:00:00-05:00
|
Mimic Human Cognition, Master Multi-Image Reasoning: A Meta-Action Framework for Enhanced Visual Understanding
|
arXiv:2601.07298v1 Announce Type: new Abstract: While Multimodal Large Language Models (MLLMs) excel at single-image understanding, they exhibit significantly degraded performance in multi-image reasoning scenarios. Multi-image reasoning presents fundamental challenges including complex inter-relationships between images and scattered critical information across image sets. Inspired by human cognitive processes, we propose the Cognition-Inspired Meta-Action Framework (CINEMA), a novel approach that decomposes multi-image reasoning into five structured meta-actions: Global, Focus, Hint, Think, and Answer, which explicitly model the sequential cognitive steps humans naturally employ. For cold-start training, we introduce a Retrieval-Based Tree Sampling strategy that generates high-quality meta-action trajectories to bootstrap the model with reasoning patterns. During reinforcement learning, we adopt a two-stage paradigm: an exploration phase with a Diversity-Preserving Strategy to avoid entropy collapse, followed by an annealed exploitation phase with DAPO to gradually strengthen exploitation. To train our model, we construct a dataset of 57k cold-start and 58k reinforcement learning instances spanning multi-image, multi-frame, and single-image tasks. We conduct extensive evaluations on multi-image reasoning benchmarks, video understanding benchmarks, and single-image benchmarks, achieving competitive state-of-the-art performance on several key benchmarks. Our model surpasses GPT-4o on the MUIR and MVMath benchmarks and notably outperforms specialized video reasoning models on video understanding benchmarks, demonstrating the effectiveness and generalizability of our human cognition-inspired reasoning framework.
|
https://arxiv.org/abs/2601.07298
|
Academic Papers
|
svg
|
9b3da63360906117f591f9aad1ff2eab5fc2223268391f4dc28025064899bcb4
|
2026-01-13T00:00:00-05:00
|
Engineering Decisions in MBSE: Insights for a Decision Capture Framework Development
|
arXiv:2601.07301v1 Announce Type: new Abstract: Decision-making is a core engineering design activity that conveys the engineer's knowledge and translates it into courses of action. Capturing this form of knowledge can reap potential benefits for the engineering teams and enhance development efficiency. Despite its clear value, traditional decision capture often requires a significant amount of effort and still falls short of capturing the necessary context for reuse. Model-based systems engineering (MBSE) can be a promising solution to address these challenges by embedding decisions directly within system models, which can reduce the capture workload while maintaining explicit links to requirements, behaviors, and architectural elements. This article discusses a lightweight framework for integrating decision capture into MBSE workflows by representing decision alternatives as system model slices. Using a simplified industry example from aircraft architecture, we discuss the main challenges associated with decision capture and propose preliminary solutions to address these challenges.
|
https://arxiv.org/abs/2601.07301
|
Academic Papers
|
svg
|
1a223c850767024e284b96e28245b28288ef07101dd7455cf24a942a0d28609e
|
2026-01-13T00:00:00-05:00
|
ESDD2: Environment-Aware Speech and Sound Deepfake Detection Challenge Evaluation Plan
|
arXiv:2601.07303v1 Announce Type: new Abstract: Audio recorded in real-world environments often contains a mixture of foreground speech and background environmental sounds. With rapid advances in text-to-speech, voice conversion, and other generation models, either component can now be modified independently. Such component-level manipulations are harder to detect, as the remaining unaltered component can mislead systems designed for whole deepfake audio, and they often sound more natural to human listeners. To address this gap, we have proposed the CompSpoofV2 dataset and a separation-enhanced joint learning framework. CompSpoofV2 is a large-scale curated dataset designed for component-level audio anti-spoofing, which contains over 250k audio samples, with a total duration of approximately 283 hours. Based on CompSpoofV2 and the separation-enhanced joint learning framework, we launch the Environment-Aware Speech and Sound Deepfake Detection Challenge (ESDD2), focusing on component-level spoofing, where both speech and environmental sounds may be manipulated or synthesized, creating a more challenging and realistic detection scenario. The challenge will be held in conjunction with the IEEE International Conference on Multimedia and Expo 2026 (ICME 2026).
|
https://arxiv.org/abs/2601.07303
|
Academic Papers
|
svg
|
011ee68c8de99ca819f7c50ed473779f5d267b7f128030e79fbf6d5dae418758
|
2026-01-13T00:00:00-05:00
|
Heterogeneous Multi-Expert Reinforcement Learning for Long-Horizon Multi-Goal Tasks in Autonomous Forklifts
|
arXiv:2601.07304v1 Announce Type: new Abstract: Autonomous mobile manipulation in unstructured warehouses requires a balance between efficient large-scale navigation and high-precision object interaction. Traditional end-to-end learning approaches often struggle to handle the conflicting demands of these distinct phases. Navigation relies on robust decision-making over large spaces, while manipulation needs high sensitivity to fine local details. Forcing a single network to learn these different objectives simultaneously often causes optimization interference, where improving one task degrades the other. To address these limitations, we propose a Heterogeneous Multi-Expert Reinforcement Learning (HMER) framework tailored for autonomous forklifts. HMER decomposes long-horizon tasks into specialized sub-policies controlled by a Semantic Task Planner. This structure separates macro-level navigation from micro-level manipulation, allowing each expert to focus on its specific action space without interference. The planner coordinates the sequential execution of these experts, bridging the gap between task planning and continuous control. Furthermore, to solve the problem of sparse exploration, we introduce a Hybrid Imitation-Reinforcement Training Strategy. This method uses expert demonstrations to initialize the policy and Reinforcement Learning for fine-tuning. Experiments in Gazebo simulations show that HMER significantly outperforms sequential and end-to-end baselines. Our method achieves a task success rate of 94.2% (compared to 62.5% for baselines), reduces operation time by 21.4%, and maintains placement error within 1.5 cm, validating its efficacy for precise material handling.
|
https://arxiv.org/abs/2601.07304
|
Academic Papers
|
svg
|
27035d6657760ef66b3fc50d312a0db5d02c9560d9468c52e3c097098966b22b
|
2026-01-13T00:00:00-05:00
|
Memory-Based Malware Detection under Limited Data Conditions: A Comparative Evaluation of TabPFN and Ensemble Models
|
arXiv:2601.07305v1 Announce Type: new Abstract: Artificial intelligence and machine learning have significantly advanced malware research by enabling automated threat detection and behavior analysis. However, the availability of exploitable data is limited, due to the absence of large datasets with real-world data. Despite the progress of AI in cybersecurity, malware analysis still suffers from this data scarcity, which limits model generalization. In order to tackle this difficulty, this work investigates TabPFN, a learning-free model designed for low-data regimes. We evaluate its performance against established baselines such as Random Forest, LightGBM and XGBoost, across multiple class configurations. Our experimental results indicate that TabPFN surpasses all other models in low-data regimes, with a 2% to 6% improvement observed across multiple performance metrics. However, this increase in performance has an impact on its computation time in a particular case. These findings highlight both the promise and the practical limitations of integrating TabPFN into cybersecurity workflows.
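The low-data evaluation protocol described above, fitting each model on a deliberately small training split and scoring on a large held-out set, can be sketched as follows. This is an illustrative harness on synthetic data, not the paper's benchmark: `low_data_benchmark` is a hypothetical helper, and a scikit-learn Random Forest stands in for the ensemble baselines (TabPFN would plug in the same way via its `fit`/`predict` interface).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def low_data_benchmark(model, n_train, seed=0):
    """Fit on a small stratified training split and score F1 on a
    large held-out set, mimicking a data-scarce malware setting."""
    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=8, random_state=seed)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=n_train, stratify=y, random_state=seed)
    model.fit(X_tr, y_tr)
    return f1_score(y_te, model.predict(X_te))

# Example: evaluate an ensemble baseline with only 50 labeled samples.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
score_50 = low_data_benchmark(rf, 50)
```

Sweeping `n_train` over several small values and comparing curves across models is how the 2% to 6% gaps reported above would typically be measured.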
|
https://arxiv.org/abs/2601.07305
|
Academic Papers
|
svg
|
35fe0839c2b811a1075235946f966197272bd1486ea9fc37e1a5533b65e96c20
|
2026-01-13T00:00:00-05:00
|
Low-Altitude Satellite-AAV Collaborative Joint Mobile Edge Computing and Data Collection via Diffusion-based Deep Reinforcement Learning
|
arXiv:2601.07307v1 Announce Type: new Abstract: The integration of satellite and autonomous aerial vehicle (AAV) communications has become essential for the scenarios requiring both wide coverage and rapid deployment, particularly in remote or disaster-stricken areas where the terrestrial infrastructure is unavailable. Furthermore, emerging applications increasingly demand simultaneous mobile edge computing (MEC) and data collection (DC) capabilities within the same aerial network. However, jointly optimizing these operations in heterogeneous satellite-AAV systems presents significant challenges due to limited on-board resources and competing demands under dynamic channel conditions. In this work, we investigate a satellite-AAV-enabled joint MEC-DC system where these platforms collaborate to serve ground devices (GDs). Specifically, we formulate a joint optimization problem to minimize the average MEC end-to-end delay and AAV energy consumption while maximizing the collected data. Since the formulated optimization problem is a non-convex mixed-integer nonlinear programming (MINLP) problem, we propose a Q-weighted variational policy optimization-based joint AAV movement control, GD association, offloading decision, and bandwidth allocation (QAGOB) approach. Specifically, we reformulate the optimization problem as an action space-transformed Markov decision process to adapt the variable action dimensions and hybrid action space. Subsequently, QAGOB leverages the multi-modal generation capacities of diffusion models to optimize policies and can achieve better sample efficiency while controlling the diffusion costs during training. Simulation results show that QAGOB outperforms five other benchmarks, including traditional DRL and diffusion-based DRL algorithms. Furthermore, the MEC-DC joint optimization achieves significant advantages when compared to the separate optimization of MEC and DC.
|
https://arxiv.org/abs/2601.07307
|
Academic Papers
|
svg
|
b7cd45960144f695a1efc0c2d9ab6dad098c6f2ea18a6f83acdb376c73d70a98
|
2026-01-13T00:00:00-05:00
|
Bringing Computation to the data: Interoperable serverless function execution for astrophysical data analysis in the SRCNet
|
arXiv:2601.07308v1 Announce Type: new Abstract: Serverless computing is a paradigm in which the underlying infrastructure is fully managed by the provider, enabling applications and services to be executed with elastic resource provisioning and minimal operational overhead. A core model within this paradigm is Function-as-a-Service (FaaS), where lightweight functions are deployed and triggered on demand, scaling seamlessly with workload. FaaS offers flexibility, cost-effectiveness, and fine-grained scalability, qualities particularly relevant for large-scale scientific infrastructures where data volumes are too large to centralise and computation must increasingly occur close to the data. The Square Kilometre Array Observatory (SKAO) exemplifies this challenge. Once operational, it will generate about 700 PB of data products annually, distributed across the SKA Regional Centre Network (SRCNet), a federation of international centres providing storage, computing, and analysis services. In such a context, FaaS offers a mechanism to bring computation to the data. We studied the principles of serverless and FaaS computing and explored their application to radio astronomy workflows. Representative functions for astrophysical data analysis were developed and deployed, including micro-functions derived from existing libraries and wrappers around domain-specific applications. In particular, a Gaussian convolution function was implemented and integrated within the SRCNet ecosystem. The use case demonstrates that FaaS can be embedded into the existing SRCNet ecosystem of services, allowing functions to run directly at sites where data replicas are stored. This reduces latency, minimises transfers, and improves efficiency, aligning with federated, data-proximate computation. The results show that serverless models provide a scalable and efficient pathway to address the data volumes of the SKA era.
|
https://arxiv.org/abs/2601.07308
|
Academic Papers
|
svg
|
14af22124a4521ef24b6946fe0155d0b0d9f067120fb9aed4ed252eb84c1f99f
|
2026-01-13T00:00:00-05:00
|
ARM: Role-Conditioned Neuron Transplantation for Training-Free Generalist LLM Agent Merging
|
arXiv:2601.07309v1 Announce Type: new Abstract: Interactive large language model agents have advanced rapidly, but most remain specialized to a single environment and fail to adapt robustly to other environments. Model merging offers a training-free alternative by integrating multiple experts into a single model. In this paper, we propose Agent-Role Merging (ARM), an activation-guided, role-conditioned neuron transplantation method for model merging in LLM agents. ARM extends existing merging methods from static natural language tasks to multi-turn agent scenarios and improves generalization across various interactive environments. This is achieved with a well-designed three-step framework: 1) constructing merged backbones, 2) selecting neurons based on role-conditioned activation analysis, and 3) transplanting neurons for fine-grained refinement. Without gradient-based optimization, ARM improves cross-benchmark generalization while remaining efficient. Across diverse domains, the model obtained via ARM merging outperforms prior model merging methods and domain-specific expert models, while demonstrating strong out-of-domain generalization.
|
https://arxiv.org/abs/2601.07309
|
Academic Papers
|
svg
|
015db61b4c6e16c39f2ef3447c4c2d8097c486b7da0886601afcaa6c59be6823
|
2026-01-13T00:00:00-05:00
|
Revisiting the Ordering of Channel and Spatial Attention: A Comprehensive Study on Sequential and Parallel Designs
|
arXiv:2601.07310v1 Announce Type: new Abstract: Attention mechanisms have become a core component of deep learning models, with Channel Attention and Spatial Attention being the two most representative architectures. Current research on their fusion strategies primarily bifurcates into sequential and parallel paradigms, yet the selection process remains largely empirical, lacking systematic analysis and unified principles. We systematically compare channel-spatial attention combinations under a unified framework, building an evaluation suite of 18 topologies across four classes: sequential, parallel, multi-scale, and residual. Across two vision and nine medical datasets, we uncover a "data scale-method-performance" coupling law: (1) in few-shot tasks, the "Channel-Multi-scale Spatial" cascaded structure achieves optimal performance; (2) in medium-scale tasks, parallel learnable fusion architectures demonstrate superior results; (3) in large-scale tasks, parallel structures with dynamic gating yield the best performance. Additionally, experiments indicate that the "Spatial-Channel" order is more stable and effective for fine-grained classification, while residual connections mitigate vanishing gradient problems across varying data scales. We thus propose scenario-based guidelines for building future attention modules. Code is open-sourced at https://github.com/DWlzm.
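The sequential versus parallel fusion paradigms compared in this abstract can be made concrete with a small NumPy sketch. This is a toy illustration under stated assumptions, not the paper's evaluation suite: both attention maps are plain sigmoid gates over pooled activations, and `parallel_fused` uses a fixed scalar weight where the paper studies learnable fusion and dynamic gating.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """Gate each channel by a sigmoid of its global average; x: (C, H, W)."""
    w = _sigmoid(x.mean(axis=(1, 2)))
    return x * w[:, None, None]

def spatial_attention(x):
    """Gate each spatial location by a sigmoid of the channel mean."""
    w = _sigmoid(x.mean(axis=0))
    return x * w[None, :, :]

def sequential_cs(x):
    """'Channel -> Spatial' cascade (one of the sequential topologies)."""
    return spatial_attention(channel_attention(x))

def parallel_fused(x, alpha=0.5):
    """Weighted parallel fusion; alpha stands in for a learnable weight."""
    return alpha * channel_attention(x) + (1 - alpha) * spatial_attention(x)
```

Swapping the call order in `sequential_cs` gives the "Spatial-Channel" variant that the study finds more stable for fine-grained classification.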
|
https://arxiv.org/abs/2601.07310
|
Academic Papers
|
svg
|
81bfb3b173bd4e97940d8efabd7c407212e4992bc53762db0aeea340f43e961d
|
2026-01-13T00:00:00-05:00
|
PsyCLIENT: Client Simulation via Conversational Trajectory Modeling for Trainee Practice and Model Evaluation in Mental Health Counseling
|
arXiv:2601.07312v1 Announce Type: new Abstract: LLM-based client simulation has emerged as a promising tool for training novice counselors and evaluating automated counseling systems. However, existing client simulation approaches face three key challenges: (1) limited diversity and realism in client profiles, (2) the lack of a principled framework for modeling realistic client behaviors, and (3) a scarcity in Chinese-language settings. To address these limitations, we propose PsyCLIENT, a novel simulation framework grounded in conversational trajectory modeling. By conditioning LLM generation on predefined real-world trajectories that incorporate explicit behavior labels and content constraints, our approach ensures diverse and realistic interactions. We further introduce PsyCLIENT-CP, the first open-source Chinese client profile dataset, covering 60 distinct counseling topics. Comprehensive evaluations involving licensed professional counselors demonstrate that PsyCLIENT significantly outperforms baselines in terms of authenticity and training effectiveness. Notably, the simulated clients are nearly indistinguishable from human clients, achieving an approximately 95% expert confusion rate in discrimination tasks. These findings indicate that conversational trajectory modeling effectively bridges the gap between theoretical client profiles and dynamic, realistic simulations, offering a robust solution for mental health education and research. Code and data will be released to facilitate future research in mental health counseling.
|
https://arxiv.org/abs/2601.07312
|
Academic Papers
|
svg
|
dfd9e657a77573d262408d8d95a70be3bf0965e8fe3bfbbd2fe6240f9b113aa1
|
2026-01-13T00:00:00-05:00
|
Explaining Machine Learning Predictive Models through Conditional Expectation Methods
|
arXiv:2601.07313v1 Announce Type: new Abstract: The rapid adoption of complex Artificial Intelligence (AI) and Machine Learning (ML) models has led to their characterization as black boxes due to the difficulty of explaining their internal decision-making processes. This lack of transparency hinders users' ability to understand, validate and trust model behavior, particularly in high-risk applications. Although explainable AI (XAI) has made significant progress, there remains a need for versatile and effective techniques to address increasingly complex models. This work introduces Multivariate Conditional Expectation (MUCE), a model-agnostic method for local explainability designed to capture prediction changes from feature interactions. MUCE extends Individual Conditional Expectation (ICE) by exploring a multivariate grid of values in the neighborhood of a given observation at inference time, providing graphical explanations that illustrate the local evolution of model predictions. In addition, two quantitative indices, stability and uncertainty, summarize local behavior and assess model reliability. Uncertainty is further decomposed into uncertainty+ and uncertainty- to capture asymmetric effects that global measures may overlook. The proposed method is validated using XGBoost models trained on three datasets: two synthetic (2D and 3D) to evaluate behavior near decision boundaries, and one transformed real-world dataset to test adaptability to heterogeneous feature types. Results show that MUCE effectively captures complex local model behavior, while the stability and uncertainty indices provide meaningful insight into prediction confidence. MUCE, together with the ICE modification and the proposed indices, offers a practical contribution to local explainability, enabling both graphical and quantitative insights that enhance the interpretability of predictive models and support more trustworthy and transparent decision-making.
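Since MUCE extends Individual Conditional Expectation, the underlying ICE computation, varying features of one observation over a local grid and recording the model's predictions, can be sketched as below. The function names and the range-based stability summary are illustrative assumptions, not the paper's exact definitions of its stability and uncertainty indices.

```python
import numpy as np

def ice_grid(model, x0, feat_idx, grid):
    """Individual Conditional Expectation: vary feature `feat_idx` of
    observation `x0` over `grid`, holding all other features fixed,
    and return the model's predictions along the grid."""
    X = np.tile(x0, (len(grid), 1))
    X[:, feat_idx] = grid
    return model(X)

def local_stability(preds):
    """Toy stability summary: spread of predictions in the local
    neighborhood (smaller -> more stable local behavior)."""
    return float(np.ptp(preds))
```

MUCE generalizes this to a multivariate grid over several features at once, which is what lets it capture prediction changes driven by feature interactions rather than one feature at a time.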
|
https://arxiv.org/abs/2601.07313
|
Academic Papers
|
svg
|
9ca81575831238194bbee111ae0ccc1f18b928ad1c8190a941e9321d77ea6395
|
2026-01-13T00:00:00-05:00
|
Mitrasamgraha: A Comprehensive Classical Sanskrit Machine Translation Dataset
|
arXiv:2601.07314v1 Announce Type: new Abstract: While machine translation is regarded as a "solved problem" for many high-resource languages, close analysis quickly reveals that this is not the case for content that shows challenges such as poetic language, philosophical concepts, multi-layered metaphorical expressions, and more. Sanskrit literature is a prime example of this, as it combines a large number of such challenges in addition to inherent linguistic features like sandhi, compounding, and heavy morphology, which further complicate NLP downstream tasks. It spans multiple millennia of text production time as well as a large breadth of different domains, ranging from ritual formulas via epic narratives, philosophical treatises, poetic verses up to scientific material. As of now, there is a pronounced lack of publicly available resources that cover these different domains and temporal layers of Sanskrit. We therefore introduce Mitrasamgraha, a high-quality Sanskrit-to-English machine translation dataset consisting of 391,548 bitext pairs, more than four times larger than the largest previously available Sanskrit dataset, Itihāsa. It covers a time period of more than three millennia and a broad range of historical Sanskrit domains. In contrast to web-crawled datasets, the temporal and domain annotation of this dataset enables fine-grained study of domain and time period effects on MT performance. We also release a validation set of 5,587 and a test set of 5,552 post-corrected bitext pairs. We conduct experiments benchmarking commercial and open models on this dataset and fine-tune NLLB and Gemma models on the dataset, showing significant improvements, while still recognizing significant challenges in the translation of complex compounds, philosophical concepts, and multi-layered metaphors. We also analyze how in-context learning on this dataset impacts the performance of commercial models.
|
https://arxiv.org/abs/2601.07314
|
Academic Papers
|
svg
|
241b97ef1810eb8d7dc4b083381d110d197c8f595cc8e5b5b1dc17460f97b1e0
|
2026-01-13T00:00:00-05:00
|
VLM-CAD: VLM-Optimized Collaborative Agent Design Workflow for Analog Circuit Sizing
|
arXiv:2601.07315v1 Announce Type: new Abstract: Analog mixed-signal circuit sizing involves complex trade-offs within high-dimensional design spaces. Existing automatic analog circuit sizing approaches often underutilize circuit schematics and lack the explainability required for industry adoption. To tackle these challenges, we propose a Vision Language Model-optimized collaborative agent design workflow (VLM-CAD), which analyzes circuits, optimizes DC operating points, performs inference-based sizing and executes external sizing optimization. We integrate Image2Net to annotate circuit schematics and generate a structured JSON description for precise interpretation by Vision Language Models. Furthermore, we propose an Explainable Trust Region Bayesian Optimization method (ExTuRBO) that employs collaborative warm-starting from agent-generated seeds and offers dual-granularity sensitivity analysis for external sizing optimization, supporting a comprehensive final design report. Experiment results on amplifier sizing tasks using 180nm, 90nm, and 45nm Predictive Technology Models demonstrate that VLM-CAD effectively balances power and performance, achieving a 100% success rate in optimizing an amplifier with a complementary input and a class-AB output stage, while maintaining total runtime under 43 minutes across all experiments.
|
https://arxiv.org/abs/2601.07315
|
Academic Papers
|
svg
|
45a72ba2a017dbaf2046fb756198390cd42b5533428b9fabcd8ee687f19baac4
|
2026-01-13T00:00:00-05:00
|
BEAT-Net: Injecting Biomimetic Spatio-Temporal Priors for Interpretable ECG Classification
|
arXiv:2601.07316v1 Announce Type: new Abstract: Although deep learning has advanced automated electrocardiogram (ECG) diagnosis, prevalent supervised methods typically treat recordings as undifferentiated one-dimensional (1D) signals or two-dimensional (2D) images. This formulation compels models to learn physiological structures implicitly, resulting in data inefficiency and opacity that diverge from medical reasoning. To address these limitations, we propose BEAT-Net, a Biomimetic ECG Analysis with Tokenization framework that reformulates the problem as a language modeling task. Utilizing a QRS tokenization strategy to transform continuous signals into biologically aligned heartbeat sequences, the architecture explicitly decomposes cardiac physiology through specialized encoders that extract local beat morphology while normalizing spatial lead perspectives and modeling temporal rhythm dependencies. Evaluations across three large-scale benchmarks demonstrate that BEAT-Net matches the diagnostic accuracy of dominant convolutional neural network (CNN) architectures while substantially improving robustness. The framework exhibits exceptional data efficiency, recovering fully supervised performance using only 30 to 35 percent of annotated data. Moreover, learned attention mechanisms provide inherent interpretability by spontaneously reproducing clinical heuristics, such as Lead II prioritization for rhythm analysis, without explicit supervision. These findings indicate that integrating biological priors offers a computationally efficient and interpretable alternative to data-intensive large-scale pre-training.
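The QRS tokenization strategy above, turning a continuous ECG into a sequence of heartbeat "tokens", can be sketched with a simple peak-based segmentation. This is an assumption-laden illustration, not BEAT-Net's tokenizer: R-peaks are found with SciPy's `find_peaks` using a crude amplitude threshold, and `half_window` is a hypothetical parameter for the fixed beat window.

```python
import numpy as np
from scipy.signal import find_peaks

def beat_tokens(ecg, fs, half_window=0.25):
    """Split a 1-D ECG into fixed-length windows centred on detected
    R-peaks, yielding a 'sentence' of heartbeat tokens.

    ecg: 1-D signal; fs: sampling rate in Hz;
    half_window: seconds of signal kept on each side of a peak."""
    # Crude R-peak detection: tall peaks at least ~0.4 s apart.
    peaks, _ = find_peaks(ecg, height=0.5 * ecg.max(),
                          distance=int(0.4 * fs))
    w = int(half_window * fs)
    # Keep only beats whose full window lies inside the recording.
    return [ecg[p - w:p + w] for p in peaks
            if p - w >= 0 and p + w <= len(ecg)]
```

Each returned window would then be embedded by a beat-morphology encoder, with rhythm modeled over the resulting token sequence, which is the language-modeling reformulation the abstract describes.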
|
https://arxiv.org/abs/2601.07316
|
Academic Papers
|
svg
|
61a3b583704efd8a26581fab8ab66eb1248d82ccb21404a83fa9637913d4145c
|
2026-01-13T00:00:00-05:00
|
Engineering Favorable Propagation: Near-Field IRS Deployment for Spatial Multiplexing
|
arXiv:2601.07317v1 Announce Type: new Abstract: In intelligent reflecting surface (IRS)-assisted multiple-input multiple-output (MIMO) systems, a strong line-of-sight (LoS) link is required to compensate for the severe cascaded path loss. However, such a link renders the effective channel highly rank-deficient and fundamentally limits spatial multiplexing. To overcome this limitation, this paper leverages the large aperture of sparse arrays to harness near-field spherical wavefronts, and establishes a deterministic deployment criterion that strategically positions the IRS in the near field of a base station (BS). This placement exploits the spherical wavefronts of the BS-IRS link to engineer decorrelated channels, thereby fundamentally overcoming the rank deficiency issue in far-field cascaded channels. Based on a physical channel model for the sparse BS array and the IRS, we characterize the rank properties and inter-user correlation of the cascaded BS-IRS-user channel. We further derive a closed-form favorable propagation metric that reveals how the sparse array geometry and the IRS position can be tuned to reduce inter-user channel correlation. The resulting geometry-driven deployment rule provides a simple guideline for creating a favorable propagation environment with enhanced effective degrees of freedom. The favorable channel statistics induced by our deployment criterion enable a low-complexity maximum ratio transmission (MRT) precoding scheme. This serves as the foundation for an efficient algorithm that jointly optimizes the IRS phase shifts and power allocation based solely on long-term statistical channel state information (CSI). Simulation results validate the effectiveness of our deployment criterion and demonstrate that our optimization framework achieves significant performance gains over benchmark schemes.
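The central claim — that near-field spherical wavefronts over a large sparse aperture decorrelate users — can be checked numerically by building steering vectors from exact per-element distances. The geometry (a 16-element sparse array, two users at different ranges) and the normalized correlation metric below are illustrative assumptions, not the paper's channel model.

```python
import numpy as np

def steering_near(pos_array, src, wavelength):
    """Near-field (spherical-wavefront) steering vector: phase from the
    exact distance between each array element and the source. Illustrative
    sketch; geometry and normalization are assumptions."""
    d = np.linalg.norm(pos_array - src, axis=1)   # exact element-source range
    return np.exp(-2j * np.pi * d / wavelength) / np.sqrt(len(pos_array))

def correlation(h1, h2):
    """Normalized inter-channel correlation |h1^H h2| / (||h1|| ||h2||)."""
    return abs(h1.conj() @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))
```

Under a far-field planar-wavefront model, two users at the same angle but different ranges would have identical steering vectors (correlation 1); with the spherical model above, range differences enter the per-element phases and drive the correlation down, which is the decorrelation effect the deployment criterion exploits.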
|
https://arxiv.org/abs/2601.07317
|
Academic Papers
|
svg
|
9052142238198902e2060c88afb2badbaeb263bcd4a523d738ddaed3682b07d8
|
2026-01-13T00:00:00-05:00
|
Segmental Advantage Estimation: Enhancing PPO for Long-Context LLM Training
|
arXiv:2601.07320v1 Announce Type: new Abstract: Training Large Language Models (LLMs) for reasoning tasks is increasingly driven by Reinforcement Learning with Verifiable Rewards (RLVR), where Proximal Policy Optimization (PPO) provides a principled framework for stable policy updates. However, the practical application of PPO is hindered by unreliable advantage estimation in the sparse-reward RLVR regime. This issue arises because the sparse rewards in RLVR lead to inaccurate intermediate value predictions, which in turn introduce significant bias when aggregated at every token by Generalized Advantage Estimation (GAE). To address this, we introduce Segmental Advantage Estimation (SAE), which mitigates the bias that GAE can incur in RLVR. Our key insight is that aggregating $n$-step advantages at every token (as in GAE) is unnecessary and often introduces excessive bias, since individual tokens carry minimal information. Instead, SAE first partitions the generated sequence into coherent sub-segments using low-probability tokens as heuristic boundaries. It then selectively computes variance-reduced advantage estimates only from these information-rich segment transitions, effectively filtering out noise from intermediate tokens. Our experiments demonstrate that SAE achieves superior performance, with marked improvements in final scores, training stability, and sample efficiency. These gains are shown to be consistent across multiple model sizes, and a correlation analysis confirms that our proposed advantage estimator achieves a higher correlation with an approximate ground-truth advantage, justifying its superior performance.
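The contrast between per-token GAE and a segment-level estimator can be sketched concretely. The GAE recursion below is standard; the segmental variant is a hypothetical simplification of SAE, in which boundaries sit at low-probability tokens and each segment gets a single return-minus-value estimate, broadcast to its tokens. The boundary threshold and the within-segment estimator are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gae(rewards, values, gamma=1.0, lam=0.95):
    """Standard per-token Generalized Advantage Estimation:
    A_t = sum_l (gamma*lam)^l * delta_{t+l}, with
    delta_t = r_t + gamma*V_{t+1} - V_t (terminal V = 0)."""
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    for t in reversed(range(T)):
        next_v = values[t + 1] if t + 1 < T else 0.0
        delta = rewards[t] + gamma * next_v - values[t]
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

def segmental_advantages(rewards, values, token_logprobs, thresh=-2.0):
    """Hypothetical segment-level sketch: partition at low-probability
    tokens, then use one reward-to-go-minus-value estimate per segment
    (gamma = 1), skipping the noisy per-token value predictions."""
    T = len(rewards)
    starts = sorted({0} | {t for t in range(1, T)
                           if token_logprobs[t] < thresh})
    ret = np.cumsum(rewards[::-1])[::-1]       # undiscounted reward-to-go
    adv = np.zeros(T)
    for i, s in enumerate(starts):
        e = starts[i + 1] if i + 1 < len(starts) else T
        adv[s:e] = ret[s] - values[s]          # one estimate per segment
    return adv
```

Note how the segmental estimator only ever reads the value function at segment boundaries, which is the abstract's point: intermediate tokens contribute little information but, through GAE's per-token deltas, inject the value network's bias everywhere.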
|
https://arxiv.org/abs/2601.07320
|
Academic Papers
|
svg
|
6f9f9d43870cbb3282b01380b52fae76f27d36171f447c42df398cdc63705c51
|
2026-01-13T00:00:00-05:00
|
Performance Bounds of Joint Detection with Kalman Filtering and Channel Decoding for Wireless Networked Control Systems
|
arXiv:2601.07322v1 Announce Type: new Abstract: Joint detection uses Kalman filtering (KF) to estimate the prior probability of control outputs to assist channel decoding. In this paper, we regard joint detection as maximum a posteriori (MAP) decoding and derive lower and upper bounds based on the pairwise error probability, considering system interference, quantization interval, and weight distribution. We first derive the limiting bounds as the signal-to-noise ratio (SNR) goes to infinity and the system interference goes to zero. Then, we construct an infinite-state Markov chain describing the consecutive packet losses of the control systems to derive the MAP bounds. Finally, the MAP bounds are approximated as the bounds of the transition probability from the state with no packet loss to the state with consecutive single packet loss. The simulation results show that the MAP performance of a $\left(64,16\right)$ polar code with 16-bit CRC coincides with the limiting upper bound as the SNR increases and achieves a $3.0$ dB performance gain over the normal approximation of the finite-blocklength rate at a block error rate of $10^{-3}$.
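The first step of the joint-detection pipeline — turning a Kalman prediction into a prior over quantized control outputs for the MAP decoder — can be sketched for the scalar case. The quantizer edges and the way the prior would be combined with channel log-likelihood ratios are illustrative assumptions; only the KF prediction equations and the Gaussian bin probabilities are standard.

```python
import numpy as np
from math import erf, sqrt

def kf_predict(x, P, a, q):
    """Scalar Kalman filter prediction step for dynamics x' = a*x + w,
    w ~ N(0, q): predicted mean a*x and predicted variance a^2*P + q."""
    return a * x, a * a * P + q

def bin_priors(mu, var, edges):
    """Prior probability that the quantized control output falls in each
    interval [edges[i], edges[i+1]), from the KF predictive N(mu, var).
    Hypothetical sketch of how a decoder-side prior could be formed."""
    cdf = lambda z: 0.5 * (1.0 + erf((z - mu) / sqrt(2.0 * var)))
    p = np.array([cdf(edges[i + 1]) - cdf(edges[i])
                  for i in range(len(edges) - 1)])
    return p / p.sum()
```

In a MAP decoder these bin priors would enter as additive log-prior terms alongside the channel metrics, which is what lets the filter's state estimate bias decoding toward physically plausible control outputs.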
|
https://arxiv.org/abs/2601.07322
|
Academic Papers
|
svg
|